• $649.00
  • Free 30-day Returns
    Shop it at NeweggBusiness.


    NVIDIA/HP Tesla V100 PG500-216 32GB HBM2 PCIe 3.0 x16 Passive GPU Computational Accelerator for AI Machine Learning HPC Deep Learning 699-2G500-0216-400

    Condition: Refurbished
    • Brand: Nvidia/HP
    • Nvidia Part Number: 699-2G500-0216-400
    • HPE Part Number: P44861-001/P44899-001
    • Form Factor: Dual-Slot PCIe Full Height/Length
    • GPU Architecture: NVIDIA Volta
    • Capacity: 32GB
    • Interface: PCIe Gen 3.0 x16
    • Memory Bandwidth: 900 GB/s

    NVIDIA/HP Tesla V100 PG500-216 32GB HBM2 PCIe 3.0 x16 Data Center AI Deep Learning HPC Accelerator GPU

    1. Type: GPU Computational Accelerator
    2. Nvidia Part Number: 699-2G500-0216-400
    3. HPE Part Number: P44861-001/P44899-001
    4. Form Factor: Dual-Slot PCIe Full Height/Length

    Key Features

    1. GPU Memory: 32GB HBM2 (High Bandwidth Memory 2) with ECC (Error-Correcting Code) for reliable performance in AI, HPC, and data-intensive workloads.
    2. CUDA Cores: 5,120 CUDA cores, delivering high parallel compute performance for deep learning, scientific computing, and simulation tasks.
    3. Tensor Cores: 640 first-generation Tensor Cores, accelerating AI training and inference workloads with mixed-precision computing.
    4. Performance: Up to ~14 TFLOPS FP32 and ~125 TFLOPS Tensor performance for AI and HPC workloads.
    5. Interface: PCIe 3.0 x16 for high-bandwidth server and cluster connectivity.
    6. Display Outputs: None; compute-only GPU with no display support.
    7. Form Factor: Dual-slot, full-height, full-length passive cooling design for server airflow systems.
    8. Power Consumption: ~250W TDP, supplied via the PCIe slot and an auxiliary power connector.
    9. Enterprise Features: Supports CUDA, cuDNN, and TensorRT for AI, machine learning, and HPC workloads.
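    The headline performance figures above follow directly from the V100's published clock and bus specifications. A minimal sanity-check sketch, assuming the commonly cited PCIe-variant figures (~1380 MHz boost clock, 4096-bit HBM2 bus at ~1.75 Gbps per pin), which are not stated in this listing:

    ```python
    # Back-of-the-envelope check of the V100 PCIe spec-sheet numbers.
    # Clock and bus figures are commonly published values, treated here
    # as assumptions rather than facts from this listing.

    CUDA_CORES = 5120          # FP32 CUDA cores
    TENSOR_CORES = 640         # first-generation Tensor Cores
    BOOST_CLOCK_HZ = 1.38e9    # ~1380 MHz boost clock (PCIe variant)

    # Each CUDA core retires one fused multiply-add (2 FLOPs) per cycle.
    fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12
    print(f"FP32: ~{fp32_tflops:.1f} TFLOPS")          # ~14.1 TFLOPS

    # Each Volta Tensor Core performs a 4x4x4 matrix FMA per cycle:
    # 64 multiply-adds = 128 FLOPs.
    tensor_tflops = TENSOR_CORES * 64 * 2 * BOOST_CLOCK_HZ / 1e12
    print(f"Tensor: ~{tensor_tflops:.0f} TFLOPS")      # ~113 TFLOPS

    BUS_WIDTH_BITS = 4096      # 4 stacks of HBM2, 1024 bits each
    PIN_RATE_GBPS = 1.754      # ~877 MHz memory clock, double data rate

    bandwidth_gbs = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8
    print(f"Bandwidth: ~{bandwidth_gbs:.0f} GB/s")     # ~898 GB/s
    ```

    The results line up with the listing's specs: ~14 TFLOPS FP32, roughly the datasheet's 112 TFLOPS Tensor figure, and ~900 GB/s memory bandwidth.
    
    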

    Ideal Applications

    1. AI Model Training (Deep Learning / LLMs) -- Accelerates large-scale neural network training using Tensor Core compute for mixed-precision workloads
    2. High-Performance Computing (HPC) -- Used in scientific simulation, physics modeling, climate research, and engineering computations requiring massive parallel processing
    3. Data Analytics & Big Data Processing -- Ideal for large dataset analysis, machine learning pipelines, and GPU-accelerated database workloads
    4. AI Inference & Research Clusters -- Supports deployment of trained AI models in research environments and multi-GPU inference systems
    5. Enterprise GPU Compute Nodes -- Designed for data center environments running 24/7 workloads, including cloud computing, virtualization, and AI server clusters

    Warranty & Returns


    Warranty

    • Please contact the Seller directly for warranty information. Warranty information may also be found on the Manufacturer's website.
