
PNY NVIDIA H100 NVL 94GB NVH100NVLTCGPU-KIT


  • NVIDIA Hopper Architecture
  • 94GB HBM3 Memory
  • FP64 30 TFLOPS
  • FP64 Tensor Core 60 TFLOPS
  • FP32 60 TFLOPS
  • Configurable 350-400W TDP


Minimum Order Quantity: 5 units

Note: The prices below are approximate, promotional prices. For the latest pricing and further details, please WhatsApp or call us at +91-8903657999.

Offer Price: ₹659,999 (27% off the regular price of ₹900,000)
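As a quick sanity check, the promotional price against the regular price works out to the advertised discount. A minimal sketch (prices in INR, taken from this listing):

```python
# Promotional and regular prices from the listing (INR)
promo_price = 659_999
regular_price = 900_000

# Discount as a percentage of the regular price
discount_pct = (1 - promo_price / regular_price) * 100
print(f"{discount_pct:.1f}% off")  # -> 26.7% off, i.e. roughly 27%
```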

NVIDIA® H100 NVL supercharges large language model inference in mainstream PCIe-based server systems. With increased raw performance, bigger, faster HBM3 memory, and NVIDIA NVLink™ connectivity via bridges, mainstream systems with H100 NVL outperform NVIDIA A100 Tensor Core systems by up to 5X on Llama 2 70B.

Product Name: NVIDIA H100 NVL
FP64: 30 TFLOPS
FP64 Tensor Core: 60 TFLOPS
FP32: 60 TFLOPS
TF32 Tensor Core: 835 TFLOPS (with sparsity)
BFLOAT16 Tensor Core: 1,671 TFLOPS (with sparsity)
FP16 Tensor Core: 1,671 TFLOPS (with sparsity)
FP8 Tensor Core: 3,341 TFLOPS (with sparsity)
INT8 Tensor Core: 3,341 TOPS (with sparsity)
GPU Memory: 94GB HBM3
GPU Memory Bandwidth: 3.9 TB/s
Maximum Thermal Design Power (TDP): 350-400W (Configurable)
NVIDIA AI Enterprise: Included
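The Tensor Core figures above are quoted with structured sparsity enabled. Assuming NVIDIA's standard 2x structured-sparsity speedup (an assumption; consult the official datasheet for exact dense figures), the dense-equivalent throughput can be sketched as roughly half the quoted numbers:

```python
# Tensor Core throughput from the spec sheet, quoted with sparsity
# (values in TFLOPS, except INT8 in TOPS)
sparse_specs = {
    "TF32 Tensor Core": 835,
    "BFLOAT16 Tensor Core": 1_671,
    "FP16 Tensor Core": 1_671,
    "FP8 Tensor Core": 3_341,
    "INT8 Tensor Core": 3_341,
}

# Assumption: the quoted figures reflect the usual 2x structured-sparsity
# speedup, so dense throughput is approximately half the quoted value.
dense_specs = {name: value / 2 for name, value in sparse_specs.items()}

for name, dense in dense_specs.items():
    print(f"{name}: ~{dense:g} dense")
```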