HPE DLBS TensorRT: ResNet50 and ImageNet

Publish date: 2024-06-04

The other unique aspect of HPE DLBS is that it includes a benchmark for TensorRT, NVIDIA's inference optimization engine. In recent years, NVIDIA has pushed to integrate it with new DL inference features such as INT8/DP4A and the tensor cores' FP16 accumulate mode.

Given a trained Caffe model, TensorRT adjusts and optimizes the model as needed for inference at a given precision.
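To make that workflow concrete, below is a minimal sketch of the legacy TensorRT C++ API of that era (the Caffe parser has since been removed from TensorRT): parse a Caffe deploy file and weights, mark an output, and build an engine at a chosen precision. This is our illustration, not the DLBS harness itself; the file names and the output blob name "prob" are assumptions for a typical ResNet50 deployment.

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
} gLogger;

int main() {
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // File names and the output blob ("prob") are assumptions for ResNet50.
    const IBlobNameToTensor* blobs = parser->parse(
        "resnet50-deploy.prototxt", "resnet50.caffemodel",
        *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("prob"));

    builder->setMaxBatchSize(128);
    builder->setMaxWorkspaceSize(1ULL << 30);  // 1 GiB of scratch space
    builder->setFp16Mode(true);  // FP16 inference; INT8 would instead use
                                 // setInt8Mode(true) plus a calibrator

    // TensorRT fuses layers and auto-selects kernels for the target GPU.
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    // ... run inference via engine->createExecutionContext(), then clean up.
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

The precision flag is the key knob here: the same Caffe model is rebuilt into different engines for FP32, FP16, or INT8, which is how a suite like DLBS can compare throughput across precisions on one network.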

DL Inference: DLBS TensorRT - ResNet50 and ImageNet Throughput

In total, we also ran batch sizes 64, 512, and 1024 for the Titan X (Maxwell) and Titan Xp, and batch sizes 128, 256, and 640 for the Titan V; those results were within 1 to 5% of the batch sizes shown, so we've not included them in the graph.

The high INT8 performance of the Titan Xp somewhat corroborates its GEMM/convolution performance; both workloads appear to be utilizing DP4A. Meanwhile, it's not clear how the Titan V implements DP4A; all we know is that it is supported by the Volta instruction set, and Volta does have those separate INT32 units.
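For context on what DP4A actually computes, the sketch below is a minimal CUDA kernel (our illustration, unrelated to DLBS code) using the __dp4a intrinsic: a four-way dot product of packed signed 8-bit integers accumulated into a 32-bit integer, in a single instruction. It requires compute capability 6.1 or later, i.e. compile with something like nvcc -arch=sm_61.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes a 4-element INT8 dot product in one instruction.
// a[i] and b[i] each pack four signed 8-bit values into one 32-bit word.
__global__ void dp4aDot(const int* a, const int* b, int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // __dp4a(a, b, c) = c + sum of the four byte-wise products of a and b.
        out[i] = __dp4a(a[i], b[i], 0);
    }
}

int main() {
    // Pack (1,2,3,4) and (5,6,7,8): dot product = 5 + 12 + 21 + 32 = 70.
    int ha = 0x04030201, hb = 0x08070605, hout = 0;
    int *da, *db, *dout;
    cudaMalloc(&da, sizeof(int));
    cudaMalloc(&db, sizeof(int));
    cudaMalloc(&dout, sizeof(int));
    cudaMemcpy(da, &ha, sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(db, &hb, sizeof(int), cudaMemcpyHostToDevice);
    dp4aDot<<<1, 1>>>(da, db, dout, 1);
    cudaMemcpy(&hout, dout, sizeof(int), cudaMemcpyDeviceToHost);
    printf("dp4a result: %d\n", hout);  // prints 70
    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}
```

Because each instruction replaces four multiply-accumulates, INT8 inference on a DP4A-capable GPU can approach 4x the throughput of the equivalent FP32 math, which is consistent with the Titan Xp's showing here.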
