"Roughly seven months ago, Nvidia launched the Tesla V100, a $10,000 Volta GV100 GPU for the supercomputing and HPC markets. This massive card was aimed squarely at specialized markets, with an enormous die (815mm²) and a massive transistor count (21.1 billion). In return, it offered specialized tensor cores, 16GB of HBM2, and theoretical performance in certain workloads far above anything Nvidia had shipped before.
Today, at the Conference on Neural Information Processing Systems (NIPS), Nvidia CEO Jen-Hsun Huang surprise-launched the same GV100 architecture in a traditional GPU form factor. Just as the GTX 1080 Ti is a trimmed-down version of the Nvidia Titan Xp, this new Titan V slims down in some spots compared with the full-fat Tesla V100. Memory clocks are very slightly lower (a 1.7Gbps transfer rate, down from 1.75Gbps), and the GPU uses three HBM2 stacks on a 3,072-bit memory bus, rather than the four stacks and 4,096-bit interface of the Tesla V100. It also offers just 12GB of HBM2, rather than the Tesla V100's 16GB.
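Those memory changes have a straightforward bandwidth cost. As a rough sketch (using the transfer rates and bus widths above; the three-vs-four 1,024-bit stack layout follows the standard HBM2 configuration):

```python
# Back-of-the-envelope peak memory bandwidth from the figures in the article.
# Peak GB/s = per-pin transfer rate (Gb/s) * bus width (bits) / 8 bits per byte.

def hbm2_bandwidth_gbs(transfer_rate_gbps, bus_width_bits):
    """Peak theoretical bandwidth in GB/s for an HBM2 interface."""
    return transfer_rate_gbps * bus_width_bits / 8

titan_v = hbm2_bandwidth_gbs(1.7, 3072)      # three 1,024-bit HBM2 stacks
tesla_v100 = hbm2_bandwidth_gbs(1.75, 4096)  # four 1,024-bit HBM2 stacks

print(f"Titan V:    {titan_v:.1f} GB/s")     # 652.8 GB/s
print(f"Tesla V100: {tesla_v100:.1f} GB/s")  # 896.0 GB/s
```

So the Titan V gives up roughly a quarter of the Tesla V100's peak memory bandwidth along with a quarter of its capacity.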
Nvidia is trumpeting the Titan V as offering 110 TFLOPS of horsepower, “9x that of its predecessor.” We don’t doubt that’s literally true, but it’s not a comparison to the single-precision or double-precision math we’ve typically referenced when discussing GPU FLOPS performance. It’s a reference to Volta’s performance improvement in deep learning tasks over Pascal, and it’s derived by comparing Volta’s tensor performance (with its specialized tensor cores) against Pascal’s 32-bit single-precision throughput. That doesn’t mean the comparison is invalid, since Volta has specialized tensor cores for training neural networks, and Pascal doesn’t, but it’s a little like comparing AES encryption performance on a CPU with specialized hardware for that workload against a CPU that lacks it. Is the comparison fair? Absolutely. But it’s fair only for the specific metric being measured, as opposed to being a generalizable test case for the rate of improvement one CPU offers over the other.
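The arithmetic behind the "9x" claim is easy to reproduce. A minimal sketch, assuming the Titan Xp's roughly 12.1 TFLOPS of FP32 throughput from its public spec sheet (the 110 TFLOPS figure is the Titan V's tensor-core number from the article):

```python
# Sanity-checking Nvidia's "9x" marketing claim. Note the apples-to-oranges
# units: tensor-core throughput (Volta) vs. plain FP32 throughput (Pascal).
titan_v_tensor_tflops = 110.0  # Volta tensor cores, mixed-precision matrix math
titan_xp_fp32_tflops = 12.1    # Pascal FP32; Pascal has no tensor cores

speedup = titan_v_tensor_tflops / titan_xp_fp32_tflops
print(f"{speedup:.1f}x")  # ~9.1x, matching the headline figure
```

The ratio lands right at Nvidia's claim, but only because the numerator and denominator measure different kinds of work.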
Nvidia’s stated goal with the Titan V is to give researchers who don’t have access to supercomputers or big-iron HPC installations the same cutting-edge hardware performance their counterparts enjoy. While the GPU is priced at an eye-popping $3,000 by regular PC market standards, that’s not very much compared with the typical cost of an HPC server.
“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Nvidia CEO Jen-Hsun Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”
You can buy a Titan V at the Nvidia store right now, but we can’t honestly say we’d recommend one for anyone not working in these fields. Despite the “Titan” brand having originally debuted as a high-end consumer card with some specialized scientific compute capabilities, this GPU family has been moving back towards its scientific computing research roots for a number of years. While Nvidia will obviously support the GPU with a unified driver model, I wouldn’t hold my breath waiting for fine-tuned gaming support from a GPU family that so few customers will ever have access to."