
Nvidia unveils massive AI processing chip Tesla V100

12 May 2017

"You can do your best work no matter where you are, using our latest technology in the cloud." Using a "relatively conservative" figure of a 15x reduction, NVIDIA posits that the same workload could be handled by just 33 GPU-accelerated servers, which translates to a 15x increase in AI processing throughput in the same datacenter space.
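The arithmetic behind that claim can be sketched as follows; note that the CPU-only baseline is not stated in the article and is inferred here from the two quoted figures (33 servers × 15x):

```python
# Sketch of the consolidation math behind NVIDIA's claim.
# The CPU-only baseline is an inference, not a figure from the article.
gpu_servers = 33   # GPU-accelerated servers quoted above
speedup = 15       # the "relatively conservative" per-server reduction

implied_cpu_baseline = gpu_servers * speedup
print(implied_cpu_baseline)  # 495, i.e. roughly 500 CPU-only servers
```

In other words, the 15x throughput-per-rack claim follows directly from replacing an implied fleet of roughly 500 CPU-only servers with 33 GPU-accelerated ones in the same floor space.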

Harnessing deep learning presents two challenges for developers and data scientists. Additionally, Dr. Goh and Professor Matsuoka will discuss Tokyo Institute of Technology's large-scale TSUBAME3.0 supercomputer during a breakout session on the Scalable Learning Platform.

NVIDIA solved the first challenge earlier this year by combining the key software elements within the NVIDIA DGX-1™ AI supercomputer into a containerized package. The idea is to make the up-to-date stack more readily available while optimizing performance.

Deep learning is also making its way into vehicles, where it will assist with autonomous driving features.

Nvidia says the Nvidia GPU Cloud has been built from the ground up for developing and researching deep learning technology on the world's fastest GPUs; the software will also be accessible through an Nvidia account. At US$149,000, the system is worth some people's life savings.

Versatile: It's built to run anywhere. All told, the Volta-infused DGX-1 offers up to 960 TFLOPS of FP16 compute performance, versus 170 TFLOPS for the original, with significantly more bandwidth on tap. The output could then be loaded into TensorRT™ for inferencing.
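Putting the two quoted throughput figures side by side gives the generation-over-generation gain; the calculation below uses only the numbers stated above:

```python
# Generation-over-generation FP16 throughput of the DGX-1
# (both figures are taken from the article).
volta_tflops = 960    # Volta-based DGX-1, peak FP16
pascal_tflops = 170   # original Pascal-based DGX-1, peak FP16

speedup = volta_tflops / pascal_tflops
print(f"{speedup:.1f}x")  # 5.6x
```

So the Volta refresh delivers roughly a 5.6x jump in peak FP16 compute over the original system, before accounting for the additional memory bandwidth.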

Last November, we reported that Nvidia had its biggest growth in six years, with $2 billion in revenue for the third quarter, thanks to surging demand for its new range of high-end graphics cards. Pricing will be announced at a later date. Still, if you're firmly on the NVIDIA side of the graphics debate, this is certainly an exciting announcement.

Nvidia has been working to make its GPUs increasingly friendly to AI applications, adding features such as fast 16-bit floating point. "This is really the first hybrid, deep learning cloud computing platform". As Huang's rivals reveal more about their products this year, that claim will come under close scrutiny.

"Customers pursuing Deep Learning projects face a variety of challenges including a lack of mature IT infrastructure and technology capabilities leading to poor performance, efficiency and time to value", said Bill Mannel, Vice President and General Manager, High Performance Computing and Artificial Intelligence at Hewlett Packard Enterprise.