
Nvidia announces TensorRT Hyperscale Platform at GTC Japan

by Mark Tyson on 13 September 2018, 10:01

Tags: NVIDIA (NASDAQ:NVDA)



Nvidia has launched the TensorRT Hyperscale Platform, which is designed for data centres and is billed as the industry's "most advanced inference accelerator for voice, video, image, recommendation services". The foundations of the new platform are Tesla T4 GPUs, based upon the Turing architecture, and a comprehensive set of new inference software: TensorRT 5, the TensorRT inference server, and CUDA 10. Off the back of these announcements Nvidia shared more specific industry-targeted AI-powered products for robotics and drones, self-driving vehicles, and medical instrument design.

The vision of Nvidia and its keenest customers is that "every product and service will be touched and improved by AI," according to Ian Buck, VP and GM of Accelerated Business at the firm. The new TensorRT Hyperscale Platform brings this future to reality "faster and more efficiently than had been previously thought possible," he asserted.

Nvidia breaks down its AI Inference Platform into bullet points as follows:

  • Nvidia Tesla T4 GPU – Featuring 320 Turing Tensor Cores and 2,560 CUDA cores, this new GPU provides breakthrough performance with flexible, multi-precision capabilities, from FP32 to FP16 to INT8, as well as INT4. Packaged in an energy-efficient, 75-watt, small PCIe form factor that easily fits into most servers, it offers 65 teraflops of peak performance for FP16, 130 teraflops for INT8 and 260 teraflops for INT4.
  • Nvidia TensorRT 5 – An inference optimizer and runtime engine, Nvidia TensorRT 5 supports Turing Tensor Cores and expands the set of neural network optimizations for multi-precision workloads.
  • Nvidia TensorRT inference server – This containerized microservice software enables applications to use AI models in data centre production. Freely available from the NVIDIA GPU Cloud container registry, it maximizes data centre throughput and GPU utilization, supports all popular AI models and frameworks, and integrates with Kubernetes and Docker.

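The multi-precision figures above come down to running inference at reduced numeric precision. As a rough illustration of the general idea (this is not the TensorRT API; the function names below are purely illustrative), here is a minimal Python sketch of symmetric INT8 quantisation, where FP32 values are mapped onto the signed 8-bit range via a calibration scale:

```python
# Illustrative sketch of symmetric INT8 quantisation, the kind of
# precision reduction an inference optimiser applies to trade a little
# accuracy for much higher throughput. Not the TensorRT API.

def quantize_int8(values, scale):
    """Map FP32 values to INT8 by dividing by a calibration scale,
    rounding, and clamping to the signed 8-bit range."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize_int8(qvalues, scale):
    """Recover approximate FP32 values from INT8 codes."""
    return [q * scale for q in qvalues]

# Calibration picks a scale so the observed dynamic range maps onto
# [-127, 127]; real toolkits derive this from calibration data.
weights = [0.5, -1.25, 0.03, 2.0, -0.75]
scale = max(abs(w) for w in weights) / 127

q = quantize_int8(weights, scale)
approx = dequantize_int8(q, scale)

# Round-trip error is bounded by half of one quantisation step.
assert all(abs(a - w) <= scale / 2 + 1e-9
           for a, w in zip(approx, weights))
```

INT4 halves the storage again (and, on Turing Tensor Cores, doubles the peak throughput again, hence the 65/130/260 teraflops progression), at the cost of only 16 representable levels.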
The platform is already backed by heavyweight IT companies: Nvidia shared testimonials from the likes of Microsoft (improving Bing's advanced search offerings), Cisco (data insights), HPE (enabling inference at the edge), and IBM Cognitive Systems (4x faster deep learning training times).

In related announcements, Nvidia made the Jetson AGX Xavier dev kit available worldwide; it was selling for ¥149,800 at the conference. Yamaha has adopted Jetson AGX Xavier for autonomous land, air, and sea machines, and other less well known Japanese industrial firms have adopted it too.

The Nvidia DRIVE AGX system dev kit became available worldwide, and Fujifilm is one of the first companies to adopt the supercomputer for healthcare, medical imaging systems, and more.

Lastly, the newly unveiled Nvidia Clara Platform, a combination of hardware and software, will bring AI to next-gen medical imaging systems to improve the diagnosis, detection and treatment of diseases.



HEXUS Forums :: 2 Comments

I dread to think what they're going to do with AI in medical imaging. I have enough problems trying to figure out how they've post-processed an image and what weirdness that's creating (or masking) without it getting all neural netty. If it's going to use AI to try and interpret imaging for quicker reports…. we've tried that and it's a disaster as people who aren't qualified to interpret the imaging rely on the report and assume it's correct. This is why for a lot of the qualitative stuff we dual report. What we need are reliable methods of automated quantification of things we normally assess qualitatively which are quick and easy to employ under pressure but let you make the interpretation based on relatively raw output data. That way you can spot errors quickly and easily.
philehidiot

There are some things they seem good at, and that should improve over time.

https://www.theregister.co.uk/2018/08/13/deepmind_eye_scan/