Nvidia is looking to extend its lead as an AI company by improving its image recognition, language recognition, and decision-making platforms, and by introducing a new chip for autonomous vehicles.
The GPU Technology Conference (GTC) is one of the biggest annual AI conferences. At the ongoing GTC 2019 China, Nvidia held a keynote emphasizing the shift from CPU- to GPU-based platforms for AI processing, with the second part of the keynote focusing on strides made in autonomous vehicles.
Baidu, China’s most prominent AI company, has also partnered with Nvidia for its AI Box, implementing Nvidia V100 GPUs for 10x faster performance.
Similarly, Alibaba, one of the biggest eCommerce companies in the world, recently switched to Nvidia T4 GPUs for its recommender platforms. On this year’s Singles’ Day (November 11), it saw a 10% increase in click-through rates with significantly faster results. Recommender platforms are particularly complex because they have to consider every parameter around a customer and return results in real time. For Singles’ Day 2019, for example, Alibaba had over two billion products on sale to be surfaced to over 500 million potential shoppers.
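To make the scale of that scoring problem concrete, here is a deliberately tiny sketch of the core step in such a recommender: score every candidate item against a user's embedding and keep the top matches. The embeddings, item names, and plain-Python loops are invented for illustration; at Alibaba's scale this same dot-product ranking runs as batched matrix math on GPUs, not as the toy code below.

```python
# Toy sketch of a recommender's real-time scoring step.
# All vectors and item names here are illustrative assumptions.

def score(user_vec, item_vec):
    """Relevance modeled as a dot product of user and item embeddings."""
    return sum(u * i for u, i in zip(user_vec, item_vec))

def top_k(user_vec, catalog, k=2):
    """Rank the whole catalog for one user and keep the k best items."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: score(user_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# A user whose history skews toward electronics (first dimension).
user = [0.9, 0.1, 0.0]
catalog = {
    "phone":   [0.8, 0.1, 0.0],
    "novel":   [0.0, 0.2, 0.9],
    "charger": [0.7, 0.0, 0.1],
}

print(top_k(user, catalog))  # → ['phone', 'charger']
```

The real-time constraint comes from having to repeat this ranking over billions of items for hundreds of millions of users, which is why the batched, highly parallel form of the computation maps so well onto GPUs.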
To help partners make the most of this shift to GPU-powered processing, Nvidia has also made available its TensorRT inference software, which takes existing trained models and optimizes them for GPUs for better performance. TensorRT 7 adds support for many more models and neural networks, along with kernel generation.
TensorRT 7 also enables real-time conversational AI, delivering accurate replies in under 300 ms. The pipeline takes a voice waveform as input, converts it to text, processes the query, and uses speech synthesis to produce a spoken response. Nvidia’s GPU inference partners include Microsoft Bing, Expedia, Walmart, Tencent, Twitter, PayPal, Snap Inc., and Xiaomi.
Nvidia Drive and automobile-related announcements
For automobiles, Nvidia is making available a software-driven autonomous vehicle platform. Pre-trained models are now openly available on the Nvidia Cloud, offering data and learnings from over ten years of driving human-operated vehicles equipped with multiple sensors.
Transfer learning is also a big part of the Drive platform, making that data more usable. Drive also offers federated learning, which lets automobile companies carry their autonomous driving learnings from one country to another without any of the data localization complexities.
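The key idea behind federated learning is that model updates, not raw driving data, are what cross borders. A minimal sketch of federated averaging illustrates this, using a toy one-parameter linear model; the function names and the training setup are illustrative assumptions, not Nvidia Drive APIs.

```python
# Minimal federated-averaging (FedAvg-style) sketch: each region trains
# on its own private data, and only the resulting weights are shared.
# The 1-D linear model y = w * x is a toy stand-in for a real network.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on a region's private dataset."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(region_weights, region_sizes):
    """Aggregate per-region weights, weighted by local dataset size.
    Only these scalars leave each region; the data never does."""
    total = sum(region_sizes)
    return sum(w * n for w, n in zip(region_weights, region_sizes)) / total

# Two regions whose private datasets follow the same rule, y = 2x.
region_a = [(1.0, 2.0), (2.0, 4.0)]
region_b = [(3.0, 6.0)]

w_global = 0.0
for _ in range(50):  # communication rounds
    w_a = local_update(w_global, region_a)
    w_b = local_update(w_global, region_b)
    w_global = federated_average([w_a, w_b],
                                 [len(region_a), len(region_b)])

print(round(w_global, 2))  # converges toward 2.0
```

The shared global model ends up fitting both regions' data even though neither region ever saw the other's examples, which is exactly the property that sidesteps data localization rules.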
On the hardware front, Nvidia announced the Drive AGX Orin processor, which brings a 7x performance improvement over its predecessor (Xavier) as well as backward compatibility. In China, Didi and SAIC have already partnered with Nvidia on its AI platform for self-driving cars. A few other autonomous vehicle concepts were also shown during the keynote.
When asked about specific timelines for the public availability of autonomous vehicles, Nvidia executives explained that there is enormous technical complexity involved, along with legal hurdles that are still being worked out. Models will have to be trained further than previously thought to make them safer than human driving. They mentioned that over 50 companies are already working on this, and that even small steps of adoption will be very beneficial. As an example, they noted that if a country one day made it compulsory for all cars to carry LIDAR sensors, not only would roads be much safer, but those cars would also generate far more high-quality data.