Amazon's cloud-based voice service Alexa is about to get a whole lot more powerful, as the Amazon Alexa team has migrated the vast majority of its GPU-based machine inference workloads to Amazon EC2 Inf1 instances.
These new instances are powered by AWS Inferentia, and the switch has resulted in 25 percent lower end-to-end latency and 30 percent lower cost compared to GPU-based instances for Alexa's text-to-speech workloads.
As a result of moving to EC2 Inf1 instances, Alexa engineers can now begin using more complex algorithms to improve the overall experience for owners of the new Amazon Echo and other Alexa-powered devices.
In addition to Amazon Echo devices, more than 140,000 models of smart speakers, lights, plugs, smart TVs and cameras are powered by Amazon's cloud-based voice service. Each month, tens of millions of customers interact with Alexa to control their home devices, listen to music and the radio, stay informed, or be educated and entertained with the more than 100,000 Alexa Skills available for the platform.
In a press release, AWS technical evangelist Sébastien Stormacq explained why the Amazon Alexa team decided to move away from GPU-based machine inference workloads, saying:
“Alexa is one of the most popular hyperscale machine learning services in the world, with billions of inference requests every week. Of Alexa’s three main inference workloads (ASR, NLU, and TTS), TTS workloads initially ran on GPU-based instances. But the Alexa team decided to move to the Inf1 instances as fast as possible to improve the customer experience and reduce the service compute cost.”
AWS Inferentia
AWS Inferentia is a custom chip built by AWS to accelerate machine learning inference workloads while also optimizing their cost.
Each chip contains four NeuronCores, and each core implements a high-performance systolic array matrix multiply engine, which massively speeds up deep learning operations such as convolutions and transformer layers. NeuronCores also come equipped with a large on-chip cache that cuts down on external memory accesses, dramatically reducing latency while increasing throughput.
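To give a feel for what a systolic array matrix multiply engine computes, here is a tiny illustrative Python model of the data flow. This is not AWS code and says nothing about how NeuronCores are actually wired; it simply shows the idea of a grid of multiply-accumulate (MAC) cells through which operands are pumped one step at a time.

```python
# Illustrative sketch only: a software model of a systolic-array-style
# matrix multiply. Each grid cell (i, j) owns one accumulator; on each
# "clock step" k, the operands A[i][k] and B[k][j] arrive at the cell
# and are multiplied and accumulated in place.

def systolic_matmul(A, B):
    """Multiply two square matrices via per-cell accumulators,
    one wavefront of operands per step."""
    n = len(A)
    acc = [[0] * n for _ in range(n)]  # one accumulator per MAC cell
    for k in range(n):                 # one wavefront per clock step
        for i in range(n):
            for j in range(n):
                acc[i][j] += A[i][k] * B[k][j]
    return acc

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # → [[19, 22], [43, 50]]
```

In real hardware all the cells update in parallel each step, which is why a systolic engine can sustain far higher arithmetic throughput than issuing the same multiplies sequentially.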
For those wishing to take advantage of AWS Inferentia, the custom chip can be used natively from popular machine learning frameworks, including TensorFlow, PyTorch and MXNet, via the AWS Neuron software development kit.
In addition to the Alexa team, Amazon Rekognition is also adopting the new chip: running models such as object classification on Inf1 instances resulted in eight times lower latency and double the throughput compared to running the same models on GPU instances.