The Amazon Echo is the epitome of an Internet of Things (IoT) device. It combines an embedded applications processor from Texas Instruments, MEMS microphones from Knowles, Wi-Fi and Bluetooth wireless connectivity, an AWS cloud backend, and support for diverse applications. It's also multi-function, which increases the platform's value both for consumers (bundled services) and for Amazon (multi-dimensional insights into customer behavior and trends). The glue that ties all of this together is, of course, software.
The Echo's signature feature, automatic speech recognition (ASR), is enabled by software algorithms that not only provide the language modeling and natural language understanding capabilities that make the platform unique, but also help offset the rigors of reverberant speech. Reverberant speech is a phenomenon that occurs in indoor environments when an audible signal reflects, or bounces, off of various surfaces, creating noise in the form of echoes that diminish the direct-path signal from speaker to microphone. As you can imagine, this wreaks havoc on speech recognition. Yet consider the real-world use case of the Amazon Echo, in which reverberant speech is often the only signal available from a speaker communicating with the device.
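To make the effect concrete, here is a minimal sketch (not Amazon's implementation) of how reverberation is commonly modeled: the clean speech signal convolved with a room impulse response (RIR). All parameters below, including the toy reflection delays and gains, are illustrative assumptions.

```python
import numpy as np

# Hypothetical parameters for illustration only.
fs = 16000                        # sample rate in Hz
t = np.arange(fs) / fs            # one second of a toy "speech" signal
clean = np.sin(2 * np.pi * 440 * t)

# Toy room impulse response: a direct-path impulse plus two
# attenuated, delayed reflections (20 ms and 50 ms later).
rir = np.zeros(fs // 4)
rir[0] = 1.0                      # direct path
rir[int(0.02 * fs)] = 0.6         # first reflection
rir[int(0.05 * fs)] = 0.3         # second reflection

# Reverberant speech = clean speech convolved with the RIR.
# The echoes smear the signal's energy over time, which is what
# degrades the direct-path signal an ASR front end relies on.
reverberant = np.convolve(clean, rir)

# A crude direct-to-reverberant energy ratio for this toy RIR:
# direct-path energy divided by summed reflection energy.
drr = 1.0 / (0.6**2 + 0.3**2)
print(round(drr, 2))
```

The convolution output is longer than the input by the RIR's length minus one, reflecting how the room keeps "ringing" after the talker stops; dereverberation algorithms in devices like the Echo effectively try to invert this smearing.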