Amazon’s announcement of its new wall-mounted Echo Show 15 also revealed a new processor with some interesting (and potentially unsettling) applications. The AZ2 chip builds on the machine learning capabilities that premiered with the AZ1, which allowed Amazon devices to better recognize your voice, and extends them to facial recognition as well. This fits into Amazon’s new focus on what it calls “Ambient Intelligence.”
Let’s cover what the AZ2 does on paper before we dive into the implications of this hardware. The AZ2 can perform 22 times as many operations per second as the AZ1, which means it can process speech and facial recognition simultaneously, entirely on the device. The facial data the AZ2 learns feeds a feature Amazon is calling “Visual ID,” which requires users to explicitly enroll. Once you do, the Echo Show 15 can recognize you and display custom content based on your Alexa profile.
Much like its predecessor, the AZ2 is a neural edge processor, meaning it runs machine learning models locally to cut down on the amount of data it needs to send to or receive from the cloud. That not only reduces latency but also limits how much of your data ends up stored in the cloud.
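To make the edge-processing idea concrete, here is a minimal sketch of the difference between shipping a raw capture to the cloud and running recognition on-device and sending only the compact result. All names, sizes, and the recognition stub are hypothetical illustrations, not Amazon’s actual API or pipeline.

```python
# Hypothetical sketch of edge processing: the device runs recognition
# locally and uploads only a small result, instead of the raw capture.
# The function names and payloads are illustrative, not Amazon's.

def recognize_locally(raw_capture: bytes) -> dict:
    # Stand-in for on-device speech/face models running on the neural engine.
    return {"profile": "alexa-user-1", "intent": "show_calendar"}

def bytes_sent_to_cloud(raw_capture: bytes, edge: bool) -> int:
    if edge:
        result = recognize_locally(raw_capture)
        # Only the compact recognition result leaves the device.
        return len(str(result).encode())
    # Cloud-first approach: the whole capture is uploaded for processing.
    return len(raw_capture)

raw = bytes(1_000_000)  # e.g. ~1 MB of camera/mic data
print(bytes_sent_to_cloud(raw, edge=True))   # tens of bytes
print(bytes_sent_to_cloud(raw, edge=False))  # the full megabyte
```

The round trip that’s avoided in the edge case is also where the latency savings come from: recognition finishes before any network request would even have left the device.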
While Amazon has made it remarkably easy to control, view, and edit the information the device can access, there is something eerily dystopian about a machine that automatically recognizes you when you walk into a room. If you’re already annoyed when Alexa responds to your voice by accident, you may want to skip enrolling in Visual ID.
Currently, the Echo Show 15 is the only piece of Amazon hardware confirmed to include the AZ2. But just like the AZ1 before it, expect the chip to become a cornerstone of Amazon devices going forward.
For full details on some of the other devices that Amazon announced at its fall hardware event today, check out our coverage of the event here.