Today AWS (Amazon Web Services) announced that its Amazon Lex service is now available to all developers. Built on the same deep learning technologies that power Amazon's voice assistant Alexa, Lex lets developers add voice or text chat features using Amazon's automatic speech recognition and natural language understanding capabilities. Capital One, Freshdesk, HubSpot and Liberty Mutual are a few of the companies that had access to Amazon Lex during its preview phase. According to Raju Gulabani, VP of Databases, Analytics and AI at AWS, Amazon has "been blown away by the customer response to our preview."
A Win-Win Situation For Amazon and Developers
While Amazon's press release frames opening up Lex as a noble gesture, the move also serves the company's own interests. In an interview before the announcement, CTO Werner Vogels commented:
There’s massive acceleration happening here. The cool thing about having this running as a service in the cloud instead of in your own data center or on your own desktop is that we can make Lex better continuously by the millions of customers that are using it.
Vogels's comment illuminates what Amazon is really after: data. For artificial intelligence technologies like Lex, processing large amounts of data is the key to success – and the key to further improving Alexa's capabilities. While Alexa might have the lead right now as far as third-party integrations go, Amazon needs a larger audience to compete with the massive amounts of data Apple's Siri and Google have collected over the years.
It's a win-win situation for Amazon and developers alike. Amazon gets its data, and developers get access to the company's advanced AI technology. Google, Apple and other Amazon rivals, however, likely see it differently.