Artificial Intelligence at Google’s I/O 2019


Artificial intelligence (AI) plays a key role at almost every technology conference these days, and Google’s annual developer conference, held over three days between 7 May and 9 May in Mountain View, California this year, was no different.

I/O 2019 saw the ubiquitous search engine provider announce updates and launches across its portfolio, including the latest beta release of Android Q, Google’s cross-hardware operating system; the Pixel 3a and Pixel 3a XL smartphones; augmented reality in Google Search; Duplex on the web; enhanced walking directions in Google Maps; and more.

On the AI-focused side of things, the company announced the winners of its $25 million AI Impact Challenge, some six months after it was first launched. Coming from twelve different nations, the winners will use a Google grant of up to US$2 million each to apply machine learning to fight some of the world’s biggest challenges.

The company also unveiled three separate accessibility projects designed to help people with disabilities, including Project Euphonia, to assist people with speech impairments; Live Relay, to help those with hearing challenges; and Project Diva, which aims to help people give Google Assistant commands without using their voice.

Elsewhere, Google told attendees that Google Assistant will soon become ten times faster than its current speed thanks to “on-device” machine learning, and that it plans to introduce a turbocharged version of the Assistant to Pixel phones later this year.

It claimed that the updated version won’t need to be triggered repeatedly with a hotword – e.g. “Hey Google” – and will be able to complete tasks like transcription, file searches and selfie-snapping offline, without an internet connection, thanks to a speech recognition model that is smaller than the current version’s.

For voice app creators, Google announced a number of upgrades to its Actions on Google platform, allowing developers, for example, to tether an Action to “how to” questions using newly introduced how-to markup. This means that Google Assistant-powered apps should theoretically be better equipped to respond to commonly asked questions with relevant text, images and instructional videos.
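For a sense of what that kind of structured data looks like in practice, here is a minimal, hypothetical sketch of schema.org-style how-to markup, built and printed from Python; the activity and steps are invented for illustration rather than taken from Google’s documentation.

```python
import json

# A minimal sketch of schema.org "HowTo" structured data, the kind of
# how-to markup used to surface step-by-step answers. The activity and
# steps below are made up for illustration.
howto = {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "How to change a bicycle tyre",
    "step": [
        {"@type": "HowToStep", "text": "Remove the wheel from the frame."},
        {"@type": "HowToStep", "text": "Lever the old tyre off the rim."},
        {"@type": "HowToStep", "text": "Fit and inflate the new tyre."},
    ],
}

# In practice this would be embedded in a web page as a JSON-LD script
# block for crawlers to parse.
print(json.dumps(howto, indent=2))
```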

Google Lens, the company’s visual search and computer vision tool, will soon be able to surface a restaurant’s most popular dishes when users point their smartphone camera at a menu, using its ability to recognize all manner of real-world objects. Google said that Lens will also soon be able to read translated text aloud when users point their camera at printed content, and will be able to help split a bill or calculate a tip following a meal.
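The bill-splitting and tip features come down to simple arithmetic once the total has been read off the receipt; a quick illustrative sketch, with made-up numbers, might look like this:

```python
# Illustrative tip/bill-split arithmetic; the figures are invented.
bill_total = 84.50   # amount read off the receipt
tip_rate = 0.18      # chosen tip percentage
diners = 4           # number of people splitting the bill

tip = bill_total * tip_rate
per_person = (bill_total + tip) / diners
print(f"Tip: ${tip:.2f}, each person pays: ${per_person:.2f}")
```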

It also revealed plans to expand Google Duplex, the AI-powered agent that can make appointments for you over the phone and began rolling out to smartphone users last year, to the web, where it will be able to handle relatively complex tasks such as car rental bookings on your behalf.

The company’s cloud unit announced that it would be making pods with 1,000 tensor processing unit (TPU) chips available in public beta. Google has been developing its own TPUs – programmable, custom chips designed to power demanding machine learning workloads – for some time, and researchers and developers can use them to train AI models.
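As a rough sketch of what targeting a Cloud TPU looks like from TensorFlow’s Keras API (the TPU name and model below are placeholders, not anything specific to the pods announced at I/O):

```python
import tensorflow as tf

# Hypothetical TPU name; in practice this comes from the Cloud TPU you
# provision (a pod slice exposes many cores behind one resolver).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Mirror the model across all TPU cores so each training step is
# data-parallel over the whole slice.
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# model.fit(...) then proceeds as usual, with batches sharded across cores.
```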

Unsurprisingly, Google also focused on the role of AI and machine learning as it relates to privacy, detailing its work in federated learning, a distributed approach that trains a shared model across many devices by aggregating locally computed model updates rather than raw data, with those updates only sent to the cloud after they’ve been anonymized and encrypted. The company claims that its Gboard keyboard for Android and iOS already uses federated learning to improve next-word and emoji prediction across “tens of millions” of devices.
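As a conceptual illustration of the idea (a toy simulation, not Google’s production Gboard pipeline), each simulated device below trains a shared linear model on its own private data, and only the resulting weights are averaged centrally:

```python
import numpy as np

def local_update(weights, x, y, lr=0.1, epochs=5):
    """One device's local training of a linear model y ≈ x @ w."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * x.T @ (x @ w - y) / len(x)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Ten simulated devices, each holding its own private data.
devices = []
for _ in range(10):
    x = rng.normal(size=(20, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((x, y))

# Each round: broadcast the global model, train locally, average updates.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, x, y) for x, y in devices]
    global_w = np.mean(updates, axis=0)  # server only ever sees weights

print(global_w)  # approaches true_w without any device sharing raw data
```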

On the second day of I/O, Google published a list of privacy commitments regarding its hardware products, detailing how personal data is used and how it can be controlled. The document notes, for example, that the new Nest Hub Max, which uses an on-device facial recognition feature to spot familiar people and surface contextually relevant information, doesn’t send facial recognition data to the cloud.
