Some thoughts on Google I/O and AI futures
May 18, 2017 | By Peter Bihr |
Google’s developer conference, Google I/O, has been taking place these last couple of days, and oh boy, were there some gems among CEO Sundar Pichai’s announcements.
Just to get this out right at the top: Many analysts’ reactions to the announcements were a little meh: too incremental, not enough consumer-facing product news, they seemed to find. I was surprised to read and hear that. For me, this year’s I/O announcements were huge. I haven’t been as excited about the future of Google in a long, long time. As far as I can tell, Google’s future looks a lot brighter today than it did yesterday.
Let’s dig into why.
As a quick overview and refresher, here are some of the key announcements (with links to write-ups included where available).
Let’s start with some announcements of a more general nature around market penetration and areas of focus:
- There are now 2 billion active Android devices
- Google Assistant comes to iOS (Wired)
- Google has some new VR and AR products and collaborations in the making, both through their Daydream platform and stand-alone headsets
Impressive, but not super exciting; let’s move on to where the meat is: artificial intelligence (AI), and more specifically machine learning (ML). A year ago, Google announced it would become an AI-first company. And they’re certainly making good on that promise:
- Google Lens super-charges your phone camera with machine learning to recognize what you’re pointing the camera at and give you context and contextual actions (Wired)
- Google turns Google Photos up to 11 through machine learning (via Lens), including not just facial recognition but also smart sharing.
- Copy & paste gets much smarter through machine learning
- Google Home can differentiate several users (through machine learning?)
- Google Assistant’s SDK allows other companies and developers to include Assistant in their products (and not just in English, either)
- Cloud TPU is the new hardware that Google launches for machine learning (Wired)
- Google uses neural nets to design better neural nets
Here’s a 10min summary video from The Verge.
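That last bullet point — neural nets designing neural nets — refers to what’s known as neural architecture search. As a loose illustration of the idea (a toy sketch, not Google’s actual method; the scoring function and search space here are entirely made up), an outer loop proposes candidate network shapes and keeps whichever scores best:

```python
import random

def evaluate(architecture):
    """Stand-in for training a candidate network and measuring its
    validation accuracy. Here we just use a made-up score that
    prefers a moderate depth and width."""
    depth, width = architecture
    return -abs(depth - 4) - abs(width - 64) / 16

def search(trials=50, seed=0):
    """Toy architecture search: randomly propose (depth, width) pairs
    and keep the best-scoring candidate. Real systems use far smarter
    proposal strategies (e.g. a learned controller network)."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = (rng.randint(1, 10), rng.choice([16, 32, 64, 128]))
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

print(search())
```

In Google’s case, the “evaluate” step is itself a neural network being trained, and the proposal strategy is learned — which is what makes “neural nets designing neural nets” more than a party trick.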
This is incredible. Every aspect of Google, both backend and frontend, is impacted by machine learning. Including their design of neural networks, which are improved by neural networks!
So what we see here are some incremental (if, in my book, significant) updates to consumer-facing products. These are mostly at the feature level:
- Better copy & paste
- Better facial recognition in photos (the error rate of their computer vision algorithms “is now better than the human error rate”, says ZDNet)
- Smarter photo sharing (“share all photos of our daughter with my partner automatically”)
- Live translation and contextual actions based on photos (like pointing the camera at a wifi router to read the login credentials and connect you to the network automatically).
- Google Home can tell different users apart.
As features, these are nice-to-haves, not must-haves. However, they’re powered by AI. That changes everything. This is large-scale deployment of machine learning in consumer products. And not just consumer products.
Google’s AI-powered offerings also power other businesses now:
- Assistant can be included in third-party products, much like Amazon’s Alexa. This increases reach, and also grows the datasets available to train the machine learning algorithms further.
- The new Cloud TPU chips, combined with Google’s cloud-based machine learning framework around TensorFlow, mean that they’re now in the business of providing machine learning infrastructure: AI-as-a-Service (AIaaS).
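To make that last point concrete: “AI-as-a-Service” typically means you ship your data to a hosted model over an API and get predictions back, instead of running the model yourself. Here’s a minimal, purely hypothetical sketch of what building such a request looks like — the URL and payload shape are invented for this post and are not a real Google Cloud API:

```python
import json

# Hypothetical prediction endpoint; invented for illustration only.
PREDICT_URL = "https://example.com/v1/models/my-model:predict"

def build_predict_request(instances):
    """Package input rows into a JSON body — the typical shape of a
    hosted-prediction request: you send features, the service sends
    back predictions, and the model itself never leaves the cloud."""
    return json.dumps({"instances": instances})

body = build_predict_request(
    [{"image_bytes": "abc123"}, {"image_bytes": "def456"}]
)
print(body)
```

The business logic is the interesting part: whoever runs the endpoint also gets the economies of scale on hardware (hence the Cloud TPUs) and, depending on the terms, a steady stream of training data.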
It’s this last point that I find extremely exciting. Google just won the next 10 years.
The market for AI infrastructure—for AI-as-a-Service—is going to be mostly Google & Amazon (who already has a tremendous machine learning offering). The other players in that field (IBM, and maybe Microsoft at some point?) aren’t even in the same ballpark. Potentially there will be some newcomers; it doesn’t look like any of the other big tech companies will be huge players in that field.
As of today, Google sells AI infrastructure. This is a model that we know from Amazon (where it has been working brilliantly), but so far hadn’t really seen from Google.
There haven’t been many groundbreaking consumer-facing announcements at I/O. However, the future has never looked brighter for Google. Machine learning just became a lot more real and concrete. This is going to be exciting to watch.
At the same time, now’s the best time to think about the societal implications, risks, and opportunities inherent in machine learning at scale: we’re on it. In my work, as well as in our community over at ThingsCon, we’ve been tracking and discussing these issues in the context of the Internet of Things for a long time. I see AI and machine learning as a logical continuation of this same debate. So in all my roles I’ll continue to advocate for a responsible, ethical, human-centric approach to emerging technologies.
Full disclosure: I’ve worked many times with different parts of Google, most recently with the Mountain View policy team. I do not, at the time of this writing, have a business relationship with Google. (I am, however, a heavy user of Google products.) Nothing I write here is based on any kind of information that isn’t publicly available.