The next great device

We’re at an inflection point with computer vision. The advent of purpose-built infrastructure for machine learning will yield dramatic improvements in computer vision applications. Models built with Google’s TensorFlow and run on its custom TPUs have recently surpassed human accuracy on image recognition benchmarks. And the gap will only continue to widen from here.

Vision is perhaps the most vivid and actively used of all human senses. I’m incredibly excited about the augmented reality applications these Vision APIs make possible, and I’m happy to see Apple and Google facing off to own vision as a platform for new applications.

Google showed off its Cloud Vision API last month around Google I/O, and Apple highlighted its new Vision framework during the WWDC keynote yesterday.

So what does it mean when the world becomes one giant search box? Well, first, it means the camera is the input of the future. Instead of typing or speaking your queries, you’ll simply point your camera at something and get contextually relevant information you can act upon.
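
To make that concrete, here is a minimal sketch of what “pointing your camera at something” looks like to a developer today. It assumes the google-cloud-vision Python client and uses a local file, frame.jpg, as a hypothetical stand-in for a camera frame; nothing here is specific to any future Glass SDK.

```python
# Minimal sketch: send one "camera frame" (here, a local JPEG) to the
# Google Cloud Vision API and print back labels and web entities --
# the raw material for contextually relevant results.
# Assumes the google-cloud-vision client library and application
# default credentials are already set up.
from google.cloud import vision


def describe_frame(path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # "What is in this scene?" -- generic labels with confidence scores.
    labels = client.label_detection(image=image).label_annotations
    for label in labels[:5]:
        print(f"label:  {label.description} ({label.score:.2f})")

    # "What, specifically, is this?" -- entities the web associates with the image.
    web = client.web_detection(image=image).web_detection
    for entity in web.web_entities[:5]:
        print(f"entity: {entity.description}")


if __name__ == "__main__":
    describe_frame("frame.jpg")  # hypothetical captured frame
```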

Google generated a lot of buzz last month showcasing a few use cases with the new Google Lens application.

Google Lens looks amazing. But watching the demo, it’s immediately clear that your phone is not the right device for interacting with the visual world.

The obvious devices for processing the visual world are smart glasses and contact lenses. Since the technology for smart contact lenses is probably still a few decades away, the next mass-market form factor is likely to be a pair of glasses, e.g. Google Glass 2.0.

Why did Google Glass fail?

There is always a confluence of supporting technologies that need to be mature, from both a technology and a price perspective, before a device can go mass-market. The first iteration of Google Glass fell short on a few dimensions:

Privacy

Society needs to be ready to assume we’re being recorded at any moment of our lives. I loathe the idea of living in such a world, but we are definitely moving toward one where we could be recorded at virtually any moment outside our own homes.

When Glass launched in 2013, camera-centric apps on mobile phones were just becoming mainstream. Now it’s incredibly common to livestream your day via Facebook Live, Instagram Stories, Snapchat Stories, etc…

Form-factor

This is more important than people think. The Google Glass form factor was akin to the Segway: wearing glasses without lenses looks as unnatural as gliding across the ground without moving your legs. I believe there needs to be a transitional form factor where the wearable more closely resembles traditional glasses.

Snap Inc. (the “Camera Company”) has bridged this gap well with its effortlessly cool Snapchat Spectacles. Of course, it doesn’t hurt to have cool Gen-Z teens sporting them instead of balding VCs.

I could write an entire post on the Google Glass form factor - I actually think the minimalist approach was the correct one, but Google should have required lenses (even if most of them were non-prescription). More on this below.

Real-world use

This was perhaps the biggest missing piece for Google. Glass needs a community of developers to uncover all of the valuable possibilities of a visual interface. Google did have some interesting applications available, but I believe the platform needs to be more open and the developer incentives clearer.

Why will smart glasses become mainstream now?

I predict Google will launch Google Glass v2 at Google I/O 2018. Here’s why:

Cloud Vision API

The Cloud Vision API is mature and ready to go mainstream. That generates buzz in the developer community and seeds a Google Glass app store with real-world applications.
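
As one illustration of the kind of real-world application that could seed that ecosystem, here is a short sketch (same assumptions as the example above: the google-cloud-vision Python client and a hypothetical frame.jpg capture) that reads whatever text is in front of the wearer so it could be searched or translated:

```python
# Sketch of one concrete Glass-style application: OCR the text in front
# of the wearer (signs, menus, labels). Assumes the google-cloud-vision
# client and "frame.jpg" as a hypothetical captured camera frame.
from google.cloud import vision


def read_scene_text(path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.text_detection(image=image)
    if response.text_annotations:
        # The first annotation aggregates all text detected in the frame.
        print(response.text_annotations[0].description)


if __name__ == "__main__":
    read_scene_text("frame.jpg")
```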

Developer community

Google Cloud has made great strides in the past few years in generating buzz for its developer platforms. Its work on the Cloud Vision API, TensorFlow, the Cloud TPU, etc… will pay off with the developer community, and Google will increasingly win market share for voice- and vision-driven applications.

This gives developers another incentive to lock in on Google’s infrastructure. It creates a virtuous cycle: more developers build better applications, which improves Google Glass functionality, which generates more demand, which attracts still more developers building great applications, etc…

Form factor

The hardware required for a great wearable has gotten much smaller, so the form factor becomes considerably easier (nothing bulky): better, lighter materials, longer battery life, etc… I think Google will get this “more right” in a future release. Google Glass actually didn’t look half-bad when it was attached to a pair of spectacles.

Price

I believe the right price point for smart glasses is roughly the cost of an iPhone, around $750-1,000. Our fingers are still far superior to voice for navigation, so I don’t see these replacing phones, but rather supplementing them. Of course, there is no better screen than one projected directly on top of our vision.

Follow me on Twitter.