There is little question as we sit here at the end of July 2018 that Artificial Intelligence and Machine Learning are critical parts of our lives. Whether or not you use a smart home device like Google Home, there are likely dozens of apps on your phone that have some AI or ML element to them. Google's own apps certainly fit that description, Gboard among them.
But every once in a while the AI goes awry and, well, it can be kind of funny at times. We’ve seen it before in Google Photos, where some really scary photo stitching has happened. Now we can add this little risque gem from Gboard. Just type in “Sit on” while in a text message or Google Doc and watch the magic.
The Google Photos site is now starting to show an option to allow you to merge faces in your library. Photos has had facial recognition built in since the beginning and has done a reasonable job of helping you identify and tag people in your albums. The challenge with it has come with the age progression of children and photos of people at odd angles. It seems that the Photos team has been doing a bit of work on the Artificial Intelligence and Machine Learning algorithms behind the feature.
On the Google Photos site, if you go to the Albums tab and click on the People & Pets album, you will see a banner at the top of the page with suggestions of faces to merge under the tag for that person or pet.
In their continuing effort to AI all the things, Google announced at Google Cloud Next yesterday that grammar suggestions will be rolling out to Google Docs in the near future. The Mountain View company has already opened up the Early Adopters Program signup for those who are on G Suite and want to give it a try.
The feature works as you would expect and is similar to tools provided by other apps, most notably Microsoft Word in Office 365. When a word or phrase needs grammatical improvement, it will be underlined in blue. You can then click on it to see the suggested improvement and either accept or reject it.
At Google I/O in May, the company demonstrated one of the coolest and most controversial upcoming aspects of Google Assistant. Using a human-sounding voice, referred to as Duplex, Assistant placed a phone call on the user's behalf to book an appointment. Reactions have ranged from claims that the demo was faked all the way to declarations that this is the first indicator of the rise of the machines against humanity. But the fact is that the technology is here and Google’s AI is stunningly powerful. And it won’t be long before you get to try it yourself.
Google today published a video on their YouTube channel that highlights how Assistant will be able to get things done for you like setting up an appointment or reservation without you having to make the call yourself. Take a look at the video.
While we are months away from having an interaction this smooth, the basic building blocks are there now and public testing has begun.
In today’s chapter of “It’s cool living in the future”, Google Lens is now broadly rolling out real-time information and text selection features via Google Assistant. The new features, which were highlighted at Google I/O a few weeks back, trickled out to some users last week, but now it appears that Google has pushed the cloud-side big button and they are going out to everyone.
There are a few different elements rolling out today that are pretty magical. First, there is the real-time information element. When you enable Google Lens (long press your Home button to activate Google Assistant and then tap the Lens icon in the lower right corner), you can point your camera at different objects. If Lens recognizes an object, you will get a colored bubble over that item which you can then tap, and Assistant will give you information about it.
After being announced last week at Google I/O, the new Google News app is now available in both the Google Play Store for Android users and Apple’s App Store for iPhone users. While the AI-driven news experience is now available in the apps, the Google News site still hasn’t been updated at the time of this posting.
As you may recall, the new Google News app has an entirely new look and feel, drawing heavily on Material Design for its overall look with lots of visual content for news articles. It is a far cry from the nearly all text-based version of the old News app.
Among the many announcements at Google I/O yesterday, there was one that may have slipped by: Gmail and its new Smart Compose feature. Rolling out to all Gmail customers in the coming weeks, Smart Compose uses Artificial Intelligence to look at the content of your message as well as your writing style to provide suggested text for your email. This goes way beyond autocompleting words. Smart Compose can provide complete sentences which can be put into your email with a single tap of the spacebar.
Smart Compose helps save you time by cutting back on repetitive writing, while reducing the chance of spelling and grammatical errors. It can even suggest relevant contextual phrases. For example, if it’s Friday it may suggest “Have a great weekend!” as a closing phrase.
This is another great example of how Google is leveraging AI to complete time-consuming but basic tasks in our day-to-day lives.
Of all of my Google devices, Google Home is quickly becoming my favorite. I use it for everything from catching up on the morning news to listening to music or podcasts to getting information with a simple command. With Google Assistant built in and Google continually adding functionality, we are only now seeing the tip of the iceberg of what Google Home could be doing for us and our smart homes in the future.
Normally Google Home is $129, but through the month of December you can pick one up at the Google Store for $79. That $50 savings would effectively let you buy a Google Home Mini (nearly two, in fact) so you can have Google Assistant available in multiple rooms of your home. The Home Mini is down to just $29 right now at the Google Store.