
Deep Dive: The AI Innovations Across Google’s 2017 Devices Lineup

Posted October 8, 2017 | Android | Cloud | Google | Google Assistant | Google Clips | Google Home | Google Pixel | Google Pixel XL | Google Translate | Mobile | Music + Videos | Nest | Pixel Buds | smart home | Windows


This year, Google is taking a bite out of Apple.

I previously made the case that AI is how Google will win in devices. Now it’s time to get a lot more specific.

As you may recall, Google announced a new lineup of smartphones, Chromebooks, smart speakers, and other devices this past Wednesday. Many have criticized the new products as being responses to the competition, or just bland. This view is wrong-headed. Instead, the search giant was upfront about these devices’ collective, AI-based advantages. And this differentiation, I feel, is key: Google will use AI to win the next wave of personal computing.

That’s quite a claim, I know. And we’ll need to wait and see how Google’s various products and services perform in the market before we know whether this opinion holds up. But in the meantime, we can examine how Google is applying AI specifically to each of its newly announced devices. This is helpful, I think, for understanding how serious Google is about using its core strength in the cloud to help advance its goals on the client. And to widen the gap between itself and Apple, and any other would-be competitors.

And there is a ton of information to look at here.

“[We] are radically rethinking how computing should work,” Google CEO Sundar Pichai said, opening the devices event. “In an AI-first world, computers will adapt to how people live their lives, rather than people having to adapt to computers.”

(Google is also using AI and machine learning to advance its core web services and mobile apps, of course. And many Google advances will make their way to third-party solutions via Android. Here, however, I’m focusing specifically on the “Made by Google” devices that the company just announced.)

“AI-first” allows people to interact with computers and other devices in a natural and seamless way, using conversations, gestures and vision. AI-first is also ambient, a term you’ve probably heard me use a lot in the past year or so. This means it is available to you everywhere, not just on a certain device. It is also contextual so that it understands you, your location, and your environment to give you the information you really need at the right time. And it is adaptive, learning and improving over time.

Here’s how Google is applying these techniques in its newly announced products, which it correctly describes as “radically helpful.”

Google Home and Google Assistant

For 2017, the Google Home hardware lineup is expanding past the original device to include a smaller and cuter Google Home Mini as well as a bigger Google Home Max, with its (apparently) superior sound. These two new products are aimed at filling out the product line, hitting a lower price point and a higher tier of audio performance, respectively. They’re about style and warmth.

So there’s nothing uniquely AI about the new Home devices per se, other than the Smart Sound feature for Max that tailors the speaker’s sound to individual rooms and even to the content you’re listening to. (Apple is doing this too, with HomePod.) But given the nature of this product family, there’s a lot going on here, AI-wise.

In fact, Google Assistant and the Google Home smart speakers it drives are, perhaps, the most obvious example of how this firm is using its AI expertise to improve real world products. And in just its first year in the market, Google Home has improved at a scale that is almost hard to fathom: It can now answer over 100 million additional questions.

The interactions you have with this device, or with Google Assistant generally, are of course very natural: You just speak normally and, in many cases, engage in a conversation. And now you can do so in far more places: Google has also worked to bring Google Assistant and Home to more countries, and to more languages, over the past year.

“Now, bringing the Assistant to people all around the world is no easy task,” Google’s Rishi Chandra noted, as he described the firm’s decade-long work in this area. “We had to make sure we could understand people of different age groups, genders, and accents. So we trained the Assistant at a scale that only Google could, with over 50 million voice samples from hundreds of different ambient environments.”

Google Assistant now features the best voice recognition in the market. And, unique among the entries in this field, it can recognize individual voices. So when I ask Google Home for my schedule, it gives me my schedule, not my wife’s. And when she asks the device for her reminders, she gets hers, not mine. This feature also works with hands-free calling, another feature that debuted first on Google Home: When you ask to call someone named “Paul,” it will be a Paul in your address book.

“An assistant can only truly be useful if it knows who you are,” Chandra said. And Google’s is the only assistant that offers that very important feature. It’s a huge differentiator.

And Google Assistant and the devices it powers are not standing still: They’re improving over time. Two key changes that just became available are tied to routines, which let the Assistant carry out multiple actions—e.g. tasks—with a single command.

So Google Assistant now supports more routines—including such things as coming home in the evening and going to bed—and more actions.

In an example provided by Chandra, you might create a routine called “Good morning” that turns on the lights, starts the coffee maker, and fires up your daily briefing on the speaker(s) of your choice. Google Home has also picked up a “find my phone” feature that will ring your smartphone if you can’t find it. Just say, “OK, Google, find my phone.” (Yes, it works with that voice recognition functionality, ensuring that it will ring your phone, and not your wife’s.)
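To make the idea concrete, here is a minimal sketch of how a routine can be modeled: one spoken trigger fanning out to an ordered list of actions. The function and device names are purely hypothetical; this illustrates the concept, not Google’s actual implementation.

```python
# A minimal sketch of the routine concept: one spoken trigger runs an ordered
# list of actions. All device names and actions here are hypothetical.

def turn_on_lights():
    print("Lights: on")

def start_coffee_maker():
    print("Coffee maker: brewing")

def play_daily_briefing():
    print("Speaker: playing daily briefing")

ROUTINES = {
    "good morning": [turn_on_lights, start_coffee_maker, play_daily_briefing],
}

def handle_command(phrase: str) -> None:
    """Run every action registered for the spoken phrase, in order."""
    for action in ROUTINES.get(phrase.lower().strip(), []):
        action()

handle_command("Good morning")
```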

Google Assistant is also improving its support for smart home devices: It now supports over 1,000 different products from over 100 different companies. It can also interact with these devices more intelligently, letting you use simpler and more natural language commands like “make it warmer” (as opposed to setting a particular thermostat to a particular temperature). Google also talked up its Nest-branded smart home products at the event; see below.
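Conceptually, the “make it warmer” style of command just maps a relative request onto a concrete device adjustment, rather than requiring an exact temperature. A tiny sketch of that mapping follows; the room name and the three-degree nudge are my own inventions, not how the Assistant or Nest actually handle it.

```python
# A small sketch of mapping a relative command ("make it warmer") onto a
# concrete thermostat change. Room name and delta are invented for illustration.

THERMOSTAT = {"living room": 68}  # degrees Fahrenheit

def handle_smart_home_command(command: str, room: str = "living room") -> int:
    """Adjust the thermostat relative to its current setting."""
    if "warmer" in command:
        THERMOSTAT[room] += 3
    elif "cooler" in command:
        THERMOSTAT[room] -= 3
    return THERMOSTAT[room]

print(handle_smart_home_command("make it warmer"))  # 68 -> 71
```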

Google Home is also picking up a new feature called Broadcast that lets you send audio messages to every Google Home device in your home. For example, “OK Google, broadcast that it’s time to leave for school.” And to further its usefulness for families, Google is integrating linked accounts for kids under 13 with Google Home. And it has improved the Assistant’s voice recognition to include children so that it can understand them too.

“We’re introducing over 50 new experiences with the Google Assistant to help kids learn something new,” Mr. Chandra explained. “Explore new interests, imagine with story time, share laughs with the whole family.” He then provided a few examples from his own family: “OK Google, play musical chairs,” “OK Google, beat-box me,” “OK Google, let’s play space trivia,” “OK Google, tell me a story,” and so on.

Yes, there will always be the complaint that Google’s technologies cross some line between useful and creepy, but that’s the point. This is an area where Apple is simply too sheepish to tread, and it doesn’t have the technical acumen to pull it off anyway. That willingness is as much a contributor to Google’s ongoing success as the actual technology.

But in this specific case, one can imagine complaints about Google raising our children or whatever other nonsense. As Chandra notes, though, these experiences take kids away from solo experiences attached to screens, and let them interact with each other, and with parents, in a group. It’s healthier than giving a kid an iPhone.

Google is partnering with Disney to bring that firm’s many entertainment experiences—like Mickey Mouse and Star Wars—to Google Home. And more broadly, it is opening up Assistant actions so that any third party can bring their own family- and kid-based experiences to the platform as well.

Nest

Google-owned Nest is unsurprisingly upping its game when it comes to Google Assistant integration. Nest recently (and ahead of the Google event) shipped six new hardware products, each of which combines machine learning and modern, thoughtful hardware design.

Nest’s Yoky Matsuoka provided a few examples.

For example, using Nest Cam in tandem with Google Home and Chromecast, you can keep an eye on the security of your home using just your voice. A command like “OK Google, show me the entryway” will be received by Google Home, and the video from the Nest Cam will be streamed to the Chromecast attached to your TV. (You can also save a clip of the Nest Cam stream with “OK Google, save this clip” or similar.)

The Nest Hello video doorbell, meanwhile, will use Google’s facial recognition technologies to recognize people who are at the front door. So when the doorbell rings, it will broadcast through any Google Home devices and tell you who it is (if that person was recognized): “Aunty Suzie is at the front door.”
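The underlying idea is easy to sketch: compare a face embedding captured by the doorbell camera against embeddings of known visitors, and announce the closest match if it is close enough. Everything below, including the embeddings, threshold, and helper names, is invented for illustration and is not Nest’s actual pipeline.

```python
# A rough sketch of the "familiar face" idea: nearest-neighbor matching of a
# face embedding against known visitors, then a spoken announcement.
from typing import Optional
import numpy as np

KNOWN_FACES = {
    "Aunty Suzie": np.array([0.21, 0.87, 0.40]),
    "Mail carrier": np.array([0.90, 0.10, 0.33]),
}

def identify(face_embedding: np.ndarray, threshold: float = 0.25) -> Optional[str]:
    """Return the closest known name if it falls within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, ref in KNOWN_FACES.items():
        dist = np.linalg.norm(face_embedding - ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

def on_doorbell_ring(face_embedding: np.ndarray) -> None:
    name = identify(face_embedding)
    message = f"{name} is at the front door." if name else "Someone is at the front door."
    print("Broadcast to all speakers:", message)

on_doorbell_ring(np.array([0.22, 0.85, 0.41]))  # -> "Aunty Suzie is at the front door."
```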

Finally, using the Google Assistant routine improvements I noted above, you can now include actions for Nest products too. So when you create a routine like “Goodnight,” it can include arming the home security system and turning on home monitoring cameras in addition to turning off lights, setting the thermostat, setting an alarm, reminding you about your first appointment the next day, and whatever else. Pretty impressive.

Pixelbook

Google’s newest Chromebook, the Pixelbook, is a “4-in-1,” or a convertible PC, as we’d call it in the Windows world. And it’s interesting on a number of levels. But from an AI perspective, the Pixelbook provides one major leap forward for all laptop-kind: It is the first Chromebook with Google Assistant built-in. It even adds a dedicated Assistant key to the Chrome OS keyboard for the first time. That way, you can access Assistant by typing instead of speaking, something that may be more acceptable in laptop-style productivity situations.

That stuff is obvious. But Pixelbook also offers unique Assistant interactions via the optional Pixelbook Pen.

“Just hold the Pen’s button and circle an image or text on the screen, and the Assistant will take action,” Google’s Matt Vokoun explained. “When you’re browsing through a blog, discovering a new musician, you can circle their photo, and the Assistant will give you more information about them. From there, you can go to their Instagram page, their YouTube channel, listen to their songs, and more.”

As with the little-used Cortana integration in Microsoft Edge on Windows 10, the Assistant can also be used to do research: Circle a word and get a definition and other information.

Pixel and Pixel XL

Google’s latest smartphone push rightfully received a lot of attention this week. But the big news, of course, was how the search giant will use AI to differentiate these products from what Apple, Samsung, and others sell.

“The playing field for hardware components is leveling off,” Google’s Rick Osterloh explained. “Smartphones [have] very similar specs: Megapixels in the camera, processor speed, modem throughput, battery life, display quality. These core features are table stakes now. Moore’s Law and Dennard scaling are ideas from the past. It’s going to be harder and harder for [companies] to develop exciting new products each year because that’s no longer the timetable for big leaps forward in hardware alone. And that’s why we’re taking a very different approach at Google.”

He then reiterated the company mantra that “the next big innovation will occur at the intersection of AI, software, and hardware.” So while smartphones can reach spec parity, Google’s devices will always have the edge because of the unique AI-based advances that it alone can deliver to users at scale.

The first-generation Pixel handsets were the first smartphones to include Google Assistant. But they also revolutionized the end-to-end photos experience for users, thanks to a superior (in fact, best-in-market) camera with automatic HDR and video smoothing, free cloud-based storage for full-sized photos taken with the device, and a simple and elegant Photos app and service with instant search and an ever-growing list of features.

For Pixel 2, Google has done what it needed to do, hardware-wise, to make what it feels is a competitive device. For this discussion, of course, what I’m concerned with is the AI-based innovations only. And there are a number of items here, above and beyond the obvious advancements to Google Assistant like Broadcast and the new routines and actions noted previously.

The first, however, is related to Google Assistant: On Pixel 2, you can squeeze the device as you hold it to more easily (and perhaps more naturally) summon the Assistant. There’s no need to say “OK, Google.”

The new Pixels include an integrated Shazam-like feature called Now Playing that is available from the always-on display: Just glance at the display, and you will see the name of the artist and the currently playing song. Interestingly, this one uses on-device machine learning, and not a cloud service, which is a curiously Apple-like way of doing things. If you tap the song name on the display, Google Assistant fires up so you can learn more, add the song to a playlist in your preferred music service, or watch the video on YouTube.
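Conceptually, on-device recognition boils down to computing a compact fingerprint of the ambient audio and matching it against a database stored locally on the phone, with no cloud round trip. Here is a toy sketch under those assumptions; the fingerprinting function and the song database are stand-ins, not Google’s actual approach.

```python
# A toy illustration of on-device song recognition: fingerprint a short audio
# snippet and look it up in a local database. Everything here is hypothetical.

LOCAL_SONG_DB = {
    "a1b2c3": ("Artist A", "Song One"),
    "d4e5f6": ("Artist B", "Song Two"),
}

def fingerprint(audio_samples: list) -> str:
    # Stand-in for a real acoustic fingerprint (e.g. spectral peak hashing).
    return "a1b2c3" if sum(audio_samples) > 0 else "unknown"

def now_playing(audio_samples: list) -> str:
    """Return 'Song by Artist' if the fingerprint matches a local entry."""
    match = LOCAL_SONG_DB.get(fingerprint(audio_samples))
    return f"{match[1]} by {match[0]}" if match else "No match"

print(now_playing([0.1, 0.3, -0.05]))  # -> "Song One by Artist A"
```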

Google is also bringing at-a-glance functionality to the Pixel 2 home screen, starting with calendar data. But commute and traffic information, flight status, and more are coming soon.

But the most startling AI-related advance on the new Pixel 2s is an app called Google Lens. It will ship in preview form this fall on the Pixels and then will be made available to other Android devices in the future.

“Google Lens is a way to do more with what you see,” Google’s Aparna Chennapragada said during the devices presentation.

At a basic level, Google Lens works like other apps that try to understand the live world view that’s available via your smartphone’s camera. (For example, you can use Google Translate to view a menu in, say, Japanese, and see a live translation on the display in a sort of augmented reality view.) But Google Lens, of course, goes much further.

In a demo, Chennapragada showed how Google Lens could read phone numbers and email addresses from a flyer, which is useful. But it can also be used to call that number or email that address. It also works for mapping to physical addresses.

In another demo, Google Lens was used to identify the artist behind a framed print hanging on the wall. “Now you can just Lens it,” she said. She then used Google Lens to identify and learn more about a movie, a book, an album, and, most impressively, a Japanese temple in a personal photo from a trip five years ago.

“There are a lot of things happening under the hood, all coming together,” Chennapragada said.

Thanks to major breakthroughs in deep learning and vision systems, Google Lens can work in tandem with the millions and millions of items indexed by Google Search to understand what you’re looking at. And Google’s Knowledge Graph, with its billions of facts about people, places, and things, is called on to provide more information. This is exactly the type of thing that only Google can do this effectively. And while it is still early days for visual recognition, Google’s track record on general search and voice recognition is well-established.
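A rough way to picture that pipeline: a vision model turns the image into an entity label, and a knowledge-base lookup turns that label into facts the Assistant can present or act on. The sketch below uses placeholder functions and a made-up fact store; it is not the real Lens or Knowledge Graph API.

```python
# A very rough sketch of the Lens flow: recognize an entity in an image, then
# enrich it with facts from a knowledge store. Both pieces are stand-ins.

FAKE_KNOWLEDGE_GRAPH = {
    "Kinkaku-ji": {"type": "Buddhist temple", "location": "Kyoto, Japan"},
}

def recognize_entity(image_bytes: bytes) -> str:
    # Placeholder for a deep-learning vision model.
    return "Kinkaku-ji"

def lens(image_bytes: bytes) -> str:
    """Return the recognized entity plus any facts we know about it."""
    entity = recognize_entity(image_bytes)
    facts = FAKE_KNOWLEDGE_GRAPH.get(entity, {})
    details = ", ".join(f"{k}: {v}" for k, v in facts.items())
    return f"{entity} ({details})" if details else entity

print(lens(b"...photo bytes..."))  # -> "Kinkaku-ji (type: Buddhist temple, ...)"
```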

Google also uses AI to help improve the Pixel 2 cameras, as it did with the previous-generation devices. For this generation, the firm is adding a Portrait Mode feature that requires only a single camera lens—most smartphones need two to do this—to separate the subject from the background and create a compelling bokeh effect. The firm used over a million photos to train the machine learning algorithms that make this functionality possible, Google’s Mario Queiroz said. Also, Portrait Mode works on both the front and rear cameras, unlike with other smartphones.
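The single-lens trick can be sketched roughly as: predict a mask that separates the subject from the background, blur the background, and composite the two. The segmentation “model” below is a crude placeholder (Google’s actual approach also leans on dual-pixel sensor data), so treat this as a conceptual outline only.

```python
# A simplified sketch of single-lens portrait mode: subject mask -> blurred
# background -> composite. The mask predictor is a placeholder, not a real model.
import numpy as np
from scipy.ndimage import gaussian_filter

def predict_subject_mask(image: np.ndarray) -> np.ndarray:
    # Stand-in for a learned person-segmentation network: just the center region.
    mask = np.zeros(image.shape[:2])
    h, w = mask.shape
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 1.0
    return mask

def portrait_mode(image: np.ndarray) -> np.ndarray:
    """Keep the subject sharp and blur everything else."""
    mask = predict_subject_mask(image)[..., None]       # shape (H, W, 1)
    blurred = gaussian_filter(image, sigma=(8, 8, 0))   # blur spatially, not across channels
    return mask * image + (1.0 - mask) * blurred

photo = np.random.rand(480, 640, 3)
result = portrait_mode(photo)
```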

Accessories: Pixel Buds and Clips

While many of Google’s announcements this past week were spoiled by leaks, two were not. Both were for devices that are accessories for the Pixel or other Android-based smartphones.

The first is a new pair of wireless headphones called Google Pixel Buds. They work like many other wireless headphones, of course. But with two wrinkles.

“When you pair your Pixel Buds to your Pixel 2, you get instant access to the Google Assistant,” Google’s Juston Payne noted. This enables voice control of various features like playing music, sending a text, or getting walking directions. “All while keeping your phone in your pocket,” he added. “It can also alert you to new notifications and read you your messages.”

And then he dropped the bomb. This is inarguably the most impressive thing that Google announced that day.

“Google Pixel Buds even give you access to a new, real-time language translation experience,” he said. “It’s an incredible application of Google Translate powered by machine learning. It’s like having a personal translator by your side.”

The live demo of this functionality is incredible to watch: Payne speaks in English to a Pixel Buds-wearing Swedish speaker. The Buds translate his speech into Swedish so she can understand it, and she then replies in Swedish. Her Pixel 2 smartphone speaks her words to Payne, translated into English. And as with the Babel fish from The Hitchhiker’s Guide to the Galaxy—which, by the way, is science fiction—a real and natural conversation occurs. It’s remarkable.

The Pixel Buds provide real-time language translation functionality in 40 different languages.
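Conceptually, the experience is a loop of speech recognition, machine translation, and speech synthesis, alternating between the two speakers. The sketch below uses placeholder functions with canned outputs; these are not the actual Pixel Buds or Google Translate APIs.

```python
# A conceptual sketch of the translation loop: speech-to-text, translate, then
# text-to-speech for each conversational turn. All helpers are placeholders.

def speech_to_text(audio: bytes, language: str) -> str:
    return "Where is the train station?"     # placeholder recognizer

def translate(text: str, source: str, target: str) -> str:
    return "Var ligger tågstationen?"        # placeholder translation (en -> sv)

def text_to_speech(text: str, language: str) -> bytes:
    return text.encode("utf-8")              # placeholder synthesizer

def translate_turn(audio: bytes, source: str, target: str) -> bytes:
    """One conversational turn: hear in one language, speak it in the other."""
    heard = speech_to_text(audio, source)
    translated = translate(heard, source, target)
    return text_to_speech(translated, target)

# The English speaker talks; the phone plays the Swedish rendering to the listener.
swedish_audio = translate_turn(b"...microphone audio...", source="en", target="sv")
```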

Finally, Google also showed off a new camera accessory called Google Clips. It’s basically a mini GoPro-type device that you can place in a room or space, or clip onto a child or pet, and have spontaneous scenes automatically recorded for you. Now, you can be part of the moment, and not just a bystander or family historian.

Google Clips looks fun. But the big news is its use of AI.

“Google Clips starts with an AI engine at the core of the camera,” Payne explained. “When you’re behind a camera, you look for people you care about. You look for smiles. You look for that moment that the dog starts chasing his tail. Clips does all of that for you. Turn it on, and it captures the moment … And it gets smarter over time.”

From a privacy perspective, all of the machine learning happens on the device itself (again, much as Apple would do it). Nothing leaves the device until you decide to share it.

Just as impressive, to me, is that Google was able to fit such ostensibly powerful machine learning capabilities into such a small device. Payne described it as a “supercomputer.”

But then that’s Google in a nutshell: The supercomputer in a room full of normal computers.

And while I’m sure that Apple, Amazon, Microsoft, and others will be able to match some parts of what Google is doing here, it’s not clear to me that any of them can ever do it all. In fact, I’m sure they cannot. And that’s why this is all so impressive: Not any single announcement, but rather the weight, the scope, of it all.

 
