The Artificial Intelligence of the Google Pixel 2

Giant companies like Amazon are using artificial intelligence and machine learning for smart purposes, and Google is no exception. On 19 October 2017, Google launched the Pixel 2 and Pixel 2 XL, billed as two of the strongest smartphones on the market. With a simple squeeze, you can ask a question, give a command, grab a picture, launch an app or place a call, all without tapping, swiping or dialling.

The Pixel 2 is armed with four highly effective features powered by artificial intelligence: the image-recognition app Google Lens along with Portrait Mode, battery-level monitoring for Bluetooth headphones, interactive animated AR stickers, and the Now Playing song-recognition feature. Let's discuss how AI acts as the backbone of some of these smart features of the Google Pixel 2.

Google Lens is much more than a camera. It is a personalised recommendation engine built on machine-learning algorithms that helps you identify objects and places without having to describe them in words. It is a fascinating application of content-based image recognition. Most of us habitually capture images to preserve memories, and Google makes the best use of the unique power of an image to be more informative than a thousand words.
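The core idea behind content-based image recognition can be sketched in a few lines. This is a toy illustration, not Google's actual Lens pipeline: we assume each image has already been reduced to a small feature vector (in a real system a convolutional neural network would produce these), and a query photo is matched to the catalogue entry whose vector is most similar.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_best_match(query, catalogue):
    """Return the catalogue label whose vector best matches the query."""
    return max(catalogue, key=lambda label: cosine_similarity(query, catalogue[label]))

# Hand-written toy vectors standing in for neural-network embeddings.
catalogue = {
    "landmark": [0.9, 0.1, 0.0],
    "flower":   [0.1, 0.9, 0.2],
    "book":     [0.0, 0.2, 0.9],
}

# Embedding of the photo just taken; it sits closest to "landmark".
print(find_best_match([0.85, 0.15, 0.05], catalogue))  # -> landmark
```

The interesting part is that the phone never compares raw pixels; it compares compact descriptions of image content, which is what makes matching against a huge catalogue feasible.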

We all remember the days when we used to note down lyrics to search for a song on the internet. Nowadays the scenario is different: with newer and smarter technologies we can get information at any moment, quickly and accurately. The Now Playing feature of the Pixel 2 is a miniature neural network that runs on a tiny chip inside the phone. The system is trained to recognise the audio fingerprints of over 70,000 songs, and it is updated weekly with the latest tracks from Google Play Music. Now Playing is a complex feature made simple by the right combination of hardware, software and artificial intelligence.
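Audio fingerprinting can be illustrated with a deliberately simplified sketch. This is an assumption for illustration only: the real Now Playing model is a trained neural network, not the hand-written landmark hash below. The idea is the same, though: reduce each track to a small set of robust landmarks, then identify a noisy snippet by counting how many landmarks it shares with each stored track.

```python
def fingerprint(samples, frame=4):
    """Toy fingerprint: hash the position of the loudest sample in each frame."""
    marks = set()
    for t in range(0, len(samples) - frame + 1, frame):
        window = samples[t:t + frame]
        peak = max(range(frame), key=lambda i: window[i])
        marks.add((t // frame, peak))
    return marks

def identify(snippet, database):
    """Return the track whose stored fingerprint overlaps the snippet most."""
    snippet_fp = fingerprint(snippet)
    return max(database, key=lambda name: len(snippet_fp & database[name]))

# Toy "loudness" traces standing in for real audio.
tracks = {
    "song_a": [0, 3, 1, 0, 2, 0, 0, 1, 0, 0, 3, 0],
    "song_b": [1, 0, 0, 3, 0, 2, 1, 0, 3, 1, 0, 0],
}
database = {name: fingerprint(s) for name, s in tracks.items()}

# A slightly noisy recording of song_a: the exact levels differ,
# but the peak positions (the landmarks) survive.
noisy = [0, 2, 1, 0, 2, 0, 1, 1, 0, 0, 2, 0]
print(identify(noisy, database))  # -> song_a
```

Because the fingerprint keeps only peak positions rather than exact levels, the match survives noise and volume changes, and a database of thousands of fingerprints is small enough to live on the device itself.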

Another feature of the Pixel 2 is Portrait Mode, a setting that lets people take professional-looking, shallow depth-of-field images on a mobile phone without any manual editing. We remember an era when a portrait-style image required an SLR camera with a large lens, a small aperture and a steady photographer to keep the subject in focus. Today, roughly 85% of all photos are taken on mobile phones, which presents an interesting set of challenges: a small lens, a fixed aperture and a photographer who might not be so steady. Google's research and hardware teams recreated this effect in software to develop Portrait Mode.
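The recipe behind the effect can be sketched very simply. This is a toy model, not Google's real pipeline (which uses a neural network to segment the subject and dual-pixel data to estimate depth): assume we already have a mask marking which pixels belong to the subject, then keep those pixels sharp and soften everything else.

```python
def portrait_blur(pixels, subject_mask, radius=1):
    """Blur background pixels with a moving average; keep subject pixels sharp.

    pixels: a 1-D row of brightness values (a stand-in for a real image).
    subject_mask: 1 where the pixel belongs to the subject, 0 for background.
    """
    out = []
    for i, (p, keep) in enumerate(zip(pixels, subject_mask)):
        if keep:
            out.append(p)  # subject stays in focus
        else:
            lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
            out.append(sum(pixels[lo:hi]) / (hi - lo))  # background softened
    return out

row = [10, 200, 10, 10, 200, 10]
mask = [0, 1, 1, 1, 1, 0]  # the middle four pixels belong to the subject
print(portrait_blur(row, mask))  # -> [105.0, 200, 10, 10, 200, 105.0]
```

The hard part in practice is producing that mask automatically; once the phone knows where the subject ends and the background begins, the "big lens" look is just selective blurring.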

Skillville viewpoint:

Giant companies are empowering themselves with artificial intelligence. Devices are becoming smarter, the world is becoming smaller, and connectivity is growing at breathtaking speed. Artificial intelligence and machine learning are becoming part of our daily lives. The day is not far off when this human-made artificial intelligence will run much of our world through its algorithms.

So where are we? Are you ready to adapt to the change? Are you ready to be part of the change? Yes, it is good to adapt to change and to be part of it. But you get noticed when you BRING THE CHANGE.

Prepare yourself with Skillville to BRING THE CHANGE.

Certificate Program in Artificial Intelligence & Machine Learning
