Google recently announced that it intends to continue investing in artificial intelligence, and today it is ready to show us how AI is fundamental to visual search in Lens.
Artificial Intelligence and Google Lens
While artificial intelligence was initially limited to language processing, much has changed over time, making AIs better at understanding information such as natural language, images, video and the real world.
How does this information translate into visual search?
This tool helps connect users with the outside world through image search. Using Lens is very easy: all you need is your smartphone camera or a photo, directly from the search bar.
However, Google has decided to enrich Lens with a new option designed for Android users: “search your screen”. In short, you can search the web and apps for what you see in a photo or video, without having to open the Google or Lens app.
To better understand how “search your screen” works, suppose a friend sends you a photo of a monument. To use the new Lens option, just press your smartphone’s Home or power button and tap “search screen”. Lens will immediately identify the monument and provide all the relevant information.
With this feature it is easier to search using an image and text at the same time. The novelty lies in the addition of the “near me” option, to search for something you need close to your location.
In the coming months, in addition to translating “near me” into languages other than English, it will also be possible to search for something more specific within any image. For example, if you are looking for a rectangular coffee table but can only find round ones on the web, you will finally be able to enter your preference and find the style that best suits your taste.
Not only Lens: Google Maps and Google Translate have also received updates
As for Google Maps, many steps forward have been made in the last year. Thanks to advances in artificial intelligence and computer vision, immersive view built on Street View takes on a completely different flavor, with unique aerial images and useful information such as weather, traffic and how busy places are.
For example, if we search for a museum on Maps, we can view the busiest nearby places and the museum’s entrances, scroll through images over time to see the situation at different times of day, and move around freely to look for nearby restaurants and pubs.
All this is possible thanks to neural radiance fields (NeRF), a technique that transforms ordinary images into 3D representations.
Besides Street View, it is important to mention Live View, which lets you find everything around you simply by holding the phone up while walking. Currently this feature is only available in London, Los Angeles, New York, Paris, San Francisco and Tokyo, but it will be expanded to other cities in the coming months.
As for traveling by car, Maps has also integrated useful features for electric vehicle drivers, such as charging points for short trips and very fast charging stations.
Finally, there are “glanceable directions”, viewable in the route overview or on the phone’s lock screen, which update automatically if you suddenly decide to change roads.
Finally, the updates to Google Translate:
- higher performance, with the ability to select a language in as few taps as possible: simply hold down the “Language” button to quickly choose which language to translate the text into;
- more inclusiveness, with more readable text and dynamic fonts that adapt as the text is typed.