Google announced in recent days that it intends to continue investing in artificial intelligence, and today it is ready to show how AI is fundamental to visual search in Lens.
While artificial intelligence was initially limited to language processing, much has changed over time: AI systems have become far better at understanding information such as natural language, images, video and the real world.
How does this information translate into visual search?
Lens
This tool helps connect users to the world around them through image search. Using Lens is very easy: all you need is your smartphone camera or a photo, directly from the search bar.
Now Google has decided to enrich Lens with a new option designed for Android users: "Search your screen". In practice, you can search the web and apps for what you see in a photo or video without having to open the Google or Lens app.
To better understand how Search your screen works, suppose a friend sends you a photo of a monument. To use the new Lens option, just press your smartphone's Home or power button and tap "search screen". Lens will immediately identify the monument and provide all the relevant information.
Multisearch
This feature makes it easier to search starting from an image and a text query at the same time. The novelty is the addition of the "near me" option, which lets you search for what you need close to your location.
In the coming months, in addition to "near me" being extended to more languages beyond English, it will also become possible to search for something more specific within any image. For example, if you are looking for a rectangular coffee table but can only find round ones on the web, you will finally be able to specify your preference and find the style that best suits your taste.
Google Maps has also made many steps forward in the last year. Thanks to advances in artificial intelligence and computer vision, the immersive view built on Street View takes on a completely different flavor, with unique aerial images and useful information such as weather, traffic and how busy places are.
For example, if we search for a museum on Maps, we can view the busiest places in the vicinity and the museum's entrances, scroll through images over time to see conditions at different times of day, and move around freely to look for nearby restaurants and pubs.
All this is possible thanks to Neural Radiance Fields (NeRF), a technique that transforms ordinary photos into 3D representations.
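At the heart of NeRF is volume rendering: a neural network predicts a density and a color at points sampled along each camera ray, and those samples are composited into a single pixel color. The sketch below shows just that compositing step, with NumPy standing in for the trained network (the function name and array shapes are illustrative assumptions, not Google's implementation):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite color samples along one camera ray, NeRF-style.

    sigmas: (N,) volume densities predicted at each sample point
    colors: (N, 3) RGB colors predicted at each sample point
    deltas: (N,) distances between consecutive samples
    """
    # Opacity contributed by each segment of the ray
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: how much light survives up to each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Weight of each sample in the final pixel color
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

A fully opaque sample early on the ray dominates the pixel and hides everything behind it, which is how solid surfaces emerge from what is, mathematically, a translucent volume.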
Besides Street View, it is important to mention Live View, which lets you find what's around you simply by holding your phone vertically while walking. Currently this feature is only available in London, Los Angeles, New York, Paris, San Francisco and Tokyo, but it will be expanded to other cities in the coming months.
As for traveling by car, Maps has also integrated useful features for electric vehicle drivers, such as suggested charging stops on short trips and very fast charging stations.
There are also "glanceable directions", which can be viewed in the route overview or on the phone's lock screen and update automatically if you suddenly decide to change route.
Finally, there are the new features concerning Google Translate: