Google announced a large number of new artificial intelligence features at its Google I/O developer event yesterday, and one of them is now available to iPhone users.
The updated Google Gemini app can now respond to its surroundings via a live video feed.
The feature, called Gemini Live, was demonstrated during the I/O event with a person pointing their phone camera at their surroundings. Gemini was then able to comment on what it saw while answering questions and correcting mistaken statements from the user.
When the user pointed the camera at a shadow and asked why someone was following them, Gemini told the user that it was, in fact, just their own shadow.
Gemini Live can also work from what is on your screen, letting users ask the AI to summarize or interact with what it sees, including websites. Files can also be uploaded to Gemini Live, along with pictures and more.

The new feature is available to Gemini users now, for free, and works in both the iPhone and iPad apps. Google says it is available in more than 45 languages across more than 150 countries.
All eyes will now be on Apple and its annual WWDC event next month, to see whether Apple brings any similar features to the iPhone with iOS 19. That update is expected to be announced at WWDC before being released to the public later this year.