Gemini AI now allows users to ask questions using video and screen sharing

5 hours ago

Google has announced a new feature for its AI assistant, Gemini, that lets users ask questions through live video and screen sharing, turning chats into real-time conversations.

Announced during Mobile World Congress (MWC) 2025 in Barcelona, the new ‘Screenshare’ feature lets users share what’s on their phone screen with Gemini AI. Because it works in real time, users can ask the AI chatbot questions about their immediate surroundings or a website they are navigating.

For instance, Google showed a video of a user shopping for a pair of baggy jeans and asking Gemini what other clothing would pair well with them. Gemini responded with suggestions for the best-matching outfits and jackets based on the video it was shown.

The video search feature also lets users take a video and ask Gemini questions about it as they film. Google first teased Gemini Live’s ability to see in August 2024. The company noted that the live video features will start rolling out to Gemini Advanced subscribers on Android devices later in March.


In a demo video, Google showed the new live video feature in action. A ceramicist asked Gemini which glazes it would recommend for newly fired vases, aiming for a “mid-century modern look.” The user moved the camera around so Gemini could see the full range of glaze options and suggest which would suit best.

The new features come with some added advantages Gemini users can look out for.

For users who struggle to structure a prompt exactly as they intend, real-time video and screen sharing help explain issues clearly. They also make communication more effective and speed up problem resolution: being able to demonstrate a problem visually allows Gemini to diagnose and solve it more accurately and swiftly.

During tutorials and lessons, the feature offers a far more interactive experience, with screen sharing enabling hands-on learning and more tailored advice.

Also Read: OpenAI launches GPT-4.5, the latest model that understands human intentions

Recent Gemini updates

The news comes a few weeks after Google announced that Gemini can now recall past conversations you’ve had with it to enhance its responses in future ones.

The feature can summarize past conversations or reference them when relevant to a user’s current prompt. Google explained that this will help reduce the need to search through old chats with Gemini.


Though the feature is currently available only to Gemini Advanced subscribers and only in English, Google said it will soon roll out to Google Workspace Business and Enterprise customers, as well as to speakers of other languages.

Also in January, Google released a set of updates including Gemini 2.0 Pro and the “enhanced reasoning model” Gemini 2.0 Flash Thinking Experimental. The company noted that the addition enables the AI to show its thought process, improving performance and explainability.

The frequent roll-out of new features and models in the AI chatbot space reflects how tight competition within the industry has become. Like Google, OpenAI has been racing to push ChatGPT further ahead.

On Friday, Technext reported on the launch of the latest ChatGPT model, GPT-4.5, which was trained with more computing power and data than previous editions.

The tech company explained that GPT-4.5 was developed with the same techniques as its previous editions, such as GPT-4, GPT-3, GPT-2, and GPT-1. However, it was built with more computing power and data applied during a “pre-training” phase called unsupervised learning.

Commenting further on its cost, OpenAI acknowledged that the new model is expensive to run, and the company admitted it is evaluating whether to continue serving the model in its API in the long term.


Importantly, the company said GPT-4.5 is qualitatively superior to earlier models in several areas that standard benchmarks do not capture. For instance, the model better understands human intent, responds in a warmer and more natural tone, and performs well on creative tasks such as writing and design.

With OpenAI planning to roll out GPT-5 later this year, more such releases can be expected from Google as the rivalry over AI superiority intensifies.
