Google is ushering in a new era with the launch of Gemini 3, an AI model that the tech giant claims is its most intelligent yet. This comes two years after Google launched the first version of Gemini to compete against OpenAI’s GPT models. The company says Gemini 3 represents a new level of reasoning, context awareness, and multimodal understanding: the kind of intelligence that plans and builds its responses and adapts as it goes. It is available across a suite of Google products.
Gemini 3 is the culmination of Gemini 1, 1.5, and 2.5, stacking the breakthroughs of those previous models into a more potent architecture. Instead of reinventing the wheel, it uses the long-context and multimodal infrastructure established in earlier versions as a baseline. The result is a model that is better at long-chain reasoning and significantly more capable of handling complex tasks without breaking coherence.
The new model comes with upgrades across the Gemini app and Google’s AI ecosystem. Here are some exciting new features launched with Gemini 3.
Deep Think mode that learns, builds, and plans
Gemini 3 comes with a new Deep Think capability that allows the model to pause and reason through complex logic before responding. Google says the model sets a new benchmark for performance because it was designed with the depth and nuance to solve complex problems across science and mathematics with a high degree of reliability. Its responses are also more concise, as it has been trained to trade clichés and flattery for genuine insight.
With advanced multimodal understanding, the model can synthesise and understand several types of content simultaneously, including text, images, video, audio, and code. This means it can turn a photo of handwritten material or lengthy lecture notes into customised, interactive learning materials like flashcards or visualisations.
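For developers, the same multimodal capability is exposed through the Gemini API. Here is a minimal sketch using Google’s google-genai Python SDK; the model identifier and file name are illustrative placeholders rather than confirmed values.

```python
# Minimal sketch: sending an image plus a text prompt to Gemini.
# Assumes the google-genai SDK (pip install google-genai); the model ID
# and file name below are placeholders, not confirmed values.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("handwritten_notes.jpg", "rb") as f:  # photo of handwritten material
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model name
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Turn these handwritten notes into question-and-answer flashcards.",
    ],
)
print(response.text)
```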
New generative UI in Gemini App
The app has a visual layout feature that allows the new model to create interfaces that adapt to the user’s specific prompt. This feature produces a magazine-style view of explorable visual content, such as a full itinerary for a trip plan, complete with photos and customisable modules. Alongside it is a dynamic view feature that allows the model to design a custom interactive user interface in real time. For example, asking for an explanation of a historical gallery could result in an interactive guide optimised for learning.
Gemini Agent for multi-step tasks
A new agentic tool within the Gemini app allows the model to take on multi-step tasks such as research, planning, or interacting with other Google apps like Gmail and Calendar on the user’s behalf. It can handle actions like organising an email inbox, adding reminders, or researching and booking travel. The feature will roll out to Google AI Ultra members first.
“My stuff” folder for your stuff
The Gemini app has introduced a new section in the menu, dubbed My Stuff, which stores the images, videos, and Canvas creations a user generates within a chat, rather than leaving them buried in chat history. This makes it easier for users to find their generated materials.
Better shopping experience
The model now offers an improved shopping feature, pulling product listings, prices, and comparison tables directly from Google’s Shopping Graph, which has over 50 billion product listings. This allows users to conduct complex product research and receive actionable purchase information within the Gemini chat.
Google Search learns to research
With this update, AI Mode on Google Search uses a query fan-out technique: rather than looking for matching keywords from a user’s prompt, Gemini 3 breaks a complicated question into smaller pieces, researches each part, and delivers one clear answer. This allows Search to fan out more intelligently and uncover a broader range of relevant web content that older models might have missed. The capability will be available to Google AI Pro and Ultra subscribers.
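To make the idea concrete, here is a toy Python sketch of what a fan-out pipeline could look like. The three helpers are illustrative stand-ins for the model’s internal steps, not anything Google has published.

```python
# Toy illustration of query fan-out: split a question, research each part
# in parallel, then merge the findings. All three helpers are stand-ins.
from concurrent.futures import ThreadPoolExecutor

def decompose(question: str) -> list[str]:
    # A real system would ask the model to split the question into sub-queries.
    return [f"{question} (cost)", f"{question} (reliability)", f"{question} (reviews)"]

def search(sub_query: str) -> str:
    # A real system would run a web search for each sub-query.
    return f"findings for: {sub_query}"

def synthesise(question: str, findings: list[str]) -> str:
    # A real system would have the model merge the findings into one answer.
    return f"One clear answer to {question!r}, drawn from {len(findings)} research threads."

def fan_out(question: str) -> str:
    sub_queries = decompose(question)
    with ThreadPoolExecutor() as pool:  # research each piece in parallel
        findings = list(pool.map(search, sub_queries))
    return synthesise(question, findings)

print(fan_out("Which e-bike suits a hilly commute?"))
```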
Interactive simulations in AI Mode
Google Search can now harness Gemini 3’s coding capability to instantly generate interactive tools and simulations directly within the AI Mode response. This may include a custom-built, functional loan calculator or an interactive simulation of a complex physics concept that is tailored specifically to the user’s query.
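The loan calculator example is easy to picture: underneath whatever interface Gemini generates sits the standard amortisation formula. A plain-Python version of that core calculation, offered as an illustration only, might look like this:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortised loan payment: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of monthly payments
    if r == 0:
        return principal / n  # zero-interest edge case
    return principal * r / (1 - (1 + r) ** -n)

# e.g. a $250,000 loan at 4.5% over 25 years
print(f"${monthly_payment(250_000, 0.045, 25):,.2f} per month")
```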
Google Antigravity developer platform
Google also launched Google Antigravity, a new agentic development platform powered by Gemini 3 and built around the idea that developers need not spell out every line of code or manually debug every issue. It is a standalone integrated development environment (IDE) available for Mac, Windows, and Linux.
The environment allows the AI agent to operate across a user’s code editor and browser simultaneously. When a user gives it a high-level prompt, like asking it to build an app feature, the system creates a plan, generates subtasks, and executes the code. The agent also learns from its past work and integrates users’ feedback into its responses.
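Google has not published Antigravity’s internals, but the plan-execute-feedback loop it describes maps onto a familiar agent pattern. A deliberately simplified sketch, with hypothetical helpers standing in for the real system:

```python
# Deliberately simplified agent loop in the spirit described above.
# plan_tasks() and run_task() are hypothetical stand-ins, not Antigravity's API.
def plan_tasks(prompt: str) -> list[str]:
    # A real agent would ask the model to draft a plan for the prompt.
    return ["scaffold the UI component", "wire up the data layer", "run a smoke test"]

def run_task(task: str) -> str:
    # A real agent would edit code and drive a browser to complete the task.
    return f"done: {task}"

def agent(prompt: str) -> list[str]:
    results = []
    for task in plan_tasks(prompt):    # break the high-level prompt into subtasks
        results.append(run_task(task))
        # In practice, user feedback and test results would be folded back
        # into the plan here before the next step runs.
    return results

print(agent("Add a dark-mode toggle to the settings page"))
```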
Gemini 3 and Antigravity are pivotal for Google, transforming the company’s flagship model into a foundation for agentic intelligence. The new era reflects Google’s belief that AI can be operational and active rather than just a response tool, keeping pace with the evolving nature of artificial intelligence.