It’s been over a week since Google launched Antigravity alongside Gemini 3.
Expectations were super high:
- A fork of the beloved VS Code, so switching would be easy
- Free Gemini 3 access (what could be bad about that?)
- Killer AI features (browser, Nano Banana integration, a planning-and-reporting-first approach)
I was super hyped about this project from the very beginning.
But did Google really deliver?
First days, first problems
The launch day was super exciting, especially since the marketing machine behind the Gemini 3 release was huge and the new agentic IDE came as a surprise. With such enormous buzz, it was obvious there would be launch problems. And there were, a lot of them.
People started complaining about:
- Hitting Gemini 3 Pro usage limits on the first request
- Thinking loops
- Slow performance
- Poor model selection
But have those problems been fixed?
A week of testing
Let’s be honest: it’s hard to test AI tools quickly, especially when new ones launch almost every week. Still, I’ve used this tool a lot over the last few days and wanted to share my thoughts.
Model selection
You can still basically pick from three models:
- Gemini 3 (Pro/Low)
- Claude 4.5 Sonnet (w/ or w/out Thinking)
- GPT-OSS 120B
It’s definitely good that other models are available. Relying solely on Gemini 3 is risky even now: the model often returns an error or informs you that you’ve reached the usage limit, so you need to switch.
I also support having 3 different models:
- one big LLM for hard tasks (Gemini 3)
- one medium LLM for daily coding (Claude 4.5)
- one tiny & cheap model for small fixes, moving code around, etc. (GPT-OSS)
I can also totally understand that Google is somehow cooperating with Anthropic (there are even rumors about a potential acquisition), that they wanted to host an open-source model, and that they didn’t want to host one from China.
But being unable to pick Gemini 2.5 is really weird to me.
If Antigravity wants to be the leader, it must be as quick as Cursor at adding new models.
Agentic coding
We need to remember that agentic coding tools don’t just send raw requests to LLM APIs; they add an extra layer, an AI agent that controls the software craftsmanship process.
I love Gemini 3, but at the moment Google Antigravity’s coding agent is a bit worse than Cursor’s.
For harder tasks, thinking loops are still an issue.
Of course, quality matters more than speed, but my time is worth something too, so I end up using Gemini 3 elsewhere (Cursor or Gemini CLI) and paying for it.
However, the tool is still free-only with no pricing announced, so it’s obvious it will try to limit resource usage as much as it can. Still, I’d have preferred a paid version from day one, especially given how fast AI is growing and how willing people are to pay to test new stuff.
One drawback of Antigravity’s agentic coding that I won’t excuse with the free plan is how hard tool calls are to work with. If you deny a tool run, execution stops. I’ve had a few cases where the agent tried to view my .env file via terminal commands and stopped working entirely after I rejected access.
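A more graceful design would feed the rejection back to the model as an observation instead of halting the run, letting the agent pick another approach. Here is a minimal sketch of that idea; everything in it (the `run_agent` loop, the toy model) is hypothetical illustration, not Antigravity’s actual code:

```python
# Sketch (assumed design, not Antigravity's implementation) of an agent loop
# that survives a denied tool call by recording the denial in the history
# rather than stopping execution.

def run_agent(model, approve):
    """model: callable taking the history, returning ("tool", call) or ("done", answer).
    approve: callable deciding whether a proposed tool call may run."""
    history = []
    while True:
        kind, payload = model(history)
        if kind == "done":
            return payload
        if approve(payload):
            history.append(("tool_result", f"{payload} executed"))
        else:
            # Feed the rejection back as an observation instead of halting.
            history.append(("tool_denied", payload))

# Toy model: first tries to read .env, falls back to asking the user if denied.
def toy_model(history):
    if not history:
        return ("tool", "cat .env")
    if history[-1][0] == "tool_denied":
        return ("done", "asked user for the secret instead")
    return ("done", "read secret from .env")

# Denying access to .env no longer kills the run; the agent recovers.
print(run_agent(toy_model, approve=lambda call: ".env" not in call))
```

With this shape, a rejected `.env` read becomes just another signal the model can react to, which is exactly the behavior I was missing.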
Summary
Google’s Antigravity is a tool with great potential, but it will have a really hard time competing with Cursor or CLI tools, even Gemini CLI.
The most important thing will be the pricing and how much the plan lets you do with the most powerful Google models.
If it ends up at the same price as Cursor but with more generous limits for Gemini 3 (or future models), it could be disruptive.
For now, though, I’m back to coding with Cursor, plus API-based pricing with CLI tools once I hit Cursor’s limits.