FTC Disclosure: TechSifted uses affiliate links. We may earn a commission if you click and buy, at no extra cost to you. Our editorial opinions are our own.
Two things happened with Cursor in the last two weeks, and they matter separately.
First: Cursor 3 shipped on April 2. It's a real version bump, not just a feature release. The product is different now. Second: xAI struck a deal to rent GPU compute to Cursor for training Composer 2.5, the next underlying coding model. That's a backend story, but it has implications for where Cursor goes from here.
Let me take these one at a time.
What's New in Cursor 3
The big shift: Cursor 3 is agent-first. The previous versions were, at their core, a smart IDE with AI features bolted on. Cursor 3 reorganizes around an Agents Window where you launch parallel agent tasks across local, cloud, worktree, and remote SSH environments, and the editor exists to support that, not the other way around.
That's a meaningful philosophical change. If you're using Cursor 3 the same way you used Cursor 2, you're probably underusing it.
New things worth knowing:
Design Mode. You can now give Cursor visual mockups or reference screenshots and have it generate code from the visual spec. Not perfect, but genuinely useful for frontend work when you have a Figma comp and you're sick of translating it manually.
Best-of-N model comparison. You can run the same prompt against multiple models and compare results. Actually useful when you're evaluating whether to change your default model for a specific task type.
Cloud-local agent handoff. Agents can start in the cloud and continue locally (or vice versa). For longer-running tasks that you want to kick off remotely and then pick up at your desk, this is the feature.
Composer 2. The underlying coding model. Cursor built this on top of Moonshot AI's Kimi K2.5 with substantial continued pre-training and reinforcement learning on their own data. It's noticeably better than the previous Cursor models on multi-file refactors and maintaining coherent context across large codebases.
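Of the features above, best-of-N is the easiest to reason about in code. The sketch below is purely illustrative: it shows the general pattern (fan one prompt out to several models concurrently, collect every result for side-by-side comparison), not Cursor's actual implementation or API. The model names and the `query_model` function are hypothetical stand-ins.

```python
import concurrent.futures

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real model call; returns a canned response."""
    return f"[{model}] draft for: {prompt}"

def best_of_n(prompt: str, models: list[str]) -> dict[str, str]:
    """Send one prompt to every model in parallel and collect all outputs."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(models)) as pool:
        # Map each submitted future back to the model name that produced it.
        futures = {pool.submit(query_model, m, prompt): m for m in models}
        return {futures[f]: f.result()
                for f in concurrent.futures.as_completed(futures)}

results = best_of_n("refactor parse_config()", ["model-a", "model-b", "model-c"])
for name in sorted(results):
    print(name, "->", results[name])
```

The point of the pattern is that the prompt is held constant while the model varies, which is what makes the outputs directly comparable when deciding on a default model for a task type.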
The ARR numbers confirm that whatever Cursor has been doing is working. They crossed $2 billion in annual recurring revenue in February, double the $1 billion they reported in November 2025. About 60% of that is now coming from corporate customers. Fifty thousand paying teams. Seven million monthly active users.
These aren't startup metrics anymore.
The xAI Compute Deal
Around April 15, it came out that xAI agreed to rent GPU capacity to Cursor for training Composer 2.5. Specifically: access to a chunk of xAI's cluster of roughly 200,000 NVIDIA H100/H200 GPUs to run training for the next model generation.
This is interesting for a few reasons.
For Cursor, it solves a practical problem. Training a frontier coding model requires serious compute, and the usual path, contracting with AWS, Azure, or Google, is both expensive and entangled with providers who sell competing AI coding tools. Getting compute from xAI avoids that awkward dynamic, and the scale is significant.
For xAI, it's a business model. Elon Musk's AI company has been building out compute infrastructure aggressively (the Memphis cluster, the expansion), and renting capacity to model developers is a real revenue stream. This isn't charity; xAI is monetizing its GPU investment.
What this means for Cursor users is less direct. You won't notice xAI involvement in how Cursor behaves today; Composer 2 is still the model running. But Composer 2.5, trained on xAI's infrastructure with whatever data advantages that brings, is the thing to watch. Cursor's been good at coding models. If the next one is meaningfully better, that matters.
One thing it doesn't mean: Grok is not replacing Cursor's models. Some early coverage implied a Grok integration. That's not what's happening. xAI is providing compute, not model access.
Where Cursor Now Stands
The coding tool market has reshuffled significantly in early 2026. GitHub Copilot is fighting on the enterprise side. Windsurf (from Codeium) has been gaining ground with developers who want a lighter-weight option. Claude Code, Anthropic's CLI agent, is a serious contender for anyone comfortable in the terminal.
Cursor sits in an interesting position: premium product, fast product velocity, strong enterprise traction, and now better-resourced compute for model training.
The $20/month Pro and $40/month Business pricing hasn't changed. But at $2B ARR, Cursor is firmly in "durable business" territory, not a bet-on-a-startup gamble. That matters for enterprises making multi-year workflow commitments.
Should You Upgrade to Cursor 3?
If you're on the Pro tier, it's automatic: you're already on it. No separate purchase.
The agent-first workflow is the thing most worth actually learning. The Agents Window is where the new capability lives, and if you're still using Cursor 3 the same way you used Cursor 2 (tab completion, single-file edits, Cmd+K for quick changes), you're not getting what you paid for.
The parallel agent execution in particular is worth spending an hour with. Kick off a refactor in one agent, kick off test generation in another, let them run. It sounds like overkill for solo developers, but once you've run two agents simultaneously and seen the time savings, you don't want to go back.
Quick Take
Cursor 3 is the real thing, not a minor update. The agent-first interface is a genuine workflow shift. The xAI compute deal is backend news that signals Cursor is investing seriously in its model roadmap. For developers who've been watching this space, Cursor's position at the top of the market just got stronger.
For a comparison with other tools in the space, see our Claude Code vs Cursor comparison and Cursor vs GitHub Copilot breakdown.