
The vibe coding tool Cursor, from startup Anysphere, has introduced Composer, its first in-house, proprietary coding large language model (LLM), as part of its Cursor 2.0 platform update.
Composer is designed to execute coding tasks quickly and precisely in production-scale environments, representing a new step in AI-assisted programming. It is already being used by Cursor’s own engineering staff in day-to-day development, which the company cites as a sign of maturity and stability.
According to Cursor, Composer completes most interactions in under 30 seconds while maintaining a high level of reasoning ability across large and complex codebases.
The model is described as four times faster than similarly intelligent systems and is trained for “agentic” workflows, in which autonomous coding agents plan, write, test, and review code collaboratively.
Previously, Cursor supported “vibe coding” (using AI to write or complete code from natural-language instructions, even for someone untrained in development) atop other leading proprietary LLMs from the likes of OpenAI, Anthropic, Google, and xAI. Those options remain available to users.
Benchmark Results
Composer’s capabilities are benchmarked using “Cursor Bench,” an internal evaluation suite derived from real developer agent requests. The benchmark measures not just correctness, but also the model’s adherence to existing abstractions, style conventions, and engineering practices.
On this benchmark, Composer achieves frontier-level coding intelligence while generating at 250 tokens per second, roughly twice as fast as leading fast-inference models and four times faster than comparable frontier systems.
Cursor’s published comparison groups models into several categories: “Best Open” (e.g., Qwen Coder, GLM 4.6), “Fast Frontier” (Haiku 4.5, Gemini Flash 2.5), “Frontier 7/2025” (the strongest model available midyear), and “Best Frontier” (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed among all tested classes.
A Model Built with Reinforcement Learning and Mixture-of-Experts Architecture
Research scientist Sasha Rush of Cursor offered insight into the model’s development in posts on the social network X, describing Composer as a reinforcement-learned (RL) mixture-of-experts (MoE) model:
“We used RL to train a big MoE model to be really good at real-world coding, and also very fast.”
Rush explained that the team co-designed both Composer and the Cursor environment so the model could operate efficiently at production scale:
“Unlike other ML systems, you can’t abstract much from the full-scale system. We co-designed this project and Cursor together in order to allow running the agent at the necessary scale.”
Composer was trained on real software engineering tasks rather than static datasets. During training, the model operated within full codebases, using a suite of production tools (including file editing, semantic search, and terminal commands) to solve complex engineering problems. Each training iteration involved solving a concrete problem, such as producing a code edit, drafting a plan, or generating a targeted explanation.
The reinforcement loop optimized both correctness and efficiency. Composer learned to make effective tool choices, use parallelism, and avoid unnecessary or speculative responses. Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches autonomously.
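Cursor has not published its training code, but the loop described above follows a familiar agentic RL pattern: the agent chooses a tool, acts inside a sandboxed codebase, and is rewarded for both correctness and efficiency. The Python sketch below is purely illustrative, with a toy environment and a random tool choice standing in for the real system; every name in it is an assumption.

```python
# Illustrative sketch of the agentic RL loop described above; a toy environment
# and random tool choice stand in for Cursor's real system. ToyCodeEnv, rollout,
# and the reward shaping are assumptions, not Cursor's code.
import random

TOOLS = ["edit_file", "semantic_search", "run_terminal"]  # tools named in the article

class ToyCodeEnv:
    """Stand-in for one sandboxed codebase the agent acts on during training."""
    def reset(self, task):
        self.edits = 0
        return f"task: {task}"

    def step(self, tool, args):
        if tool == "edit_file":
            self.edits += 1
        # A real observation would carry diffs, search hits, or test output.
        done = self.edits >= 2
        return f"{tool} ok", done

    def tests_pass(self):
        return self.edits >= 2

def rollout(env, task, max_steps=16):
    obs = env.reset(task)
    steps = 0
    for _ in range(max_steps):
        tool = random.choice(TOOLS)   # a trained policy would choose from context
        obs, done = env.step(tool, {})
        steps += 1
        if done:
            break
    # Reward both correctness (tests pass) and efficiency (few steps).
    return (1.0 if env.tests_pass() else 0.0) - 0.01 * steps

print(rollout(ToyCodeEnv(), "fix the failing unit test"))
```

In a production setup the random choice would be replaced by the model’s own action distribution, and the reward would drive a policy-gradient style update.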
This design lets Composer work within the same runtime context as the end user, making it more aligned with real-world coding scenarios: handling version control, dependency management, and iterative testing.
From Prototype to Production
Composer’s development followed an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency inference for coding tasks.
“Cheetah was the v0 of this model mainly to test speed,” Rush said on X. “Our metrics say it [Composer] is the same speed, but much, much smarter.”
Cheetah’s success at reducing latency helped Cursor identify speed as a key factor in developer trust and usability.
Composer maintains that responsiveness while significantly improving reasoning and task generalization.
Developers who used Cheetah during early testing noted that its speed changed how they worked. One user commented that it was “so fast that I can stay in the loop when working with it.”
Composer retains that speed but extends its capability to multi-step coding, refactoring, and testing tasks.
Integration with Cursor 2.0
Composer is fully integrated into Cursor 2.0, a major update to the company’s agentic development environment.
The platform introduces a multi-agent interface, allowing up to eight agents to run in parallel, each in an isolated workspace using git worktrees or remote machines.
Within this system, Composer can power one or more of these agents, performing tasks independently or collaboratively. Developers can review the results of concurrent agent runs and select the best output.
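Cursor has not detailed the workspace mechanics beyond naming git worktrees and remote machines, but worktrees are a standard way to give each agent its own checkout of a shared repository. The sketch below is a minimal illustration under that assumption; the paths, branch names, and helper function are hypothetical.

```python
# Minimal illustration of isolating parallel agents with git worktrees.
# Paths, branch names, and the agent count are assumptions, not Cursor internals.
import subprocess
from pathlib import Path

REPO = Path("/path/to/repo")                  # an existing local clone
WORKSPACES = Path("/tmp/agent-workspaces")

def create_agent_workspace(agent_id: int) -> Path:
    """Give one agent its own working tree and branch off the shared repository."""
    workdir = WORKSPACES / f"agent-{agent_id}"
    subprocess.run(
        ["git", "worktree", "add", "-b", f"agent/{agent_id}", str(workdir)],
        cwd=REPO, check=True,
    )
    return workdir

# Up to eight agents, matching Cursor 2.0's multi-agent interface.
workspaces = [create_agent_workspace(i) for i in range(8)]
# Each agent can now edit, test, and commit in its own directory without touching
# the others; a reviewer can later diff the branches and keep the best result.
```

The remote-machine option would follow the same idea, with each agent working in a checkout on its own VM instead of a local worktree.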
Cursor 2.0 also includes supporting features that enhance Composer’s effectiveness:
- In-Editor Browser (GA) – lets agents run and test their code directly inside the IDE, forwarding DOM information to the model.
- Improved Code Review – aggregates diffs across multiple files for faster inspection of model-generated changes.
- Sandboxed Terminals (GA) – isolate agent-run shell commands for secure local execution.
- Voice Mode – adds speech-to-text controls for starting and managing agent sessions.
While these platform updates broaden the overall Cursor experience, Composer is positioned as the technical core that enables fast, reliable agentic coding.
Infrastructure and Training Methods
To train Composer at scale, Cursor built a custom reinforcement learning infrastructure combining PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs.
The team developed specialized MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead.
This configuration lets Cursor train models natively at low precision without post-training quantization, improving both inference speed and efficiency.
Composer’s training relied on hundreds of thousands of concurrent sandboxed environments, each a self-contained coding workspace, running in the cloud. The company adapted its Background Agents infrastructure to schedule these virtual machines dynamically, supporting the bursty nature of large RL runs.
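Cursor has not released this infrastructure, so the sketch below only illustrates the general PyTorch-plus-Ray shape the company describes: asynchronous rollout workers feed a central learner so that slow environments never stall training. The class names, worker count, and dummy data are all assumptions.

```python
# Hypothetical PyTorch + Ray sketch of asynchronous rollouts feeding one learner;
# illustrative only, not Cursor's infrastructure. Requires `pip install ray torch`.
import ray
import torch

ray.init()

@ray.remote
class RolloutWorker:
    """Runs one sandboxed coding environment and returns a trajectory batch."""
    def collect(self, weights):
        # A real worker would run the agent against a codebase with its tools;
        # a random tensor stands in for the collected trajectory here.
        return torch.randn(16, 8)

@ray.remote
class Learner:
    """Applies model updates; the real learner would shard an MoE across many GPUs."""
    def __init__(self):
        self.step = 0

    def update(self, batch):
        self.step += 1            # placeholder for an optimizer step on the batch
        return self.step

workers = [RolloutWorker.remote() for _ in range(4)]
learner = Learner.remote()
weights = None                    # broadcasting current policy weights is omitted

pending = [w.collect.remote(weights) for w in workers]
for _ in range(8):
    # Take whichever rollout finishes first so the run stays asynchronous.
    done, pending = ray.wait(pending, num_returns=1)
    batch = ray.get(done[0])
    step = ray.get(learner.update.remote(batch))
    pending.append(workers[step % len(workers)].collect.remote(weights))
```

The MXFP8 kernels and hybrid sharded data parallelism mentioned above would sit inside the learner’s update step; they are out of scope for a sketch this small.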
Enterprise Use
Composer’s performance improvements are supported by infrastructure-level changes across Cursor’s code intelligence stack.
The company has optimized its Language Server Protocol (LSP) integrations for faster diagnostics and navigation, especially in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or generates multi-file updates.
Enterprise users gain administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor’s Teams and Enterprise tiers also support pooled model usage, SAML/OIDC authentication, and analytics for monitoring agent performance across organizations.
Pricing for individual users ranges from the Free (Hobby) tier to Ultra ($200/month), with expanded usage limits for Pro+ and Ultra subscribers.
Enterprise pricing starts at $40 per user per month for Teams, with enterprise contracts offering custom usage and compliance options.
Composer’s Role in the Evolving AI Coding Landscape
Composer’s focus on speed, reinforcement learning, and integration with live coding workflows differentiates it from other AI development assistants such as GitHub Copilot or Replit’s Agent.
Rather than serving as a passive suggestion engine, Composer is designed for continuous, agent-driven collaboration, where multiple autonomous systems interact directly with a project’s codebase.
This model-level specialization (training the AI inside the environment it will actually operate in) represents a significant step toward practical, autonomous software development. Composer is not trained solely on text data or static code, but within a dynamic IDE that mirrors production conditions.
Rush described this approach as essential to achieving real-world reliability: the model learns not just how to generate code, but how to integrate, test, and improve it in context.
What It Means for Enterprise Devs and Vibe Coding
With Composer, Cursor is introducing more than a fast model: it is deploying an AI system optimized for real-world use, built to operate within the same tools developers already rely on.
The combination of reinforcement learning, mixture-of-experts design, and tight product integration gives Composer a practical edge in speed and responsiveness that sets it apart from general-purpose language models.
While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the core innovation that makes those workflows viable.
It is the first coding model built specifically for agentic, production-level coding, and an early glimpse of what everyday programming might look like when human developers and autonomous models share the same workspace.
