
Nous Research, the San Francisco-based artificial intelligence startup, on Tuesday released an open-source mathematical reasoning system called Nomos 1 that achieved near-elite human performance on this year's William Lowell Putnam Mathematical Competition, one of the most prestigious and notoriously difficult undergraduate math contests in the world.
The Putnam is known for its difficulty: while a perfect score is 120, this year's top score was 90 and the median was just 2. Nomos 1, by contrast, scored 87 points, a result that would have ranked second out of 3,988 participants in the 2024 competition, according to the company.
The release marks an inflection point in the rapidly accelerating race to build AI systems capable of sophisticated mathematical reasoning. Unlike the massive, compute-intensive models deployed by major technology companies, Nomos 1 achieves its results with a relatively compact architecture: 30 billion parameters, with roughly 3 billion active at any given time, using a mixture-of-experts design based on Alibaba's Qwen3 model.
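How a model can carry 30 billion parameters while activating only a tenth of them comes down to routing: a small gating network picks a few expert sub-networks per token, and the rest of the weights sit idle. The sketch below is a minimal, generic illustration of top-k mixture-of-experts routing in PyTorch; the class name, layer sizes, and expert counts are invented for illustration and are not Qwen3's actual architecture.

```python
# Minimal, generic sketch of top-k mixture-of-experts routing.
# Illustrative only: sizes and names are made up, not Qwen3's design.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                            # x: (tokens, d_model)
        scores = self.router(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        # Only the top_k experts chosen per token ever run, so most of the
        # model's parameters stay idle on any given forward pass.
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out
```

With eight experts and a top-2 router, only a quarter of the expert weights touch any given token; scaled up, the same principle is what lets a 30-billion-parameter model run with roughly 3 billion active.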
"This score would rank #2/3988 in 2024 and marks our first step with Hillclimb AI towards creating a SOTA AI mathematician," Nous Research announced on social media Tuesday.
The same base model scored 24 points without Nous Research's specialized training
Perhaps most striking is the gap between Nomos 1 and its base model. When Nous Research ran the same Qwen3-30B-A3B-Thinking-2507 model through an identical testing harness, it scored just 24 out of 120, a result that underscores the critical importance of post-training optimization and specialized reasoning techniques over raw model scale.
"Nomos 1 achieved an 87/120 with 8 perfect scores," the company said, noting that the performance difference "is largely due to post-training and data quality rather than the harness."
The results were verified through blind grading by a human expert who had previously finished in the top 200 on the Putnam. Nous Research provided the anonymized submissions to the grader, then published the full set of de-anonymized files and the runbooks used to generate them on GitHub.
Why the Putnam competition is considered the ultimate test of mathematical reasoning
The William Lowell Putnam Mathematical Competition is an annual mathematics competition for undergraduate students enrolled at institutions of higher learning in the United States and Canada. It is widely considered the most prestigious university-level mathematics competition in the world.
The notoriously brutal exam is more of a mathematical sporting event than an academic test. It consists of two three-hour sessions separated by a two-hour break, with 12 questions in all, six per session. Each question is worth 10 points, for a total of 120.
Putnam questions are not the kind that appear on regular exams or in textbooks. They are more like puzzles than calculations, often requiring students to find unusual ways to represent a problem before a solution can unfold.
Last year, nearly 4,000 students across the continent wrote the Putnam. Sixty-one percent scored three points or fewer, according to the Mathematical Association of America, which organizes the competition. The top score was 90 out of 120.
Many Putnam Fellows have gone on to become distinguished researchers in mathematics and other fields, including three Fields Medalists (John Milnor, David Mumford, and Daniel Quillen) and two Nobel laureates in physics (Richard Feynman and Kenneth Wilson).
Inside the two-phase reasoning system that powers Nomos 1's mathematical breakthroughs
Nomos 1 is a specialization of Qwen's Qwen3-30B-A3B-Thinking model, optimized for mathematical problem-solving and proof-writing in natural language. The system was developed in collaboration with Hillclimb AI.
What distinguishes Nomos 1 from simple model inference is its reasoning harness, an open-source framework that orchestrates how the model approaches and solves problems. The harness operates in two distinct phases within a three-hour time limit, mirroring the actual Putnam competition structure.
In the solving phase, parallel workers tackle problems simultaneously using a priority-based system. Each worker picks a problem, generates a submission, then scores its own work on a scale of 1 to 7. Problems with the fewest perfect scores receive priority, ensuring the system focuses its compute on the hardest challenges. The process continues until either every problem has reached a target number of self-critiqued perfect scores or time runs out.
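In rough pseudocode, that scheduling logic might look like the sketch below. The actual harness is open source on GitHub; the helper functions and the TARGET_PERFECT threshold here are placeholders standing in for model calls and the harness's real configuration.

```python
# Simplified sketch of the solving-phase scheduler described above; the real
# harness is published on GitHub. Helper bodies are placeholders for model
# calls, and TARGET_PERFECT is a hypothetical threshold.
import random
import time

TARGET_PERFECT = 3  # hypothetical target count of self-critiqued 7/7 scores

def generate_submission(problem):
    return {"problem": problem}      # placeholder for a model-written proof

def self_score(submission):
    return random.randint(1, 7)      # placeholder for the model's self-critique

def worker_loop(problems, submissions, deadline):
    """One of many parallel workers: repeatedly attempt the least-solved problem."""
    while time.time() < deadline:
        perfect = {p: sum(1 for s in submissions[p] if s["score"] == 7)
                   for p in problems}
        if all(n >= TARGET_PERFECT for n in perfect.values()):
            break                                  # every problem has enough 7s
        problem = min(problems, key=perfect.get)   # fewest perfect scores first
        attempt = generate_submission(problem)
        attempt["score"] = self_score(attempt)     # self-assess on a 1-7 scale
        submissions[problem].append(attempt)
```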
The finalization phase begins 15 minutes before the time limit (or at the 50% mark for shorter runs) and employs a two-stage selection process. First, a consolidation step groups submissions by conclusion and attempts to identify the correct group, which, importantly, is not necessarily the majority group. Then a pairwise, single-elimination tournament determines the final submission for each problem.
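Under the same caveats, the two finalization stages could be sketched as follows, with conclusion_of, pick_better_group, and judge_pair as hypothetical placeholders for the model acting as its own referee:

```python
# Rough sketch of the two-stage finalization described above; all judge
# functions are placeholders, not the harness's actual interfaces.
from collections import defaultdict

def conclusion_of(submission):
    return submission.get("conclusion")   # placeholder conclusion extraction

def pick_better_group(groups):
    # Placeholder: the real harness asks the model to identify the *correct*
    # group, not necessarily the largest; here we take the largest to keep
    # the sketch runnable.
    return max(groups, key=lambda c: len(groups[c]))

def judge_pair(a, b):
    return a                              # placeholder head-to-head comparison

def finalize(submissions):
    # Stage 1: consolidation -- group candidate proofs by their conclusion.
    groups = defaultdict(list)
    for sub in submissions:
        groups[conclusion_of(sub)].append(sub)
    contenders = groups[pick_better_group(groups)]

    # Stage 2: single-elimination pairwise tournament within the chosen group.
    while len(contenders) > 1:
        next_round = [judge_pair(contenders[i], contenders[i + 1])
                      for i in range(0, len(contenders) - 1, 2)]
        if len(contenders) % 2:           # odd one out gets a bye
            next_round.append(contenders[-1])
        contenders = next_round
    return contenders[0]                  # the final submission for this problem
```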
"Our open source reasoning system consists of a solving phase, where workers attempt a least-solved problem and self-assess, followed by a finalization phase, which consolidates submissions to choose a final submission for each problem," Nous Research explained.
How Nomos 1 compares to mathematical AI systems from DeepSeek, Google, and OpenAI
The Nomos 1 results arrive amid a flurry of advances in mathematical reasoning AI. DeepSeek's model, DeepSeekMath-V2, scored 118 out of 120 points on questions from the 2024 William Lowell Putnam Mathematical Competition, beating the top human score of 90. The model also performed at the level of gold-medal winners in the International Mathematical Olympiad.
This year, Google's advanced Gemini model operated end-to-end in natural language, producing rigorous mathematical proofs directly from the official problem descriptions, all within the 4.5-hour competition time limit. Google achieved this year's result using an advanced version of Gemini Deep Think.
What makes Nomos 1's achievement notable isn't raw performance (it trails DeepSeek's 118/120) but rather its accessibility and efficiency. At 30 billion parameters with only 3 billion active, the model can run on consumer-grade hardware, a stark contrast to the massive compute clusters required by frontier models from OpenAI and Google.
Hermes 4.3 arrived just six days earlier, trained on a decentralized blockchain network
The Nomos 1 announcement follows closely on the heels of Nous Research's December 3 release of Hermes 4.3, a general-purpose language model that marked another significant milestone for the company.
Hermes 4.3, based on ByteDance's Seed-OSS-36B-Base model, is the first production model that Nous Research trained entirely on its Psyche network, a distributed training infrastructure that uses a novel optimizer called DisTrO to coordinate training across nodes spread among data centers over the open internet, secured by consensus on the Solana blockchain.
The company trained Hermes 4.3 both through traditional centralized methods and on the Psyche network, specifically to verify that distributed training could match or exceed centralized performance for production workloads. The Psyche-trained version outperformed the centralized version across a series of downstream tasks, the company reported.
"The training run proved stable throughout, averaging 144k tokens/second spread across 24 Psyche nodes," Nous Research said. "Using DisTrO's overlapped collective strategy, the entirety of the P2P communications were hidden by the training time, effectively achieving equal throughput to traditional, centralized training."
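The announcement doesn't spell out DisTrO's mechanics, but hiding communication behind computation is a standard distributed-training technique: launch gradient exchanges asynchronously, keep computing, and wait on the transfers only when their results are needed. A generic PyTorch sketch of that overlap (not DisTrO itself, and assuming a process group has already been initialized) might look like this:

```python
# Generic illustration of overlapping gradient communication with compute,
# the principle behind "hidden" P2P traffic. This is NOT DisTrO itself, and
# it assumes torch.distributed.init_process_group() has already been called.
import torch
import torch.distributed as dist

def train_step_overlapped(model, batch, loss_fn, optimizer):
    loss = loss_fn(model(batch["x"]), batch["y"])
    loss.backward()
    # Launch non-blocking all-reduces so network transfer proceeds in the
    # background while the host keeps doing useful work.
    handles = [dist.all_reduce(p.grad, async_op=True)
               for p in model.parameters() if p.grad is not None]
    # ... other work (e.g., prefetching the next batch) can run here ...
    for h in handles:
        h.wait()             # block only once the overlapping compute is done
    optimizer.step()
    optimizer.zero_grad()
```

If the computation takes at least as long as the transfers, the communication cost effectively disappears from the wall clock, which is what the company means by throughput matching centralized training.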
Hermes 4.3 also achieved state-of-the-art results on RefusalBench, a new benchmark that measures a model's willingness to be helpful across a variety of scenarios commonly restricted by other models. The model answered 74.60% of RefusalBench questions in non-reasoning mode, surpassing its predecessor Hermes 4 70B (59.50%) and outperforming closed models including Grok 4 (51.30%) and Gemini 2.5 Pro (24.23%).
Small models with smart training are closing the gap with trillion-parameter giants
Together, the two releases in a single week signal Nous Research's strategic bet: that smaller, more efficient models with sophisticated post-training techniques and reasoning harnesses can compete with, and in some cases outperform, the massive models developed by better-funded competitors.
For enterprise decision-makers, the implications are significant. Mathematical reasoning capabilities have applications far beyond academic competitions: they are essential for formal verification, theorem proving, scientific modeling, cryptographic analysis, and any domain requiring rigorous logical deduction.
The open-source nature of both releases (Nomos 1 is available under the Apache 2.0 license on Hugging Face, with the full reasoning harness on GitHub) means organizations can deploy these capabilities on their own infrastructure without relying on API calls to major cloud providers.
"For the first time, anyone can run or access a state-of-the-art AI mathematician," one observer noted on social media. "This lowers the barrier to serious math research, proof verification, modeling complex systems, advanced reasoning work."
The key contributors to Nomos 1 include Roger Jin, who led the training; Jeffrey Quesnelle and Dakota Mahan, who built the infrastructure; Chen Guang, who advised; and Ryan Teknium and Jeffrey Quesnelle, who provided leadership. The model was developed with contributions from Hillclimb AI and a team of math specialists including Samuel Kim, Miron Yurkevich, and others.
The race to build AI mathematicians is accelerating faster than anyone predicted
The 86th Putnam Competition took place on Saturday, December 6, 2025, just three days before Nous Research released Nomos 1. The timing underscores how quickly the field is moving: companies are now releasing mathematical AI systems capable of near-elite human performance within days of the competitions they are designed to solve.
Competition in mathematical AI has intensified dramatically in recent months. In July, an advanced version of Google DeepMind's Gemini model and an experimental reasoning model from OpenAI both achieved gold-medal status at IMO 2025. DeepSeek's new model matched their performance, solving 5 out of 6 problems.
But the resource requirements for these frontier systems remain prohibitive for most organizations. OpenAI's o1-pro is estimated at over 1.8 trillion parameters; Google's Gemini 2.5 Pro likely exceeds 400 billion. Nomos 1, by contrast, achieves competitive results with a fraction of that footprint.
The gap between massive frontier models and efficient open-source alternatives is narrowing. And for organizations that need mathematical reasoning capabilities without the budget for hyperscale compute, that gap may have just closed enough to matter.
As one observer put it on social media: "This marks a significant jump for AI math models that are small enough to run on your laptop."
A laptop that can now outperform nearly 4,000 of the continent's best undergraduate mathematicians.
