
The Allen Institute for AI (Ai2) recently launched what it calls its strongest family of models yet, Olmo 3. But the company kept iterating on the models, expanding its reinforcement learning (RL) runs, to create Olmo 3.1.
The new Olmo 3.1 models focus on efficiency, transparency, and control for enterprises.
Ai2 updated two of the three versions of Olmo 3: Olmo 3.1 Think 32B, the flagship model optimized for advanced reasoning, and Olmo 3.1 Instruct 32B, designed for instruction-following, multi-turn dialogue, and tool use.
Olmo 3 has a third version, Olmo 3-Base, for programming, comprehension, and math. It also works well for continued fine-tuning.
Ai2 said that to upgrade Olmo 3 Think 32B to Olmo 3.1, its researchers extended its best RL run with a longer training schedule.
“After the original Olmo 3 release, we resumed our RL training run for Olmo 3 32B Think, training for an additional 21 days on 224 GPUs with extra epochs over our Dolci-Think-RL dataset,” Ai2 said in a blog post. “This yielded Olmo 3.1 32B Think, which brings substantial gains across math, reasoning, and instruction-following benchmarks: improvements of 5+ points on AIME, 4+ points on ZebraLogic, 4+ points on IFEval, and 20+ points on IFBench, alongside stronger performance on coding and complex multi-step tasks.”
To get to Olmo 3.1 Instruct, Ai2 said its researchers applied the recipe behind the smaller Instruct size, 7B, to the larger model.
Olmo 3.1 Instruct 32B is “optimized for chat, tool use, & multi-turn dialogue—making it a much more performant sibling of Olmo 3 Instruct 7B and ready for real-world applications,” Ai2 said in a post on X.
For now, the new checkpoints are available on the Ai2 Playground or Hugging Face, with API access coming soon.
Better performance on benchmarks
The Olmo 3.1 models performed well on benchmark tests, predictably beating the Olmo 3 models.
Olmo 3.1 Think outperformed Qwen 3 32B models on the AIME 2025 benchmark and performed close to Gemma 27B.
Olmo 3.1 Instruct performed strongly against its open-source peers, even beating models like Gemma 3 on the Math benchmark.
“As for Olmo 3.1 32B Instruct, it’s a larger-scale instruction-tuned model built for chat, tool use, and multi-turn dialogue. Olmo 3.1 32B Instruct is our most capable fully open chat model to date and — in our evaluations — the strongest fully open 32B-scale instruct model,” the company said.
Ai2 also upgraded its RL-Zero 7B models for math and coding. The company said on X that both models benefited from longer and more stable training runs.
Commitment to transparency and open source
Ai2 previously told VentureBeat that it designed the Olmo 3 family of models to offer enterprises and research labs more control over, and understanding of, the data and training that went into the model.
Organizations could add to the model’s data mix and retrain it so that it also learns from what’s been added.
This has long been a commitment for Ai2, which also offers a tool called OlmoTrace that tracks how LLM outputs match its training data.
“Together, Olmo 3.1 Think 32B and Olmo 3.1 Instruct 32B show that openness and performance can advance together. By extending the same model flow, we continue to improve capabilities while retaining end-to-end transparency over data, code, and training decisions,” Ai2 said.
