DeepCoder delivers top coding performance in efficient 14B open model

Last updated: April 11, 2025 3:07 am
Published April 11, 2025

Researchers at Together AI and Agentica have released DeepCoder-14B, a new coding model that delivers impressive performance comparable to leading proprietary models like OpenAI’s o3-mini.

Built on top of DeepSeek-R1, the model offers more flexibility to integrate high-performance code generation and reasoning capabilities into real-world applications. Importantly, the teams have fully open-sourced the model, its training data, code, logs and system optimizations, which can help researchers improve their work and accelerate progress.

Competitive coding capabilities in a smaller package

The research team’s experiments show that DeepCoder-14B performs strongly across several challenging coding benchmarks, including LiveCodeBench (LCB), Codeforces and HumanEval+.

“Our model demonstrates strong performance across all coding benchmarks… comparable to the performance of o3-mini (low) and o1,” the researchers write in a blog post that describes the model.

Interestingly, despite being trained primarily on coding tasks, the model shows improved mathematical reasoning, scoring 73.8% on the AIME 2024 benchmark, a 4.1% improvement over its base model (DeepSeek-R1-Distill-Qwen-14B). This suggests that reasoning skills developed through RL on code can generalize effectively to other domains.

DeepCoder-14B performance (Credit: Together AI)

Perhaps the most striking aspect is that it achieves this level of performance with only 14 billion parameters. This makes DeepCoder significantly smaller and potentially more efficient to run than many frontier models.

Innovations driving DeepCoder’s performance

While developing the model, the researchers solved some of the key challenges in training coding models using reinforcement learning (RL).


The first challenge was curating the training data. Reinforcement learning requires reliable reward signals indicating that the model’s output is correct. As the researchers point out, “Unlike math—where abundant high-quality, verifiable data is readily available on the Internet—the coding domain suffers from a relative scarcity of such data.”

To address this problem, the DeepCoder team implemented a strict pipeline that gathers examples from different datasets and filters them for validity, complexity and duplication. This process yielded 24,000 high-quality problems, providing a solid foundation for effective RL training.
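
The post describes these filters at a high level rather than as code, but the three criteria map naturally onto a single pass over the raw examples. The sketch below only illustrates that idea; the field names (“problem”, “tests”, “solution”) and the minimum-test threshold are assumptions, not the DeepCoder team’s actual pipeline.

```python
import hashlib

def curate(problems: list[dict], min_tests: int = 5) -> list[dict]:
    """Filter raw coding problems for validity, complexity and duplication."""
    seen_hashes = set()
    curated = []
    for p in problems:
        # Validity: skip entries missing a reference solution or unit tests.
        if not p.get("solution") or not p.get("tests"):
            continue
        # Complexity: require enough unit tests for a reliable reward signal.
        if len(p["tests"]) < min_tests:
            continue
        # Duplication: drop problems whose normalized statement was already kept.
        digest = hashlib.sha256(p["problem"].strip().lower().encode()).hexdigest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        curated.append(p)
    return curated
```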

The team also designed a straightforward reward function that only provides a positive signal if the generated code passes all sampled unit tests for the problem within a specific time limit. Combined with the high-quality training examples, this outcome-focused reward system prevents the model from learning tricks like printing memorized answers for public tests or optimizing for simple edge cases without solving the core problem.
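
A minimal sketch of what such an outcome-only reward can look like, assuming the unit tests arrive as strings and the caller supplies a sandboxed test runner; the `run_test` callable and the default time limit are placeholders for illustration, not a published DeepCoder or verl API.

```python
from typing import Callable

def outcome_reward(generated_code: str,
                   unit_tests: list[str],
                   run_test: Callable[[str, str, float], bool],
                   time_limit_s: float = 6.0) -> float:
    """Return 1.0 only if the candidate passes every sampled unit test in time.

    `run_test(code, test, timeout)` stands in for a sandboxed executor supplied
    by the caller; it is a hypothetical placeholder, not a real library call.
    """
    for test in unit_tests:
        if not run_test(generated_code, test, time_limit_s):
            return 0.0   # no partial credit: any failure zeroes the reward
    return 1.0           # positive signal only when all sampled tests pass
```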

The model’s core training algorithm is based on Group Relative Policy Optimization (GRPO), a reinforcement learning algorithm that proved very successful in DeepSeek-R1. However, the team made several modifications to the algorithm to make it more stable and allow the model to keep improving as training extends for a longer time.
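
For readers unfamiliar with GRPO, the core trick is that each prompt gets a group of sampled responses, and every response is scored against its own group’s mean reward, so no separate value network (critic) is needed. The snippet below sketches only that baseline idea; it does not include the stability modifications (GRPO+) the team describes.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: shape (group_size,), one scalar reward per sampled response.

    The group mean acts as the baseline, so no learned critic is required.
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: four completions for one coding prompt; only the last passes all tests.
advantages = group_relative_advantages(torch.tensor([0.0, 0.0, 0.0, 1.0]))
```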

GRPO+ allows DeepCoder-14B to continue training for longer durations without collapsing (Credit: Together AI)

Finally, the team extended the model’s context window iteratively, first training it on shorter reasoning sequences and gradually increasing the length. They also developed a filtering method to avoid penalizing the model when it created reasoning chains that exceeded the context limits while solving a hard prompt.

DeepCoder was trained on 32K-context problems but was also able to solve 64K tasks (Credit: Together AI)

The researchers explain the core idea: “To preserve long-context reasoning while enabling efficient training, we incorporated overlong filtering… This technique masks out truncated sequences during training so that models aren’t penalized for generating thoughtful but lengthy outputs that exceed the current context limit.”

Training was progressively scaled from a 16K to a 32K context window, and the resulting model could also solve problems that required up to 64K tokens.
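
A rough sketch, under stated assumptions, of how these two ideas fit together: the maximum context grows in stages (16K, then 32K here), and overlong filtering zeroes out the loss for responses that were truncated at the current limit instead of penalizing them. The tensor names and loss shape are illustrative, not the team’s actual training code.

```python
import torch

CONTEXT_STAGES = [16_384, 32_768]   # training stages; evaluation later probes up to 64K tokens

def overlong_mask(response_lengths: torch.Tensor, max_len: int) -> torch.Tensor:
    """1.0 for responses that finished within the window, 0.0 for truncated ones."""
    return (response_lengths < max_len).float()

def masked_policy_loss(per_sample_loss: torch.Tensor,
                       response_lengths: torch.Tensor,
                       max_len: int) -> torch.Tensor:
    # Truncated sequences contribute nothing to the gradient, rather than
    # being punished for running past the current context limit.
    mask = overlong_mask(response_lengths, max_len)
    return (per_sample_loss * mask).sum() / mask.sum().clamp(min=1.0)
```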

Optimizing long-context RL training

Training large models with RL, especially on tasks requiring long generated sequences like coding or complex reasoning, is computationally intensive and slow. A major bottleneck is the “sampling” step, where the model generates potentially thousands of tokens per example in the batch. Variations in response length mean some responses finish much later than others, leaving GPUs idle and slowing down the whole training loop.

To speed this up, the team developed verl-pipeline, an optimized extension of the open-source verl library for reinforcement learning from human feedback (RLHF). The key innovation, which they call “One-Off Pipelining,” rearranges response sampling and model updates to reduce the bottlenecks and accelerator idle time.

One-Off Pipelining (Credit: Together AI)

Their experiments showed that one-off pipelining provided up to a 2x speedup for coding RL tasks compared to baseline implementations. This optimization was crucial for training DeepCoder within a reasonable timeframe (2.5 weeks on 32 H100s) and is now open-sourced as part of verl-pipeline for the community to use and build upon.
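
As a rough mental model of the scheduling idea (not the actual verl-pipeline code), the sketch below overlaps rollout generation for the next iteration with the policy update on the current one, so the trainer consumes batches that lag the latest weights by one step; `sample_batch` and `train_step` are hypothetical placeholders.

```python
import queue
import threading

def pipelined_rl(sample_batch, train_step, num_iterations: int) -> None:
    """Overlap rollout sampling with policy updates to cut accelerator idle time.

    `sample_batch()` generates one batch of responses; `train_step(batch)` runs
    one policy update. Both are placeholders supplied by the caller.
    """
    batches: queue.Queue = queue.Queue(maxsize=1)

    def producer() -> None:
        for _ in range(num_iterations):
            batches.put(sample_batch())   # long-running generation on inference workers

    threading.Thread(target=producer, daemon=True).start()

    for _ in range(num_iterations):
        batch = batches.get()             # ready while the next batch is already sampling
        train_step(batch)                 # update uses rollouts that are one step "off"
```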


Enterprise impact

The researchers have made all of the artifacts for training and running DeepCoder-14B available on GitHub and Hugging Face under a permissive license.

“By fully sharing our dataset, code, and training recipe, we empower the community to reproduce our work and make RL training accessible to all,” the researchers write.

DeepCoder-14B powerfully illustrates a broader, accelerating trend in the AI landscape: the rise of highly capable yet efficient and openly accessible models.

For the enterprise world, this shift means more options and greater accessibility to advanced models. Cutting-edge performance is no longer solely the domain of hyperscalers or those willing to pay premium API fees. Models like DeepCoder can empower organizations of all sizes to leverage sophisticated code generation and reasoning, customize solutions to their specific needs, and securely deploy them within their environments.

This trend can lower the barrier to entry for AI adoption and foster a more competitive and innovative ecosystem, where progress is driven through open-source collaboration.

