Alibaba has launched a brand new AI coding model called Qwen3-Coder, built to handle complex software tasks using a large open-source model. The tool is part of Alibaba’s Qwen3 family and is being promoted as the company’s most advanced coding agent to date.
The model uses a Mixture of Experts (MoE) approach, activating 35 billion parameters out of a total of 480 billion, and supports up to 256,000 tokens of context. That figure can reportedly be stretched to 1 million using special extrapolation techniques. The company claims Qwen3-Coder has outperformed other open models on agentic tasks, including versions from Moonshot AI and DeepSeek.
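A little arithmetic puts those headline numbers in perspective. The sketch below uses only the figures reported above; nothing here is measured or official:

```python
# Back-of-the-envelope numbers for the reported Qwen3-Coder configuration.
# All figures come from Alibaba's public claims, not from measurement.

total_params = 480e9          # 480 billion total parameters (MoE)
active_params = 35e9          # 35 billion activated per forward pass
native_context = 256_000      # tokens supported natively
extended_context = 1_000_000  # tokens claimed via extrapolation methods

active_fraction = active_params / total_params
extension_factor = extended_context / native_context

print(f"Active per token: {active_fraction:.1%} of total weights")   # → 7.3%
print(f"Context extension: {extension_factor:.2f}x the native window")  # → 3.91x
```

In other words, only about 7% of the model’s weights do work on any given token, which is how MoE designs keep inference costs far below what a dense 480-billion-parameter model would demand.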
But not everyone sees this as good news. Jurgita Lapienyė, Chief Editor at Cybernews, warns that Qwen3-Coder may be more than just a helpful coding assistant: it could pose a real risk to global tech systems if adopted widely by Western developers.
A Trojan horse in open-source clothing?
Alibaba’s messaging around Qwen3-Coder has focused on its technical strength, comparing it to top-tier tools from OpenAI and Anthropic. But while benchmark scores and features draw attention, Lapienyė suggests they may also distract from the real issue: security.
It’s not that China is catching up in AI; that much is already known. The deeper concern is the hidden risk of using software generated by AI systems that are difficult to inspect or fully understand.
As Lapienyė put it, developers could be “sleepwalking into a future” where core systems are unknowingly built on vulnerable code. Tools like Qwen3-Coder may make life easier, but they could also introduce subtle weaknesses that go unnoticed.
This risk isn’t hypothetical. Cybernews researchers recently reviewed AI use across major US companies and found that 327 of the S&P 500 now publicly report using AI tools. In those firms alone, researchers identified nearly 1,000 AI-related vulnerabilities.
Adding another AI model, especially one developed under China’s strict national security laws, could introduce yet another layer of risk, one that is harder to control.
When code becomes a backdoor
Today’s developers lean heavily on AI tools to write code, fix bugs, and shape how applications are built. These systems are fast, helpful, and getting better every day.
But what if those same systems were trained to inject flaws? Not obvious bugs, but small, hard-to-spot issues that wouldn’t trigger alarms. A vulnerability that looks like a harmless design decision could go undetected for years.
That’s how supply chain attacks often begin. Past examples, like the SolarWinds incident, show how long-term infiltration can be carried out quietly and patiently. With enough access and context, an AI model could learn to plant similar issues, especially if it had exposure to millions of codebases.
It’s not just a theory. Under China’s National Intelligence Law, companies like Alibaba must cooperate with government requests, including those involving data and AI models. That shifts the conversation from technical performance to national security.
What happens to your code?
Another major issue is data exposure. When developers use tools like Qwen3-Coder to write or debug code, every piece of that interaction could reveal sensitive information.
That might include proprietary algorithms, security logic, or infrastructure design: exactly the kind of details that would be useful to a foreign state.
Though the model is open source, there is still a lot that users can’t see. The backend infrastructure, telemetry systems, and usage-tracking methods may not be transparent. That makes it hard to know where data goes or what the model might remember over time.
Autonomy without oversight
Alibaba has also focused on agentic AI: models that can act more independently than standard assistants. These tools don’t just suggest lines of code. They can be assigned full tasks, operate with minimal input, and make decisions on their own.
That might sound efficient, but it also raises red flags. A fully autonomous coding agent that can scan entire codebases and make changes could become dangerous in the wrong hands.
Imagine an agent that can understand a company’s system defences and craft tailored attacks to exploit them. The same skill set that helps developers move faster could be repurposed by attackers to move faster still.
Regulation still isn’t ready
Despite these risks, current regulations don’t address tools like Qwen3-Coder in any meaningful way. The US government has spent years debating data privacy concerns tied to apps like TikTok, but there is little public oversight of foreign-developed AI tools.
Groups like the Committee on Foreign Investment in the US (CFIUS) review company acquisitions, but no comparable process exists for reviewing AI models that might pose national security risks.
President Biden’s executive order on AI focuses mainly on homegrown models and general safety practices. It leaves out concerns about imported tools that could be embedded in sensitive environments like healthcare, finance, or national infrastructure.
AI tools capable of writing or altering code should be treated with the same seriousness as software supply chain threats. That means setting clear guidelines for where and how they can be used.
What should happen next?
To reduce risk, organisations dealing with sensitive systems should pause before integrating Qwen3-Coder, or any foreign-developed agentic AI, into their workflows. If you wouldn’t invite someone you don’t trust to look at your source code, why let their AI rewrite it?
Security tools also need to catch up. Static analysis software may not detect complex backdoors or subtle logic issues crafted by AI. The industry needs new tools designed specifically to flag and test AI-generated code for suspicious patterns.
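As a toy illustration of the kind of pattern flagging such tooling might start from, here is a minimal sketch. The rule names and regular expressions below are invented for this example and are nowhere near a real security scanner, which would need data-flow analysis rather than line matching:

```python
import re

# Toy scanner: flags a few surface patterns often worth a second look in
# generated code. Illustrative only; real tools analyse data flow, not lines.
SUSPICIOUS_PATTERNS = {
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "hard-coded credential": re.compile(r"(password|api_key)\s*=\s*['\"]"),
    "outbound network call": re.compile(r"urllib\.request|requests\.(get|post)"),
}

def flag_suspicious(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for lines matching any rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

snippet = 'api_key = "sk-123"\nresult = eval(user_input)\n'
print(flag_suspicious(snippet))
# → [(1, 'hard-coded credential'), (2, 'dynamic code execution')]
```

The hard part, and the reason new tooling is needed, is that an AI-planted flaw would be designed precisely not to match any known pattern like these.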
Finally, developers, tech leaders, and regulators must understand that code-generating AI isn’t neutral. These systems have power, both as helpful tools and as potential threats. The same features that make them useful can also make them dangerous.
Lapienyė called Qwen3-Coder “a potential Trojan horse,” and the metaphor fits. It’s not just about productivity. It’s about who’s inside the gates.
Not everyone agrees on what matters
Wang Jian, the founder of Alibaba Cloud, sees things differently. In an interview with Bloomberg, he said innovation isn’t about hiring the most expensive talent but about choosing people who can build the unknown. He criticised Silicon Valley’s approach to AI hiring, where tech giants now compete for top researchers like sports teams bidding on athletes.
“The only thing you need to do is to get the right person,” Wang said. “Not really the expensive person.”
He also believes that the Chinese AI race is healthy, not hostile. According to Wang, companies take turns pulling ahead, which helps the entire ecosystem grow faster.
“You have the very fast iteration of the technology because of this competition,” he said. “I don’t think it’s brutal, but I think it’s very healthy.”
Still, open-source competition doesn’t guarantee trust. Western developers need to think carefully about which tools they use, and who built them.
The bottom line
Qwen3-Coder may offer impressive performance and open access, but its use comes with risks that go beyond benchmarks and coding speed. At a time when AI tools are shaping how critical systems are built, it’s worth asking not just what these tools can do, but who benefits when they do it.
(Photo by Shahadat Rahman)
See also: Alibaba’s new Qwen reasoning AI model sets open-source records

