Endor Labs: AI transparency vs ‘open-washing’

Last updated: February 24, 2025 8:33 pm
Published February 24, 2025
[Image: Transparent model of a building, illustrating the trend towards open-source AI and the need for transparency around model development to build trust, reduce risk, and improve security.]

As the AI industry focuses on transparency and security, debates over the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics.

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems.

“The US government’s 2021 Executive Order on Improving America’s Cybersecurity includes a provision requiring organisations to produce a software bill of materials (SBOM) for each product sold to federal government agencies.”

An SBOM is essentially an inventory detailing the open-source components within a product, helping detect vulnerabilities. Stiefel argued that “applying these same principles to AI systems is the logical next step.”

“Providing better transparency for citizens and government employees not only improves security,” he explained, “but also gives visibility into a model’s datasets, training, weights, and other components.”
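To make the SBOM idea concrete, the sketch below assembles a minimal CycloneDX-style bill of materials that lists an AI model as a component alongside its datasets and weights. The field names follow CycloneDX's ML-BOM conventions, but the specific entries (model name, licence, property values) are illustrative, not taken from any real product.

```python
import json

# Minimal CycloneDX-style SBOM fragment extended to cover an AI model.
# The component entry below is hypothetical; a real ML-BOM would record
# the actual training data, licences, and weight provenance.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-llm",      # hypothetical model name
            "version": "1.0",
            "properties": [
                {"name": "training-data", "value": "undisclosed"},
                {"name": "weights-license", "value": "MIT"},
            ],
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

The point of such an inventory is exactly what Stiefel describes: security teams can scan it for components with unknown provenance the same way they already scan SBOMs for vulnerable libraries.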

What does it mean for an AI model to be “open”?

Julien Sobrier, Senior Product Manager at Endor Labs, added important context to the ongoing discussion about AI transparency and “openness.” Sobrier broke down the complexity inherent in categorising AI systems as truly open.

“An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open’. It is a broad definition for now.”

Sobrier noted the lack of consistency across major players, which has led to confusion about the term.

“Among the main players, the concerns about the definition of ‘open’ started with OpenAI, and Meta is in the news now for their LLAMA model even though that is ‘more open’. We need a common understanding of what an open model means. We want to watch out for any ‘open-washing,’ as we saw with free vs open-source software.”

One potential pitfall, Sobrier highlighted, is the increasingly common practice of “open-washing,” where organisations claim transparency while imposing restrictions.

“With cloud providers offering a paid version of open-source projects (such as databases) without contributing back, we’ve seen a shift in many open-source projects: the source code is still open, but they added many commercial restrictions.”

“Meta and other ‘open’ LLM providers might go this route to keep their competitive advantage: more openness about the models, but preventing competitors from using them,” Sobrier warned.

DeepSeek aims to increase AI transparency

DeepSeek, one of the rising (albeit controversial) players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.

“DeepSeek has already released the models and their weights as open-source,” said Andrew Stiefel. “This next move will provide greater transparency into their hosted services, and will give visibility into how they fine-tune and run these models in production.”

Such transparency has significant benefits, noted Stiefel. “This will make it easier for the community to audit their systems for security risks and also for individuals and organisations to run their own versions of DeepSeek in production.”

Beyond security, DeepSeek also offers a roadmap for managing AI infrastructure at scale.

“From a transparency side, we’ll see how DeepSeek is running their hosted services. This will help address security concerns that emerged after it was discovered they left some of their ClickHouse databases unsecured.”

Stiefel highlighted that DeepSeek’s practices with tools like Docker, Kubernetes (K8s), and other infrastructure-as-code (IaC) configurations could empower startups and hobbyists to build similar hosted instances.

Open-source AI is hot right now

DeepSeek’s transparency initiatives align with the broader trend toward open-source AI. A report by IDC reveals that 60% of organisations are opting for open-source AI models over commercial alternatives for their generative AI (GenAI) projects.

Endor Labs research further indicates that organisations use, on average, between seven and twenty-one open-source models per application. The reasoning is clear: leveraging the best model for specific tasks and controlling API costs.

“As of February 7th, Endor Labs found that more than 3,500 additional models have been trained or distilled from the original DeepSeek R1 model,” said Stiefel. “This shows both the energy in the open-source AI model community, and why security teams need to understand both a model’s lineage and its potential risks.”

For Sobrier, the growing adoption of open-source AI models reinforces the need to evaluate their dependencies.

“We need to look at AI models as major dependencies that our software depends on. Companies need to ensure they are legally allowed to use these models, but also that they are safe to use in terms of operational risks and supply chain risks, just like open-source libraries.”

He emphasised that these risks extend to training data: “They need to be confident that the datasets used for training the LLM were not poisoned or did not contain sensitive private information.”

Building a systematic approach to AI model risk

As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centred around three key steps:

  1. Discovery: Detect the AI models your organisation currently uses.
  2. Evaluation: Review these models for potential risks, including security and operational concerns.
  3. Response: Set and enforce guardrails to ensure safe and secure model adoption.
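The three steps above can be sketched as a simple governance pipeline. Everything here is illustrative: the record shape, the policy fields, and the manifest format are this sketch's own inventions, not an Endor Labs product interface.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    source: str                      # e.g. "huggingface", "internal"
    risks: list = field(default_factory=list)

def discover(manifests):
    """Step 1: Discovery - inventory the models an application references."""
    return [ModelRecord(name=m["model"], source=m.get("source", "unknown"))
            for m in manifests]

def evaluate(record, policy):
    """Step 2: Evaluation - flag models that violate the security policy."""
    if record.source not in policy["allowed_sources"]:
        record.risks.append("untrusted-source")
    if record.name in policy["blocked_models"]:
        record.risks.append("blocked-model")
    return record

def respond(records):
    """Step 3: Response - enforce guardrails: block anything with open risks."""
    return {r.name: ("block" if r.risks else "allow") for r in records}

# Hypothetical policy and application manifest for demonstration.
policy = {"allowed_sources": {"huggingface"}, "blocked_models": set()}
manifests = [{"model": "deepseek-r1", "source": "huggingface"},
             {"model": "mystery-model", "source": "unknown"}]

decisions = respond([evaluate(r, policy) for r in discover(manifests)])
print(decisions)  # {'deepseek-r1': 'allow', 'mystery-model': 'block'}
```

The design choice worth noting is that discovery and evaluation are separate passes: teams can experiment freely while the security team retains the line-of-sight Stiefel describes, with enforcement applied only at the response stage.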

“The key is finding the right balance between enabling innovation and managing risk,” Stiefel said. “We need to give software engineering teams latitude to experiment, but must do so with full visibility. The security team needs line-of-sight and the insight to act.”

Sobrier further argued that the community must develop best practices for safely building and adopting AI models. A shared methodology is needed to evaluate AI models across parameters such as security, quality, operational risks, and openness.

Beyond transparency: Measures for a responsible AI future

To ensure the responsible growth of AI, the industry must adopt controls that operate across multiple vectors:

  • SaaS models: Safeguarding employee use of hosted models.
  • API integrations: Developers embedding third-party APIs like DeepSeek into applications, which, through OpenAI-compatible integrations, can switch deployments with just two lines of code.
  • Open-source models: Developers leveraging community-built models or creating their own models from existing foundations maintained by companies like DeepSeek.
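The “two lines of code” point refers to the fact that many providers expose OpenAI-compatible endpoints, so redirecting an application is often just a matter of swapping the base URL and the API key. A minimal sketch of that switch is below; the endpoint URLs match the providers' published ones, but the helper function and environment-variable names are this sketch's own.

```python
import os

def client_config(provider: str) -> dict:
    """Return the two settings an OpenAI-compatible client needs."""
    endpoints = {
        "openai": "https://api.openai.com/v1",
        "deepseek": "https://api.deepseek.com",
    }
    return {
        "base_url": endpoints[provider],
        # Env var naming is hypothetical, e.g. DEEPSEEK_API_KEY.
        "api_key": os.environ.get(f"{provider.upper()}_API_KEY", ""),
    }

# Switching deployments is then only these two changed settings:
cfg = client_config("deepseek")   # was: client_config("openai")
print(cfg["base_url"])            # https://api.deepseek.com
```

This is precisely why API integrations deserve their own control vector: a change this small can silently move employee traffic, and potentially sensitive prompts, to a different provider.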

Sobrier warned against complacency in the face of rapid AI progress. “The community needs to build best practices to develop secure and open AI models,” he advised, “and a methodology to rate them along security, quality, operational risks, and openness.”

As Stiefel succinctly summarised: “Think about security across multiple vectors and implement the right controls for each.”
