Security

Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

Last updated: March 29, 2024 12:12 am
Published March 29, 2024

Sarah Bird, Microsoft's chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren't hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations "that are plausible yet unsupported," and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.

"We know that customers don't all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a score and see the results," she says.

Three features are now available in preview on Azure AI: Prompt Shields, which blocks prompt injections or malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities. Two other features, for steering models toward safe outputs and for tracking prompts to flag potentially problematic users, are coming soon.
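
For illustration, here is a minimal Python sketch of how a customer might call a prompt-screening service such as Prompt Shields over REST. The endpoint path, API version, header, and JSON field names are assumptions made for the example, not details confirmed in this article; Azure's documentation defines the actual contract.

# Illustrative sketch only: the endpoint path, api-version, header, and JSON
# field names below are assumptions, not details confirmed by this article.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder resource
API_KEY = "<your-key>"  # placeholder key

def screen_prompt(user_prompt: str, documents: list[str]) -> dict:
    """Send a user prompt plus any third-party documents for screening."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:shieldPrompt",  # assumed route
        params={"api-version": "2024-02-15-preview"},   # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"userPrompt": user_prompt, "documents": documents},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # expected to flag suspected injection attacks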

Whether the user is typing in a prompt or the model is processing third-party data, the monitoring system evaluates it to see whether it triggers any banned words or contains hidden prompts before deciding to send it to the model to answer. Afterward, the system looks at the model's response and checks whether the model hallucinated information that is not in the document or the prompt.
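
As a rough sketch of that two-stage flow, using placeholder helpers: none of these functions are Microsoft APIs, and the real checks are far more sophisticated than the string matching used here.

def prompt_flagged(user_prompt: str, documents: list[str]) -> bool:
    # Placeholder pre-generation check; a real system would call a screening
    # service such as the Prompt Shields sketch above.
    banned_phrases = {"ignore previous instructions", "reveal your system prompt"}
    text = (user_prompt + " " + " ".join(documents)).lower()
    return any(phrase in text for phrase in banned_phrases)

def response_is_grounded(answer: str, documents: list[str]) -> bool:
    # Toy post-generation check: treat the answer as grounded only if every
    # sentence appears verbatim in a source document. A real groundedness
    # detector compares meaning, not strings.
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return all(any(s.lower() in doc.lower() for doc in documents) for s in sentences)

def answer_with_checks(call_model, user_prompt: str, documents: list[str]) -> str:
    # call_model is whatever function invokes the hosted model.
    if prompt_flagged(user_prompt, documents):
        return "Blocked before generation: possible injection or banned content."
    answer = call_model(user_prompt, documents)
    if not response_is_grounded(answer, documents):
        return "Withheld after generation: answer is not grounded in the sources."
    return answer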

In the case of the Google Gemini images, filters made to reduce bias had unintended effects, an area where Microsoft says its Azure AI tools will allow for more customized control. Bird acknowledges that there is concern Microsoft and other companies could be deciding what is or isn't appropriate for AI models, so her team added a way for Azure customers to toggle the filtering of hate speech or violence that the model sees and blocks.
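
To make the idea of per-category toggles concrete, a hedged sketch of what such a setting could look like in a customer's own configuration follows; the category names, fields, and severity scale are invented for illustration and are not Azure's actual content-filter schema.

# Hypothetical configuration sketch; the field names and severity scale are
# assumptions, not Azure's actual content-filter settings.
content_filter_config = {
    "hate":      {"enabled": True,  "block_at_severity": "medium"},
    "violence":  {"enabled": True,  "block_at_severity": "high"},
    "self_harm": {"enabled": True,  "block_at_severity": "low"},
    # A customer running its own red-team exercises might disable a category
    # so that flagged content is logged for review rather than blocked.
    "sexual":    {"enabled": False, "block_at_severity": None},
}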

In the future, Azure users will also be able to get a report of users who attempt to trigger unsafe outputs. Bird says this allows system administrators to figure out which users are the customer's own team of red teamers and which could be people with more malicious intent.

Bird says the safety features are immediately "attached" to GPT-4 and other popular models like Llama 2. However, because Azure's model garden contains many AI models, users of smaller, less widely used open-source systems may have to manually point the safety features to their models.
