Sunday, 19 Apr 2026
Subscribe
logo
  • Global
  • AI
  • Cloud Computing
  • Edge Computing
  • Security
  • Investment
  • Sustainability
  • More
    • Colocation
    • Quantum Computing
    • Regulation & Policy
    • Infrastructure
    • Power & Cooling
    • Design
    • Innovations
    • Blog
Font ResizerAa
Data Center NewsData Center News
Search
  • Global
  • AI
  • Cloud Computing
  • Edge Computing
  • Security
  • Investment
  • Sustainability
  • More
    • Colocation
    • Quantum Computing
    • Regulation & Policy
    • Infrastructure
    • Power & Cooling
    • Design
    • Innovations
    • Blog
Have an existing account? Sign In
Follow US
© 2022 Foxiz News Network. Ruby Design Company. All Rights Reserved.
Data Center News > Blog > Cloud Computing > Microsoft unveils safety and security tools for generative AI
Cloud Computing

Microsoft unveils safety and security tools for generative AI

Last updated: April 1, 2024 8:02 pm
Published April 1, 2024
Share
construction site barricades
SHARE

Microsoft is including security and safety instruments to Azure AI Studio, the corporate’s cloud-based toolkit for constructing generative AI functions. The brand new instruments embrace safety in opposition to immediate injection assaults, detection of hallucinations in mannequin output, system messages to steer fashions towards secure output, mannequin security evaluations, and threat and security monitoring.

Microsoft introduced the brand new options on March 28. Security evaluations at the moment are out there in preview in Azure AI Studio. The opposite options are coming quickly, Microsoft stated. Azure AI Studio, additionally in preview, could be accessed from ai.azure.com.

Immediate shields will detect and block injection assaults and embrace a brand new mannequin to establish oblique immediate assaults earlier than they affect the mannequin. This characteristic is at the moment out there in preview in Azure AI Content material Security. Groundness detection is designed to establish text-based hallucinations, together with minor inaccuracies, in mannequin outputs. This characteristic detects “ungrounded materials” in textual content to assist the standard of LLM outputs, Microsoft stated.

Security system messages, also referred to as metaprompts, steer a mannequin’s habits towards secure and accountable outputs. Security evaluations assess an utility’s capacity to jailbreak assaults and to producing content material dangers. Along with mannequin high quality metrics, they supply metrics associated to content material and safety dangers.

Lastly, threat and security monitoring helps customers perceive what mannequin inputs, outputs, and customers are triggering content material filters to tell mitigation. This characteristic is at the moment out there in preview in Azure OpenAI Service.

See also  Microsoft Is Struggling to Retain Women, Minority Employees

Copyright © 2024 IDG Communications, .

Source link

TAGGED: generative, Microsoft, safety, security, Tools, unveils
Share This Article
Twitter Email Copy Link Print
Previous Article Apple researchers develop AI that can 'see' and understand screen context Apple researchers develop AI that can ‘see’ and understand screen context
Next Article From AI to HDDs: Data storage predictions for 2024 The changing world of backup
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Your Trusted Source for Accurate and Timely Updates!

Our commitment to accuracy, impartiality, and delivering breaking news as it happens has earned us the trust of a vast audience. Stay ahead with real-time updates on the latest events, trends.
FacebookLike
TwitterFollow
InstagramFollow
YoutubeSubscribe
LinkedInFollow
MediumFollow
- Advertisement -
Ad image

Popular Posts

HPE extends Juniper’s Mist AI to boost data center management

Additional, Aaron said that Marvis Actions presents automated remediations for IT-approved eventualities. Utilizing a Human-in-the-Loop…

September 1, 2025

Virtual reality and wearable technology pilot aims to cut drug deaths

Hundreds of lives may very well be saved by the usage of synthetic intelligence (AI)…

October 18, 2024

Compact phononic circuits guide sound at gigahertz frequencies for chip-scale devices

Topological phononic chip platform. a, Illustration of built-in units that use topologically protected sound waves,…

September 19, 2025

Nvidia-Backed Group Drops $40B on Aligned Data Centers

The AI Infrastructure Partnership – a bunch that features Nvidia, Blackrock, Microsoft, and xAI –…

October 15, 2025

Riverbed survey reveals AI readiness gap

“Wanting forward, nevertheless, there's a broad consensus round future readiness. By 2028, 86% of respondents…

September 24, 2025

You Might Also Like

3D zero-day vulnerability refers to a security flaw in software
Global Market

DNS security is often inadequate, and network engineers should get more involved

By saad
Top 10 tools for multi-cloud architecture design
Cloud Computing

Top 10 tools for multi-cloud architecture design

By saad
Spending on AI-enabled security tools
Global Market

IBM unveils security services for thwarting agentic attacks, automating threat assessment

By saad
AI data centre power demand is reshaping cloud growth
Cloud Computing

AI data centre power demand shapes cloud growth

By saad
Data Center News
Facebook Twitter Youtube Instagram Linkedin

About US

Data Center News: Stay informed on the pulse of data centers. Latest updates, tech trends, and industry insights—all in one place. Elevate your data infrastructure knowledge.

Top Categories
  • Global Market
  • Infrastructure
  • Innovations
  • Investments
Usefull Links
  • Home
  • Contact
  • Privacy Policy
  • Terms & Conditions

© 2024 – datacenternews.tech – All rights reserved

Welcome Back!

Sign in to your account

Lost your password?
We use cookies to ensure that we give you the best experience on our website. If you continue to use this site we will assume that you are happy with it.
You can revoke your consent any time using the Revoke consent button.