NIST Creates Cybersecurity Playbook for Generative AI

Last updated: January 22, 2024 6:32 pm
Published January 22, 2024

The US National Institute of Standards and Technology (NIST) has published a report laying out in detail the types of cyberattacks that could be aimed at AI systems as well as possible defenses against them.

The agency considers the report critical because current defenses against attacks on AI systems remain weak, at a time when AI increasingly pervades all aspects of life and business.


Called “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” the report begins by developing a taxonomy and shared terminology for adversarial machine learning (ML), giving developers a uniform basis from which to build defenses and thereby helping secure AI systems.

The report covers two broad types of AI: predictive AI and generative AI. These systems are trained on vast amounts of data, which bad actors may seek to corrupt. Such corruption is feasible in part because these datasets are too large for people to monitor and filter.


NIST wants the report to help developers understand the types of attacks they might expect along with approaches to mitigate them, though it acknowledges that there is no silver bullet for beating the bad guys.

NIST identifies four major types of attacks on AI systems:

Evasion attacks: These occur after an AI system is deployed, when an attacker alters an input to change how the system responds to it. Examples include tampering with road signs to confuse autonomous vehicles.
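
The idea behind evasion can be sketched with the well-known fast gradient sign method (FGSM) applied to a toy linear classifier. The model, weights, and input below are invented for illustration, not drawn from the NIST report:

```python
import numpy as np

# Toy linear classifier: predict class 1 when w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.4, -0.3, 0.2])   # benign input, classified as class 1

# FGSM-style evasion: nudge the input against the sign of the score's
# gradient. For a linear model that gradient with respect to x is just w.
eps = 0.5
x_adv = x - eps * np.sign(w)      # small perturbation pushes the score down

print(predict(x), predict(x_adv))  # prints: 1 0 (the class flips)
```

The same principle scales to deep networks, where the gradient is obtained by backpropagation rather than read off directly.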


Poisoning attacks: These occur in the training phase through the introduction of corrupted data. Examples include adding various instances of inappropriate language into conversation records so a chatbot would view them as common use.
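
A minimal numeric sketch of poisoning, using an invented one-dimensional classifier trained on the midpoint of class means; a handful of mislabeled points is enough to move the learned decision boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training set: class 0 clustered around -1, class 1 around +1.
X = np.concatenate([rng.normal(-1, 0.2, 100), rng.normal(1, 0.2, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

def train_threshold(X, y):
    # Decision threshold at the midpoint of the two class means.
    return (X[y == 0].mean() + X[y == 1].mean()) / 2

clean_t = train_threshold(X, y)          # close to 0

# Poisoning: inject just 10 extreme points mislabeled as class 0.
X_p = np.concatenate([X, np.full(10, 8.0)])
y_p = np.concatenate([y, np.zeros(10)])
poisoned_t = train_threshold(X_p, y_p)   # threshold shifts noticeably right

print(clean_t, poisoned_t)
```

After poisoning, borderline inputs that the clean model classified as class 1 fall below the shifted threshold, which mirrors the point in Oprea's quote below that controlling a few dozen samples can suffice.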

Privacy attacks: These occur during deployment and are attempts to learn sensitive information about the AI or the data it was trained on, with the goal of misusing it. A bad actor could ask a chatbot questions and use the answers to reverse-engineer the model and find its weak spots.
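
The reverse-engineering idea can be illustrated with a toy model-extraction sketch: the attacker only gets query access, yet recovers the model's parameters from its answers. The "secret" model here is invented for illustration:

```python
import numpy as np

# Hypothetical black-box model: the attacker can call query() but
# cannot see the weights inside.
_secret_w = np.array([2.0, -1.0, 0.5])

def query(x):
    return float(_secret_w @ x)

# Extraction: querying with each basis vector reveals one weight at a
# time, since w . e_i = w_i for a linear model.
recovered = np.array([query(e) for e in np.eye(3)])
print(recovered)  # prints: [ 2.  -1.   0.5]
```

Real systems require far more queries and statistical fitting, but the principle is the same: every answer leaks information about the model.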

Abuse attacks: These involve inserting false information into a source from which an AI learns. Unlike poisoning attacks, abuse attacks feed the AI incorrect information through a legitimate but compromised source in order to repurpose the system.

However, each of these attack types varies with the attacker's goals and objectives, capabilities, and knowledge of the target system.

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said Alina Oprea, co-author and a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”

Defensive measures include augmenting the training data with correctly labeled adversarial examples, monitoring standard ML performance metrics for large drops in classifier accuracy, applying data sanitization techniques, and other methods.
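
The first defense listed, adversarial training, can be sketched as follows. The perturbation function and data are invented for illustration; the key point is that the perturbed copies keep their original, correct labels:

```python
import numpy as np

def fgsm_perturb(X, w, eps=0.1):
    # Illustrative perturbation against the gradient sign of a linear score.
    return X - eps * np.sign(w)

X = np.array([[0.4, -0.3], [0.1, 0.2]])  # toy training inputs
y = np.array([1, 0])                      # their correct labels
w = np.array([1.0, -2.0])                 # current model weights

# Adversarial augmentation: append perturbed copies of the inputs,
# paired with the ORIGINAL labels, so the model learns to resist them.
X_aug = np.vstack([X, fgsm_perturb(X, w)])
y_aug = np.concatenate([y, y])

print(X_aug.shape, y_aug.shape)  # prints: (4, 2) (4,)
```

In practice the perturbations are regenerated each training epoch against the current model, rather than computed once up front.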

This story originally appeared on AI Business

