NIST Creates Cybersecurity Playbook for Generative AI

Last updated: January 22, 2024 6:32 pm
Published January 22, 2024

The US National Institute of Standards and Technology (NIST) has published a report laying out in detail the types of cyberattacks that could be aimed at AI systems as well as possible defenses against them.

The agency believes such a report is critical because current defenses against cyberattacks on AI systems are lackluster, even as AI increasingly pervades all aspects of life and business.


Called “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” the report begins by developing a taxonomy and terminology of adversarial ML, giving developers a uniform basis from which to build defenses and thereby helping to secure AI systems.

The report covers two broad types of AI: predictive AI and generative AI. Both are trained on vast amounts of data, which bad actors may seek to corrupt. Such corruption is plausible because these datasets are far too large for people to monitor and filter.


NIST wants the report to help developers understand the types of attacks they might expect along with approaches to mitigate them, though it acknowledges that there is no silver bullet for beating the bad guys.

NIST identifies four major types of attacks on AI systems:

Evasion attacks: These occur after an AI system is deployed, when an attacker alters an input to change how the system responds to it. Examples include tampering with road signs to confuse autonomous vehicles.
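The mechanics can be seen in a toy sketch (our illustration, not material from the NIST report): against a simple linear classifier, an attacker nudges an input against the model's weight direction until the deployed model's decision flips.

```python
# Toy evasion attack on a linear "flag / allow" classifier.
# The attacker repeatedly perturbs the input opposite to the weight
# vector, lowering the score until the decision flips.

W = [0.8, -0.5]   # weights of a deployed linear classifier (hypothetical)
B = -0.1          # bias term

def score(x):
    return sum(w * xi for w, xi in zip(W, x)) + B

def classify(x):
    return int(score(x) >= 0)   # 1 = flagged, 0 = allowed

def evade(x, step=0.05, max_steps=100):
    """Greedily perturb x against the weight direction until the
    classifier's decision flips (or the step budget runs out)."""
    x = list(x)
    for _ in range(max_steps):
        if classify(x) == 0:
            return x
        x = [xi - step * w for xi, w in zip(x, W)]
    return x

original = [1.0, 0.2]
adversarial = evade(original)
print(classify(original), classify(adversarial))  # 1 0
```

The same greedy-perturbation idea, scaled up with gradients, underlies real-world evasion techniques against image and text classifiers.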


Poisoning attacks: These occur in the training phase through the introduction of corrupted data. An example is inserting numerous instances of inappropriate language into conversation records so that a chatbot learns to treat that language as common usage.
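A minimal sketch (again our illustration, not NIST's) shows why even a few corrupted samples matter: flipping two labels in a tiny training set shifts a nearest-centroid classifier's learned boundary enough to change its predictions.

```python
# Toy label-flipping poisoning attack against a 1-D nearest-centroid
# classifier. Flipping just two training labels moves the class
# centroids and flips the prediction for a borderline input.

def train(points):
    """points: list of (x, label) with label in {0, 1}; returns centroids."""
    groups = {0: [], 1: []}
    for x, y in points:
        groups[y].append(x)
    return {y: sum(v) / len(v) for y, v in groups.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

clean = [(x, 0) for x in [0.0, 0.1, 0.2, 0.3]] + \
        [(x, 1) for x in [1.0, 1.1, 1.2, 1.3]]

poisoned = clean.copy()
poisoned[0] = (0.0, 1)   # attacker flips two labels
poisoned[1] = (0.1, 1)

print(predict(train(clean), 0.55))     # 0 (clean model)
print(predict(train(poisoned), 0.55))  # 1 (poisoned model)
```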

Privacy attacks: These occur during deployment and involve attempts to learn sensitive information about the AI, or the data it was trained on, in order to misuse it. A bad actor might ask the bot a series of questions and use the answers to reverse engineer the model and find its weak spots.
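The query-and-reverse-engineer pattern can be sketched in miniature (a hypothetical example, not from the report): with ordinary queries alone, an attacker can binary-search out a black-box model's hidden decision threshold.

```python
# Toy model-extraction attack: recover a black-box classifier's secret
# decision threshold using only its yes/no answers, via binary search.

SECRET_THRESHOLD = 0.37  # hidden inside the deployed model (hypothetical)

def black_box(x):
    """The only interface the attacker has: a 0/1 answer per query."""
    return int(x >= SECRET_THRESHOLD)

def extract_threshold(queries=30):
    lo, hi = 0.0, 1.0
    for _ in range(queries):
        mid = (lo + hi) / 2
        if black_box(mid):
            hi = mid   # threshold is at or below mid
        else:
            lo = mid   # threshold is above mid
    return (lo + hi) / 2

print(round(extract_threshold(), 3))  # 0.37, recovered in 30 queries
```

Real extraction attacks target far richer models, but the principle is the same: every answer leaks a little information about the model or its training data.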

Abuse attacks: These involve inputting false information into a source from which an AI learns. Different from poisoning attacks, abuse attacks give the AI incorrect information from a legitimate but compromised source to repurpose the AI.

However, each of these attack types is shaped by factors such as the attacker's goals and objectives, capabilities, and knowledge of the target system.

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said Alina Oprea, co-author and a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”

Defensive measures include augmenting the training data with correctly labeled adversarial examples, monitoring standard performance metrics of ML models for large degradations in classifier accuracy, applying data sanitization techniques, and other methods.
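The metric-monitoring measure can be sketched simply (our illustration of the general idea, with hypothetical names and thresholds): periodically score the deployed model on a trusted holdout set and raise an alert when accuracy falls well below its recorded baseline.

```python
# Toy degradation monitor: alert when a deployed classifier's accuracy
# on a trusted holdout set drops more than max_drop below its baseline,
# which may indicate poisoning or other tampering.

def accuracy(model, holdout):
    return sum(model(x) == y for x, y in holdout) / len(holdout)

def check_degradation(model, holdout, baseline, max_drop=0.10):
    """Return (current_accuracy, alert)."""
    acc = accuracy(model, holdout)
    return acc, (baseline - acc) > max_drop

holdout = [(0, 0), (1, 1), (2, 0), (3, 1)]   # trusted labeled samples
healthy = lambda x: x % 2                    # matches the holdout labels
degraded = lambda x: 0                       # e.g. after a poisoning incident

print(check_degradation(healthy, holdout, baseline=1.0))   # (1.0, False)
print(check_degradation(degraded, holdout, baseline=1.0))  # (0.5, True)
```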

This story originally appeared on AI Business

