Security

NIST Creates Cybersecurity Playbook for Generative AI | DCN

Last updated: January 22, 2024 6:32 pm
Published January 22, 2024

The US National Institute of Standards and Technology (NIST) has published a report laying out in detail the types of cyberattacks that could be aimed at AI systems as well as possible defenses against them.

The agency considers such a report critical because current defenses against cyberattacks on AI systems remain weak, even as AI increasingly pervades all aspects of life and business.


Titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” the report begins by establishing a shared taxonomy and terminology for adversarial ML, giving developers a uniform basis from which to build defenses for AI systems.

The report covers two broad types of AI: predictive AI and generative AI. Both are trained on vast amounts of data, which bad actors may try to corrupt. Such corruption is plausible because these datasets are far too large for humans to monitor and filter.


NIST wants the report to help developers understand the types of attacks they might expect along with approaches to mitigate them, though it acknowledges that there is no silver bullet for beating the bad guys.

NIST identifies four major types of attacks on AI systems:

Evasion attacks: These occur after an AI system is deployed, when an attacker alters an input to change how the system responds to it. Examples include tampering with road signs to confuse autonomous vehicles.


Poisoning attacks: These occur in the training phase through the introduction of corrupted data. Examples include seeding conversation records with instances of inappropriate language so that a chatbot learns to treat it as common usage.

Privacy attacks: These occur during deployment and are attempts to learn sensitive information about the AI, or the data it was trained on, with the goal of misusing it. For example, an attacker might ask the model a series of questions and use the answers to reverse engineer it and find its weak spots.

Abuse attacks: These involve inputting false information into a source from which an AI learns. Unlike poisoning attacks, abuse attacks feed the AI incorrect information from a legitimate but compromised source in order to repurpose it.

Each attack type is further shaped by factors such as the attacker’s goals and objectives, capabilities, and knowledge of the target system.

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said Alina Oprea, co-author and a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”
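As a rough illustration of Oprea’s point that controlling a few dozen training samples can suffice, the sketch below flips the labels of 24 samples in a synthetic 2-D dataset and retrains a plain logistic regression. The data and model are hypothetical stand-ins, not anything from the NIST report:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class training set: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)), rng.normal(1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def fit_logreg(X, y, steps=300, lr=0.5):
    """Plain logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == y))

# Clean baseline.
w_clean, b_clean = fit_logreg(X, y)

# Poisoning: flip the labels of just 24 samples ("a few dozen") chosen
# near the decision boundary, then retrain on the corrupted set.
y_poisoned = y.copy()
near_boundary = np.argsort(np.abs(X @ w_clean + b_clean))[:24]
y_poisoned[near_boundary] = 1 - y_poisoned[near_boundary]
w_pois, b_pois = fit_logreg(X, y_poisoned)

print("clean-trained accuracy: ", accuracy(w_clean, b_clean, X, y))
print("poison-trained accuracy:", accuracy(w_pois, b_pois, X, y))
```

The retrained model’s decision boundary shifts even though 88% of the training set is untouched, which is the asymmetry Oprea highlights.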

Mitigations discussed in the report include adversarial training (augmenting the training data with correctly labeled adversarial examples), monitoring standard performance metrics of ML models for large degradation in classifier accuracy, and applying data sanitization techniques, among other methods.
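The first of those mitigations, adversarial training, can be sketched in a few lines. The toy example below uses a fast-gradient-sign (FGSM-style) perturbation against a logistic regression on synthetic data; it illustrates the general technique and is not drawn from the report itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for any classifier's training set.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, X, y, eps=0.2):
    """Fast-gradient-sign perturbation nudging each input toward misclassification."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)  # gradient of the log-loss w.r.t. each input
    return X + eps * np.sign(grad_x)

# Adversarial training: at each step, regenerate adversarial examples and
# fit on clean plus perturbed inputs, keeping the *correct* labels.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    X_adv = fgsm(w, b, X, y)
    X_mix, y_mix = np.vstack([X, X_adv]), np.concatenate([y, y])
    p = sigmoid(X_mix @ w + b)
    w -= lr * X_mix.T @ (p - y_mix) / len(y_mix)
    b -= lr * np.mean(p - y_mix)

clean_acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
adv_acc = float(np.mean((sigmoid(fgsm(w, b, X, y) @ w + b) > 0.5) == y))
print(f"clean accuracy: {clean_acc:.2f}  adversarial accuracy: {adv_acc:.2f}")
```

The model is evaluated both on clean inputs and on freshly generated adversarial examples; training on the perturbed copies is what keeps the latter score from collapsing.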

This story originally appeared on AI Business.

