UMD’s quest for ethical and inclusive AI

Last updated: October 8, 2024 2:48 pm
Published October 8, 2024
Source: University of Maryland

As artificial intelligence systems increasingly permeate critical decision-making processes in our everyday lives, the integration of ethical frameworks into AI development is becoming a research priority. At the University of Maryland (UMD), interdisciplinary teams tackle the complex interplay between normative reasoning, machine learning algorithms, and socio-technical systems.

In a recent interview with Artificial Intelligence News, postdoctoral researchers Ilaria Canavotto and Vaishnav Kameswaran combine expertise in philosophy, computer science, and human-computer interaction to address pressing challenges in AI ethics. Their work spans the theoretical foundations of embedding ethical principles into AI architectures and the practical implications of AI deployment in high-stakes domains such as employment.

Normative understanding of AI systems

Ilaria Canavotto, a researcher at UMD’s Values-Centered Artificial Intelligence (VCAI) initiative, is affiliated with the Institute for Advanced Computer Studies and the Philosophy Department. She is tackling a fundamental question: how can we imbue AI systems with normative understanding? As AI increasingly influences decisions that affect human rights and well-being, systems need to grasp ethical and legal norms.

“The question that I investigate is, how can we get this kind of information, this normative understanding of the world, into a machine that could be a robot, a chatbot, anything like that?” Canavotto says.

Her research combines two approaches:

Top-down approach: This traditional method involves explicitly programming rules and norms into the system. However, Canavotto points out, “It’s just impossible to write them all down. There are always new situations that come up.”

Bottom-up approach: A more recent method that uses machine learning to extract rules from data. While more flexible, it lacks transparency: “The problem with this approach is that we don’t really know what the system learns, and it’s very difficult to explain its decisions,” Canavotto notes.

Canavotto and her colleagues, Jeff Horty and Eric Pacuit, are developing a hybrid approach that combines the best of both. They aim to create AI systems that can learn rules from data while maintaining explainable decision-making processes grounded in legal and normative reasoning.

“[Our] approach […] is based on a field called artificial intelligence and law. So, in this field, they developed algorithms to extract information from the data. We would like to generalise some of these algorithms and then have a system that can more generally extract information grounded in legal reasoning and normative reasoning,” she explains.
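The interview does not spell out how the hybrid approach is implemented, but the general idea of layering explicit norms over a data-derived fallback, while keeping every decision explainable, can be sketched as follows. All the rules, case fields, and the stand-in "learned" rule below are invented for illustration:

```python
# Illustrative sketch of a hybrid normative-reasoning loop: explicitly
# programmed norms (top-down) are consulted first; when none applies,
# a rule standing in for one extracted from data (bottom-up) decides,
# and every verdict carries a human-readable explanation.

def top_down_norms(case):
    """Hand-written norms. Returns (verdict, reason) or None if no norm applies."""
    if case.get("violates_privacy_law"):
        return ("deny", "explicit norm: action violates privacy law")
    if case.get("has_informed_consent"):
        return ("allow", "explicit norm: informed consent was given")
    return None  # no explicit norm covers this situation

def learned_rule(case):
    """Stand-in for a rule extracted from data by a learning algorithm."""
    risk = 0.9 if case.get("shares_sensitive_data") else 0.1
    verdict = "deny" if risk > 0.5 else "allow"
    return (verdict, f"learned rule: estimated risk {risk:.1f}")

def decide(case):
    """Hybrid decision: explicit norms take priority; otherwise fall back
    to the learned rule. Either way, the reason is part of the output."""
    return top_down_norms(case) or learned_rule(case)

print(decide({"violates_privacy_law": True}))   # explicit norm fires
print(decide({"shares_sensitive_data": True}))  # falls back to learned rule
```

The point of the structure is the one Canavotto raises: the learned component adds coverage for situations no one wrote a rule for, while the explanation string preserves the transparency that a pure bottom-up system lacks.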

AI’s impact on hiring practices and disability inclusion

While Canavotto focuses on the theoretical foundations, Vaishnav Kameswaran, affiliated with UMD’s NSF Institute for Trustworthy AI in Law and Society, examines AI’s real-world implications, particularly its impact on people with disabilities.

Kameswaran’s research looks into the use of AI in hiring processes, uncovering how systems can inadvertently discriminate against candidates with disabilities. He explains, “We’ve been working to… open up the black box a bit, try to understand what these algorithms do on the back end, and how they begin to assess candidates.”

His findings reveal that many AI-driven hiring platforms rely heavily on normative behavioural cues, such as eye contact and facial expressions, to assess candidates. This approach can significantly disadvantage people with certain disabilities. For instance, visually impaired candidates may struggle to maintain eye contact, a signal that AI systems often interpret as a lack of engagement.

“By focusing on some of these qualities and assessing candidates based on them, these platforms tend to exacerbate existing social inequalities,” Kameswaran warns. He argues that this trend could further marginalise people with disabilities in the workforce, a group already facing significant employment challenges.

The broader ethical landscape

Both researchers emphasise that the ethical concerns surrounding AI extend far beyond their specific areas of study. They touch on several key issues:

  1. Data privacy and consent: The researchers highlight the inadequacy of current consent mechanisms, especially regarding data collection for AI training. Kameswaran cites examples from his work in India, where vulnerable populations unknowingly surrendered extensive personal data to AI-driven loan platforms during the COVID-19 pandemic.
  2. Transparency and explainability: Both researchers stress the importance of understanding how AI systems make decisions, especially when those decisions significantly affect people’s lives.
  3. Societal attitudes and biases: Kameswaran points out that technical solutions alone cannot solve discrimination. Broader societal changes in attitudes towards marginalised groups, including people with disabilities, are also needed.
  4. Interdisciplinary collaboration: The researchers’ work at UMD exemplifies the importance of cooperation between philosophy, computer science, and other disciplines in addressing AI ethics.

Looking ahead: solutions and challenges

While the challenges are significant, both researchers are working towards solutions:

  • Canavotto’s hybrid approach to normative AI could lead to more ethically aware and explainable AI systems.
  • Kameswaran suggests developing audit tools that advocacy groups can use to assess AI hiring platforms for potential discrimination.
  • Both emphasise the need for policy changes, such as updating the Americans with Disabilities Act to address AI-related discrimination.
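Kameswaran does not describe his proposed audit tools in detail. One simple check such a tool might run is the "four-fifths" disparate-impact test used in US employment-selection guidance: flag any group whose selection rate falls below 80% of the highest group’s rate. The data, group names, and threshold below are purely illustrative:

```python
# Illustrative disparate-impact check over a hiring platform's outcomes.
# All numbers are invented; a real audit would use the platform's logs.

def selection_rates(outcomes):
    """outcomes: {group: (selected, assessed)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def disparate_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times the
    highest group's rate, mapped to their impact ratio."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical platform output: candidates selected out of candidates assessed.
outcomes = {
    "no_disability": (60, 100),      # 60% selected
    "visual_impairment": (20, 100),  # 20% selected
}
flagged = disparate_impact(outcomes)
print(flagged)  # flags 'visual_impairment' (ratio ≈ 0.33, below the 0.8 bar)
```

A ratio this far below 0.8 is exactly the kind of signal an advocacy group could take to a platform vendor or regulator, which is why audit tooling pairs naturally with the policy changes the researchers call for.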

However, they also acknowledge the complexity of the issues. As Kameswaran notes, “Unfortunately, I don’t think that a technical solution of training AI with certain kinds of data, plus auditing tools, is in itself going to solve the problem. It requires a multi-pronged approach.”

A key takeaway from the researchers’ work is the need for greater public awareness of AI’s impact on our lives. People need to know how much data they share and how it is being used. As Canavotto points out, companies often have an incentive to obscure this information, characterising them as companies that try to tell you, “My service is going to be better for you if you give me the data.”

The researchers argue that much more needs to be done to educate the public and hold companies accountable. Ultimately, Canavotto and Kameswaran’s interdisciplinary approach, combining philosophical inquiry with practical application, offers a step in the right direction, helping to ensure that AI systems are not only powerful but also ethical and equitable.
