Is OpenAI’s ‘moonshot’ to integrate democracy into AI tech more than PR? | The AI Beat

Published January 23, 2024

Last week, an OpenAI PR rep reached out by email to let me know the company had formed a new “Collective Alignment” team that would focus on “prototyping processes” that allow OpenAI to “incorporate public input to guide AI model behavior.” The goal? Nothing less than democratic AI governance — building on the work of ten recipients of OpenAI’s Democratic Inputs to AI grant program.

I immediately giggled. The cynical me enjoyed rolling my eyes at the idea of OpenAI, with its lofty ideal of ‘creating safe AGI that benefits all of humanity’ and its mundane reality of hawking APIs and GPT stores, scouring for more compute, and fending off copyright lawsuits, attempting to tackle one of history’s thorniest challenges: crowdsourcing a democratic, public consensus about anything.

After all, isn’t American democracy itself currently being tested like never before? Aren’t AI systems at the core of deep-seated fears about deepfakes and disinformation threatening democracy in the 2024 elections? How could something as subjective as public opinion ever be applied to the rules of AI systems — and by OpenAI, no less, a company which I think can objectively be described as the king of today’s commercial AI?

Still, I was fascinated by the idea that there are people at OpenAI whose full-time job is to take a crack at creating a more democratic AI guided by humans — which is, undeniably, a hopeful, optimistic and important goal. But is this effort more than a PR stunt, a gesture by an AI company under increased scrutiny by regulators?

OpenAI researcher admits collective alignment could be a ‘moonshot’

I wanted to know more, so I got on a Zoom with the two current members of the new Collective Alignment team: Tyna Eloundou, an OpenAI researcher focused on the societal impacts of technology, and Teddy Lee, a product manager at OpenAI who previously led human data labeling products and operations to ensure responsible deployment of GPT, ChatGPT, DALL-E, and the OpenAI API. The team is “actively looking” to add a research engineer and a research scientist, who will work closely with OpenAI’s “Human Data” team, “which builds infrastructure for collecting human input on the company’s AI models,” and with other research teams.

I asked Eloundou how challenging it would be to reach the team’s goals of developing democratic processes for deciding what rules AI systems should follow. In an OpenAI blog post in May 2023 that announced the grant program, “democratic processes” were defined as “a process in which a broadly representative group of people exchange opinions, engage in deliberative discussions, and ultimately decide on an outcome via a transparent decision making process.”

Eloundou admitted that many would call it a “moonshot.”

“But as a society, we’ve had to face up to this challenge,” she added. “Democracy itself is complicated, messy, and we arrange ourselves in different ways to have some hope of governing our societies or respective societies.” For example, she explained, it is people who decide on all the parameters of democracy — how many representatives, what voting looks like — and people decide whether the rules make sense and whether to revise the rules.

Lee pointed out that one anxiety-producing challenge is the myriad of directions an attempt to integrate democracy into AI systems could take.

“Part of the reason for having a grant program in the first place is to see what other people who are already doing a lot of exciting work in the space are doing, what are they going to focus on,” he said. “It’s a very intimidating space to step into — the socio-technical world of how do you see these models collectively, but at the same time, there’s a lot of low-hanging fruit, a lot of ways that we can see our own blind spots.”

10 teams designed, built and tested ideas using democratic methods

According to a new OpenAI blog post published last week, the Democratic Inputs to AI grant program awarded $100,000 to 10 diverse teams out of nearly 1000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. “Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public,” the blog post says.

Each team tackled these challenges in different ways — they included “novel video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior.”
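
The last of those approaches, mapping beliefs to dimensions, is the easiest to make concrete. Below is a minimal, hypothetical sketch of the general idea: embed participant statements, reduce the embeddings to a few principal axes, and summarize where the group sits on each axis in a vector that could later inform fine-tuning. The random embeddings, the choice of five dimensions, and the median aggregation are all illustrative assumptions, not a description of any grantee’s actual system.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for real sentence embeddings of 200 participant statements
# (in practice these would come from an embedding model, not random noise).
statement_embeddings = rng.normal(size=(200, 384))

# Project the statements onto a handful of principal "belief" axes.
pca = PCA(n_components=5)
belief_coords = pca.fit_transform(statement_embeddings)  # shape: (200, 5)

# Summarize the group's position on each axis; the median is one simple,
# outlier-robust aggregation rule. A vector like this could then guide which
# behaviors a fine-tuning dataset emphasizes.
group_position = np.median(belief_coords, axis=0)
print("Group position on 5 belief dimensions:", group_position)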

There were, not surprisingly, immediate roadblocks. Many of the ten teams quickly learned that public opinion can change on a dime, even day-to-day. Reaching the right participants across digital and cultural divides is tough and can skew results. Finding agreement among polarized groups? You guessed it — hard.

But OpenAI’s Collective Alignment team is undeterred. In addition to advisors on the original grant program, including Hélène Landemore, a professor of political science at Yale, Eloundou said the team has reached out to several researchers in the social sciences, “in particular those who are involved in citizens assemblies — I think those are the closest modern corollary.” (I had to look that one up — a citizens assembly is “a group of people selected by lottery from the general population to deliberate on important public questions so as to exert an influence.”)
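
For a rough sense of what that lottery, or sortition, looks like mechanically, here is a toy sketch that draws a 30-person assembly from a volunteer pool while matching simple demographic quotas. The pool, the two demographic fields, and the quota sizes are invented purely for illustration and are not taken from any real assembly design.

import random

random.seed(42)

# Hypothetical volunteer pool: (person_id, age_group, region) for 1,000 people.
AGE_GROUPS = ["18-34", "35-54", "55+"]
REGIONS = ["north", "south"]
pool = [(i, random.choice(AGE_GROUPS), random.choice(REGIONS)) for i in range(1000)]

# Five seats per (age_group, region) cell, i.e. a 30-person assembly that
# mirrors the chosen demographic strata.
quotas = {(age, region): 5 for age in AGE_GROUPS for region in REGIONS}

assembly = []
for (age, region), seats in quotas.items():
    candidates = [p for p in pool if p[1] == age and p[2] == region]
    assembly.extend(random.sample(candidates, seats))  # the lottery step

print(f"Selected {len(assembly)} members by stratified lottery")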

Giving democratic processes in AI ‘our best shot’

One of the grant program’s starting points, said Lee, was “we don’t know what we don’t know.” The grantees came from domains like journalism, medicine, law, and social science; some had worked on U.N. peace negotiations. But the sheer amount of excitement and expertise in the space, he explained, imbued the projects with a sense of energy. “We just need to help to focus that towards our own technology,” he said. “That’s been pretty exciting and also humbling.”

But is the Collective Alignment team’s goal ultimately doable? “I think it’s just like democracy itself,” he said. “It’s a bit of a continual effort. We won’t solve it. As long as people are involved, as people’s views change and people interact with these models in new ways, we’ll have to keep working at it.”

Eloundou agreed. “We’ll definitely give it our best shot,” she said.

PR stunt or not, I can’t argue with that — at a moment when democratic processes seem to be hanging by a thread, it seems like any effort to boost them in AI system decision-making should be applauded. So, I say to OpenAI: Hit me with your best shot.
