Unintended consequences: U.S. election results herald reckless AI development

Last updated: December 22, 2024 10:29 pm
Published December 22, 2024

While the 2024 U.S. election focused on traditional issues like the economy and immigration, its quiet impact on AI policy may prove far more transformative. Without a single debate question or major campaign promise about AI, voters inadvertently tipped the scales in favor of accelerationists, those who advocate for rapid AI development with minimal regulatory hurdles. The implications of this acceleration are profound, heralding a new era of AI policy that prioritizes innovation over caution and signaling a decisive shift in the debate between AI’s potential risks and rewards.

President-elect Donald Trump’s pro-business stance leads many to believe that his administration will favor those developing and marketing AI and other advanced technologies. His party platform has little to say about AI. However, it does emphasize a policy approach focused on repealing AI regulations, particularly targeting what it described as “radical left-wing ideas” within the outgoing administration’s existing executive orders. In contrast, the platform supports AI development aimed at fostering free speech and “human flourishing,” calling for policies that enable AI innovation while opposing measures perceived to hinder technological progress.

Early indications based on appointments to major government positions underscore this direction. However, a larger story is unfolding: the resolution of the intense debate over AI’s future.

An intense debate

Ever since ChatGPT appeared in November 2022, there has been a raging debate between those in the AI field who want to accelerate development and those who want to slow it down.

Famously, in March 2023, the latter group proposed a six-month pause in the development of the most advanced AI systems, warning in an open letter that AI tools present “profound risks to society and humanity.” The letter, spearheaded by the Future of Life Institute, was prompted by OpenAI’s release of the GPT-4 large language model (LLM) a few months after ChatGPT launched.

The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories eventually swelled to more than 33,000. Collectively, they became known as “doomers,” a term capturing their concerns about potential existential risks from AI.

Not everyone agreed. OpenAI CEO Sam Altman did not sign. Neither did Bill Gates and many others. Their reasons varied, although many voiced concerns about potential harm from AI. This led to many conversations about the potential for AI to run amok and cause catastrophe. It became fashionable for many in the AI field to share their assessment of the probability of doom, commonly written as p(doom). Nevertheless, work on AI development did not pause.

For the record, my p(doom) in June 2023 was 5%. That might seem low, but it was not zero. I felt that the leading AI labs were sincere in their efforts to stringently test new models prior to release and to provide significant guardrails for their use.

Many observers concerned about AI dangers have rated existential risks higher than 5%, and some have rated them much higher. AI safety researcher Roman Yampolskiy has rated the probability of AI ending humanity at over 99%. That said, a study released early this year, well before the election and representing the views of more than 2,700 AI researchers, showed that “the median prediction for very bad outcomes, such as human extinction, was 5%.” Would you board a plane if there were a 5% chance it might crash? This is the dilemma AI researchers and policymakers face.
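
The gap between individual estimates and that 5% headline figure is easier to see with a toy calculation. The sketch below uses made-up numbers, not the actual survey responses, to show why the median is the natural summary for such a survey: a single researcher at 99% barely moves it, even as that outlier drags the mean sharply upward.

# A minimal sketch with hypothetical p(doom) estimates (not real survey data),
# illustrating that a median is robust to one extreme outlier while a mean is not.
from statistics import mean, median

# Ten hypothetical researcher estimates, as fractions; one extreme outlier at 99%.
estimates = [0.01, 0.02, 0.05, 0.05, 0.05, 0.05, 0.07, 0.10, 0.20, 0.99]

print(f"median p(doom): {median(estimates):.0%}")  # 5%, essentially unmoved by the outlier
print(f"mean p(doom):   {mean(estimates):.0%}")    # 16%, pulled up by the single 99% estimate

On these invented numbers, the median stays at 5% while the mean climbs to 16%, which is how a 5% median can coexist with individual estimates approaching certainty.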

Must go faster

Others have been openly dismissive of worries about AI, pointing instead to what they perceive as the technology’s enormous upside. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of “The Master Algorithm”). They argued that AI is part of the solution. As Ng has put it, there are indeed existential dangers, such as climate change and future pandemics, and AI can be part of how these are addressed and mitigated.

Ng argued that AI development should not be paused but should instead go faster. This utopian view of technology has been echoed by others collectively known as “effective accelerationists,” or “e/acc” for short. They argue that technology, and especially AI, is not the problem but the solution to most, if not all, of the world’s problems. Garry Tan, CEO of the startup accelerator Y Combinator, along with other prominent Silicon Valley leaders, added “e/acc” to their usernames on X to show alignment with the vision. New York Times reporter Kevin Roose captured the essence of the accelerationists, saying they take an “all-gas, no-brakes approach.”

A Substack newsletter from a couple of years ago described the principles underlying effective accelerationism, closing with a summation of the movement’s views alongside a comment from OpenAI CEO Sam Altman.

AI acceleration ahead

The 2024 election result may be seen as a turning point, putting the accelerationist vision in a position to shape U.S. AI policy for the next several years. For example, the President-elect recently appointed technology entrepreneur and venture capitalist David Sacks as “AI czar.”

Sacks, a vocal critic of AI regulation and a proponent of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist viewpoints expressed in the incoming party’s platform.

In response to the Biden administration’s 2023 executive order on AI, Sacks tweeted: “The U.S. political and fiscal situation is hopelessly broken, but we have one unparalleled asset as a country: cutting-edge innovation in AI driven by a completely free and unregulated market for software development. That just ended.” While the amount of influence Sacks will have on AI policy remains to be seen, his appointment signals a shift toward policies favoring industry self-regulation and rapid innovation.

Elections have consequences

I doubt most of the voting public gave much thought to AI policy implications when casting their votes. Nevertheless, in a very tangible way, the accelerationists have won as a consequence of the election, potentially sidelining those advocating for a more cautious federal approach to mitigating AI’s long-term risks.

As accelerationists chart the path forward, the stakes could not be higher. Whether this era ushers in unparalleled progress or unintended catastrophe remains to be seen. As AI development accelerates, the need for informed public discourse and vigilant oversight becomes ever more pressing. How we navigate this era will define not only technological progress but also our collective future.

As a counterbalance to inaction at the federal level, it is possible that individual states will adopt their own regulations, as has already happened to some extent in California and Colorado. For instance, California’s AI safety bills focus on transparency requirements, while Colorado’s law addresses AI discrimination in hiring practices, offering models for state-level governance. Now, all eyes will be on the voluntary testing and self-imposed guardrails at Anthropic, Google, OpenAI and other AI model developers.

In summary, the accelerationist victory means fewer restrictions on AI innovation. The increased speed may indeed lead to faster innovation, but it also raises the risk of unintended consequences. I’m now revising my p(doom) to 10%. What’s yours?

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.


Source link