AI was a major theme at Davos 2024. As reported by Fortune, more than two dozen sessions at the event focused directly on AI, covering everything from AI in education to AI regulation.
A who’s who of AI was in attendance, including OpenAI CEO Sam Altman, Inflection AI CEO Mustafa Suleyman, AI pioneer Andrew Ng, Meta chief AI scientist Yann LeCun, Cohere CEO Aidan Gomez and many others.
Shifting from wonder to pragmatism
Whereas at Davos 2023, the conversation was full of speculation based on the then-fresh release of ChatGPT, this year was more tempered.
“Last year, the conversation was ‘Gee whiz,’” Chris Padilla, IBM’s VP of government and regulatory affairs, said in an interview with The Washington Post. “Now, it’s ‘What are the risks? What do we have to do to make AI trustworthy?’”
Among the concerns discussed in Davos were turbocharged misinformation, job displacement and a widening economic gap between wealthy and poor nations.
Perhaps the most discussed AI risk at Davos was the threat of wholesale misinformation and disinformation, often in the form of deepfake photos, videos and voice clones that could further muddy reality and undermine trust. A recent example was robocalls that went out before the New Hampshire presidential primary election using a voice clone impersonating President Joe Biden in an apparent attempt to suppress votes.
AI-enabled deepfakes can create and spread false information by making someone seem to say something they did not. In one interview, Carnegie Mellon University professor Kathleen Carley said: “This is kind of just the tip of the iceberg in what could be done with respect to voter suppression or attacks on election workers.”
Enterprise AI consultant Reuven Cohen also recently told VentureBeat that with new AI tools we should expect a flood of deepfake audio, images and video just in time for the 2024 election.
Despite considerable effort, no foolproof method for detecting deepfakes has been found. As Jeremy Kahn observed in a Fortune article: “We better find a solution soon. Distrust is insidious and corrosive to democracy and society.”
AI mood swing
This mood swing from 2023 to 2024 led Suleyman to write in Foreign Affairs that a “cold war strategy” is needed to contain threats made possible by the proliferation of AI. He said that foundational technologies such as AI always become cheaper and easier to use and permeate all levels of society, lending themselves to all manner of positive and harmful uses.
“When hostile governments, fringe political parties and lone actors can create and broadcast material that is indistinguishable from reality, they will be able to sow chaos, and the verification tools designed to stop them may well be outpaced by the generative systems.”
Concerns about AI date back decades, initially and best popularized in the 1968 movie “2001: A Space Odyssey.” There has since been a steady stream of worries and concerns, including over the Furby, a wildly popular cyber pet in the late 1990s. The Washington Post reported in 1999 that the National Security Agency (NSA) banned the toy from its premises over concerns that it could serve as a listening device that might divulge national security information. Recently released NSA documents from this period discussed the toy’s ability to “learn” using an “artificial intelligent chip onboard.”
Contemplating AI’s future trajectory
Worries about AI have recently become acute as more AI experts claim that Artificial General Intelligence (AGI) could be achieved soon. While the exact definition of AGI remains vague, it is thought to be the point at which AI becomes smarter and more capable than a college-educated human across a broad spectrum of activities.
Altman has said that he believes AGI might not be far from becoming a reality and could be developed in the “reasonably close-ish future.” Gomez reinforced this view: “I think we will have that technology quite soon.”
Not everyone agrees on an aggressive AGI timeline, however. For example, LeCun is skeptical about an imminent AGI arrival. He recently told Spanish outlet EL PAÍS that “Human-level AI is not just around the corner. This is going to take a long time. And it’s going to require new scientific breakthroughs that we don’t know of yet.”
Public perception and the path forward
Uncertainty about the future course of AI technology remains. In the 2024 Edelman Trust Barometer, which launched at Davos, global respondents were split between rejecting (35%) and accepting (30%) AI. People recognize the impressive potential of AI, but also its attendant risks. According to the report, people are more likely to embrace AI — and other innovations — if it is vetted by scientists and ethicists, if they feel they have control over how it affects their lives and if they believe it will bring them a better future.
It is tempting to rush towards solutions to “contain” the technology, as Suleyman suggests, although it is useful to recall Amara’s Law as defined by Roy Amara, past president of The Institute for the Future. He said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
While enormous amounts of experimentation and early adoption are now underway, widespread success is not assured. As Rumman Chowdhury, CEO and cofounder of AI-testing nonprofit Humane Intelligence, stated: “We will hit the trough of disillusionment in 2024. We’re going to realize that this actually isn’t this earth-shattering technology that we’ve been made to believe it is.”
2024 may be the year that we find out how earth-shattering it is. In the meantime, most people and companies are learning about how best to harness generative AI for personal or business benefit.
Accenture CEO Julie Sweet said in an interview: “We’re still in a land where everyone’s super excited about the tech and not connecting to the value.” The consulting firm is now conducting workshops for C-suite leaders to learn about the technology as a critical step towards achieving its potential and moving from use case to value.
Thus, both the benefits and the most harmful impacts of AI (and AGI) may be imminent, but not necessarily immediate. In navigating the intricate landscape of AI, we stand at a crossroads where prudent stewardship and innovative spirit can steer us towards a future where AI technology amplifies human potential without sacrificing our collective integrity and values. It is for us to harness our collective courage to envision and design a future where AI serves humanity, not the other way around.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.