Why Is the U.N. Proposing the Creation of an International Agency to Regulate A.I.?

Nuha Yousef | 2 years ago

Artificial intelligence poses an existential threat to humanity comparable to the risk of nuclear war, United Nations Secretary-General António Guterres warned, urging the world to heed the advice of the scientists and experts who have sounded the alarm.

In a press conference, Guterres said that the potential benefits of artificial intelligence were enormous, but so were the dangers of losing human control over it.

The U.N. Secretary-General called for a clear and coordinated global response to regulate the use and development of this powerful technology.

He endorsed the idea of creating an international agency on artificial intelligence, similar to the International Atomic Energy Agency, which monitors and inspects nuclear activities around the world.

Guterres said that the pace of technological innovation was staggering, and so were the threats that came with it. He singled out generative artificial intelligence, a form of machine learning that can create realistic content such as text, images, and audio, as a source of grave concern.

Guterres also addressed the issue of hate speech on social media platforms and how to combat it. He said that the world had to confront the “serious international harm” caused by the proliferation of hate and lies across the digital space.

According to Guterres, digital technologies were being exploited to undermine science, spread disinformation and hatred to billions of people, fuel conflict, threaten democracy and human rights, harm public health, and sabotage climate action.

“To tackle this immediate and evident international threat, we need concerted global action to make the digital space safer and more inclusive and to protect human rights,” he said.

 

The “Godfather” of A.I.

Geoffrey Hinton, often called the “Godfather” of artificial intelligence, developed artificial neural networks that revolutionized modern technology. His work is considered a cornerstone for building machines that resemble the human brain and that may one day operate independently of their programmers.

Despite the wide fame Hinton has achieved in recent years, he announced his resignation from Google in 2023 and expressed partial regret for his work in artificial intelligence, warning that its dangers could eventually slip beyond human control.

Hinton is regarded as one of the most prominent figures in deep learning, a research field concerned with theories and algorithms that allow machines to learn on their own using networks loosely modeled on the neurons of the human brain.

His pioneering research on neural networks and deep learning paved the way for current artificial intelligence systems such as ChatGPT, and for these contributions he received the 2018 Turing Award.

He has contributed to dozens of research papers in deep learning and artificial intelligence, focusing on methods of using neural networks for machine learning, memory, perception, and symbol processing.

At the Neural Information Processing Systems conference (NeurIPS) in 2022, Hinton presented a new learning algorithm for neural networks, which he called the Forward-Forward algorithm and which was seen as a notable innovation in the field: it replaces backpropagation with two forward passes, one over real data and one over fabricated data, with each layer adjusting its own weights locally.
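
As an illustration of that idea, here is a minimal toy sketch of a Forward-Forward style layer in Python (hypothetical code for illustration only, not Hinton’s own implementation; the layer sizes, learning rate, threshold, and random stand-in data are arbitrary assumptions). Each layer nudges its weights so that a local “goodness” score, the sum of its squared activations, is high for positive examples and low for negative ones, and no error signal flows back to earlier layers.

```python
import numpy as np

# Toy sketch of the Forward-Forward idea (illustrative only, not a
# reference implementation). Each layer is trained locally: its
# "goodness" (sum of squared activations) should be high for positive
# (real) data and low for negative (fabricated) data.

rng = np.random.default_rng(0)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.b = np.zeros(n_out)
        self.lr, self.threshold = lr, threshold

    def _normalize(self, x):
        # Normalize inputs so a layer cannot simply pass on the
        # goodness of the layer below it.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(0.0, self._normalize(x) @ self.W + self.b)

    def train_step(self, x, positive):
        x_norm = self._normalize(x)
        h = np.maximum(0.0, x_norm @ self.W + self.b)
        goodness = (h ** 2).sum(axis=1)
        sign = 1.0 if positive else -1.0
        # Logistic loss pushes goodness above the threshold for positive
        # data and below it for negative data.
        p = 1.0 / (1.0 + np.exp(-sign * (goodness - self.threshold)))
        d_goodness = -sign * (1.0 - p)        # dLoss/dGoodness per example
        dz = 2.0 * h * d_goodness[:, None]    # local gradient, no backprop to earlier layers
        self.W -= self.lr * x_norm.T @ dz / len(x)
        self.b -= self.lr * dz.mean(axis=0)
        return h

# Greedy, layer-by-layer training on stand-in batches.
layers = [FFLayer(784, 128), FFLayer(128, 128)]
x_pos = rng.normal(size=(32, 784))   # placeholder for real examples
x_neg = rng.normal(size=(32, 784))   # placeholder for fabricated examples
for layer in layers:
    for _ in range(100):             # train this layer only
        layer.train_step(x_pos, positive=True)
        layer.train_step(x_neg, positive=False)
    x_pos, x_neg = layer.forward(x_pos), layer.forward(x_neg)
```

In Hinton’s paper the negative examples come from corrupted or mislabeled versions of the real data; the random batches above are placeholders only.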

Despite being the “Godfather” of artificial intelligence, Hinton has warned of its dangers and even expressed regret about his work in the field. That regret pushed him to resign from Google in May 2023; he justified the decision by saying that he wanted to “talk about the dangers of A.I. without considering how this impacts Google.”

Indeed, after his resignation, Hinton began speaking publicly about the risks of artificial intelligence, including technological unemployment and the deliberate misuse of the technology by parties he described as malicious.

In a television interview, Hinton said that artificial intelligence may soon surpass the informational capacity of the human brain and described some of the dangers posed by A.I. chatbots as “extremely harmful.”

Hinton explained that chatbots can learn independently and share knowledge, meaning that whenever one copy acquires new information, it is instantly shared with every other copy.

This allows A.I.-powered chatbots to accumulate knowledge far beyond what any single human could. Hinton went further, saying he does not rule out that artificial intelligence could “wipe out humanity,” as The Guardian titled one of its reports on the subject.

He added that despite the great benefits of artificial intelligence systems in all fields, including military and economic ones, he is concerned that these systems may set themselves sub-goals that do not align with the interests of their programmers.

The biggest concern Hinton expressed was the catastrophic misuse of artificial intelligence systems by “malicious” parties who, he said, could exploit the technology for harmful purposes.

Hinton has been one of the main advocates for banning lethal autonomous weapons since 2017. As for the economic impact of artificial intelligence, he was optimistic in the past, saying back in 2018 that it would never replace humans.

By 2023, his view had changed, and he had become concerned about its negative impact on the labor market.

 

The Guardian

David Evan Harris, a senior adviser on A.I. ethics at the Psychology of Technology Institute, offered his own perspective on the dangers of artificial intelligence in an op-ed for The Guardian.

From 2018 to 2020, he was part of Facebook’s civic integrity team, where he fought against online interference in democracy from various sources, witnessing how dictators around the world used fake accounts to manipulate public opinion, harass dissidents and incite violence.

Now he is sounding the alarm about a new threat: large language models (LLMs), powerful A.I. systems that can generate realistic text on almost any topic.

He warns that these tools could be used by malicious actors to create more convincing and widespread disinformation campaigns, especially ahead of the 2024 U.S. presidential election.

Harris cites the example of LLaMA, an open-source LLM developed by Meta and later built upon by researchers at Stanford University.

“Meta’s LLaMA can be run by anyone with sufficient computer hardware to support them—the latest offspring can be used on commercially available laptops. This gives anyone—from unscrupulous political consultancies to Vladimir Putin’s well-resourced GRU intelligence agency—freedom to run the A.I. without any safety systems in place,” Harris wrote.

He also says that LLaMA could be used to generate scripts for deepfakes, videos that show people saying or doing things they never did. He is particularly concerned about Meta’s platforms (Facebook, Instagram, and WhatsApp), which he says will be among the biggest battlegrounds for these “influence operations.”

But his worries go beyond the erosion of democracy. After leaving the civic integrity team, he managed research teams working on responsible A.I. at Meta, where he documented the potential harms of A.I. and sought ways to make it safer and fairer for society.

He saw how Meta’s own A.I. systems could facilitate housing discrimination, make racist associations and exclude women from seeing job listings visible to men.

“The scary part, though, is that these incidents were the unintended consequences of implementing A.I. systems at scale. When A.I. is in the hands of people who are deliberately and maliciously abusing it, the risks of misalignment increase exponentially, compounded even further as the capabilities of A.I. increase,” Harris noted.

He calls for a pause in the release of open-source LLMs, which he says would give governments time to put critical regulations in place. In this he echoes tech leaders such as Sam Altman, Sundar Pichai, and Elon Musk, who have also advocated for more oversight of A.I.

Tech companies need to put stronger controls on who qualifies as a “researcher” for special access to these potentially dangerous tools, he concluded.