The First of Its Kind: Will the White House Regulate the Development of Artificial Intelligence?

Nuha Yousef

Nearly a decade ago, Google faced a backlash when its Photos app automatically labeled images of Black people as “gorillas.”

The episode highlighted how artificial intelligence algorithms, designed and coded by humans with their own biases and prejudices, could produce offensive or inaccurate outcomes.

Since then, as A.I. systems have become more advanced and pervasive, the concerns about their potential harms have also grown.

Experts have warned about the loss of jobs to automation, the invasion of privacy by data-hungry algorithms, and the manipulation of users by targeted ads.

New Risks

But now a new generation of powerful A.I. tools that can generate realistic text and images poses even greater dangers, according to Geoffrey Hinton, a pioneer of deep learning and a former Google researcher. He recently resigned from the company so that he could speak freely about the risks of A.I.

Hinton said that the current threat is the proliferation of misinformation that these tools can create. He explained that these systems rely on data from the internet to learn how to produce convincing content, but the internet is full of lies and propaganda. As a result, these tools can spread falsehoods in a way that is hard to distinguish from reality, leaving users confused and misinformed.

But the future risks could be even worse, he added. He predicted that the competition between companies and countries to develop smarter machines could lead to an existential threat to human civilization if A.I. surpasses human intelligence.

A study by the Center for Socially Responsible Artificial Intelligence at Pennsylvania State University found that the media coverage of A.I. has often been exaggerated and sensationalized, creating a distorted image of the technology among the public.

S. Shyam Sundar, the director of the center and a professor of communications, said that his research has shown that when users interact with a machine rather than a human, they tend to apply a mental shortcut that he calls a “machine heuristic.”

This means they assume that machines are accurate, objective, unbiased, and infallible, and they trust them too much.

He also said that his studies have shown that people treat computers as social beings when they exhibit some signs of human-like behavior, such as using conversational language.

In these cases, people follow social norms of human interaction, such as politeness and reciprocity. Therefore, when computers seem empathetic, people are more likely to trust them blindly.

Sundar argued that regulation is needed to ensure that A.I. products are worthy of this trust and do not exploit it.

“A.I. poses a unique challenge because, unlike in traditional engineering systems, designers cannot be sure how A.I. systems will behave,” he said.

White House Regulations

As artificial intelligence becomes more powerful and pervasive, President Biden is grappling with how to harness its potential for good while guarding against its dangers.

Last month, he declared that A.I. could help solve some of the world’s most pressing challenges, such as disease and climate change. But he also warned that it could pose serious threats to national security and economic stability.

To address these issues, he convened a meeting earlier this month with some of the top executives in the field, including the chief executives of Google, Microsoft, OpenAI, and Anthropic. They discussed draft legislation to regulate A.I. technologies in the United States, which the administration plans to release soon.

According to people familiar with the meeting, President Biden and Vice President Harris outlined some of the key principles that would guide their approach to A.I. regulation.

They said they wanted to ensure that A.I. is used in a safe, ethical, and responsible manner and that it respects human rights and values.

They also urged the companies to be more transparent and accountable for their A.I. products and practices and to cooperate with the government and other stakeholders in developing standards and norms for A.I.

The meeting was followed by a series of announcements from the administration and Congress, signaling their commitment to advancing “trustworthy” A.I.

The National Science Foundation said it would invest $140 million in seven new A.I. research institutes focused on topics such as health care, agriculture, and education. The investment brings the total funding for A.I. institutes under the Biden administration to nearly $500 million.

The Federal Trade Commission said it would use its existing authority to monitor A.I. and to act against the risks it poses, such as discrimination, deception, and unfairness.

The agency’s chair, Lina Khan, has warned that there is no A.I. exemption from existing laws. Separately, the Office of Management and Budget said it would issue draft guidance for federal agencies on how to use A.I. systems in a lawful and ethical way.

And Senator Michael Bennet introduced a bill that would create a task force to review the U.S. policies on A.I. and recommend ways to protect privacy, civil liberties, and due process.

White House press secretary Karine Jean-Pierre described the meeting with the CEOs as “honest” and “frank.” She said in a statement that the administration recognized the need for “a balanced approach” to A.I. regulation that promotes innovation while safeguarding public interest.

Global Warning

Lawmakers and regulators around the world are likewise grappling with how to oversee and control increasingly powerful A.I. technologies. Several countries and organizations have recently taken steps to draft or adopt legislation that could set precedents for the global governance of A.I.

In the European Union, members of the European Parliament have reached a preliminary agreement on a bill that would establish the world’s first comprehensive rules for A.I., according to The New York Times. The bill would aim to protect fundamental rights and values, such as privacy and human dignity, while fostering innovation and competitiveness.

The European Data Protection Board, which coordinates the bloc’s national privacy authorities, has also formed a task force on ChatGPT, the popular and controversial A.I. system known for its ability to generate realistic text on a wide range of topics.

In the United Kingdom, the Competition and Markets Authority has announced a “fact-finding” review of the A.I. market to assess its impact on consumers, businesses, and the economy, and to determine whether new regulations or principles are needed. The authority said in a statement that it would examine how A.I. affects competition, pricing, quality, choice, and innovation.

The Group of 7 (G7) nations have also expressed their intention to adopt a risk-based approach to regulating A.I., following a meeting of their digital ministers in Japan in late April. They agreed to promote “human-centric” A.I. that respects human rights and democratic values and to cooperate on developing common standards and best practices.

Other countries that have voiced similar concerns or taken action include Australia, France, Ireland, Spain, and Italy. Italy initially banned ChatGPT outright but lifted the restriction after OpenAI added privacy disclosures and age verification for users. China, despite its record of using A.I. for surveillance and censorship, has also joined the list of countries seeking to regulate the technology.

However, many experts doubt the feasibility and effectiveness of regulating A.I., given its complexity and rapid development.

Some tech industry leaders, such as Elon Musk, have argued that A.I. is too advanced for current laws and that even its creators do not fully understand what they are making.

Some specialists also fear that regulation could stifle innovation and creativity by imposing outdated or rigid rules on a dynamic and diverse field. They propose instead moving away from old, established practices and norms and fostering a culture of experimentation and exploration.