China Banned It, and Europe Fears It — Why Is the World Worried About ChatGPT?

“Until now, artificial intelligence could read and write, but could not understand the content. We find ourselves with a tool that can make even white collar type jobs far more efficient.” This is how American billionaire Bill Gates responded when asked about the impact of the technology on the global economy and the labor market over the next ten years, adding that this technology will “change the world.”
But what was interesting was neither the question nor the answer, but the questioner: not a journalist or researcher, as many might expect, but the famous ChatGPT chatbot, which international technology experts consider a “milestone” in the development of artificial intelligence.
The application behind this boom, the fastest-growing consumer app in history, has raised many political and economic concerns, to the extent that some believe it may threaten human existence.
Scary Boom
ChatGPT interviewed Bill Gates on February 18, 2023, alongside British Prime Minister Rishi Sunak at Imperial College London.
The chatbot asked distinct questions to Gates and Sunak, some of which were related to their private lives and others to their work and positions.
It also asked them which parts of their jobs they wished the application could do on their behalf. Gates said he would like to use it to make his notes smarter and to transform and develop images, because he is not good at drawing; he also noted that he already uses it to write some poems and songs.
As for Sunak, he wanted artificial intelligence to take over the routine questions he faces each week as prime minister, which he described as great.
Business Today said that this event demonstrated the ability of machine learning algorithms to engage in intelligent conversations with humans.
The artificial intelligence platform used advanced natural language processing algorithms to generate questions for the two leaders based on their previous speeches, interviews, and public statements.
The application was developed by the American company OpenAI and trained on trillions of words from the internet to answer all kinds of questions in natural language.
It can also translate between languages and generate coherent improvised text, as well as craft poetry, program code, or critical essays if asked to do so.
The application has attracted a lot of attention since its launch in November 2022, and Microsoft soon announced its desire to invest in this chatbot and then add it to its internet browser.
ChatGPT has grown rapidly over the last three months and has answered many questions naturally, regardless of their complexity. Many developers and programmers around the world have begun using it extensively to debug and correct their code.
ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study on Wednesday.
The report, citing data from analytics firm SimilarWeb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December.
“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts wrote in the note.
It took TikTok about nine months after its global launch to reach 100 million users and Instagram 2-1/2 years, according to data from Sensor Tower.
Big Concerns
On February 25, the Chinese government blocked its citizens’ access to the application for what it considered the spreading of “misinformation” that reflects the American view of the world.
Reuters reported that the Chinese authorities had instructed major Chinese technology companies, including Tencent, owner of the popular WeChat application, to cut off access to ChatGPT.
The Chinese government also confirmed that technology companies wanting to develop artificial intelligence applications must consult government agencies before launching any new application.
The move comes in light of growing concerns in China about some responses that the government cannot censor, particularly user questions related to human rights abuses, such as those related to Uyghur Muslims.
In 2019, the Chinese government passed legislation requiring companies to submit AI technologies to the government for approval before selling or exporting them.
In general, artificial intelligence applications and bots rely on processing that produces responses to user inquiries and supplies information according to the data they were trained on.
This means that the responses the app generates may reflect biases found in the source material, according to Reuters.
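The principle can be illustrated with a toy model. The sketch below is hypothetical and vastly simpler than a real chatbot, but the mechanism is the same: it picks completions purely by frequency in its training corpus, so any skew in the data shows up directly in the output.

```python
from collections import Counter

# A deliberately skewed toy "training corpus."
corpus = [
    "the economy is strong",
    "the economy is strong",
    "the economy is weak",
]

def most_likely_completion(prompt, corpus):
    """Return the word that most often follows `prompt` in the corpus."""
    counts = Counter()
    prompt_words = prompt.split()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - len(prompt_words)):
            if words[i:i + len(prompt_words)] == prompt_words:
                counts[words[i + len(prompt_words)]] += 1
    return counts.most_common(1)[0][0] if counts else None

# The model answers "strong" because that is what its data says most often,
# not because it has verified anything about the economy.
print(most_likely_completion("the economy is", corpus))  # strong
```

A real large language model learns far subtler statistics, but it inherits the tendencies of its source material in the same way.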
Two weeks before the Chinese move, the European Union announced that it was drafting new rules on artificial intelligence aimed at addressing concerns about the risks of ChatGPT and this technology in general.
On February 3, EU Commissioner for Industry Thierry Breton said that the risks posed by the application and AI systems underscored the urgent need to set a global benchmark for the technology, and these rules are currently under discussion in Brussels.
ChatGPT has shown that AI solutions can bring great opportunities to businesses and citizens but can also pose risks. “This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data,” he told Reuters in written comments.
Regarding the most prominent of the new rules being studied, Breton explained that “People would need to be informed that they are dealing with a chatbot and not with a human being. Transparency is also important with regard to the risk of bias and false information.”
While AI advocates describe the app as a technological breakthrough, experts and analysts raise concerns that the systems behind such apps could be misused for plagiarism, fraud, and the spread of misinformation.
Experts believe that artificial intelligence is progressing toward the ability to redesign itself at an accelerating rate, which may lead to an unstoppable “intelligence explosion” and to machines controlling humans.
Elon Musk, who co-founded the firm behind ChatGPT, warns that AI is “one of the biggest risks to the future of civilization,” noting that his competitor, Microsoft, took advantage of this and invested billions of dollars.
Reuters said Microsoft declined to comment on Breton’s statement, and OpenAI did not respond to a request for comment, though the company states on its website that it aims to produce “safe” artificial intelligence that benefits all humanity.
A High Price
A Resume Builder survey of about 1,000 business leaders found that ChatGPT has become a major threat to the jobs of millions of people.
By early February, more than half of the surveyed US companies were using ChatGPT in parts of their business.
The survey added that nearly half of these companies had already replaced some of their employees with the app’s services.
The companies using the application said they saved money through artificial intelligence: 4% of them saved more than $50,000, and 11% saved more than $100,000.
Stacie Haller, senior career adviser at Resume Builder, said that as this new technology grows, workers will certainly need to think about how it may affect their job responsibilities.
Most companies use the application to write code and advertising copy, create content, handle customer support, and prepare meeting summaries, according to her.
Paradoxically, job seekers also use it to write resumes and cover letters.
Most users were satisfied with the results, with 76% saying the quality of the material written by the app was “high or very high,” according to the study.
At the end of January 2023, the University of California, Riverside, published an article about the dangers ChatGPT poses to education, revealing that four cases had already been reported in which artificial intelligence was used in research papers.
In the article, Iqbal Pittalwala, a senior public information officer at the university, said that ChatGPT, as a large language model, can generate human-like text based on a given context and can perform most text-generation tasks that involve natural language communication.
However, Pittalwala said, “It will be impractical to try to ban or prevent the use of ChatGPT. AI tools are here to stay. They will improve and become increasingly important across disciplines. In the long run, departments may therefore need to re-evaluate their teaching mission and ask themselves, if a chatbot can do most of what a college graduate can do, then what is the value of a degree?”
If students can avoid real learning and still earn high homework grades using the model, teachers will need either to conduct their assessments in a way that detects use of the model or to find ways of incorporating the app into helping students build new skills, he added.
OpenAI took a step in that direction in early February, announcing a free tool that it says is meant to “distinguish between text written by a human and text written by AIs.” In its press release, the company warns that the classifier is “not fully reliable” and “should not be used as a primary decision-making tool,” but says it can be useful in trying to determine whether someone is trying to pass off generated text as something written by a person.
“The tool, known as a classifier, is relatively simple, though you will have to have a free OpenAI account to use it. You just paste text into a box, click a button, and it’ll tell you whether it thinks the text is very unlikely, unlikely, unclear if it is, possibly, or likely AI-generated,” according to The Verge.
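Functionally, the last step of such a classifier boils down to mapping a model’s confidence score onto the five labels The Verge lists. The sketch below illustrates only that final mapping; the thresholds are invented for illustration, as OpenAI has not published its actual cutoffs.

```python
def label_text(ai_probability):
    """Map an estimated probability that a text is AI-generated onto
    the five labels The Verge describes. Thresholds are hypothetical;
    OpenAI has not published its real cutoffs."""
    if ai_probability < 0.10:
        return "very unlikely"
    if ai_probability < 0.45:
        return "unlikely"
    if ai_probability < 0.65:
        return "unclear if it is"
    if ai_probability < 0.90:
        return "possibly"
    return "likely AI-generated"

print(label_text(0.95))  # likely AI-generated
print(label_text(0.05))  # very unlikely
```

The hard part, of course, is producing the probability itself, and OpenAI’s own caveats suggest that estimate is where the tool’s unreliability lies.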