Will Google and ChatGPT End Soon?

Nuha Yousef | 2 years ago


In a scathing internal report leaked online this month, a senior Google engineer accused his company of losing sight of its true rival in the race to develop cutting-edge artificial intelligence: the open-source community.

Luke Sernau, a senior software engineer at Google, wrote last month that while Google and OpenAI, a research lab backed by some of the biggest names in tech, were locked in a bitter feud over generative A.I. models, a third force was quietly gaining ground and threatening to outpace them both: the collective efforts of independent developers who share their code and data freely.

The report, which was published on the blog of SemiAnalysis, a consulting firm, and later covered by Bloomberg, sparked a fierce debate in Silicon Valley about the ethics and economics of making powerful A.I. tools available to anyone who can access them.

It also raised a provocative question: Is there a new and formidable challenger in the A.I. war that has been raging for years?

 

Defense Lines

In early May, a leaked internal report by a Google engineer revealed the company’s growing anxiety over competition in generative A.I., not only from OpenAI, the rival research lab that has challenged its dominance in the field, but also from the fast-moving open-source community.

Generative A.I. refers to the ability of machines to create text, images, sounds, and other content from data and algorithms. Among the most advanced examples of this technology are OpenAI’s GPT series of large language models, which can generate coherent and diverse text on almost any topic.

The report, titled “We Have No Moat, And Neither Does OpenAI,” uses a medieval metaphor to describe the competitive landscape of generative A.I.

A moat, in this context, is a unique advantage that gives a company an edge over its competitors and protects it from potential threats.

The report argues that Google no longer has such a moat in generative A.I., and that OpenAI poses a serious threat to its position as the leader in the field.

It identifies three possible sources of moat for generative A.I.: the data used to train the models, the size and complexity of the models, and the cost and resources required to train and run them.

The report acknowledges that Google has an advantage in the third source, as it can afford to invest millions of dollars in developing and operating huge language models that require thousands of powerful computers.

However, it also notes that this advantage is not sustainable, as other players, such as OpenAI, can also raise funds from investors or customers to support their research and development.

The report suggests that the first and second sources of moat are more elusive and fragile, as they depend on secrecy and exclusivity.

The report claims that both Google and OpenAI have been reluctant to share their data and models with the public or other researchers in order to maintain their competitive edge.

However, it also warns that this strategy may backfire, as it could lead to legal or ethical challenges or inspire others to replicate or surpass their achievements.

The report concludes that generative A.I. is a rapidly evolving and highly competitive field where no single company can claim a lasting advantage or monopoly.

It calls for more openness and collaboration among researchers and developers, as well as more regulation and oversight from governments and society.

 

Open-Source Forms

The report also points to a machine-learning technique that could democratize the field of artificial intelligence and challenge the dominance of tech giants.

It describes a technique called Low-Rank Adaptation (LoRA), which allows researchers to customize and fine-tune a language model without requiring powerful and costly hardware or massive amounts of data.

The technique can achieve comparable results to the state-of-the-art models developed by Google and OpenAI, but at a fraction of the time and cost.

Language models are computer programs that can generate natural language texts, such as summaries, translations, captions, and conversations. They are trained on large corpora of text data, such as books, articles, and social media posts.

The more data they consume, the better they perform. However, this also means that they become very large and complex, requiring millions of dollars and months of training to build and run.

LoRA is a method for adapting an existing language model to a new task or domain with limited data. Instead of updating the model’s full weight matrices, it freezes the original weights and learns a small additive correction, expressed as the product of two much smaller low-rank matrices.

Because only those two small matrices are trained, researchers can modify the model’s behavior without retraining it from scratch, cutting the hardware and data requirements dramatically.
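The idea can be sketched in a few lines of NumPy. The dimensions, rank, and variable names below are illustrative, not taken from the report; real LoRA implementations apply this to specific layers of a transformer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix (d_out x d_in); never updated during fine-tuning.
d_out, d_in, rank = 512, 512, 8
W = rng.standard_normal((d_out, d_in))

# LoRA factors: the update is B @ A, a matrix of the same shape as W but with
# far fewer free parameters. B starts at zero, so the adapted model initially
# behaves exactly like the original.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def forward(x, B, A):
    """Adapted layer: original projection plus the low-rank correction B @ A."""
    return W @ x + B @ (A @ x)

# Only A and B are trained; compare their size to the full matrix.
full_params = d_out * d_in            # 262,144 weights
lora_params = rank * (d_in + d_out)   # 8,192 weights, roughly 3 percent
print(lora_params / full_params)
```

During fine-tuning, gradient updates touch only `A` and `B`; the frozen `W` can even be shared across many separately adapted copies of the model, which is what makes cheap, widely distributed customization possible.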

The report’s author, Luke Sernau, claims that LoRA can close the gap between the colossal models owned by big companies and the simpler models available to the public.

He argues that for practical applications of generative A.I., such as customer service bots, document summarization, machine translation, and other simple tasks, Google’s and OpenAI’s models do not have a significant competitive edge over the open-source alternatives, especially when LoRA is applied to them.

The report suggests that LoRA could level the playing field in A.I. research and development, enabling smaller players and individuals to access and benefit from advanced language models without compromising on quality.

 

Market Shift

The report argues that the future of A.I. lies not in scaling up massive models that require enormous computing resources, but in finding simpler, cheaper, and more flexible ways to customize and adapt them.

One example of this trend is the emergence of open-source models for generating realistic images with A.I., such as Stable Diffusion, which the startup Stability A.I. released for free in August 2022.

The report compares Stable Diffusion favorably to DALL·E, the proprietary image model that OpenAI launched in January 2021.

Stable Diffusion has sparked a wave of innovation and experimentation in the field, as anyone can use it to create and edit images with A.I.

The report concludes that the biggest beneficiary of this shift is Meta, the company formerly known as Facebook, whose LLaMA language model leaked online earlier this year and now underlies many of the open-source variants driving the trend.

Meta can leverage the collective efforts of the hundreds of researchers and programmers who improve on its model for free, and incorporate their advances into its own A.I. products.

The report advises Google to follow Meta’s example and embrace open-source A.I. models, as it has done with platforms such as Android and Chrome. By doing so, Google can secure a leading position in the field and help shape its future direction.