According to reporting from Reuters, OpenAI’s ChatGPT is the fastest-growing consumer app in history, reaching an estimated 100 million monthly active users just two months after its November release. Analysts believe the viral launch gives OpenAI a first-mover advantage over other AI companies. The growing usage has also provided valuable feedback for training and improving the chatbot’s responses.
However, a growing number of AI researchers are concerned that the current hype overstates the technology’s capabilities. At the same time, many agree that ChatGPT can be quite useful if its output is reviewed or edited by a human.
1. From Therapy Bots to College Essays
ChatGPT is based on a Generative Pretrained Transformer (GPT) language model that uses deep learning to produce human-like text. Such models are called ‘generative’ because they generate new text conditioned on their input. Transformer-based generative AI is also considered a stepping stone to applications well beyond typical natural language processing tasks such as translation, text summarization and text generation. Uses currently under discussion include new search-engine architectures, explaining complex algorithms, personalized therapy bots, helping to build apps from scratch, explaining scientific concepts and writing college essays, to name just a few.
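The ‘generative’ idea can be illustrated in miniature. The toy sketch below is not the GPT architecture (which uses transformer layers trained on billions of documents); it is a hand-written bigram table run through the same autoregressive loop that GPT-style models use at vastly larger scale: pick the next token conditioned on what came before, append it, and repeat. All names and probabilities here are illustrative.

```python
import random

# Tiny hand-written bigram "language model": possible next words given
# the current word. A GPT model learns such conditional distributions
# with transformer layers instead of a lookup table.
BIGRAMS = {
    "<start>":   ["the"],
    "the":       ["model", "text"],
    "model":     ["generates", "writes"],
    "generates": ["text"],
    "writes":    ["text"],
    "text":      ["<end>"],
}

def generate(seed=0, max_tokens=10):
    """Autoregressive sampling: each token is drawn conditioned on the
    previous one, and the result is fed back in as the new context."""
    rng = random.Random(seed)
    token, out = "<start>", []
    for _ in range(max_tokens):
        token = rng.choice(BIGRAMS[token])
        if token == "<end>":
            break
        out.append(token)
    return " ".join(out)

print(generate())
```

Depending on the sampling path, the output is a short sentence such as “the model generates text” or “the text”; the point is only that generation proceeds one token at a time, each choice conditioned on the growing context.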
Envisioning a new mode of human-machine cooperation, some researchers argue that generative AI will also support the creative process of artists and designers: existing tasks will be augmented by generative systems, speeding up the ideation and creation phase. Beyond this, generative models may also help solve complex problems in computer engineering.
Microsoft-owned GitHub’s Copilot, for example, suggests code and helps developers autocomplete their programming tasks. The system has been reported to autocomplete up to 40% of developers’ code, improving workflow and reducing the associated cost of coding.
2. The Battle of Two Tech Giants
ChatGPT is a potential threat to Google’s search-engine business because it could affect Google’s primary revenue stream. According to the analytics company StatCounter, Google currently holds a 92.5% worldwide share of search, compared to Microsoft Bing’s 3%. The New York Times reported that ChatGPT’s release prompted a ‘code red’ from Google’s management because of its potential to upend the decades-old, ad-sponsored search-engine business. As a result, a flood of new transformer-enabled tools is anticipated.
For example, Google is expected to announce a new text-to-image tool called ‘Muse.’ “We consider Muse’s decoding process analogous to the process of painting: the artist starts with a sketch of the key region, then progressively fills in the color, and refines the result by tweaking the details,” a research scientist at Google said. Muse will compete head-on with OpenAI’s highly successful DALL·E, a 12-billion-parameter version of GPT-3 trained to generate images from text. In addition, Google is working on a ChatGPT competitor called ‘Bard,’ to be released soon, according to a blog post by CEO Sundar Pichai.
In late January 2023, Microsoft announced a new multiyear, multibillion-dollar investment in OpenAI. The investment is the third phase of the partnership, following Microsoft’s previous investments in 2019 and 2021.
Microsoft declined to provide a specific dollar amount, but the news outlet Semafor reported earlier this month that Microsoft was in talks to invest as much as $10 billion. In a press release, Microsoft said the renewed partnership will accelerate breakthroughs in AI and help both companies commercialize advanced technologies. Separately, OpenAI announced a $20-per-month subscription service, initially for users in the United States only. A spokesman said it should provide faster and more stable service along with the opportunity to try new features first.
At the World Economic Forum (WEF) in Davos, Microsoft’s CEO Satya Nadella made the point that a new generation of AI platforms with enormous business potential is emerging, providing services for search engines, social networks, and digital clouds.
The wealth generated by businesses that know how to make the most of these technologies will have a cascading effect.
Reinforcing this point, Nicole Sahin, CEO of a global recruitment company, noted that instead of employing five software engineers to write code, it might soon take only one engineer to review what an AI tool suggests. Other consequences are likely, yet their effects remain unpredictable: transformer technology is advancing rapidly, and competition among the tech giants is intensifying with no end in sight.
4. Is ChatGPT Overhyped?
In a recent article on ZDNET, Yann LeCun, Meta’s chief AI scientist, stated that half a dozen startups have very similar technologies. GPT-3 is composed of multiple pieces of technology developed over many years by many parties.
ChatGPT uses transformer architectures that are pre-trained in a self-supervised manner. But self-supervised learning is something I have been advocating for a long time, even before OpenAI existed.
Yann LeCun
LeCun also remarked that transformers are a Google invention, referring to the language neural-net model unveiled by Google in 2017. That model has become the basis for many language programs, including GPT-3, and the work behind them goes back decades. The first neural-net language model, large at the time but tiny by today’s standards, was developed about 20 years ago by Yoshua Bengio, head of Canada’s MILA institute for AI. Bengio’s work on the concept of ‘attention’ was later picked up by Google and became a pivotal element of all modern language models.
According to LeCun, ChatGPT makes extensive use of a technique called ‘reinforcement learning from human feedback’ (RLHF), in which humans rank the machine’s outputs to improve them, much as Google’s PageRank ranks the web. “That approach was pioneered not at OpenAI but at Google’s DeepMind unit,” he said.
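The ranking step can be sketched as learning a scalar reward from pairwise human preferences, a Bradley-Terry-style objective commonly described in the RLHF literature. The snippet below is a deliberately tiny, dependency-free illustration, not OpenAI’s implementation: the reward is a single learnable number per candidate answer rather than a neural network over the full prompt and response, and all names are hypothetical.

```python
import math

# Toy reward model: one learnable score per candidate response.
# In real RLHF the reward is a neural network over (prompt, response);
# a plain dict stands in here so the preference-learning step is visible.
scores = {"answer_a": 0.0, "answer_b": 0.0, "answer_c": 0.0}

# Human feedback expressed as ranked pairs: (preferred, rejected).
preferences = [("answer_a", "answer_b"),
               ("answer_a", "answer_c"),
               ("answer_b", "answer_c")]

def train(scores, preferences, lr=0.5, steps=200):
    """Gradient ascent on log sigmoid(score(winner) - score(loser)),
    the Bradley-Terry objective often used to fit reward models."""
    for _ in range(steps):
        for winner, loser in preferences:
            margin = scores[winner] - scores[loser]
            grad = 1.0 / (1.0 + math.exp(margin))  # d/d(margin) of log sigmoid
            scores[winner] += lr * grad
            scores[loser] -= lr * grad
    return scores

train(scores, preferences)
```

After training, the learned reward ordering matches the human ranking (answer_a above answer_b above answer_c); in a full RLHF pipeline this learned reward then steers the language model itself via reinforcement learning.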
In LeCun’s view, ChatGPT is less a case of scientific breakthroughs than an example of decent engineering.
“It is well put together, but it is not revolutionary, although that is the way it is perceived by the public,” LeCun said. He is not the only one to point out that the current hype stems in no small part from the fact that the general public, and many companies as well, had viewed AI as purely a future scenario and did not expect to be able to use the technology themselves this month or next. OpenAI attracted enormous attention with a well-controlled marketing campaign, simply “catching end-users off guard.” Beyond LeCun’s view, one should keep in mind that ‘knowledge’ and ‘meaning’ are different concepts within a hierarchy that starts with data, from which information is extracted, building toward knowledge and, at the highest level, wisdom.
Transformer-based tools like ChatGPT still have a long way to go, and it is questionable whether they will ever achieve real understanding, let alone wisdom, the highest form of human intelligence.
5. Problems Ahead
ChatGPT’s ability to imitate how real people talk and write has sparked concern about its potential to replace professional writers or do students’ homework. Moreover, as generative AI ushers in a new wave of artificial creativity, concerns are rising about its impact on society. The well-known artist Carson Grubaugh shares this concern and predicts that large parts of the creative workforce, including commercial artists working in entertainment, video games, advertising, and publishing, could lose their jobs because of generative AI.
Besides profound effects on tasks and jobs, generative AI models have raised the alarm in the AI governance community. One of the problems with large language models is their ability to generate false and misleading content.
Researchers from Meta trained a generative transformer on 48 million articles to summarize academic papers, solve math problems, and write scientific code. The system was taken down after less than three days online, once users realized it was producing incorrect results and misconstruing scientific facts.
More alarming are systems with capabilities advanced enough to render the Turing test obsolete. The test, which measures a machine’s ability to exhibit behavior similar to or indistinguishable from a human’s, was long considered a ‘holy grail’ of machine-intelligence research, conceived well before the internet existed. Today, ChatGPT’s capabilities can be misused to generate fake news and disinformation across internet-connected global platforms and ecosystems.
Large language models must be trained on massive datasets of books, articles and websites, and these sources of knowledge may themselves be biased. The problem is compounded because the current version of ChatGPT does not automatically provide references (or evidence) for the information it generates, so its output cannot easily be checked in a structured way. Despite substantial reductions in harmful and untruthful outputs achieved through human analysis and feedback, OpenAI acknowledges that its models can still generate toxic, outdated, biased and factually incorrect results.
While generative AI is a game-changer for numerous areas and tasks, there is a pressing need to govern the diffusion of these models and their impact on society and the economy. The debate between centralized, controlled adoption with firm ethical boundaries on the one hand and faster innovation with decentralized distribution on the other will shape the generative AI community in the coming years.
The issues to be solved include disruption of labor markets, the legitimacy of scraped data, licensing, copyright, the potential for biased or otherwise harmful content, and misinformation, to name just a few. A thoughtful and beneficial expansion of generative AI technologies can be achieved only when solid checks and balances are in place. Otherwise, the current ChatGPT hype will likely cool down as unfavorable media coverage erodes the trust needed to advance the technology.
Only sustainable user acceptance will prove that humanity has reached a revolutionary level of a new industrial age.