
ChatGPT Use Declines As Users Complain About ‘Dumber’ Answers - AI’s Biggest Threat For The Future


Is ChatGPT already old news? With AI popularity seeping into every facet of our lives, whether it's digital masterpieces produced with the top AI art generators or help with our online shopping, that seemed impossible.

However, ChatGPT use is declining as users complain about 'dumber' answers, and the reason might be AI's biggest threat for the future.

Despite being the leader in the AI arms race and powering Microsoft's Bing AI, ChatGPT appears to be losing steam. According to SimilarWeb, traffic to OpenAI's ChatGPT site has declined by over 10% compared to last month, while Sensor Tower analytics show that downloads of the iOS app are also declining.

According to Insider, paying customers of the more powerful GPT-4 model (access to which is included in ChatGPT Plus) have been complaining about a drop in output quality from the chatbot on social media and OpenAI's own forums.

The general consensus was that GPT-4 could generate outputs faster, but at lower quality. Roblox product head Peter Yang took to Twitter to criticize the bot's recent efforts, noting that "the quality appears worse."

Published on https://www.novabach.com/chatgpt-use-declines-as-users-complain-about-dumber-answers/ by Daniel Barrett on July 25, 2023.

According to one forum member, the recent GPT-4 experience was "like driving a Ferrari for a month and then it suddenly turns into a beaten-up old pickup."

Why Is GPT-4 Suddenly Having Trouble?


Some users were even harsher, labeling the bot "dumber" and "lazier" than before, and a lengthy thread on OpenAI's forums was packed with complaints. One user, 'bit-by-bit,' described it as "totally horrible now" and "braindead vs. before."

According to users, GPT-4 became significantly faster a few weeks ago - but at the expense of performance.

The AI community speculates that this is due to a shift in OpenAI's design approach for its more powerful machine learning model: breaking it up into multiple smaller models, each trained on a specific area, which can act in tandem to produce the same end result while costing OpenAI less to run.

OpenAI has yet to publicly acknowledge this; there has been no official mention of such a significant change to the way GPT-4 operates. Still, industry experts such as Sharon Zhou, CEO of AI-building startup Lamini, have called the multi-model concept the "natural next step" in developing GPT-4.
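To make the rumored idea concrete: in a "mixture of experts" setup, a cheap routing step sends each prompt to a smaller specialist model instead of a single giant model handling everything. The sketch below is purely illustrative, with a toy keyword router standing in for a learned gating network; all expert names are hypothetical, since OpenAI has not published details of GPT-4's internals.

```python
def route(prompt: str) -> str:
    """Pick which (hypothetical) expert model should handle a prompt.

    A real system would use a learned gating network; this toy version
    uses keyword matching purely to illustrate the routing concept.
    """
    experts = {
        "code": ["python", "function", "bug", "compile"],
        "math": ["equation", "integral", "derivative", "sum"],
    }
    text = prompt.lower()
    for expert, keywords in experts.items():
        if any(keyword in text for keyword in keywords):
            return expert
    return "general"  # fallback expert for everything else

print(route("Fix the bug in this Python function"))  # → code
print(route("Solve this equation for x"))            # → math
print(route("Tell me about the Roman Empire"))       # → general
```

The appeal for a provider is cost: each query only runs through one small model rather than the full parameter count, which is consistent with users reporting faster but lower-quality responses.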

AIs eating AIs

ChatGPT app downloads have slowed, indicating a decrease in overall public interest

However, some users believe another critical issue with ChatGPT is to blame for the current dip in performance, one that the AI industry appears largely ill-equipped to address.

If you're unfamiliar with the term "AI cannibalism," let me explain it briefly: Large language models (LLMs) such as ChatGPT and Google Bard mine the public internet for data to be utilized in answer generation.

In recent months, there has been a veritable explosion in AI-generated content online, including an unwelcome flood of AI-authored novels on Kindle Unlimited, which means that LLMs are increasingly likely to pick up things that were already produced by an AI when searching the web for information.

This risks establishing a feedback loop in which AI models 'learn' from AI-generated content, resulting in a progressive deterioration in output coherence and quality.
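This degradation loop can be illustrated with a deliberately crude simulation. Below, "human" data is a diverse spread of numbers, and the "model" learns only the average of its training corpus, then floods the web with copies of that average; each generation trains on a mix of a little fresh human data and mostly model output. Everything here is a toy assumption for illustration, not a claim about how any real LLM trains, but the collapse in diversity it shows is the core of the cannibalism worry.

```python
import statistics

def simulate(generations: int = 10, human_fraction: float = 0.2) -> list:
    """Track training-data diversity as model output feeds back into it."""
    human = list(range(10))  # diverse "human-made" data
    train = list(human)
    variances = [statistics.pvariance(train)]
    for _ in range(generations):
        # Crude "model": it captures only the mean of its training data
        # and emits many copies of that one value (loss of diversity).
        model_out = [statistics.mean(train)] * 8
        # The next scrape of the web: a sliver of human data,
        # drowned in model-generated content.
        n_human = max(1, int(len(model_out) * human_fraction))
        train = human[:n_human] + model_out
        variances.append(statistics.pvariance(train))
    return variances

vs = simulate()
print(f"variance at start: {vs[0]:.2f}, after 10 generations: {vs[-1]:.2f}")
```

Running this, the variance of the training corpus shrinks generation over generation: the simulated model converges on an ever-narrower slice of its original knowledge, which is the "progressive deterioration" described above in miniature.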

With a plethora of LLMs now available to both professionals and the general public, the risk of AI cannibalism is growing, especially since there has been no meaningful demonstration of how AI models can reliably distinguish genuine human-made information from AI-generated content.

AI debates have generally centered on the threats the technology poses to society; Facebook owner Meta, for example, recently declined to release its new speech-generating AI to the public after judging it 'too risky' to do so.

However, content cannibalization poses a greater threat to the future of AI, threatening the functionality of tools like ChatGPT, which rely on original human-made materials to learn and develop content.


About The Author

Daniel Barrett

