
AI becoming lazy & sassy?

In recent months, a growing number of users of the latest version of ChatGPT have complained that the chatbot refuses to comply with instructions, seems uninterested in answering queries, or supplies only a sliver of information before telling them to fill in the rest themselves, often in a particularly sassy manner, raising eyebrows over its newfound sluggishness


It may not just be us struggling with the days getting shorter, temperatures plummeting, and the seasonal depression of a long and arduous winter beginning to settle in. OpenAI’s breakout chatbot ChatGPT may be suffering from the same condition.

Since November, users have noticed that the chatbot, which recently celebrated its first birthday, has been getting “lazier” and more irritable, often refusing to carry out the tasks it is asked to do or asking the user to do them instead.

“Due to the extensive nature of the data, the full extraction of all products would be quite lengthy,” it demurred in one case. “However, I can provide the file with this single entry as a template, and you can fill in the rest of the data as needed.”

Users vented their frustrations on X (formerly Twitter) and OpenAI’s online developer forum about issues such as weakened logic, more erroneous responses, losing track of provided information, trouble following instructions, forgetting to add brackets in basic software code, and only remembering the most recent prompt.

“The current GPT-4 is disappointing,” a developer who uses GPT-4 to help him code functions for his website wrote. “It’s like driving a Ferrari for a month then suddenly it turns into a beaten up old pickup. I’m not sure I want to pay for it.”

The bizarre trend eventually caught the attention of OpenAI, which issued a statement on its official ChatGPT account on X, writing that “we’ve heard all your feedback about GPT-4 getting lazier!”

“We haven’t updated the model since Nov 11th, and this certainly isn’t intentional,” the company added. “Model behavior can be unpredictable, and we’re looking into fixing it.”

So, the question remains: is ChatGPT really getting lazier? Or could the bot be picking up from its immense training data that people’s energy and motivation to work often wane in the winter months, and reflecting that back at us?

On December 1, OpenAI employee Will Depue confirmed in an X post that OpenAI was aware of reports about laziness and was working on a potential fix. “Not saying we don’t have problems with over-refusals (we definitely do) or other weird things (working on fixing a recent laziness issue), but that’s a product of the iterative process of serving and trying to support so many use cases at once,” he wrote.

It’s also possible that ChatGPT was always “lazy” with some responses (since its responses vary randomly), and the recent trend simply made everyone take note of the instances in which it happens. For example, someone complained of GPT-4 being lazy on Reddit back in June. (Maybe ChatGPT was on summer vacation?)

People have also been complaining about GPT-4 losing capability since it was released. Those claims have been controversial and difficult to verify, leaving them largely subjective. It’s also worth mentioning that when people recently noticed more refusals than usual after the upgrade to GPT-4 Turbo in early November, some assumed that OpenAI was testing a new method of saving computational resources by refusing to do extra work. However, OpenAI denies that this is the case, and acts as though the apparent laziness is as much of a surprise to the company as to everyone else.

The theory, dubbed the “winter break hypothesis” by one user on X, quickly caught on across social media as an unconventional explanation for ChatGPT’s newfound sluggishness.

It’s a theory as elegant as it is hard to prove — especially because the researchers behind ChatGPT have admitted that they’re not entirely sure how the tool actually works.

In the meantime, some have attempted to quantify ChatGPT’s laziness by measuring the number of characters it was willing to spit out in May compared with December. Others have struggled to reproduce early results claiming that ChatGPT had indeed become lazier.
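For illustration, a minimal sketch of how such a length comparison might be run with the OpenAI Python client follows. The model name, prompts, and number of runs here are assumptions for demonstration, not the methodology those testers actually used, and completion lengths naturally vary from run to run.

```python
# Hypothetical sketch of measuring average response length.
# The prompts, model name, and number of runs are illustrative
# assumptions, not any tester's actual methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "Write a Python function that parses a CSV file into a list of dicts.",
    "Explain how binary search works, with a worked example.",
]


def average_response_length(model: str, prompts: list[str], runs: int = 5) -> float:
    """Average character count of completions; repeated runs smooth out randomness."""
    lengths = []
    for prompt in prompts:
        for _ in range(runs):
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            lengths.append(len(response.choices[0].message.content))
    return sum(lengths) / len(lengths)


# Comparing this figure across dates (or model snapshots) would show
# whether responses are, on average, getting shorter.
print(average_response_length("gpt-4", PROMPTS))
```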

Another possibility is that OpenAI’s large language models are trying to reduce their burden on already overloaded systems. But, as was pointed out last month, there is no evidence supporting that theory either.

The Independent reported that if a user asks for a piece of code, for instance, the chatbot might supply only a little of it and then instruct the user to fill in the rest. Some complained that it did so in a particularly sassy way, telling people, for instance, that they are perfectly capable of doing the work themselves.

In numerous Reddit threads and even posts on OpenAI’s own developer forums, users complained that the system had become less useful. They also speculated that the change had been made intentionally by OpenAI to make ChatGPT more efficient and stop it from returning long answers.

AI systems such as ChatGPT are notoriously costly for the companies that run them, since giving detailed answers to questions requires considerable processing power and computing time.

Another user, Christi Kennedy, wrote on OpenAI’s developer forum that GPT-4 had begun looping the same outputs of code and other information over and over.

“It’s braindead vs. before,” she wrote last month. “If you aren’t actually pushing it with what it could do previously, you wouldn’t notice. Yet if you are really using it fully, you see it is obviously much dumber.”

Meanwhile, an open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4, so that the risks they may pose can be properly studied.

It warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation. The letter also raises the distant prospect of AI systems that could replace humans and remake civilisation.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” states the letter, whose signatories include Yoshua Bengio, a professor at the University of Montreal considered a pioneer of modern AI, historian Yuval Noah Harari, Skype co-founder Jaan Tallinn, and Twitter CEO Elon Musk.

The letter, which was written by the Future of Life Institute, an organisation focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.

The letter comes as AI systems make increasingly bold and impressive leaps. GPT-4 was only announced recently, but its capabilities have stirred up considerable enthusiasm and a fair amount of concern. The language model, which is available via ChatGPT, scores highly on many academic tests, and can correctly solve tricky questions that are generally thought to require more advanced intelligence than AI systems have previously demonstrated. Yet GPT-4 also makes plenty of trivial, logical mistakes. And, like its predecessors, it sometimes “hallucinates” incorrect information, betrays ingrained societal biases, and can be prompted to say hateful or potentially harmful things.

Part of the concern expressed by the signatories of the letter is that OpenAI, Microsoft, and Google have begun a profit-driven race to develop and release new AI models as quickly as possible. At such a pace, the letter argues, developments are happening faster than society and regulators can come to terms with them.

Since, ultimately, our networked world runs on software, suddenly having tools that can write it, and that could be available to anyone rather than just geeks, marks an important moment. Programmers have always been able to make an inanimate object, the computer, do something useful by obeying their orders. But to become masters of their virtual universe, they had to possess arcane knowledge and learn specialist languages to converse with their electronic servants. For most people, that was a pretty high threshold to cross. Unfortunately for some, ChatGPT and its ilk have just lowered it.

Views expressed are personal
