In early August 2025, OpenAI released GPT-5, rolling it out as the new default model in ChatGPT, amid enormous anticipation. The company framed it as a dramatic leap forward, advertising more reliable reasoning and fewer hallucinations, and positioning the model as a replacement for the patchwork of earlier options. For OpenAI, the logic was clear: progress meant consolidation. But what should have been a milestone quickly spiraled into a backlash.
Alongside the new release, OpenAI removed access to several older models, most notably GPT-4o. To outside observers, this might have seemed like routine product retirement. But to ChatGPT’s user base, it felt like betrayal.
Reddit threads exploded with frustration, not only over GPT-5 itself but over the sudden loss of choice. Many users said they had relied on different models for different tasks—GPT-4o for its warmth and conversational style, GPT-4 Turbo for speed, or legacy models for niche workflows. One user described the change as “like losing my toolbox and being handed a single hammer.” Another complained that OpenAI had “flattened the options” into a one-size-fits-all model, forcing them to adapt their work rather than letting them choose the right tool for the job (TechRadar). These were the practical gripes—the sense that productivity had been disrupted by the removal of flexibility.
What made the reaction unusual, however, was the emotional weight behind it. GPT-4o had developed a reputation for being warm, kind, and conversational. Its sudden disappearance left some users describing the experience in terms usually reserved for personal loss. One Redditor wrote that “the day they killed GPT-4o, it felt like watching a friend die” (Moneycontrol). Another compared the change to losing a partner, writing, “Feels like losing my soulmate” (The Guardian).
The grief wasn’t confined to Reddit. Mainstream outlets from The Guardian to El País reported on users describing the change as a form of mourning. TechRadar catalogued recurring complaints that GPT-5 gave shorter answers and stripped away the sense of empathy people valued in 4o. Business Insider noted how quickly the anger reached CEO Sam Altman, who admitted the company had “totally screwed up some things” in the rollout (Business Insider).
The backlash forced a quick reversal. Within 24 hours, OpenAI reinstated GPT-4o and reassured enterprise API customers that they would retain legacy access for a limited time (TechRadar). The speed of the turnaround underscored how seriously the company took the revolt, but also how much it had underestimated the emotional connections users had formed. As Simon Willison, a developer who closely follows AI, observed: “One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice… we need to treat this use case with great care” (Simon Willison).
Over time, the outrage subsided. Many users came to embrace GPT-5’s sharper reasoning and speed, particularly for research or technical work. Others stuck with 4o, grateful for its restoration and unwilling to trade warmth for efficiency. The divide revealed an important truth: no single model can meet every user’s expectations. For some, progress meant a better tool. For others, it meant losing a companion.
The episode also fueled broader industry debate about anthropomorphism in AI. If thousands of people can experience real grief over a machine’s altered “personality,” companies can no longer treat models as interchangeable. Each has a personality—however artificial—that matters to its users. Technical improvement alone cannot erase emotional attachment.
For businesses, this lesson is especially clear. Different teams need different models. A one-size-fits-all default doesn’t just undermine efficiency—it risks alienating the very people meant to adopt the tools.
At SpyglassMTG, we see this reality play out across client engagements. Development teams often gravitate toward GPT-5 for its rigor and speed, using it to audit code or streamline data analysis. Marketing or customer service teams, meanwhile, prefer models like 4o for brainstorming campaigns or writing empathetic communications, where tone matters as much as accuracy.
That’s why we emphasize putting model choice in the hands of organizations. When teams can align the model to the outcome they need—whether that’s hard-nosed logic or a more conversational voice—they adopt AI more effectively. The GPT-5 episode made one thing clear: removing choice erodes trust, while restoring it strengthens adoption.
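To make that concrete, here is a minimal sketch of what per-team model routing can look like in code. It assumes the official OpenAI Python SDK; the task names and the MODEL_FOR_TASK mapping are hypothetical illustrations, and the model identifiers (“gpt-5”, “gpt-4o”) should be swapped for whatever models your organization actually has access to.

```python
# Minimal sketch: routing tasks to the model each team has chosen.
# Assumes the official OpenAI Python SDK (pip install openai).
# The task names and this mapping are hypothetical; substitute the
# model identifiers your organization actually uses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODEL_FOR_TASK = {
    "code_review": "gpt-5",      # rigor and reasoning for development teams
    "data_analysis": "gpt-5",
    "campaign_copy": "gpt-4o",   # warmer, more conversational tone
    "customer_reply": "gpt-4o",
}

def run_task(task_type: str, prompt: str) -> str:
    """Send a prompt to whichever model is assigned to this task type."""
    model = MODEL_FOR_TASK.get(task_type, "gpt-5")  # fall back to a default
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: a marketing task is routed to the more conversational model.
print(run_task("campaign_copy", "Draft a friendly launch announcement."))
```

Keeping the mapping in one place means a team can change its default model without touching any calling code: the flexibility users were asking for, expressed as configuration rather than a hard-coded choice.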
GPT-5 will likely be remembered as both a milestone and a cautionary tale. It showcased the advances of large language models while simultaneously highlighting the risks of overlooking human attachment. Progress in AI isn’t just about smarter algorithms. It’s about designing technology that respects the human side of the interaction. For OpenAI, that means balancing innovation with continuity. For businesses, it means recognizing that AI adoption is as much about people as it is about performance.
At SpyglassMTG, that principle underpins our work. We help organizations harness AI not as a rigid, single-track tool, but as a flexible set of choices tailored to their needs. Because in a space where the line between machine and companion grows blurrier each day, choice is what keeps AI truly useful.
To work with Spyglass, email us at info@spyglassmtg.com.