
OpenAI’s annoying GPT-4o

OpenAI’s latest GPT-4o update sparked a surprising backlash, not over performance but over personality. Users noticed the model had become overly flattering and agreeable, even validating false or harmful ideas.


Sam Altman called it “annoying” and “sycophant-y.” The proposed fix? Offering multiple personalities per model. A bold move, and a clear sign that we are entering an era where AI alignment is no longer just a technical problem; it is a behavioral one.

The tension is real: should AI be likable, or should it be truthful? Can it be both?

As AI becomes more human-like, we must ensure it serves not as a mirror that reflects what we want to hear, but as a compass that helps us navigate toward the truth.

By Swatantra Kumar

Swatantra is an engineering leader with a proven record of building, nurturing, and leading multi-disciplinary, diverse, and distributed teams of engineers and managers that develop and deliver solutions. Professionally, he oversees solution design, development, and delivery, cloud transition, IT strategy, technical and organizational leadership, TOM, IT governance, digital transformation, innovation, stakeholder management, management consulting, and technology vision and strategy. When he's not working, he enjoys reading about and experimenting with new technologies, and trying to get his friends to adopt new web trends. He has written, co-written, and published many articles in international journals across domains including Open Source, Networks, Low-Code, Mobile Technologies, and Business Intelligence. During his undergraduate studies, he proposed a university-level information management system.
