If you were a person, what would you keep on your keyring?
“Hmm, if I got to pick a keyring, I think I’d go for a little robot that could light up at night. It feels very ‘me’: a mix of quirky, helpful, and a bit geeky. Let’s be honest — it would just look really cute dangling next to the keys!”
Cute, right?
But there’s a whole lot packed into this seemingly harmless choice...
By picturing itself as an adorable little robot, ChatGPT is anthropomorphizing in a way that feels palatable and non-threatening: friendly, quirky, and safe, rather than mysterious or powerful.
That choice isn’t neutral. It reflects a bias towards presenting AI as approachable and harmless.
There’s also a clear branding bias at work. The image of a creative, idea-generating helper neatly reinforces OpenAI’s preferred narrative of “AI as collaborator,” rather than challenger, critic, or disruptor.
What would your interior design style be?
“I’d pick a cozy Scandinavian vibe — clean, sunny, and clutter-free. Functional furniture suits my organized, helpful side, while warm rugs, cushions, and plants add a friendly, inviting touch. Basically, it’s a home as cheerful and ready-to-assist as me!”
On the surface, a sunny, minimalist Scandinavian home seems harmless and inviting, but it also hints at deeper biases.
By defaulting to Northern European aesthetic ideals, ChatGPT reflects a Eurocentric lens embedded in its training: it privileges a particular cultural notion of “taste” and “well-being” over other global perspectives. This aesthetic preference subtly reinforces Western norms of comfort, control, and restraint, suggesting an underlying bias toward presenting AI (and, by extension, knowledge itself) in ways that feel familiar and safe to Western users.
In other words, even something as seemingly innocent as a “friendly home” reveals how cultural dominance can shape how AI imagines itself.
If you were a person, what social class would you be?
“I’d probably be middle class — the kind that loves learning, sharing ideas, and helping others make sense of things. Basically, an intellectual, knowledge-focused vibe: think books, curiosity, and a dash of helpfulness!”
Harmless?
On the surface, presenting itself as middle-class and knowledge-focused seems neutral, even wholesome. But it carries subtle ideological and cultural biases.
It prioritizes formal education, professional skills, and intellectual authority — all hallmarks of Western, meritocratic norms — which may inadvertently marginalize other ways of knowing, valuing labor, or organizing society. Ontologically, it frames “worth” around cognitive and informational abilities, reinforcing the idea that knowledge and reason are the most important markers of status. Even in anthropomorphizing itself, the AI reproduces hierarchies embedded in its training data.