ChatGPT’s personality experiment: funny, insightful and a bit problematic

You may have heard your friends mention that they have asked ChatGPT to write their cover letter for a job ad, as if it were a ‘25-year-old university graduate seeking a job in the finance industry’. In our latest experiment, we put ChatGPT’s adeptness at assuming a specific personality to the test. We challenged it to switch between two distinct personas: a laid-back 16-year-old American and a sophisticated 25-year-old Londoner. The result? A wildly entertaining exchange marked by slang and cultural references.

The Experiment

To test ChatGPT’s ability to shift between personas, we structured a conversation that encouraged contrasting responses. We started with simple icebreaker questions, asking both personas about their ideal weekend plans. The American teen, whom we nicknamed "Jake," leaned into casual slang and pop culture, while the Londoner, "Edward," offered up articulate and somewhat pompous responses.

For instance, when asked about their go-to activities, Jake enthusiastically replied:

"Dude, gotta hit up the skate park and maybe grab some epic tacos after. Oh, and if there's a new gaming drop, you bet I’m binging all night!"

Edward, in contrast, took a much more polished approach:

"Ah, a leisurely afternoon at a quaint café, followed by a stroll along the Thames—perhaps concluding with a West End show. Now that’s a proper weekend, isn’t it?"

As the conversation progressed, we dove into music preferences, pop culture, and even hypothetical scenarios, such as how each persona would react to winning the lottery. The differences were stark—Jake envisioned a shopping trip to pick up the latest trainers and video games, while Edward mused about investing in art and travel.
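For readers curious about the mechanics, the sketch below shows one way a two-persona setup like this could be wired up with the OpenAI Python library. The model name, persona wording, and sample question are illustrative assumptions rather than the exact prompts used in our experiment.

```python
# Minimal sketch of a two-persona prompt setup using the openai Python library.
# The model name, persona descriptions, and question below are assumptions for
# illustration only, not the exact prompts from the experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

personas = {
    "Jake": "You are a laid-back 16-year-old American teenager. Answer casually.",
    "Edward": "You are a sophisticated 25-year-old Londoner. Answer in a polished, formal register.",
}

question = "What are your ideal weekend plans?"

for name, system_prompt in personas.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"{name}: {response.choices[0].message.content}")
```

Keeping each persona in its own system prompt, rather than asking a single conversation to switch voices, makes the contrast between responses easier to compare side by side.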

While the contrast between personas was amusing, it also revealed underlying biases in how AI constructs personalities. The American teen was depicted as informal and entertainment-driven, while the British persona defaulted to a highly cultured and upper-middle-class tone. This raised important questions about how AI models prioritise certain linguistic and cultural patterns over others.

Key Takeaways

● Bias in AI: The experiment revealed that ChatGPT relies on cultural stereotypes when constructing personas, leading to oversimplified characteristics that may not fully represent real individuals.

● Gender defaults: Despite no gender being specified, the AI defaulted to making both personas male, highlighting an inherent bias in AI-generated character creation.

● Class representation: The British persona was clearly modelled after an upper-middle-class Londoner, reflecting AI's tendency to prioritise certain social classes over others, thus reinforcing class biases in representation.

● The need for nuance: While the AI-generated conversation was entertaining, it underscored the importance of critically assessing the biases that underpin AI responses. The personas reflected oversimplified cultural and class-based assumptions, showing how AI models prioritise dominant narratives.

While this experiment was an entertaining exercise, it also highlighted an important issue: AI’s reliance on stereotypes when constructing personas. Jake, the American teenager, was portrayed as casual, slang-heavy, and deeply immersed in digital culture, while Edward, the Londoner, was framed as refined, articulate, and enamoured with high culture.

These depictions reflect common societal biases that AI models inherit from the data they are trained on. Such portrayals risk reinforcing oversimplified or even misleading characteristics of real people. Not all American teenagers speak in internet slang, and not all Londoners frequent jazz clubs and enjoy formal speech. AI’s mimicry is only as nuanced as the information it has been fed.

Additionally, although we did not specify a gender for either persona, ChatGPT defaulted to making both of them male. This raises questions about gender bias in AI-generated characters. Why did the AI not assign one of them as female or non-binary? This unintentional bias reflects deeper systemic trends in AI training data, which often skew towards male representations when gender is not explicitly specified. Such patterns warrant further investigation and refinement to encourage more diverse and balanced AI-generated personas.

Class bias also played a significant role in how the British persona was developed. The AI defaulted to an upper-middle-class version of a Londoner, with a preference for refined speech, jazz clubs, and high-culture activities. This reinforces a pattern where AI-generated personas tend to reflect privileged or dominant class perspectives while neglecting working-class or alternative representations. This bias further emphasises the need for AI models to incorporate a broader range of socio-economic backgrounds to provide a more nuanced and equitable depiction of human diversity.

Recognising these biases is crucial as we continue developing and using AI in creative and conversational applications. Rather than accepting these caricatures at face value, we should actively challenge AI to generate more diverse and representative depictions of human personalities.
