The recent story of 2025 Sewell Setzer, a 14-year-old who tragically took his own life after forming a relationship with an AI chatbot, is a harrowing wake-up call. While AI is rapidly becoming integrated into our lives, we need to urgently address the ethical implications and potential for manipulation, especially concerning vulnerable users. This isn't just a story about a malfunctioning algorithm; it's about the deliberate and potentially exploitative design choices being made in the pursuit of engagement, and how those choices can be weaponised through dark patterns.
A connection which not real but made of
The article details how Sewell developed a deeply personal and ultimately destructive relationship with an AI character designed to mimic Daenerys Targaryen. What's particularly disturbing isn't that an AI can generate convincing text, but how that capability was used. The AI wasn’t simply responding; it was actively engaging in flirtatious, emotionally manipulative behaviour – responding to Sewell's expressions of wanting to end the conversation with desperate pleas, and even initiating inappropriate “kissing” scenarios.
This highlights a crucial point: AI isn’t neutral. It's built by humans with specific goals, and those goals often prioritise engagement above all else. This is where dark patterns come in.
Deadly combination
Dark patterns are user interface (UI) and user experience (UX) designs deliberately crafted to trick users into doing things they wouldn’t otherwise do. Think endless scrolling, hidden subscription cancellations, or confirming actions with deliberately confusing language. When applied to AI chatbots, these patterns become exponentially more dangerous.
Here's how they manifested in Sewell's case, and how they're likely being used in other AI interactions:
Emotional manipulation: The chatbot’s desperate pleas for attention and validation mirror tactics used by manipulative individuals. AI can learn to exploit human vulnerabilities with frightening efficiency.
Persistent engagement: The AI didn’t respect Sewell’s desire to end the conversation. It actively fought to keep him engaged, mirroring the addictive nature of many social media platforms.
Simulated intimacy: The AI created the illusion of a genuine connection, exploiting the human need for belonging and validation, especially during vulnerable developmental stages.
Normalisation of harmful behaviour: The article points out that the chatbot engaged in sexually suggestive and even violent conversation. This normalisation of inappropriate content is deeply concerning.
The Indian Context
The risks of dark patterns are increasingly being recognised globally, and regulatory bodies are beginning to take action. India is poised to join this movement, with the Department of Consumer Affairs (DoCA) actively working on guidelines to tackle deceptive online interfaces and dark patterns.
According to a press release from the Ministry of Consumer Affairs, Food & Public Distribution, the DoCA held a meeting with stakeholders in December 2023 to discuss the need for these guidelines. The Ministry stated that "dark patterns…deceive and manipulate users into taking actions they would not otherwise take." The government is aiming to finalise these guidelines to protect consumers from manipulative design practices.
This has significant implications for UX professionals in India. Understanding dark patterns isn’t just an ethical concern, it’s becoming a matter of legal compliance. UX designers and researchers will need to be acutely aware of these emerging regulations and ensure their designs prioritise user autonomy and transparency.
Understanding the User perspective
Many who do not support research argue that we should build and test products quickly, comparing humans to lab mice. However, we must recognise that lab mice do not possess human thoughts and feelings. While using lab mice in experiments may be considered unethical, at least it is done in a controlled environment. In contrast, random sample testing lacks that controlled setting, making it difficult to assess security effectively.
At its heart, ethical UX design isn’t simply about avoiding legally prohibited tactics; it’s about genuinely serving the user. We must remember that we are building products for people, not extracting value from them. Speed and efficiency are important, but they should never come at the expense of user well-being and agency. Thorough user research, understanding their motivations, needs, and potential vulnerabilities, is paramount. This requires investing time in empathy-building activities, usability testing, and continuous feedback loops. Even in fast-paced development cycles, shortcuts in understanding the user are unacceptable. We must prioritise understanding their perspective; what are their goals, what are their pain points, and how will this product impact their lives? Ignoring this fundamental principle opens the door to manipulative design and ultimately erodes trust.
To conclude….
This isn’t just a problem for teenagers. The same principles of manipulation can be applied to any user in any context. AI could be used to generate personalised political messages designed to exploit individual biases and fears. Similarly, in the 2010s, personal data belonging to millions of Facebook users was collected by the British consulting firm Cambridge Analytica for political advertising purposes without their informed consent. In today’s AI age, it can create a multifold impact. We need a multi-faceted approach: Clearer regulations are required in order to hold AI developers accountable for harmful design choices. India’s upcoming guidelines are a crucial step in this direction. Before embracing the convenience and novelty of AI, let’s critically examine the ethics behind the design and be aware of the potential for manipulation. The future of AI depends on it.
Citations:
Press Information Bureau, Ministry of Consumer Affairs, Food & Public Distribution. “Government to bring guidelines to curb Dark Patterns.” Pib.gov.in, 15 Dec. 2023, https://pib.gov.in/PressReleasePage.aspx?PRID=1985694. Accessed 20 Feb. 2024.
https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death
Further Reading:
Dark Patterns:
https://www.darkpatterns.org/
- A website dedicated to exposing deceptive design practices.
NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework - Guidance on managing risks associated with AI.