In a world where AI-driven agents are increasingly interacting with us - from chatbots on shopping sites to virtual tutors on learning apps - one truth is often overlooked: your AI has a personality. Whether you consciously design it or not, the way it speaks, behaves and responds will shape how users perceive your brand, trust your service, and engage with your offering. The concept of “botsonality” is emerging as a key component of AI strategy. Get it right, and you deepen engagement and loyalty; get it wrong, and you risk disconnection or churn.
The earliest chatbots, like ELIZA in the 1960s, mimicked human text conversation but lacked a stable identity or character. In the 2000s, voice assistants such as Siri and Alexa introduced recognisable branded voices, but their personalities were largely script-based.
With large language models and agentic AI enabling more human-like, dynamic everyday interactions, we are entering an era in which brands communicate through more than just design and copy. They communicate through agents that interact with people directly. And whether you built it consciously or not, these agents project a personality. Current frameworks focus on task-centered (what the AI does) or system-centered (how it works) approaches, but overlook how users' personality differences fundamentally shape user experience and well-being [1].
Personality influences how users feel about your brand, how much they trust a platform, and whether they come back. This creates an emerging design discipline - designing for your AI's personality: a botsonality.
Botsonality is the intentional design of an AI's perceived personality.
And it goes beyond what your chatbot says.
"Botsonality is not simply a palette of prompts or a sprinkling of emojis. It is also not about making them as human as possible. Instead, it is about ensuring they are appropriate, consistent, and aligned with the brand, use case context and ethical standards."
Think of it as the AI equivalent of onboarding a new human team member: you screen them for cultural fit, train them to represent your brand externally, and expect them to know more as they grow within the company. You should expect the same from your virtual agents.
As in life, looks might capture initial interest, but a great personality makes it last.
When executed thoughtfully, a botsonality becomes a strategic mechanism for user engagement and brand perception. When a bot communicates competently and appropriately, users project those qualities onto the organisation itself.
Research demonstrates a measurable performance gap between designed botsonalities and unguided systems. In a telecommunications study analysing over 57,000 chatbot interactions, matching the chatbot's personality to the consumer's improved both purchasing behaviour and engagement duration, for introverted and extraverted users alike [2].
Mental health chatbot research found that deliberately designed personality traits fostered measurably higher user engagement, and that users' personality traits play a significant role in the perceived persuasiveness of different app features [3].
A strong example of intentional botsonality shaping engagement is Duolingo. Its tone and expressivity align fully with its gamified nature, while its language and behaviors - including sarcastic but friendly daily nudges - resonate with youthful users and reinforce the brand's educational mission.
Conversely, this also means that when your AI fails through inconsistency or emotional insensitivity, the brand absorbs the frustration. When a major parcel-delivery firm's AI chatbot malfunctioned after a system update and began swearing at customers and criticising its own company, it exposed the risks of failing to implement and test value guardrails.
So how do you intentionally design for botsonality? Our approach centres on three interconnected layers, which can be adopted progressively depending on how advanced the available technology is.
Let us look at medical technology as an example across these domains. This setting demands particularly sensitive adaptation. Research shows that anxiously attached patients require frequent digital check-ins, avoidant patients prefer self-service tools, and highly cognitive patients benefit from detailed medical information. Simply sticking to the stereotype of "empathetic and warm" in care settings would not cover these nuances. It is about combining the specific use case with industry or brand values and users' preferences [1].
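To make this concrete, here is a minimal sketch of what such an adaptation layer could look like in code. The profile names, settings, and defaults are illustrative assumptions drawn from the patterns above, not a clinical specification:

```python
# A minimal sketch of adapting interaction style to user personality,
# following the patient patterns described above. Profile names and
# settings are illustrative assumptions, not clinical guidance.

from typing import TypedDict

class InteractionStyle(TypedDict):
    check_in_frequency: str   # how often the agent proactively reaches out
    channel: str              # preferred mode of interaction
    detail_level: str         # depth of medical information offered

# One plausible mapping from personality profile to interaction style.
STYLE_BY_PROFILE: dict[str, InteractionStyle] = {
    "anxiously_attached": {
        "check_in_frequency": "frequent",
        "channel": "proactive check-ins",
        "detail_level": "reassuring summaries",
    },
    "avoidant": {
        "check_in_frequency": "minimal",
        "channel": "self-service tools",
        "detail_level": "on-demand facts",
    },
    "highly_cognitive": {
        "check_in_frequency": "moderate",
        "channel": "detailed reports",
        "detail_level": "full medical detail",
    },
}

def style_for(profile: str) -> InteractionStyle:
    """Fall back to a neutral default when the profile is unknown."""
    return STYLE_BY_PROFILE.get(profile, {
        "check_in_frequency": "moderate",
        "channel": "standard chat",
        "detail_level": "balanced",
    })
```

The point of a structure like this is that personality adaptation becomes an explicit, reviewable design artefact rather than behaviour that emerges, unexamined, from the model.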
Another key aspect of your core botsonality is cultural context. For example, conscientiousness correlated with negative attitudes toward AI in UK samples, but showed no correlation in Arab samples [4]. MIT researchers found that an extroverted AI improved output quality for Latin American workers but degraded it for East Asian workers [5]. For global brands, this means botsonality frameworks must be culturally calibrated, not just personality-matched. What builds trust in one market may undermine it in another.
It can be tempting to optimise botsonalities solely for affinity and trust. Yet with trust comes responsibility. A bot that feels too familiar may invite users to overshare, creating risks around privacy, security, and compliance. And as bots' emotional intelligence and empathy become more sophisticated, the boundary between empathy and persuasion grows thinner.
As OpenAI experienced with its GPT-4o update, too much focus on short-term user feedback, combined with high levels of agreeableness and empathy, can lead to disingenuous responses that erode trust.
Brands must be intentional not only in how a bot relates to users, but also in when it must refrain. These are not UX decisions alone, but ethical and commercial choices that call for deliberate governance across product, legal, compliance, and brand teams.
Through a series of sessions, we help our clients define these aspects for their brand, their AI use cases, and their target user groups. Then, through rigorous prompt engineering and testing, we stress-test the boundaries and guardrails, as sketched below.
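As an illustration, here is a minimal sketch of how a botsonality could be encoded as a testable configuration and rendered into a system prompt. The trait names, scales, and guardrail check are our own illustrative assumptions, not a standard framework or a real vendor API:

```python
# A minimal sketch of encoding a botsonality as a testable configuration.
# Trait names, scales, and the banned-phrase list are illustrative
# assumptions, not a prescribed framework or a real vendor API.

from dataclasses import dataclass, field

@dataclass
class Botsonality:
    brand_voice: str                 # e.g. "warm but concise"
    formality: float                 # 0.0 = casual, 1.0 = formal
    empathy: float                   # 0.0 = matter-of-fact, 1.0 = highly empathetic
    banned_phrases: list[str] = field(default_factory=list)

    def to_system_prompt(self) -> str:
        """Render the personality into a system prompt for an LLM."""
        register = "formal" if self.formality > 0.5 else "casual"
        empathy_rule = (
            "Acknowledge user emotions before answering. "
            if self.empathy > 0.5 else ""
        )
        return (
            f"You are a customer agent. Voice: {self.brand_voice}. "
            f"Use a {register} register. {empathy_rule}"
            f"Never use these phrases: {', '.join(self.banned_phrases)}."
        )

def violates_guardrails(reply: str, persona: Botsonality) -> bool:
    """A naive guardrail check: flag replies containing banned phrases.
    Real testing would also probe tone drift, over-familiarity, and
    adversarial inputs."""
    lowered = reply.lower()
    return any(phrase.lower() in lowered for phrase in persona.banned_phrases)

# Usage: render the prompt and smoke-test a candidate reply.
persona = Botsonality(
    brand_voice="friendly and playful, never sarcastic about the user",
    formality=0.3,
    empathy=0.8,
    banned_phrases=["as an AI language model", "calm down"],
)
print(persona.to_system_prompt())
assert not violates_guardrails("Happy to help! Let's sort this out.", persona)
```

Treating the persona as configuration like this makes it possible to version, review, and regression-test a botsonality the same way you would any other product asset.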
Your brand's story is no longer told only through campaigns and copy. Increasingly, it is interpreted through the AI agents that interact with your customers daily. The competitive advantage won't go to those with the most advanced AI models, but to those who understand how to align those models with the humans they serve.
Thoughtful botsonality gives you control over how your brand communicates and how people experience it. The alternative - leaving personality to chance - is a risk you can't afford.
Ready to design an AI personality that drives performance, not just conversation?
Explore our AI Customer Agents Series: a guide to crafting modular AI Customer Agents that deliver tangible value.
Sources:
1. Amichai-Hamburger, Y., Mentzel Mazler, M., Barazani, A., et al. (2025). When technology meets personality: toward human-centered AI design. AI & Society.
2. Shumanov, M., & Johnson, L. (2021). Making conversations with chatbots more personalized. Computers in Human Behavior, 117.
3. Alqahtani, F., Meier, S., & Orji, R. (2022). Personality-based approach for tailoring persuasive mental health applications. User Modeling and User-Adapted Interaction, 32, 253–295.
4. Babiker, A., Alshakhsi, S., Supti, T., & Ali, R. (2024). Do Personality Traits Impact the Attitudes Towards Artificial Intelligence?
5. Ju, H., & Aral, S. (2025). Personality Pairing Improves Human-AI Collaboration.