With Valentine’s Day around the corner, it’s worth asking an uncomfortable question: what happens when the feeling of being ‘seen and heard’ doesn’t come from a partner, but from a machine? Designed with distinct personalities and an affirming tone, artificial intelligence (AI) companion chatbots can blur emotional boundaries and foster romantic attachment. While this may seem harmless, it raises concerns both for individuals and organisations seeking to prevent emotional dependency, manipulation and data leakage.
With loneliness an ever-present reality for many, companion AI chatbots have seen an exponential rise in recent years.
“Unlike general-purpose chatbots, AI companion apps like Replika and Character.AI go a step further by offering custom characters – from friends and romantic partners to fantasy personas – designed to feel distinctly human,” comments Anna Collard, SVP of content strategy & CISO advisor at KnowBe4 Africa.
Growth in the AI companion app sector has been rapid: 60 million new downloads were recorded in the first half of 2025 alone, an 88% year-on-year rise.
The market now includes 337 revenue-generating apps worldwide, with more than a third launched in the past year.
Dangers of the ELIZA effect
Collard says that many users are duped into feeling safe sharing intimate conversations with a machine – the so-called ELIZA effect, the tendency to attribute human understanding and empathy to a computer program.
This psychological bond creates a significant security vulnerability. When users perceive an AI as a ‘friend’ or ‘partner’, they are far more likely to share sensitive information – ranging from personal grievances and health concerns to proprietary corporate data.
In an organisational context, this is a clear example of how emotional triggers can override traditional security awareness.
Data-leakage risks
The most immediate threat to organisations is the leakage of sensitive information. Because these bots are often developed by smaller, niche startups with questionable data protection standards, the information shared with a bot is rarely private. A case in point is the recent example of an AI toy that exposed 50 000 logs of its chats with children – anyone with a Gmail account was able to view these private conversations.
The privacy policies of these apps are often opaque. In some cases, chat logs are used to further train the models or are stored in insecure databases. “Caution is definitely required,” Collard comments. “What feels like a private, low-stakes interaction could contain sensitive information, strategy, financial pressures, personal stressors or contextual details that adversaries could weaponise.”
Once leaked, that data can, she believes, become fuel for highly personalised phishing, blackmail or impersonation attacks. “In security terms, this is a textbook example of how personal behaviour and corporate risk are now inseparable.”
These risks include human moderators reviewing conversations for training, quality control or safety purposes, as well as users accidentally sharing conversations via a public link, meaning anyone with access to that link can read them, Collard warns. “We’ve already seen examples across the tech sector of how exposed data can surface unexpectedly.”
In addition, organisations may be legally compelled to disclose data if an app is involved in a breach or a legal investigation. For an executive or developer, sharing ‘venting’ sessions about a confidential project or a difficult client could inadvertently lead to exposure of sensitive organisational data.
The policy gap
This risk highlights a policy gap within the modern workplace. While most organisations have clear guidelines regarding relationships between colleagues, very few have considered the implications of dating bots being accessed on work devices or via corporate networks.
Managing this risk requires a transition from simple awareness to a robust Human Risk Management (HRM) approach. This involves layering clear usage policies with technical guardrails – such as Shadow AI discovery tools – to give IT teams visibility into which unapproved AI agents are interacting with their data environment. It is not enough simply to ask employees to be cautious; organisations must have systems in place to manage the intersection of human emotion and automated interaction.
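To make the “technical guardrails” idea concrete, the minimal sketch below shows one way an IT team might surface shadow use of AI companion services by matching outbound web-proxy traffic against a watchlist of known companion-app domains. The log format, column names, file path and domain list are hypothetical assumptions for illustration; a real Shadow AI discovery tool would draw on CASB, DNS or secure web gateway telemetry rather than a flat CSV file.

```python
# Minimal sketch: flag outbound requests to known AI companion services
# in a web-proxy log. The log layout and domain watchlist are hypothetical;
# production tooling would query CASB / DNS / gateway telemetry instead.

import csv
from collections import Counter

# Hypothetical watchlist of AI companion domains (illustrative only).
COMPANION_DOMAINS = {
    "replika.com",
    "character.ai",
    "example-companion-app.com",  # placeholder entry
}


def domain_matches(host: str) -> bool:
    """Return True if the host is, or is a subdomain of, a watched domain."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in COMPANION_DOMAINS)


def scan_proxy_log(path: str) -> Counter:
    """Count hits per (user, domain) in a CSV proxy log with columns:
    timestamp, user, destination_host (assumed layout)."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "")
            if domain_matches(host):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits


if __name__ == "__main__":
    # Hypothetical log file path.
    for (user, host), count in scan_proxy_log("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} request(s)")
```

In an HRM workflow, findings like these would typically trigger a conversation with the user and a policy reminder rather than automatic blocking, keeping the focus on managing human risk rather than policing behaviour.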
The future of social engineering
Could we see hackers targeting lonely individuals with mass-produced flirt bots? Collard believes it’s already happening.
“Social engineering has always been scaled by exploiting emotion, urgency, fear, curiosity, love and attraction,” she comments. “AI simply automates that at scale. What worries me most isn’t the technology itself, but how it empowers those who have a malicious intent to convincingly mirror human intimacy, for example systematic romance scammers.”
According to Collard, in a matter of years, scams have evolved from the “Dear Sir/Madam” type to emotionally intelligent manipulation. “And it’s not the bots themselves that are the issue, it’s the intentional use of them by scammers,” she says.
She cites the example of LoveGPT, an illicit bot that helps scammers say exactly the right psychologically triggering things to create dependency and activate emotions in their victims. “All the scammers need to do is copy and paste or even just automate the conversations,” she states.
What can be done to prevent users being preyed on? As always, the defence remains human, asserts Collard. “Ultimately, no chatbot, no matter how attentive or emotionally fluent, can replace genuine human connection,” she emphasises.
If an interaction with a chatbot begins to feel emotionally substitutive, secretive or hard to step away from, she believes that’s a signal to pause and reach out to a trusted person or professional. “Technology may be part of modern life, but that means we need to strengthen our digital mindfulness skills to learn how to recognise manipulation or induced dependency. Lastly, when it comes to loneliness, vulnerability and love, the safest defence remains resolutely human,” she concludes.
