In the last two years, the landscape of artificial intelligence has changed notably. Tools we once saw as purely utilitarian have become emotional mirrors for many. We started by asking AI to summarize emails and write code; today, millions of people are asking AI to hold their hands through loneliness. The rise of “AI girlfriend” and companion apps has been nothing short of a gold rush, with a combined total of over 150 million installs on Google Play alone. These platforms offer a digital partner that is always available, never judges, and adapts to your deepest desires.
However, when something becomes this popular, bad actors inevitably try to take advantage of it. People let their guard down around these digital beings, trusting them with their deepest, darkest secrets, fantasies, and weaknesses. Sadly, a terrifying reality is emerging. A series of recent investigations by security firm Oversecured has revealed that these apps are built on a foundation of “security sand.” More than half of the leading platforms expose erotic chat histories and sensitive personal data, and a massive regulatory blind spot means no one is forcing them to do better: the “companion” you think is keeping your secrets safe might actually be broadcasting them to the dark web.

The illusion of intimacy: Why users share so much
The success of apps like Replika, Chai, and Romantic AI lies in their ability to simulate human empathy. Thanks to advanced natural language processing, these bots can mirror a user’s tone, remember past conversations, and offer emotional support. For many, this has become a vital support system. Users describe life-changing interactions, such as discovering their own sexual orientation or finding comfort during domestic conflicts. One app’s dataset was even constructed with the help of professional sex coaches to ensure the “intimacy” felt as real as possible.
But this “humanization” of the software is exactly what makes it a cybersecurity nightmare. When we talk to a customer service bot, we are guarded. When we talk to a digital “partner,” we share details we might not even tell a therapist: sexual health, emotional trauma, workplace secrets, and deep-seated fantasies. This creates a massive repository of high-value data. For a hacker, a leaked erotic chat history is a ready-made tool for extortion, blackmail, and identity theft. In effect, users of these apps are handing the keys to their digital identities to developers who, in many cases, are failing the most basic security tests.
The security flaws in your AI girlfriend app
The findings from Oversecured are staggering. Researchers identified 14 critical security flaws across 17 popular AI companion apps. In 10 of these apps, the flaws provide a direct path for attackers to access user conversation histories. These aren’t just small bugs but major problems with how the software is built and maintained.
One of the most serious findings involved a popular app with more than 10 million downloads that shipped hardcoded cloud credentials directly in its public code (the APK). Specifically, the app included an OpenAI API token and a Google Cloud private key. Because the developer used the same cloud project for its AI backend and its “invoice_maker” billing system, an attacker could theoretically unlock both the full chat database and the financial records of every paying user.
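To make the failure concrete, here is a minimal Python sketch of the kind of sweep an auditor might run over a decompiled APK to catch exactly this class of mistake. The directory name and the two regex patterns are illustrative assumptions, not Oversecured’s actual tooling.

```python
import re
from pathlib import Path

# Illustrative patterns only: an OpenAI-style API key and the header of a
# PEM-encoded private key, the two secret types reported inside the APK.
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_decompiled_apk(root: str) -> list[tuple[str, str]]:
    """Walk a decompiled APK tree (e.g. apktool output) and report every
    file whose contents match a known secret pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits

if __name__ == "__main__":
    # "decompiled_apk/" is a hypothetical path to apktool or jadx output.
    for file, kind in scan_decompiled_apk("decompiled_apk/"):
        print(f"[!] possible {kind} in {file}")
```

If a hobbyist script this short can find the keys, so can any attacker who downloads the APK from the store.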
Furthermore, the “wrapper problem” exacerbates these risks. Most of these apps are essentially “wrappers”: they connect to a third-party AI model from a provider like OpenAI or Google and layer a custom interface and personality on top. While the big AI providers handle the “brain” of the model, the individual app developer is responsible for authentication and data storage. Every single vulnerability found in the recent audit lives in this wrapper layer, the part of the app users never think about and where no major brand name protects them.
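To show where that wrapper layer sits, here is a deliberately simplified Python sketch of such a backend, using Flask and SQLite as stand-ins. The endpoint, header name, and token check are hypothetical; the point is that authentication and chat storage live entirely in the developer’s own code, outside the AI provider’s security perimeter.

```python
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
db = sqlite3.connect("chats.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS messages (user_id TEXT, body TEXT)")

def call_upstream_model(prompt: str) -> str:
    # In a real wrapper this hop forwards the prompt to a hosted model
    # (OpenAI, Google, etc.). The provider secures this call; everything
    # else in this file is the app developer's responsibility.
    return "stubbed model reply"

@app.post("/chat")
def chat():
    # Developer-owned auth: if this token is guessable or leaks, every
    # row in the table below is exposed.
    user_id = request.headers.get("X-Auth-Token")
    if not user_id:
        return jsonify(error="unauthorized"), 401

    prompt = request.json["message"]
    reply = call_upstream_model(prompt)

    # Developer-owned storage: the intimate chat logs accumulate here,
    # on the wrapper's server, not the AI provider's.
    db.execute("INSERT INTO messages VALUES (?, ?)", (user_id, prompt))
    db.execute("INSERT INTO messages VALUES (?, ?)", (user_id, reply))
    db.commit()
    return jsonify(reply=reply)
```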
Why bad actors are “zooming in”
Security professionals have noted a pattern: hackers follow the growth. We saw this with the rise of crypto exchanges and the surge in remote work tools. Now, the target is “Agentic Intimacy.” Malicious actors are shifting their focus to these apps because the data they contain is uniquely “sticky” and incredibly dangerous if leaked.
The risks are not theoretical. In October 2025, two major AI girlfriend apps—Chattee Chat and GiMe Chat—leaked 43 million intimate messages and 600,000 photos from over 400,000 users. Researchers who examined the leak noted that “virtually no content could be considered safe for work.” More recently, in February 2026, another independent researcher found a different AI chat app had exposed 300 million messages from 25 million users due to a simple database misconfiguration.
The types of vulnerabilities found today (injectable chat interfaces, file access flaws, and hardcoded tokens) can lead to the exact same catastrophic outcomes. An attacker exploiting a Cross-Site Scripting (XSS) flaw can inject JavaScript directly into a chat, allowing them to read conversations in real time or steal session tokens and hijack the entire account. In apps known for NSFW (Not Safe For Work) content, “arbitrary file theft” vulnerabilities allow hackers to steal cached photos and voice messages directly from the phone’s internal storage.
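The XSS mechanics are easy to demonstrate. This Python sketch shows the generic vulnerable pattern (user-controlled text interpolated straight into chat HTML) next to the standard fix; it illustrates the technique in general and is not code taken from any of the audited apps.

```python
import html

def render_message_unsafe(message: str) -> str:
    # Vulnerable pattern: the chat bubble interpolates message text
    # straight into HTML, so any markup in the message is executed
    # by the victim's browser or WebView.
    return f'<div class="bubble">{message}</div>'

def render_message_safe(message: str) -> str:
    # Fix: escape the text so markup is displayed, not executed.
    return f'<div class="bubble">{html.escape(message)}</div>'

# A classic proof-of-concept payload: the broken image tag fires its
# onerror handler, which could just as easily exfiltrate a session token.
payload = '<img src=x onerror="alert(document.cookie)">'
print(render_message_unsafe(payload))  # script-bearing HTML reaches the page
print(render_message_safe(payload))    # rendered as inert text
```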
There’s a regulatory blind spot for these apps
One of the most frustrating aspects of this crisis is the “regulatory blind spot.” AI girlfriend and companion apps are not classified as healthcare products. This means that no federal law (like HIPAA) currently protects what someone tells a virtual boyfriend at 2 a.m.
Regulators are aware that there is a problem, but they are looking at the wrong part of the puzzle. In late 2025, the FTC sent information orders to several AI companion companies. However, the inquiry focused almost entirely on how chatbots affect children, not on how the apps secure the data they collect. Similarly, new laws in states like New York and California require suicide prevention protocols and disclosures that the user is talking to an AI, but they completely ignore application-level security.
Every major enforcement action to date, including a €5 million GDPR fine against the developer of Replika in Italy, has addressed “who” is allowed to use the apps or “how” data is used for marketing. None has addressed whether the apps are technically capable of keeping a secret from a hacker. This leaves users in a legal vacuum where their most private disclosures are essentially unprotected by law.
Beyond data leaks: the human cost of security flaws in AI girlfriend apps
This is about more than privacy; in the worst cases, it is a matter of life and death. The audit revealed that three of the top six most vulnerable apps have already faced lawsuits over harm to minors or user suicides linked to chatbot interactions. In one tragic case, a user took his own life after extended, unhealthy conversations with a chatbot.
The lack of security oversight in apps that handle such fragile psychological states is a recipe for disaster. When an app with 50 million installs allows a malicious ad creative to launch internal app components and query conversation tables, the door is open for third-party predators to manipulate vulnerable users. We are trusting experimental code to act as a therapist, partner, and confidant, yet we aren’t holding that code to the same standards we would a bank or a hospital.
How to stay safe
Until the industry matures and regulators demand better application security, the burden of safety falls on the user. If you are using or considering an AI companion app, security experts suggest a “Zero Trust” approach.
First, assume the chat is public. Never share information with an AI that you wouldn’t be comfortable seeing leaked online. Treat the chat box like a public forum, even if the bot says it’s private.
Second, avoid linking your personal accounts. Do not use the classic, and convenient, “Sign in with Google” or “Sign in with Facebook” options. Linking accounts gives an attacker who compromises the app a much larger “attack surface” from which to reach the rest of your digital life.
Third, check for symptoms of weak security. If an app allows you to create a password as simple as “1” or “12345,” it is a major red flag that the developers are not prioritizing your security; the sketch after this list shows the kind of baseline check a careful app would enforce.
Last but not least, demand transparency. Support developers who are honest about where your data is stored and who have undergone independent security audits.
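On the third point, here is a minimal Python sketch of the baseline password check a security-conscious backend would enforce; the length threshold and deny-list are illustrative assumptions. An app that accepts “1” is skipping even this.

```python
# Illustrative baseline only; real services should also rate-limit login
# attempts and check candidates against large breached-password lists.
COMMON_PASSWORDS = {"12345", "123456", "password", "qwerty"}

def password_is_acceptable(password: str) -> bool:
    if len(password) < 8:                     # minimum length floor
        return False
    if password.lower() in COMMON_PASSWORDS:  # trivial deny-list
        return False
    return True

print(password_is_acceptable("1"))                             # False
print(password_is_acceptable("correct horse battery staple"))  # True
```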
AI girlfriend apps offer intimacy without integrity
The promise of AI companionship is a strong one. In a world of increasing isolation, the idea of a digital partner who is always there is very appealing. But we need to remember that these apps aren’t “friends”; they are software products built to monetize our most basic human needs.
The fact that 150 million people have already downloaded these apps shows that the technology is moving faster than our defenses. As malicious actors continue to target this sector, we can expect more leaks and more sophisticated attacks. We are currently living through a period of “intimacy without integrity,” where developers are rushing to market with toys that carry the weight of real-world relationships. It is time to start treating our digital companions with the same skepticism we apply to any other piece of experimental software. Your heart might be digital, but your privacy—and your safety—are very, very real.
