Does your chatbot know too much? Here's why you should think twice before telling your AI companion everything.
17 Nov 2025 • 4 min. read

In the movie "Her," the film's hero strikes up an ultimately doomed romantic relationship with a sophisticated AI system. At the time of its release in 2013, such a scenario was firmly in the realm of science fiction. But with the emergence of generative AI (GenAI) and large language models (LLMs), it's no longer such an outlandish prospect. In fact, "companion" apps are proliferating today.
However, there are inevitably risks associated with hooking up with an AI bot. How do you know your personal information won't be shared with third parties? Or stolen by hackers? The answers to questions like these will help you decide whether it's all worth the risk.
Looking for (digital) love
Companion apps meet a growing market demand. AI girlfriends and boyfriends harness the power of LLMs and natural language processing (NLP) to interact with their users in a conversational, highly personalized way. Titles like Character.AI, Nomi and Replika fill a psychological, and sometimes romantic, need for the people who use them. It's not hard to see why developers are keen to enter this space.
Even the big platforms are catching up. OpenAI recently said it will soon roll out "erotica for verified adults," and will allow developers to create "mature" apps built on ChatGPT. Elon Musk's xAI has also launched flirtatious AI companions in its Grok app.
Research published in July found that nearly three-quarters of teens have used AI companions, and half do so regularly. More worryingly, a third have chosen AI bots over humans for serious conversations, and a quarter have shared personal information with them.
That's particularly concerning as cautionary tales begin to emerge. In October, researchers warned that two AI companion apps (Chattee Chat and GiMe Chat) had unwittingly exposed highly sensitive user information. A misconfigured Kafka broker instance left the apps' streaming and content delivery systems with no access controls. That meant anyone could have accessed over 600,000 user-submitted images, IP addresses, and millions of intimate conversations belonging to more than 400,000 users.
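To make that finding concrete, here is a minimal sketch (using a hypothetical topic name and an example broker address) of why an unauthenticated Kafka broker is so dangerous: when no authentication or access control lists are configured, a standard client library is all anyone needs to read every retained message.
```python
# Minimal sketch: reading from an exposed Kafka broker that has no access
# controls. The broker address and topic name below are hypothetical.
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "chat-messages",                       # hypothetical topic name
    bootstrap_servers="203.0.113.5:9092",  # example address, default Kafka port
    auto_offset_reset="earliest",          # start from the oldest retained message
    consumer_timeout_ms=10_000,            # stop iterating after 10s of silence
)

for message in consumer:
    # On an app like those above, each record could be an intimate chat
    # message, an image reference, or a user's IP address.
    print(message.topic, message.value[:80])
```
Closing that hole means enabling authentication and ACLs on the broker itself, rather than relying on the address staying secret.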
The risks of hooking up with a bot
Opportunistic threat actors may sense a new way to make money. The information shared by victims in romantic conversations with their AI companion is ripe for blackmail. Photos, videos and audio could be fed into deepfake tools for use in sextortion scams, for example. Or personal information could be sold on the dark web for use in follow-on identity fraud. Depending on the app's security posture, hackers may also be able to get hold of credit card information saved for in-app purchases. According to Cybernews, some users spend thousands of dollars on such purchases.
As the above example shows, revenue generation rather than cybersecurity is often the priority for AI app developers. That means threat actors may be able to find vulnerabilities or misconfigurations to exploit. They may even try their hand at creating lookalike companion apps of their own that hide malicious information-stealing code, or that manipulate users into divulging sensitive details for use in fraud or blackmail.
Even if your app is relatively secure, it may still be a privacy risk. Some developers collect as much information on their users as possible so they can sell it on to third-party advertisers. Opaque privacy policies can make it hard to understand if, or how, your data is protected. You may also find that the information and conversations you share with your companion are used to train or fine-tune the underlying LLM, which further exacerbates the privacy and security risks.
How to keep your family safe
Whether you're using an AI companion app yourself or are concerned about your children doing so, the advice is the same. Assume the AI has no security or privacy guardrails built in, and don't share any personal or financial information with it that you wouldn't be comfortable sharing with a stranger. That includes potentially embarrassing or revealing photos and videos.
Better still, if you or your kids want to try out one of these apps, do your research ahead of time to find the ones that offer the best security and privacy protections. That may mean reading privacy policies to understand how they use and/or share your data. Avoid any that aren't explicit about intended usage, or that admit to selling user data.
Once you've found your app, be sure to switch on security features like two-factor authentication, which helps prevent account takeovers using stolen or brute-forced passwords. And explore its privacy settings to dial up protections; for example, there may be an option to opt out of having your conversations saved for model training.
If you're worried about the security, privacy and psychological implications of your children using these tools, start a conversation with them to find out more. Remind them of the risks of oversharing, and emphasize that these apps are tools built for profit that don't have their users' best interests at heart. If you're concerned about the impact they may be having on your children, it may be necessary to put limits on screen time and usage, potentially enforced via parental monitoring controls or apps.
It goes without saying that you shouldn't allow any AI companion app whose age verification and content moderation policies don't offer adequate protections for your children.
It remains to be seen whether regulators will step in to enforce stricter rules around what developers can and can't do in this space. Romance bots operate in something of a gray area at present, although the EU's upcoming Digital Fairness Act could restrict excessively addictive and personalized experiences.
Until developers and regulators catch up, it may be better not to treat AI companions as confidants or emotional crutches.
