Privacy Guide · RPDATE Blog
Are AI Companion Apps Safe?
Privacy Guide (2026)
AI companion apps store your conversations, behavioral patterns, and personal details on remote servers. Here's what they actually collect — and how to use them without unnecessary risk.
In April 2026, MyLovelyAI suffered a data breach exposing over 100,000 users' data — conversation logs, prompts, and generated content. Some of that data subsequently appeared in targeted harassment attempts against users. The breach wasn't a surprise to researchers who'd been tracking the category.
WIRED called AI companion platforms a "privacy nightmare" in 2024. Italy's data protection authority fined Replika for GDPR violations. The Mozilla Foundation studied 11 romantic chatbot platforms and found all of them collected more data than they disclosed. The European Data Protection Supervisor issued a formal warning about the category.
If you use an AI companion app — or are considering one — this guide covers AI companion privacy in practical terms: what platforms actually collect, where the documented risks are, and what you can do to use them more safely without giving them up. The goal is accurate information, not panic.
What Happened: 2026 Privacy Incidents
Documented events, not speculation. Understanding the actual incident landscape matters more than abstract warnings.
MyLovelyAI data breach. Over 100,000 users' data exposed — conversation logs and generated content. Some data subsequently used in targeted harassment. The incident illustrated the specific harm when identity-linked sensitive content is stored without adequate security.
Sources: Malwarebytes (April 2026) · Help Net Security - "113,000 exposed prompts"
Replika — Italian GDPR fine. Italy's Garante fined Luka Inc. for GDPR violations: insufficient transparency about data collection and sharing data with third parties without explicit user consent.
Sources: arXiv documentation · European Data Protection Supervisor warning
Janitor AI introduces ID verification. Officially framed as age control. In practice: the platform now stores identity-verified data paired with chat content. Identity documents linked to conversation history create maximum breach damage if exposed.
Source: Platform policy update, 2026
Character.AI — biometric verification friction. Biometric verification introduced for a subset of accounts in 2026, raising questions about facial data storage alongside chat history.
Source: User reports · Character.AI policy changes 2026
Mozilla Foundation report. Study of 11 romantic chatbot platforms: all collected more data than disclosed. Most did not provide full data deletion. Several failed minimum security standards.
Source: Mozilla Foundation - "Privacy Not Included" romantic chatbot research (2024)
The pattern is consistent: platforms storing the most sensitive content often have the weakest data governance. Regulation is catching up unevenly. The gap between what platforms collect and what users understand they are sharing is significant across the category.
What AI Companion Apps Actually Collect
Not all data collection carries equal risk. What matters is the combination of what's collected, how it's stored, and whether it's linked to your real identity.
Conversation Data
Everything written — in both directions
For cloud-based platforms, all conversation content is stored on the platform's servers. The model needs context to function — that's a genuine technical requirement.
- → How long is conversation data retained — days, months, or indefinitely?
- → Is it encrypted at rest, and is this published?
- → Can platform employees access conversation content?
- → What actually happens to data when you delete your account?
If you cannot find clear answers in a platform's current policy, treat the data as accessible to the platform and act accordingly.
Behavioral and Pattern Data
Less visible than conversation logs, but often more commercially valuable: when you log in, how long sessions run, which scenario types you engage with, what you respond to most. This builds a behavioral profile. On ad-supported platforms, that profile flows to advertising partners by design, because ad targeting is the monetization model.
Identity and Registration Data
- → No account required (RPDATE): anonymous session, no identity-linked records created.
- → Email registration: email permanently linked to all chat content in the database.
- → ID or biometric verification: real identity linked to content history.
Identity + sensitive content = material usable for targeted harassment. The registration model a platform uses is the single most consequential privacy variable.
Red Flags: Signs a Platform Has Poor Privacy Practices
A checklist you can apply to any platform before using it.
🚩 Mandatory registration before first message
If a platform requires your email before you have sent a single message, your identity is already linked to your intent to use the product.
🚩 Vague data deletion policy
Phrasing like "We may retain data to improve our services" without a clear timeframe or deletion mechanism is a red flag.
🚩 ID or biometric verification
Government document or facial scan requirements link real identity to chat history.
🚩 Advertising on the free tier
Ad-based monetization typically involves sharing behavioral data with advertising partners.
🚩 No published information on encryption at rest
If this is not stated clearly in policy docs, ask support directly.
🚩 History of policy changes without user notification
Platforms that have changed policies without notifying users before are likely to do it again.
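The checklist above can be sketched as a simple tally. This is illustrative only: the flag names and the equal weighting are assumptions for demonstration, not a published scoring standard.

```python
# Illustrative red-flag tally for a platform's privacy posture.
# Flag names and equal weighting are assumptions, not a standard.

RED_FLAGS = [
    "mandatory_registration_before_first_message",
    "vague_deletion_policy",
    "id_or_biometric_verification",
    "ads_on_free_tier",
    "no_published_encryption_at_rest",
    "policy_changes_without_notification",
]

def privacy_risk(observed_flags: set[str]) -> tuple[int, str]:
    """Count which known red flags apply and bucket the result."""
    score = sum(1 for flag in RED_FLAGS if flag in observed_flags)
    if score == 0:
        level = "low"
    elif score <= 2:
        level = "moderate"
    else:
        level = "high"
    return score, level

# Example: a platform requiring an account, showing ads, and gating with ID.
score, level = privacy_risk({
    "mandatory_registration_before_first_message",
    "ads_on_free_tier",
    "id_or_biometric_verification",
})
print(score, level)  # 3 flags -> "high"
```

The point is not the numbers but the habit: check each flag explicitly before trusting a platform with sensitive content.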
Platform Privacy Audit: 6 Platforms Compared
Based on publicly available information as of April 2026. Treat this as a starting point — current platform documentation takes precedence.
RPDATE
rpdate.com
Data Collected
Session chat content. No account = no persistent identity-linked storage. No biometrics.
Registration
Not required to start. Email only for saving history.
2026 Incidents
No publicly reported incidents.
Data Deletion
Saved chats deletable in account settings.
Low Risk: Anonymous start is a real privacy feature. No publicly reported incidents as of this writing.
Replika
replika.com
Data Collected
Conversation history, behavioral data, voice data (if enabled), payment data.
Registration
Required. Email linked to all conversation content.
2026 Incidents
GDPR fine from Italian Garante for insufficient transparency.
Data Deletion
Policy wording has changed multiple times; a formal deletion request is recommended.
Moderate-High Risk: Documented regulatory action and policy volatility increase exposure uncertainty.
Character.AI
character.ai
Data Collected
Conversation logs, behavioral ad-targeting data, and biometric verification data for some accounts.
Registration
Required. Ad-driven free tier.
2026 Incidents
Biometric verification introduced for subset of users.
Data Deletion
Limited controls; ad-targeting data lifecycle is less transparent.
High Risk: Mandatory account + ad model + biometric friction create the largest identity surface among mainstream options.
Janitor AI
janitorai.com
Data Collected
Conversation history paired with identity verification data after 2026 policy changes.
Registration
Account plus age/ID verification gate.
2026 Incidents
Identity gate added, increasing breach impact potential.
Data Deletion
Retention terms for verification data are not clearly published.
High Risk: Identity documents linked to sensitive chat logs significantly raise downstream harm in breach scenarios.
CrushOn AI
crushon.ai
Data Collected
Conversation logs and account-linked email data. No biometrics reported.
Registration
Required for most persistent features.
2026 Incidents
No major public incidents reported.
Data Deletion
A deletion policy exists but is less explicit about retention windows.
Moderate Risk: Better identity surface than ID-gated apps, but still email-linked cloud storage.
SillyTavern + Local LLM
Self-hosted
Data Collected
No cloud collection by default. Data stays local.
Registration
None.
2026 Incidents
Not applicable: a local-only setup has no hosted platform to breach.
Data Deletion
Full user control via local file management.
Minimal Risk: Strongest privacy model available if setup complexity is acceptable.
Privacy Comparison Table
Quick reference. Ratings based on publicly available data, April-May 2026.
RPDATE · Privacy by design
Try RPDATE — No Account Required, No Biometrics
Your first chat requires no registration. No identity linked to your conversation.
Browse Characters → No email · No ID · No biometrics · Start anonymously
How to Use AI Companions More Privately — 6 Practical Steps
Concrete actions. These apply regardless of which platform you use.
Start without an account where possible
Some platforms — including RPDATE — allow you to start a session without creating an account. Use this for testing first.
RPDATE: first session fully open — no email required.
Read the data deletion section — not the intro paragraph
Find how permanent deletion works and when retention ends. Ignore marketing summaries.
Search policy for "delete", "retain", or "erasure".
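One way to apply this step programmatically: save the policy page as plain text, then scan it for retention-related terms. A minimal sketch; the keyword list is an assumption, and real policy pages may require copying the text out of the browser first.

```python
# Scan a saved privacy-policy text for retention-related language and
# print each matching sentence. Assumes the policy has been saved as
# plain text (e.g. copy-pasted into policy.txt).
import re

KEYWORDS = ("delete", "deletion", "retain", "retention", "erasure")

def retention_sentences(policy_text: str) -> list[str]:
    """Return sentences that mention any retention/deletion keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [
        s.strip() for s in sentences
        if any(k in s.lower() for k in KEYWORDS)
    ]

sample = (
    "We value your privacy. We may retain data to improve our services. "
    "You can request erasure by contacting support."
)
for sentence in retention_sentences(sample):
    print(sentence)
```

Reading the matching sentences in context is still necessary; the scan only tells you where to look, not whether the policy is acceptable.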
Avoid platforms that require ID for content access
ID verification paired with conversation content creates high breach impact if exposed.
If ID is required for content access, choose an alternative.
Use a dedicated email address — not your primary one
If registration is required, separate identity from your main accounts.
Create a separate email before registering on any new platform.
Do not share real personal details in chat
AI companions do not need your real address, workplace, or legal name to function.
Treat chat logs as potentially accessible on cloud platforms.
For maximum privacy: local setup
SillyTavern with local model means no data leaves your device, but setup is more technical.
Use this route if privacy is non-negotiable.
What "Private" and "Safe" Actually Mean Here
"Safe" and "private" are not the same thing, and conflating them creates confusion about what you are actually evaluating.
Content safety is about moderation and model behavior. Data privacy is about what happens to your content after you send it — retention, access, deletion, and breach exposure.
No cloud-based platform provides absolute privacy. The practical question is how much collection is minimized, how transparent the policy is, and what the exposure looks like if a breach happens.
The relevant frame is not safe vs unsafe. It is which risk level is acceptable for you, given what the platform offers, and what actions reduce that risk.
Frequently Asked Questions
Are my conversations with an AI companion private?
They are stored on platform servers — not private in the way a local file on your device would be. The level of protection varies by platform: some publish encryption-at-rest policies, others do not. Using RPDATE without an account means session content is not linked to an identity. SillyTavern with a local model is the only option where conversations remain entirely on your device.
Bottom Line
The risks associated with AI companion app data privacy are documented — not theoretical. Recent incidents and regulatory actions show clear gaps between collection practices and user expectations.
The practical response is to use the category with accurate information: pick platforms with smaller identity footprint, avoid sharing real personal details, and use dedicated email addresses where required.
If privacy is the priority: RPDATE (anonymous start, no biometrics) or SillyTavern (fully local). Understand identity-linkage tradeoffs before using other platforms for sensitive content.
RPDATE · Start anonymously
Start Anonymously — No Email, No ID, No Signup
Your first chat requires no registration. No identity linked to your conversation.
Start a Session → No account · No biometrics · Session data not stored without signup
About The Author & Editorial Standards
This article was prepared by the RPDATE team based on direct product usage, scenario testing, and platform-level comparison. We update guides when UX, pricing, filtering, or access conditions change.
What was tested:
- Real chat sessions with multiple character types and tags
- Conversation consistency, memory behavior, and prompt adherence
- Onboarding friction: signup, paywalls, platform constraints
Editorial policy
We separate observations from opinion, mark limitations explicitly, and avoid sponsor-driven ranking claims. If a section is outdated, we revise it after verification.
