Your AI Should Work for You, Not Against You
The more an AI knows about you, the more powerful it becomes. That's exactly why privacy isn't optional — it's the foundation everything else is built on.
The trust problem
You're about to tell an AI your deepest fears, your career ambitions, your relationship struggles, your health concerns. This is the most personal data you've ever generated — more intimate than your search history, more revealing than your social media.
And most AI companies treat this data the same way they treat everything else: as fuel for their models, their advertisers, their investors.
We think that's broken. Not because regulation says so, but because you shouldn't have to wonder what happens to your thoughts after you share them.
What we chose and why
When we designed Amili's architecture, we made three decisions that cost us convenience but earned something more important: your trust.
Your data is isolated. Every user gets their own database partition. Your memories, conversations, and extracted facts live in a container that no other user — and no Amili employee — can access through the application. There's no shared pool where your data could accidentally leak into someone else's experience.
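For the technically curious, here's a minimal sketch of what application-level isolation can look like on Cloudflare Workers with D1. This is an illustration, not Amili's actual code; the table and column names are hypothetical.

```typescript
// Hypothetical sketch: every query is bound to the authenticated user's
// ID, so one user's rows can never surface in another user's results.
interface Env {
  DB: D1Database; // Cloudflare D1 binding, declared in wrangler.toml
}

export async function getMemories(env: Env, userId: string) {
  // userId comes from the verified session, never from request input,
  // and the WHERE clause is baked into the statement itself.
  return env.DB
    .prepare("SELECT id, content, created_at FROM memories WHERE user_id = ?")
    .bind(userId)
    .all();
}
```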
We don't train on your conversations. Your messages to Amili are never used to improve our models, fine-tune our systems, or build aggregate datasets. When you tell Amili something, it remembers it for you — not for us.
You own the delete button. When you delete a conversation, it's gone. When you delete a memory, it's gone. Not archived. Not soft-deleted. Not retained for 30 days. Gone. And if you ever want to leave entirely, we'll delete everything — no hoops, no retention, no "we need 90 days to process your request."
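In code terms, a hard delete is a real DELETE statement, not a flipped status flag. Another hypothetical sketch, not our literal schema:

```typescript
// Hypothetical sketch: deletion removes rows outright. There is no
// deleted_at flag to flip, so there is nothing to restore later.
interface Env {
  DB: D1Database; // Cloudflare D1 binding
}

export async function deleteConversation(
  env: Env,
  userId: string,
  conversationId: string
) {
  await env.DB
    .prepare("DELETE FROM conversations WHERE user_id = ? AND id = ?")
    .bind(userId, conversationId)
    .run();
}

// Leaving entirely wipes every table for that user in one batch.
export async function deleteAccount(env: Env, userId: string) {
  await env.DB.batch([
    env.DB.prepare("DELETE FROM memories WHERE user_id = ?").bind(userId),
    env.DB.prepare("DELETE FROM conversations WHERE user_id = ?").bind(userId),
    env.DB.prepare("DELETE FROM users WHERE id = ?").bind(userId),
  ]);
}
```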
Privacy isn't a feature we added to Amili. It's a constraint we designed around from day one. Every architecture decision — from database isolation to memory extraction — was made with the assumption that your data is not ours to use.
The infrastructure behind the promise
Amili runs on Cloudflare's edge network — the same infrastructure that protects millions of websites. Your data is encrypted at rest and in transit. It's stored in the region closest to you, not in a centralized data center on another continent.
Our database runs on Cloudflare D1, and every query is scoped to a single user's isolated partition, never to a shared pool. API authentication uses short-lived tokens that expire automatically. Rate limiting prevents abuse without tracking your behavior.
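To make the token point concrete, here's a minimal sketch of why short-lived tokens need no revocation machinery: expiry is checked on every request, so a stale or leaked token fails closed on its own. The fifteen-minute lifetime below is an assumption for illustration, not our actual setting.

```typescript
// Hypothetical sketch: tokens carry an expiry timestamp and die on
// their own. A real deployment would also verify a signature (e.g. a
// JWT's); only the expiry check is shown here.
const TOKEN_TTL_SECONDS = 15 * 60; // assumed lifetime, for illustration

interface SessionToken {
  userId: string;
  exp: number; // Unix seconds after which the token is rejected
}

function issue(userId: string): SessionToken {
  return { userId, exp: Math.floor(Date.now() / 1000) + TOKEN_TTL_SECONDS };
}

function verify(token: SessionToken): string | null {
  const now = Math.floor(Date.now() / 1000);
  // Expired tokens fail closed: no revocation list required.
  return now < token.exp ? token.userId : null;
}
```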
We chose this stack specifically because it lets us make strong privacy guarantees without asking you to trust our good intentions. The architecture enforces the rules, not our policies.
What we don't do
We don't sell your data. We don't show you ads. We don't build profiles for targeting. We don't share your information with third parties. We don't use your conversations for marketing research. We don't track you across the web.
Our analytics are privacy-first too. We use Plausible — no cookies, no personal data, no tracking across sites. We see aggregate page views and click patterns. We never see who you are or what you said.
Why this matters for AI companions
A personal AI that remembers everything about you is either the most helpful technology ever built — or the most dangerous. The difference is entirely in who controls the data.
If the company controls it, they have leverage over you. They can change their privacy policy. They can be acquired. They can be compelled by governments. They can monetize your innermost thoughts in ways you never agreed to.
If you control it, the AI works for you. Period. That's the only version of this technology we're interested in building.