Personal Data Safety

AI is increasingly interacting with personal data. We're advancing the research and systems for AI that improves your life while you keep full sovereignty over your data.

Our principles

Sovereignty

Personal data safety starts with ownership. Your data is not something you hand over and hope is handled well; it is something you control. You decide what connects, what is used, and what is off limits. A system is only safe in this era if your authority over your data remains intact, even as the AI becomes more capable.

Design against worst-case attacks

We assume your data will be targeted, even if the probability seems low. That includes everything outside our direct control: a stolen device, a laptop left open at a cafe, a convincing phishing email, and more. As personal data becomes more valuable in the AI era, these everyday failure modes matter more, not less.

Provable

You should not have to trust a promise to be safe. A modern safety model has to be inspectable. The protections should be explainable, observable, and provable. Even if most people never read technical details, the fact that the system can be verified changes the entire relationship between the user and the provider. It replaces "trust us" with "you can know."

Least access

Safety is not just about protecting data. It is about limiting access in the first place. Systems should touch the smallest possible amount of data, for the shortest possible time, to do the job. Access should be narrowly scoped, explicitly granted, and easy to revoke. When you design for least access, you reduce the blast radius of mistakes, abuse, and unexpected changes.

Pickle Enclave

Pickle Enclave is an open-source privacy architecture designed to remove the service provider from the trust anchor when AI uses personal data. It keeps your data unreadable to anyone, without requiring you to trust us.
