Privacy isn't a settings page. In AI products, it's the foundation.
An AI assistant is only useful if it can work with real context -- inbox, calendar, documents, and communication history.
But that immediately raises the question: What happens to my data? Who controls it?
We take a privacy-first approach because trust is not optional in professional workflows.
Start with the basics: data minimisation and purpose limitation
If you're building for the UK or EU, regulators are clear: collect what you need, and don't collect what you don't.
The UK ICO describes data minimisation as using the minimum personal data required for your purpose -- no more. [1] And GDPR's principles emphasise purpose limitation: data should only be collected for specified, explicit purposes. [2]
This is more than compliance language. It is a product design principle.
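To make that concrete, here is a minimal TypeScript sketch of purpose limitation at the code level. Every name in it (ContextRequest, ALLOWED_FIELDS, assertMinimised) is a hypothetical illustration, not Clastines' actual interface; the point is simply that each data fetch declares an explicit purpose and may only touch the fields that purpose needs.

```ts
// Hypothetical sketch: every context fetch declares its purpose and the
// minimal set of fields it needs. Anything outside that scope is refused.

type Purpose = "schedule_meeting" | "draft_reply" | "summarise_thread";

interface ContextRequest {
  purpose: Purpose;   // why the data is being accessed
  fields: string[];   // the minimum fields needed for that purpose
}

// Allow-list mapping each purpose to the fields it may touch.
const ALLOWED_FIELDS: Record<Purpose, string[]> = {
  schedule_meeting: ["calendar.availability", "contact.email"],
  draft_reply: ["email.thread_body", "contact.name"],
  summarise_thread: ["email.thread_body"],
};

function assertMinimised(request: ContextRequest): void {
  const allowed = new Set(ALLOWED_FIELDS[request.purpose]);
  const excess = request.fields.filter((f) => !allowed.has(f));
  if (excess.length > 0) {
    // Fail closed: refusing the request is safer than over-collecting.
    throw new Error(
      `Fields not permitted for purpose "${request.purpose}": ${excess.join(", ")}`
    );
  }
}

// Drafting a reply never gets calendar data, even if something asks for it.
assertMinimised({ purpose: "draft_reply", fields: ["email.thread_body"] });        // ok
// assertMinimised({ purpose: "draft_reply", fields: ["calendar.availability"] }); // throws
```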
AI adds new risks -- so we design for them explicitly
Modern AI security guidance repeatedly warns about:
- prompt injection
- insecure output handling
- and sensitive information disclosure [3]
That's why we treat privacy as a system design problem, not just policy text.
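To make "insecure output handling" concrete, here is a hedged sketch of one such guardrail: model output is treated as untrusted data, never passed straight into shells, SQL, or HTML, and screened for obviously sensitive patterns before it goes anywhere. The patterns and function names below are illustrative assumptions; a real system would layer this with proper policy checks rather than rely on a few regexes.

```ts
// Hypothetical sketch: model output is untrusted. Screen it for obviously
// sensitive patterns before it is displayed, stored, or sent onwards.

const SENSITIVE_PATTERNS: RegExp[] = [
  /\b\d{16}\b/,                          // naive card-number shape
  /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/,    // naive IBAN shape
  /(api[_-]?key|secret)\s*[:=]\s*\S+/i,  // credential-looking strings
];

interface ScreenedOutput {
  text: string;
  redactions: number;
}

function screenModelOutput(raw: string): ScreenedOutput {
  let text = raw;
  let redactions = 0;
  for (const pattern of SENSITIVE_PATTERNS) {
    text = text.replace(new RegExp(pattern, pattern.flags + "g"), () => {
      redactions += 1;
      return "[REDACTED]";
    });
  }
  return { text, redactions };
}

// Anything flagged is redacted, and the event can be logged for review.
const result = screenModelOutput("Your card 4111111111111111 is on file.");
console.log(result.text);        // "Your card [REDACTED] is on file."
console.log(result.redactions);  // 1
```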
What we mean by privacy as a user experience
A privacy-respecting assistant should feel:
- predictable (no surprises)
- visible (you can see what it's doing and why)
- controllable (you decide what happens)
Approval-first execution plays a big role here because it reduces accidental or unwanted actions. It aligns with the broader human-in-the-loop approach used to improve accountability in automated systems. [4]
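As a rough sketch of what approval-first can mean in practice (the ProposedAction shape and requestApproval function below are assumptions for illustration, not a documented Clastines interface), any action with side effects is proposed first and only executed after an explicit human decision:

```ts
// Hypothetical sketch: side-effecting actions are proposed, shown to the user,
// and only executed after explicit approval. Read-only work needs no gate.

interface ProposedAction {
  kind: "send_email" | "create_event" | "delete_file";
  summary: string;               // plain-language description shown to the user
  execute: () => Promise<void>;  // the actual side effect, deferred until approved
}

// Stand-in for a real UI prompt; by default it approves nothing.
async function requestApproval(action: ProposedAction): Promise<boolean> {
  console.log(`Approve? ${action.kind}: ${action.summary} (y/n)`);
  return false; // replace with a real prompt or UI callback
}

async function runWithApproval(action: ProposedAction): Promise<void> {
  const approved = await requestApproval(action);
  if (!approved) {
    console.log(`Skipped: ${action.summary}`);
    return; // nothing happens without an explicit "yes"
  }
  await action.execute();
  console.log(`Done: ${action.summary}`);
}

// The draft is prepared up front, but sending waits for the user's decision.
runWithApproval({
  kind: "send_email",
  summary: "Send reply to Alex confirming Tuesday 10:00",
  execute: async () => { /* call the email-sending API here */ },
});
```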
How the industry is talking about data use (and why it matters)
It is also helpful to know how major platforms describe their data practices:
- OpenAI states that, by default, business data is not used for training in its enterprise offerings, unless you explicitly opt in. [5]
- Google's Workspace privacy docs say chats and uploaded files in Gemini for Workspace are not reviewed by humans or used to train models without permission. [6]
We cite these not because "a big company says so" equals perfect privacy, but because they show where the bar is moving: toward clear commitments and user control.
The Clastines privacy mindset
Our north star is simple: Clastines should feel like a colleague you trust -- not a black box you tolerate.
That means designing around:
- minimisation (only what's necessary)
- purpose limitation (clear use boundaries)
- guardrails against common AI risks
- and approval-first execution for meaningful actions
Trust is the product. Privacy is how we earn it.
References

1. UK ICO, "Principle (c): Data minimisation", https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/data-protection-principles/a-guide-to-the-data-protection-principles/data-minimisation/
2. Data Protection Commission (Ireland), "Principles of Data Protection", https://www.dataprotection.ie/en/individuals/data-protection-basics/principles-data-protection
3. OWASP, "OWASP Top 10 for Large Language Model Applications", https://owasp.org/www-project-top-10-for-large-language-model-applications/
4. IBM, "What Is Human In The Loop (HITL)?", https://www.ibm.com/think/topics/human-in-the-loop
5. OpenAI, "Enterprise privacy at OpenAI", https://openai.com/enterprise-privacy/
6. Google Workspace Admin Help, "Generative AI in Google Workspace Privacy Hub", https://support.google.com/a/answer/15706919?hl=en