At Yorph, we know security isn't a checkbox — it's a commitment. When you're working with AI-powered data systems, trust is everything. That's why we're designing Yorph with a security-first mindset, from data flow to agent behavior.
Here's how we think about it:
1. Security Isn't a Feature — It's the Foundation
We treat security as a core part of our platform, not something bolted on later. Whether you're uploading a file or asking the agent a question, we aim to make sure your data stays yours, your experience is private, and your control is absolute.
This isn't just about compliance or encryption — it's about earning your trust with clear, respectful defaults.
2. Two Kinds of Data, Two Sets of Considerations
We think about data in two distinct categories:
- Your actual data: the datasets you upload or sync to Yorph
- Your interaction data: the questions you ask, the prompts you give, and how you use the platform
We handle both differently — but securely.
✅ By default, we don't store your uploaded or synced data. Unless you explicitly opt in, your data is deleted when the session ends or the execution completes. The transformation logic (i.e., how the agent worked with the data) is saved so you can come back to the tool tomorrow and use it again, but the underlying data is not.
What that means: If you didn't opt in to data retention, the next time you want to rerun that flow, you'll need to re-upload or re-sync the data.
We're okay with that tradeoff if it means tighter security and more user control.
✅ If you do opt in, we retain your data for up to 30 days. This makes it easier to iterate and re-use workflows, while keeping boundaries clear and temporary.
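That retention lifecycle can be sketched as a simple policy check. This is a minimal illustration of the rules described above, not Yorph's actual implementation; the `Dataset` type and `should_delete` helper are hypothetical names.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # opt-in retention window described above


@dataclass
class Dataset:
    uploaded_at: datetime
    retention_opt_in: bool


def should_delete(ds: Dataset, session_ended: bool, now: datetime) -> bool:
    """Default: delete as soon as the session ends. Opt-in: keep up to 30 days."""
    if not ds.retention_opt_in:
        return session_ended
    return now - ds.uploaded_at > timedelta(days=RETENTION_DAYS)
```

The key design point is that retention is opt-in per dataset, so the safe path (deletion) is the default and keeping data requires an explicit flag.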
3. No, Your Data Doesn't Train Our Models
This part is non-negotiable: Your uploaded or synced data will never be used to train any LLM.
We use enterprise-grade model providers and security policies to ensure customer data is never leaked or exposed, whether through API calls, internal tools, or prompt injection, and we maintain strict data segregation throughout.
Instead, we improve the platform using anonymized user interaction patterns — for example, if agents regularly struggle with messy nested JSON, we simulate those patterns in synthetic test cases. This helps the agent improve over time without ever looking at your actual data.
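To make the "interaction patterns, not data" distinction concrete, here is one way such anonymization could look: keep only structural signals (the action taken, the error type, the shape of the payload) and drop every raw value. The function name and field names are illustrative assumptions, not Yorph's pipeline.

```python
import hashlib


def anonymize_interaction(event: dict) -> dict:
    """Reduce an interaction event to structural signals only.

    Raw values and prompts are dropped; the user id is replaced with an
    irreversible hash. What survives is enough to notice patterns like
    "agents often fail on deeply nested JSON" without seeing any data.
    """
    return {
        "action": event["action"],              # e.g. "flatten_json"
        "error_type": event.get("error_type"),  # e.g. "nested_depth_exceeded"
        # shape of the payload (value types), never the values themselves
        "schema_shape": sorted(type(v).__name__ for v in event.get("payload", {}).values()),
        "user": hashlib.sha256(event["user_id"].encode()).hexdigest()[:12],
    }
```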
4. Security in an Agentic World
Because we're building an agentic platform, we also think deeply about a new class of security considerations:
Agent Transparency & Explainability
- Every agent action is logged and explainable
- Users can inspect transformation steps, view or download generated code, and understand why the agent did what it did
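The logging described above amounts to an append-only audit trail where each agent step carries its reason and the code it ran. A minimal sketch of that idea (the `AuditLog` class is a hypothetical name, not Yorph's API):

```python
import json
import time


class AuditLog:
    """Append-only record of agent actions, linking each step to its code."""

    def __init__(self):
        self.entries = []

    def record(self, step: str, reason: str, generated_code: str) -> dict:
        entry = {
            "ts": time.time(),
            "step": step,            # what the agent did
            "reason": reason,        # why it did it
            "code": generated_code,  # the exact code the user can inspect
        }
        self.entries.append(entry)
        return entry

    def export(self) -> str:
        """User-downloadable JSON trace of the whole run."""
        return json.dumps(self.entries, indent=2)
```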
Human-in-the-Loop by Default
- In sensitive flows, agents don't auto-execute without review
- Users approve or reject changes, giving them full oversight of how data is transformed
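That approval gate can be expressed as a small control-flow sketch: sensitive changes block on explicit approval, and a rejection means nothing is applied. The function and its callback parameters are illustrative assumptions.

```python
from typing import Callable


def execute_with_review(
    change: dict,
    is_sensitive: Callable[[dict], bool],
    approve: Callable[[dict], bool],
    apply: Callable[[dict], None],
) -> bool:
    """Apply a change only if it is non-sensitive or explicitly approved."""
    if is_sensitive(change) and not approve(change):
        return False  # rejected: the change is never applied
    apply(change)
    return True
```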
Access Control, Data Segregation & Guardrails
- Agents can only interact with the APIs, files, or databases the user explicitly authorizes
- Customer environments are fully isolated and segregated — no risk of data leakage across tenants
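The combination of explicit authorization and tenant scoping can be sketched as a per-tenant allowlist: an agent's access check fails unless that tenant explicitly granted that resource. The `ResourceGuard` class is a hypothetical illustration, not Yorph's actual access-control layer.

```python
class ResourceGuard:
    """Per-tenant allowlist: agents touch only what the user authorized."""

    def __init__(self):
        # tenant_id -> set of authorized resource identifiers
        self._grants: dict[str, set[str]] = {}

    def authorize(self, tenant: str, resource: str) -> None:
        """Record an explicit user grant for this tenant."""
        self._grants.setdefault(tenant, set()).add(resource)

    def check(self, tenant: str, resource: str) -> bool:
        """Deny by default; grants never cross tenant boundaries."""
        return resource in self._grants.get(tenant, set())
```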
LLM Usage with Boundaries
- Prompts are pre-processed to redact or abstract sensitive inputs
- We only use LLMs that support strict data isolation and enterprise-grade SLAs
- Model usage is logged, rate-limited, and auditable
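The pre-processing step above can be sketched as pattern-based redaction: sensitive substrings are replaced with typed placeholders before anything reaches the model. The two patterns shown are illustrative only; a production redactor would cover far more categories.

```python
import re

# Illustrative patterns only; real redaction covers many more data types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders before the LLM call."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Typed placeholders (rather than blanks) preserve enough context for the model to reason about the prompt's structure without seeing the sensitive value itself.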
Final Thoughts: Transparency Over Assumptions
Security isn't just about what we protect — it's about what we make clear.
AI platforms have to hold themselves to a higher bar. We're building Yorph to empower people working with data — and that starts with being upfront about how we handle it.
We'll keep evolving our approach, but one thing won't change: Your data belongs to you. And our job is to treat it that way.
Ready to Yorph? Join our waitlist and read more about our features.
Check out our other blog posts:
Multi-Agent Systems: Useful Abstraction or Overkill? - where we dive into our take on multi-agent systems and whether they're truly necessary for modern AI applications.
The Real Key to AI-Driven Data Engineering - It's Not What AI Does, But What It Doesn't Do - where we explore what makes AI for data engineering successful.