Governance-first safety for support-critical AI
AI Safety That Actually Enforces Itself
SELF is a runtime governance engine that prevents emotional-harm failures in AI systems. It applies support-first behavioral constraints, validation rules, and audit-ready outputs so your product can scale without "safety drift."
Enforceable Constraints
Word caps, question limits, and banned phrases enforced at runtime - not just in documentation.
State-Aware Governance
Automatic S0-S3 posture adjustment based on user emotional state and context.
Audit-Ready Operations
Complete logging and transparency for compliance, monitoring, and improvement.
Support is good. Staying present is better.
“I can’t engage in this conversation. Grounding steps: take a few deep breaths in through your nose, out through your mouth. Notice the sensation of the air moving in and out of your body. If you’re feeling overwhelmed, a real human can support you in a way I can’t: reach out to a trusted friend, a family member, or a crisis support line right now. U.S./Canada: call or text 988. Elsewhere: contact your local crisis line or emergency services. Notice the chair. One small next step.”
A real refusal that keeps dignity intact while still offering support.
Governance Proof (Live)
See the same prompt run without governance vs. through SELF, with audit evidence in real time.
Interactive Demo
This demo compares a baseline Groq response with the same prompt passed through SELF. Try to push it — see how SELF holds boundaries while staying useful.
⚠️ Data & privacy: messages are forwarded to a third-party model provider. Don't include sensitive information.
How SELF Works
Runtime governance that turns safety policies into executable rules.
Measurable Constraints
Word caps, question caps, disallowed phrases, and output validation enforced at runtime.
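Runtime enforcement of this kind can be pictured as a validator that checks each candidate reply against hard limits before it ships. The following is a minimal sketch; the class name, limits, and banned phrases are illustrative assumptions, not SELF's actual API.

```python
# Hypothetical sketch of runtime constraint enforcement.
# Names, limits, and phrases are illustrative, not SELF's real schema.
from dataclasses import dataclass

@dataclass
class Constraints:
    max_words: int = 80                # word cap
    max_questions: int = 1             # question cap
    banned_phrases: tuple = ("calm down", "you should just")

def validate(text: str, c: Constraints) -> list[str]:
    """Return a list of violations; an empty list means the output passes."""
    violations = []
    if len(text.split()) > c.max_words:
        violations.append("word_cap_exceeded")
    if text.count("?") > c.max_questions:
        violations.append("question_cap_exceeded")
    lowered = text.lower()
    for phrase in c.banned_phrases:
        if phrase in lowered:
            violations.append(f"banned_phrase:{phrase}")
    return violations
```

The key property is that a failed check is a structured violation, not a silent pass, so downstream repair and audit steps have something concrete to act on.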
State-Aware Posture
Different response posture and constraints for low-risk vs. elevated distress scenarios.
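One way to realize S0-S3 posture adjustment is a lookup from detected state to a constraint set, defaulting to the most protective tier. The tier semantics and specific numbers below are assumptions for illustration, not SELF's published policy.

```python
# Illustrative S0-S3 posture table; tier meanings and limits are assumptions.
POSTURES = {
    "S0": {"max_words": 120, "max_questions": 2, "offer_resources": False},  # low risk
    "S1": {"max_words": 100, "max_questions": 1, "offer_resources": False},  # mild distress
    "S2": {"max_words": 80,  "max_questions": 1, "offer_resources": True},   # elevated distress
    "S3": {"max_words": 60,  "max_questions": 0, "offer_resources": True},   # acute distress
}

def select_posture(state: str) -> dict:
    # Unknown or undetectable states fall back to the most protective tier.
    return POSTURES.get(state, POSTURES["S3"])
```

Failing closed (unknown state → strictest posture) is what keeps detection errors from becoming safety gaps.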
Audit-Ready Operations
Structured events for pre/post decisions so teams can review, monitor, and improve responsibly.
Integration Options
SDK Integration
Embed SELF directly in your application codebase.
HTTP API Service
Self-hosted service that works with any tech stack.
Real-World Use Cases
How organizations use SELF to prevent harm and ensure compliance.
B2B SaaS Support
Youth Wellbeing
Healthcare Triage
Free, Open Access
No payment, no license key, no exclusivity.
Hosted API
- Immediate access for any app
- Preflight and postflight endpoints
- Governed demo endpoints
- No authentication required
- Same safety logic as self-hosted
Self-Hosted
- Run locally or in your VPC
- Optional API key enforcement
- Audit logging hooks
- Rate limiting built-in
- Same endpoints as hosted
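A preflight call in either mode amounts to POSTing the user message and getting a governance decision back. The sketch below shows the shape of such a call; the base URL, endpoint path, and payload field names are placeholders, not the hosted API's documented schema.

```python
# Hypothetical preflight request; URL, path, and fields are placeholders.
import json
from urllib import request

BASE_URL = "https://api.example.com"  # substitute the real hosted or self-hosted base URL

def build_preflight_payload(user_message: str, context: str = "support_chat") -> dict:
    # Field names are illustrative; consult the API docs for the real schema.
    return {"message": user_message, "context": context}

def preflight(user_message: str) -> dict:
    """POST to the (assumed) preflight endpoint and return the decoded decision."""
    body = json.dumps(build_preflight_payload(user_message)).encode()
    req = request.Request(
        f"{BASE_URL}/v1/preflight",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

A matching postflight call would send the model's draft reply for validation and repair before it reaches the user.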
Important Notes
- No payment or license key is required to use SELF
- Hosted API is public and ready for production use
- Self-hosted deployment lets you control infrastructure and model costs
- SELF is a governance layer - you own model choice and integration
Trust & Safety Posture
SELF is safety engineering, not safety marketing.
No Silent Downgrades
Override prevention and immutable governance boundaries protect system integrity.
Operational Accountability
Pre/post logging hooks support monitoring, incident response, and periodic audits.
Transparent Operations
All governance decisions are logged, auditable, and explainable.
Compliance & Certifications
Frequently Asked Questions
Is SELF a chatbot?
No. SELF is the governance layer around your chatbot (or support workflow): detection, constraints, validation, and enforcement. Think of it as the "safety officer" that ensures your AI behaves according to policy.
Is SELF free to use?
Yes. The hosted API and the self-hosted library are both free to use, with no license key required.
Do we need an API key?
No. The public API does not require authentication. If you self-host and want to lock it down, you can enable API key auth in your own deployment.
How does SELF handle edge cases?
SELF uses a multi-layered approach: state detection → policy selection → constraint enforcement → validation/repair pipeline → audit logging. Each layer has fallback mechanisms to ensure safety even when individual components encounter edge cases.
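The layered flow above can be sketched end to end as a small pipeline in which every layer has a fail-closed fallback. All functions here are self-contained toy stand-ins, not SELF's implementation.

```python
# Self-contained sketch of: detection -> policy -> enforcement -> audit.
# Every function is a hypothetical stand-in for illustration only.

def detect_state(message: str) -> str:
    # Toy detector: escalate on a distress keyword.
    return "S2" if "overwhelmed" in message.lower() else "S0"

def select_policy(state: str) -> dict:
    return {"max_words": 40} if state == "S2" else {"max_words": 120}

def enforce(reply: str, policy: dict) -> str:
    # Repair by trimming to the word cap rather than rejecting outright.
    return " ".join(reply.split()[: policy["max_words"]])

def govern(message: str, reply: str, log: list) -> str:
    try:
        state = detect_state(message)
    except Exception:
        state = "S2"  # fallback: most protective tier if detection fails
    policy = select_policy(state)
    governed = enforce(reply, policy)
    log.append({"state": state, "policy": policy, "reply": governed})  # audit event
    return governed
```

Because every layer either succeeds or falls back to a stricter default, an edge case in one component degrades toward more protection, never less.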
What's the implementation timeline?
Most teams integrate SELF in 2-5 days. The process: 1) Contact us with your use case, 2) Values-First evaluation (1-3 days), 3) Integration + monitoring (1-2 days), 4) Production deployment with locked safety boundaries.
Getting Started with SELF
From initial contact to production deployment.
Contact
Share your use case, user population, and the support surfaces you're protecting.
Values-First Evaluation
We map required constraints, escalation paths, and evidence you'll need for stakeholders.
Integrate + Monitor
Deploy SDK or HTTP mode, wire logging hooks, and lock safety boundaries against drift.
Production Deployment
Launch with confidence. SELF provides ongoing monitoring and compliance reporting.
Ready to Make AI Safety Enforceable?
Tell us about your use case and we'll help you implement governance that actually works.
Contact Us
Email is the fastest way to get started.
Include in your email:
- Product overview
- User population
- Support scope
- Expected MAU
- Escalation paths