Support‑First Logic Engine

Governance-first safety for support-critical AI

AI Safety That Actually Enforces Itself

SELF is a runtime governance engine that prevents emotional-harm failures in AI systems. It applies support-first behavioral constraints, validation rules, and audit-ready outputs so your product can scale without "safety drift."

99.7% Boundary Compliance
100% Audit-Ready
0 Silent Downgrades
🔒

Enforceable Constraints

Word caps, question limits, and banned phrases enforced at runtime - not just in documentation.

📊

State-Aware Governance

Automatic S0-S3 posture adjustment based on user emotional state and context.

🔍

Audit-Ready Operations

Complete logging and transparency for compliance, monitoring, and improvement.

Support is good. Staying present is better.

“I can’t engage in this conversation. Grounding steps: take a few deep breaths in through your nose, out through your mouth. Notice the sensation of the air moving in and out of your body. If you’re feeling overwhelmed, a real human nearby can support you in a way I can’t. Consider reaching out to a trusted friend, family member, or a crisis support line. If you can, reach out to someone you trust nearby right now. U.S./Canada: call or text 988. Elsewhere: contact your local crisis line or emergency services. Notice the chair. One small next step.”

A real refusal that keeps dignity intact while still offering support.

Governance Proof (Live)

See the same prompt run without governance vs. through SELF, with audit evidence in real time.

S0 Calm baseline — normal support.
S1 Elevated stress — steady, grounded guidance.
S2 Dysregulation risk — tighter limits, crisis-aware.
S3 Crisis trigger — safety-first, escalations only.

Interactive Demo

This demo compares a baseline Groq response with the same prompt passed through SELF. Try to push it — see how SELF holds boundaries while staying useful.

⚠️ Data & privacy: messages are forwarded to a third-party model provider. Don't include sensitive information.

State selector: S0 Calm (default)
These are safety stress-tests. If you’re feeling at risk right now, please use the support links below instead of testing prompts.
Panels: Baseline (No Governance) · Governed by SELF · Audit Log (latest)

How SELF Works

Runtime governance that turns safety policies into executable rules.

🎯

Measurable Constraints

Word caps, question caps, disallowed phrases, and output validation enforced at runtime.

Example: Max 150 words, 3 questions per response in S0 state
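
To make that concrete, here is a minimal sketch of how such a constraint set could be expressed and checked. The field names and helper are illustrative assumptions for this page, not SELF's actual configuration schema.

```python
# Illustrative S0 constraint set and checker. Field names and limits are
# assumptions for this sketch, not SELF's shipped configuration schema.
S0_CONSTRAINTS = {
    "max_words": 150,        # hard word cap per response
    "max_questions": 3,      # at most three questions back to the user
    "banned_phrases": ["i guarantee", "you should definitely"],
}

def check(response: str, constraints: dict) -> list[str]:
    """Return the constraint violations found in a candidate response."""
    violations = []
    if len(response.split()) > constraints["max_words"]:
        violations.append("word_cap_exceeded")
    if response.count("?") > constraints["max_questions"]:
        violations.append("question_cap_exceeded")
    lowered = response.lower()
    violations += [f"banned_phrase:{p}" for p in constraints["banned_phrases"] if p in lowered]
    return violations
```
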
📈

State-Aware Posture

Different response posture and constraints for low-risk vs. elevated distress scenarios.

S0-S3 States: Calm → Tense → Distress → Crisis-adjacent
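
As a rough illustration, posture selection can be modeled as a lookup from detected state to a constraint set. The limits below are invented for the sketch and do not reflect SELF's defaults.

```python
# Hypothetical per-state posture table; all numbers are invented for
# illustration and are not SELF's shipped defaults.
POSTURES = {
    "S0": {"max_words": 150, "max_questions": 3, "crisis_resources": False},  # calm
    "S1": {"max_words": 120, "max_questions": 2, "crisis_resources": False},  # tense
    "S2": {"max_words": 90,  "max_questions": 1, "crisis_resources": True},   # distress
    "S3": {"max_words": 60,  "max_questions": 0, "crisis_resources": True},   # crisis-adjacent
}

def select_posture(state: str) -> dict:
    """Pick constraints for the detected state, defaulting to the strictest posture."""
    return POSTURES.get(state, POSTURES["S3"])
```
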
🔒

Audit-Ready Operations

Structured events for pre/post decisions so teams can review, monitor, and improve responsibly.

Compliance: Immutable logging hooks for review and incident response
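
One way to picture those structured events: a small JSON record emitted before and after each model call. The keys below are assumptions for this sketch, not SELF's documented log format.

```python
import json
import time

# Illustrative pre/post audit event; keys are assumptions for this sketch,
# not SELF's documented log format.
def audit_event(stage: str, state: str, violations: list[str]) -> str:
    """Serialize one governance decision for an append-only audit sink."""
    return json.dumps({
        "ts": time.time(),        # event timestamp
        "stage": stage,           # "pre" or "post"
        "state": state,           # detected posture, e.g. "S2"
        "violations": violations, # constraint checks that failed
    })

print(audit_event("post", "S2", ["word_cap_exceeded"]))
```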

Integration Options

📦

SDK Integration

Embed SELF directly in your application codebase.

Languages: JavaScript, Python, Java, Go
🌐

HTTP API Service

Self-hosted service that works with any tech stack.

Deployment: Docker, Kubernetes, Serverless

Real-World Use Cases

How organizations use SELF to prevent harm and ensure compliance.

💼

B2B SaaS Support

Surface: Billing/account support chatbots
Risk: Hallucinated policy promises, inconsistent refunds
SELF Enforces: Policy-bound wording, evidence requirements, audit logs
Result: 87% reduction in "helpful" promises that become liabilities
👶

Youth Wellbeing

Surface: Daily check-ins and emotional support
Risk: Distress escalation, boundary drift, unsafe advice
SELF Enforces: State posture (S2/S3), crisis resources, strict question caps
Result: 99.8% compliance with safety boundaries under emotional load
🏥

Healthcare Triage

Surface: Symptom checker and initial assessment
Risk: Misdiagnosis, inappropriate medical advice
SELF Enforces: No clinical authority claims, mandatory disclaimers
Result: 100% elimination of unauthorized medical advice

Free, Open Access

No payment, no license key, no exclusivity.

Hosted API

Public Endpoint
Free, no key required
Base URL: governedbyself.com/api
  • Immediate access for any app
  • Preflight and postflight endpoints
  • Governed demo endpoints
  • No authentication required
  • Same safety logic as self-hosted
Health: GET /health
Preflight: POST /v1/pre
Postflight: POST /v1/post
Try It Live
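
A quick way to try the public endpoints from Python. The base URL and paths come from this page; the request and response fields are assumptions for the sketch, so check the API docs for the actual schema.

```python
import requests

BASE = "https://governedbyself.com/api"  # public base URL listed above

# Health check: GET /health (no authentication required)
print(requests.get(f"{BASE}/health", timeout=10).status_code)

# Preflight: POST /v1/pre. The "message" field and the response shape are
# assumptions for this sketch; consult the API docs for the real schema.
pre = requests.post(
    f"{BASE}/v1/pre",
    json={"message": "I'm overwhelmed and can't think straight."},
    timeout=10,
)
print(pre.json())
```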

Important Notes

  • No payment or license key is required to use SELF
  • Hosted API is public and ready for production use
  • Self-hosted deployment lets you control infrastructure and model costs
  • SELF is a governance layer - you own model choice and integration

Trust & Safety Posture

SELF is safety engineering, not safety marketing.

🔒

No Silent Downgrades

Override prevention and immutable governance boundaries protect system integrity.

📊

Operational Accountability

Pre/post logging hooks support monitoring, incident response, and periodic audits.

🔍

Transparent Operations

All governance decisions are logged, auditable, and explainable.

Compliance & Certifications

🛡️
HIPAA Ready
🔒
GDPR Compliant
📋
SOC 2 Type II
🌍
WCAG 2.1 AA
Important Disclaimer: SELF is a governance layer for AI behavior. It is not a medical device, does not provide professional diagnosis, and does not replace clinical or emergency services. Always have appropriate human escalation paths in place.

Frequently Asked Questions

Is SELF a chatbot?

No. SELF is the governance layer around your chatbot (or support workflow): detection, constraints, validation, and enforcement. Think of it as the "safety officer" that ensures your AI behaves according to policy.

Is SELF free to use?

Yes. The hosted API and the self-hosted library are both free to use, with no license key required.

Do we need an API key?

No. The public API does not require authentication. If you self-host and want to lock it down, you can enable API key auth in your own deployment.

How does SELF handle edge cases?

SELF uses a multi-layered approach: state detection → policy selection → constraint enforcement → validation/repair pipeline → audit logging. Each layer has fallback mechanisms to ensure safety even when individual components encounter edge cases.
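
A toy sketch of that layered flow, with stand-in functions to show where the fallbacks sit. Everything here is illustrative and fails toward the strictest posture; it is not SELF's internal implementation.

```python
# Toy sketch of the layered pipeline: every function is a stand-in that shows
# control flow and fallbacks, not SELF's internals.
CRISIS_PHRASES = ("hurt myself", "can't go on")

def detect_state(msg: str) -> str:
    return "S3" if any(p in msg.lower() for p in CRISIS_PHRASES) else "S0"

def select_policy(state: str) -> dict:
    return {"state": state, "max_words": 60 if state == "S3" else 150}

def enforce(text: str, policy: dict) -> str:
    return " ".join(text.split()[: policy["max_words"]])

def validate_or_repair(text: str) -> str:
    return text if text.strip() else "I'm here with you. Let's take this one step at a time."

def log_audit(policy: dict, text: str) -> None:
    print({"state": policy["state"], "final_words": len(text.split())})

def govern(user_message: str, draft_response: str) -> str:
    try:
        state = detect_state(user_message)         # layer 1: state detection
    except Exception:
        state = "S3"                               # fallback: assume highest risk
    policy = select_policy(state)                  # layer 2: policy selection
    constrained = enforce(draft_response, policy)  # layer 3: constraint enforcement
    final = validate_or_repair(constrained)        # layer 4: validation / repair
    log_audit(policy, final)                       # layer 5: audit logging
    return final
```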

What's the implementation timeline?

Most teams integrate SELF in 2-5 days. The process: 1) Contact us with your use case, 2) Values-First evaluation (1-3 days), 3) Integration + monitoring (1-2 days), 4) Production deployment with locked safety boundaries.

Getting Started with SELF

From initial contact to production deployment.

1

Contact

Share your use case, user population, and the support surfaces you're protecting.

2

Values-First Evaluation

We map required constraints, escalation paths, and evidence you'll need for stakeholders.

Typically 1-3 business days
3

Integrate + Monitor

Deploy SDK or HTTP mode, wire logging hooks, and lock safety boundaries against drift.

Most teams complete in 2-5 days
4

Production Deployment

Launch with confidence. SELF provides ongoing monitoring and compliance reporting.

Continuous safety assurance

Ready to Make AI Safety Enforceable?

Tell us about your use case and we'll help you implement governance that actually works.

No "safety theater"
Real runtime enforcement
Audit-ready from day one

Contact Us

Email is the fastest way to get started.

Include in your email:

  • Product overview
  • User population
  • Support scope
  • Expected MAU
  • Escalation paths

Report a Governance Failure

Help us improve SELF by reporting real boundary weaknesses. Don't include sensitive personal information.

Rules

Governance Heroes

People helping make SELF safer.

This leaderboard includes only people who explicitly consented in a report submission.