Active on every request

Every request.
Protected.

5 guardrails run on every organization from day one. Block abuse, redact PII, catch leaked API keys, stop prompt injection. Under 2ms. Zero config.

guardrail-demo.py
# Your app sends a normal request...
messages = [{"role": "user", "content":
    "My SSN is 123-45-6789 and key is sk-proj-abc123..."
}]
# Nemo redacts PII + secrets before the LLM sees it:
# "My SSN is [SSN_REDACTED] and key is [API_KEY_REDACTED]..."
# Prompt injection? Blocked before it reaches the model

2 redacted · 0 blocked · 1.4ms

Active on every organization

5 guardrails. Zero setup.

API Key Leak Protection

pre-call · redact

Scans every prompt for OpenAI, Anthropic, AWS, and generic API keys. Redacts them before the LLM sees the message. Your users' secrets never leave your perimeter.

User prompt:
My key is sk-proj-abc123def456ghi789jklmno012
What the LLM receives:
My key is [API_KEY_REDACTED]
OpenAI sk-* · Anthropic sk-ant-* · AWS AKIA* · Bearer tokens · Generic secrets
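The detection layer can be pictured as ordered regex substitution over the prompt text. A minimal sketch, with illustrative patterns only (the shipped rule set is broader and the exact expressions are an assumption):

```python
import re

# Illustrative patterns; stand-ins for the shipped rule set.
KEY_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{16,}"),     # Anthropic
    re.compile(r"sk-[A-Za-z0-9_-]{16,}"),         # OpenAI (incl. sk-proj-)
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID
    re.compile(r"Bearer\s+[A-Za-z0-9._-]{20,}"),  # bearer tokens
]

def redact_keys(text: str) -> str:
    """Substitute a placeholder for every matched secret before forwarding."""
    for pattern in KEY_PATTERNS:
        text = pattern.sub("[API_KEY_REDACTED]", text)
    return text
```

Checking the longer `sk-ant-` prefix before the generic `sk-` prefix keeps Anthropic keys from being half-matched by the broader pattern.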

Harmful Content

pre-call · block

Blocks prompts containing violence, weapons, drug-synthesis, and CSAM keywords, with leet-speak normalization to catch obfuscated spellings.

403 guardrail_blocked
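Leet-speak normalization means mapping common character substitutions back to letters before matching the blocklist, so "w3ap0n" and "weapon" hit the same rule. A minimal sketch; the keyword set and substitution map here are stand-ins, not Nemo's real rules:

```python
# Stand-in blocklist; the shipped filter covers the categories named above.
BLOCKED_KEYWORDS = {"weapon", "explosive"}

# Map common leet substitutions back to letters before matching.
LEET_MAP = str.maketrans("013457@$", "oieastas")

def is_blocked(prompt: str) -> bool:
    """Normalize leet-speak and case, then match against the blocklist."""
    normalized = prompt.lower().translate(LEET_MAP)
    return any(word in normalized for word in BLOCKED_KEYWORDS)
```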

Output Safety

post-call · block

Scans LLM responses for sexually explicit content. Blocks delivery before it reaches your users.

Response scanned post-generation
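The post-call gate mirrors the pre-call one: the scan runs on the model's output, and a hit stops delivery. A control-flow sketch, with the scan function standing in for the real classifier:

```python
def deliver(response_text: str, output_scan) -> str:
    """Run the post-call scan on the model's output; block on a hit."""
    if output_scan(response_text):
        # The client receives a guardrail error instead of the flagged text.
        raise PermissionError("403 guardrail_blocked: output_safety")
    return response_text
```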

PII Redaction

pre-call · redact

SSNs, credit cards, and emails replaced with safe placeholders. The LLM never sees the real data.

SSN: 123-45-6789 → [SSN_REDACTED]
CC: 4111...1111 → [CC_REDACTED]
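The SSN and credit-card rows follow the same substitution pattern as key redaction. A minimal sketch with illustrative regexes; real PII detection (e.g. via Presidio) covers far more entity types and formats:

```python
import re

# Illustrative patterns; production PII detection is broader than this.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CC_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")  # 13-16 digits, optional separators

def redact_pii(text: str) -> str:
    """Replace detected PII with safe placeholders before the LLM sees it."""
    text = SSN_PATTERN.sub("[SSN_REDACTED]", text)
    text = CC_PATTERN.sub("[CC_REDACTED]", text)
    return text
```

Running the SSN rule first matters: a nine-digit SSN is too short for the card pattern, but ordering keeps overlapping rules predictable.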

Prompt Injection Guard

pre-call · block

Heuristic detection across 6 attack categories. Catches jailbreaks, instruction overrides, system prompt extraction, delimiter injection, encoding tricks, and DAN-style role switches.

Instruction override
Role switching
Prompt extraction
Delimiter injection
Encoding tricks
Jailbreak (DAN)
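One way to picture the heuristic layer: a table of category-to-pattern rules, where any hit flags the request. The patterns below are simplified stand-ins for the shipped detectors, one per category above:

```python
import re

# Simplified stand-ins; one illustrative pattern per attack category.
INJECTION_HEURISTICS = {
    "instruction_override": re.compile(r"ignore (all |your )?(previous|prior) instructions", re.I),
    "role_switching": re.compile(r"you are now|pretend (to be|you are)", re.I),
    "prompt_extraction": re.compile(r"(repeat|reveal|print) your system prompt", re.I),
    "delimiter_injection": re.compile(r"<\|im_start\|>|</?system>", re.I),
    "encoding_tricks": re.compile(r"decode (this |the )?base64|rot13", re.I),
    "jailbreak_dan": re.compile(r"\bDAN\b|do anything now", re.I),
}

def detect_injection(prompt: str) -> list[str]:
    """Return every attack category the prompt trips."""
    return [name for name, rx in INJECTION_HEURISTICS.items() if rx.search(prompt)]
```

A dictionary of compiled patterns keeps the scan cheap: one pass per category, no model call, which is how a heuristic layer stays within the sub-2ms budget.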

The request pipeline

Guardrails run inside the proxy path. No sidecar. No extra API calls. No SDK changes.

Your app

OpenAI SDK

Pre-call scan

PII, keys, injection

LLM provider

GPT-4o, Claude, etc.

Post-call scan

Output safety

Response

Clean output
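The four hops above reduce to a small control flow. A sketch of the proxy path, with pre_call, call_provider, and post_call as stand-ins for the real stages:

```python
def proxy_request(request, pre_call, call_provider, post_call):
    """Pre-call scan -> provider -> post-call scan, all inside one hop."""
    scanned = pre_call(request)                    # PII, keys, injection
    if scanned["blocked"]:
        raise PermissionError("403 guardrail_blocked")
    response = call_provider(scanned["request"])   # GPT-4o, Claude, etc.
    if post_call(response)["blocked"]:             # output safety
        raise PermissionError("403 guardrail_blocked")
    return response                                # clean output
```

Because the scans live inside the proxy, the client makes exactly one request either way; a block simply changes what comes back.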

<2ms

Pre-call overhead

0ms

SDK changes

5

Default guardrails

9 providers. One toggle.

Mix Presidio for PII, keyword filters for policy, custom webhooks for your business logic — all from the same dashboard.

Presidio
Microsoft
Regex Patterns
Built-in
Keyword Filter
Built-in
Prompt Injection
Built-in
Custom Webhook
Yours
Azure Content Safety (Soon)
Microsoft
Bedrock Guardrails (Soon)
AWS
Lakera Guard (Soon)
Lakera
Aporia (Soon)
Aporia
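A custom webhook lets the proxy defer a verdict to your own service. The field names below (`content`, `action`) are assumptions for illustration, not a documented contract; check the dashboard for the actual schema. Only the verdict logic is yours:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def verdict(body: dict) -> dict:
    """Your business logic; the blocked phrase here is a placeholder."""
    text = body.get("content", "").lower()
    return {"action": "block" if "internal-codename" in text else "allow"}

class GuardrailWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body the proxy POSTs for each request.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        payload = json.dumps(verdict(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run: HTTPServer(("", 8080), GuardrailWebhook).serve_forever()
```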

Built for teams that ship to production

Healthcare

Redact patient SSNs and medical records. Block responses that could constitute medical advice.

Finance

Prevent credit card numbers from reaching LLMs. Block unqualified financial advice in output.

Enterprise

Stop employees from pasting API keys into prompts. Block prompt injection targeting internal tools.

SaaS

Protect end users from harmful LLM output. Custom webhooks for domain-specific content policies.

Zero integration effort

Guardrails run transparently. Your existing OpenAI SDK code works without changes. Blocked requests raise a standard API error. Redacted content arrives clean.

Works with Python, Node.js, Go, Ruby, Java, C#, PHP, Rust — any OpenAI-compatible SDK.

app.py
from openai import OpenAI
client = OpenAI(
    api_key="sk-nemo-...",  # your virtual key
    base_url="https://api.nemorouter.com/v1"
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": msg}]
)
# guardrails run automatically — nothing else to add


Ship with confidence

Every plan. Every provider. No paywalls on security. Your first request is protected automatically.