When multiple organizations share a single LLM gateway, every piece of data must be scoped to the right tenant. API keys, credit balances, usage logs, guardrail configurations, prompt templates — a single leaked row is a security incident. A single miscounted credit is a billing error.
Most multi-tenant systems enforce isolation at the application layer: every database query includes a WHERE organization_id = ? clause, and developers must remember to add it every time. This works until someone forgets. One missing filter in one query in one code path, and Organization A can see Organization B's data.
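The fragile pattern looks something like this (illustrative table and query, not NemoRouter's actual code):

```sql
-- Application-layer isolation: safe only while every developer remembers the filter
SELECT balance
FROM credit_balances
WHERE organization_id = $1;

-- The dangerous variant is the same query minus one clause
SELECT balance
FROM credit_balances;  -- returns every tenant's rows
```

Nothing in the database distinguishes the second query from the first; both are valid SQL, and only code review stands between them.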
NemoRouter enforces isolation at the database layer using Supabase Row Level Security (RLS). The database itself refuses to return rows that do not belong to the requesting user's organization. Even if application code has a bug, the data stays isolated.
Every table in NemoRouter's nemo schema has RLS enabled. No exceptions, no USING (true) policies that would allow cross-tenant access. Policies reference auth.uid() or auth.jwt() to determine the requesting user and their organization membership.
Here is a simplified example for the credit_balances table:
```sql
-- Enable RLS
ALTER TABLE nemo.credit_balances ENABLE ROW LEVEL SECURITY;

-- Users can only see their own organization's balance
CREATE POLICY credit_balances_select_member
  ON nemo.credit_balances
  FOR SELECT
  USING (
    organization_id IN (
      SELECT organization_id
      FROM "LiteLLM_OrganizationMembership"
      WHERE "user_id" = auth.uid()
    )
  );
```
This policy runs on every SELECT against the table. Even if application code forgets to filter by organization, the database enforces it. The policy checks the user's JWT, looks up their organization membership, and only returns rows matching their org.
We follow a strict naming convention for policies: {table}_{operation}_{role}. Each table has separate policies for SELECT, INSERT, UPDATE, and DELETE, scoped to the user's role within the organization (owner, admin, member, viewer).
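As a sketch of how the role scoping composes with the naming convention (the table name and role values here are illustrative assumptions, not NemoRouter's actual policy set):

```sql
-- {table}_{operation}_{role}: only owners and admins may change guardrail config
CREATE POLICY guardrail_configs_update_admin
  ON nemo.guardrail_configs
  FOR UPDATE
  USING (
    organization_id IN (
      SELECT organization_id
      FROM "LiteLLM_OrganizationMembership"
      WHERE "user_id" = auth.uid()
        AND "user_role" IN ('owner', 'admin')
    )
  );
```

The convention makes a policy's scope auditable from its name alone: a reviewer can scan `\dp nemo.*` output and spot a policy whose name promises less access than its USING clause grants.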
Multi-tenancy across multiple services typically requires mapping tables. Organization A in Service 1 maps to org-xyz in Service 2, which maps to tenant-123 in Service 3. These mapping layers are a constant source of bugs and synchronization failures.
NemoRouter eliminates this entirely. When an organization is created, it gets a single UUID that flows unchanged through every service:
- Supabase stores it as organization_id in all nemo schema tables
- LiteLLM uses the same UUID as its organization_id in Prisma-managed tables
- Nemo Backend reads it from the authenticated request and passes it through
No mapping columns. No sync jobs. No entity translation. One UUID, three services, one database.
This is possible because both schemas live in the same Postgres instance. LiteLLM manages 22 tables via Prisma; NemoRouter manages 22 tables in the nemo schema via Supabase migrations. They coexist cleanly because NemoRouter never creates tables in the public schema (Prisma would drop them on its next migration).
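Because the UUID is shared, a cross-schema join needs no translation step. A hypothetical reporting query (the LiteLLM-side table and column names are assumptions based on its Prisma schema; the cast applies only if LiteLLM stores the id as text):

```sql
-- Join LiteLLM's org table to NemoRouter's balances on the shared UUID
SELECT o."organization_alias", b.balance
FROM "LiteLLM_OrganizationTable" o
JOIN nemo.credit_balances b
  ON b.organization_id = o."organization_id"::uuid;
```

With mapping tables, the same report would require either a three-way join through the mapping layer or an application-side merge.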
To keep policies readable and consistent, we use helper functions that encapsulate common authorization checks:
```sql
-- Check if the current user is a member of the given org
CREATE FUNCTION nemo.is_org_member(org_id UUID)
RETURNS BOOLEAN AS $$
  SELECT EXISTS (
    SELECT 1
    FROM "LiteLLM_OrganizationMembership"
    WHERE "user_id" = auth.uid()
      AND "organization_id" = org_id
  );
$$ LANGUAGE sql SECURITY DEFINER STABLE;

-- Get the current user's role in the given org
CREATE FUNCTION nemo.get_org_role(org_id UUID)
RETURNS TEXT AS $$
  SELECT "user_role"
  FROM "LiteLLM_OrganizationMembership"
  WHERE "user_id" = auth.uid()
    AND "organization_id" = org_id
  LIMIT 1;
$$ LANGUAGE sql SECURITY DEFINER STABLE;
```
These functions query the LiteLLM_OrganizationMembership table directly. Because LiteLLM's membership table and NemoRouter's feature tables share the same database and the same organization UUIDs, the join is straightforward. No cross-service API calls needed for authorization checks.
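With the helpers in place, the membership subquery in each policy could collapse to a one-line check. A sketch (the UPDATE policy and its role values are illustrative assumptions):

```sql
-- The earlier SELECT policy, rewritten in terms of the membership helper
CREATE POLICY credit_balances_select_member
  ON nemo.credit_balances
  FOR SELECT
  USING (nemo.is_org_member(organization_id));

-- Role-gated writes use the role helper
CREATE POLICY credit_balances_update_admin
  ON nemo.credit_balances
  FOR UPDATE
  USING (nemo.get_org_role(organization_id) IN ('owner', 'admin'));
```

Centralizing the check also means a fix to the membership logic lands in one function instead of forty-odd policies.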
Database-per-tenant or schema-per-tenant sharding is a common approach, but it introduces operational complexity that does not pay off at our scale. With RLS, adding a new tenant is just creating a row — no new schemas, no connection pool reconfiguration, no migration coordination.
RLS policies are evaluated at query time, but the overhead stays minimal when the columns they filter on are indexed. Our organization_id columns are indexed, so the policy check remains fast even as tables grow.
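The supporting indexes look something like this (index names are illustrative; the membership-table index assumes Prisma has not already created an equivalent one):

```sql
-- Index the column every policy filters on
CREATE INDEX IF NOT EXISTS idx_credit_balances_org
  ON nemo.credit_balances (organization_id);

-- The membership lookup inside policies benefits from a composite index
CREATE INDEX IF NOT EXISTS idx_org_membership_user_org
  ON "LiteLLM_OrganizationMembership" ("user_id", "organization_id");
```

EXPLAIN ANALYZE on a policy-filtered query is the quickest way to confirm the planner is using these indexes rather than sequentially scanning under the policy predicate.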
We do not trust application-level testing alone. Our test suite includes explicit cross-tenant verification: create two organizations, insert data for each, then verify that authenticated queries from Org A return zero rows from Org B. This runs on every CI build.
```python
# Verify cross-tenant isolation
org_a_data = await query_as_user(org_a_member, "SELECT * FROM nemo.credit_balances")
org_b_data = await query_as_user(org_b_member, "SELECT * FROM nemo.credit_balances")

assert all(row["organization_id"] == org_a_id for row in org_a_data)
assert all(row["organization_id"] == org_b_id for row in org_b_data)
assert len(
    {r["organization_id"] for r in org_a_data}
    & {r["organization_id"] for r in org_b_data}
) == 0
```
Multi-tenancy is not a feature you ship once and forget. It is a property you verify continuously, enforce at the database layer, and test on every change.
Written by Nemo Team
Engineering, product, and company posts from the NemoRouter team — code-first, cost-honest, no vendor-marketing fluff.