Service Role Key

The service role key must live on the server; leaking it to the browser hands attackers full database control. This page explains it in plain English, then goes deeper into how it works in Supabase/Postgres, what commonly goes wrong, and how to fix it without relying on fragile client-side rules.

What “Service Role Key” means (plain English)

The service role key is the Supabase API key whose requests bypass Row Level Security (RLS), so it must be stored only in backend environment variables. It is a practical security issue for teams using Supabase because a leaked key lets attackers read or mutate tables, Storage objects, and RPC endpoints far beyond intended account boundaries.

How Service Role Key works in Supabase/Postgres (technical)

Supabase projects ship with two main API keys: the anon key, which is safe to send to clients because RLS policies and grants still apply to its requests, and the service_role key, which maps to a privileged Postgres role that bypasses RLS entirely. Because the service role is a privileged entry point, exposing it lets anyone read or write any row, so it must be shielded behind backend-only APIs.
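One concrete way to enforce "backend-only" is to gate every read of the key behind a guard that refuses to run where a browser global exists. This is a minimal sketch, not a Supabase API: `getServiceRoleKey` is a hypothetical helper, and `SUPABASE_SERVICE_ROLE_KEY` is the conventional server-side variable name (note it has no NEXT_PUBLIC_ prefix, so Next.js will not inline it into client bundles).

```typescript
// Hypothetical helper: only hand out the privileged key in a server runtime.
// In a real app you would pass `process.env` as the `env` argument.
function getServiceRoleKey(env: Record<string, string | undefined>): string {
  // A browser bundle defines `window`; a Node.js server runtime does not.
  if (typeof (globalThis as { window?: unknown }).window !== "undefined") {
    throw new Error("service_role key requested from a browser context");
  }
  const key = env.SUPABASE_SERVICE_ROLE_KEY;
  if (!key) {
    throw new Error("SUPABASE_SERVICE_ROLE_KEY is not set on this server");
  }
  return key;
}
```

Pass the result straight into the Supabase client constructor inside API routes or server actions; nothing imported by shared client code should ever call this.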

Attack paths & failure modes for Service Role Key

  • Service role leaked into browser bundle: A developer added the key as a NEXT_PUBLIC variable to ship a feature faster, so it was bundled into the delivered JS and anyone could query Supabase with full privileges.
  • Service role rotation after exposure: A repo or log briefly contained the key; even brief exposure is dangerous because logs, CI artifacts, and browser caches keep copies of it.
  • The service_role key is shipped to the browser bundle or mobile app (full database bypass).
  • Keys are logged (server logs, client logs, error reporting) and later leaked.
  • A CI/CD secret is misconfigured and becomes accessible to untrusted contexts.
  • Keys are reused across environments, so a dev-environment leak compromises production.
  • Keys are not rotated after exposure, keeping the blast radius large.

Why Service Role Key matters for Supabase security

Any leak of the service role gives attackers the ability to bypass your safeguards, change data, or generate signed URLs without approval.

Common Service Role Key mistakes that lead to leaks

  • Putting the service role into NEXT_PUBLIC env vars for quick debugging.
  • Logging the key to console output or CI artifacts.
  • Failing to rotate the key after suspect exposure.
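The first mistake above is mechanically checkable before a build ships. A sketch of a pre-build lint, assuming Next.js's NEXT_PUBLIC_ convention; the helper name and the secret-name heuristics are assumptions, not a standard tool:

```typescript
// Flag env vars that Next.js would expose to the client (NEXT_PUBLIC_ prefix)
// whose names suggest they hold privileged credentials.
function findExposedSecrets(env: Record<string, string>): string[] {
  const secretHints = /SERVICE_ROLE|SECRET|PRIVATE_KEY/i;
  return Object.keys(env).filter(
    (name) => name.startsWith("NEXT_PUBLIC_") && secretHints.test(name)
  );
}
```

Running this against `process.env` in a prebuild script and failing on any hit catches the "quick debugging" shortcut before it reaches a bundle.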

Where to look for Service Role Key in Supabase

  • Environment variables and build configuration: NEXT_PUBLIC_ variables must never contain secrets.
  • Client bundle artifacts (search for service_role prefixes) and source maps.
  • Server logs and error trackers where headers/URLs may include secrets.
  • Any code that instantiates a Supabase client with service_role outside trusted server runtime.
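Searching bundle artifacts can go one step beyond grepping for the literal string service_role: Supabase's JWT-format API keys decode to a payload containing a role claim, so a scanner can decode candidate tokens and check the claim. A sketch for JWT-format keys; `containsServiceRoleJwt` is a hypothetical helper:

```typescript
// Scan artifact text (e.g. a built JS bundle) for JWT-shaped tokens whose
// payload carries the service_role claim. JWT payloads that begin with {"..."}
// always base64url-encode to a string starting with "eyJ".
function containsServiceRoleJwt(text: string): boolean {
  const jwtPattern = /eyJ[\w-]+\.([\w-]+)\.[\w-]+/g;
  for (const match of text.matchAll(jwtPattern)) {
    try {
      const payload = Buffer.from(match[1], "base64url").toString("utf8");
      if (JSON.parse(payload).role === "service_role") return true;
    } catch {
      // Not valid base64url/JSON: a regex false positive, skip it.
    }
  }
  return false;
}
```

Run it over every file in the deployed client output and source maps; an anon key decodes to a different role claim and correctly passes.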

How to detect Service Role Key issues (signals + checks)

Use this as a quick checklist to validate your current state:

  • Try the same queries your frontend can run (anon/authenticated). If sensitive rows come back, you have exposure.
  • Verify RLS is enabled and (for sensitive tables) forced.
  • List policies and look for conditions that don’t bind rows to a user or tenant.
  • Audit grants to anon / authenticated on sensitive tables and functions.
  • Search shipped bundles, source maps, and NEXT_PUBLIC_* variables for the service_role key itself; any hit is an incident that requires immediate rotation.
  • Re-test after every migration that touches security-critical tables or functions.

How to fix Service Role Key (backend-only + zero-policy posture)

Mockly’s safest default is backend-only access: the browser should not query tables, call RPC, or access Storage directly.

  1. Decide which operations must remain client-side (often: none for sensitive resources).
  2. Create server endpoints (API routes or server actions) for required reads/writes.
  3. Apply hardening SQL: enable+force RLS where relevant, remove broad policies, and revoke grants from client roles.
  4. Generate signed URLs for private Storage downloads on the server only.
  5. Re-run a scan and confirm the issue disappears.
  6. Add a regression check to your release process so drift doesn’t reintroduce exposure.

Fixes that worked in linked incidents:
  • Service role leaked into browser bundle: Remove the key from client env vars, rotate it, and move the access flow to backend endpoints.
  • Service role rotation after exposure: Rotate the key, audit access patterns, and refactor the app so the privileged key is rarely needed.
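Step 3 of the sequence above can be expressed as a few SQL statements. A sketch with placeholder names: `public.invoices` and the policy name "allow all" stand in for your own tables and policies.

```sql
-- Enable RLS, then force it so even the table owner cannot bypass it.
alter table public.invoices enable row level security;
alter table public.invoices force row level security;

-- Remove an over-broad policy, then revoke direct client access entirely
-- so reads and writes must flow through backend endpoints.
drop policy if exists "allow all" on public.invoices;
revoke all on public.invoices from anon, authenticated;
```

Run these inside a migration so the hardened state is reproducible across environments.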

Verification checklist for Service Role Key

  1. Search your codebase and deployed artifacts to confirm the key is not present in any client output.
  2. Rotate the key if there is any chance it was exposed and update server environments.
  3. Limit service_role usage to server-only endpoints with strict authorization.
  4. Add a guardrail: block builds if a secret matches known Supabase key patterns.
  5. Re-run scans and audits after rotation to confirm no new direct-access paths exist.

SQL sanity checks for Service Role Key (optional, but high signal)

If you prefer evidence over intuition, run a small set of SQL checks after each fix.

The goal is not to memorize catalog tables — it’s to make sure the access boundary you intended is the one Postgres actually enforces:

  • Confirm RLS is enabled (and forced where appropriate) for tables tied to this term.
  • List policies and read them as plain language: who can do what, under what condition?
  • Audit grants for anon/authenticated and PUBLIC on the tables, views, and functions involved.
  • If Storage is involved: review bucket privacy and policies for listing/reads.
  • If RPC is involved: review EXECUTE grants for functions and whether privileged functions are server-only.
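The first three bullets map directly onto a handful of catalog queries. A starting "query pack", assuming your client-facing tables live in the public schema; adjust schema names to your project:

```sql
-- Is RLS enabled (and forced) on the tables you care about?
select relname, relrowsecurity as rls_enabled, relforcerowsecurity as rls_forced
from pg_class
where relnamespace = 'public'::regnamespace and relkind = 'r';

-- What do the policies actually say? Read qual/with_check as plain language.
select tablename, policyname, roles, cmd, qual, with_check
from pg_policies
where schemaname = 'public';

-- What can the client-facing roles touch directly?
select table_name, grantee, privilege_type
from information_schema.role_table_grants
where table_schema = 'public' and grantee in ('anon', 'authenticated', 'PUBLIC');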

Pair these checks with a direct API access test using client credentials. When both agree, you can ship the fix with confidence.

Over time, keep a small “query pack” for the checks you trust and run it after every migration. That’s how you prevent quiet regressions.

Prevent Service Role Key drift (so it doesn’t come back)

  • Adopt a policy: secrets only in server env vars; never in frontend env vars.
  • Implement redaction in logs and error reporting for headers and credentials.
  • Rotate keys on a schedule and immediately after any suspected exposure.
  • Keep one reusable verification test for “Service role leaked into browser bundle” and rerun it after every migration that touches this surface.
  • Keep one reusable verification test for “Service role rotation after exposure” and rerun it after every migration that touches this surface.

Rollout plan for Service Role Key fixes (without breaking production)

Many hardening changes fail because teams revoke direct access first and only later discover missing backend paths.

Use this sequence to reduce both risk and outage pressure:

  1. Implement and verify the backend endpoint or server action before permission changes.
  2. Switch clients to that backend path behind a feature flag when possible.
  3. Then revoke direct client access (broad grants, permissive policies, public bucket reads, or broad EXECUTE).
  4. Run direct-access denial tests and confirm authorized backend flows still succeed.
  5. Re-scan after deployment and again after the next migration.

This turns security fixes into repeatable rollout mechanics instead of one-off emergency changes.

Incident breakdowns for Service Role Key (real scenarios)

Service role leaked into browser bundle

Scenario: A developer added the key as a NEXT_PUBLIC variable to make a feature work faster.

What failed: The key was bundled into shipped JS, so anyone could query Supabase with full privileges.

What fixed it: Remove the key from client env vars, rotate it, and move the access flow to backend endpoints.

Why the fix worked: Only backend code holds the privileged key, so clients can no longer bypass RLS or backend logic.

Key takeaways:

  • Never put service_role in NEXT_PUBLIC variables.
  • Treat key exposure as an incident and rotate immediately.
  • Use backend endpoints for privileged access.
  • Add build checks to prevent reintroduction.

Read full example: Service role leaked into browser bundle

Service role rotation after exposure

Scenario: A repo or log accidentally contained the key for a short time.

What failed: Even brief exposure is dangerous because logs, CI artifacts, and browser caches keep copies of the key.

What fixed it: Rotate the key, audit access patterns, and refactor the app so the privileged key is rarely needed.

Why the fix worked: Rotation invalidates leaked credentials and an architecture shift to backend-only access reduces future incidents.

Key takeaways:

  • Assume keys are compromised if they appear in logs or repos.
  • Rotate quickly and verify old keys are rejected.
  • Harden architecture to reduce direct key usage.
  • Add secret scanning and build-time checks.

Read full example: Service role rotation after exposure

Related terms

  • Public Table Exposure → /glossary/public-table-exposure
  • Signed URLs → /glossary/signed-urls

FAQ

Is keeping the Service Role Key server-side enough to secure my Supabase app?

It’s necessary, but not sufficient. You also need correct grants, secure Storage/RPC settings, and a backend-only access model for sensitive operations.

What’s the quickest way to reduce risk with Service Role Key?

Remove direct client access to sensitive resources, enable/force RLS where appropriate, and verify via a repeatable checklist that anon/authenticated cannot query what they shouldn’t.

How do I verify the fix is real (not just a UI change)?

Attempt direct API queries using the same client credentials your app ships. If the database denies access (401/403) and your backend endpoints still work, your fix is effective.
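To make that verification repeatable, record the status code of each direct probe and fail whenever anything slipped through. A sketch; the `Probe` shape and helper name are assumptions:

```typescript
// Given the results of direct REST probes made with the shipped client
// credentials, list every path that was NOT denied. An empty result means
// the fix holds at the database boundary, not just in the UI.
type Probe = { path: string; status: number };

function unexpectedlyAllowed(probes: Probe[]): string[] {
  const denied = new Set([401, 403]);
  return probes.filter((p) => !denied.has(p.status)).map((p) => p.path);
}
```

Wire this into CI as the "regression check" mentioned earlier: run the probes after every deploy and fail the pipeline if the returned list is non-empty.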

Next step

Want a quick exposure report for your own project? Run a scan in Mockly to find public tables, storage buckets, and RPC functions — then apply fixes with verification steps.

Explore related pages

  • Glossary → /glossary
  • Public Table Exposure → /glossary/public-table-exposure
  • Signed URLs → /glossary/signed-urls
  • Next.js backend-only Supabase access → /integrations/nextjs-backend-only-supabase
  • Service role leaked into browser bundle → /examples/service-role-key/service-role-leaked-in-browser-bundle
  • Pricing → /pricing