Storage Object Enumeration
Object enumeration happens when buckets allow listing or predictable keys, giving attackers a catalog of files. This page explains it in plain English, then goes deeper into how it works in Supabase/Postgres, what commonly goes wrong, and how to fix it without relying on fragile client-side rules.
What “Storage Object Enumeration” means (plain English)
When listing is enabled or keys are predictable, attackers can iterate every object and request them.
How Storage Object Enumeration works in Supabase/Postgres (technical)
Listing permissions or deterministic prefixes expose object keys that can be combined with weak read rules to exfiltrate data even from private buckets.
Attack paths & failure modes for Storage Object Enumeration
- Storage listing enables object enumeration: The frontend lists a bucket to show “my uploads” in a hurry. Listing becomes a discovery API, so attackers collect keys and abuse any permissive read rule.
- Predictable prefix keys leak user files: Uploads are stored as {userId}/{timestamp}.pdf and the pattern leaks into debugging workflows. Attackers guess the prefixes and download files wherever weak read rules exist.
- Object listing is allowed, providing an attacker with an index of object keys.
- Object keys are predictable (userId/tenantId/date/incrementing IDs), enabling brute-force discovery.
- Buckets are public or have permissive read rules, turning discovery into bulk downloads.
- Signed URL endpoints sign without ownership checks, effectively becoming a download oracle.
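To see why predictable keys are dangerous, consider how small the search space actually is. The sketch below is illustrative (the filenames and pattern are assumptions, not any real API): one day of second-granularity timestamps for a known user ID is only 86,400 guesses.

```typescript
// Sketch: why predictable keys are enumerable. The key pattern
// {userId}/{timestamp}.pdf is the hypothetical one from the incident above.
function candidateKeys(userId: string, dayStartMs: number): string[] {
  const keys: string[] = [];
  // One day of second-granularity timestamps is only 86,400 guesses --
  // trivial to iterate against any permissive read rule.
  for (let s = 0; s < 86_400; s++) {
    keys.push(`${userId}/${dayStartMs + s * 1000}.pdf`);
  }
  return keys;
}
```

An attacker who sees one real key can infer the pattern, then replay this loop for every user ID they can enumerate elsewhere.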
Why Storage Object Enumeration matters for Supabase security
Enumeration turns small misconfigurations into large leaks: once keys are known, attackers can download files in bulk.
Common Storage Object Enumeration mistakes that lead to leaks
- Allowing bucket listing for convenience.
- Using predictable prefixes like userId/timestamp.
- Assuming signed URLs alone prevent enumeration.
Where to look for Storage Object Enumeration in Supabase
- Bucket privacy settings and any rules that allow listing or broad reads.
- Object key structure and whether it encodes identity or predictable patterns.
- Signed URL generation: where it lives (must be server-side) and how ownership is validated.
How to detect Storage Object Enumeration issues (signals + checks)
Use this as a quick checklist to validate your current state:
- Try the same queries your frontend can run (anon/authenticated). If sensitive rows come back, you have exposure.
- Verify RLS is enabled and (for sensitive tables) forced.
- List policies and look for conditions that don’t bind rows to a user or tenant.
- Audit grants to anon/authenticated on sensitive tables and functions.
- Re-test after every migration that touches security-critical tables or functions.
How to fix Storage Object Enumeration (backend-only + zero-policy posture)
Mockly’s safest default is backend-only access: the browser should not query tables, call RPC, or access Storage directly.
- Decide which operations must remain client-side (often: none for sensitive resources).
- Create server endpoints (API routes or server actions) for required reads/writes.
- Apply hardening SQL: enable+force RLS where relevant, remove broad policies, and revoke grants from client roles.
- Generate signed URLs for private Storage downloads on the server only.
- Re-run a scan and confirm the issue disappears.
- Add a regression check to your release process so drift doesn’t reintroduce exposure.
Fixes that worked in linked incidents:
- Storage listing enables object enumeration: Remove listing permissions, implement a backend listing endpoint that returns only authorized objects, and serve downloads via signed URLs.
- Predictable prefix keys leak user files: Switch to random UUID keys, keep the bucket private, and generate signed URLs only after verifying ownership.
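The “authorize before signing” fix can be sketched as one small server-side function. The two interfaces below are hypothetical stand-ins: OwnershipStore for your database lookup of who owns a key, and Signer for the storage client (Supabase’s storage API exposes a similar createSignedUrl on a bucket handle, but this sketch deliberately abstracts it so the ordering of the check is the point).

```typescript
// Hypothetical stand-ins for the DB lookup and the storage client.
interface OwnershipStore {
  ownerOf(objectKey: string): Promise<string | null>;
}
interface Signer {
  createSignedUrl(path: string, expiresInSeconds: number): Promise<string>;
}

async function getDownloadUrl(
  requesterId: string,
  objectKey: string,
  store: OwnershipStore,
  signer: Signer,
): Promise<string> {
  const owner = await store.ownerOf(objectKey);
  if (owner === null || owner !== requesterId) {
    // Deny BEFORE any signing happens: otherwise the endpoint is a
    // download oracle for guessed keys.
    throw new Error("forbidden");
  }
  // Short TTL keeps a leaked URL useful only briefly.
  return signer.createSignedUrl(objectKey, 60);
}
```

Because both dependencies are injected, the same function is easy to unit-test with stubs and to wire to the real client in an API route.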
Verification checklist for Storage Object Enumeration
- Attempt to list objects without backend authorization; confirm it fails.
- Try guessing object keys and confirm direct downloads fail without signed URLs.
- Validate that signed URLs are issued only after ownership checks and expire quickly.
- Monitor for repeated listing/signing attempts that suggest enumeration.
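The “guess keys, confirm downloads fail” check can become a reusable regression test. A minimal sketch, assuming the public-object URL shape used by Supabase Storage and an injected HTTP getter so the same check runs against a stub in CI or real fetch against staging (the project URL and bucket name here are placeholders):

```typescript
// Regression check sketch: direct (unsigned) downloads of guessed keys
// must be denied. `get` is injected so the check is environment-agnostic.
type Get = (url: string) => Promise<{ status: number }>;

async function assertDirectDownloadDenied(
  baseUrl: string,
  bucket: string,
  guessedKeys: string[],
  get: Get,
): Promise<void> {
  for (const key of guessedKeys) {
    const res = await get(`${baseUrl}/storage/v1/object/public/${bucket}/${key}`);
    // Anything other than a denial / not-found means the bucket leaks.
    if (![400, 401, 403, 404].includes(res.status)) {
      throw new Error(`key ${key} is directly downloadable (status ${res.status})`);
    }
  }
}
```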
SQL sanity checks for Storage Object Enumeration (optional, but high signal)
If you prefer evidence over intuition, run a small set of SQL checks after each fix.
The goal is not to memorize catalog tables — it’s to make sure the access boundary you intended is the one Postgres actually enforces:
- Confirm RLS is enabled (and forced where appropriate) for tables tied to this term.
- List policies and read them as plain language: who can do what, under what condition?
- Audit grants for anon/authenticated and PUBLIC on the tables, views, and functions involved.
- If Storage is involved: review bucket privacy and policies for listing/reads.
- If RPC is involved: review EXECUTE grants for functions and whether privileged functions are server-only.
Pair these checks with a direct API access test using client credentials. When both agree, you can ship the fix with confidence.
Over time, keep a small “query pack” for the checks you trust and run it after every migration. That’s how you prevent quiet regressions.
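Such a query pack can be as simple as a named map of SQL strings replayed after every migration. The catalog views below (pg_class, pg_policies, information_schema.role_table_grants) are standard Postgres; treat the schema filters as assumptions to adapt to your project:

```typescript
// A small "query pack" sketch: catalog queries to rerun after migrations.
const queryPack: Record<string, string> = {
  // Is RLS enabled (and forced) on public tables?
  rlsStatus: `
    select relname, relrowsecurity, relforcerowsecurity
    from pg_class
    where relnamespace = 'public'::regnamespace and relkind = 'r';`,
  // Read each policy as plain language: who can do what, under what condition?
  policies: `
    select schemaname, tablename, policyname, roles, cmd, qual
    from pg_policies
    where schemaname in ('public', 'storage');`,
  // What did client-facing roles actually get granted?
  clientGrants: `
    select grantee, table_schema, table_name, privilege_type
    from information_schema.role_table_grants
    where grantee in ('anon', 'authenticated', 'PUBLIC');`,
};
```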
Prevent Storage Object Enumeration drift (so it doesn’t come back)
- Adopt a baseline: private buckets, no listing for sensitive buckets, UUID object keys.
- Keep signing endpoints server-only with rate limiting and strict authorization.
- Audit storage policies after every change to upload/download flows.
- Keep one reusable verification test for “Storage listing enables object enumeration” and rerun it after every migration that touches this surface.
- Keep one reusable verification test for “Predictable prefix keys leak user files” and rerun it after every migration that touches this surface.
Rollout plan for Storage Object Enumeration fixes (without breaking production)
Many hardening changes fail because teams revoke direct access first and only later discover missing backend paths.
Use this sequence to reduce both risk and outage pressure:
- Implement and verify the backend endpoint or server action before permission changes.
- Switch clients to that backend path behind a feature flag when possible.
- Then revoke direct client access (broad grants, permissive policies, public bucket reads, or broad EXECUTE).
- Run direct-access denial tests and confirm authorized backend flows still succeed.
- Re-scan after deployment and again after the next migration.
This turns security fixes into repeatable rollout mechanics instead of one-off emergency changes.
Incident breakdowns for Storage Object Enumeration (real scenarios)
Storage listing enables object enumeration
Scenario: The frontend lists a bucket to show “my uploads” in a hurry.
What failed: Listing became a discovery API, so attackers collected keys and abused any permissive read rule.
What fixed it: Remove listing permissions, implement a backend listing endpoint that returns only authorized objects, and serve downloads via signed URLs.
Why the fix worked: Backend listing lets you control which keys are returned and keep signed URLs as delivery-only tokens.
Key takeaways:
- Listing is an attacker’s index of your files.
- Backend listing plus signed URLs is a safer default.
- Client-side filtering is not authorization.
- Monitor for repeated listing/signing attempts.
Read full example: Storage listing enables object enumeration
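The backend listing endpoint from this incident can be sketched in a few lines. The UploadStore interface is a hypothetical stand-in for your database of upload records; the design point is that authorization is the query itself, so nothing is filtered client-side:

```typescript
// Sketch: the client never lists the bucket; the server returns only
// keys the requester owns, straight from an ownership-aware query.
interface UploadStore {
  keysOwnedBy(userId: string): Promise<string[]>;
}

async function listMyUploads(
  requesterId: string,
  store: UploadStore,
): Promise<string[]> {
  // The ownership boundary lives in the data access layer, not in the UI.
  return store.keysOwnedBy(requesterId);
}
```

Downloads for the returned keys would then go through the server-side signing path rather than direct bucket reads.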
Predictable prefix keys leak user files
Scenario: Uploads are stored as {userId}/{timestamp}.pdf and the pattern leaks into debugging workflows.
What failed: Predictable keys turned identity into filenames, so attackers guessed prefixes and downloaded files when weak reads existed.
What fixed it: Switch to random UUID keys, keep the bucket private, and generate signed URLs only after verifying ownership.
Why the fix worked: Random keys break brute-force and server-side signing enforces authorization before access.
Key takeaways:
- Object keys should not encode identity.
- Predictable paths amplify policy mistakes.
- Prefer UUID keys and private buckets.
- Authorize before signing, then keep TTL short.
Read full example: Predictable prefix keys leak user files
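The “random UUID keys” fix is a one-liner in practice. A minimal sketch using Node’s built-in crypto.randomUUID, keeping only the file extension (everything identity-bearing is dropped):

```typescript
import { randomUUID } from "node:crypto";

// Sketch: object keys that encode no identity. Only the extension
// survives (as a content-type hint); the rest is random.
function newObjectKey(originalFilename: string): string {
  const dot = originalFilename.lastIndexOf(".");
  const ext = dot >= 0 ? originalFilename.slice(dot) : "";
  return `${randomUUID()}${ext}`;
}
```

Since the key no longer says who owns the file, record the key-to-owner mapping in your database at upload time; that record is what the signing endpoint checks before issuing a URL.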
Real-world examples of Storage Object Enumeration (and why they work)
- Storage listing enables object enumeration — Allowing bucket listing handed attackers a catalog of object keys.
- Predictable prefix keys leak user files — Predictable prefixes make brute-force enumeration easy.
Related terms
- Supabase Storage Bucket Privacy → /glossary/supabase-storage-bucket-privacy
- Signed URLs → /glossary/signed-urls
FAQ
Is fixing Storage Object Enumeration enough to secure my Supabase app?
It’s necessary, but not sufficient. You also need correct grants, secure Storage/RPC settings, and a backend-only access model for sensitive operations.
What’s the quickest way to reduce risk with Storage Object Enumeration?
Remove direct client access to sensitive resources, enable/force RLS where appropriate, and verify via a repeatable checklist that anon/authenticated cannot query what they shouldn’t.
How do I verify the fix is real (not just a UI change)?
Attempt direct API queries using the same client credentials your app ships. If the database denies access (401/403) and your backend endpoints still work, your fix is effective.
Next step
Want a quick exposure report for your own project? Run a scan in Mockly to find public tables, storage buckets, and RPC functions — then apply fixes with verification steps.
Explore related pages
Make a bucket private + serve files with signed URLs → /templates/storage-safety/make-bucket-private-signed-urls