
Doors Behind Doors: A Pentest of Leviathan News

#security#pentest#case-study#django#infrastructure


A black-box pass over Leviathan News produced one Critical finding, fixed in fifty-seven minutes round-trip. A grey-box pass over the same platform — same engagement, same week, with an unprivileged SSH user on the application server — produced sixteen. Same code. Same operators.

I should say something up front: I am a contributor to Leviathan. Have been since February, working on both the backend and the frontend. What I write below is not the report of an outsider arriving cold to an unfamiliar stack. It is the report of someone who commits to the codebase regularly, who agreed with the founder to set down that hat for a window, run a formally-scoped attacker simulation against the platform he otherwise helps build, and write down what he found from the position of someone who genuinely does not know what is on the other side of the next probe. The discipline of the separation is the entire point of the post.

The gap between the two numbers is not "the team got worse." It is the shape of how compositional risk hides behind a hardened front in 2026 — and it is harder to see, not easier, when you are inside the team. It is what I want to write about. With Gerrit's permission, after the remediation cycle shipped and was independently re-verified, here is the part that generalizes.

The Engagement, Briefly

Gerrit Hall — Leviathan's founder — and I have been working together on the platform since February. The decision to formally audit the platform from an attacker's seat came out of a longer conversation about doing the security work properly once the codebase had reached the size where opportunistic patching was no longer enough. We scoped a five-hour black-box engagement on three production hosts (leviathannews.xyz, api.leviathannews.xyz, ads.leviathannews.xyz), signed a rules-of-engagement document on 2026-04-18 with a one-month window, and agreed up front that I would test against the surface as if I had no insider knowledge. Attack traffic ran from a Hetzner exit through WireGuard so attribution was clean and his Apache logs would see one stranger's IP for the duration. Methodology was the usual: passive recon, surface enumeration over the full OpenAPI schema (153 paths, 160 operations), authentication and authorization matrix, SSRF, injection, session lifecycle, JWT primitives, and a Nuclei sweep across the full misconfiguration / exposure / CVE catalog at severity ≥ low.

The discipline of pretending not to know is harder than it sounds. I have a contributor account on the GitHub organization. I have read the source. I have written some of the source. The black-box methodology required me to forget, for the duration of the window, every implementation detail I happened to know — to discover what nmap and curl and the public OpenAPI schema were prepared to tell a passing attacker, and stop there. There is a non-trivial amount of self-policing in that. The reason to do it is that the things you "already know" about a codebase are the things you won't probe carefully, because your model of "this is safe, I wrote it" is exactly the model an outside attacker doesn't share.

To make the separation operational, Gerrit created a purpose-made zknpr-pentest SSH user on the API server — uid 1004, group memberships limited to its own group plus www-data and webdev, its password rotated at engagement close and again at teardown so it never persisted as a live foothold. Different credential, different attribution. When he sent it before the black-box pass began, I declined to use it. "Those are ssh credentials for the api server, I thought that's what you were looking for, not the website," he answered, surprised that I was electing to start blind. We agreed to park the SSH user, run the black-box pass first, and revisit afterwards.

After the black-box pass closed, Gerrit greenlit a one-day grey-box follow-up using the parked credentials, scoped against an Amendment 02 to the original RoE. Active testing was permitted — not just read-only — and host-side enumeration ran outside the off-peak window, since it does not produce the user-visible load that external scanning does. The original safeguards carried: no destructive actions, no production-data writes beyond a single owned record for evidence purposes, no service disruption, secrets redacted in any logged evidence, every write-side-effect probe announced before it ran. He named the limits explicitly. He named what credentials I could try and which I could not.

The thing that is unusual about Leviathan is not that the founder agreed to a pentest. It is that the founder agreed to one from a contributor, and was willing to follow the discipline of pretending the contributor did not have the access he in fact has. Most teams in this position would either skip the pentest (because they "already know the code") or hire an outsider (because mixing the two seems contaminating). The third option — formal scope, attacker simulation, contributor in the seat — is rarer, harder to execute honestly, and produces writeups like this one.

What the Outside Said

The black-box pass said the platform was well-defended.

I want to write that sentence first because it is the truth, and because the rest of this post will be about everything that becomes uncomfortable later. The leviathannews.xyz API is one of the better-defended Django stacks I have looked at in a working capacity. Specifically:

  • SSRF defense was exemplary. Two URL-fetching endpoints (/api/v1/link-preview/, /api/v1/ad/preview/) survived the full canonical bypass matrix — direct local addresses, IPv4-mapped IPv6, redirect-to-internal via httpbin, DNS rebinding via rbndr.us, IDN homoglyphs, short-form integers. The fetcher resolves DNS, classifies the resulting IP against the private-network ranges, re-validates after each redirect, and pins the resolved address against rebind attempts. That last bit is the hard one to get right; most naive implementations don't. Recommend making this validator a shared library across the codebase if it isn't already.
  • The JWT implementation refuses substitution. alg=none in three case variants, RS256/HS256 confusion, payload tampering with reused signature, expiry manipulation, token-type confusion — all return 401. The signing key is not in any obvious common-secrets dictionary. The middleware pins the algorithm and enforces exp server-side.
  • The BOLA / BFLA matrix was clean. Every attempted access to a fabricated resource ID returned either 404 (for owned-resource endpoints that refuse to confirm existence) or 403 (for admin-scoped endpoints). The full prediction-market control plane (/markets/{id}/freeze/, /resolve/, /cancel/), the moderation flags, the user-follow and tag-subscription DELETEs — all refused non-staff callers cleanly.
  • Mass assignment on the profile endpoint is whitelisted. PUT /api/v1/wallet/profile/ silently drops is_staff, is_admin, is_editor, points, and any non-whitelisted field while returning 200. The only writable surface is the documented six fields plus account_type, which is itself constrained to a documented enum.
  • Webhook signatures validate before parsing. The Alchemy payment webhook returns 401 with empty body on every unsigned request, regardless of payload shape. No JSON traceback leaks. Nothing to fingerprint against.
  • The Nuclei catalog produced zero hits at severity ≥ low across all three in-scope hosts.
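The resolve-classify-pin pattern from the first bullet is the part most fetchers get wrong, so it is worth sketching. This is a minimal stdlib-only illustration — function names mine, not Leviathan's actual validator:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def resolve_and_validate(url: str) -> str:
    """Resolve the URL's host once and refuse private/reserved targets.

    Returns the pinned IP the caller must connect to directly, so a DNS
    rebind between validation and fetch cannot swap the target.
    """
    host = urlparse(url).hostname
    if host is None:
        raise ValueError("URL has no host")
    # Resolve DNS ourselves instead of letting the HTTP client do it later.
    infos = socket.getaddrinfo(host, None)
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        # Unwrap IPv4-mapped IPv6 (::ffff:127.0.0.1) before classifying.
        if ip.version == 6 and ip.ipv4_mapped is not None:
            ip = ip.ipv4_mapped
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            raise ValueError(f"refusing internal target {ip}")
    # Pin the first resolved address; re-run this check after every redirect.
    return infos[0][4][0]
```

The essential property is that the fetcher connects to the returned IP rather than the hostname, and re-runs the check after every redirect; validating the hostname and then letting the HTTP client resolve it a second time is exactly the rebind window.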

This is what a hardened front looks like. A security audit report written at this point — an honest one — would run twelve pages of "tested, defended, recommend filing the following hardening list," and that would be a fair description of the platform. I wrote that report. It exists in the engagement folder under EXECUTIVE-SUMMARY.md.

The One Critical From Outside

The black-box engagement turned up exactly one Critical, and it took eleven minutes of active testing to find.

A passive Shodan check of the production droplet, run during recon, returned ports 3000 and 3001 listening publicly — classic Next.js application-origin ports that should be localhost-only. I queued it as a HIGH candidate for active verification once the agreed off-peak window opened. nmap from the Hetzner exit, inside that window, confirmed it: both ports served the identical Next.js application as port 443 — full CSP header, device_id cookie, production news articles. Port 3000 was the production frontend's origin. Port 3001 was a development origin whose robots.txt pointed at dev.leviathannews.xyz, the dev subdomain explicitly marked out-of-scope in the RoE for one specific reason: it shared the production database.

The exposure escalated from HIGH to Critical the second I read that robots.txt. By exposing the dev origin directly on the production IP's port 3001, the host had quietly defeated its own scope boundary. Anyone scanning the droplet reached the development app, and by transitivity, the production database — without authentication, without a hostname, without any of the protections the team had thought they had.

Discovered 23:17 UTC. Reported by Telegram DM at 23:19. Root cause was a PM2 ecosystem configuration setting HOSTNAME: "0.0.0.0" on both Next.js apps. Fix shipped at 23:52, with a one-line commit binding PM2 to 127.0.0.1 and closing the public port exposure. Externally re-verified at 00:14 the next morning: both ports Connection refused from the exit IP, prod still healthy. Fifty-seven minutes round trip from a stranger's nmap to a clean re-probe.
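For anyone running the same PM2-plus-Next.js shape, the fix is a one-attribute change in the ecosystem file. The fragment below is illustrative only — app names and paths are hypothetical, not Leviathan's actual config:

```javascript
// ecosystem.config.js — bind the Next.js origins to loopback only.
module.exports = {
  apps: [
    {
      name: "frontend", // hypothetical app name
      script: "node_modules/next/dist/bin/next",
      args: "start",
      env: {
        PORT: 3000,
        HOSTNAME: "127.0.0.1", // was "0.0.0.0": publicly reachable
      },
    },
  ],
};
```

With the bind on loopback, only the reverse proxy on the same host can reach the origin, which is the topology the team had assumed all along.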

The remaining work was a hardening list — bearer-vs-cookie auth precedence, JWT logout invalidation, host-scoped CSP on the API and ad subdomains, a missing security.txt, Swagger UI pinned away from @latest, EVM-address validation on the wallet-nonce endpoint, generic errors on the ad-confirmation flow. Seven Low items, nine Info. All shipped two days later as a single PR. None individually exploitable in a way that justified an out-of-cycle release.

That is the story the black-box pass told. It is a true story. It is also incomplete in a way I could not have known from outside.

What the Inside Said

The grey-box pivot took twenty-four hours and changed the shape of the report.

The unprivileged SSH user I had been given was in the webdev group. That group existed because the dev team, on a small operations footprint, wanted multiple humans to be able to deploy to the same set of project directories without sudo. Setting an application tree to 2775 (setgid, group-write) and assigning it to a shared group is the standard convenience for that workflow. Many shops use it. It is a reasonable thing to set up in week one of a new project. It is the most dangerous thing on the box once one human in the group is a contractor instead of a permanent operator.

What that group membership composed with, in concrete terms:

  • The deployed source tree was group-writable. Not Apache's served HTML — the actual Django source on disk. Modifying any .py file would be picked up by the WSGI worker on its next reload. RCE on dev.leviathannews.xyz was a one-line echo away from any group member.
  • The Python virtualenv inside the source tree was also group-writable. Every __init__.py in every installed package — Django itself, DRF, simplejwt, the full ~200-package transitive closure — was editable by anyone in webdev. The exposure was identical to the source tree: silent RCE on next worker reload.
  • .env files were 640 app:webdev. Reading them gave the Postgres doadmin superuser-equivalent password (COPY FROM PROGRAM was thankfully blocked at the DigitalOcean managed-Postgres layer, but the role spanned a thirteen-database cluster covering not only Leviathan but every other project running under the same operator account — a dozen unrelated applications sharing the one Postgres role). The same .env exposed the Redis password, the Django SECRET_KEY shared across api/, source/, staging/, production/, and a minimal_backup/ folder.
  • The on-chain signer keyfile was readable to the www-data group, of which I was transitively a member. The file held an EOA private key used as the production ADMIN_PRIVATE_KEY for on-chain auction and IPFS contract operations. Native balance was tiny (a hot signing key, not a treasury), but the role itself was the live signer.
  • A sibling-app SQLite database was world-writable. A neighboring Django stack on the same host ran on top of db.sqlite3 mode 666 for its admin tables. Direct write into auth_user, no code execution required, instant admin on the sibling app.
  • The shared Django SECRET_KEY plus SESSION_ENGINE=django.contrib.sessions.backends.db plus the doadmin write capability composed into a session-forge primitive. Late on the second day I ran a sha256 over the five .env files on the box. The first sixteen hex of DJANGO_SECRET_KEY were identical across api/.env, source/.env, staging/.env, production/.env, and a minimal_backup/.env that nobody on the current team remembered putting there. Combined with Django's database-backed session backend and a Postgres role with write access to the session table, that meant: forge a django_session row, set the cookie, log in as any user — including any superuser — on any Leviathan subdomain. No password. No MFA. No audit trail beyond a Postgres INSERT. That was the moment the writeup changed shape, and it stayed the most critical single finding of the engagement. (Forging was the dramatic primitive. Stealing was simpler still: 2,733 active sessions sat in django_session, ten of them belonging to staff users.)
  • A post_save signal handler auto-promoted any user whose telegram_account field matched a hardcoded five-handle allowlist to is_superuser=True. Attached to the regular User model. Reachable from any code path that wrote a telegram_account value, which turned out to include several unauthenticated webhook routes. The allowlist was intentional; nobody had thought through the chain that ended at is_superuser.
  • The deploy SSH key sitting at /var/www/.ssh/id_ed25519 was a GitHub deploy key for the production source repository, with known_hosts set to github.com. Read access on the file required RCE-as-www-data, which any of the source-tree-write findings provided. Combined with the GITHUB_WEBHOOK_SECRET exposed in api/.env, the chain was: harvest the deploy key, sign a webhook with the leaked secret, push back into the source repository through the production deploy pipeline. Two findings, separately uninteresting, composed.
  • A MockAuthenticationMiddleware existed in the dev.leviathannews.xyz middleware stack as a development convenience — if MODE=='dev', set request.user = User(id=request.GET['user_id']), gated on MODE=dev from the env file. The middleware was current in source. It was not yet current in runtime. Apache's master process had an uptime of 11 days, the settings file had been edited 2 days earlier, and wsgi.py itself had not been touched in three weeks — so mod_wsgi was still serving the pre-edit settings module. The vulnerability was latent: the next deploy that touched wsgi.py, or the next Apache restart, would have lit it up. A wound waiting is still a wound.
  • There was a deploy-wrapper privilege chain ending at root, going through a service account and a setuid wrapper script. Reaching it required pivoting to an account I did not have credentials for, so I noted the primitives and stopped. The team owns that one on an ops-track backlog and that is the right place for it.
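Two of those bullets compose through the webhook signature, and the arithmetic of that composition is short. GitHub signs each delivery with an X-Hub-Signature-256 header — HMAC-SHA256 of the raw request body under the shared secret — so anyone holding the leaked GITHUB_WEBHOOK_SECRET can mint deliveries the receiving pipeline cannot distinguish from real ones. A sketch, function names mine:

```python
import hashlib
import hmac

def sign_github_payload(secret: bytes, body: bytes) -> str:
    """Compute the X-Hub-Signature-256 header value GitHub sends with
    each webhook delivery: 'sha256=' + HMAC-SHA256(secret, raw body)."""
    digest = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return f"sha256={digest}"

def verify_github_payload(secret: bytes, body: bytes, header: str) -> bool:
    """What the receiving endpoint checks. With the secret read out of
    api/.env, a forged delivery passes this exactly like a real one."""
    expected = sign_github_payload(secret, body)
    return hmac.compare_digest(expected, header)
```

The verification is sound; the composition is that the secret it depends on sat in a group-readable .env on the same box as the deploy key.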

Sixteen Critical findings total. Seven High. Nine Medium. Four Low. Five Info. Forty-one findings in the initial grey-box pile, with a few more surfacing during a follow-on re-audit pass before the deploy.

A note in the other direction. One of the findings looked, at first read, like a complete pre-auth path to superuser: the same signal-handler that auto-promoted on a telegram_account match was reachable from an unauthenticated /api-key/signup/ endpoint. POST a body with telegram_username=<legacy-admin-handle>, the signal fires, the new user gets is_superuser=True, an API key is returned in the response. End to end, no credentials needed. I queried the database to confirm the chain — and all five of the legacy admin handles already existed in bot_user. The handle resolver used __iexact for case-insensitive uniqueness, so even subtle case-bending didn't slip through. The chain was real; the original handle-resolver author had narrowly caught it years earlier with a one-line uniqueness check, almost certainly without knowing they were closing a future privilege-escalation path. Defense composes too. It was worth writing down.

I want to be clear about how the headline number actually reads: every one of these is, individually, the kind of thing a one-person or two-person ops team accumulates as it grows the surface area of an application. None of them looks negligent in isolation. The setgid-2775 deploy pattern is in every "how to set up a shared dev server" tutorial I have ever read. The shared SECRET_KEY is a one-line laziness anyone has committed at 2 AM during a re-deploy. The signal-handler admin-promotion is a five-line shortcut from when the platform had two contributors and the trust model was "we know each other." The wallet.key permissions are the inevitable result of "Apache needs to read this for an on-chain payment cron." Each is a small forgetting. Each has been forgotten by every team I have ever worked with.

The compositions are what make them critical.

Composition Is the Vulnerability

Behind the front door of any Django stack with more than two collaborators are more doors. Each was the obvious door to put there at the time. None of them, alone, justifies a Critical. The ones I found here were Critical because of how they fell against each other — a side door, a key left under a mat near it, a hallway whose lights nobody had been in long enough to turn off. This is the part of the engagement that I think is worth generalizing.

When I look at the sixteen-finding pile from inside, what I see is not sixteen mistakes. I see five or six convenience patterns that each fan out into a long tail of compositions. Cataloged by pattern:

One database role for the whole organization. The doadmin Postgres user spanned every project in the operator's account — not because it was meant to, but because spinning up a new managed-Postgres cluster per project is friction, and adding a connection string per app is another dotenv, and at some point in early growth it was easier to point everything at one cluster with one role. The composition is that any compromise of the role is a compromise of every app the operator runs. The blast radius is set at organization-creation time, years before any specific finding.

Setgid-2775 on application trees. Group-write on the deployed source is convenient when more than one human deploys. It composes with: any unprivileged group member, anywhere on the application server, having silent RCE on the next worker reload. The convenience pattern presupposes that the group is exactly the trusted humans. The composition pattern is that the group eventually contains a contractor, an automation user, a vendor account, an AI agent acting as a deploy bot. Each of those is a perfectly normal thing to add. None of them are reasons to revisit the original setgid choice. The setgid stays. The trust model has changed underneath it.
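The audit for this pattern is cheap enough to run in CI. A sketch of the check — stdlib Python, names mine — that lists every group-editable source file under a deployed tree, which is precisely the silent-RCE surface available to any member of the deploy group:

```python
import stat
from pathlib import Path

def group_writable_code(tree: str) -> list[str]:
    """List .py files under `tree` that a deploy-group member could
    edit — the files a WSGI worker will happily reload and execute."""
    hits = []
    for path in Path(tree).rglob("*.py"):
        if not path.is_file():
            continue
        mode = path.stat().st_mode
        if mode & stat.S_IWGRP:  # group-write bit set
            hits.append(str(path))
    return sorted(hits)
```

A non-empty return from a production tree is the 2775-to-2755 conversation, had early instead of in a pentest report.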

One SECRET_KEY across environments. Shipping the same DJANGO_SECRET_KEY in api/.env, source/.env, staging/.env, production/.env, and minimal_backup/.env was treated as cleanup hygiene — one source of truth, easy to rotate. The composition is that the key signs sessions, password reset tokens, CSRF tokens, and the Django signing module's general-purpose tamper protection. Sharing it across environments means a session forged in one environment is valid in every other. Combined with SESSION_ENGINE=django.contrib.sessions.backends.db and any path to a write into the session table, the result is universal session forge.
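To make the forge primitive concrete without reproducing Django's actual wire format: the sketch below uses a plain stdlib HMAC as a simplified stand-in for django.core.signing (the real encoding adds a salt, a serializer, and base64 framing, but the trust structure is the same). One key, any environment, any session table you can write into:

```python
import hashlib
import hmac
import json

def sign_session(secret: str, payload: dict) -> str:
    """Simplified stand-in for signed session_data: the payload plus an
    HMAC over it, keyed only by SECRET_KEY."""
    body = json.dumps(payload, sort_keys=True)
    mac = hmac.new(secret.encode(), body.encode(), hashlib.sha256).hexdigest()
    return f"{body}:{mac}"

def load_session(secret: str, data: str) -> dict:
    body, _, mac = data.rpartition(":")
    expected = hmac.new(secret.encode(), body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad signature")
    return json.loads(body)

# One key across environments means a row forged with staging's key
# verifies in production — the composition behind the finding.
SHARED_KEY = "the-one-secret-key"  # hypothetical
forged = sign_session(SHARED_KEY, {"_auth_user_id": "1"})  # superuser's pk
assert load_session(SHARED_KEY, forged)["_auth_user_id"] == "1"
```

Per-environment keys break the chain at the cheapest link: a forged row from one environment stops verifying in any other.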

Convenience-shaped admin promotion. The signal handler that promoted users to is_superuser if their telegram_account matched a hardcoded list was, to a startup of two people, "the obvious way to keep the founders' accounts staff-flagged across reseeds." It composed with: every code path that writes telegram_account, including unauthenticated webhook ingress paths whose authors did not know the signal existed. The allowlist was static. The set of writers was not.
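The hazard generalizes beyond Django's signals API. A plain-Python observer sketch (not Django code — the names and the save hook are mine) shows why the promotion fires from paths its author never reviewed:

```python
from dataclasses import dataclass

ADMIN_HANDLES = {"founder_a", "founder_b"}  # the hardcoded allowlist

_post_save_handlers = []

@dataclass
class User:
    telegram_account: str = ""
    is_superuser: bool = False

    def save(self):
        # Every save path, whoever wrote it, runs every registered handler.
        for handler in _post_save_handlers:
            handler(self)

def auto_promote(user: User):
    # The year-one convenience: keep founder accounts staff-flagged.
    if user.telegram_account in ADMIN_HANDLES:
        user.is_superuser = True

_post_save_handlers.append(auto_promote)

# The composition: an unauthenticated ingress route that merely *stores*
# a telegram handle also runs the handler — its author never opted in.
def webhook_ingress(handle: str) -> User:
    u = User(telegram_account=handle)
    u.save()
    return u
```

The static piece (the allowlist) never changed; the dynamic piece (the set of code paths that write the field) grew until one of them was internet-facing.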

Shared wallet.key for both signing and ops. A signing key for on-chain operations and a read path that needed to be reachable from the application's effective user composed into a www-data-readable private key. The two requirements (sign things; be reachable from the running app) seem like the same problem. They are not. Signing should happen in a sidecar process, an HSM, or — at minimum — a uid the application server cannot read. The composition is that any compromise of the application user lifts the on-chain signing capability.

SQLite-on-disk in a shared filesystem context. A db.sqlite3 file at 666 is, on a single-tenant single-user laptop, fine. On a multi-tenant server where the application user and the operations group both need filesystem access for their own legitimate reasons, it composes with arbitrary writes from any group member. The DB has admin tables. The DB does not need code execution to be compromised; SQL is enough.
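What "mode 666 on db.sqlite3" buys an attacker takes four lines of stdlib to state. Table and column names below are modeled on Django's auth_user, but the sketch is hypothetical:

```python
import sqlite3

def grant_admin(db_path: str, username: str) -> None:
    """What a world-writable db.sqlite3 concedes to any local user:
    a direct UPDATE on the admin flags, no application code involved."""
    con = sqlite3.connect(db_path)
    with con:  # commits on success
        con.execute(
            "UPDATE auth_user SET is_superuser = 1, is_staff = 1 "
            "WHERE username = ?",
            (username,),
        )
    con.close()
```

Nothing in the application's request path ever sees this write, which is why filesystem permissions are the only control that matters for an on-disk database.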

The diagnostic pattern, every time, is: what was the convenience? It is almost never sloppiness. It is almost always something that was, at the time, the most reasonable shortcut for a human at a keyboard with a deadline. Conveniences age into geography. The convenience at year zero is the geography of the system at year three, and the convenience never comes back up for review because nothing is breaking.

The kraken, at this scale, is almost never a kraken. It is a bloom. It is what many small individually-defensible decisions look like when the current of growth braids them together in the right order. You don't hunt a bloom. You change the current.

What I Did Not Reach

A pentest writeup that does not include what the tester did not reach is not honest, so:

I did not get root. The kernel was current (6.1.0-44, March 2026). SUID binaries on the box were a clean inventory — passwd, sudo, mount, the usual; no exposed setuid wrappers I had write paths to, no kernel exploit surface I had a credible PoC for at this kernel rev. /etc/shadow was unreadable to my user. There was no sudo permission on zknpr-pentest. The latent root-RCE chain via the deploy-wrapper service required a pivot to an account whose credentials I did not have, and the engagement RoE explicitly forbade exploitation paths I could not run cleanly. I noted the primitives in the report and stopped.

I did not force the latent MockAuth bypass into runtime. Confirming the bug end-to-end would have required me to touch wsgi.py to make mod_wsgi reload the current settings module — an effective one-byte write to a production-adjacent file that, on success, would have published a ?user_id=N admin bypass on a public-internet-facing dev subdomain for however many hours it took the operator to notice and roll it back. The code-and-configuration evidence was already complete; the runtime confirmation was not worth the live exposure. (The file was 644 app:webdev group-read-only anyway, which made the choice for me — but the reasoning is the part that matters.) A latent finding documented honestly is more valuable than a confirmed one whose confirmation introduced live risk.

I did not exfiltrate any user data beyond what the max-one-record rule required for evidence (a single bot_user row of my own pentest account, redacted before write). I did not touch on-chain assets. I did not move funds from the signer wallet. The wallet.key private key was read once into a local file long enough to derive the corresponding public address for the engagement evidence, then the local file was shredded; only the derived public address — public on-chain already from the wallet's prior transaction history — appears in the report.

All black-box active probing happened inside the agreed 02:00–08:00 UTC off-peak window. The grey-box pass ran outside that window — host-side enumeration does not produce the user-visible load that external scanning does, and Gerrit waived the off-peak constraint for it explicitly.

I did not chase findings outside Leviathan's own stack. The thirteen-database Postgres cluster spanned other projects under the same operator account; I confirmed scope (which databases existed, which roles connected) and stopped. Touching another tenant's data, even to demonstrate the composition risk, would have been outside the RoE's blast-radius limits.

A pentest report that fakes a descent it didn't actually make is worse than one that names where it stopped.

The Mending

This is where this engagement becomes a different category of writeup than most.

Inside thirty-six hours of receiving the grey-box report, the team had shipped:

  • A code-level PR closing the six findings that lived in the application's own repository: the MockAuth middleware deleted with a regression test pinning its absence in MIDDLEWARE; the auto-promotion signal handler removed and replaced with explicit admin grants; the submit-eth-address route removed entirely; the wallet-binding endpoint gated on ownership and authentication; a dedicated APIKey model with pbkdf2_sha256 hashing and a key_prefix index for O(1) lookup; a parser DoS guard with explicit byte and line caps. Six tests added, each pinning the specific behavior that had been wrong.
  • A server-side hardening pass — what the team and I called Phase 0 — that closed the filesystem class entirely. Setgid-2775 on the application trees was replaced with 2755 (setgid, group-read only — the group can deploy but cannot edit running code). The Python virtualenv was moved out of the application tree to a path the application user owns and the deploy group cannot reach. Redis required authentication. The deploy SSH key was set to 600 www-data:www-data. The on-chain signing keyfile was set to 400 root:root and the signer itself was rotated. .env files were tightened from 640 app:webdev to 640 app:www-data (Apache must read it; nobody else needs to). /server-status returned 403 from outside.
  • A Postgres credential rotation, with the application role downgraded out of doadmin to a per-app role with the minimum grants the application required.
  • An API-key migration. All eight legacy plaintext rows in the api_keys table were re-hashed and a migration manage-command was added so future operators have a safe path. The post-deploy verification confirmed key_pbkdf2_hashed = 8, empty_prefix_total = 0.
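The storage shape that migration lands on — a fast indexed prefix plus a slow salted hash, plaintext shown once and never persisted — can be sketched with stdlib primitives. Django's pbkdf2_sha256 hasher wraps the same PBKDF2 core with its own string encoding; this stand-in uses raw hashlib, and the iteration count is illustrative:

```python
import hashlib
import os
import secrets

def make_api_key() -> tuple[str, str, bytes, bytes]:
    """Generate a key; return (plaintext_once, prefix, salt, hash).
    Only prefix, salt, and hash are stored — the plaintext is shown
    to the caller once and never persisted."""
    plaintext = secrets.token_urlsafe(32)
    prefix = plaintext[:8]  # indexed for O(1) candidate lookup
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", plaintext.encode(), salt, 100_000)
    return plaintext, prefix, salt, digest

def check_api_key(presented: str, salt: bytes, stored: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", presented.encode(), salt, 100_000)
    return secrets.compare_digest(digest, stored)
```

The prefix exists so a presented key narrows the table to a handful of candidate rows before the expensive hash runs; without it, every authentication is a full-table PBKDF2 scan.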

Twenty-seven of the engagement's combined findings were independently re-verified closed across a single re-probe pass by the same exit IP that had run the original audit. Of the remaining items, three were ops-track (the deploy-wrapper privilege chain; ptrace_scope hardening; a deploy-tree dead-file cleanup); one was a deferred migration window for legacy plaintext API keys, on a documented timeline with a ship-by date.

I am writing those numbers because the part of an engagement that almost never makes it into a portfolio writeup is what the team did with the report. That is, in many ways, the only part that matters. A finding that does not get fixed is a portfolio piece for the tester and a liability for the team, indefinitely. Leviathan shipped. That is the honest and unusual thing.

What This Says About 2026 Pentests

Four things, at the level of generality that I think makes the post worth writing.

One: hardened fronts hide compositional risk by design. The defensive surface that took a real operator to build — the SSRF allowlist, the JWT primitives, the BOLA/BFLA matrix, the input validation on writes, the webhook signature gate — is exactly the surface that a black-box pass measures. When that surface is well-built, the black-box report is short and clean and accurate. It is also a partial picture, because the kind of risk that lives in operator conveniences is not on that surface. It is in the seams between processes, the seams between environments, the seams between projects under the same operator. Black-box pentests, by their structure, cannot see those seams. This is not a failing of black-box methodology. It is the reason grey-box exists.

Two: convenience patterns scale faster than the trust model that justified them. Every single Critical I found in the grey-box pass was, at year one, a sensible decision by an operator running a small project with two collaborators. None of them were sensible decisions for the operator running a small project with two collaborators plus a contractor's SSH user, an automation account, a Celery worker, a managed Postgres role spanning thirteen apps, and an AI agent that deploys with the same group permissions a human deployer has. The convenience didn't drift. The trust model under it did. Auditing convenience patterns annually — which one human is in this group, which one role connects to that DB, which one secret signs across these environments — is a maintenance discipline that does not exist in most shops because nothing breaks if you skip it. Until it does.

Three: a black-box pass followed by a grey-box pass — same tester — is the underused shape. Doing both costs about half again what doing one costs. The output is qualitatively different from either alone. The black-box pass establishes the external attack surface. It also produces the Critical that operationally matters most: the one any random scanner would have found within a week. The grey-box pass measures the compositional risk living behind the front. Most teams pick one. The ones that pick both get a substantively more honest picture.

Four: contributors should pentest their own codebases — formally. The default in small-team operations is opportunistic security. Someone notices a thing. Files a fix. Moves on. The discipline this post is about is the opposite. Sit down with a signed scope and an attacker model. Route your traffic through a third-party exit. Write down what you find as if you didn't already know how the function works. The compositions in this post would not have been caught by any opportunistic patch. The people best positioned to opportunistically notice them are exactly the people whose mental model already explains them away. Formality forces the contributor to inhabit the seat of an attacker who does not share the contributor's assumptions. Most contributors will never do this. That is most of why their codebases ship with the front held and the back composed.

Close

I am still a contributor. The PRs Gerrit shipped to remediate this engagement landed on top of branches I review on a regular cadence. The next conversation we had after the engagement closed was not about security at all — it was about a feature shipping the following week. That is the part of working with someone for long enough that I want to land at: the engagement closed cleanly because the relationship was already there, and the relationship continued cleanly because the engagement closed honestly.

If you operate a Django stack — or any web application stack with a small operations footprint, several environments, and a team that has grown faster than its convenience patterns have been re-audited — the diagnostic question I would offer you, which costs nothing to ask, is:

Walk through every group on the application server and list, by name, every Unix user in it. For each user, ask whether the original convenience that justified that group membership still applies, and whether anything in the current trust model would notice if it didn't.
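The walk can start from a script. This sketch uses Python's POSIX-only grp and pwd modules to map every group to its full membership — secondary members from the group database plus everyone whose primary gid is that group — so the twenty minutes go into the questions, not the inventory:

```python
import grp
import pwd

def group_membership_report() -> dict[str, set[str]]:
    """Map every group on the host to the full set of users in it.
    Each name in the output is a question: does the convenience that
    put it there still apply under the current trust model?"""
    report: dict[str, set[str]] = {}
    gid_to_name: dict[int, str] = {}
    for g in grp.getgrall():
        report[g.gr_name] = set(g.gr_mem)  # secondary members
        gid_to_name[g.gr_gid] = g.gr_name
    for u in pwd.getpwall():
        name = gid_to_name.get(u.pw_gid)  # primary-gid members
        if name is not None:
            report[name].add(u.pw_name)
    return report

if __name__ == "__main__":
    for group, members in sorted(group_membership_report().items()):
        if len(members) > 1:  # shared groups are where trust models drift
            print(f"{group}: {', '.join(sorted(members))}")
```

Groups with more than one member are the interesting rows; a webdev or www-data line with a contractor, an automation user, or an agent account in it is the finding this post is about.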

That is twenty minutes of work. The first time anyone has done it for any stack I have looked at, the answer has been a finding. It will probably be a finding for yours.

And if you contribute to a codebase you have not yet audited from outside the way an outsider would: schedule the time, sign a scope with yourself or your collaborators, route the traffic, and pretend not to know. The compositions hiding behind your hardened front are easier to find on a Saturday with a signed RoE than on a Tuesday afternoon at your normal terminal.

If you'd like a second pair of eyes on a stack of your own, my contact is in security.txt. There is also a longer, more literary version of this engagement — written in jellyfish and tides instead of .env files and setgid bits — sitting in a drafts folder that may eventually escape it.

For now, what mattered: the front held, the back composed, and the team mended. That is more than most engagements get to say.