I spent weeks writing a CSP by hand, then I automated it away
Content-Security-Policy is one of those headers that sits quietly in the "we should really do this properly" pile until a security review forces the issue. Then you sit down, open the devtools console, and spend a week clicking around your own app.
I did this often enough that I built a tool to stop doing it.
The loop I got tired of
I'd been working through a batch of apps, ratcheting up their baseline: stricter headers, tighter defaults, the usual hardening checklist. Each one bottomed out at the same step — writing a Content-Security-Policy by hand.
The loop is familiar to anyone who has done it:
- Draft a reasonable-looking policy based on what you think the app loads.
- Ship it in report-only mode.
- Open the site, click around, watch the console fill with violations you didn't predict.
- Loosen the policy. Ship again.
- Discover an authenticated page loads a third-party analytics script you'd forgotten about.
- Give up, add `'unsafe-inline'` and a wildcard or two, tell yourself you'll tighten it later.
Step 6 is the quiet failure mode. "Later" never comes, and the policy that was supposed to harden the app ends up being a decorative header.
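For reference, the enforced and report-only modes differ only in the header name; report-only logs violations without blocking anything, which is what makes the observe-then-tighten loop possible at all (`report-uri` is deprecated in favour of `report-to`, but remains the widely supported way to collect reports):

```text
# enforcing: the browser blocks anything outside the allow-list
Content-Security-Policy: default-src 'self'

# report-only: nothing is blocked; violations are sent to the endpoint
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-reports
```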
When another solution landed in front of me and I realised I was about to run the loop again, I stopped. A CSP is a mechanical artefact: load the pages, observe what the browser would have blocked, write it down. Nothing about that process actually requires a human — it just historically has, because the tooling was built around developer consoles rather than around automation.
What I wanted the tool to do
I wrote down three properties before touching any code.
Exhaustive. A human clicking around will miss the page they don't remember exists. The tool had to crawl every same-origin link up to a configurable depth, and — equally important — support an authenticated session, because half the interesting resources only load after login.
Honest. The output had to be tight enough to actually deploy. A generator that emits `default-src *` is worse than useless: it launders a permissive policy through a tool and makes it look rigorous. If the tool couldn't produce something I'd ship without further editing, it wasn't worth building.
Available inside Claude Code. Most of this hardening work flows through an AI coding agent now. I didn't want to context-switch to a separate CLI and then paste results back. An MCP server made far more sense than a standalone utility.
How it works
Under the hood, the approach is straightforward:
- Launch a headless Chromium via Playwright.
- Inject a deny-all report-only CSP on every response, so every resource the page loads produces a violation report.
- Crawl the site — follow same-origin links up to a configured depth, or drive the session manually in a headed browser for flows that don't follow links.
- Capture violations two ways: a DOM listener that watches `securitypolicyviolation` events, and a local report-URI endpoint that collects the ones the browser sends directly.
- Aggregate violations into an allow-list policy, collapse directives that share the same sources into `default-src` where it's a win, and export in whatever format the target deployment wants — raw header, `<meta>` tag, nginx, Apache, Cloudflare, JSON.
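The aggregation step is mechanically simple: group blocked URIs by directive, reduce them to origin-level source expressions, and print. A minimal sketch (the type and function names are mine, not the tool's internals):

```typescript
// Turn collected violation reports into an allow-list policy string.
// Illustrative sketch only; the real tool also handles inline/eval reports,
// scheme sources, and directive collapsing into default-src.

type Violation = {
  effectiveDirective: string; // e.g. "script-src"
  blockedURI: string;         // e.g. "https://cdn.example.com/widget.js"
};

function buildPolicy(violations: Violation[]): string {
  const sources = new Map<string, Set<string>>();
  for (const v of violations) {
    let source: string;
    try {
      // Reduce a blocked URI to an origin-level source expression.
      source = new URL(v.blockedURI).origin;
    } catch {
      continue; // non-URL reports like "inline"/"eval" need hashes, not origins
    }
    if (!sources.has(v.effectiveDirective)) {
      sources.set(v.effectiveDirective, new Set());
    }
    sources.get(v.effectiveDirective)!.add(source);
  }
  const parts = ["default-src 'self'"];
  for (const [directive, srcs] of sources) {
    parts.push(`${directive} 'self' ${[...srcs].sort().join(" ")}`);
  }
  return parts.join("; ");
}
```

Two violations from the same origin collapse into one source expression, which is most of what "aggregate" means here.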
The loop that used to take a week of clicking now takes however long it takes Playwright to crawl the site.
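The crawl step itself reduces to a depth-limited walk over same-origin links. A sketch of the frontier logic, separated from the Playwright page driving (all names here are mine, not the tool's):

```typescript
// Decide which discovered links are worth queueing: same-origin only,
// deduplicated, depth-capped. Illustrative sketch.

function isSameOrigin(base: string, href: string): boolean {
  try {
    return new URL(href, base).origin === new URL(base).origin;
  } catch {
    return false; // unparseable href: skip it
  }
}

// Given links discovered on a page at `depth`, return the ones to visit next.
function nextFrontier(
  base: string,
  links: string[],
  depth: number,
  maxDepth: number,
  visited: Set<string>,
): string[] {
  if (depth >= maxDepth) return [];
  const out: string[] = [];
  for (const href of links) {
    if (!isSameOrigin(base, href)) continue;
    const url = new URL(href, base);
    url.hash = ""; // fragment-only differences are the same document
    const key = url.toString();
    if (visited.has(key)) continue;
    visited.add(key);
    out.push(key);
  }
  return out;
}
```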
Two things that didn't work the first time
The MITM proxy I didn't need
My first version built a local MITM proxy to intercept HTTPS responses and rewrite the CSP header at the transport level. It worked, but it dragged in certificate management, dual code paths for local-vs-remote targets, logging noise, and a pile of aging dependencies.
Then I properly read the Playwright docs and realised `page.route()` with `route.fetch()` + `route.fulfill()` could rewrite response headers for every target I cared about — including remote HTTPS sites with existing CSP headers — without any of the proxy infrastructure. I deleted roughly 1,100 lines of code. The tool got faster, simpler, and stopped needing a trusted CA certificate on the machine.
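The route-based rewrite boils down to fetching the real response and fulfilling it with the CSP headers swapped. A sketch of the header logic, with the Playwright hookup paraphrased from the docs in comments (the function names and collector port are mine, not the tool's source):

```typescript
// Replace any server-sent CSP with a deny-all report-only policy, so every
// resource the page loads produces a violation report instead of loading
// silently. The header names are the real ones; everything else is a sketch.

const REPORT_ONLY =
  "default-src 'none'; report-uri http://127.0.0.1:4321/csp-report"; // port is illustrative

function rewriteCspHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    const lower = name.toLowerCase();
    // Drop the site's own enforced/report-only CSP so it can't mask violations.
    if (lower === "content-security-policy") continue;
    if (lower === "content-security-policy-report-only") continue;
    out[name] = value;
  }
  out["content-security-policy-report-only"] = REPORT_ONLY;
  return out;
}

// Playwright hookup:
//
//   await page.route("**/*", async (route) => {
//     const response = await route.fetch();             // real network response
//     await route.fulfill({
//       response,
//       headers: rewriteCspHeaders(response.headers()), // swapped CSP headers
//     });
//   });
```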
The lesson: I'd built the complicated version first because I was solving the theoretical problem ("what if the browser processes headers before my hook runs?") instead of the actual one ("does this work on the sites I need it to work on?"). The theoretical problem wasn't real.
Hashing what the browser doesn't give you
To emit a CSP that drops `'unsafe-inline'`, you need a SHA-256 hash for every inline `<script>`, inline `<style>`, `on*` event handler, and `style` attribute the app legitimately uses. Miss one and the browser blocks it.
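The hash itself is just the base64 SHA-256 of the exact inline text, wrapped in a source expression. In Node:

```typescript
import { createHash } from "node:crypto";

// CSP source expression for an inline block: base64(SHA-256(text)), computed
// over the exact characters between the tags, whitespace included.
function cspHash(inlineText: string): string {
  const digest = createHash("sha256").update(inlineText, "utf8").digest("base64");
  return `'sha256-${digest}'`;
}

// e.g. add cspHash(scriptText) to script-src instead of 'unsafe-inline'
```

The whitespace sensitivity is exactly why truncated samples, described next, can never produce a matching hash.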
The obvious approach — hash the `sample` field on each CSP violation report — is quietly broken. Browsers truncate `sample` at 256 characters. The SHA-256 of 256 truncated characters never matches the SHA-256 of the full inline block that the browser will compute at enforcement time. My first implementation had a defensive guard that refused to emit hashes from truncated samples, which in practice meant almost no real inline scripts ever got hashed. The `--hash` flag was essentially a no-op.
The fix needed a different source of truth. I inject a `MutationObserver` at page init that watches `document.documentElement` from the earliest moment Playwright allows, and forwards the full text of every matched inline node through an exposed function back to Node.js for hashing. A post-load DOM scan runs as a belt-and-suspenders pass for anything the observer might have missed. Both feed into a `UNIQUE(session_id, kind, hash)` table that deduplicates automatically.
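A sketch of the Node-side collector, with the dedup key mirroring that unique constraint (this in-memory version and all its names are mine, not the tool's storage code); the browser-side feeder is shown in comments:

```typescript
import { createHash } from "node:crypto";

// Node-side collector for inline content forwarded from the page.
type InlineKind = "script" | "style" | "event-handler" | "style-attr";

const seen = new Map<string, { kind: InlineKind; hash: string }>();

// Returns true the first time a (session, kind, hash) triple is recorded,
// false for duplicates, mirroring UNIQUE(session_id, kind, hash).
function recordInline(sessionId: string, kind: InlineKind, text: string): boolean {
  const hash = createHash("sha256").update(text, "utf8").digest("base64");
  const key = `${sessionId}\u0000${kind}\u0000${hash}`;
  if (seen.has(key)) return false;
  seen.set(key, { kind, hash });
  return true;
}

// Browser-side feeder, registered via page.addInitScript + page.exposeFunction
// (paraphrased shape, not the tool's source):
//
//   new MutationObserver((mutations) => {
//     for (const m of mutations) {
//       for (const node of m.addedNodes) {
//         if (node instanceof HTMLScriptElement && !node.src) {
//           window.__recordInline("script", node.textContent ?? "");
//         }
//         // ...same idea for <style>, on* attributes, style attributes
//       }
//     }
//   }).observe(document.documentElement, { childList: true, subtree: true });
```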
The result: `--hash` produces policies that actually cover inline content, and apps using CSS-in-JS libraries (styled-components, emotion, Vanilla Extract) or lazy-loaded widgets don't get mysteriously broken in production when `'unsafe-inline'` is dropped.
What the output looks like
Against a typical app, from a cold run:
```sh
npx @makerx/csp-analyser crawl https://your-app.example.com --hash
```
You get back a policy you can paste into your deploy config. On a recent run against an internal tool I was hardening, it produced a working CSP across 24 pages — including authenticated routes and a handful of dynamically injected widgets — in under a minute, with every inline block hashed correctly.
The full time-to-deployment on that one was the minute of crawling, plus about five minutes to diff the policy against what I'd been about to write by hand. The hand-written version would have taken the rest of the afternoon and would have been worse.
A hardened app is only as good as the next pull request
A one-off CSP generation just moves the pain. You ship a strict policy, the next developer adds a third-party script, and something breaks in production. Eventually someone quietly loosens the header and the work evaporates.
The fix is keeping the tool in CI. CSP Analyser can crawl your dev server on every push, score the policy, and fail the build if the score drops. A diff command shows exactly which source expressions changed between runs — useful in PR review when a new dependency shows up.
That way the developer adding a legitimate resource sees immediately what CSP entry to add (a thirty-second fix), and an unexpected external resource gets caught before production, not after.
Where to get it
CSP Analyser is on npm as `@makerx/csp-analyser`, MIT-licensed. Docs live at cspanalyser.com.
CLI:
```sh
npx @makerx/csp-analyser crawl https://yourapp.com
```
Or wire it into Claude Code (or any MCP-compatible agent) as an MCP server, and ask the agent to generate and apply a CSP as part of its hardening pass. That's how I use it most days — it's one less thing I have to remember to do by hand.
If it saves you a week of console-watching, it was worth writing.