Who put vulns in my patch?
When updates can't save you

Welcome to the Security Theater
Every security patch is a gift to both defenders and attackers. Every “fix” inevitably smuggles in changes: more code, made from the same fallible materials as the original!
Even worse, a patch is a blueprint for attackers. The changed binaries can be diffed against their unpatched predecessors, and the diff points straight at the code that was vulnerable: ripe for reverse engineering.
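Don’t take my word for it; the technique is almost embarrassingly simple. Here’s a toy sketch in Python that flags which byte ranges changed between a pre-patch and post-patch binary (the file names are hypothetical, and real attackers reach for purpose-built differs like BinDiff or Diaphora, which compare at the function level rather than raw bytes):

```python
# naive_patch_diff.py - locate changed byte ranges between two binary versions
# (toy sketch only; serious patch diffing uses tools like BinDiff or Diaphora)
from itertools import groupby

def changed_ranges(old_path: str, new_path: str):
    """Yield (offset, length) for each region where the two files differ."""
    with open(old_path, "rb") as f:
        old = f.read()
    with open(new_path, "rb") as f:
        new = f.read()
    limit = min(len(old), len(new))
    # collect every differing offset, then collapse consecutive offsets into ranges
    diffs = (i for i in range(limit) if old[i] != new[i])
    for _, run in groupby(enumerate(diffs), key=lambda pair: pair[1] - pair[0]):
        offsets = [offset for _, offset in run]
        yield offsets[0], len(offsets)

if __name__ == "__main__":
    # hypothetical file names; substitute the real pre/post-patch binaries
    for offset, length in changed_ranges("service-1.0.bin", "service-1.1.bin"):
        print(f"changed {length} bytes at offset {hex(offset)}")
```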
ABP: Always Be Patching
WARNING: Patch may contain today’s fixes (and definitely tomorrow’s vulnerabilities).
The reality is messier than I’d like. Ask any greybeard sysadmin about rapid updates and you’ll hear hard-earned wisdom: “Wait six months. Don’t be their free beta tester.”
Let’s take a moment to appreciate the IT team’s dilemma:
- Patch right now: risk breaking production.
- Patch later: risk getting pwned.
- Can’t (or Don’t) patch: serious risk of getting pwned.
- Defend & mitigate: harden systems, disable weak ciphers, rotate passwords, etc. Who has time for all that?!? Read CVEs? Oh, sweet child, I have bad news…
The Best Intentions
Patches kill systems too—no attackers required.
The CrowdStrike incident of July 2024 proved a harsh truth: following “best practices” offers no immunity when untested code crashes critical infrastructure. Within hours, flights were grounded worldwide and hospitals were canceling appointments and procedures.
But ignoring patches? That’s guaranteeing exploitation of known vulnerabilities.
The Lies We Tell Ourselves
Throwing money at security often backfires. Complex, layered controls become impossible to manage—and impossible to monitor.
The right investment level? The optimal controls? The perfect security-usability balance?
It depends. (Yes, the consultant’s favorite answer.)
But that’s actually good news: personalized risk management beats one-size-fits-all every time.
Quitting Security Theater Camp
Stop the theatrics and start proactive risk management.
Determine & document everything that matters:
- Your actual threat landscape (not the vendor’s FUD)
- Incident response and recovery test schedules
- Downtime, data loss, and reputation damage tolerances
- Legal obligations when things go sideways
- Who does what during a crisis
Are there universal best practices? Yes, though implementation varies:
Key Considerations
- Adopt hardware security keys like YubiKeys for all users, or mandate passkeys at minimum.
  - OTPs can be phished or relayed through social engineering; hardware tokens bind the login to the real site’s origin, so they can’t.
- Require MFA on every service, no exceptions (an IAM audit sketch follows this list).
- Robust & verified backups:
  - Ensure cloud infrastructure has offline backups, ideally immutable and geographically dispersed (a cross-vendor upload sketch follows this list).
  - “Offline” means cross-vendor or cross-account (e.g., AWS backups in GCP or Azure, or third-party solutions like Backblaze B2).
  - Extend backup strategies to employee devices, and account for awkward recovery scenarios like restoring over unreliable conference Wi-Fi when setting SLAs and recovery objectives.
- Conduct quarterly recovery drills: restore full infrastructure in an unused region using backups, snapshots, and infrastructure-as-code tools (a drill skeleton follows this list).
- Plant CanaryTokens alongside any real credentials so you’re the first to know when a breach begins (decoy-credential sketch below).
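On the MFA bullet: mandating it is policy, auditing it is what actually catches drift. A minimal sketch using boto3, assuming an AWS account with credentials already configured (the logic, not the account details, is the point), that lists IAM users who have a console password but no MFA device:

```python
# mfa_audit.py - flag IAM users who can log in to the console without MFA
# sketch only: assumes boto3 is installed and AWS credentials are configured
import boto3

iam = boto3.client("iam")

def users_without_mfa():
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            name = user["UserName"]
            try:
                iam.get_login_profile(UserName=name)  # raises if no console password
            except iam.exceptions.NoSuchEntityException:
                continue  # API-only user; no console login to protect
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                yield name

if __name__ == "__main__":
    for name in users_without_mfa():
        print(f"console access without MFA: {name}")
```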
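For the cross-vendor “offline” copy, one lightweight option is Backblaze B2’s S3-compatible endpoint, so the same boto3 client you already use for AWS can push archives to a second vendor under separate credentials. A rough sketch; the endpoint region, bucket name, and archive path are placeholders, and the immutability comes from enabling Object Lock / retention on the destination bucket, not from this script:

```python
# offsite_backup.py - push a backup archive to a second vendor (Backblaze B2)
# via its S3-compatible API; placeholders throughout, not a turnkey tool
import os
import boto3

# B2 exposes an S3-compatible endpoint per region; the credentials come from
# an application key created in the B2 console (kept OUT of the AWS account).
b2 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",  # placeholder region
    aws_access_key_id=os.environ["B2_KEY_ID"],
    aws_secret_access_key=os.environ["B2_APP_KEY"],
)

def ship_offsite(archive_path: str, bucket: str = "example-offsite-backups"):
    """Upload one archive; versioning/Object Lock on the bucket handles immutability."""
    key = os.path.basename(archive_path)
    b2.upload_file(archive_path, bucket, key)
    print(f"uploaded {archive_path} -> b2://{bucket}/{key}")

if __name__ == "__main__":
    ship_offsite("nightly-db-dump-2025-01-01.tar.gz")  # hypothetical archive name
```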
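Recovery drills are mostly process, but the mechanical part is scriptable. A skeleton drill runner, assuming your stack lives in Terraform and exposes a health endpoint; the directory, region, variable name, and URL are all hypothetical:

```python
# drill.py - quarterly restore drill: stand up the stack in a spare region,
# smoke-test it, then tear it down. Assumes terraform is installed and the
# referenced configuration and variables exist in your own repo.
import subprocess
import urllib.request

DRILL_DIR = "infra/"                              # hypothetical Terraform root
DRILL_REGION = "eu-west-3"                        # an unused region for the exercise
HEALTH_URL = "https://drill.example.com/healthz"  # hypothetical endpoint

def tf(*args: str) -> None:
    subprocess.run(["terraform", *args], cwd=DRILL_DIR, check=True)

def run_drill() -> None:
    tf("init", "-input=false")
    tf("apply", "-auto-approve", f"-var=region={DRILL_REGION}")
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=30) as resp:
            print("restore drill health check:", resp.status)
    finally:
        # always tear the drill environment back down, even if the check fails
        tf("destroy", "-auto-approve", f"-var=region={DRILL_REGION}")

if __name__ == "__main__":
    run_drill()
```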
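And the canaries: the trick is making the decoy indistinguishable from the real thing. A sketch that plants a decoy AWS profile next to your genuine credentials, using a key pair generated at canarytokens.org so any use of it alerts you immediately (the profile name and the obviously fake placeholder keys are illustrative):

```python
# plant_canary.py - drop a decoy AWS profile that alerts when used
# the key pair should come from canarytokens.org's AWS keys token; anything
# that touches it triggers an alert to you, not a real AWS account.
import configparser
from pathlib import Path

CREDENTIALS = Path.home() / ".aws" / "credentials"

def plant(profile: str = "prod-billing",            # tempting-sounding decoy name
          access_key: str = "AKIAXXXXXXXXEXAMPLE",   # replace with canarytokens.org values
          secret_key: str = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxEXAMPLE") -> None:
    config = configparser.ConfigParser()
    if CREDENTIALS.exists():
        config.read(CREDENTIALS)  # keep existing profiles (comments are not preserved)
    config[profile] = {
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }
    CREDENTIALS.parent.mkdir(parents=True, exist_ok=True)
    with CREDENTIALS.open("w") as fh:
        config.write(fh)

if __name__ == "__main__":
    plant()
```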
The Other Side of Fear
Know your risk profile: What data do you protect? Which threats matter? How much downtime can you afford? What’s cheaper—recovery or rebuilding?
Consider your actual exposure:
- Sensitive data access (banking/crypto)
- Web application vulnerabilities (XSS/CSRF)
- Supply chain risks and insider threats
- Public-facing services (zero-day targets)
- Tolerances for ransomware, fines, reputation damage
The unsexy truth: security is layers, not silver bullets. Defense in depth, offline backups, disaster drills, compensating controls. Treat patches as necessary evils, not cure-alls.