I'm guessing hundreds of billions if you could somehow add it all up.
I can't believe they pushed updates to 100% of Windows machines and somehow didn't notice a reboot loop. Epic gross negligence. Are their employees really this incompetent? It's unbelievable.
I wonder where MSFT and Crowdstrike are most vulnerable to lawsuits?
This outage seems like the natural result of dropping independent QA (by a team other than the always-optimistic dev team) as a mandatory step for extremely important changes, and of neglecting canary-type validations. The big question is whether businesses will migrate away from such a visibly incompetent organization. (Note: I blame the overall org; I am sure talented individuals tried their best inside a set of procedures that asked for trouble.)
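On the canary point, here is a minimal sketch of what a staged rollout gate can look like, assuming a hypothetical fleet-management API (the push/rollback/telemetry calls below are made up for illustration, not anything Crowdstrike actually exposes):

    import time

    STAGES = (0.001, 0.01, 0.1, 1.0)   # fraction of the fleet at each stage

    def healthy(fleet, update, threshold=0.99):
        # Hypothetical telemetry check: did the machines that received the
        # update keep reporting in, or did they drop off (e.g. a boot loop)?
        targeted = fleet.machines_targeted(update)
        reporting = fleet.machines_reporting(update)
        return targeted == 0 or reporting / targeted >= threshold

    def staged_rollout(fleet, update, soak_seconds=3600):
        # Widen the rollout one stage at a time; a boot-looping canary stops
        # the process at the first small stage instead of at 100% of the fleet.
        for fraction in STAGES:
            fleet.push(update, fraction)
            time.sleep(soak_seconds)   # let the canary soak before widening
            if not healthy(fleet, update):
                fleet.rollback(update)
                raise RuntimeError("canary unhealthy; rollout stopped")

In a setup like this, the first 0.1% stage would light up the telemetry long before the update reached the whole fleet.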
So there was apparently an Azure outage prior to this big one. A pretty common pattern at my company when there are big outages goes something like this:
1. Problem A happens; it’s pretty bad.
2. A fix for Problem A is rushed out very quickly. It is not given the usual amount of scrutiny, because Problem A needs to be fixed urgently.
3. The fix for Problem A ends up causing Problem B, which is a much bigger problem.
tl;dr: don’t rush your hotfixes through and cut corners in the process; that often leads to more pain.
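As a rough sketch, the alternative to step 2 is a release gate that refuses to skip validation just because a change is labelled a hotfix (the check names and the run_check/ship hooks here are illustrative, not any particular vendor's pipeline):

    REQUIRED_CHECKS = ("unit_tests", "integration_tests", "canary_rollout")

    def release(change, run_check, ship, is_hotfix=False):
        # run_check(change, name) -> bool and ship(change) are supplied by the pipeline.
        # The tempting shortcut is skipping the slow checks when is_hotfix is True,
        # which is exactly how the fix for Problem A becomes Problem B.
        for check in REQUIRED_CHECKS:
            if not run_check(change, check):
                raise RuntimeError(f"{check} failed; not shipping, hotfix or not")
        return ship(change)

The is_hotfix flag is deliberately unused: urgency can change how fast the checks run, not which ones run.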
If you’ve ever been forced to use a PC with Crowdstrike, it’s not amazing at all. I’m amazed an incident of this scale didn’t happen earlier.
Everything about it reeks of incompetence and gross negligence.
It’s the old story of the user and the purchaser being different parties: the software only needs to be good enough to be sold to third parties who never need to use it.
It’s a half-baked rootkit that’s part of performative cyberdefence theatrics.
> It’s a half-baked rootkit that’s part of performative cyberdefence theatrics.
That describes most of the space, IMO. In a similar vein, SOC2 compliance is bullshit. The auditors lack the technical acumen – or financial incentive – to actually validate your findings. Unless you’re blatantly missing something on their checklist, you’ll pass.
From an enterprise software vendor’s perspective, cyber checklists feel like a form of regulatory capture. Someone looking to sell something gets a standard or best practice created and added to the checklists, and everyone is forced to comply, regardless of context.
Any exception made to this checklist is reviewed by third parties that couldn't care less, bean counters, or those technically incapable of understanding the nuance, leaving only the large providers able to compete on the playing field they manufactured.
This will go on for multiple days, but hundreds of billions would be >$36 trillion annualized if it were that much damage for a single day. World annual GDP is about $100 trillion.
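Back-of-envelope, taking $100B as the low end of "hundreds of billions" for a single day:

    $100B/day × 365 days ≈ $36.5T/year, i.e. over a third of world annual GDP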
Their terms of use undoubtedly disclaim any warranty, fitness for purpose, or liability for any direct or incidental consequences of using their product.
I am LMFAO at the entire situation. Somewhere, George Carlin is smiling.
I wonder if companies are incentivized to buy Crowdstrike because of Crowdstrike's warranty, which allegedly reimburses you if you suffer monetary damage from a security incident while you're a paying customer.
There must be an incentive. Because from a security perspective, bringing in a 3rd party on a platform (Microsoft) to do a job the platform already does is literally the definition of opening up holes in your security. Completely b@tshit crazy; the salesmen for these products should hang their heads in shame. It's just straight-up bad practice. I'm astounded it's so widespread.