Several people who received the CrowdStrike offer found that the gift card didn't work, while others got an error saying the voucher had been canceled.
Tbh the RHEL/Debian bug only occurred because of bugs in Debian and RHEL themselves; they couldn’t really do much about those. Especially the Debian one, which only showed up on Linux kernels several versions newer than the normal Debian kernel.
CrowdStrike shipping a buggy release can just happen sometimes. I just hope the entire industry considers that relying on three or four vendors for auto-updating software installed on all the corporate computers in the world may not be a good idea.
This whole thing could’ve been malicious. We got lucky that it only crashed these systems; just imagine the damage you could do if you hacked CrowdStrike itself and pushed out a cryptolocker.
Not just CrowdStrike - any vendor that does automatic updates, and that’s more of them every day. Microsoft too big for a bad actor to do as you describe? Nope. Anything relying on free software? Supply chain vulnerabilities are huge and well documented - it’s only a matter of time.
The automatic update part was akin to virus definitions and triggered a bug in code released long before that. Not auto-updating your antivirus software would put a pretty high tax on the IT team as those updates can get released multiple times a day (and during weekends). I agree on not auto updating text editors and such, but there are types of software that need updates quickly and often.
Supply chain attacks can always work, but this shows how ill-prepared companies are for their systems failing at this scale. The fix itself is maybe a minute or two per device if you use Microsoft’s dedicated repair tool, maybe even less if you run it over PXE boot, but we’re still weeks away from fixing the damage everywhere.
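For anyone curious, the manual workaround that was circulating boiled down to booting into Safe Mode (or a recovery prompt) and deleting the bad channel file before rebooting. Here’s a rough Python rendering of that one step, purely as an illustration (the path and file pattern are the publicly reported ones, not anything official):

```python
# Illustration only: the widely reported manual workaround was to delete the
# faulty channel file(s) from Safe Mode or a recovery environment and reboot.
# Path and file pattern follow public reports; double-check before running
# anything like this on a real machine.
from pathlib import Path

driver_dir = Path(r"C:\Windows\System32\drivers\CrowdStrike")

for channel_file in driver_dir.glob("C-00000291*.sys"):
    print(f"Deleting {channel_file}")
    channel_file.unlink()
```

Microsoft’s recovery tool essentially automates the same step from WinPE, which is why running it over PXE scales so much better than touching every box by hand.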
Nah, I don’t buy that. When you’re in critical infrastructure like that, it’s your job to anticipate things like people running kernel versions above or below the ones you tested. This isn’t the latest version of Flappy Bird; this is kernel-level code that needs to be space-station-level accurate, and they’re pushing it remotely to a massive amount of critical infrastructure.
I won’t say this was one guy, and I definitely don’t think it was malicious. This is just standard corporate software engineering, where deadlines are pushed to the max and QA is seen as an expense, not an investment. They’re learning the harsh realities of cutting QA processes right now, and I say good. There is zero reason a bug of this magnitude should have gone out. I mean, it was an empty file of zeroes. How did they not have a pipeline to check that file, code in the kernel itself to validate it, or anyone putting eyes on it before pushing it? (See the sketch below for the kind of check I mean.)
This was a massive company-wide fuckup, and it’s going to end with them explaining to Congress and to many, many courts what happened.
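That kind of gate doesn’t need to be fancy. A minimal sketch of a pre-release check that rejects a content file which is empty or almost entirely zero bytes might look like the following (the threshold and usage are illustrative assumptions, with no claim that this matches CrowdStrike’s actual pipeline or file format):

```python
# Minimal sketch of a pre-release sanity gate for a content/channel file:
# refuse to ship anything that is empty or almost entirely zero bytes.
# The threshold and CLI usage are illustrative assumptions, not a real spec.
import sys
from pathlib import Path

MIN_NONZERO_RATIO = 0.01  # arbitrary: at least 1% of bytes must be non-zero


def looks_sane(path: Path) -> bool:
    data = path.read_bytes()
    if not data:
        return False
    nonzero = sum(1 for byte in data if byte != 0)
    return nonzero / len(data) >= MIN_NONZERO_RATIO


if __name__ == "__main__":
    channel_file = Path(sys.argv[1])
    if not looks_sane(channel_file):
        sys.exit(f"{channel_file} failed the sanity check; refusing to release")
    print(f"{channel_file} passed the basic check")
```

A real gate would presumably also load the file the same way the sensor does, ideally on a canary ring of machines, before it ever reaches customers, but even a trivial check like this would catch a file of zeroes.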
Even an AI would be good enough to avoid (or help someone avoid) pushing a bug like this 🫣
The Windows ordeal was definitely a fuck-up of their testing pipeline, and no doubt has something to do with the mass layoffs earlier this year. I’m sure they’ll be sued into oblivion (though I wonder what making this company go bankrupt or extracting the money out of it through lawsuits will do to all the businesses that currently have it deployed).
The channel file wasn’t entirely zeroes, not for every customer at least. The code pages that were mapped as callbacks were empty or garbled, but not the entire file (see this thread, for instance).
However, society shouldn’t crumble because of something like this. It shows how fragile our critical infrastructure really is. I don’t care about airlines and such, but 911 shouldn’t go down because of CrowdStrike, or even because of Windows. Even airlines should’ve been able to fly some planes; it’s not like Boeings run Windows.