All our servers and company laptops went down at pretty much the same time. Laptops have been bootlooping to blue screen of death. It’s all very exciting, personally, as someone not responsible for fixing it.

Apparently caused by a bad CrowdStrike update.

Edit: now being told we (who almost all generally work from home) need to come into the office Monday as they can only apply the fix in-person. We’ll see if that changes over the weekend…

  • jedibob5@lemmy.world · 4 months ago

    Reading into the updates some more… I’m starting to think this might just destroy CrowdStrike as a company altogether. Between the mountain of lawsuits almost certainly incoming and the total destruction of any public trust in the company, I don’t see how they survive this. Just absolutely catastrophic on all fronts.

    • NaibofTabr@infosec.pub · 4 months ago

      If all the computers stuck in boot loop can’t be recovered… yeah, that’s a lot of cost for a lot of businesses. Add to that all the immediate impact of missed flights and who knows what happening at the hospitals. Nightmare scenario if you’re responsible for it.

      This sort of thing is exactly why you push updates to groups in stages, not to everything all at once.
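
      To make that concrete, here’s a minimal, purely illustrative sketch of ring-based rollout gating (hypothetical ring names, sizes, and hooks; not anyone’s actual pipeline):

          from dataclasses import dataclass
          from typing import Callable

          @dataclass
          class Ring:
              name: str
              fraction: float  # share of the fleet that gets the update in this ring

          # Hypothetical rings; real names, sizes and bake times would differ.
          RINGS = [Ring("internal", 0.001), Ring("canary", 0.01),
                   Ring("early", 0.10), Ring("broad", 1.00)]

          def staged_rollout(update_id: str,
                             deploy: Callable[[str, Ring], None],
                             healthy: Callable[[str, Ring], bool]) -> bool:
              """Push the update ring by ring; halt at the first ring that looks unhealthy."""
              for ring in RINGS:
                  deploy(update_id, ring)
                  # A real pipeline would bake for hours here, watching crash/boot telemetry.
                  if not healthy(update_id, ring):
                      print(f"{update_id}: halting, {ring.name} ring unhealthy")
                      return False
                  print(f"{update_id}: {ring.name} ring healthy, promoting")
              return True

          # Toy usage: an update that boot-loops hosts never gets past the first ring.
          staged_rollout("channel-file-291",
                         deploy=lambda u, r: None,
                         healthy=lambda u, r: False)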

      • rxxrc@lemmy.ml (OP) · 4 months ago

        Looks like the laptops can be recovered with a bit of finagling, so fortunately they haven’t bricked everything.

        And yeah staged updates or even just… some testing? Not sure how this one slipped through.

        • dactylotheca@suppo.fi · 4 months ago

          Not sure how this one slipped through.

          I’d bet my ass this was caused by terrible practices brought on by suits demanding more “efficient” releases.

          “Why do we do so much testing before releases? Have we ever had any problems before? We’re wasting so much time that I might not even be able to buy another yacht this year”

            • dactylotheca@suppo.fi · 4 months ago

              Certainly not! Or other industries for that matter. It’s a good thing executives everywhere aren’t just concentrating on squeezing the maximum amount of money out of their companies and funneling it to themselves and their buddies on the board.

              Sure, let’s “rightsize” the company by firing 20% of our workforce (but not management!) and raise prices 30%, and demand that the remaining employees maintain productivity at the level it used to be before we fucked things up. Oh and no raises for the plebs, we can’t afford it. Maybe a pizza party? One slice per employee though.

        • Confused_Emus@lemmy.dbzer0.com · 4 months ago

          One of my coworkers, while waiting on hold for 3+ hours with our company’s outsourced helpdesk, noticed after booting into safe mode that the Crowdstrike update had triggered a snapshot that she was able to roll back to and get back on her laptop. So at least that’s a potential solution.

    • RegalPotoo@lemmy.world · 4 months ago

      Agreed, this will probably kill them over the next few years unless they can really magic up something.

      They probably don’t get sued - their contracts will have indemnity clauses against exactly this kind of thing, so unless they seriously misrepresented what their product does, this probably isn’t a contract breach.

      If you are running CrowdStrike, it’s probably because you have some regulatory obligations and an auditor to appease - you aren’t going to be able to just turn it off overnight, but I’m sure there are going to be some pretty awkward meetings when it comes to contract renewals in the next year, and I can’t imagine them seeing much growth.

      • Skydancer@pawb.social · 4 months ago

        Nah. This has happened with every major corporate antivirus product. Multiple times. And the top IT people advising on purchasing decisions know this.

        • SupraMario@lemmy.world · 4 months ago

          Yep. This is just uninformed people thinking this doesn’t happen. It’s been happening since AV was born. It’s not new, and this will not kill CS; they’re still king.

        • corsicanguppy@lemmy.ca · 4 months ago

          At my old shop we still had people giving money to checkpoint and splunk, despite numerous problems and a huge cost, because they had favourites.

      • jedibob5@lemmy.world · 4 months ago

        Don’t most indemnity clauses have exceptions for gross negligence? Pushing out an update this destructive without it getting caught by any quality control checks sure seems grossly negligent.

    • rozodru@lemmy.ca · 4 months ago

      It’s just amateur hour across the board. Were they testing in production? No code review, or even a peer review? They rolled out on a Friday? It’s basic startup-level “here’s what not to do” type shit that a junior dev fresh out of university would know. It’s “explain to the project manager with crayons why you shouldn’t do this” type of shit.

      It just boggles my mind that an update was rolled out to production with clearly no testing. There was no code review either, because experts are saying it was the result of poorly written code.

      Regardless, if you’re low-level security then apparently you can just boot into Safe Mode and rename the CrowdStrike folder, and that should fix it. Higher level, not so much, because you’re likely on BitLocker, which… yeah, don’t get me started on that bullshit.
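
      A rough sketch of that rename workaround, for what it’s worth - it assumes the commonly reported sensor folder location and has to be run with admin rights after booting into Safe Mode:

          import os
          from pathlib import Path

          # Commonly reported location of the CrowdStrike sensor's driver/channel files;
          # adjust the path if your Windows install lives elsewhere.
          CS_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

          def sideline_crowdstrike_folder() -> None:
              """Rename the CrowdStrike folder so the broken update can't load at boot."""
              if not CS_DIR.exists():
                  print(f"Nothing to do: {CS_DIR} not found")
                  return
              backup = CS_DIR.with_name(CS_DIR.name + ".bak")
              os.rename(CS_DIR, backup)  # needs admin rights; fails if files are locked
              print(f"Renamed {CS_DIR} -> {backup}; reboot normally, then repair the sensor")

          if __name__ == "__main__":
              sideline_crowdstrike_folder()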

      Regardless, I called out of work today. No point. It’s Friday; generally nothing gets done on Fridays (cause we know better), and especially today nothing is going to get done.

      • Revan343@lemmy.ca · 4 months ago

        explain to the project manager with crayons why you shouldn’t do this

        Can’t; the project manager ate all the crayons

      • candybrie@lemmy.world · 4 months ago

        Why is it bad to do on a Friday? Based on your last paragraph, I would have thought Friday is probably the best weekday to do it.

        • Lightor@lemmy.world · 4 months ago

          Most companies, mine included, try to roll out updates during the middle or start of a week. That way if there are issues the full team is available to address them.

        • rozodru@lemmy.ca · 4 months ago

          Because if you roll out something to production on a Friday, who’s there to fix it on Saturday and Sunday if it breaks? Friday is the WORST day of the week to roll anything out. You roll out on Tuesday or Wednesday; that way if something breaks you’ve got people around to jump in and fix it.

      • corsicanguppy@lemmy.ca · 4 months ago

        rolling out an update to production that there was clearly no testing

        Or someone selected “env2” instead of “env1” (#cattleNotPets names) and tested in prod by mistake.

        Look, it’s a gaffe and someone’s fired. But it doesn’t mean fuck ups are endemic.

        • catloaf@lemm.ee · 4 months ago

          I’m not sure what you’d expect to be able to do in a safe mode with no disk access.

    • ThrowawaySobriquet@lemmy.world · 4 months ago

      I think you’re on the nose, here. I laughed at the headline, but the more I read the more I see how fucked they are. Airlines. Industrial plants. Fucking governments. This one is big in a way that will likely get used as a case study.

    • Munkisquisher@lemmy.nz · 4 months ago

      Yeah, saw that several steel mills have been bricked by this; that’s months and millions to restart.

      • gazter@aussie.zone · 4 months ago

        Got a link? I find it hard to believe that a process like that would stop because of a few Windows machines not booting.

          • drspod@lemmy.ml · 4 months ago

            Those machines should be airgapped, with no need to run CrowdStrike on them. If the process controller machines of a steel mill are connected to the internet and installing auto updates, then there really is no hope for this world.

            • Munkisquisher@lemmy.nz · 4 months ago

              I work in an environment where the workstations aren’t on the Internet; there’s a separate network. There’s still a need for antivirus, and we were hit with the BSOD yesterday.

            • Hotzilla@sopuli.xyz · 4 months ago

              There is no less safe place than an isolated network. AV and XDR are not optional in industry, healthcare, etc.

        • conciselyverbose@sh.itjust.works · 4 months ago

          There are a lot of heavy manufacturing tools that are controlled and have their interface handled by Windows under the hood.

          They’re not all networked, and some are super old, but a more modernized facility could easily be using a more modern version of Windows and be networked to have flow of materials, etc more tightly integrated into their systems.

          The higher precision your operation, the more useful having much more advanced logs, networked to a central system, becomes in tracking quality control.

          Imagine if, after the fact, you could track the 0.1% of batches that are failing more often, look at the per-second temperature logs from the process, and see that there’s a 1° temperature variance between the 30th and 40th minute that wasn’t experienced by the rest of your batches. (Obviously that’s nonsense, because I don’t know anything about the actual process of steel manufacturing. But I do know that there’s a lot of industrial manufacturing tooling that’s an application on top of Windows, and the higher precision your output needs to be, the more useful it is to have high-quality data every step of the way.)
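
          A toy sketch of that kind of after-the-fact check (made-up numbers, column names, and threshold; nothing to do with any real mill’s data):

              import pandas as pd

              # Hypothetical per-second temperature log for three batches; batch "B" runs
              # about 1 °C hot between minutes 30 and 40, the others don't.
              def batch_log(batch_id: str, bump: float) -> pd.DataFrame:
                  temps = [1500.0 + (bump if 30 * 60 <= s < 40 * 60 else 0.0)
                           for s in range(3600)]
                  return pd.DataFrame(
                      {"batch_id": batch_id, "second": range(3600), "temp_c": temps})

              logs = pd.concat([batch_log("A", 0.0), batch_log("B", 1.0), batch_log("C", 0.0)])

              # Mean temperature per batch inside the suspect window (minutes 30-40)...
              window = logs[(logs["second"] >= 30 * 60) & (logs["second"] < 40 * 60)]
              window_mean = window.groupby("batch_id")["temp_c"].mean()

              # ...and how far each batch drifts from the typical batch in that window.
              drift = (window_mean - window_mean.median()).abs()
              print(drift[drift >= 0.5])  # flags only batch "B"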

      • Nachorella@lemmy.sdf.org · 4 months ago

        They can have all the clauses they like but pulling something like this off requires a certain amount of gross negligence that they can almost certainly be held liable for.

        • IsThisAnAI@lemmy.world · 4 months ago

          For what? At best it would be a hearing on the challenges of national security with industry.

    • Bell@lemmy.world · 4 months ago

      Don’t we blame MS at least as much? How does MS let an update like this push through their Windows Update system? How does an application update make the whole OS unable to boot? Blue screens on Windows have been around for decades; why don’t we have a better recovery system?

      • sandalbucket@lemmy.world · 4 months ago

        CrowdStrike runs at ring 0, effectively as part of the kernel, like a device driver. There are no safeguards at that level. Extreme testing and diligence are required, because these are the consequences of getting it wrong. This is entirely on CrowdStrike.