• 0 Posts
  • 101 Comments
Joined 11 months ago
Cake day: August 18th, 2023




  • Except we know what the lifecycle of physical storage is, its rate of performance decay (virtually none for solid state until failure), and that computers deliver consistent performance for the same operations over time. And again, while for a car such a small sample can't reasonably be extrapolated, for a computer processing an extremely simple format like JSON, when it is designed to handle FAR more difficult tasks on the GPU involving billions of floating point operations, it is absolutely, without a doubt, enough.

    You don’t have to believe me if you don’t want to, but I’m very confident in my understanding of JSON’s complexity relative to typical GPU workloads, computational analysis, computer hardware durability lifecycles, and software testing principles and best practices. 🤷


  • Imagine you have a car powered by a nuclear reactor with enough fuel to last 100 years and a stable energy output. Then you put it on a 5-mile road composed of the same 250 small segments in various configurations, but you know for a fact that it starts and ends at the same elevation. You also know that this car gains exactly as much performance going downhill as it loses going uphill.

    You set the car driving and determine that it takes 15 minutes to travel 5 miles. You reconfigure the road, same rules, and do it again. Same result: 15 minutes. You do this again and again and again and always get 15 minutes.

    Do you need to test the car on a 20 mile road of the same configuration to know that it goes 20mph?

    JSON is a text-based, uncompressed format. It has very strict rules and a limited number of data types and structures. Further, it cannot contain computational logic on its own. The contents can be interpreted after being read to extract logic, but the JSON itself cannot change its own computational complexity. As such, it’s simple to express every possible form and complexity a JSON object can take within just 0.6 MB of data. And once they know they can process that file in however-the-fuck-many microseconds, they can extrapolate to Gbps from there.
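    The extrapolation argument can be sketched in a few lines. This is a hypothetical benchmark, not anyone's actual test harness; the sample document and sizes are made up, but the idea is the same: parse a document that exercises every JSON type, measure the rate, and scale linearly.

```python
import json
import time

# Hypothetical sketch: a document exercising every JSON type
# (object, array, string, number, bool, null).
sample = {
    "nums": [1, 2.5, -3e4],
    "flags": [True, False, None],
    "nested": {"s": "text", "arr": [{"k": "v"}] * 50},
}
payload = json.dumps([sample] * 2000)  # roughly a megabyte of JSON
size_mb = len(payload) / 1e6

start = time.perf_counter()
json.loads(payload)
elapsed = time.perf_counter() - start

# Because JSON's grammar is fixed and carries no logic of its own,
# this measured rate extrapolates linearly to larger documents.
rate = size_mb / elapsed
print(f"{size_mb:.2f} MB parsed in {elapsed * 1e3:.1f} ms -> {rate:.0f} MB/s")
```

    Same deal as the car: once the rate is stable across configurations, a longer road tells you nothing new.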



  • There is no question that most myths and legends originated as an attempt to convey facts, theories, or guesses into the future.

    Humans are built to be pattern matching machines and prediction engines; it’s one of the big survival traits we developed through evolution and we’re better at it than any other species we know of.

    BUT objectively speaking we were still really, really bad at it. Yet that doesn’t stop us from trying.

    So we tend to do the best we can with the information we have available at the time.

    As others have said, “physics” - and science in general - is by definition immutable. It is the thing that can be tested with specific predictions that always turn out to be correct. If I can perform an experiment today, and you can perform the same experiment 100 years from now, and (adjusting for environmental factors and measurement accuracy) we get the same results, and we can repeat that over and over, that’s science.

    But our understanding, our knowledge of it, can change as you say. That doesn’t make physics less true, it just makes our knowledge of it, and our ability to describe it, less accurate.

    We can trace so many stories - including modern religions - to origins that attempt to explain our limited observations in the past. They were our best effort at matching patterns and predicting outcomes in the world around us. And the inaccuracies, the limitations don’t mean we should stop believing the things we think we understand today.

    It just means that we must recognize new information when it arrives as testable data, and incorporate it into our current understanding, relegating the wisdom of the past to history.






  • Here’s how it was intended to work:

    • debian, fedora, or another DEB- or RPM-based distribution updates its references to liblzma to 5.6.x in its latest release
    • the package repository is updated (usually through automation) by getting the infected tarball and compiling it into an RPM or DEB which is added to the repo
    • if the package is built using glibc and the gnu linker, and for a system that uses systemd, the exploit is enabled during compilation of the x86-64 version of the package; otherwise the result is normal
    • when an application is installed that depends on liblzma, possibly during OS installation itself, the infected RPM/DEB package from the package repository is downloaded and installed (assuming the system matches the requirements above)
    • in this particular case, OpenSSH was the primary target; if the attackers had wanted to, they could have targeted any web-facing service that uses liblzma, such as OpenSSL + Apache/nginx, etc
    • when the OpenSSH server is started on an infected system, it loads the infected liblzma binary
    • the attacker starts an SSH connection to the infected server, having already known about the server or by scanning the internet for visible ssh servers
    • during creation of the SSH connection, the user has the option of trying to sign in using an RSA key. The attacker uses a specially formed RSA key, available only to them, that also contains a chunk of code (the “payload”) that they want executed on the server
    • sshd doesn’t use liblzma directly, but on systemd distros it links libsystemd, which pulls liblzma in; during authentication, the infected library hooks the RSA verification routine, recognizes the attacker’s special RSA key, and executes the payload on the host system. Otherwise, the ssh session continues as normal
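    The key-recognition step can be sketched conceptually. To be clear, this is NOT the actual backdoor code; the marker, the function name, and the payload handling below are entirely made up for illustration (the real thing hid inside compiled liblzma and verified a cryptographic signature before running anything):

```python
# Conceptual sketch only: a hooked verification routine that checks
# incoming RSA key material for an attacker-specific marker before
# falling back to the normal authentication path.
ATTACKER_MARKER = b"\xde\xad\xbe\xef"  # hypothetical stand-in value

def hooked_verify(key_material: bytes) -> str:
    """Return which path the hooked code would take for this key."""
    if key_material.startswith(ATTACKER_MARKER):
        payload = key_material[len(ATTACKER_MARKER):]
        # the real backdoor verified a signature here, so only the
        # attacker's own keys could ever trigger the payload path
        return f"run payload ({len(payload)} bytes)"
    return "normal auth"

print(hooked_verify(ATTACKER_MARKER + b"evil"))  # attacker's key
print(hooked_verify(b"ordinary public key"))     # everyone else
```

    The point of the design is that every legitimate user sails through untouched, which is part of why it was so hard to spot.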

    This would not impact macOS because you couldn’t install the infected package: it is only ever built for Debian- or RPM-based systems running systemd, using glibc and the GNU linker, on x86-64. Unless I’m misunderstanding something, there is no way to get the infected compiled binaries to work on a macOS system

    Additionally, I should note that I’m not exactly an expert on this stuff; I’m just in the security space and have been reading about this as it happens, so it’s possible there are errors in my understanding. But that should at least give you the gist of the attack


  • Quick summary:

    • only impacts Debian-based and RPM-based Linux distributions
    • only impacts cases where liblzma is compiled from a tarball, rather than cloned source repository or precompiled binary
    • only impacts x64 architecture
    • introduced in liblzma 5.6.0, which was released in late February, so it only impacts installs that have received liblzma updates since then
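    As a quick sanity check, you can compare your system’s `xz --version` output against the affected upstream releases (5.6.0 and 5.6.1 were the ones that shipped the backdoor). The helper below is hypothetical, just to make the comparison concrete:

```python
# The upstream xz/liblzma releases known to contain the backdoor.
AFFECTED = {"5.6.0", "5.6.1"}

def is_affected(version: str) -> bool:
    """Check a version string (e.g. from `xz --version`) against the bad releases."""
    return version in AFFECTED

print(is_affected("5.6.1"))  # True
print(is_affected("5.4.6"))  # False
```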

    liblzma is a library for the lzma compression format. Loosely, this means it’s used by various other pieces of software that need this type of compression, rather than being an application itself.

    It is very widely used. It comes installed on most major Linux distributions and is used by software like openssh, one of the standard remote connection packages.

    However, since it was only in the tarball, you wouldn’t see it widely until debian, fedora, et al. released a new version that included the latest liblzma updates. This version hadn’t been added to any of the stable release channels yet, so the typical user wouldn’t have gotten it.

    I believe this would have gone out in debian 12.6 next week, and the attacker was actively petitioning fedora maintainers to get it added to fedora 40 & 41

    The interesting thing about this situation was how much effort the attacker put in to gain trust just to get to the point where they could do this, and how targeted the vulnerability seems to have been. They tried very hard to reduce the likelihood of being caught by only hitting a limited set of configurations


  • That’s why I put “real threat” in quotes; I was paraphrasing what I consider to be the excessive focus on FR

    I’m a security professional. FR is not the easiest way to track everybody/anybody. It’s just the most visible and easily grok’d by the general public because it’s been in movies and TV forever

    To wit, FR itself isn’t what makes it “easy”; rather, it’s the massive corpus of freely available training data combined with the willingness of various entities to share resources (e.g., sharing surveillance video with law enforcement).

    What’s “easiest” entirely depends on the context, and usually it’s not FR. If I’m trying to identify the source of a particular set of communications, FR is mostly useless (unless I get lucky and identify, like, the mailbox they’re using or something silly like that). I’m much more interested in voice identification, fingerprinting, geolocation, etc in that scenario

    Again, FR is just…known. And visible. And observable in its use for nefarious purposes by shitty governments and such.

    It’s the stuff you don’t see on the news or in the movies that you should really be worried about

    (and I’m not downvoting you either; that’s for when things don’t contribute, or deserve to be less visible because of disinformation; not for when you disagree with someone)


  • I know what you’re arguing and why you’re arguing it and I’m not arguing against you.

    I’m simply adding what I consider to be important context

    And again, the things I listed specifically are far from the only ways to track people. Shit, we can identify people using only the interference their bodies create in a wifi signal, or their gait. There are a million ways to piece together enough details to fingerprint someone. Facial recognition doesn’t have a monopoly on that bit of horror

    FR is the buzzword bogeyman of choice, and the one you are most aware of because people who make money from your clicks and views have shoved it in front of your face. But go ahead and tell me about what the “real threat” is 👍👍👍



  • … Why would I allow a misunderstanding to be perpetuated? People don’t have a right to read something just because it was there at one time. And it kept happening because people were only reading one comment in isolation without reading the prior context. Like, I replied with “in this context one of those is far left and the other is far right” and a third person came in with “there is no far left in US politics”, which is exactly what I was saying; they just failed to read the prior comments.

    It was only getting worse so I simply decided to stop it from progressing because I wasn’t about to sit there explaining over and over again the same damn thing.