I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

  • Humanius@lemmy.world · 9 months ago

    Short answer: It’s because of binary.
    Computers are very good at calculating with powers of two, and because of that a lot of computer concepts use powers of two to make calculations easier.

    1024 = 2^10

    Edit: Oops… It’s 2^10, not 2^7
    Sorry y’all… 😅
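
    For illustration, here is a minimal Python sketch of the powers of two involved (illustrative only):

    ```python
    # Powers of two grow quickly; 2^10 is the one closest to 1000,
    # which is how 1024 bytes came to be called a "kilobyte".
    for exp in (7, 10, 20, 30):
        print(f"2**{exp} = {2**exp}")

    # Output:
    # 2**7 = 128
    # 2**10 = 1024
    # 2**20 = 1048576
    # 2**30 = 1073741824
    ```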

    • insomniac_lemon@kbin.social · 9 months ago

      Just to add: I would argue that, by the definition of the prefixes, a kilobyte is 1000 bytes.

      However, there are other terms to use, in this case kibibyte (“kilo binary byte”, KiB instead of just KB). That way you are being clear about what you actually mean, which makes a particularly big difference with modern storage/file sizes (see the sketch below).

      EDIT: Of course, the link in the post goes over this; I admit my brain initially glossed over that and I thought it was a question thread.
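
      A rough Python sketch of how the decimal (SI) and binary (IEC) units diverge as sizes grow (illustrative numbers only):

      ```python
      # A drive sold as "1 TB" (10**12 bytes) shows up as roughly 0.91 TiB.
      SI  = {"kB": 10**3,  "MB": 10**6,  "GB": 10**9,  "TB": 10**12}
      IEC = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

      for (si_name, si_size), (iec_name, iec_size) in zip(SI.items(), IEC.items()):
          print(f"1 {si_name} = {si_size / iec_size:.4f} {iec_name}")

      # 1 kB = 0.9766 KiB
      # 1 MB = 0.9537 MiB
      # 1 GB = 0.9313 GiB
      # 1 TB = 0.9095 TiB
      ```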

    • klisurovi4@midwest.social · 9 months ago

      To add to that, computers are great with powers of 2 because they work with bits. Each bit can hold one of two values (0 or 1). That’s why computers have an easier time representing 1024, which is 2^10 (10 bits), than 1000, which isn’t a power of 2.
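
      A small Python sketch of that point (illustrative only):

      ```python
      # n bits can represent 2**n distinct values, so power-of-two sizes
      # line up exactly with bit boundaries; 1000 does not.
      n_bits = 10
      print(2 ** n_bits)        # 1024 values fit in 10 bits
      print(format(1023, "b"))  # 1111111111  (the largest 10-bit value)
      print(format(1024, "b"))  # 10000000000 (needs an 11th bit)
      print(format(1000, "b"))  # 1111101000  (1000 is not a round binary number)
      ```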