I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

  • Eyron@lemmy.world · 9 months ago

    This is all explained in the post we’re commenting on. The standard “kilo” prefix, from the metric system, predates modern computing and even the definition of a byte: 1700s vs 1900s. It seems very odd to argue that the older definition is the one trying to retcon.

    The binary usage in software was/is common, but it’s definitely more recent, and it causes a lot of confusion because it doesn’t match the older and broader standard. Computers are very good at numbers; they never should have tried to hijack the existing prefix, especially when it was already defined by existing international standards. One might be able to argue that the US hadn’t really adopted the metric system at the time, but the usage of 1000 to define kilo is clearly older than the usage of 1024 to define the kilobyte. The main new thing here (within the last 100 years) is that 1024 bytes is a kibibyte.

    Kibi is the retcon. Not kilo.
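    For reference, a quick Python sketch of the two prefix families the thread is arguing about (the values are the standard SI decimal and IEC binary definitions; the layout is just for illustration):

    ```python
    # SI decimal prefixes (general-purpose metric) vs. IEC binary prefixes
    # (coined specifically for powers of two).
    SI_DECIMAL = {
        "kilo (kB)": 10**3,
        "mega (MB)": 10**6,
        "giga (GB)": 10**9,
    }
    IEC_BINARY = {
        "kibi (KiB)": 2**10,
        "mebi (MiB)": 2**20,
        "gibi (GiB)": 2**30,
    }

    for (d_name, d_val), (b_name, b_val) in zip(SI_DECIMAL.items(), IEC_BINARY.items()):
        print(f"{d_name} = {d_val:>13,} bytes   {b_name} = {b_val:>13,} bytes")
    ```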

    • wewbull@feddit.uk · 9 months ago

      Kilo meaning 1,000 inside computer science is the retcon.

      Tell me, how much RAM do you have in your PC? 16 gig? 32 gig?

      Surely you mean 17.18 gig? 34.36 gig?
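      The arithmetic behind those figures, as a tiny Python sketch (assuming the sticks are sized in binary GiB, as RAM is, then re-expressed in strict decimal GB):

      ```python
      GIB = 2**30  # gibibyte: 1,073,741,824 bytes
      GB = 10**9   # decimal gigabyte: 1,000,000,000 bytes

      for gib in (16, 32):
          total_bytes = gib * GIB
          print(f"{gib} GiB = {total_bytes:,} bytes = {total_bytes / GB:.2f} GB")
      # 16 GiB = 17,179,869,184 bytes = 17.18 GB
      # 32 GiB = 34,359,738,368 bytes = 34.36 GB
      ```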