• HighPriestOfALowCult@lemmy.sdf.org · 1 year ago

    This desktop right here (running a couple of ZFS pools) has drives with more than 3 years on them…

    $ for d in /dev/sd[a-z]; do sudo smartctl --all --json "$d" | jq -c '[.model_name, (.power_on_time?.hours? / 8760)]'; done
    ["CT1000MX500SSD1",2.2034246575342467]
    ["WDC WD140EDGZ-11B1PA0",0.3791095890410959]
    ["TOSHIBA HDWE140",4.040639269406393]
    ["TOSHIBA HDWE140",5.925684931506849]
    ["WDC WD80EMAZ-00WJTA0",3.359246575342466]
    ["TOSHIBA HDWE140",5.925684931506849]
    

    Runs like a top (better not jinx myself).

    • azertyfun@lemmy.blahaj.zone · 1 year ago

      Yep, it’s basically the best environment for them: presumably relatively few writes compared to the uptime, a case with little vibration (!), and very few power cycles (!!!). All the drive really does is spin on a highly precise bearing.
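
      If you’re curious just how few power cycles that actually is, the same smartctl + jq trick from the comment above shows it. Rough sketch, assuming your smartmontools build exposes power_cycle_count in its JSON output (recent versions do):

      # one line per drive: model, lifetime power cycles, power-on hours
      for d in /dev/sd[a-z]; do
          sudo smartctl --all --json "$d" |
              jq -c '[.model_name, .power_cycle_count?, .power_on_time?.hours?]'
      done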

      Anyway, drive lifespans only matter for cost projections; when it comes to data integrity you should ALWAYS assume that a drive is about to fail. Sometimes it fails after 2 years and sometimes it runs for 20, and that’s just the luck of the draw.
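
      In practice that means monitoring for failure rather than trying to predict it. A minimal sketch of the idea, assuming SATA drives at /dev/sd[a-z] and that your smartctl reports the overall verdict as smart_status.passed in its JSON output (current smartmontools does):

      # warn about any drive whose overall SMART self-assessment is failing;
      # prints nothing when everything passes, so it's easy to drop into cron
      for d in /dev/sd[a-z]; do
          sudo smartctl --info --health --json "$d" |
              jq -r 'select(.smart_status.passed == false) | "\(.model_name): SMART self-assessment FAILED"'
      done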