Greetings. I have a Debian server at home running a file server, a Jellyfin server, and some other things. I also had 4 external drives hooked up to it in a RAID 10 (or is it 1+0?) configuration. The SSD the actual server was installed on failed overnight and looks to be beyond recovery. So my question is: when I install Debian on a new drive to replace the failed one, is there any method I could use to get the RAID array working with the new install without rebuilding it? From what I have found it looks like that is not possible, but I figured I would ask. The actual RAID disks are fine; ironically they are about 8 years old, while the SSD was only about 2 and it was the one that failed. No important data was lost, it would just be a bit of a pain to replace everything that was on the server if I have to rebuild the array and lose all of the data. Thanks in advance.

EDIT: Forgot to include that this was set up using mdadm

EDIT2: So it turns out this was not as massive a problem as I thought. I had assumed that since the server that set up the RAID array with mdadm was lost, I would not be able to get back into the array even though the data was still there. That was not the case: as soon as I connected the drives to the new server, mdadm recognized the array. It turns out one of the RAID disks had also failed (no idea why, but they are old), but I luckily have a spare, so I swapped it in and now have to wait patiently for 12-ish hours for the array to rebuild. In the meantime I got my file share back up and running and confirmed everything is accounted for. So provided there are no other random failures in the next 12 hours, everything should turn out fine. Thanks all for your help. Now to get Jellyfin installed and running again so I can get back to streaming the same shows over and over…
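
For anyone who finds this later, the disk swap boiled down to something like the following (the array and device names here are placeholders, not my actual setup; check lsblk and /proc/mdstat for your own):

    sudo mdadm --manage /dev/md0 --fail /dev/sdc1     # mark the dead member failed (if mdadm hasn't already)
    sudo mdadm --manage /dev/md0 --remove /dev/sdc1   # remove it from the array
    sudo mdadm --manage /dev/md0 --add /dev/sdd1      # add the spare; the rebuild starts on its own
    cat /proc/mdstat                                  # shows rebuild progress and an ETA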

  • krnl386@lemmy.ca · 1 year ago

    Assuming you were using a Linux software RAID, you should be able to recover it.

    The first step would be to determine what kind of RAID you were using… btrfs, zfs, mdraid/dmraid/lvm… do you know what kind you set up?
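
    A quick way to tell: each member partition carries a signature that blkid can read. Illustrative output only (UUIDs trimmed, and "oldserver:0" is a made-up array name):

        $ sudo blkid /dev/sda1
        /dev/sda1: UUID="…" UUID_SUB="…" LABEL="oldserver:0" TYPE="linux_raid_member" PARTUUID="…"

    TYPE="linux_raid_member" means mdadm; zfs_member or btrfs there would point you at the other two instead.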

    To start the process, reconnect your RAID disks to a working Linux machine, then check:

    1. sudo lsblk will give you a list of all connected disks, their sizes and partitions.
    2. Check the partition tables on the disks, eg: sudo fdisk -l /dev/sda (that's a lowercase L, and /dev/sda is your disk).
    3. Assuming you used a standard Linux software RAID, try sudo mdadm --examine /dev/sda1. If all goes well, that should tell you what state the disk is in, what RAID level you had, etc.
    4. Next, see if mdadm can figure out how to reassemble the array: sudo mdadm --examine --scan. That should hopefully produce output with the name of the RAID array block device (eg /dev/md0), the RAID level, and the members of the array (number of disks); there's an example session right after this list. Let me know what you discover…
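
    Roughly what a healthy examine might look like; everything below is illustrative (output trimmed, UUIDs elided, names made up), not output from your disks:

        $ sudo mdadm --examine /dev/sda1
        /dev/sda1:
                  Magic : a92b4efc
                Version : 1.2
             Array UUID : …
             Raid Level : raid10
           Raid Devices : 4
                  State : clean
            Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

        $ sudo mdadm --examine --scan
        ARRAY /dev/md/0 metadata=1.2 UUID=… name=oldserver:0

    If the metadata looks intact like that, sudo mdadm --assemble --scan will usually put the array back together, after which you can mount /dev/md0 as normal.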

    Note: if you used zfs or btrfs, do not do steps 3 and 4; they are MD RAID specific.
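
    For those two, the rough equivalents are (again illustrative; "poolname" is whatever your pool was called):

        $ sudo zpool import            # with no arguments: scans disks and lists importable pools
        $ sudo zpool import poolname   # actually imports the pool
        $ sudo btrfs filesystem show   # lists btrfs filesystems and their member devices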

    • lemmyvore@feddit.nl · 1 year ago

      To check for MD arrays you can also just cat /proc/mdstat.

      Modern kernels will auto-sense MD arrays. If the array is listed there with a name like md1, it will tell you what partitions on what disks it's using (like sda1 sdb1) and whether the members are healthy ([UU] means both of two members are OK). If the array is currently rebuilding or recovering it will say that too and show a progress meter. There is also a corresponding block device, /dev/md1 (or whatever the name is), which you can mount to access the files.
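
      For example, a four-disk RAID 10 mid-rebuild would look something like this (device names and numbers made up):

          $ cat /proc/mdstat
          Personalities : [raid10]
          md0 : active raid10 sdd1[4] sdc1[2] sdb1[1] sda1[0]
                7813770240 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
                [=====>...............]  recovery = 28.4% (1109755392/3906885120) finish=721.0min speed=64650K/sec

          unused devices: <none>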

      • krnl386@lemmy.ca · 1 year ago

        Good point! I assumed the worst, but it's possible the array is rebuilding or even already rebuilt and just needs to be mounted.