Okay, I’ve seen this posted a lot and apparently it’s pretty common, but why do people virtualize their NAS in, for example, a Proxmox server/cluster? If that host goes down, doesn’t it get much harder to get your data back than if you run bare metal? Are people only doing it to save on separate devices, or are my concerns unreasonable?

    • digitallyfree@kbin.social · 1 year ago

      I do this :). A virtualized router is amazing since you can move it between cluster nodes with zero downtime, and you can restore from backup very quickly if the router’s node fails.

      There are zero issues if you know what you are doing and have proper mitigations in place - basically that means having your management interfaces on a flat network and being able to reach OPNsense/Proxmox/BMC from the same laptop with a static IP. I have a specific switch port connected to that VLAN that I plug my laptop into. If you mess up your config, you just connect directly to the Proxmox web UI and access the OPNsense VM to fix it.
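      In practice the access path is dead simple - something like this (the subnet, addresses, and interface name here are just examples, not my actual network):

      ```
      # plug the laptop into the dedicated management switch port, then
      # give it a static IP on the flat management network (example subnet)
      sudo ip addr add 10.0.99.50/24 dev eth0

      # the Proxmox web UI stays reachable even if the router VM is broken,
      # e.g. https://10.0.99.10:8006 - from there you open the OPNsense VM's
      # console and fix whatever you broke
      ```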

      • icy_mal@kbin.social · 1 year ago

        Another vote for a virtualized router! I keep a set of core VMs on that host where uptime is the highest priority. I’ve upgraded RAM, downgraded the CPU, and eventually switched to an entirely new host with zero downtime over the past few months. I’d rather not have to wait until everyone else on the network is sleeping before doing any tinkering on the hardware. It’s pretty neat to stream some video while live-migrating the router to another physical host with zero interruption.
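        The migration itself is a one-liner on the source node (the VM ID and node name here are just examples, yours will differ):

        ```
        # live-migrate the running router VM (ID 100) to node pve2, no downtime
        qm migrate 100 pve2 --online
        ```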

    • fuser@quex.cc · 1 year ago

      okay you actually made me laugh. that’s not easy - take an upvote.

    • rufus@lemmy.sdf.org · 1 year ago

      I actually did this for a few months until I saved up enough for a decent dedicated firewall appliance. Got a cheap dual-port gigabit PCIe NIC off Amazon and passed it directly through to an OPNsense VM.

      Honestly, it wasn’t that bad. The only downside was that the Proxmox server was just an old repurposed desktop PC that was super underpowered, so the VM only had like 2GB of RAM, and that ended up being a bit of a bottleneck under load.
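      If anyone wants to try the same thing, the passthrough itself is just a couple of commands once IOMMU is enabled in the BIOS and kernel (the VM ID and PCI address below are just examples):

      ```
      # find the NIC's PCI address
      lspci | grep -i ethernet

      # hand the whole card to the OPNsense VM (ID 101 in this example)
      qm set 101 --hostpci0 0000:03:00.0
      ```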

      • vividspecter@lemmy.world · 1 year ago

        I’m doing it with OpenWrt x86, since I need SQM + WireGuard (and at least the former still isn’t supported on *sense, last time I checked). Works fine in all honesty, and I can reboot the VM much faster than real hardware.
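        For reference, SQM on OpenWrt is just a small config file (the WAN device and rates below are placeholders - set them to roughly 90-95% of your real line speed):

        ```
        # /etc/config/sqm (needs the sqm-scripts package)
        config queue 'wan'
                option interface 'eth1'            # example WAN device
                option download '95000'            # kbit/s
                option upload '19000'              # kbit/s
                option qdisc 'cake'
                option script 'piece_of_cake.qos'
                option enabled '1'
        ```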

    • Greyscale@lemmy.sdf.org · 1 year ago

      That was me for about 2 weeks until the ESXi box took a shit. Never again. I basically went “fuck this shit” and bought a Ubiquiti UDM.

    • sneakyninjapants@sh.itjust.works · 1 year ago

      I’m all for this, actually. Though I’d be doing it on a dedicated machine with just pfSense/OPNsense on it. Any other way would be kinda dumb, right?

    • arkcom@kbin.social · 1 year ago

      I did it for years. The only problem is that if you mess up your OPNsense config, you’re gonna need to get the keyboard and monitor out.