• 20 Posts
  • 129 Comments
Joined 1 year ago
Cake day: June 21st, 2023


  • In the US at least, most equipment (unless you get into high-end datacenter stuff) runs on 120V. We also use 240V power, but a 240V connection is actually two 120V legs 180 degrees out of phase with each other. The main feed coming into your home is 240V, so your breaker panel splits the circuits evenly between the two legs. Running split-phase power to a server rack is as simple as just running two 120V circuits from the panel, one from each leg.
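
    A quick bit of arithmetic shows where the 240V comes from: with the two legs 180 degrees out of phase, the leg-to-leg voltage is just the difference between them (treating the 120V values as RMS phasors):

    $$V_{AB} = V_A - V_B = 120\angle 0^\circ - 120\angle 180^\circ = 120 - (-120) = 240\ \text{V}$$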

    My rack only receives a single 120V circuit, but it’s backed up by a dual-conversion UPS and a generator on a transfer switch. That was enough for me. For redundancy, though, dual phases, each with its own UPS, and dual-PSU servers are hard to beat.




  • corroded@lemmy.world to Technology@lemmy.world · Is Microsoft trying to commit suicide?

    Microsoft knows that the addition of ads to Windows, Recall, data mining, etc. is not suicide. As far as tech news goes, Lemmy really exists in an echo chamber. The vast majority of us here have at least some interest in technology. For the majority of the population, though, this isn’t true. The typical person sees a computer as a tool to be used for other things. They’re not reading articles about the latest release of Windows, new CPU technology, the latest GPU, etc. They’re using their computer, and when it’s time for an upgrade, they buy whatever suits their needs.

    If I were to ask any of my family, or most of my coworkers, about any of the latest “controversies” surrounding Microsoft, they would have no idea what I was talking about. Microsoft obviously thinks that the added profit gained by monetizing their customers will offset the loss of the 1% of users who switch to Linux. They’re probably right, too.

    I like Windows, personally (well, Windows 10 at least). My unofficial rule has always been: if it needs a GUI, it runs Windows; otherwise, it runs Linux as a headless machine. Once Windows 10 is no longer a viable option, my unofficial rule will be “it runs Linux.” Most people will not make this switch.


  • Yes, a lot of my movies are 50GB or so. Not everything has a 4k repack available, though. I’d say the vast majority are around 20GB.

    1080p would just not be acceptable for me. There’s a clear difference between 1080p and 4k on a 4k screen, especially if the screen is large.

    If I’m in a situation where I don’t have connectivity to stream from my server, then I can always start a Handbrake queue the night before and transcode a few videos to a smaller size, or just dump a few onto an external drive. I have never been in a situation where I had to do this, though.
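
    For what it’s worth, if I ever did want to script that overnight queue, it would be something like this rough C++ sketch. The folder names and the preset are just placeholders, and it assumes HandBrakeCLI is installed and on the PATH; it simply shells out to it for each file:

    ```cpp
    // Rough sketch: batch-transcode every .mkv in a folder to a smaller,
    // travel-friendly copy by shelling out to HandBrakeCLI (assumed on PATH).
    #include <cstdlib>
    #include <filesystem>
    #include <string>

    int main() {
        namespace fs = std::filesystem;
        const fs::path in_dir  = "to_transcode";  // originals to shrink (placeholder)
        const fs::path out_dir = "portable";      // smaller copies go here (placeholder)
        if (!fs::exists(in_dir)) return 1;
        fs::create_directories(out_dir);

        for (const auto& entry : fs::directory_iterator(in_dir)) {
            if (entry.path().extension() != ".mkv") continue;
            const fs::path out = out_dir / entry.path().filename();
            // "Fast 1080p30" is one of HandBrake's built-in presets.
            const std::string cmd = "HandBrakeCLI -i \"" + entry.path().string() +
                                    "\" -o \"" + out.string() +
                                    "\" --preset \"Fast 1080p30\"";
            std::system(cmd.c_str());  // runs the jobs one after another, like a queue
        }
    }
    ```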


  • For any sort of media, video included, I always go for the highest quality I can get. I do have a number of 4k displays, so it makes sense to a certain extent, but a lot of it has to do with future-proofing.

    Here’s a good example: When personal video cameras were first starting to support 1080, I purchased a 1080i video camera. At the time, the footage looked great on my 1920x1080 (maybe 1024x768, not sure) monitor. Fast forward more than 15 years, and the video I recorded back then looks like absolute garbage on my 4k TV.

    I remember watching a TV show when 1080p video first became available, and I was blown away by the quality of what was probably a less-than-1GB file. Now, watching the same file even on my phone shows a noticeable drop in quality. I’m not surprised you saw little difference between a 670MB and a 570MB file, especially if it was animation, which contains large chunks of solid color and is thus more easily compressed. The difference between two resolutions, though, can be staggering. At this point, I don’t think you can easily find a 1080p TV; everything is 4k. 8k is still not widespread, but it will be one day. If you ever in your life think you’ll buy a new TV, computer monitor, or mobile device, eventually you’ll want higher-quality video.

    My recommendation would be to fill your media library with the highest-quality video you can possibly find. If you’re going to re-encode the media to a lower resolution or bitrate, keep a backup of the original. You may find, though, that if you’re re-encoding enough video, it makes more sense to save the time and storage space, and spend a bit of money on a dedicated video card for on-the-fly transcoding.

    My solution was to install an RTX A1000 in my server and set it up with my Jellyfin instance. If I’m watching HDR content on a non-HDR screen, it will transcode and tone-map the video. If I’m taking a break at work and I want to watch a video from home, it will transcode it to a lower bitrate that I can stream over my (slow) home internet. Years from now, when I’m trying to stream 8k video over a 10Gb fiber link, I’ll still be able to use most of the media I saved back in 2024 rather than try to find a copy that meets modern standards, if a copy even exists.
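
    The remote-streaming math is what makes the on-the-fly transcoding necessary. Taking a roughly two-hour movie at around 20GB as a ballpark, the average bitrate works out to about

    $$\frac{20\ \text{GB} \times 8\ \text{bits/byte}}{2\ \text{h} \times 3600\ \text{s/h}} \approx 22\ \text{Mbit/s},$$

    which is more than a lot of residential upload links can sustain, so the GPU re-encodes it down to something the connection can actually carry.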

    Edit: I wanted to point out that I realize not everyone has the time or financial resources to set up a huge NAS with enterprise-grade drives. An old motherboard and a stack of cheap consumer-grade drives can still give you a fair amount of backup storage and will be fairly robust as long as the drive array is set up with a sufficient level of redundancy.




  • When I use OpenSpeedTest to test to another VM, it doesn’t read or write from the HDD, and it doesn’t leave the Proxmox NIC. It’s all direct from one VM to another. The only limitations are CPU and perhaps RAM. Network cables wouldn’t have any effect on this.

    I’m using VirtIO (paravirtualized) for the NICs on all my VMs. Are there other paravirtualization options I need to be looking into?
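
    For reference, the NIC line in my VM configs (in /etc/pve/qemu-server/<vmid>.conf) currently looks something like the following; the MAC address and bridge name here are just placeholders:

    ```
    net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
    ```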


  • I still enjoy the second-wave stuff from time to time, but you’re absolutely spot-on about what’s been coming out in recent years. I’m really into groups that have kept the original BM style but embraced modern production. A few that come to mind are Faidra, Spectral Wound, Asarhaddon, and Funeral Winds: fantastic bands that play “true” BM but have good recording quality.

    Like you mentioned, the big change is just how many “crossover” bands there are, and I’m all for it. You didn’t ask for suggestions, but I’m going to offer some of my favorites anyway:

    • Harakiri for the Sky - One of the best post-black bands.
    • Anomalie (shares members with Harakiri for the Sky) - BM plus what I can only call “tribal” elements.
    • Psyclon 9 (at least their older albums) - BM plus industrial/aggrotech.
    • Dawn of Ashes - See above.
    • Anaal Nathrakh - BM + grind + industrial + ?
    • Darkthrone (yes, THAT Darkthrone) - Blackened hard rock? I don’t know what to call their new stuff, but it’s not bad.
    • Gaerea - Radio-friendly BM
    • Kanonenfieber - Blackened melodic death metal? Maybe?
    • Afsky - Folk-inspired BM. Seems like this is a really popular combination.
    • None - DSBM, but with the exception of their filler tracks, more on the BM, less on the DS.
    • Ernte - Fairly traditional BM, but with female vocals.

  • It was a good suggestion. That’s one of the first things I checked, and I was honestly hoping it would be as easy as changing the NIC type. I know that the Intel E1000 and Realtek RTL8139 options would limit me to 1Gb, but I haven’t tried the VMware vmxnet3 option. I don’t imagine that would be an improvement over the VirtIO NIC, though.





  • I will resort to ChatGPT for coding help every so often. I’m a fairly experienced programmer, so my questions tend to be somewhat complex. I’ve found that it’s extremely useful for problems that fall into the category of “I could solve this myself in 2 hours, or I could ask AI to solve it for me in seconds.” Usually, I’ll get a working solution, but almost every single time, it’s not a good solution. It does provide a great starting point for writing my own code, though.

    Some of the issues I’ve found (speaking as a C++ developer) are: Variables not declared “const,” extremely inefficient use of data structures, ignoring modern language features, ignoring parallelism, using an improper data type, etc.
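
    As a contrived example of what I mean (not actual ChatGPT output, just the pattern I keep running into), the first function below is the kind of thing I tend to get back, and the second is what I actually want:

    ```cpp
    #include <string>
    #include <vector>

    // Typical "working but not good" version: copies the whole input,
    // no const anywhere, index-based loop with a signed/unsigned mismatch.
    std::vector<std::string> filterLong(std::vector<std::string> words) {
        std::vector<std::string> result;
        for (int i = 0; i < words.size(); i++) {
            if (words[i].size() > 8) {
                result.push_back(words[i]);
            }
        }
        return result;
    }

    // What I'd actually write: const reference in, reserve up front,
    // range-based loop, no needless copies.
    std::vector<std::string> filterLongBetter(const std::vector<std::string>& words) {
        std::vector<std::string> result;
        result.reserve(words.size());
        for (const auto& w : words) {
            if (w.size() > 8) {
                result.push_back(w);
            }
        }
        return result;
    }
    ```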

    ChatGPT is great for generating ideas, but it’s going to be a while before it can actually replace a human developer. Producing code that works isn’t hard; producing code that’s good requires experience.


  • I’m old enough to remember the 9/11 attacks. It was never in question that Saudi Arabia was complicit in what happened; the majority of the terrorists were Saudi. It took a bit longer for the Saudi government’s involvement to come to light, but we knew within a short time that, at the very least, they had provided financial support to the terrorists.

    The argument for starting the “war on terror” was that Al-Qaeda planned the attack, so we should attack the countries that harbor them. At the time, the majority of the country supported this; I remember George W. Bush’s approval ratings being in the 90s for a short time. Even then, most of us knew that Saudi Arabia was at least complicit in what happened. The lust for revenge, as justified as it was, made people forget that.

    Over the last 23 years, I feel like a lot of Americans have forgotten the role that Saudi Arabia played in the events of 9/11; after all, they’re our “ally,” right? I have always been on the fence about whether invading Iraq and Afghanistan was a good idea. Back in 2001, though, I felt like invading Saudi Arabia was a great idea. 23 years later, I don’t feel any different. Should the United States have attacked Iraq and Afghanistan? I’d say “probably.” Should we have attacked Saudi Arabia? Absolutely. Yet it never happened.