Is Creative Cloud a requirement for using Adobe products these days? Surely someone can just save data locally instead. Media files (especially raw video) can be enormous.
This seems like a weird and convoluted way to charge for content. Why not just make it a paid DLC?
Microsoft knows that the addition of ads to Windows, Recall, data mining, etc. is not suicide. As far as tech news goes, Lemmy really exists in an echo chamber. The vast majority of us at least have some interest in technology. For the majority of the population, though, this isn’t true. The typical person sees a computer as a tool to be used for other things. They’re not reading articles about the latest release of Windows, new CPU technology, the latest GPU, etc. They’re using their computer, and when it’s time for an upgrade, they buy whatever suits their needs.
If I were to ask any of my family, or most of my coworkers, about any of the latest “controversies” surrounding Microsoft, they would have no idea what I was talking about. Microsoft obviously thinks that the added profits gained by monetizing their customers will offset the loss of the 1% of their users who switch to Linux. They’re probably right, too.
I like Windows, personally (well, Windows 10 at least). My unofficial rule has always been if it needs a GUI, then it runs Windows, otherwise, it runs Linux as a headless machine. Once Windows 10 is no longer a viable option, my unofficial rule will be “it runs Linux.” Most people will not make this switch.
Yes, a lot of my movies are 50GB or so. Not everything has a 4k repack available, though. I’d say the vast majority are around 20GB.
1080p would just not be acceptable for me. There’s a clear difference between 1080p and 4k on a 4k screen, especially if the screen is large.
If I’m in a situation where I don’t have connectivity to stream from my server, then I can always just start a Handbrake queue the night before and transcode a few videos to a smaller size, or just dump a few onto an external drive. I have never been in a situation where I had to do this, though.
Any sort of media, including videos, I always go for the highest possible quality I can. I do have a number of 4k displays, so it makes sense to a certain extent, but a lot of it has to do with future-proofing.
Here’s a good example: When personal video cameras were first starting to support 1080, I purchased a 1080i video camera. At the time, it looked great on my 1920x1080 (maybe 1024x768, not sure) monitor. Fast forward over 15 years, and the video I recorded back then looks like absolute garbage on my 4k TV.
I remember watching a TV show when 1080p video first became available, and I was blown away at the quality of what was probably a less-than-1GB file. Now watching the same file even on my phone has a noticeable drop in quality. I’m not surprised you saw little difference between a 670MB and a 570MB file, especially if it was animation, which contains large chunks of solid colors and is thus more easily compressed. The difference between two resolutions, though, can be staggering. At this point, I don’t think you can easily find a 1080p TV; everything is 4k. 8k is still not widespread, but it will be one day. If you ever in your life think you’ll buy a new TV, computer monitor, or mobile device, eventually you’ll want higher quality video.
My recommendation would be to fill your media library with the highest-quality video you can possibly find. If you’re going to re-encode the media to a lower resolution or bitrate, keep a backup of the original. You may find, though, that if you’re re-encoding enough video, it makes more sense to save the time and storage space, and spend a bit of money on a dedicated video card for on-the-fly transcoding.
My solution was to install an RTX A1000 in my server and set it up with my Jellyfin instance. If I’m watching HDR content on a non-HDR screen, it will transcode and tone-map the video. If I’m taking a break at work and I want to watch a video from home, it will transcode it to a lower bitrate that I can stream over my (slow) home internet. Years from now, when I’m trying to stream 8k video over a 10Gb fiber link, I’ll still be able to use most of the media I saved back in 2024 rather than try to find a copy that meets modern standards, if a copy even exists.
Edit: I wanted to point out that I realize not everyone has the time or financial resources to set up a huge NAS with enterprise-grade drives. An old motherboard and a stack of cheap consumer-grade drives can still give you a fair amount of backup storage and will be fairly robust as long as the drive array is set up with a sufficient level of redundancy.
I think one thing that’s very important for the worldwide audience to consider is what the involved countries count as “peace.” Peace for Ukraine is “give us our land back and stop attacking us.” Peace for Russia is “We’ll stop attacking you if you let us have a significant portion of your country as our own.” Obviously, the Russian “solution” is not an acceptable one. Sadly, I feel like China and other Russian-aligned countries probably support the distorted Russian version of “peace.”
What do you mean specifically? If I’m already testing between two VMs, doesn’t that already isolate any issues to Proxmox? Is there another performance metric you think I should be looking at?
When I use OpenSpeedTest to test to another VM, it doesn’t read or write from the HDD, and it doesn’t leave the Proxmox NIC. It’s all direct from one VM to another. The only limitations are CPU and perhaps RAM. Network cables wouldn’t have any effect on this.
I’m using VirtIO (paravirtualized) for the NICs on all my VMs. Are there other paravirtualization options I need to be looking into?
I still enjoy the second-wave stuff from time to time, but you’re absolutely spot-on with what’s been coming out in recent years. I’m really into groups that have kept the original BM music style but embraced modern production. A few that come to mind are Faidra, Spectral Wound, Asarhaddon, and Funeral Winds; fantastic bands that play “true” BM but have good recording quality.
Like you mentioned, the big change is just how many “crossover” bands there are, and I’m all for it. You didn’t ask for suggestions, but I’m going to offer some of my favorites anyway:
It was a good suggestion. That’s one of the first things I checked, and I was honestly hoping it would be as easy as changing the NIC type. I know that the Intel E1000 and Realtek RTL8139 options would limit me to 1Gb, but I haven’t tried the VMware vmxnet3 option. I don’t imagine that would be an improvement over the VirtIO NIC, though.
Every VM is using VirtIO as the network card; they’re all on the same bridge to the physical 10Gb NIC. As far as I understand, any traffic between VMs should not be leaving the Proxmox server.
This might be an unpopular list, but I’m ranking games in terms of overall enjoyment.
I will resort to ChatGPT for coding help every so often. I’m a fairly experienced programmer, so my questions tend to be somewhat complex. I’ve found that it’s extremely useful for those problems that fall into the category of “I could solve this myself in 2 hours, or I could ask AI to solve it for me in seconds.” Usually, I’ll get a working solution, but almost every single time, it’s not a good solution. It does, however, provide a great starting point for writing my own code.
Some of the issues I’ve found (speaking as a C++ developer) are: Variables not declared “const,” extremely inefficient use of data structures, ignoring modern language features, ignoring parallelism, using an improper data type, etc.
ChatGPT is great for generating ideas, but it’s going to be a while before it can actually replace a human developer. Producing code that works isn’t hard; producing code that’s good requires experience.
I’m old enough to remember the 9/11 attacks. It was never in question that Saudi Arabia was complicit in what happened. The majority of the terrorists were Saudi. It took a bit longer for the fact that the Saudi government was complicit to emerge, but we knew within a short time that at the very least, they provided financial support to the terrorists.
The argument for starting the “war on terror” was that Al-Qaeda planned the attack, so we should attack the countries that harbor them. At the time, the majority of the country supported this; I remember George Bush Jr.'s approval ratings being in the 90s for a short time. Even then, most of us knew that Saudi Arabia was at least complicit in what happened. The lust for revenge, as much as it was justified, made people forget that.
Over the last 23 years, I feel like a lot of Americans have forgotten the role that Saudi Arabia played in the events of 9/11; after all, they’re our “ally,” right? I have always been on the fence regarding whether or not invading Iraq and Afghanistan was a good idea. Back in 2001, though, I felt like invading Saudi Arabia was a great idea. 23 years later, I don’t feel any different. Should the United States have attacked Iraq and Afghanistan, I’d say “probably”; should we have attacked Saudi Arabia? Absolutely. Yet it never happened.
This is a good thing, but it’s hardly unique. Any advanced manufacturing facility will have remote access to its equipment in case an operator needs to reconfigure it or transfer data, or, in this case, in case they’re invaded by Lesser Taiwan.
I’d like to hope that by the time Win10 is no longer supported, we have Win12 that doesn’t suck. The way things are going, though, I doubt it. I’m expecting that Win10 will be the last version of Windows I use.
I still prefer Windows over Linux for gaming and software development, but everyone has their limit. I am strongly opposed to advertisements, and when I can no longer block ads from my operating system, it’s dead to me.
I use a mixture of Linux and Windows 10 LTSC on my PCs/servers/VMs. I will be the first to admit that Windows does sometimes make sense to use. My desktop PC and my dev environment are both Windows 10.
That being said, what is the advantage in using Windows 11 over 10? As far as I can tell, it’s worse in every way. Built-in ads, a crappier UI, forced obsolescence with TPM requirements, and “feature” bloat that nobody asked for.
10 was a clear improvement over 8, but 11 just seems all-around worse.
That makes this a very misleading headline, then. “VPN Usage over a Public Network may be Vulnerable to Attack” would be a lot more accurate IMO.
It doesn’t sound to me like this really negates the purpose of a VPN, more accurately it provides a way for someone on your local network to snoop on VPN traffic, if I understand correctly.
From how the article describes the attack, someone on your local network would have to set up a malicious DHCP server/gateway. The average home user who is using a VPN to mask their public IP probably doesn’t need to worry about this.
Or am I misunderstanding?
In the US at least, most equipment (unless you get into high-end datacenter stuff) runs on 120V. We also use 240V power, but a 240V connection is actually two 120V phases 180 degrees out of phase. The main feed coming into your home is 240V, so your breaker panel splits the circuits evenly between the two phases. Running dual-phase power to a server rack is as simple as running two 120V circuits from the panel.
My rack only receives a single 120V circuit, but it’s backed up by a dual-conversion UPS and a generator on a transfer switch. That was enough for me. For redundancy, though, dual phases, each with its own UPS, and dual-PSU servers are hard to beat.