• 7 Posts
  • 135 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • Make sure you have the package alsa-utils installed and try running alsamixer. That’ll show all the audio devices your system detects. Maybe you’re lucky and it’s just that some volume control is muted; if not, it’ll at least give you some info to work with. The majority of audio devices don’t need any additional firmware and almost always work out of the box just fine. What’s the hardware you’re running? Maybe it’s something exotic that isn’t supported by default (which I doubt).

    Additionally, what are you trying to play audio from? For example, MP3s need non-free codecs installed, and without them your experience is “a bit” limited on the audio side of things.
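    A quick first check could look like the snippet below (a sketch assuming a Debian-style system with apt; the tools are the standard ones from alsa-utils, and the script only reports, so it exits cleanly either way):

```shell
# Install the ALSA userspace tools first (Debian/Ubuntu, needs root):
#   apt install alsa-utils
# Then see what the system actually detects:
if command -v aplay >/dev/null 2>&1; then
    aplay -l || true          # list the playback devices ALSA knows about
    amixer scontrols || true  # list the volume controls alsamixer would show
else
    echo "alsa-utils not installed"
fi
```

    If `aplay -l` shows your card but there’s no sound, a muted control in alsamixer is the usual suspect.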


  • They both use the upstream version number (as in the number the software developer gave to the release). They might additionally have some kind of revision number related to packaging, or some patch number, but as a rule of thumb, yes, the bigger number is the more recent. Whether you should use that as the only criterion when deciding which to install is, however, another discussion. Sometimes the dpkg/apt version is preferred over the snap regardless of version differences, for example to save a bit of disk space, but that depends on a ton of different things.
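    If you want to compare two version strings by hand, coreutils can do the ordering for you (the version numbers here are made up for illustration):

```shell
# sort -V understands "version order", including packaging suffixes,
# so the last line of the sorted output is the newer version
printf '1.2.10-1ubuntu2\n1.2.9-3\n' | sort -V | tail -n 1
# → 1.2.10-1ubuntu2
```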




  • Think of a large office space or industrial application with several hundred (or thousand) hosts connected to the network. Some of them need to be isolated from the internet and/or the rest of the network, some need only access to the internet, some need the internet and local services, and so on.

    With that kind of setup you could just run separate cables and unmanaged switches for every different type of network you have, and have the router manage what each of those can talk to. However, that would be pretty difficult to change or expand, while also being pretty expensive, as you need a ton of hardware and cabling to do it. Instead you use VLANs, which kinda-sorta split your single hardware switch into multiple virtual ones, and you can still manage their access from a single router.

    If you replaced all the switches with routers, they’re quite a bit more expensive and there aren’t too many routers with 24 or 48 ports around. And additionally, router configuration is more complex than just telling the switch that ‘ports 1-10 are on VLAN ID 5 and ports 15-20 are on ID 8’. With dozens of switches that adds up pretty fast. And while you could run most routers as a switch, you’d just be wasting your money.

    VLANs can be pretty useful in a home environment too, but they’re mostly used in bigger environments.
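    On the Linux side, joining a tagged VLAN is just a couple of commands (a configuration sketch only; the interface name, VLAN ID and addressing are examples, and the commands need root):

```shell
# Create a virtual interface carrying VLAN ID 5 on top of eth0
ip link add link eth0 name eth0.5 type vlan id 5
ip link set eth0.5 up
# Give it an address in that VLAN's subnet (example addressing)
ip addr add 192.168.5.10/24 dev eth0.5
```

    The switch-side equivalent is exactly the ‘ports 1-10 are on VLAN ID 5’ style of configuration mentioned above.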


  • I’m tempted to say the systemd ecosystem. Sure, it has its advantages and it’s the standard way of doing things now, but I still don’t like it. journalctl is a sad and poor replacement for standard log files, it bundles a ton of stuff that used to be its own separate little thing (resolved, journald, crontab…), making it a pretty monolithic thing, and at least for me it fixed a problem that wasn’t there.

    Snapcraft (and Flatpak to some extent) also attempts to fix a non-existent problem, and at least for me they have caused more issues than they’ve brought benefits.


  • It’s been a while (a few years, actually) since I even tried, but Bluetooth headsets just won’t play nicely. You either get audio quality from the bottom of the barrel or somewhat decent quality without the microphone. And the right profile/protocol/whatever isn’t selected automatically, the headset randomly disconnects, and nothing really works like it does with my cellphone or Windows machines.

    YMMV, but that’s been my experience with my headsets. I’ve understood that there’s some proprietary stuff going on with the audio codecs, but it’s just so frustrating.


  • The command in question recursively changes file ownership to the account “user” and group “user” for every file and folder in the system. On Linux, many processes run as root or as various other accounts (like apache or www-data for a web server, mysql for a MySQL database, and so on), and after that command none of those services can access the files they need to function. And as the whole system is broken on a very fundamental level, changing everything back would be a huge pain in the rear.

    On this Ubuntu system I’m using right now I have 53 separate user accounts for various things. Some are obsolete and not in use, but the majority are used for something, and 15 of them are in active use by different services. Different systems have somewhat different numbers, but you’d basically need to track down all the millions of files on your computer and fix each one’s ownership by hand. It can be done, and if you have a similar system to copy ownership from you could write a script to fix most things, but in the vast majority of cases it’s easier to just wipe the drive and reinstall.
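    You can see the account list on your own machine straight from /etc/passwd (a small sketch; the UID cutoff follows the usual Debian convention where system/service accounts sit below 1000):

```shell
# Total number of local accounts
wc -l < /etc/passwd
# Service/system accounts: UID between 1 and 999 (root and human users excluded)
awk -F: '$3 > 0 && $3 < 1000 { n++ } END { print n+0 }' /etc/passwd
```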


  • I’ve run into that with one shitty vendor (I won’t/can’t give any details beyond this) lately. They ‘support’ deb-based distributions, but especially their postinst scripts don’t do any kind of testing/verification of the environment they’re running in, and they seem to find new and exciting ways to break every now and then. I’m experienced (or old) enough with Linux/Debian that I can work around the holes they’ve left behind, but in our company there aren’t too many others who have sufficient knowledge of how deb packages work.

    And they’re either dumb or playing dumb when they claim that their packages work as advertised, even after I sent them their own postinst scripts from the package, including explanations of why this and that breaks on a system without a graphical environment installed (among other things).

    But that’s absolutely the vendor’s fault, not Debian’s/Linux’s. It happens, though.
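    The kind of defensive check missing from those scripts is only a few lines. A hypothetical postinst sketch (update-desktop-database is just an example of a GUI-related helper that won’t exist on a headless box):

```shell
#!/bin/sh
set -e
# Only touch desktop files when the tooling actually exists,
# so the package still installs cleanly on headless systems.
if command -v update-desktop-database >/dev/null 2>&1; then
    update-desktop-database /usr/share/applications || true
fi
exit 0
```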


  • I don’t know about Home Assistant, but there’s plenty of open source software to interact with OBD2, at least on Linux. With some tinkering it should be possible to have a Bluetooth-enabled OBD2 adapter from which you can dump even raw data and feed it to some other system of your choice, Home Assistant included.

    If you want live data from the drive itself you of course need some kind of recording device with you (a Raspberry Pi comes to mind), but if you’re happy to just log whatever is available when parking the car, you could set up a computer with Bluetooth near the parking spot in your yard and pull data from that. It may require keeping the car powered on for a while after arrival to keep the bus active, but some cars give at least some data via OBD even without the key in the ignition.
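    The raw data is simple to decode once you have the bytes, since each PID has a published formula. For example, engine RPM (mode 01, PID 0C) comes back as two data bytes A and B and decodes as (256·A + B) / 4; the byte values below are made up:

```shell
# Example response bytes (decimal) for mode 01 PID 0C
A=26   # high byte
B=240  # low byte
echo $(( (256 * A + B) / 4 ))   # → 1724 (rpm)
```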


  • Wait a second. They used AMPRNet to manage these things? Around here this kind of thing is either hardwired to the internet or uses a 3/4/5G uplink, and while it’s of course technically possible to breach the system either way, it’s a bit more difficult to find the right IPs and everything.

    Once upon a time I had a task to plan a scalable system to display stuff on billboards and even replace printed ads in stores with monitors. The whole thing fell through as we couldn’t secure funding for it, but I made a POC setup where each individual display had a Linux host running and managing it with (if memory serves) a plain X.org session with mplayer (or something similar, it was about 20 years ago) running full screen, plus a torrent network to deliver new content to the displays and a web-based frontend to manage what’s shown at which site.

    Back then it would’ve been stupidly expensive to have the hardware and bandwidth at a single point to serve potentially a few thousand clients, so distributing the load was the sensible solution. I think that even today it would be a neat solution for the task, but no one has put up the money to actually make it happen.






  • Also a big(ish) issue for the industry. Local news had info that up to 80% of natural berries are picked by foreigners. For the majority of the pickers these gigs are a pretty big source of income (compared to what they make back home), but then there are the few rotten apples who end up renting accommodation, cars and everything to the pickers, so that pretty much all of their earnings go back to the person providing the work. Or they don’t pay up at all, and everything else in between. I’m not sure if it qualifies as human trafficking when pickers end up going back home empty-handed, but that’s been a (relatively small, but real) issue here.

    Human trafficking is of course a big deal, but for the ones who end up in our forests picking berries, the slavery-like conditions and long work days with next to nothing paid in return are the more common problem. And even though it’s the more common of the two, it’s still relatively rare.


  • Most, but not all, do. So it might be as simple as setting a static address, or it may overlap with the DHCP pool at some point in the future.

    You could ask your ISP (or try it out yourself) whether you can use some addresses outside of the DHCP pool. My ISP router had a /24 subnet with .0.1 as the gateway, but the DHCP pool started from .0.101, so there would’ve been plenty of addresses to use. Mine had an ‘end user’ account too, where I could’ve changed the LAN IPs, SSID and other basic stuff, but I replaced the whole thing with my own.
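    Setting an address outside the pool by hand is a one-liner per interface (a configuration sketch matching the /24 example above; the interface name and addresses are assumptions, and the commands need root):

```shell
# Pick an address below the DHCP pool (pool starts at .101 in the example)
ip addr add 192.168.0.50/24 dev eth0
ip route add default via 192.168.0.1
```

    Note that this kind of manual assignment doesn’t survive a reboot; for a permanent setup you’d put it in your distro’s network configuration instead.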


  • With plain Linux that’s a bit complicated to actually dual-boot. When booting Windows, GRUB just throws the ball to the Windows bootloader, which manages things from there on; but with two Linux systems you’d need two separate GRUB installations on different partitions, so that changes made in Arch don’t mess up stuff in PopOS (and the other way around). It’s very much doable, but I suppose (without any experience of a setup like that) that if you just go with the default options it’ll break something sooner or later, and you need to pay attention to the GRUB configs on both sides at all times, so it requires some knowledge.

    Basically you’d need GRUB installed on (as an example) /dev/sda for the system to boot from the BIOS, and another GRUB instance on /dev/sda5 (or whatever you have) for the second distro. They’d both have independent /boot directories, GRUB configs and all that jazz. It’s doable, but as both systems can access either one of the configurations, you really need to pay attention to what’s happening and where.

    Mixing a home directory between different distros can create issues, as they ship slightly different versions of software, and their underlying philosophies, especially when mixing different package managers, are a bit different and might not be compatible with each other. Personally I would avoid that, but your mileage may vary wildly on how it actually plays out.

    For the partitioning, you can safely delete all the partitions, but you’ll of course lose the data on the drive while doing it.

    If I needed such a system I might build a virtual machine to run all the dev stuff and just connect to it from a “real” desktop environment; essentially mimic two separate systems where you’ll have a “server” for the dev things and a “desktop” to connect to it. Or if you want a clear separation between the two, it’s possible to run a different window manager for each of the tasks and just log out/log in to switch between them, and with some scripting/tweaks you can even start/stop services as required when you switch between “modes”. Depending on your needs it might be enough to just run the development environment in VirtualBox and start/stop it as needed, adjusting the actual desktop experience accordingly.
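    For reference, the two-GRUB layout described earlier would be set up roughly like this (a sketch only, with example device names; each command is run from its own distro with that distro’s /boot in place, and installing to a partition boot sector needs --force on BIOS systems):

```shell
# From the primary distro: GRUB into the drive's MBR, using its own /boot
grub-install --boot-directory=/boot /dev/sda
# From the second distro: GRUB into that distro's partition
grub-install --boot-directory=/boot --force /dev/sda5
```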