

no fact checking and a ton of bots being added? Sounds like a fantastic place to spend time. /s
I have some Zooz relays and can also recommend them. There are two options: you can either get the Zooz switches, which replace your current switches ( https://www.thesmartesthouse.com/collections/light-switches/products/zooz-800-series-z-wave-long-range-wall-remote-zen37-800lr-battery-powered?variant=40387548938303 ), or the relays ( https://www.thesmartesthouse.com/collections/z-wave-relays/products/zooz-700-series-z-wave-plus-dry-contact-relay-zen51 ), which you install in the box behind your existing switches.
Our house has some fancy switches downstairs, so I went with the relays for the few switches I wanted to automate. One thing to note: it’s not always easy to fit the relays in the box; if there’s a lot of spare wire in there, it can get sort of cramped.
you got to hand it to him
Posting pictures of riced-out tiling window manager desktops is the hot new game they’re looking for.
Ollama, plus Open WebUI for a nice web interface.
DNFTA
If you go, definitely stay at Four Seasons Total Landscaping next door, best accommodations around and their convention spaces are great for any press conferences you might need to hastily put together.
I used to think they were bots. I still do, but I used to, too.
First, a caveat/warning: you’ll need a beefy GPU to run larger models, though there are some smaller models that perform pretty well.
Adding a medium amount of extra information for you or anyone else who might want to get into running models locally.
If you look at https://ollama.com/library?sort=featured you can see the available models.
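To make “running locally” a bit more concrete, here’s a minimal sketch of talking to a local model through Ollama’s REST API. It assumes Ollama is serving on its default port (11434) and that you’ve already pulled the model named in the script; the model tag here is just an example, swap in whatever you actually have.

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes the default port 11434 and that the model below is already pulled.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.1:8b",  # example tag; use whatever model you pulled
    "prompt": "Explain quantization in one sentence.",
    "stream": False,         # one complete JSON response instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["response"])
```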
Model size is measured by parameter count. Generally, higher-parameter models are better (more “smart”, more accurate), but it’s very challenging/slow to run anything over 25b parameters on consumer GPUs. I tend to find 8-13b parameter models are the sweet spot; the 1-4b parameter models are meant more for really low-power devices, and they’ll give you OK results for simple requests and summarizing, but they’re not going to wow you.
If you look at the ‘tags’ for the models listed below, you’ll see things like 8b-instruct-q8_0 or 8b-instruct-q4_0. The q part refers to quantization, i.e. shrinking/compressing a model, and the number after it is roughly the bits stored per weight, so smaller numbers mean more aggressive compression. Note the size of each tag and how it drops as the quantization gets more aggressive. You can roughly think of that size number as “how much video RAM do I need to run this model”. For me, I aim for q8 models, or fp16 if they fit on my GPU. I wouldn’t use anything below q4 quantization; there seems to be a lot of quality loss past that point. Models can run partially or even fully on a CPU, but that’s much slower. Ollama doesn’t yet support the new NPUs showing up in laptops/processors, but work is happening there.
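To put a rough number on the “tag size ≈ VRAM” rule of thumb, here’s a back-of-the-envelope sketch (my own arithmetic, not anything official): weight memory is roughly parameter count × bits per weight ÷ 8 bytes, and actual usage runs higher once you add the context/KV cache and runtime overhead.

```python
# Rough sketch of the "tag size ~ VRAM needed" rule of thumb.
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB: params * (bits per weight) / 8 bytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# q4_0 effectively stores ~4.5 bits/weight once you count the block scales.
for quant, bits in [("fp16", 16), ("q8_0", 8), ("q4_0", 4.5)]:
    print(f"8b model at {quant}: ~{approx_weight_gb(8, bits):.1f} GB of weights")
```

For an 8b model that works out to roughly 16 GB at fp16, 8 GB at q8_0, and 4.5 GB at q4_0, which is why 8-13b q8 models sit comfortably on a 16-24 GB card.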
It’s a good thing that real open source models are getting good enough to compete with or exceed OpenAI.
I like the game, but agree with the over-tutorialed complaints. There are two difficulty modes; I wish only story mode got all the handholding. I think there are enough obvious indicators to get you through all the game mechanics.
surely he’ll be less of a twat then, right?
Really love Arch and the AUR. I’ve been tempted to get Nix set up for the rare cases where there’s no AUR package or the AUR package is unmaintained. I figure if there’s no package in the AUR or nixpkgs, it’s probably not worth running.
btop reports some GPU, network, and disk information that I don’t think shows up in htop, so it feels a bit more comprehensive, maybe? Both are fine, but I too use btop; it’s nice.
Random trivia: I think btop has been rewritten a few times now (bash to Python to C++)? It’s sort of an inside joke, to the point that someone suggested yet another rewrite, from C++ to Rust ( https://github.com/aristocratos/btop/issues/5 ). I guess the guy just likes writing system-monitoring console apps.
I guess this solves part of the mystery about why the French rioted when they raised the retirement age last year
MAWP - Archer
Yeah yeah yeah… yeah