They weren’t going to vote for Harris anyway. If she magically brought peace to the Middle East tomorrow, they’d find some other reason not to vote for her.
Totally. My comment was just a note that if you screw up like the alt text is humorously referring to, that’s a good tool for fixing it.
FYI if you get mojibake like in the alt text, this is a great tool for automatically fixing it:
You’re in good company. Steam even managed to do it for a whole bunch of people:
https://github.com/ValveSoftware/steam-for-linux/issues/3671
I was also curious, here’s a good answer:
https://unix.stackexchange.com/questions/670199/how-is-dev-null-implemented
The implementation is:
static ssize_t write_null(struct file *file, const char __user *buf,
                          size_t count, loff_t *ppos)
{
        return count;
}
It’s going to be pretty funny if they get one of these bots to accept their “coupons”. It’s been upheld in court that the bots can agree to things in a legally binding manner on behalf of the company. I hope this sort of thing works, because the only way companies will offer real support is if the bot costs them more money by being stupid.
Thought you were talking about this Pete the Cat at first and was very surprised:
FYI the link requires login because it’s for edit mode. Might be good to also have a “What is Ibis?” bit here, instead of requiring people to follow the link.
At any rate, looks neat! Has there been any thought given to what happens if the Conservapedia or similar people want to get onto the network? Is it instance blocking like Lemmy?
The whole “it’s just autocomplete” line is a comforting mantra. A sufficiently advanced autocomplete is indistinguishable from intelligence. LLMs provably have a world model, just like humans do. They build that model by experiencing the universe via the medium of human-generated text, which is much more limited than human sensory input, but has allowed for some very surprising behavior already.
We’re not seeing diminishing returns yet, and in fact we’re going to see some interesting stuff happen as we start hooking up sensors and cameras as direct input, instead of these models building their world model indirectly through text alone. Let’s see what happens in 5 years or so before saying that there are any diminishing returns.
Gary Marcus should be disregarded because he’s emotionally invested in The Bitter Lesson being wrong. He really wants LLMs to not be as good as they already are. He’ll find some interesting research about “here’s a limitation that we found” and turn that into “LLMS BTFO IT’S SO OVER”.
The research is interesting for helping improve LLMs, but that’s the extent of it. I would not be worried about the limitations the paper found, for a couple of reasons: the paper tested o1-mini and llama3-8B, which are much smaller models with much more limited capabilities, and GPT-4o got the problem correct when I tested it, without any special prompting techniques or anything. Until we hit a wall and really can’t find a way around it for several years, this sort of research falls into the “huh, interesting” territory for anybody that isn’t a researcher.
Gary Marcus is an AI crank and should be disregarded
Yeah, it’s not impossible, but it’s much harder and you get a lot less info. You can also counteract JS-less tracking with Firefox’s privacy.resistFingerprinting, or by using the Tor Browser, which enables a lot of anti-surveillance measures by default. Here’s another good site for discovering how trackable you are: https://coveryourtracks.eff.org/
They’re also working with browser developers to push htmx’s features into web standards, so that hopefully soon you won’t even need htmx/JS/etc.; it’ll just be what your browser does by default.
A lot of the web is powered by JS, but much less of it needs to be. Here are a couple of sites that are part of a trend away from introducing it unnecessarily:
The negative implication of Google requiring JS is that they will use it to track everything about you that they possibly can, even down to how you move your cursor, or how much battery your phone has left in order to jack up prices, or any number of other shitty things.
Could very well be a mobile thing. I was pretty annoyed recently when logging into gcal for work on my phone: it refused to let me sign in without my giving them my cell phone number. When I switched to wifi, it stopped bugging me, so clearly they pay attention to that sort of signal.
Sometimes, yeah. My default is DDG, and I also use Kagi, but Google is still good at some stuff. Guess I’ll take the hit and just stop using it completely though. Kagi has been good enough, and also lets me search the fediverse for finding that dank meme I saw last week. Google used to be able to do that, but can’t shove as many ads in those queries I assume, so they dropped that ability.
What do they think “running the passport number” will do? I’ve seen that in several of these posts. I can’t imagine whoever they got their paper from will have been able to create a number that exists in any system that a cop uses during a traffic stop.
The two UCC references they list:
https://www.law.cornell.edu/ucc/1/1-201#1-201b37
“Signed” includes using any symbol executed or adopted with present intention to adopt or accept a writing.
https://www.law.cornell.edu/ucc/1/1-304
§ 1-304. Obligation of Good Faith.
Every contract or duty within the Uniform Commercial Code imposes an obligation of good faith in its performance and enforcement.
From what I can understand of this mess, they’re trying to buy a car, or otherwise enter into a contract of some sort with Volkswagen. I’m no lawyer, but I’m pretty sure sections like § 1-304 only mean that after you enter a contract, you’re obligated to uphold it in good faith. If someone doesn’t want to enter into a contract with you because they don’t like the way you write your signature, it’s their right to tell you to get bent.
You’ll like this poem:
https://ncf.idallen.com/english.html
The start of it: