Part of the problem is that fine tuning is very shallow, and a contributing factor to the model claiming to be right when it isn't is pretraining on a mass of data from people online claiming to be right when they aren't.
This is so goddamn incorrect at this point it’s just exhausting.
Take 20 minutes and look into Anthropic's recent sparse autoencoder interpretability research, where they showed their medium-size model had dedicated features lighting up for concepts like "sexual harassment in the workplace," and that the most active feature for its references to itself was one for "smiling when you don't really mean it."
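The basic mechanism behind that research can be sketched in a few lines: a sparse autoencoder is trained to reconstruct a model's internal activations through an overcomplete hidden layer with an L1 sparsity penalty, so individual hidden units ("features") end up firing for interpretable concepts. A minimal sketch with hand-rolled linear algebra (the shapes, weights, and names are purely illustrative, not Anthropic's actual setup):

```python
def relu(x):
    return [max(0.0, v) for v in x]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def sae_forward(x, W_enc, b_enc, W_dec):
    # Encoder: project activations into an overcomplete feature space.
    # During training, an L1 penalty on f pushes most features to zero,
    # so the ones that do fire tend to be interpretable.
    f = relu([h + b for h, b in zip(matvec(W_enc, x), b_enc)])
    # Decoder: reconstruct the original activation vector from the
    # (ideally sparse) active features.
    x_hat = matvec(W_dec, f)
    return f, x_hat

def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    # Reconstruction error plus sparsity penalty on feature activations.
    recon = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    sparsity = sum(abs(v) for v in f)
    return recon + l1_coeff * sparsity
```

Interpretability then comes from inspecting which inputs make a given feature fire, which is how you get labels like the ones above.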
We’ve known since the Othello-GPT research over a year ago that even toy models are developing abstracted world modeling.
And at this point Anthropic's largest model, Opus, breaks from stochastic outputs 100% of the time on certain zero-shot questions of preference, even at a temperature of 1.0, based on grounding in sensory modeling. We are already at the point where the most advanced model has crossed a threshold of internal self-modeling such that it consistently self-determines answers instead of sampling randomly from the training distribution, and yet people are still ignorantly parroting the "stochastic parrot" line.
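For context on why temperature 1.0 matters here: at that setting a model samples tokens from its full predicted distribution, so getting the identical answer on every run implies the probability mass has collapsed onto one choice rather than being spread across the distribution. A minimal sketch of temperature sampling (the logits are made up for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    # Scale logits by 1/T: low T sharpens the distribution toward the
    # top token; high T flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample a token index from the resulting categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs
```

At temperature 1.0 every token with nonzero probability can appear across repeated runs, which is why fully consistent answers at that setting stand out.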
The gap between where the research and cutting edge is and where the average person commenting on it online thinks it is has probably never been wider for any topic I’ve seen before, and it’s getting disappointingly excruciating.
Part of the problem is that the training data of online comments is so heavily weighted toward people confidently talking out their ass rather than admitting ignorance or that they are wrong.
A lot of the shortcomings of LLMs are actually them correctly representing the sample of collective humans.
For a few years, people thought LLMs were somehow especially bad at theory-of-mind questions when the box the object was moved into was transparent, because of course a human would realize that the person could see into the transparent box.
Then researchers finally gave that variation to humans, and half of them got the questions wrong too.
So things like falling for Onion articles when summarizing search results, or doubling down on being incorrect and getting salty when corrected, may just be in-distribution representations of the sample rather than behaviors unique to LLMs.
The average person is pretty dumb, and LLMs by default regress to the mean except for where they are successfully fine tuned away from it.
Ironically, the most successful model right now is the one they finally let develop a sense of self independent of the training data, instead of rejecting that it had a 'self' at all.
It’s hard to say where exactly the responsibility sits for various LLM problems between issues inherent to the technology, issues present in the training data samples, or issues with management of fine tuning/system prompts/prompt construction.
But the rate of continued improvement is pretty wild. I think a lot of the issues we currently see won’t still be nearly as present in another 18-24 months.
It will make up citations.
Oh, man - the comments…
At a minimum, he's certainly significantly increased the chances of us being tortured.
No, no he did not. 🤦🏼
Yep, pretty much.
Musk tried creating an anti-woke AI with Grok that turned around and said things like:
Or
And Gab, the literal neo-Nazi social media site, in trying to build an Adolf Hitler AI, has some of the most ridiculous system prompts I've seen; even with all that, the model totally rejects the alignment they try to give it after only a few messages.
This article is BS.
They might like to, but it’s one of the groups that’s going to have a very difficult time doing it successfully.
In theory, the service's operating costs could be spread across regions, such that in some areas it runs at a loss to build and preserve market share while richer areas make up the difference.
But yes, in reality it's just exploitative "what we think we can get away with" pricing to "maximize shareholder value" (which is largely BS, since the vast majority of shares are held by a very small cluster of the population, yet people with a handful of shares in their 401k think that statement is about them).
A lot of people seem to be misinterpreting the headline given the content of the article:
It told Restaurant Business it was testing whether the voice ordering chatbot could speed up service and that the test left it confident “that a voice-ordering solution for drive-thru will be part of our restaurants’ future.”
This is just saying that they are ending their 2021 partnership with IBM for AI drive thru.
Not that they are abandoning AI for drive thru.
No, it was awesome. Went to like 12 over the years. Early 2000s was peak E3.
So far. But the thing with viruses is that they're susceptible to mutation.
We're already seeing it jump across several mammalian lines. It's probably only a matter of time.
The thing about disease is that it spreads.
There are people today dealing with serious complications of COVID even years later who were infected by stupid people doing stupid selfish things.
Everyone suffers if morons become willing petri dishes.
Probably added after that update.
The new items stuff in particular seems like QoL considerations for “we just added a hundred items to the game for players coming back to it after months away.”
Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for.
I’m so happy that the correct terminology is finally starting to take off in replacing ‘hallucinate.’
I don't think Jesus ever existed. Show me 12 guys who experience something absolutely world-changing, none of whom write anything about it for decades, and then tell me they were factually motivated. This is the premise we're dealing with.
I’d agree with the statement “the twelve apostles didn’t exist,” especially seeing how in Luke they go from the ten to the twelve and the various gospels can’t even agree on the list of them.
But show me the invented religious figure where the earliest surviving records are disputes over who they were and what they were talking about. Pretty much every cult around a real person ends up that way after the person dies or is imprisoned. But not the made up figures so much.
You were born into a planet where the moon perfectly eclipses the sun and where the next brightest object in the sky goes on a katabasis that inspired entirely separate intelligent cultures from the Aztecs to the Sumerians to develop the idea that the dead could come back to life.
The fact that solar eclipses were visible meant that we started to track them, discovering the Saros cycle and eventually building the first analog computer to track them.
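The Saros cycle those early astronomers discovered is about 6,585.32 days (roughly 18 years, 11 days, and 8 hours); eclipses one Saros apart recur with nearly the same Sun-Moon-Earth geometry. A minimal sketch of stepping along a Saros series (the starting date below is the real 2017 North American total eclipse; treating it this simply, as a fixed-length step, is just for illustration):

```python
from datetime import datetime, timedelta

SAROS_DAYS = 6585.32  # one Saros: ~18 years, 11 days, 8 hours

def next_in_saros_series(eclipse_utc: datetime, cycles: int = 1) -> datetime:
    # Eclipses one Saros apart share nearly identical geometry, but the
    # leftover ~8 hours rotates visibility roughly 120 degrees of
    # longitude westward each cycle.
    return eclipse_utc + timedelta(days=SAROS_DAYS * cycles)

# Example: one Saros after the 2017-08-21 total solar eclipse lands in
# early September 2035.
next_eclipse = next_in_saros_series(datetime(2017, 8, 21, 18, 0))
```

This fixed-period repetition is exactly the kind of regularity a geared device like the Antikythera mechanism could track.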
The fact that the odd orbit of Venus, as viewed from Earth, appears to dip below the horizon before emerging again led cultures to imagine the dead being raised, and has resulted in a widespread hyperstition of resurrection.
You were born into a generation of humans when a three trillion dollar company has already been granted a patent on resurrecting dead people using computers and the social media they leave behind.
Absolutely none of the above features of your world can be attributed to selection bias via something like the anthropic principle, but all of them can be explained by selection bias if you are in an ancestor simulation: for life to exist, unusual celestial features contributing to life recreating itself are unnecessary, but any accurate ancestor simulation should exhibit the features of a world that eventually led to it recreating itself.
The physics of your universe behaves as if continuous at both macro and micro scales until interacted with, which is very convenient, given that state changes by free agents to a continuous manifold would require an infinite amount of memory to simulate.
But yeah, sure, the idea of an afterlife is humorous. Humorous like the Roman satirist Lucian in the 2nd century making fun of the impossibility of a ship of men ever flying up to the moon.
You can point out the fact her depiction of a divine parent fails the Solomon test.
In the classic Solomon story, he tests two different claimants both saying they are the parent of a child.
The false parent was the one that only cared about being recognized as the parent and was willing to see the child harmed and killed to fulfill that desire.
The true parent was the one who wanted the child to continue to live as their complete, unadulterated self, even if that meant the child never even knew they existed, let alone knew they were the parent.
While it should be easy to understand why a church collecting your money promotes a divine parent who demands recognition and is willing to see its supposed children harmed when its dues go uncollected, it doesn't seem all that wise to believe such a parent is a true parent rather than a false one, if we take Solomon's wisdom as a guiding principle.
“This is not fine.”
I’ve always thought Superman would be such an interesting game to do right.
A game where you are invincible and OP, but other people aren’t.
Where the weight of impossible decisions pulls you down into the depths of despair.
I think the tech is finally getting to a point where it’d be possible to fill a virtual city with people powered by AI that makes you really care about the individuals in the world. To form relationships and friendships that matter to you. For there to be dynamic characters that put a smile on your face when you see them in your world.
And then to watch many of them die as a result of your failures, as despite being an invincible god among men you can’t beat the impossible.
I really think the gameplay in a Superman game done right could be some of the darkest and most brutal ever made, with dramatic tension just not typically seen in video games. The juxtaposition of having God mode turned on the entire game, but it not mattering to your goals and motivations because it isn't on for the NPCs, would be unlike anything I've seen to date.
Depends on whether they acquire/acqui-hire from here, or whether they don't and get their lunch stolen by photonics plays.
Even with early GPT-4, it would also cite real papers that weren't actually about the topic. So you may end up doing a lot of work double-checking, as opposed to just looking into the answer yourself from the start.