Can we stop calling this shit AI? It has no intelligence
This is what AI actually is. Not the super-intelligent “AI” that you see in movies; those are fiction.
The NPC you see in video games with a few branches of if-else statements? Yeah, that’s AI too.
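For illustration, here’s a toy sketch of what an NPC’s entire “AI” can amount to (hypothetical code, not taken from any actual game):

```python
# The whole "AI" of a hypothetical video game NPC: a few if-else branches.
def npc_decide(health: int, player_distance: float, has_ammo: bool) -> str:
    """Pick the NPC's next action from simple hard-coded rules."""
    if health < 20:
        return "flee"    # low health: run away
    if player_distance < 5 and has_ammo:
        return "attack"  # player is close and we can shoot
    if player_distance < 15:
        return "chase"   # player spotted: close the gap
    return "patrol"      # nothing interesting: walk the route

print(npc_decide(health=80, player_distance=3.0, has_ammo=True))  # attack
```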
No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)
“we purposefully make it terrible, because we know it’s actually better” is close to conspiracy-theory-level thinking.
The internal models they are working on might be better, but they are definitely not making the product that’s publicly available right now shittier. It’s exactly the thing they released, and these are its current limitations.
This has always been the type of output it would give you; we even gave it a term really early on: hallucinations. The only thing that has changed is that the novelty has worn off, so you are now paying a bit more attention to it. It’s not a shittier product; you’re just not enthralled by it anymore.
Researchers have shown that the performance of the public GPT models has decreased, likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.
I don’t really care about why, so I won’t speculate, but let’s not pretend the publicly available models aren’t purposefully getting restricted either.
“likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.”
Which is different from
“No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)”
One is a natural thing that can happen in software engineering; the other is malicious intent without facts. That’s why I said it’s close to conspiracy-level thinking. That paper does not attribute this to some deeper cabal of AI companies colluding to make a shittier product, but only shitty enough that they all stay equally shitty (so none outcompetes the others unfairly), so they can sell the better version later (apparently this doesn’t hurt their brand or credibility somehow?).
“but let’s not pretend the publicly available models aren’t purposefully getting restricted either.”
Sure, not all optimizations are without costs. Additionally, you have to keep in mind that a lot of these companies are currently being kept afloat with VC funding. OpenAI isn’t profitable right now (they lost $540 million last year), and if investment goes into a downturn (like it did a little while ago in the tech industry), then they need to cut costs like any normal company. But it’s magical thinking to assume this is malicious by default.
Exactly. It’s a language learning and text output machine. It doesn’t know anything; its only ability is to output realistic-sounding sentences based on input, and it will happily and confidently spout misinformation as if it were fact because it can’t know what is or isn’t correct.
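A toy caricature of that mechanism (illustrative sketch only; real LLMs use neural networks over tokens, not raw word counts):

```python
import random

# Caricature of "output realistic-sounding text based on input": pick each
# next word from counts of what followed it in the training text.
# No facts, no knowledge, just "what usually comes next".
training_text = (
    "the moon is made of rock the moon is made of cheese "
    "the moon orbits the earth"
)

# Count which word follows which.
follows: dict[str, list[str]] = {}
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # fluent, never fact-checked
    return " ".join(out)

print(generate("the"))  # may confidently claim the moon is made of cheese
```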
Should probably use a more careful choice of words if you want to get hung up on semantic arguments
Sounds pretty much identical to human beings to me
Mass Effect’s lore differentiates between virtual intelligence and artificial intelligence: the first one is programmed to do shit and say things nicely, the second one understands enough to be a menace to civilization… always wondered if this distinction was actually accepted outside the game.
*Terms could be mixed up because I played in German (VI and KI)
There are many definitions of AI (e.g. that some mathematical model is used), but machine learning (which is what the large language models use) is considered part of the scientific field called AI. If someone says that something is AI, it usually means that some technique from the field of AI has been applied there. Even though the term AI doesn’t have much to do with intelligence as most people perceive it, I think the usage here is correct. (And yes, the whole scientific field should have been named differently.)
It’s artificial.
Sadly, the definition of artificial still fits the bill, even if it’s a bit misleading and most people will associate Artificial Intelligence with something akin to HAL 9000.
I will continue calling it “shit AI”.
I like it too haha
That’s why we preface it with Artificial.
But it isn’t artificial intelligence. It isn’t even an attempt to make artificial “intelligence”. It is artificial talking. Or artificial writing.
In that case I’m not really sure what you’re expecting from AI, without getting into the philosophical debate of what intelligence is. Most modern AI systems are, in essence, taking large datasets and regurgitating the most relevant data back in a relevant form.
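As a crude sketch of that “regurgitation” (toy code; plain word overlap stands in here for the learned relevance measures real systems use):

```python
from collections import Counter

# Score stored snippets by word overlap with the query and return the
# best match: "the most relevant data back in a relevant form".
dataset = [
    "photosynthesis converts sunlight into chemical energy",
    "the stock market closed higher on friday",
    "python is a popular programming language",
]

def most_relevant(query: str) -> str:
    q = Counter(query.lower().split())
    def overlap(snippet: str) -> int:
        # Counter & Counter keeps the words both sides share.
        return sum((q & Counter(snippet.lower().split())).values())
    return max(dataset, key=overlap)

print(most_relevant("what programming language is popular"))
```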
Lol, the AI effect in practice: the minute a computer can do it, it’s no longer intelligence.
A year ago if you had told me you had a computer program that could write greentexts compellingly, I would have told you that required “true” AI. But now, eh.
In any case, LLMs are clearly short of the “SuPeR BeInG” that the term “AI” seems to make some people think of and that you get all these Boomer stories about; what we’ve got now definitely isn’t that.
The AI effect can’t be a real thing since true AI hasn’t been done yet. We’re getting closer, but we’re definitely not in the positronic brain stage yet.
“true AI”
AI is just “artificial intelligence”; there are no strict criteria defining what is “true” AI and what isn’t.
Do LLMs show an ability to reason and problem-solve? Yes
Are they perfect? No
So what?
Ironically, your comment sounds like yet another example of the AI effect.