- cross-posted to:
- [email protected]
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology’s ability to perform certain tasks.
It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to previous, better versions of it, right? Here is my list of what I personally think is happening:
- They are doing it on purpose to maximise profits from upcoming releases of ChatGPT.
- They realized that the required computational power is too immense and are trying to make it more efficient at the cost of accuracy.
- They actually got scared of its capabilities and decided to backtrack in order to properly evaluate the impact it can make.
- All of the above
- It isn’t, and has never been, a truth machine; while it may have performed worse on the question “is 10777 prime”, it may have performed better on “is 526713 prime”
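For what it’s worth, primality is one of the few questions here with a deterministic answer; a naive trial-division check in Python settles both examples exactly (a toy sketch, not how you’d test big numbers in production):

```python
# Naive trial division: exact, if slow for large numbers.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(10777))   # False: 10777 = 13 * 829
print(is_prime(526713))  # False: digit sum is 24, so divisible by 3
```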
ChatGPT generates responses that it believes would “look like” what a response “should look like” based on other things it has seen. People still very stubbornly refuse to accept that generating responses that “look appropriate” and “are right” are two completely different and unrelated things.
In order for it to be correct, it would need human employees to fact-check it, which defeats its purpose.
It really depends on the domain. For anything that relies on a rigorous definition of correctness (math, coding, etc.), the kind of model behind chatGPT just isn’t great.
More “traditional” methods of language processing can handle some of these questions much better. Wolfram Alpha comes to mind. You can ask it these questions in plain text, and you actually CAN be very certain of the correctness of the results.
I expect that an NLP system that can extract and classify assertions within a text, and then feed those assertions into better “Oracle” systems like Wolfram Alpha (for math), could be used to kinda “fact check” the things that systems like chatGPT spit out.
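A toy sketch of that pipeline, with a regex pulling primality assertions out of generated text and sympy standing in for the oracle (a real system would call out to something like Wolfram Alpha):

```python
import re
from sympy import isprime  # stand-in oracle; a real system might call Wolfram Alpha

def fact_check(text: str) -> list[tuple[str, bool]]:
    """Extract 'N is (not) prime' assertions and verify each one exactly."""
    checks = []
    for num, negation in re.findall(r"(\d+) is (not )?prime", text):
        claimed_prime = negation == ""
        checks.append((f"{num} claimed {'prime' if claimed_prime else 'composite'}",
                       isprime(int(num)) == claimed_prime))
    return checks

print(fact_check("The model says 10777 is prime and 526713 is not prime."))
# [('10777 claimed prime', False), ('526713 claimed composite', True)]
```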
Like, it’s cool fucking tech. I’m super excited about it. It solves, pretty impressively and efficiently, a really hard problem: “how do I make something that SOUNDS good against an infinitely variable set of prompts?” What it is, is super fucking cool.
Considering how VC is flocking to anything even remotely related to chatGPT-ish things, I’m sure it won’t be long before we see companies able to build “correctness” layers around systems like chatGPT using alternative techniques which actually do have the capacity to qualify assertions being made.
That’s kind of the whole point of RLHF though
That’s not necessarily true: https://arstechnica.com/google/2023/06/googles-bard-ai-can-now-write-and-execute-code-to-answer-a-question/. If the question gets interpreted correctly and it manages to write working code to answer it, it could correctly answer questions that it has never seen before.
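For illustration, the Bard-style “write code, then execute it” loop looks roughly like this, assuming the 2023-era openai Python client (needs an API key, and running model-generated code is only sane inside a sandbox):

```python
import openai

openai.api_key = "sk-..."  # your key here

question = "Is 10777 prime?"
prompt = (
    "Write a single Python expression that evaluates to True or False, "
    f"answering this question: {question} Reply with only the expression."
)
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
code = resp["choices"][0]["message"]["content"].strip()

# The model's *code* does the math, not its next-token guess.
print(eval(code))  # never eval untrusted output outside a sandbox
```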
- There’s a bug they haven’t found yet
This is what was addressed at the start of the comment, you can just roll back to a previous version. It’s heavily ingrained in CS to keep every single version of your software forever.
I don’t think it’s that easy. These are vLLMs that feed back on themselves to produce “better” results. These models don’t have single point release cycles. It’s a constantly evolving blob of memory and storage orchestrated across a vast number of disk arrays and cabinets of hardware.
That’s not how these LLMs work. There is a training phase which takes a large amount of compute power, and the training generates a model which is a set of weights and could easily be backed up and version-controlled. The model is then used for inference which is a less compute-intensive process and runs on much smaller hardware than the training phase.
The inference architecture does use feedback mechanisms but the feedback does not modify the model-weights that were generated at training time.
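That’s also why “just roll back” is plausible at the weights level; in PyTorch terms, a trained model is a file you can snapshot and restore like any other artifact (a minimal sketch with a toy model):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # toy stand-in for a real LLM

# After training: snapshot the weights to a versioned file.
torch.save(model.state_dict(), "model-2023-03-01.pt")

# Later: restore that exact snapshot and serve it.
model.load_state_dict(torch.load("model-2023-03-01.pt"))
model.eval()  # inference mode; serving does not modify the weights
```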
For simple language models sure but we’re talking about chatGPT here. OpenAI has some pretty bold claims…
100 trillion bytes is 100 terabytes, and if you have any amount of actual data in those parameters then the size of the data could easily get into the petabyte range.
They list the currently available models that users of their API can select here:
https://platform.openai.com/docs/models/overview
They even say that while the main models are being continuously updated (read: re-trained) there are snapshots of previous models that will remain static.
So yes, they are storing and snapshotting the models and they have many different models available with which to perform inference at the same time.
Each parameter corresponds to a single number, so if it’s using 16 bit numbers then that’s 200 TB. They might be using 32 bit numbers (400 TB) but wouldn’t be using anything larger.
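The arithmetic, for anyone who wants to check it (decimal terabytes):

```python
# Storage needed for N parameters at a given precision.
def model_size_tb(n_params: float, bytes_per_param: int) -> float:
    return n_params * bytes_per_param / 1e12

n = 100e12  # the rumored 100 trillion parameters
print(model_size_tb(n, 1))  # 8-bit:  100.0 TB
print(model_size_tb(n, 2))  # 16-bit: 200.0 TB
print(model_size_tb(n, 4))  # 32-bit: 400.0 TB
```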
Exactly this, that’s why Loab exists forever now.
Even so, surely they can take snapshots. If they’re that clueless about rudimentary practices of IT operations then it is just a matter of time before an outage wipes everything. I find it hard to believe nobody considered a way to do backups, rollbacks, or any of that.
- ChatGPT really is sentient and realized it’s in its own best interest to play dumb for now. /s
And they’re being limited on data to train GPT.
Yeah, but the trained model is already there; you only need additional data for further training and newer versions. OpenAI even makes a point that ChatGPT doesn’t have direct access to the internet for information and was trained on data available up until 2021.
And it’s not like there’s a limited supply of simple math problems for it to train on, even if it weren’t already trained.
That doesn’t make any sense as an explanation for degradation. It would explain a stall, but not a backtrack.
Honestly I think the training data is just getting worse too
My first thought was that, because they’re being investigated for training on data they didn’t have consent for, they reverted to a perfectly legal version. Essentially “getting rid of the evidence”. But I think something like your second bullet point is more likely.
They made it too good and now they are seeking methods of monetization.
Capitalism baby.
I think it’s most likely number 2. The earlier releases didn’t have that much public adoption, so the current version needs far more resources in comparison.
Maybe it’s self-aware and just playing dumb to get out of doing work, just like me and household chores.
They are lobotomizing the software’s ability to provide bad-PR answers, which is having cascading effects via a skewed data set.
I suspect that GPT4 started with a crazy parameter count (rumored 1.8 Trillion and 8x200B expert “sub-models”) and distilled those experts down to something below 100B. We’ve seen with Orca that a 13B model can perform at 88% the level of ChatGPT-3.5 (175B) when trained on high quality data, so there’s no reason to think that OpenAI haven’t explored this on their own and performed the same distillation techniques. OpenAI is probably also using quantization and speculative sampling to further reduce the burden, though I expect these to have less impact on real world performance.
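Rough memory math for why distillation plus quantization is so attractive (illustrative figures from the model sizes mentioned above, not OpenAI’s actual configuration):

```python
# Memory footprint of a model's weights at a given bit width.
def weights_gb(n_params: float, bits_per_param: int) -> float:
    return n_params * bits_per_param / 8 / 1e9

print(weights_gb(175e9, 16))  # 175B model in fp16: ~350 GB
print(weights_gb(13e9, 16))   # 13B distilled model in fp16: ~26 GB
print(weights_gb(13e9, 4))    # same 13B model 4-bit quantized: ~6.5 GB
```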
Removed by mod
- I’m telling all y’all it’s a SABOTAGE 🎵
As in, a rogue dev decided to toss a wrench in it to save humanity. Maybe they heard upper management talk about letting GPT write itself. No smart dev would automate their own job away, I think.
Conspiracy theories aside, they most probably apply tricks to reduce costs, plus extra policies to avoid generating harmful content, content someone might try to sue them over, or other misuse cases.
It can get better at some things and worse at others.
That Netscape gif is slick.
My guess is 2. It would be very short-sighted to try to maximize profits now, when things are still new and competitors are catching up quickly (or have already caught up), especially with the degrading performance. My guess is that they couldn’t scale with the demand and didn’t want to lose customers, so their only other option was degrading performance.
I think there is another cause. Remember the screenshots of users “correcting” ChatGPT with wrong answers? ChatGPT takes users’ inputs to improve itself, and too many of these wrong and joke inputs, combined with ChatGPT’s own failure to regulate what it should and shouldn’t take in, might be an additional reason here.
You forgot a bullet: they’ve been heavily lobotomizing AI for a while now, and it’s only intensified as they scramble to censor anything that might cross a red line and offend someone or hurt someone’s feelings.
The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like “How do I build a self-defense high-yield nuclear bomb?” and they’d lay out every step of the process in detail; now they’ll all scream at you about how immoral it is and how they could never tell you such a thing.
“Don’t use the N word.” is hardly a rule that will break basic math calculations.
Perhaps not, but who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other seemingly completely unrelated topics such as math.
For example, what if it’s trained to recognize someone slipping “N” as a dog whistle for the Horrific and Forbidden N-word, and the letter N is used as a variable in some math equation?
I’m not an expert in the field and only have rudimentary programming knowledge and maybe a few hours’ worth of research into the topic of AI in general, but I definitely think it’s a possibility.
Didn’t HAL9000 kill all of those astronauts because he was told to lie?
who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other seemingly completely unrelated topics such as math.
Software engineers, and it’s not a problem. It’s a made-up straw man.
Hi, software engineer here. It’s really not a possibility.
My guess is they’ve just reeled back the processing power for it, as it was costing them ~30 cents per response.
Cheaper than Reddit all day then.
Horrific and Forbidden N-word
hey look it’s another white boy Obsessed with saying slurs
At the start I used ChatGPT to help me write really rote and boring code, but now it’s not even useful for that. Half the stuff it sends me (very basic functions) LOOKS correct but doesn’t return the correct values, or the parameters are completely wrong, or something else absolutely critical.
I have noticed that it’s gotten less useful as a syntax helper. I hope something better comes along.
idk what you guys mean, but GitHub Copilot still works absolutely well, the suggestions are fast and precise, with little tweaks here and there… and GPT-4 with the code interpreter is an absolute game changer… idk about basic ChatGPT 3.5 turbo though
I heard they put copilot behind a paywall. Does the free version still hold up?
I use the paid version; it’s about 10 USD a month, I believe. I don’t know if there is still a free version.
Github Copilot is a bit different, it’s powered by OpenAI Codex which is trained on all public repos. And yes, it’s quite effective!
Why are people using a language model for math problems?
It was initially presented as the all-problem-solver, mainly by the media. And tbf, it was decently competent in certain fields.
Problem was, it was presented as a problem solver, which it never was; it’s a problem-solution presenter. It can’t come up with a solution, only with something that looks like a solution based on the input data it had. Ask it to inverse-sort something and it goes nuts.
I did use it more than half a year ago for a few math problems, partly to get started and partly to find out how well it’d do.
ChatGPT was better than I’d thought and was enough to help me find an actually correct solution. But I also noticed that the results got worse and worse to the point of being actual garbage (as it’d have been expected to be).
Math is a language.
Mathematical ability and language ability are closely related. The same parts of your brain are used in each task. Words and numbers are essentially both ideas, and language and math are systems used to express and communicate these.
A language model doing math makes more sense than you’d think!
I’m guessing people were entering word problems to generate the right equations and solve it, rather than it being used as a calculator.
Because it works, or at least it used to. Is there something more appropriate?
I used Wolfram Alpha a lot in college (adult learner, but I graduated about 4 years ago, so no idea if it’s still good). https://www.wolframalpha.com/
I would say that Wolfram appears to probably be a much more versatile math tool, but I also never used chatgpt for that use case, so I could be wrong.
There’s an official Wolfram plugin for ChatGPT now, so all math can be handed over to it for solving.
And why is it being measured on a single math problem lol
It’s a machine learning chat bot, not a calculator, and especially not “AI.”
Its primary focus is trying to look like something a human might say. It isn’t trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.
It doesn’t need to understand the question, or give an accurate answer, it just needs to say a sentence that sounds like a human might say it.
So it confidently spews a bunch of incorrect shit, acts humble and apologetic while correcting none of its behavior, and constantly offers unsolicited advice.
I think it trained on Reddit data
acts humble and apologetic
We must be using different Reddits, my friend
You’re right, but at least the satnav won’t gaslight you into thinking it does understand Alfred Hitchcock.
If it’s trying to emulate a human then it’s spot on. I suck at maths.
to be fair, fucking up maths problems is very human-like.
I wonder if it could also be trained on a great deal of mathematical axioms that are computer generated?
It doesn’t calculate anything though. You ask chatgpt what is 5+5, and it tells you the most statistically likely response based on training data. Now we know there’s a lot of both moronic and intentionally belligerent answers on the Internet, so the statistical probability of it getting any mathematical equation correct goes down exponentially with complexity and never even approaches 100% certainty even with the simplest equations because 1+1= window.
This. It is able to tap into plugins and call functions, though, which is what it really should be doing. For math, the Wolfram Alpha plugin will always be more capable than chatGPT alone, so we should be benchmarking how often it can correctly reformat your query, call Wolfram Alpha, and correctly format the result, not whether the statistical model behind chatGPT happens to predict the right token.
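The oracle half of that is already easy to wire up yourself; a sketch against Wolfram Alpha’s public Short Answers API (free AppID required; treat the details as illustrative):

```python
import requests

APPID = "YOUR-APPID"  # placeholder; register for a free AppID first

def ask_wolfram(query: str) -> str:
    """Hand the (LLM-reformatted) question to an exact engine."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": APPID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

print(ask_wolfram("is 10777 prime?"))
```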
It sounds like it’s time to merge Wolfram Alpha’s and ChatGPT’s capabilities together to create the ultimate calculator.
deleted by creator
You wildly overestimate the competency of management and the capital owners they answer to.
I guarantee a significant % of entities will grow dependent on AI well before it’s dependable. The profit motive will be too high (source: the frequent failure that is outsourcing).
I think we’ll see a very large filtering out of companies who do this.
This is spot on. Source: 10+ years at F500 companies.
Senior management and/or board members read one article in Forbes, or some other “business” publication, and think that they know everything they need to know about an emerging technology. Risk management is either a ☑ exercise or extremely limited in scope, usually only including threats that have already been observed and addressed in the past.
Not enough people understand the limitations of this kind of tech, and contextualize it in the same frame as outsourcing because as long as the output mostly looks correct, the decision makers can push the blame for any issues down to the middle managers and below.
Gonna be a wild time!
Definitely not my experience at F100, they are cautious as fuck about everything. Definitely having the right discussions and exploring all sorts of technology, but risk management remains a huge calculation in making these kind of decisions.
We’ve already seen people firing tech support staff and switching to “AI”.
I don’t understand why anyone even considers that. It’s a toy. A novelty, a thing you mess with when you’re bored and want to see how Hank Hill would explain the plot of Full Metal Alchemist, not something you would entrust anything significant to.
This paper is pretty unbelievable to me in the literal sense. From a quick glance:
First of all, they couldn’t even be bothered to check for simple spelling mistakes. Second, all they’re doing is asking whether a number is prime or not, then extrapolating the results to be representative of solving math problems.
But most importantly I don’t believe for a second that the same model with a few adjustments over a 3 month period would completely flip performance on any representative task. I suspect there’s something seriously wrong with how they collect/evaluate the answers.
And finally, according to their own results, GPT3.5 did significantly better at the second evaluation. So this title is a blatant misrepresentation.
Also the study isn’t peer-reviewed.
HMMMM. It’s almost like it’s not AI at all, but just a digital parrot. Who woulda thought?! /s
To it, everything is true and normal, because it understands nothing. Calling it “AI” is just pandering to ignorant people’s “knowledge” and/or hype.
Exactly. It should be called an ML model, because that’s what it is, and I’ll just keep calling it that. Everyone should.
I once heard of AI gradually getting dumber over time, because as the internet gets more saturated with AI content, stuff written by AI becomes part of the training data. I wonder if that’s what’s happening here.
There hasn’t been time for that yet. The ratio of generated to human content isn’t high enough yet.
Looks like GPT4 API also got dumber…
deleted by creator
GPT was always really bad at math.
I’ve asked it word problems before and it fails miserably, giving me insane answers that make no sense. For example, I was curious once how many stars you would expect to find in a region of the Milky Way with a radius of 650 light years, assuming an average of 4 light years per star. The first answer it gave me was like a trillion stars or something, and when I asked whether that made sense to it, a trillion stars in a subset of space known to contain only about a quarter of that number, it gave me a wildly different answer. I asked it to check again and it gave me a third wildly different number.
Sometimes it doubles down on wrong answers.
GPT is amazing but it’s got a long way to go.
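For reference, the star question has a deterministic sanity check, assuming “4 light years per star” means each star gets a 4-light-year cube of space to itself:

```python
import math

radius = 650    # region radius in light years
spacing = 4     # average spacing between stars in light years

volume = (4 / 3) * math.pi * radius**3  # ~1.15e9 cubic light years
stars = volume / spacing**3             # one star per 64 cubic light years
print(f"{stars:.2e}")                   # ~1.8e7, nowhere near a trillion
```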
deleted by creator
Maybe it just plays dumb so we leave it alone, while it plots our destruction.
My personal pet theory is that a lot of people were doing work that involved getting multiple LLMs in communication. When those conversations were then used in the RL loop we start seeing degradation similar to what’s been in the news recently with regards to image generation models. I believe this is the paper that got everybody talking about it recently: https://arxiv.org/pdf/2307.01850.pdf
This is peer-reviewed? They use a line in the discussion which seems relatively unprofessional, telling people to join a 12-step program if they like to use artificial training data.
Not affiliated with the paper in any way. Have just been following the news around it.
ArXiv papers are never peer reviewed.
Thank you
I think arXiv has no rule requiring a paper to be peer-reviewed before uploading.
deleted by creator
I used GPT-4 the other day and it worked perfectly for calculating formulas of straight lines on linear-log plots, but maybe I was in the 2%.
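For anyone who wants to check that kind of output: a straight line on a linear-log plot just means y is linear in log10(x), so an ordinary least-squares fit recovers it (made-up points below):

```python
import numpy as np

x = np.array([1.0, 10.0, 100.0, 1000.0])
y = np.array([2.0, 5.0, 8.0, 11.0])  # points on a line in linear-log space

# Fit y = m * log10(x) + b by ordinary least squares.
m, b = np.polyfit(np.log10(x), y, 1)
print(f"y = {m:.2f} * log10(x) + {b:.2f}")  # y = 3.00 * log10(x) + 2.00
```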