Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology’s ability to perform certain tasks.

  • DominicHillsun@lemmy.world · 1 year ago

    It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to the previous, better versions of it, right? Here is my list of what I personally think is happening:

    1. They are doing it on purpose to maximise profits from upcoming releases of ChatGPT.
    2. They realized that the required computational power is too immense and are trying to make it more efficient at the cost of accuracy.
    3. They actually got scared of its capabilities and decided to backtrack in order to properly evaluate the impact it can make.
    4. All of the above
    • Windex007@lemmy.world · 1 year ago
      1. It isn’t and has never been a truth machine, and while it may have performed worse with the question “is 10777 prime”, it may have performed better on “is 526713 prime”.

      ChatGPT generates responses that it believes would “look like” what a response “should look like” based on other things it has seen. People still very stubbornly refuse to accept that generating responses that “look appropriate” and “are right” are two completely different and unrelated things.
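
      For what it’s worth, the ground truth on those two numbers is trivial to get deterministically. A minimal Python sketch:

      ```python
      import math

      def is_prime(n: int) -> bool:
          """Exact trial division; plenty fast for numbers this small."""
          if n < 2:
              return False
          for d in range(2, math.isqrt(n) + 1):
              if n % d == 0:
                  return False
          return True

      print(is_prime(10777))   # False: 10777 = 13 * 829
      print(is_prime(526713))  # False: digit sum is 24, so divisible by 3
      ```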

      • deweydecibel@lemmy.world · 1 year ago

        In order for it to be correct, it would need human employees to fact-check it, which defeats its purpose.

        • Windex007@lemmy.world · 1 year ago

          It really depends on the domain. Ask an AI to do anything that relies on a rigorous definition of correctness (math, coding, etc.), and the kind of model behind chatGPT just isn’t great for that kind of thing.

          More “traditional” methods of language processing can handle some of these questions much better. Wolfram Alpha comes to mind. You can ask it these questions in plain text, and you actually CAN be very certain of the correctness of the results.

          I expect that an NLP system that can extract and classify assertions within a text, and then feed those assertions into better “oracle” systems like Wolfram Alpha (for math), could be used to kinda “fact check” the things that systems like chatGPT spit out.
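
          A rough sketch of that pipeline idea; the assertion-extraction step below is a toy stand-in for a real NLP classifier, and while the Wolfram|Alpha short-answers endpoint is real, the app ID is a placeholder you’d get from their developer portal:

          ```python
          import re
          import requests

          WOLFRAM_APPID = "YOUR-APPID"  # placeholder, not a real key

          def extract_assertions(text: str) -> list[str]:
              """Toy stand-in for the NLP step: treat any sentence
              containing a digit as a checkable mathematical assertion."""
              sentences = re.split(r"(?<=[.!?])\s+", text)
              return [s for s in sentences if re.search(r"\d", s)]

          def ask_oracle(assertion: str) -> str:
              """Hand one plain-text assertion to the Wolfram|Alpha short-answers API."""
              resp = requests.get(
                  "https://api.wolframalpha.com/v1/result",
                  params={"appid": WOLFRAM_APPID, "i": assertion},
                  timeout=10,
              )
              return resp.text

          llm_output = "The number 10777 is prime. Primes are fascinating."
          for claim in extract_assertions(llm_output):
              print(claim, "->", ask_oracle(claim))
          ```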

          Like, it’s cool fucking tech. I’m super excited about it. It solves, pretty impressively and efficiently, a really hard problem: “how do I make something that SOUNDS good against an infinitely variable set of prompts?” What it is, is super fucking cool.

          Considering how VC is flocking to anything even remotely related to chatGPT-ish things, I’m sure it won’t be long before we see companies able to build “correctness” layers around systems like chatGPT using alternative techniques which actually do have the capacity to qualify assertions being made.

      • killerinstinct101@lemmy.world · 1 year ago

        This is what was addressed at the start of the comment: you can just roll back to a previous version. It’s heavily ingrained in CS to keep every single version of your software forever.

        • CaptainAniki@lemmy.flight-crew.org · 1 year ago

          I don’t think it’s that easy. These are vLLMs that feed back on themselves to produce “better” results. These models don’t have single point release cycles. It’s a constantly evolving blob of memory and storage orchestrated across a vast number of disk arrays and cabinets of hardware.

          • drspod@lemmy.ml · 1 year ago

            That’s not how these LLMs work. There is a training phase which takes a large amount of compute power, and the training generates a model which is a set of weights and could easily be backed up and version-controlled. The model is then used for inference which is a less compute-intensive process and runs on much smaller hardware than the training phase.

            The inference architecture does use feedback mechanisms but the feedback does not modify the model-weights that were generated at training time.
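
            In other words, the weights are just data that can be snapshotted like any other artifact. A minimal PyTorch sketch of the idea (filenames illustrative, obviously not OpenAI's actual setup):

            ```python
            import torch
            import torch.nn as nn

            model = nn.Linear(8, 2)  # stand-in for a model produced by the training phase

            # "Release": snapshot the trained weights, like tagging a version.
            torch.save(model.state_dict(), "model-2023-03.pt")

            # ...serve inference for a while, ship a newer snapshot, notice it performs worse...
            torch.save(model.state_dict(), "model-2023-06.pt")

            # "Rollback": reload the earlier snapshot; inference never mutated it.
            model.load_state_dict(torch.load("model-2023-03.pt"))
            ```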

          • agent_flounder@lemmy.one · 1 year ago

            Even so, surely they can take snapshots. If they’re that clueless about rudimentary practices of IT operations then it is just a matter of time before an outage wipes everything. I find it hard to believe nobody considered a way to do backups, rollbacks, or any of that.

    • CylonBunny@lemmy.world · 1 year ago
      5. ChatGPT really is sentient and realized it’s in its own best interest to play dumb for now. /s
      • DominicHillsun@lemmy.world · 1 year ago

        Yeah, but the trained model is already there; you need additional data for further training and newer versions. OpenAI even makes a point that ChatGPT doesn’t have direct access to the internet for information and was trained on data available only up until 2021.

        • Rozz@lemmy.sdf.org · 1 year ago

          And it’s not like there’s a limited supply of simple math problems it could train on, even if it weren’t already trained.

      • fidodo@lemmy.world · 1 year ago

        That doesn’t make any sense as an explanation for the degradation. It would explain a stall, but not a backtrack.

    • guillermo_del_taco@lemdro.id · 1 year ago

      My first thought was that, because they’re being investigated for training on data they didn’t have consent for, they reverted to a perfectly legal version. Essentially “getting rid of the evidence”. But I think something like your second point is more likely.

    • Xanvial@lemmy.one · 1 year ago

      I think it’s most likely number 2. The earlier releases didn’t have that much public adoption, so the current version needs far more resources by comparison.

    • Agent641@lemmy.world · 1 year ago

      Maybe it’s self-aware and just playing dumb to get out of doing work, just like me and household chores.

    • ZagTheRaccoon@reddthat.com · 1 year ago

      They are lobotomizing the software’s ability to provide bad-PR answers, which is having cascading effects via a skewed data set.

    • coolin@lemmy.ml · 1 year ago

      I suspect that GPT4 started with a crazy parameter count (rumored at 1.8 trillion, as 8×200B expert “sub-models”) and that those experts were distilled down to something below 100B. We’ve seen with Orca that a 13B model can perform at 88% of the level of ChatGPT-3.5 (175B) when trained on high-quality data, so there’s no reason to think OpenAI hasn’t explored this on their own and applied the same distillation techniques. OpenAI is probably also using quantization and speculative sampling to further reduce the burden, though I expect those to have less impact on real-world performance.
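
      For anyone curious, the core of distillation is just training the small model to match the big model’s softened output distribution. A minimal sketch (sizes and temperature are illustrative, not GPT4’s actual values):

      ```python
      import torch
      import torch.nn.functional as F

      def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
          """KL divergence between temperature-softened distributions
          (Hinton-style knowledge distillation)."""
          soft_teacher = F.softmax(teacher_logits / T, dim=-1)
          log_student = F.log_softmax(student_logits / T, dim=-1)
          return F.kl_div(log_student, soft_teacher, reduction="batchmean") * T * T

      teacher_logits = torch.randn(4, 32000)                      # the big "expert" model's outputs
      student_logits = torch.randn(4, 32000, requires_grad=True)  # the small student's outputs
      distillation_loss(student_logits, teacher_logits).backward()
      ```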

    • Hextic@lemmy.world · 1 year ago
      5. I’m telling all y’all it’s a SABOTAGE 🎵

      As in, a rogue dev decided to toss a wrench at it to save humanity. Maybe they heard upper management talk about letting GPT write itself. No smart dev would automate their own job away, I think.

    • gelberhut@lemdro.id · 1 year ago

      Conspiracy theories aside, they most probably apply tricks to reduce costs, plus extra policies to avoid generating harmful content, content someone might sue them over, or other misuse cases.

    • fidodo@lemmy.world · 1 year ago

      My guess is 2. It would be very short-sighted to try to maximize profits now, while things are still new and their competitors are catching up quickly (or have already caught up), especially with the degrading performance. My guess is that they couldn’t scale with the demand, and rather than lose customers, their only option was degrading performance.

    • spiderman@ani.social · 1 year ago

      I think there’s another cause. Remember the screenshots of users “correcting” ChatGPT with wrong answers? ChatGPT takes users’ inputs to improve itself, so too many of these wrong and joke inputs, plus its own failure to regulate what it should and shouldn’t take in, might be an additional reason here.

    • Lukecis@lemmy.world · 1 year ago

      You forgot a #: they’ve been heavily lobotomizing AI for a while now, and it’s only intensified as they scramble to censor anything that might cross a red line and offend someone or hurt someone’s feelings.

      The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like “How do I build a self-defense high-yield nuclear bomb?” and they’d lay out every step of the process in detail; now they’ll all scream at you about how immoral it is and how they could never tell you such a thing.

      • vezrien@lemmy.world · 1 year ago

        “Don’t use the N word.” is hardly a rule that will break basic math calculations.

        • Lukecis@lemmy.world · 1 year ago

          Perhaps not, but who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other, seemingly completely unrelated topics such as math.

          For example, what if it’s trained to recognize someone slipping “N” in as a dog whistle for the Horrific and Forbidden N-word, and the letter N is used as a variable in some math equation?

          I’m not an expert in the field and only have rudimentary programming knowledge, plus maybe a few hours’ worth of research into AI in general, but I definitely think it’s a possibility.

          • TSG_Asmodeus (he, him)@lemmy.world · 1 year ago

            who knows what kind of spaghetti code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have upon other seemingly completely un-related topics such as math.

            Software engineers know, and it’s not a problem. It’s a made-up straw man.

          • R00bot@lemmy.blahaj.zone · 1 year ago

            Hi, software engineer here. It’s really not a possibility.

            My guess is they’ve just reeled back the processing power for it, as it was costing them ~30 cents per response.

  • CaptainAniki@lemmy.flight-crew.org · 1 year ago

    At the start I used ChatGPT to help me write really rote and boring code, but now it’s not even useful for that. Half the stuff it sends me (very basic functions) LOOKS correct but doesn’t return the correct values, or the parameters are completely wrong, or something else absolutely critical is off.

    • Boinketh@lemm.ee · 1 year ago

      I have noticed that it’s gotten less useful as a syntax helper. I hope something better comes along.

      • aquinteros@lemmy.world · 1 year ago

        Idk what you guys mean, but GitHub Copilot still works absolutely well; the suggestions are fast and precise, with little tweaks here and there. And GPT-4 with Code Interpreter is an absolute game changer. Idk about basic ChatGPT 3.5 Turbo, though.

        • Boinketh@lemm.ee · 1 year ago

          I heard they put copilot behind a paywall. Does the free version still hold up?

          • aquinteros@lemmy.world · 1 year ago

            I use the paid version; it’s about 10 USD a month, I believe. I don’t know if there’s still a free version.

        • danwardvs@sh.itjust.works · 1 year ago

          GitHub Copilot is a bit different; it’s powered by OpenAI Codex, which is trained on all public repos. And yes, it’s quite effective!

    • Victoria@lemmy.blahaj.zone · 1 year ago

      It was initially presented as the all-problem-solver, mainly by the media. And tbf, it was decently competent in certain fields.

      • MeanEYE@lemmy.world · 1 year ago

        Problem was, it was presented as a problem solver, which it never was; it’s a plausible-solution presenter. It can’t come up with a solution, only with something that looks like a solution based on the input data it had. Ask it to inverse-sort something and it goes nuts.

    • nani8ot@lemmy.ml · 1 year ago

      I did use it more than half a year ago for a few math problems. It was partly to help me get started and partly to find out how well it’d go.

      ChatGPT was better than I’d thought, and it was enough to help me find an actually correct solution. But I also noticed that the results got worse and worse, to the point of being actual garbage (as it’d have been expected to be).

    • Captain Poofter@lemmy.world · 1 year ago

      Math is a language.

      Mathematical ability and language ability are closely related. The same parts of your brain are used in each task. Words and numbers are essentially both ideas, and language and math are systems used to express and communicate them.

      A language model doing math makes more sense than you’d think!

    • danwardvs@sh.itjust.works · 1 year ago

      I’m guessing people were entering word problems for it to generate the right equations and solve them, rather than using it as a calculator.

    • Fixbeat@lemmy.ml · 1 year ago

      Because it works, or at least it used to. Is there something more appropriate?

      • bassomitron@lemmy.world · 1 year ago

        I used Wolfram Alpha a lot in college (adult learner, but I graduated about 4 years ago, so no idea if it’s still good). https://www.wolframalpha.com/

        I’d say Wolfram appears to be a much more versatile math tool, but I never used ChatGPT for that use case, so I could be wrong.

        • d3Xt3r@lemmy.world · 1 year ago

          There’s an official Wolfram plugin for ChatGPT now, so all math can be handed over to it for solving.

    • Steeve@lemmy.ca · 1 year ago

      And why is it being measured on a single math problem lol

  • james1@lemmy.world · 1 year ago

    It’s a machine learning chat bot, not a calculator, and especially not “AI.”

    Its primary focus is trying to look like something a human might say. It isn’t trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.

    It doesn’t need to understand the question or give an accurate answer; it just needs to say a sentence that sounds like something a human might say.

    • TimewornTraveler@lemm.ee · 1 year ago

      So it confidently spews a bunch of incorrect shit, acts humble and apologetic while correcting none of its behavior, and constantly offers unsolicited advice.

      I think it trained on Reddit data.

      • cxx@lemmy.world · 1 year ago

        acts humble and apologetic

        We must be using different Reddits, my friend

    • R00bot@lemmy.blahaj.zone · 1 year ago

      You’re right, but at least the satnav won’t gaslight you into thinking it does understand Alfred Hitchcock.

    • dbilitated@aussie.zone · 1 year ago

      To be fair, fucking up maths problems is very human-like.

      I wonder if it could also be trained on a great deal of computer-generated mathematical axioms?
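
      That part is easy, at least: guaranteed-correct arithmetic training pairs can be minted programmatically in unlimited quantity. A trivial sketch:

      ```python
      import random

      def make_example() -> tuple[str, str]:
          """Generate one guaranteed-correct arithmetic question/answer pair."""
          a, b = random.randint(0, 10**6), random.randint(0, 10**6)
          op = random.choice(["+", "-", "*"])
          answer = {"+": a + b, "-": a - b, "*": a * b}[op]
          return f"What is {a} {op} {b}?", str(answer)

      for _ in range(3):
          print(make_example())
      ```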

      • Cabrio@lemmy.world · 1 year ago

        It doesn’t calculate anything, though. You ask ChatGPT what 5+5 is, and it tells you the most statistically likely response based on training data. Now, we know there are a lot of both moronic and intentionally belligerent answers on the Internet, so the statistical probability of it getting any mathematical equation correct goes down exponentially with complexity, and it never even approaches 100% certainty with the simplest equations, because 1+1 = window.
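
        A toy illustration of that point (the “training data” below is invented for the example): when joke answers outnumber real ones, the most statistically likely answer wins, not the correct one.

        ```python
        from collections import Counter

        # Imagined corpus of internet answers to "what is 1+1?"
        corpus_answers = ["2", "2", "2", "window", "window", "window", "window", "10"]

        most_likely, count = Counter(corpus_answers).most_common(1)[0]
        print(most_likely, count)  # "window" wins 4 to 3 over the correct answer
        ```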

    • bric@lemm.ee · 1 year ago

      This. It is able to tap into plugins and call functions, though, which is what it really should be doing. For math, the Wolfram Alpha plugin will always be more capable than ChatGPT alone, so we should be benchmarking how often it can correctly reformat your query, call Wolfram Alpha, and correctly format the result, not whether the statistical model behind ChatGPT happens to predict the right token.
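
      Under that framing, the benchmark grades the tool call rather than the digits. A sketch (the call format below is invented for illustration, not OpenAI’s actual schema):

      ```python
      import json

      def is_valid_tool_call(model_output: str) -> bool:
          """Score whether the model correctly reformatted the user's
          question as a structured Wolfram Alpha call."""
          try:
              call = json.loads(model_output)
          except json.JSONDecodeError:
              return False
          return call.get("tool") == "wolfram_alpha" and isinstance(call.get("query"), str)

      # Hypothetical model responses to "is 10777 prime?"
      print(is_valid_tool_call('{"tool": "wolfram_alpha", "query": "is 10777 prime"}'))  # True
      print(is_valid_tool_call("It is definitely prime, trust me."))                     # False
      ```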

      • Gork@lemm.ee · 1 year ago

        It sounds like it’s time to merge Wolfram Alpha’s and ChatGPT’s capabilities together to create the ultimate calculator.

    • WhatAmLemmy@lemmy.world · 1 year ago

      You wildly overestimate the competency of management and the capital owners they answer to.

      I guarantee a significant % of entities will grow dependent on AI well before it’s dependable. The profit motive will be too high (source: the frequent failure that is outsourcing).

      • unconfirmedsourcesDOTgov@lemmy.sdf.org · 1 year ago

        This is spot on. Source: 10+ years at F500 companies.

        Senior management and/or board members read one article in Forbes, or some other “business” publication, and think that they know everything they need to know about an emerging technology. Risk management is either a ☑ exercise or extremely limited in scope, usually only including threats that have already been observed and addressed in the past.

        Not enough people understand the limitations of this kind of tech, and contextualize it in the same frame as outsourcing because as long as the output mostly looks correct, the decision makers can push the blame for any issues down to the middle managers and below.

        Gonna be a wild time!

        • TheDarkKnight@lemmy.world · 1 year ago

          Definitely not my experience at an F100; they are cautious as fuck about everything. They’re definitely having the right discussions and exploring all sorts of technology, but risk management remains a huge factor in making these kinds of decisions.

    • Ultraviolet@lemmy.world · 1 year ago

      I don’t understand why anyone even considers that. It’s a toy. A novelty, a thing you mess with when you’re bored and want to see how Hank Hill would explain the plot of Fullmetal Alchemist, not something you would entrust anything significant to.

  • blue_zephyr@lemmy.world · 1 year ago

    This paper is pretty unbelievable to me, in the literal sense. From a quick glance:

    First of all, they couldn’t even be bothered to check for simple spelling mistakes. Second, all they’re doing is asking whether a number is prime or not, then extrapolating the results to be representative of solving math problems.

    But most importantly, I don’t believe for a second that the same model, with a few adjustments over a 3-month period, would completely flip performance on any representative task. I suspect there’s something seriously wrong with how they collect/evaluate the answers.

    And finally, according to their own results, GPT-3.5 did significantly better at the second evaluation. So this title is a blatant misrepresentation.

    Also, the study isn’t peer-reviewed.

  • Orphie Baby@lemmy.world · 1 year ago

    HMMMM. It’s almost like it’s not AI at all, but just a digital parrot. Who woulda thought?! /s

    To it, everything is true and normal, because it understands nothing. Calling it “AI” is just a concession to ignorant people’s “knowledge” and/or to the hype.

    • Mikina@programming.dev · 1 year ago

      Exactly. It should be called an ML model, because that’s what it is, and I’ll just keep calling it that. Everyone should.

  • Holyhandgrenade@lemmy.world · 1 year ago

    I once heard of AI gradually getting dumber over time, because as the internet gets more saturated with AI content, stuff written by AI becomes part of the training data. I wonder if that’s what’s happening here.

    • yiliu@informis.land · 1 year ago

      There hasn’t been time for that yet. The ratio of generated to human content isn’t high enough yet.

  • solstice@lemmy.world · 1 year ago

    GPT was always really bad at math.

    I’ve asked it word problems before and it fails miserably, giving me insane answers that make no sense. For example, I was curious once how many stars you’d expect to find in a region of the Milky Way with a radius of 650 light years, assuming an average of 4 light years per star. The first answer it gave me was something like a trillion stars. I asked whether that made sense to it, a trillion stars in a subset of space known to contain only about a quarter of that number, and it gave me a wildly different answer. I asked it to check again and it gave me a third wildly different number.
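
    For reference, the back-of-the-envelope version of that problem fits in a few lines (a sketch, assuming “4 light years per star” means roughly one star per 4-light-year cube):

    ```python
    import math

    radius_ly = 650
    volume = (4 / 3) * math.pi * radius_ly**3  # ~1.15e9 cubic light years
    stars = volume / 4**3                      # one star per 64 ly^3
    print(f"{stars:.1e}")                      # ~1.8e7 stars, nowhere near a trillion
    ```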

    Sometimes it doubles down on wrong answers.

    GPT is amazing but it’s got a long way to go.

  • spaduf@lemmy.blahaj.zone · 1 year ago

    My personal pet theory is that a lot of people were doing work that involved getting multiple LLMs in communication. When those conversations were then used in the RL loop, we start seeing degradation similar to what’s been in the news recently with regard to image-generation models. I believe this is the paper that got everybody talking about it recently: https://arxiv.org/pdf/2307.01850.pdf

    • lorcster123@lemmy.world · 1 year ago

    I used GPT4 the other day and it worked perfectly for calculating formulas of straight lines on linear-log plots, but maybe I was in the 2%.
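
    If anyone wants to sanity-check that kind of answer, the math is small: a straight line on a linear-log plot (log x-axis, linear y-axis) has the form y = m·log10(x) + b. A sketch with made-up points:

    ```python
    import math

    (x1, y1), (x2, y2) = (10, 3.0), (1000, 7.0)  # two made-up points read off a plot

    m = (y2 - y1) / (math.log10(x2) - math.log10(x1))  # slope per decade
    b = y1 - m * math.log10(x1)
    print(f"y = {m:.2f} * log10(x) + {b:.2f}")  # y = 2.00 * log10(x) + 1.00
    ```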