Bill Gates feels generative AI has plateaued, says GPT-5 will not be any better

The billionaire philanthropist, in an interview with the German newspaper Handelsblatt, shared his thoughts on artificial general intelligence, climate change, and the future scope of AI.

  • ShittyBeatlesFCPres@lemmy.world · +41/−1 · 10 months ago

    I’m not sure I’d say it’s plateaued today, but I definitely think machine learning is going to hit a wall soon. Some tech keeps improving until physical limits stop progress, but I see generative AI as being more like self-driving cars, where the “easy” parts end up solved but the last 10% is insanely hard.

    There’s also the economic reality of scaling. Maybe the “hard” problems could, in theory, be easily solved with enough compute power. We’ll eventually solve those problems but it’s going to be on Nvidia’s timeline, not OpenAI’s.

    • nossaquesapao@lemmy.eco.br · +23 · 10 months ago

      Generative AI is a bit different from self-driving cars in the sense that it’s tolerant of failures. This may leave more room for improvement compared to other applications.

    • erwan@lemmy.ml · +2 · 10 months ago

      Yes, especially when you consider that the human brain runs on 15W of power!

  • scarabic@lemmy.world · +36/−1 · 10 months ago

    Let me save you a click: he doesn’t say anything interesting about why he thinks this.

  • Pxtl@lemmy.ca · +11/−1 · 10 months ago

    I hope so. They’ve already got scary implications for creative parts of the economy.

    That said, we’re in the Cambrian explosion of this tech. As it plateaus, the next step will be enhanced tooling and convenience around it: better inputs than just text, more applications in new spaces, etc.

    • Random_Character_A@lemmy.world · +9/−1 · 10 months ago

      Now now. He only hired assholes and monsters to execute immoral, MS mob-style tactics, while he played the great innocent altruist.

  • oldfart@lemm.ee · +11/−4 · 10 months ago

    Not a single comment yet stating how Gates is a great human being because of his foundation, and how all you haters should fuck the fuck off? Sigh, let me be the first one.

    • oldfart@lemm.ee · +18/−4 · 10 months ago

      Just to make things extremely clear, the above comment was sarcastic. He’s an awful person.

    • banneryear1868@lemmy.world · +5/−3 · 10 months ago

      You mean his tax haven?

      I mean, they’ve done some good things, but the capitalist system that gave him his wealth is the same one that causes poverty, and his foundation isn’t working to change that.

  • Mio@feddit.nu · +7/−1 · 10 months ago

    But we have more areas to apply this to. I still can’t ask my PC to do some work, like uninstall OneDrive, change a setting in the OS, or send a message on Teams. Where is Jarvis?

    • Toribor@corndog.social · +5 · 10 months ago

      Having Generative AI make API calls on your behalf is a work in progress across pretty much every industry. It’ll make complex tasks across multiple services a lot easier but it’s definitely going to cause weird unpredictable behavior too.

      • Mio@feddit.nu · +1 · 10 months ago

        That is why you have to confirm the action before executing it.

      • dukk@programming.dev · +1 · 10 months ago

        I wouldn’t trust it to do everything yet, but it sure as hell would be useful for retrieving information. Wish I could just ask it “Hey, is the door locked?” and get an answer.

        Maybe it could suggest actions, but I wouldn’t want it to do anything without manual human confirmation; it’s too unreliable.
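
        The pattern discussed in this subthread — let the model answer read-only questions freely, but gate anything state-changing behind an explicit human yes/no — can be sketched in a few lines. All the names below (`ProposedAction`, `run_with_confirmation`, the example tools) are made up for illustration, not any real assistant API:

```python
# Minimal sketch of the "confirm before executing" pattern:
# the model proposes a tool call, but nothing runs until approved.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ProposedAction:
    tool: str                              # e.g. "uninstall_app"
    args: Dict[str, str] = field(default_factory=dict)

def run_with_confirmation(action: ProposedAction,
                          tools: Dict[str, Callable[..., str]],
                          confirm: Callable[[ProposedAction], bool]) -> str:
    """Execute the proposed action only if the confirm callback approves it."""
    if action.tool not in tools:
        return "error: unknown tool"
    if not confirm(action):
        return "cancelled by user"
    return tools[action.tool](**action.args)

# Hypothetical tools: one read-only query, one state-changing action.
tools = {
    "is_door_locked": lambda: "yes, the front door is locked",
    "uninstall_app": lambda name: f"uninstalled {name}",
}

def confirm(action: ProposedAction) -> bool:
    # Read-only queries are auto-approved; in a real UI, everything
    # else would prompt the user instead of being refused outright.
    return action.tool in {"is_door_locked"}

print(run_with_confirmation(ProposedAction("is_door_locked"), tools, confirm))
# → yes, the front door is locked
print(run_with_confirmation(ProposedAction("uninstall_app", {"name": "OneDrive"}), tools, confirm))
# → cancelled by user
```

        The design choice here is that the allow/deny policy lives entirely outside the model: even if the model hallucinates a destructive call, the worst case is a cancelled action, not an executed one.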

  • r00ty@kbin.life · +4 · 10 months ago

    On the one hand, I don’t really know enough about AI to comment. What I do remember is that Bill Gates said the Internet was just a fad in the 90s. This comment caused myself and others problems promoting the Internet in workplaces, because those in charge for some reason gave weight to his words. :p

  • trackcharlie@lemmynsfw.com · +1 · edited · 10 months ago

    I think he could be right about generative AI, but that’s not a serious problem, given we’re moving beyond generative AI and into virtual intelligence territory.

    Generative AI right now requires someone (or something) to initiate it with a prompt, but according to some of the latest research papers from OpenAI, as well as the drama that happened recently surrounding the leadership, it appears we’re moving beyond the ‘generative’ phase into the ‘virtual intelligence’ phase.

    It’s not going to be ‘smart’; it will be knowledgeable (and, hopefully, accurate). That is to say, VIs will be useful for data retrieval or organization, but not necessarily data creation (although IIRC the way around this would be to develop a VI that works specifically on creating ideas, but then we’d be moving into AGI territory, and I don’t expect we’ll have serious contenders for AGI for another decade at least).

    Rumours abound surrounding the OpenAI drama, the key one being the potential that they accidentally developed AGI internally (I doubt this heavily). The more likely story is that the board of directors had a financial stake in Nvidia, and when they found out Altman was working on chips specifically for AI that were faster, cheaper, and lower-power than current Nvidia trash (by literally tens of thousands of dollars), they fired him to try to force the company onto their preferred track (and profit in the process — which, IMO, is kind of ironic for a non-profit board of directors that has so many ‘closed door’ discussions with Nvidia staff…).

    This is just the thoughts of a comp-sci student with a focus on artificial intelligence systems.

    If interested in further reading:

    https://www.ibm.com/blog/understanding-the-different-types-of-artificial-intelligence/

    https://digitalreality.ieee.org/publications/virtual-intelligence-vs-artificial-intelligence

    https://www.psychologytoday.com/us/blog/what-we-really-want-in-a-leader/202204/why-you-need-to-focus-on-virtual-intelligence

    Keep in mind that because it’s still early days in this field, a lot of terms haven’t reached an established consensus across academia yet, so you’ll notice variations in how each organization explains what “x” type of intelligence is.

  • The Menemen!@lemmy.world · +3/−2 · edited · 10 months ago

    Maybe, but I am sure the tools the AIs can use will improve, making the AIs’ jobs easier and thus the AI more efficient. I hope he is right, tbh.

    Eww, as a long time Linux user I need to take a shower now. I feel dirty.