Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

  • cybersandwich@lemmy.world · 1 year ago

    I dunno. There have been quite a few times where I am trying to do something on my computer and I could either spend 5 minutes searching, refining, digging through the results…or I can ask chatgpt and have a workable answer in 5 seconds. And that answer is precisely tailored to my specifics. I don’t have to assume/research how to modify a similar answer to fit my situation.

    Obviously it’s dependent on the types of information you need, but for coding, bash scripting, the Linux CLI, or anything of that nature, LLMs have been great and much better than Google searches.

    • Excrubulent@slrpnk.net · 1 year ago

      Okay but the problem with that is that LLMs not only don’t have any fidelity at all, they can’t. They are analogous to the language planning centre of your brain, which has to be filtered through your conscious mind to check if it’s talking complete crap.

      People don’t realise this and think the bot is giving them real information, but it’s actually just giving them spookily realistic word-salad, which is a big problem.

      Of course you could fix this by adding some kind of context engine that truly grasps the deeper and wider meaning of your query. The problem is that if you do that, you’ve basically created an AGI. That is, first of all, extremely difficult and likely far in the future, and second of all, it has ethical implications that go beyond how effective a search engine it is.

      • cybersandwich@lemmy.world · 1 year ago

        Did you read my last little bit there? I said it depends on the information you are looking for. I can paste error output from my terminal into Google and try to find an answer, or I can paste it into ChatGPT and be, at the very least, pointed in the right direction almost immediately, or even given the answer right away, vs. getting a Stack Overflow link and parsing the responses and comments and following secondary and tertiary links.

        I absolutely understand the stochastic parrot conundrum with LLMs. They have significant drawbacks and they are far from perfect, but then neither are Google search results. There is still a level of skepticism you have to apply.

        One of the biggest mistakes people make is the idea that LLMs and web searching are a zero-sum affair. They don’t replace each other. They complement each other. IMO, Google is messing up with their “AI” integration into Google search. It sets the expectation that it is an equivalent function.

        • Excrubulent@slrpnk.net · 1 year ago

          Sure but if that becomes the norm then a huge segment of the population will believe the first thing the bot tells them. You might be okay, but we’re talking about an entire society filtering its knowledge through an incredibly effective misinformation engine that will lie rather than say “I don’t know”, because that simple phrase requires a level of self-awareness that eludes a lot of actual people, much less a chatbot.

          • Touching_Grass@lemmy.world · 1 year ago

            That’s already a problem. The thing I think about is what will serve me better: Google or chat AI. The risk of bad information exists with both. But an AI-based search engine is something that will be much better at finding context, returning results geared towards my goals, and I suspect less prone to fuckery, because AI must be trained as a whole.

            • Excrubulent@slrpnk.net · 1 year ago

              Except we already know that LLMs lie, and people in general are not aware of this. Children are using these. When you as a person have to sift through results, you get a sense of what information is out there, how sparse it is, etc. When a chatbot word-vomits the first thing it can think of to satisfy your query, you get none of that, and perhaps you should be aware of that yourself. You don’t really seem to be; it’s like you think the saved time is more important than context, which apparently I have to remind you: the bot doesn’t know context.

              When you say:

              an AI based search engine is something that will be much better at finding context

              It makes me think that you really don’t understand how these bots work, and that’s the real danger.

              We’re talking in this thread about this wider systemic issue, not just what suits you personally regardless of how much it gaslights you, but if that’s all you care about then you do you I guess ¯\_(ツ)_/¯

              • Touching_Grass@lemmy.world · 1 year ago

                Lie is a weird way to describe it. They give you an answer based on probabilities. When they’re off base, they call it hallucinating. It’s not lying; it’s just lacking the data to give an accurate and correct answer, which will get better with more training and data. Everything else we have so far gets worse. Google isn’t what it was 15 years ago.
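                As a toy sketch of what “an answer based on probabilities” means in practice: an LLM repeatedly samples the next token from a probability distribution, and nothing in that sampling step checks the continuation against reality, which is where “hallucination” comes from. The tokens and probabilities below are invented purely for illustration; a real model derives them from billions of learned parameters.

```python
import random

# Toy next-token distribution for one context. All values are made up
# for illustration; a real LLM computes these from its parameters.
next_token_probs = {
    ("The", "capital", "of", "Mars", "is"): {
        "Olympus": 0.4,   # plausible-sounding but fictional
        "unknown": 0.35,  # the honest continuation
        "Elon": 0.25,     # also plausible-sounding, also wrong
    },
}

def sample_next(context):
    """Pick the next token proportionally to its probability."""
    dist = next_token_probs[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# The sampler happily continues the sentence either way; truth is not
# an input to this step, only probability mass.
token = sample_next(("The", "capital", "of", "Mars", "is"))
print(token)
```

The point of the sketch is that a confident-but-wrong continuation and a correct one are produced by exactly the same mechanism; more training data reshapes the weights but never adds a truth check.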

                I use ChatGPT every day to find answers over Google. It’s better in almost every single way to get information from, and I can only imagine what it’s capable of once it can interface with crawlers.

                The language you’re using to speak on this issue makes it seem like there’s a personal vendetta against LLMs. Why people get so mad at a new tool is always fascinating.