I don’t think you have any idea how bad it’s going to get. Grok is already giving us a glimpse. LLMs haven’t replaced search engines like Google yet (or have they already?), but it’s definitely heading in that direction. Then the answers will be delivered even more forcefully, and with far less transparency, by whoever controls the LLMs - and those are all multi-billion-dollar companies, because only they can afford the necessary computing power.
I totally agree with this.
Whenever I see people criticise AI, it’s usually because the companies steal copyrighted content with the aim of replacing the very people they stole from, or because of the environmental impact of training and running the models, which is awful. Both of those are good enough reasons to dislike AI, in my opinion. But I feel like I never see people talk about the fact that every answer it gives is filtered through a private corporation with its own agenda.
People use it to learn, and to do research. They use it to catch up on the news, of all things!
Like others have mentioned, Google has been doing this for a long time by sorting the search results it shows to the user. But it hasn’t written the articles, the blog posts, the top-10 lists, or the reviews you read… until now. It could make certain things easier or harder to find, but once you found the article you were looking for, it was written by a person unaffiliated with Google. All of that changes with AI. You don’t read the article directly anymore: Google (or any other AI company) scrapes it, parses it however it wants, and spits it back out to the end user.
I’m very surprised that people are so willing to let a private corporation completely control how they see the world, just because it’s a bit more convenient.
And that is just one side of it. The other, and arguably worse, side is that the content uploaded to the internet will become largely AI-generated. AI content can be produced at rates no human can compete with, and there are plenty of incentives, economic as well as political, for malicious interests to drown out human-made content with AI-generated disinformation.
That is also why the people hoping AI is a bubble that will burst are wrong. There are plenty of interested parties who will keep it alive for very profitable reasons, even if that is the opposite of what LLMs were originally claimed to be built for.
I don’t think so; Google Search’s ranking algorithm doesn’t seem any more transparent.
But (classic) Google provides links that can be traced. LLMs don’t do this consistently - and they frequently hallucinate. Don’t you have anything to say about my core point?
Sure, LLMs give lower-quality output. That doesn’t mean they have more influence over the narrative. In fact, I’d wager LLMs won’t be able to replace search engines, because search engines are far faster and more accurate for simple queries. And so we’ll still be finding information with search engines.