Looking to hear your guys’ thoughts on this, and hopefully share points in a more sophisticated manner than I can describe. (also, I hope this is an appropriate place to post?)
I have run into this discussion a few times across the fediverse, but I can’t for the life of me find those threads and comments lol
I believe that a non-corporate-owned platform with user-generated information is optimal, like Wikipedia. I don’t know the technicalities, but I feel like AI can’t replace answers drawn from human experience - humans who are enthusiasts and care about helping each other, not making money
I don’t know much about this topic, but I’m curious if you guys have actual real answers! Thread-based services like this and Stack Overflow (?) vs ChatGPT vs Bing vs Google, etc.
Machine learning seems to be very good at generating believable persuasive writing, and not at all good at determining truth from fiction, even worse than people. This is an absolutely deadly combination and our rush to use it in this capacity is profoundly stupid.
I’m not against these algorithms, mind you. I think they have a lot of useful potential. It’s just that the first things people have rushed to use them for seem to me to be the absolute most foolhardy applications.
I completely agree. It makes sense that AI is not good at telling truth from fiction. I think it’s more important for us as users to search for information on our own, then determine the “end answer” with our own judgement after reviewing different sources and experiences (taking each individual answer with a grain of salt)
That’s why I personally think AI search engines won’t be the best all-rounder for every type of information - especially niche, deep searching, which is IMO better done on forum-like platforms where people (enthusiasts) share sources, their experiences, what worked, what didn’t, and why. AI is maybe better suited to simple, bland information, like an Excel formula or how to hot-wire a car
yeah, AI does perform very well when given a specific and goal-oriented task. I think the coolest use I’ve seen for it was an emergency doc who was getting it to write explanatory documents for patients. Like “Please write a friendly, empathetic, simple-english explanation for why CPR would not be effective on a frail person with severe osteoporosis and advanced dementia” and things. This allows the doc to give the patient more detail than they’d have time to present, but it can be very closely tailored to the scenario, and it’s the sort of information AI shines at producing.
To generate answers is not to search answers. If I need a search engine, I want a search engine. If I need a text generation model I want a text generation model.
I’m with you on this one. Personally, I see a myriad of issues with replacing search engines with AI-generated answers:
- Accuracy. Without getting into what counts as truth or falsehood, can you trust AI-generated answers? I use Brave Search occasionally, and it shows an AI summary at the top. A lot of the time it strings multiple conflicting answers together into one paragraph, and the result is laughably bad.
When I look something up that isn’t trivial, I typically read multiple search results and make the call myself. This step is removed if you use AI, unless you explicitly ask it to list all the top conflicting answers (along with sources) so you can decide for yourself. As far as I know, though, its amalgamated answer is treated as a source of truth, even when the content has nuanced conflicts a human could easily spot. This alone deters me from AI search in general.
- I feel like relying on this will degrade my reading/skimming comprehension and research skills, and can lead to blindly trusting direct, easy-to-access answers.
- In the context of technical searches like programming or whatnot, I’m not so pressed for time that I need shortcuts. I don’t mind working stuff out from online forums and documentation, purely because I enjoy it and it’s part of the process.
- Sometimes, looking things up yourself means you can also discover great blogs and personal wikis from niche communities, plus related content you can save and look back on later.
- Centralizing information makes the internet bland, boring, and potentially exploitative. If it becomes normalized to visit one or two Big AI search engines instead of actually clicking on human-made sources, then the information-providing part of the internet will be lost to time.
There are also problems with biases, alignment, training AI on AI-generated content, etc. Make of that what you will, but it sounds worse than spending a couple of minutes selecting sources yourself. Top results are already full of generic, AI-generated stuff. The internet, made by us, for us, must prevail.
Anecdotally, I’ve used ChatGPT once or twice when I was really pressed for time with something I couldn’t find anywhere, and because my university professor wasn’t replying to my email regarding the topic. I was somewhat impressed at its performance, but this was after 6 or 7 prompts, not a single search away.
Maybe the next generation of AI search users, who’ve never looked a thing up manually, will grimace at the thought of pre-AI search engines.
I mostly have experience with Bing. And it’s because they keep forcing their shitty AI search splash page on me every time I want to do a normal web search. I turned it off in the Edge browser but what do you know, it keeps coming back.
Any new feature a company repeatedly forces on me is going to be starting from a hole it has to dig out of. The bigger the corporation, the more immediately resistant I will be to it. “ChatGPT” and “AI” as the latest buzzphrases grate on me.
Outside the big corporations, I’m keen to tinker around with it some. I’ve done some machine learning stuff in years past, but this is a large step change in what is available to hobbyists.
The worst part about ai as a search engine is that it doesn’t (or at least can’t reliably) give you the original source. It can tell you lots of stuff but there’s no link to a news article or wiki page where it got it from. A traditional search engine can give you unreliable results, but at least you can look at them yourself and decide if they’re reliable or not. An AI search engine has you just take what it says at face value, true or not.
I personally do not like the idea of AI-powered “search” engines, since AI has been known in the past to absolutely make stuff up and cite fake articles that don’t actually exist.
I don’t remember the exact article, but I do remember the story of either a lawyer or law professor (I can’t remember which) who asked an AI chatbot about himself and it came up citing a fake news article about him having sexual relations with a student of his (if I am remembering this all correctly).
Also, I prefer a traditional search where I’m given a ton of varying links to different web pages in a list, so I can open a link, and if I don’t find what I’m looking for, just close it and try another. Compare that to any time I’ve used the Perplexity chatbot, where at most, at the end of each response, I’m given a few links that may or may not contain the answer I’m looking for - if they’re even legitimate.
I think people are way too quick to dismiss AI on the basis that it’s not always factual. Searching for stuff and adding Reddit is a great way to get non-factual information as well. Everyone who has great insight into a subject knows how horrible many highly upvoted comments are.
Whether you use AI, Reddit, or Google, you have to do a quick analysis of how credible it seems. I use all three of them, but more and more AI for niche searches that are hard to get good results for.
I definitely think AI search engines are the next step. The way most people use Google is already a human-readable prompt, which GPT handles very well. We just need to improve the results and figure out a way for it to not steal content from, and suppress traffic to, the websites it draws on.