The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.
My reason is that you can’t trust the answers regardless. Hallucinations are a rampant problem. Even if we managed to cut it down to one in every 100 queries hallucinating, you still can’t trust ANYTHING. We’ve seen well-trained, narrowly targeted AIs that don’t directly take user input (so they can’t easily be manipulated) recommending in Google search results that people put glue on their pizza to make the cheese stick better… or claiming that geologists recommend eating a rock a day.
If a custom-tailored AI can’t cut it… the general-purpose ones aren’t going to be all that valuable without significant external validation/moderation.
There is also the argument that the downpour of AI-generated slop is making the Internet in general less usable, hurting everyone (except the slop makers) by making true or genuine information harder to find and verify.