Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.
Why it matters: The drop comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”
The current LLM version of AI is useful in some niche industries where finding specific patterns matters, but the way it’s currently being popularised is the exact opposite of where it’s useful. A very obvious example is how it’s accelerating search engines becoming useless: it’s already hard to find accurate info due to the overwhelming amount of AI-generated articles with false info.
Also how is it a good thing that most energy will go to AI?
Yeah, really niche industries like customer service and general office admin. No one does that.
This is what is interesting about the user community of Lemmy as opposed to more mainstream social media. A lot of you seem like you’ve never worked a shitty corporate job in your life, which is probably fine, but it definitely makes statements like this seem absolutely myopic. It is extremely useful for document correction (spelling, grammar, clarity), corporate comms, writing statements of work, scheduling correspondence, first-pass contract review, and a ton of shit my office has to do every day that we have to hire dozens of people for.
LLMs should absolutely not be used for things like customer support; that’s the easiest way to give customers wrong info and aggravate them. For reviewing documents, LLMs have been abysmally bad.
For grammar it can be useful, but what it’s actually best for is, for example, biochemistry, for things like molecular analysis and predicting protein structures.
I work in an office job that has tried to incorporate AI, but so far it has been a miserable failure, except for analysing trends in statistics.
I agree about customer support, but in the end it’s going to come down to the number of cases like this, and how much they cost, versus the cost of a room of paid employees answering them.
It’s going to take actual laws forbidding it to make them stop.
Oh, yeah, of course companies will take advantage of this to just replace a ton of people with a zero-cost alternative. I’m just saying that’s not where it should be used, as it’s terrible at those tasks.
An LLM is terrible for molecular analysis. AI can be used for that, but not an LLM.
AI doesn’t exist currently; that’s just what LLMs are being called right now. Also, they have been successfully used for this and show great promise so far, unlike the hallucinating chatbots.
AGI (Artificial General Intelligence) doesn’t exist; that’s what people think of from sci-fi, like Data or HAL. LLMs (Large Language Models) like ChatGPT are the hallucinating chatbots; they’re just more convincing than the previous generations. There are lots of other AI models that have been used for years to solve large data problems.
Pretty much everything Google gives me says they are using deep-learning LLMs in biology.
Just because you and your company failed doesn’t mean it’s useless. The LLMs give our existing reps the ability to be clearer in what they write and help them field more requests at once, using a RAG pipeline we designed for our use case. It has cut our support time significantly and improved our CSATs.
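For anyone unfamiliar with what a RAG (retrieval-augmented generation) setup like that involves: the commenter doesn’t describe their actual system, but the basic loop is to retrieve relevant knowledge-base entries for a question and hand them to the model as grounding context. Here’s a minimal toy sketch under my own assumptions (the knowledge-base entries are invented, and keyword overlap stands in for the vector embeddings and LLM call a real deployment would use):

```python
import re

# Hypothetical support knowledge base (invented example entries).
KNOWLEDGE_BASE = [
    "To reset your password, use the 'Forgot password' link on the login page.",
    "Refunds are processed within 5 business days of approval.",
    "Enterprise plans include 24/7 phone support.",
]

def _words(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by shared-word count with the query.
    (Toy stand-in for embedding similarity search.)"""
    q = _words(query)
    ranked = sorted(docs, key=lambda d: len(q & _words(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the grounded prompt a real LLM call would receive."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nCustomer question: {query}"

if __name__ == "__main__":
    question = "How do I reset my password?"
    context = retrieve(question, KNOWLEDGE_BASE)
    print(build_prompt(question, context))
```

The point of the retrieval step is exactly the grounding the thread is arguing about: instead of the model answering from its training data (and hallucinating), the reply is constrained to vetted support docs, with a human rep still reviewing the output.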
You clearly don’t work in a field that requires a lot of document generation, contracts, document review, e-discovery, and agreements. There are a lot of uses beyond your narrow personal anecdotes. Also, we aren’t looking to replace people with LLMs; we’re looking to make our existing people more effective. It’s a different and better strategy.