It’s only a problem if you expect them to do formal reasoning. They are fancy word predictors, useful when your output doesn’t need to be factually accurate. If you use them for things they’re not designed for, you’ll get bad results, but that’s your fault for using them the wrong way, not the LLMs’. You don’t use a screwdriver to bang in a nail and then say the screwdriver ‘has a HUGE problem’ when it does a bad job.
I think it is a problem. Maybe not for people like us, who understand the concept and its limitations, but “formal reasoning” is exactly how this technology is being pitched to the masses. “Take a picture of your homework and OpenAI will solve it”, “have it reply to your emails”, “have it write code for you”. All reasoning-heavy tasks.
On top of that, Google/Bing have it answering user questions directly, it’s commonly pitched as a “tutor” or an “assistant”, the OpenAI API is being shoved everywhere under the sun for every task you can imagine, and nobody is attempting to clarify its weaknesses in their marketing.
As it becomes more and more common, more and more users who don’t understand that it’s fundamentally incapable of reliably doing these things will crop up.
The problem is that laymen expect it to do reasoning, so the sales & marketing team says it can do reasoning, and then the CEO drinks the Kool-Aid and restructures the company because he believes it can do reasoning.
Right, I find LLMs are fundamentally no different from Markov chains. That doesn’t mean they’re not useful; they’re a tool that’s good for certain use cases. Unfortunately, we’re in a hype phase right now where people are trying to apply them to a lot of cases they’re terrible at, and where better tools already exist to boot.
they aren’t. The only difference is that the state transition table is so unimaginably gargantuan that we can only generate an approximation of a tiny slice of it, instead of it being literally a table.
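To make the comparison concrete, here’s a toy word-level Markov chain with an explicit transition table. This is only an illustrative sketch (the corpus and names are made up); the point is that an LLM never materializes a table like this, it learns a compressed approximation of one over contexts far longer than a single word:

```python
import random
from collections import defaultdict

# Toy corpus -- purely illustrative, standing in for "the training data".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Explicit state transition table: current word -> list of observed next words.
# Sampling uniformly from the list reproduces the empirical next-word distribution.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Walk the transition table, sampling a next word at each step."""
    word = start
    output = [word]
    for _ in range(length):
        candidates = transitions.get(word)
        if not candidates:
            break  # dead end: no observed continuation for this word
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

For a real vocabulary and a context window of thousands of tokens, the table would have more entries than atoms in the universe, which is why it can only ever be approximated by a learned model rather than stored.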
exactly