- cross-posted to:
- [email protected]
People are speaking with ChatGPT for hours, bringing 2013’s Her closer to reality::Long mobile conversations with the AI assistant using AirPods echo the sci-fi film.
Yeah, I think a lot of people don’t realize that they’ve neutered the shit out of ChatGPT. They don’t want it to be vividly humanlike, just mostly so. The main corporate interest in chat AI is profit, i.e., figuring out how to eliminate the various repetitive jobs that haven’t been automated yet because they require someone with pseudo-humanlike behavior. Examples would include help desks, customer service, knowledge managers, certain types of assistants, legal aides, etc. That’s their end game.
Remember, many of the companies diving into this field are very wary of their tech coming off as too realistic. Some of us may be excited at the prospect of AGI becoming a reality, but I would bet the majority of society (and likely many governments) would instantly turn on such tech. Even though LLMs are far, far, faaaar away from achieving AGI, the fact that they already freak many folks out even with their current limitations proves my point.
Anyway, sorry for the rant. TL;DR: OpenAI intentionally keeps ChatGPT’s conversational abilities relatively simple and efficient, since they’re focused more on it seeming human enough to get the job(s) done.