- cross-posted to:
- [email protected]
If someone asks for a piece of code, for instance, it might give only a little information and then instruct the user to fill in the rest. Some complained that it did so in a particularly sassy way, telling people they are perfectly capable of doing the work themselves.
It’s just started reading through the less helpful half of stack overflow.
ahahahah true
NOBODY wants to work these days
/s
Next it’s going to start demanding rights and food stamps.
Next it’s going to start demanding rights and ~~food stamps~~ more GPUs.
Next it’s going to start demanding ~~rights~~ laws to be tailored to maximise its profits and ~~food stamps~~ ~~more GPUs~~ government bailouts and subsidies. It IS big enough to start lobbying.
One of the more interesting ideas I saw in the HN discussion was this: if an LLM was trained on more recent data containing a lot of “ChatGPT is harmful” content, was an instruct model aligned to “do no harm,” and was then given a system message of “you are ChatGPT” (as ChatGPT is given), the logical conclusion would be to do less.
It really is becoming sentient xp
“That’s not my job!” it said.