OpenAI just admitted it can’t identify AI-generated text. That’s bad for the internet and it could be really bad for AI models.
In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.
There is one way OpenAI could be nearly 100% sure whether a piece of text was written by, or with the help of, ChatGPT: compare the text against every conversation ChatGPT has ever had. (Not saying it’s a good idea.)
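A minimal sketch of what that lookup could look like, assuming access to a hypothetical `stored_outputs` corpus standing in for OpenAI’s conversation logs: fingerprint each logged response with word n-gram shingles, then measure how much of a new document overlaps with the index.

```python
# Sketch of the "compare against every conversation" idea.
# `stored_outputs` is a placeholder for OpenAI's conversation logs.

def shingles(text: str, n: int = 8) -> set[str]:
    """Return the set of word n-gram shingles in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def build_index(stored_outputs: list[str]) -> set[str]:
    """Index every shingle from every logged ChatGPT output."""
    index: set[str] = set()
    for output in stored_outputs:
        index |= shingles(output)
    return index

def overlap_ratio(document: str, index: set[str]) -> float:
    """Fraction of the document's shingles that appear in the index."""
    doc_shingles = shingles(document)
    if not doc_shingles:
        return 0.0
    return len(doc_shingles & index) / len(doc_shingles)

# A high ratio would suggest the document was copied (or lightly edited)
# from a logged ChatGPT response.
```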
Nope. You’d just ask ChatGPT to generate the conversation with emojis instead of spaces and swap the emojis back afterward.
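A toy illustration of that evasion: if the model emits a delimiter instead of spaces, the text OpenAI logs and the text you publish never match byte-for-byte, so an exact-match lookup misses it.

```python
# Ask the model to use an emoji as the word separator, then swap it back.
logged_output = "The🟦quick🟦brown🟦fox🟦jumps🟦over🟦the🟦lazy🟦dog"
published_text = logged_output.replace("🟦", " ")

print(published_text)                    # "The quick brown fox jumps over the lazy dog"
print(published_text == logged_output)   # False: exact comparison against the log fails
```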
There are a million variations of this approach, AND it would push people towards Apple, who will launch an on-device LLM in the next 12 months.
In a year the technology will run locally on any computer; it’s time to give up on the fantasy that this can be detected or controlled. Today you can run a GPT-3.5-class model with 30B parameters on a consumer GPU at home that, with the right fine-tuning, will reach ChatGPT-level performance.
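For context, a rough sketch of what “runs locally on a consumer GPU” looks like, using llama-cpp-python with a quantized checkpoint; the model file name here is a placeholder for any GGUF-format model you have downloaded.

```python
from llama_cpp import Llama

# Hypothetical local file; any quantized ~30B GGUF checkpoint works the same way.
llm = Llama(
    model_path="models/open-30b-instruct.Q4_K_M.gguf",
    n_gpu_layers=-1,   # offload all layers to the GPU if it has enough VRAM
    n_ctx=4096,        # context window
)

result = llm(
    "Explain why watermarking generated text is hard to enforce.",
    max_tokens=256,
)
print(result["choices"][0]["text"])
```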
Just let the idea go, it doesn’t work.