According to that research mentioned in the article, the answer is yes. The big caveats are:

- that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.
- that you need a level of “AI” that isn’t going to start hallucinating and instead reinforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.
It’s not even fundamentally possible with the current LLMs. It’s like saying “Yes, it’s totally possible to do that! We just need to invent something that can do that first!”
> that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.

You overestimate how hard it is to get a conspiracy theorist to click on something. I don’t know, it seems promising to me. I worry more that it can be used to sell things more nefarious than “climate change is real”.
> that you need a level of “AI” that isn’t going to start hallucinating and instead reinforce the subjects’ conspiracy beliefs. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.
They used a purpose-finetuned GPT-4 model for this study, and it didn’t go off script in that way once. I bet you could make it if you really tried, but if you’re doing adversarial prompting then you’re not the target for this thing anyway.
> According to that research mentioned in the article, the answer is yes. The big caveats are …

> It’s not even fundamentally possible with the current LLMs. It’s like saying “Yes, it’s totally possible to do that! We just need to invent something that can do that first!”
I think we agree on the limited capability of (what is currently passed off as) “artificial intelligence”, yes.