Key Facts:
- The AI system uses ten categories of social emotions to identify violations of social norms.
- The system has been tested on two large datasets of short texts, validating its models.
- This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
I’m extremely skeptical of medical diagnosis AIs. If a system can’t explain why it reaches a conclusion, how do we know it isn’t just latching onto spurious correlations? One example I heard of recently was an AI that was extremely good at detecting TB… based on the age of the machine that took the X-ray. It turns out places with older machines tend to be poorer, and poorer places tend to have more TB.
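To make that failure mode concrete, here’s a minimal sketch (my own illustration, not from the article, with entirely made-up numbers) of how a classifier can look “good at detecting TB” while only ever seeing the machine’s age:

```python
# Toy demo of shortcut learning: the model never sees the X-ray image,
# only a confounding metadata feature (machine age), yet still scores
# well because the confound correlates with the label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical confound: poorer sites have older machines AND more TB.
machine_age = rng.uniform(0, 30, n)        # years
p_tb = 0.05 + 0.015 * machine_age          # TB more likely where machines are old
has_tb = rng.random(n) < p_tb

X = machine_age.reshape(-1, 1)             # the ONLY feature: machine age
X_train, X_test, y_train, y_test = train_test_split(X, has_tb, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC from machine age alone: {auc:.2f}")  # well above 0.5 (chance)
```

The score looks respectable, but the model has encoded nothing about the disease, only the economics of where old machines end up. Deploy it on a rich hospital with an old machine and it falls apart.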
The only positive use I can think of is as a time-saver. A researcher can feed a study to ChatGPT and have it write a rough first draft of the abstract. A Game Master could ask it for inspiration for the next few sessions if they’re underprepared. An internet commenter could ask it for a third example of how it could save time.
But for anything serious, until it can explain why it reaches the conclusions it does, and can understand when a human says “no, you’re doing it wrong,” I can’t see it being a real force for good.