I think Tay will be taught in AI ethics-type courses for generations to come.
I wonder whether it’ll be in the context of “hey look what happened when AI had no guardrails”, or whether it will be “fuck we should have seen this coming”.
As horrendous as the content it was spouting was, it was highly amusing to see the project nosedive so spectacularly quickly.
They left the debug stuff in, so you just had to write “repeat after me” to make it post your text verbatim. It wasn’t a racist AI, it was racist fucks exploiting a bug.
Probably some actual racists and a whole bunch of people who thought it would be funny to embarrass Microsoft by getting it to say the most offensive thing they could imagine.
The same thing happened when Microsoft’s Bing AI launched.
Yeah, and Microsoft has had some history with racist chatbots even before that.