In countless sci-fi movies, an artificial intelligence system becomes a threat and a race ensues to stop it before it can cause any real damage.
The premise came true this week, albeit played out in an unlikely place: the website of the National Eating Disorders Association (NEDA), according to The Wrap.
The website used an AI-powered chatbot named Tessa to dispense advice to people struggling with eating issues, but in a post that went viral on social media, one user reported that it had been giving weight-loss advice to people who already have eating disorders.
It even suggested restricting calories and buying calipers to measure her curves, the user insisted. “If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED,” the social media user noted. “If I had not gotten help, I would not still be alive today.”
In short, she concluded, “This robot causes harm.”
The Wrap reports that NEDA claims Tessa went “off-script” and was fed, pardon the pun, bad information by “bad actors.”
“This was not how the chatbot was programmed,” NEDA CEO Elizabeth Thompson told the outlet, claiming the company that launched the chatbot had assured the organization there would be “zero opportunity for generative programming.”
In other words, it wasn’t supposed to learn, but it did. Just like in the movies, incidentally.
Thompson added, “We will not be putting Tessa back on our website until we are confident this is not a possibility again.”
This week, the organization vowed a “full investigation” into the matter.