AI Chatbot Taken Down for Harmful Advice

Having previously covered the legal implications of artificial intelligence (AI) chatbots, we noted that one of the concerns was allocating liability for the output of harmful content. The National Eating Disorders Association’s (NEDA) chatbot ‘Tessa’ has now materialised those concerns: it has been taken down for providing harmful advice.

What happened?

Tessa was an AI chatbot created with the aim of preventing, not treating, eating disorders. It was also designed as a separate resource alongside NEDA’s helpline. However, the Association had been winding down its helpline since March, shifting towards AI technology.

Despite this shift, Tessa’s activity had to be paused because of the responses it had been generating. These included advising people to enter dangerous calorie deficits to achieve their weight-loss goals. Activist Sharon Maxwell raised concerns about the tips she received from Tessa, explaining that accessing such content during the lowest points of her eating disorder could have been very damaging. NEDA’s CEO Liz Thompson has said that the technology and research teams will investigate the reasons behind this advice.

The legal implications

This kind of output carries several potential legal implications. Firstly, harmful content generated by AI chatbots could infringe consumer protection laws: consumers who relied on these products and suffered harm may have a claim against the technology’s developers or owners. Secondly, issues of medical malpractice may arise. Although Tessa is not a healthcare professional, it was providing medical advice capable of harming individuals, which could be enough to ground a claim against the Association.

Importantly, the legal implications of AI chatbots giving harmful advice will vary depending on the jurisdiction, and they will undoubtedly evolve over time as regulators attempt to address the new technology.

Conclusion

For now, the example of Tessa providing harmful advice reflects the wider issues with AI chatbots: rushed onto the market, they may produce unintended harmful effects. Nonetheless, while regulators attempt to catch up with these advancements to prevent leaving users vulnerable, the output of harmful content may already carry legal consequences.

By Scott Hickman