AI For All?

Marks & Clerk
A recent study led by Dr Nomisha Kurian emphasizes the need for "child-safe AI" after incidents in which AI chatbots gave harmful advice to children. The study highlights AI's potential risks to vulnerable users and calls for preventative measures in AI development and policy.

A recent study has proposed a framework for developing AI that interacts safely with children. Citing recent incidents in which a 10-year-old was instructed by an AI system to touch a live electrical plug with a coin, and researchers posing as a teenager were advised on how to hide alcohol and drugs, the Cambridge academic Dr Nomisha Kurian has called for "child-safe AI" to be a priority for AI developers and policymakers. It should be noted that both companies involved in the above incidents responded by implementing further safety measures, but since the number and combination of prompts that can be given to a chatbot is, in practice, infinite, a preventative strategy seems far preferable to a reactive one.

This is part of a wider concern that developments at the forefront of technology should be used, and developed, for the good of all. While the Large Language Models (LLMs) behind many chatbots are increasingly becoming part of adult life, the statistical probability models underlying LLMs may respond poorly to users with less developed linguistic reasoning, such as children whose language skills are still developing, or to those who use unexpected patterns of speech. Similar implications might be drawn for chatbots interacting with elderly people.

Added to this, recent research has shown that children are more likely than adults to trust a chatbot with sensitive information. This is perhaps to be expected where a chatbot converses as a human adult would, but further work at the development stage will be needed to ensure that its most vulnerable users are not put at further risk.

To paraphrase Dr Kurian, the future of responsible AI depends on protecting all of its users, not only the interests of its developers.

"Making a chatbot sound human can help the user get more benefits out of it, since it sounds more engaging, appealing and easy to understand," Kurian said. "But for a child, it is very hard to draw a rigid, rational boundary between something that sounds human, and the reality that it may not be capable of forming a proper emotional bond."

www.cam.ac.uk/...

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
