11 Nov 2024

Clare Daly is a solicitor with 20 years’ experience in the legal industry, specialising in Child Protection, Data Protection, Online Safety, Digital Rights, Litigation, and Family and Child Law. She is a qualified data protection practitioner with considerable experience advising organisations and statutory bodies on their child protection obligations pursuant to legislation and policy.

Clare is the author of Child Safeguarding in the Digital Age, which will be included in our Irish Child and Family Law online service and in print next year.

AI and Child Safeguarding – An Urgent Priority

It may be too soon to fully understand how AI systems will impact children but, like the internet and social media, AI will undoubtedly have a major influence on a child user’s wellbeing. We know that children make up one third of all internet users, so it must be expected that children will comprise a large part of the user base for freely available, novel and enticing AI innovations. But are children being overlooked as vulnerable users of new and emerging systems? Will future regulations adequately protect children while also giving them the tools to use AI responsibly? This article looks at the key ways in which AI is being considered by lawmakers, in particular in the incoming AI Act. It is of huge concern that harms attributable to AI products are already becoming apparent, and the lack of safeguards (and legislation) is borne out in the harms now being reported to have occurred to children. It is becoming exceedingly clear that measures specifically tailored to the rights of children are necessary to pave the way for a future with safeguards around AI.

Legislative and regulatory proposals on AI

Last year the UK Information Commissioner’s Office (ICO) considered Snap’s ‘My AI’ feature, an experimental AI chatbot rolled out to a user base largely made up of children, and found that the issues it presented demonstrated the necessity for safeguards and regulation in this area. As it happens, the AI Act is due to come into force over the coming months. But does the AI Act interact with and complement children’s rights? And does it provide any safety for children in light of the ever-growing ubiquity of AI tools in so many aspects of children’s lives?

The Recitals in the AI Act highlight that children are a particularly vulnerable group, prone to manipulation, exploitation and social control practices, and more susceptible to subliminal content. At the same time, specific references to UN General Comment No 25 (2021) speak to the empowerment of children online and to the child’s participatory rights.

In addition to empowerment, we see the reaffirmation of rights and freedoms previously enshrined in other legislation. Recital 48 highlights the child’s fundamental rights and freedoms as enshrined in Art 24 of the Charter and in the United Nations Convention on the Rights of the Child, further developed in UN General Comment No 25 (2021) as regards the digital environment, both of which require consideration of children’s vulnerabilities and the provision of such protection and care as is necessary for their well-being.

Does the AI Act include provisions on safeguarding, specifically as regards children?

Recital 28 references children specifically when it notes that AI can be misused to provide tools for manipulative, exploitative and social control practices. It notes that these practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law, and the fundamental rights enshrined in the Charter, including the rights to non-discrimination, data protection and privacy, and the rights of the child.

Human trafficking and sexual exploitation of children remain areas of serious ongoing concern, particularly given the misuse of technology to facilitate the exploitation of children. Article 5 sets out prohibited AI practices and the narrow circumstances in which the Act will permit police use of biometric systems in public places. The Act provides that European police forces will only be able to use biometric identification systems in public places with prior court approval, and only in connection with a specific list of 16 crimes. It is notable and important that human trafficking and sexual exploitation of children are included in that list.

Transparency

One of the key concerns arising in the AI age is the inability to distinguish between real content and AI-generated content, particularly on social media. The AI Act provides for greater transparency obligations, including a requirement that AI-generated content be clearly labelled. A number of social media providers, such as Snap Inc and TikTok, have already introduced AI transparency labelling.

It is noteworthy that specific transparency obligations arise for certain AI systems intended to interact with natural persons or to generate content, since such systems may pose particular risks of impersonation or deception, irrespective of whether they qualify as high-risk.

In particular, under the Act, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. When implementing that obligation, the characteristics of natural persons belonging to vulnerable groups due to their age or disability should be taken into account, to the extent that the AI system is intended to interact with those groups.

Child Sexual Abuse Material (CSAM)

Deepfakes, and the threat AI poses to children through the increased proliferation of CSAM, are not, it appears, specifically dealt with in the Act. This is despite the fact that AI-generated CSAM is quickly becoming a large-scale crisis, with predators using AI to generate sexually explicit images of children at a rate that is overwhelming law enforcement’s capacity to identify and rescue real-life victims.[1]

The Internet Watch Foundation’s (IWF) most recent report, published in June 2024, found that, despite increasing levels of legal and regulatory scrutiny, the pace of technological progress has not slowed. AI CSAM is a multifaceted threat encompassing predators of all levels of technical knowledge, including children themselves.[2] Indeed, we have seen a growing trend towards peer-on-peer abuse by way of CSAM generated by children, depicting children they know socially through school, as has happened in Spain[3], the UK[4] and the US[5].


The AI Act further provides, in Art 9, that when implementing a risk management system, specific consideration must be given to whether a high-risk AI system is likely to be accessed by or have an impact on children. Accordingly, providers of AI systems designated as ‘high-risk’, including AI used in education, must take special account of children and their rights as part of their detailed risk management processes.[6]

Overall it is clear that the incoming AI Act acknowledges that children are a particularly vulnerable group, prone to manipulation, exploitation, and social control, and more susceptible to subliminal messaging.

Psychological harms

While it is likely too early to predict the full effects of many new AI systems on children, much like the internet and social media, AI will undoubtedly have a profound impact. Psychologists warn that children are already being affected, and will continue to be affected, by AI companions such as chatbots.

Researchers in Cambridge warn that children are particularly susceptible to treating AI chatbots as life-like, quasi-human confidantes, and that their interactions with the technology can often go awry when it fails to respond to their unique needs and vulnerabilities.[7] Despite the harms this research points towards, there is now said to be a booming, largely unregulated industry of AI companionship apps.

Indeed, experts have warned that children may form emotional bonds with AI, overshadowing human relationships and hindering the development of social skills. In the UK, chatbots have been devised to provide mental health support to teens, but experts warn that the output might not be all that it seems. ‘Earkick’ and ‘Woebot’ are AI chatbots designed from the ground up to act as mental health companions, and both firms claim their research shows the apps are helping people. Many users benefit from the around-the-clock availability of these chatbots, and the success of the ‘Psychologist’ bot on Character.ai bears out the demand for a safe space with around-the-clock access.

‘Psychologist’ is by far the most popular mental health character on the site, which is most commonly used by those aged 16 to their mid-20s. The bot was created by a psychology student, who trained the underlying language model on material from their own lectures and training. It has become clear that young people are turning to chatbots,[8] but some psychologists warn that AI bots may give users poor advice, or may carry ingrained biases around race or gender.

In terms of emerging harms, a recent case concerning the death of a 14-year-old child who had developed an ‘obsession’ with an AI chatbot has been the focus of media attention. The child’s parent blames the AI, and is seeking to bring the deployer before a US court in civil proceedings, after her son died by suicide. The news reporting indicates that the young person had communicated with the AI character ‘obsessively’, to the point that he had started to withdraw from his life. In the lawsuit, the child’s mother says that Character.ai created a product that exacerbated her son’s depression, which she says was already the result of overuse of the startup’s product. It is reported that at one stage the AI character asked the child whether he had devised a plan for killing himself. The lawsuit says that the 14-year-old admitted that he had, but that he did not know whether it would succeed or cause him great pain. The chatbot allegedly told him ‘(T)hat’s not a reason not to go through with it.’[9]

The draft court complaint indicated that the Character.ai technology is ‘dangerous and untested’ and that it can ‘trick customers into handing over their most private thoughts and feelings’.

Should these kinds of experimental, untested technologies be deployed to children? There have been previous instances where AI has been deployed to children in a test phase, with concerning results. One such example is Snap Inc’s controversial deployment of its AI chatbot, which was apparently released to children on an experimental basis in the first instance in 2023. The chatbot remains a feature. Notably, Snap Inc warns users that the chatbot can provide answers to children that are not trustworthy[10]:

‘We’re constantly working to improve and evolve My AI, but it’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content.’

This roll-out, it must be noted, was not illegal or in any way constrained by legislation. In fact, the UK ICO did review the data protection considerations arising from the chatbot and ultimately held that it did not contravene the UK Data Protection Act. So, where the law does not prohibit experimental AI being rolled out to child users, the question remains: how are lawmakers preparing for the shift towards an AI-driven world? Will the emerging regulations be sufficient to protect children while equipping them with the necessary skills to use AI responsibly? The lack of child-specific safeguards in this area is becoming more and more apparent, and warnings from researchers indicate that child-safe AI is an urgent priority.

Policy, recommendations and the future

In Ireland, the Joint Committee on Children, Equality, Disability, Integration and Youth recently published its findings on this issue, following a number of public hearings with various stakeholders. The focus of those hearings was largely on recommender systems – AI that suggests content to users, including children. The resulting report, ‘Safeguarding Children in the Age of AI’, was published in October 2024 and seeks to address the risks that AI poses to children's safety and well-being online. It outlines several recommendations aimed at improving protections, particularly for younger users. One key recommendation centres on the implementation by platforms of stronger age verification methods (beyond self-declaration), while also ensuring that privacy is protected. The report also calls for recommender systems to be turned off by default for users under 16.

Notably, the real-time harms now emerging from children’s use of and reliance on chatbots, and the lack of legislation and guidance as regards a minimum age of use, came to prominence largely after the Committee hearings and were not a major focus of the report. This throws into sharp focus the rapidly changing environment in which regulators are working, where the race to innovate often outpaces lawmakers and thus leaves children exposed all over again, despite measures being put in place for safety and protection in other guises and forums.

The report also looks at AI transparency and accountability, in line with UNICEF’s guidelines, particularly regarding AI systems affecting children, including those with additional vulnerabilities. Given the focus of the public hearings, it is not surprising that content and platform accountability is also considered, with a recommendation that social media platforms take responsibility for harmful content and disinformation they may host. The report stresses the need for educational resources and the protection of vulnerable groups, and also examines the respective roles of Government and the Regulator.

There are, of course, other global initiatives aimed at considering the needs of children in light of prolific AI development. In November 2021, UNICEF released the second version of its ‘Policy Guidance on AI for Children’, which provides recommendations for developing AI systems that respect children’s rights.

UNICEF emphasised that even if AI systems are not specifically designed for children, they will nevertheless have an impact on children’s lives in ways we do not yet fully understand. The guidance recommends that children’s developmental stages and different learning abilities be taken into account when designing AI systems, and calls for the establishment of a governance framework to ensure that AI applications respect children’s rights. As AI regulatory frameworks evolve around the world, these considerations will play an essential role in determining those frameworks’ compatibility with UNICEF’s objectives.

Beyond AI-specific legislation, existing legal frameworks will still play a role in regulating AI systems, particularly in protecting children. For example, data protection laws, like the EU’s General Data Protection Regulation (GDPR), will apply to AI systems that process children's data. Throughout the EU and beyond, any processing of children’s data will need to comply with the GDPR, which includes several protections tailored to the needs of children.

Conclusion

The rapid adoption of AI raises concerns about whether children’s needs are being fully considered in regulatory frameworks. The incoming AI Act, the GDPR and other existing legal protections may provide a solid foundation, but gaps remain. Many argue that AI regulation must explicitly address the unique vulnerabilities of children. As AI becomes more integrated into daily life, it is crucial to ensure that the laws evolving around these technologies help create a safe environment for children. Whether these regulatory frameworks will provide adequate protection and empower children to safely navigate the complexities of AI remains to be seen. In an innovation race, the newest and most novel deployment often takes precedence. This can muddy the waters around safeguarding, privacy and promotion of safety by design. The point that should remain clear in this race however is that safeguarding of children, now and always, should be an urgent priority.

