New AI Tech Protects Young People From AI Predators 


Allowing kids to roam the AI-powered online world without AI-powered guardrails is like ‘bringing a knife to a gun fight’: you are going to lose. AI can be used for tremendous good or tremendous harm, depending on who is using it. One of the greatest evils AI can be put to is victimizing children.


Like all unregulated venues, the online space has always been fraught with predators. Now that artificial intelligence (AI) is available to anyone, would-be predators have a new and expansive set of digital tools to enable the targeting, grooming, and exploitation of children online.  


Challenges of Today’s Technology

Children venturing into the online space today are very technology-savvy – much savvier than most of us were when we first ventured into nondescript chat rooms in the late 1990s and early social media platforms in the early 2000s – but today’s sophisticated attacks require an equally sophisticated set of guardrails to protect society’s most precious resource: our children.

Many organizations, including the UN, focus on the potential for AI to help law enforcement but do not identify ways that AI can be used to target children. Other organizations focus on educating children and parents about the risks. Awareness, education, and AI-empowered law enforcement are just three early steps to protect our children. The next step is to actively protect our children in an AI-powered online environment.  

Under the veil of online anonymity, AI allows predators to scale their reach and tailor their approach. Predators can use deepfakes, social engineering bots, and targeted phishing schemes.  


AI Deepfakes - A Real Danger to Young People Today

A deepfake is AI-generated content (e.g., images, videos, or audio) that convincingly mimics real people. Often, the deepfake depicts a person known to the victim, be it a friend or even a celebrity. Deepfakes are so convincing that predators can use the technology to fabricate compromising material depicting things the victim knows they never did (see this example of deepfakes in which ordinary pictures of children were turned into nude images in a process known as ‘nudify’).


The social stakes of deepfake content are so high that many children (and adults) become double victims: a victim when a predator makes a deepfake of the child in a compromising situation (e.g., nude), and a victim again when the predator uses the deepfake to extort the child into even more compromising activity. One review found that over one third of all reported deepfake victims were minors.


Predators Using AI to Target Individuals

Besides deepfake technology, predators also use AI-powered social engineering bots to engage with individuals – adults and children alike – and trick them into revealing personal information or performing actions that benefit the predators. Predators use AI to make these bots mimic humans with very high realism. The online vectors for social engineering bots and deepfakes range from social media to online gaming venues.


Targeted phishing schemes use all the AI tools outlined so far but apply them to specific children. Rather than casting a wide net to see which children fall victim, targeting lets predators focus their efforts on just a few children, making their persistent advances even harder to avoid.


The many ways children are susceptible to online victimization are further complicated by the complexity of the online space. There are no standardized protocols for protecting children – many child-protection measures can be sidestepped by the children themselves simply by entering a birthdate showing they are 18 or over, as the sketch below illustrates.
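To see how flimsy that protection is, here is a minimal sketch in Python of the self-reported age gate many platforms rely on (the function name is hypothetical, not any platform’s actual code). The check trusts whatever birthdate the user types, so a child passes it by entering any date far enough in the past.

```python
from datetime import date

def self_reported_age_gate(birthdate: date, minimum_age: int = 18) -> bool:
    """Trusts the user-supplied birthdate -- there is no verification step."""
    today = date.today()
    # Compute age, subtracting a year if the birthday hasn't occurred yet this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= minimum_age

# A 12-year-old who simply types an adult birthdate sails through:
print(self_reported_age_gate(date(2000, 1, 1)))  # True
```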


Harnessing AI to Protect Our Children

Awareness and education are the first steps, but children are risk takers by nature. A multi-year report from Austria released in 2022 found that 90% of the 2,000 children surveyed had participated in “at least one form of cyber risk taking activity.”1 Because their online risk-taking behavior makes them even more vulnerable, we need approaches that protect all our children, even those who don’t want to be protected.


We need to harness AI to protect our children and take on AI-enabled predators with AI-powered guardrails. These guardrails can take many forms, but essentially they give children and their families (or teachers or other caring adults) the means to see a potential threat before it goes too far. This requires more than educating children, parents, and teachers: the guardrails perform a real-time assessment that opens pathways for intervention. That assessment can live in many places (e.g., native to the operating system, embedded in social or gaming platforms) but, regardless of the approach, it needs to be applied ubiquitously and paired with notification mechanisms so a caregiver can step in.
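As a rough illustration only, the Python sketch below shows the shape of such a real-time loop. The function and service names are hypothetical, and the toy keyword matcher merely stands in for a real AI risk model; a production guardrail would use a vetted classifier and platform-native alerting.

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    score: float  # 0.0 (benign) to 1.0 (high risk)
    reason: str

# Toy stand-in for an AI risk model: a few known grooming phrases.
GROOMING_CUES = ("don't tell your parents", "keep this secret", "send a photo")

def assess_message(text: str) -> RiskAssessment:
    """Score one incoming message; a real system would call an AI model here."""
    lowered = text.lower()
    for cue in GROOMING_CUES:
        if cue in lowered:
            return RiskAssessment(score=0.9, reason=f"matched cue: {cue!r}")
    return RiskAssessment(score=0.1, reason="no known cues")

def notify_caregiver(child_id: str, assessment: RiskAssessment) -> None:
    """Hypothetical notification hook; a real system might push to a parent app."""
    print(f"[ALERT] child={child_id} risk={assessment.score:.1f} ({assessment.reason})")

def guardrail(child_id: str, incoming_message: str, threshold: float = 0.8) -> None:
    """The real-time loop: assess every message, alert a caregiver when risk is high."""
    assessment = assess_message(incoming_message)
    if assessment.score >= threshold:
        notify_caregiver(child_id, assessment)

guardrail("child-42", "This is our secret, don't tell your parents.")
```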


We are in a technological arms race against predators who strive to exploit our children. The situation is further complicated by the diversity of children and families, the complexity of the many online environments children inhabit (e.g., learning sites, video sharing, gaming platforms, and social media), and the nuances of national and international law. Parents, schools, and organizations must provide the appropriate technological guardrails to protect our most vulnerable, most risk-taking, and very tech-savvy children.


Final Thoughts: As they say, ‘Don’t bring a knife to a gun fight’ – so don’t let your child roam the AI-powered online world without some AI guardrails of their own. Guardrail Technologies provides solutions to help protect the vulnerable from AI predators and improper AI use. You can start with the Guardrail AI Gateway, which gives GenAI users more controlled access to AI and limits many risk factors. Sign up for FREE today!



Dr. Michael McCarthy

Michael “Mike” McCarthy currently serves as a tenured Associate Professor at Utica University and founder of its Data Science program. He is passionate about teaching students the ethics and social responsibility required to be proper stewards of the data and models used throughout our world. He received his Bachelor of Science from the United States Military Academy at West Point and his Master’s and PhD from the University of North Carolina at Greensboro. Dr. McCarthy’s professional experience spans academia to governmental service, from big tech to startups. As a senior research scientist at Amazon developing global forecasts, and as a healthcare analyst conducting quantitative and qualitative analysis for the Veterans Administration, Mike sought to answer the difficult questions. Prior to graduate school, Michael served in the U.S. Army as an officer and pilot deployed to Iraq. Mike enjoys traveling, scuba diving, and hiking with his family.
