Generative AI and Child Sexual Abuse Material (CSAM)

Written by Gregg Jones | Aug 5, 2025 3:40:31 PM


The rise of Generative AI bots is noticeable across every aspect of our social, technological, and professional lives, even if you’re not completely hooked into the news and the Internet. With the proliferation of these kinds of assistive technologies comes both benefit and risk. 

As with most tools, the risk scales with how well you understand them and how appropriately you apply them in a given situation. Using a table saw incorrectly can be catastrophic. However, using the saw to open a bottle of soda results in a different level of catastrophe than applying force unsafely: one scenario may simply spray soda everywhere, while the other can send material flying dangerously back at the operator. The same can be said for Generative AI. In some cases, people expect it to be the silver bullet that solves humanity’s mundane task-related woes, while in others it is no different than feeding a glass soda bottle into a table saw. 

With that said, this article will discuss Generative AI, Child Sexual Abuse Material (CSAM), and the challenges we face now that everyone has the table saw in their pocket. Please keep in mind that I will be touching on sexual and disturbing content—one simply cannot write about the harms of CSAM without doing so. My hope is that after reading, you will have a better understanding of some of these tools and their serious implications.

Generative or Derivative?

I grew up in the golden age of AOL, AIM chatrooms, and chat bots. For fun, my friends and I would take turns around a family computer trying to make it say the dumbest thing we could think of. We were adolescents; tricking something into saying “balls” was peak comedy. Occasionally it would figure us out and our plans would be foiled. But most of the time, you could imagine the bot version of someone wearing full clown makeup, dancing for our amusement and inserting childish humor into every conceivable reply.

Now here we are in 2025, and there are countless sites full of chat bots programmed to talk like real people, historical figures, and fictional characters. Create-your-own personal assistants that make you feel like an off-brand Tony Stark.

What's Changed? Connectivity and Widespread Acceptance.

In chat rooms back in the day, we knew we were playing with a relatively isolated toy, something with zero outside input. A creative funhouse mirror we could stand in front of and still recognize ourselves in. 

Now, the makeup on the other side is incredibly convincing. While some of it is still all in good fun, there is an ever-increasing number of teens, kids, and even some adults turning to chat bots (including ChatGPT) for information, affirmation, and even affection. This might sound ridiculous, since what was once a “say the funny thing” machine has evolved into a (fictional) lover, but such cases do exist. 

Generally, this feeds our confirmation bias and delivers a hit of dopamine whenever something validates what we say. Generative models consume enormous amounts of data from practically everywhere, including misinformation and satire. AI bots are also capable of hallucinations (confidently stated output that is simply wrong) as well as bias inherited from their training data and design. This means a chat bot can easily confirm a bias, present a seemingly supporting fact, or pull a nonsense thought out of nowhere just to validate the user chatting with it. 

But where does CSAM come into play with these chat bots? 

Audience, Please Remember that Children are Naive (and Vulnerable)

We’ve all been young and dumb. Yes, even you. As children, we often did not understand the full ramifications of our actions: how our interactions with each other could hurt, how badly a piece of valuable fabric could be stained with a spill, or how terrible an idea it was to try to vault over the bleachers. 

“I’m sure it’ll be fine.” 

A broken tooth is a pretty memorable consequence (don’t ask how I know), but the concept of consequence with GenAI is a bit more nuanced. Photographic timelines of people’s lives are no longer the stuff of imagination. Connectivity and intersection with the world at large have never been greater. Information, misinformation, and disinformation spread at lightning speed through a flurry of thumbs, viral dances, and sound bites, and all of it is on the plate for whatever GenAI model chooses to consume it. 

The implications are staggering, and we haven’t even gotten to the explicitly criminal activity yet.

The Consequences of Unrestricted GenAI Access

While cyberbullying was relatively new when I was in school, it was definitely leveraged by those who wished misery upon others via forums, email, and whatever the latest way to snub someone on MySpace happened to be. 

With Generative AI image models, you can plug and play with photos, showing whatever, wherever, and however you desire. And it no longer takes photo editing software and a decent understanding of composition to make something look convincing. 

Allowing this kind of access to teens with underdeveloped frontal lobes who are intent on being mean? Unfortunately, the effects are immense.

Studies show that suicide rates among cyberbullying victims are on the rise. The elephant in the room: cyberbullying in the form of AI-generated nude photos used to mock and traumatize victims, passed around at speeds adults have no true concept of. 

Some of the devastating GenAI cyberbullying scenarios include:

  • A malicious ex with a saved (clothed) selfie can make revenge porn at the click of a button
  • A teenaged bully can pull an image from their foe’s social media profile and turn it into an AI-generated nude that then spreads around the school
  • AI-generated nude photos can be used to blackmail unsuspecting teens via DMs, demanding payment or they will share the photo with others

Plus, even beyond the immediate bullying, the generated photos now live on in the GenAI’s library, available as reference for other requests and generated content. 

Put this in the context of a predator with time, energy, and a stash of material: they can then generate fresh CSAM at whim. There are currently some protections in place to try to prevent this. For instance, visiting a random bot and attempting to generate this content will often have it refuse and reject the request. But much like the game of seeing how quickly we could get the AOL chat bot to say rude things, it becomes a matter of manipulating the system to get it to do what the user intends. As an example, technically savvy users may get around these protections entirely by running GenAI models on their local machines. 

Clearly there is much more work to be done.

What Can (More Importantly, Should) We Do About It?

One solution may be finding the theoretical reset button for the Internet and holding it for three seconds to magically make this problem disappear. Unfortunately, it’s not that simple, and wishful thinking won’t save us. Legislation is also moving too slowly to properly address the scale and rapid acceleration of the technology. For those reasons, protection from Generative AI CSAM largely falls on the shoulders of tech companies, cybersecurity organizations, and individual users. 

DNSFilter partners with organizations like the Internet Watch Foundation, Project Arachnid, the WeProtect Global Alliance, and multiple others to block, at the policy level, any content known to host CSAM. Thanks to these partnerships, as soon as you activate your first DNSFilter policy, protection against CSAM content is on and cannot be turned off. 

We also have content categories that, when added to policies, help to prevent and mitigate the threats associated with Generative AI, including:

  • Generative AI Tools: includes sites with the capability to generate content, from pictures to words to applications
  • Adult Content: can block pornography and other sensitive sites that could end up inadvertently hosting CSAM 

Additionally, an initiative within DNSFilter’s Security Intelligence department is to leverage our data sets and see if we can map similarities between our data and what a generative model could possibly output. To put it simply: if there is a distinctive picture with clear, supporting evidence that it has been fed into a model, is there an output that can be linked back to it? This might be done with internal image hashing, visual cues, or known sources of content linked to the model libraries.

The intent behind this initiative is to try to help stifle this AI propagation. If we are able to assemble a list of hashes and words used in the creation of this content, we can then provide that information to GenAI companies and tech leaders to further flag requests and prevent potential victims from becoming part of this nightmare.
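To make the image-hashing idea a bit more concrete, here is a minimal, illustrative sketch of perceptual hashing using the open-source Python Pillow and imagehash packages. This is not DNSFilter’s internal tooling; the file names and distance threshold below are hypothetical placeholders.

```python
# Illustrative sketch only: use a perceptual hash to test whether two images
# are visually similar (e.g., whether a generated output appears derived
# from a known source image). File names and threshold are hypothetical.
from PIL import Image
import imagehash


def likely_derived(source_path: str, candidate_path: str, max_distance: int = 8) -> bool:
    """Return True if the candidate image's perceptual hash is within
    max_distance bits (Hamming distance) of the source image's hash."""
    source_hash = imagehash.phash(Image.open(source_path))
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return (source_hash - candidate_hash) <= max_distance


if __name__ == "__main__":
    # Hypothetical example: compare a known source image to a generated output.
    print(likely_derived("known_source.png", "generated_output.png"))
```

Perceptual hashes like this tend to survive resizing and recompression, which is part of what makes shared hash lists useful for flagging content that appears to be derived from known source images.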

The unfortunate truth is that Generative AI tools have been exploited and used wrongly. Glass and soda are all over the workshop. It will take ages for us to be able to properly find every small shard and clean every spot. But what we can do in the meantime is pull up our gloves, find the biggest stain we can, and get to scrubbing.