Cyber
Offenders confused about ethics of AI child sex abuse
2024-02-17
"AI images are acting as a gateway"
[BBC] A charity that helps people worried about their own thoughts or behaviour says an increasing number of callers are feeling confused about the ethics of viewing AI child abuse imagery.

The Lucy Faithfull Foundation (LFF) says AI images are acting as a gateway.

The charity is warning that creating or viewing such images is still illegal even if the children are not real.

Neil, not his real name, contacted the helpline after being arrested for creating AI images.

The 43-year-old denied that he had any sexual attraction to children.

The IT worker, who used AI software to make his own indecent images of children using text prompts, said he would never view such images of real children because he is not attracted to them. He claimed simply to be fascinated by the technology.

He called the LFF to try to understand his thoughts, and call handlers reminded him that his actions are illegal, regardless of whether or not the children are real.

The charity says it has had similar calls from others who are expressing confusion.

Another caller got in touch after discovering that her 26-year-old partner had been viewing indecent AI images of children; he insisted it was not serious because the pictures "aren't real". He has since asked for help.

A teacher asked for the charity's advice because her 37-year-old partner was viewing images that seemed illegal, but neither of them was sure if they were.

The LFF's Donald Findlater says some callers to its confidential Stop It Now helpline think that AI images are blurring the boundaries for what is illegal and morally wrong.

"This is a dangerous view. Some offenders think this material is in some way OK to create or view because there are no children being harmed, but this is wrong," he says.

In some cases, real abuse images might also be wrongly labelled or advertised as AI-made, and as realism improves the difference is becoming harder to spot.

Mr Findlater says that deviant sexual fantasy is the strongest predictor of reoffending for anyone convicted of a sexual crime.

"If you feed that deviant fantasy, then you're making it more likely you're going to do harm to children," he said.

The charity says the number of callers citing AI images as a reason for their offending remains low, but is rising. The foundation is urging society to recognise the problem and lawmakers to reduce the ease with which child sexual abuse material (CSAM) is made and published online.
Posted by: Skidmark

#4  ^ Pinochet Helicopter Tours? They are the highest form of recidivists...and members of Teachers Unions (OK, maybe I made that last part up)
Posted by: Frank G   2024-02-17 17:59  

#3  Why would he think his story was believable?

Because this is what passes for high cunning in the nerd world.

The case does bring up some interesting questions though. Kiddie stuff is pervy, but is it a crime if you just think about it? I'm not sure I buy the 'gateway drug' theory, but what are you going to do with people that are mis-wired?
Posted by: SteveS   2024-02-17 15:36  

#2  “The IT worker, who used AI software to make his own indecent images of children using text prompts, said he would never view such images of real children because he is not attracted to them. He claimed simply to be fascinated by the technology.”

Why would he think his story was believable?
Posted by: Super Hose   2024-02-17 14:45  

#1  Completely unrelated...

Crossdressing teacher is placed on leave after outrage over video showing him at school wearing pink dress and cowgirl hat

Bet he could legally buy an AR.
Posted by: Skidmark   2024-02-17 07:42  