Study shows AI image-generators being trained on explicit photos of children

Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 12 points · 1 year ago

    All of our protect-the-children legislation is typically about inhibiting technology that might be used to cause harm, not about ensuring children have access to places of safety, adequate food and comfort, time with and access to parents, and the freedom to live and play.

    Y’know, all those things that help make kids resilient to bullies and the challenges of growing up. Once again, we leave our kids cold and hungry in poverty while blaming the next new thing for their misery.

    So I call shenanigans. Again.

    • EatYouWell@lemmy.world · 4 points · 1 year ago

      It’s still abhorrent, but if AI-generated images prevent an actual child from being abused…

      It’s a nuanced topic for sure.

      • there1snospoon@ttrpg.network · 4 points · 1 year ago

        We need to better understand what causes pedophilic tendencies, so that the environmental, social and genetic factors can someday be removed.

        Otherwise children will always be at risk from people with perverse intentions, whether or not those people are responsible for having those intentions.

        • EatYouWell@lemmy.world · 1 point · 1 year ago

          I don’t think it’ll ever be gotten rid of. At its core, pedophilia is a fetish, not functionally different from being into feet. And like some fetishes, having it doesn’t mean a person will ever act on it.

          I’m sure that many of them hate the fact that they are wired wrong. What really needs to happen is for them to have the ability to seek professional help without worrying about legal repercussions.

  • SSUPII@sopuli.xyz · 2 points · 1 year ago

    That is bound to happen if what’s been used is images from the open web. What’s the news?

  • AutoTL;DR@lemmings.world [bot] · 1 point · 1 year ago

    This is the best summary I could come up with:


    Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.

    Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.

    Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.

    It’s not an easy problem to fix, and traces back to many generative AI projects being “effectively rushed to market” and made widely accessible because the field is so competitive, said Stanford Internet Observatory’s chief technologist David Thiel, who authored the report.

    LAION was the brainchild of a German researcher and teacher, Christoph Schuhmann, who told the AP earlier this year that part of the reason to make such a huge visual database publicly accessible was to ensure that the future of AI development isn’t controlled by a handful of powerful companies.

    Google built its text-to-image Imagen model based on a LAION dataset but decided against making it public in 2022 after an audit of the database “uncovered a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes.”


    The original article contains 1,221 words, the summary contains 256 words. Saved 79%. I’m a bot and I’m open source!

  • cyd@lemmy.world · 0 upvotes, 1 downvote · 1 year ago

    3200 images is 0.001% of the dataset in question, obviously sucked in by mistake. The problematic images ought to be removed from the dataset, but this does not “contaminate” models trained on the dataset in any plausible way.