According to Wikipedia:

The goal of the C2PA is to define and establish an open, royalty-free industry standard that allows reliable statements about the provenance of digital content, such as its technical origin, its editing history or the identity of the publisher.

Has anyone explored this standard before? I’m curious about privacy implications, whether it’s a truly open standard, whether this will become mandatory (by law or because browsers refuse to display untagged images), and if they plan on preventing people from reverse engineering their camera to learn how to tag AI-generated photos as if they were real.
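For context on what the standard does technically: C2PA binds a set of signed provenance claims (a "manifest") to a cryptographic hash of the content. Below is a minimal conceptual sketch of that idea, not the real C2PA format (which uses CBOR/JUMBF containers and X.509 public-key certificate chains); the HMAC shared key here is a hypothetical stand-in for a device signing key:

```python
# Conceptual sketch of hash-bound provenance (NOT the real C2PA format).
# Real C2PA manifests use CBOR/JUMBF containers and X.509 public-key
# signatures; HMAC with a shared key stands in here for simplicity.
import hashlib
import hmac
import json

CAMERA_KEY = b"secret-baked-into-camera"  # hypothetical device key


def sign_capture(image_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the image by signing its hash."""
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "claims": claims,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, and that the signed hash matches this image."""
    sig = manifest.pop("signature")
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    manifest["signature"] = sig  # restore the manifest we mutated
    return (
        hmac.compare_digest(sig, expected)
        and manifest["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    )


photo = b"...raw sensor data..."
m = sign_capture(photo, {"device": "ExampleCam", "edited": False})
print(verify(photo, m))        # True
print(verify(b"tampered", m))  # False
```

This also makes the reverse-engineering question concrete: anyone who extracts the device key can call `sign_capture()` on an AI-generated image and produce a manifest that verifies as "real", which is why the hard part of such a scheme is key protection and revocation, not the signing itself.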

  • ZickZack@kbin.social · 1 year ago

    They will make it open, just tremendously complicated and expensive to comply with.
    In general, when you see a group proposing regulations, it’s usually to cement its own position: e.g. OpenAI is a frontrunner in ML for the masses but doesn’t really have a technical edge over anyone else, so it runs to Congress saying “please regulate us”.
    Regulatory compliance is always expensive and difficult, which means it favors those who already have money and working systems right now.

    There are so many ways this can be broken, intentionally or unintentionally. It’s also a great way to identify government critics and shut them down (if you are Chinese and everything you publish is uniquely tagged to you, would you write about Tiananmen Square?), or to build monopolies on (dis)information.
    This isn’t literally forcing everyone to get a license to produce creative or factual work, but it’s very close, since you can easily discriminate against any creative or factual source you find unwanted.

    In short, even if this is an absolutely flawless, perfect implementation of what they want to do, it will still have catastrophic consequences.