According to Wikipedia:
The goal of the C2PA is to define and establish an open, royalty-free industry standard that allows reliable statements about the provenance of digital content, such as its technical origin, its editing history or the identity of the publisher.
Has anyone explored this standard before? I’m curious about privacy implications, whether it’s a truly open standard, whether this will become mandatory (by law or because browsers refuse to display untagged images), and if they plan on preventing people from reverse engineering their camera to learn how to tag AI-generated photos as if they were real.
If their proposed solution is not completely open sourced and the source visible to everyone, then I have no interest in it.
They will make it open source, just tremendously complicated and expensive to comply with.
In general, if you see a group proposing regulations, it’s usually to cement their own position: e.g. OpenAI is a frontrunner in ML for the masses but doesn’t really have a technical edge over anyone else, therefore they run to Congress to say “please regulate us”.
Regulatory compliance is always expensive and difficult, which means it favors people that already have money and systems running right now.

There are so many ways this can be broken, intentionally or unintentionally. It’s also a great way to identify, e.g., government critics and shut them down (if you are Chinese and everything is uniquely tagged to you, would you write about Tiananmen Square?), or to get monopolies on (dis)information.
This is not literally trying to force everyone to get a license for producing creative or factual work, but it’s very close, since you can easily discriminate against any creative or factual sources you find unwanted.

In short, even if this is an absolutely flawless, perfect implementation of what they want to do, it will have catastrophic consequences.
I bet it won’t.
That’s a good bet! That means I’ll have zero interest in it and will not use it.
I got fed up with MS bullshit and moved to Linux. Replaced illustrator with Inkscape and Photoshop with Photopea until I can learn how to use GIMP’s unintuitive UI.
There is a learning curve with GIMP. Once you get past it, GIMP is great. It does about 90-95% of what Photoshop will do and that’s good enough for me. I’m fully on Linux as well. I run Arch and swear by it. I also like OpenBSD and FreeBSD.
I’m an amateur macro photographer and I love taking photos and doing light tweaking to make them more presentable for your average person, but I am definitely not going to spend hours upon hours learning to do the simplest things in GIMP and darktable that I can learn in PS and LR from a 10-minute video or less.
That being said, I also refuse to pay a goddamn subscription fee for something I used to own outright 20 years ago, especially considering it can’t stack or slab photos even 10% as well as Zerene or Helicon.
And even then I wouldn’t trust those companies.
It will not matter if it is open source when it is baked into the HW. You will be their removed anyways with no way to change it.
You will be their removed anyways with no way to change it.
Did you type removed or does some system in the fediverse automatically censor words?
I know blockchain is usually a solution in search of a problem, but is this one place where it may actually work? Take a hash of the image and store that hash in a chain; that way you can always hash the image and see if it’s been altered.
Many CSAM detecting services already use image hashing to compare to a central database.
https://www.thorn.org/blog/hashing-detect-child-sex-abuse-imagery/
What value does having a blockchain here provide, exactly?
Publicly traceable and verifiable hashes of an image’s authenticity. Submitting a hash of the image can prove who submitted it and when, and any alteration of the image would yield a different hash, so you’d know you’re not looking at the original image.
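The verification step itself doesn’t need a blockchain at all; it can be sketched with plain cryptographic hashing. The bytes and function name below are invented for illustration:

```python
import hashlib

def image_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest of the raw image bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical original image bytes (stand-in for a real file's contents).
original = b"\x89PNG...raw image bytes..."
recorded_hash = image_fingerprint(original)

# Any later copy can be checked against the recorded hash.
assert image_fingerprint(original) == recorded_hash

# Flipping even a single byte (an "edit") yields a completely different digest.
altered = original.replace(b"PNG", b"PNh")
assert image_fingerprint(altered) != recorded_hash
```

Whether the recorded hash lives on a chain or in a trusted database only changes who you have to trust about *when* and *by whom* it was submitted; the hash comparison itself is the same either way.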
How would that functionally differ from having an authority verify these hashes? Certificate authorities already provide a similar service, and C2PA would likely work in a similar way, sans any effort to implement “trustlessness.”
Fingerprinting is about to take a step forward
I glossed through some of the specifications, and it appears to be voluntary. In a way, it’s similar to signing git commits: you create an image and choose to give provenance to (sign) it. If someone else edits the image, they can choose to keep the record going by signing the change with their identity. Different images can also be combined, and that would be noted down and signed as well.
So, suppose I see some image that claims to be an advertisement for “the world’s cheapest car”, a literal rectangle of sheet metal and wooden wheels. I could then inspect the image to try and figure out if that’s a legitimate product by BestCars Ltd, or if someone was trolling/memeing. It turns out that the image was signed by LegitimateAdCompany, Inc and combined signed assets from BestCars, Ltd and StockPhotos, LLC. Seeing that all of those are legitimate businesses, the chain of provenance isn’t broken, and BestCars being known to work with LegitimateAdCompany, I can be fairly confident that it’s not a meme photo.
Now, with that being said…
It doesn’t preclude scummy camera or phone manufacturers from generating identities unique to their customers and/or hardware and signing photos without the user’s consent. Thankfully, at least, it seems like you can just strip away all the provenance data by copy-pasting the raw pixel data into a new image using a program that doesn’t support it (Paint?).
All bets are off if you publish or upload the photo first, though: a perceptual hash lookup could just link the image back to the original one that does contain provenance data.
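For what it’s worth, the chained-signature idea described above can be sketched in miniature. This is only a toy: real C2PA manifests use COSE signatures with X.509 certificates, not the shared-secret HMACs below, and every company name and key here is invented:

```python
import hashlib
import hmac

# Invented signers and secrets, purely for illustration.
KEYS = {"BestCars": b"bestcars-secret", "LegitimateAdCompany": b"adco-secret"}

def sign_step(actor: str, action: str, data: bytes, prev_sig: str = "") -> dict:
    """Append one provenance entry: the actor signs the content hash plus the
    previous signature, chaining the records together."""
    payload = hashlib.sha256(data).hexdigest() + prev_sig
    sig = hmac.new(KEYS[actor], payload.encode(), hashlib.sha256).hexdigest()
    return {"actor": actor, "action": action, "sig": sig}

def verify_chain(chain: list, versions: list) -> bool:
    """Re-derive every signature; a tampered image version or entry fails."""
    prev = ""
    for entry, data in zip(chain, versions):
        payload = hashlib.sha256(data).hexdigest() + prev
        expect = hmac.new(KEYS[entry["actor"]], payload.encode(),
                          hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expect, entry["sig"]):
            return False
        prev = entry["sig"]
    return True

source = b"raw photo bytes"
edited = b"raw photo bytes + color grading"
chain = [sign_step("BestCars", "captured", source)]
chain.append(sign_step("LegitimateAdCompany", "edited", edited, chain[0]["sig"]))

assert verify_chain(chain, [source, edited])           # intact chain verifies
assert not verify_chain(chain, [source, b"tampered"])  # tampering is detected
```

Each entry signs the current content hash plus the previous signature, so altering any earlier version (or reordering the chain) invalidates everything after it. What a toy like this can’t capture is the trust model: deciding which signers’ certificates you accept, which is exactly where the “exclusionary” worries in this thread come in.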
Shit smells like Google’s browser add-on Google tells you to install if you want to opt-out of Google’s tracking. Nice.
Cool, now I need to buy an AMD CPU.
Probably won’t help you all that much anyway…
This will de-anonymize memes.
A review by Fireship of this topic, showing a possible surveillance use:
https://odysee.com/@fireship:6/the-future-of-truth-on-the-internet:7
This video is also available on odysee if you don’t want shitty ads and your data stolen
I like the idea. I don’t like the logos involved with the idea.
What I don’t understand is why having every smartphone or DSLR sign every image captured couldn’t solve this problem better and faster than something like this.
From what I can tell, that’s basically what this is trying to do. Some company can sign a source image, then other companies can sign the changes made to the image. You can see that the image was created by so-and-so and then manipulated by so-and-other-so, and if you trust them both, you can trust the authenticity of the image.
It’s basically git commit signing for images, but with the exclusionary characteristics of certificate signing (for their proposed trust model, at least; it could be used more like PGP, too).
And I want world peace and a unicorn.
I mean, who wins, digital fingerprinting or r/faxofafax?