Google Search will soon label images in image search results as AI-generated, edited with photo-editing software, or taken with a camera. The label will be added to the "About this image" feature, according to The Verge, which spoke with Laurie Richardson, vice president of trust and safety at Google.
No, but it seems like you’re assuming they would look at this sandboxed by itself…? Of course there is more than one data point to look at: when you uploaded the image would be noted, so even if you uploaded an image with older EXIF data, so what? The original poster would still have the original image, and the original image would have been scraped and documented when it was hosted. If you host the image with fake data later, the system compares the two, sees that your fake one was posted six months later, and flags it like it should. And the original owner can claim authenticity.
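The comparison described above can be sketched roughly like this. The index structure and function names here are hypothetical, purely for illustration; a real system would match copies with perceptual hashing across scraped images rather than an exact-key lookup:

```python
from datetime import datetime, timezone

# Hypothetical first-seen index: image fingerprint -> earliest crawl time.
# In practice this would be a perceptual-hash lookup over scraped copies.
FIRST_SEEN = {
    "img-abc123": datetime(2024, 1, 10, tzinfo=timezone.utc),
}

def flag_suspect_copy(fingerprint: str, upload_time: datetime) -> bool:
    """Flag an upload of an image that was already indexed earlier
    elsewhere: the earlier copy wins, regardless of claimed EXIF dates."""
    original_seen = FIRST_SEEN.get(fingerprint)
    return original_seen is not None and upload_time > original_seen

# A copy uploaded six months after the original was first crawled gets
# flagged, even if its (easily forged) EXIF data claims an older date.
later = datetime(2024, 7, 10, tzinfo=timezone.utc)
print(flag_suspect_copy("img-abc123", later))  # True
```

The point of the sketch is that the crawl timestamp is recorded by a third party, so backdating your own EXIF fields doesn't help.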
Metadata provides a trail and can be combined with other data points to show authenticity when a bad actor lays claim to your image.
You are apparently assuming they'd look at a single image's EXIF data to determine… what? Obviously they would use every image that looks similar or matches identically and use the EXIF data to find the real one, along with the other methods mentioned.
The only attack vector is newly created images that haven't been digitally signed; anything digitally signed can be verified as new, unless you go to extreme lengths to fake an image and then somehow recapture it with a digitally signing camera without it being detected as fake by other methods….
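A minimal sketch of why signing at capture time closes that vector: any edit to the pixels after capture invalidates the signature. Real provenance systems (e.g. C2PA Content Credentials) use asymmetric keys so verifiers never hold the secret; the HMAC here is just a self-contained stand-in for illustration:

```python
import hashlib
import hmac

# Stand-in for a camera's signing key. Real capture-signing schemes use
# public-key signatures, not a shared secret like this.
CAMERA_KEY = b"demo-camera-secret"

def sign_capture(image_bytes: bytes) -> bytes:
    """What a signing camera would do at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Any pixel-level edit after capture breaks verification."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw sensor data"
sig = sign_capture(original)
print(verify_capture(original, sig))            # True: untouched capture
print(verify_capture(original + b"edit", sig))  # False: modified after capture
```

So the "recapture" attack the comment describes would require photographing a fake with a trusted signing camera, at which point the other detection methods mentioned above still apply.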