A proposed state bill in California would require text, audio and video created using AI technology to be labeled as such to help consumers tell real from fake.
Watermarking AI-generated content might sound like a practical approach for legislators to track and regulate such material, but it’s likely to fall short in practice. Firstly, AI technology evolves rapidly, and watermarking methods can become obsolete almost as soon as they’re developed. Hackers and tech-savvy users could easily find ways to remove or alter these watermarks.
Secondly, enforcing a universal watermarking standard across all AI platforms and content types would be a logistical nightmare, given the diversity of AI applications and the global nature of its development and deployment.
Additionally, watermarking doesn’t address deeper ethical issues like misinformation or the potential misuse of deepfakes. It’s more of a band-aid solution that might give a false sense of security, rather than a comprehensive strategy for managing the complexities of AI-generated content.
It would also be effectively impossible to force a watermark on open-source AI image generators such as Stable Diffusion, since someone could just download the code, disable the watermarking function, and recompile it, or simply use an old version.
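To make the point concrete, here's a toy sketch in pure Python. This is not how any real generator watermarks images (Stable Diffusion's scheme is a separate perceptual-watermarking step, not this); it just illustrates that when the marking happens client-side in open code, "removing the watermark" can be a one-line change:

```python
# Toy least-significant-bit watermark. Illustrative only: real systems
# use more robust perceptual schemes, but the structural problem is the
# same when the marking code runs on the user's machine.

def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the low bits of the first len(mark)*8 pixel bytes."""
    bits = [(b >> i) & 1 for b in mark for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def read_watermark(pixels: bytes, n: int) -> bytes:
    """Recover an n-byte mark from the low bits."""
    bits = [p & 1 for p in pixels[: n * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n))

def strip_watermark(pixels: bytes) -> bytes:
    # The "attack": zero every low bit. One line, no expertise needed.
    return bytes(p & 0xFE for p in pixels)

pixels = bytes(range(64))                 # stand-in for image data
marked = embed_watermark(pixels, b"AI")
print(read_watermark(marked, 2))          # b'AI'
print(read_watermark(strip_watermark(marked), 2))  # b'\x00\x00'
```

A user who controls the source doesn't even need the strip step: they can delete the call to `embed_watermark` before generating anything.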
You can do that, but if you are in California you have just broken the law. If California actually enforces it, expect projects to make a big deal about this, since users could face penalties if they don't handle it correctly. Most likely the watermark just ships turned on by default in all versions, but projects might instead add a prominent warning about turning it off. Note that if you go the warning route, nobody involved with your project should travel to California, since you could then be liable for helping someone violate the law.
Plus, what if the creator simply doesn't live in California? What are they gonna do about it?