The promise of watermarking AI content
Cybersecurity researchers encouraged by recent announcement on AI labels but questions remain
By Jon Parsons, Faculty of Mathematics

The recent announcement by a group of major tech companies about watermarking AI-generated content might have been greeted with a sigh of relief by many, but cybersecurity researchers are already suggesting this new approach has several flaws.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI held a conversation with the White House to discuss how they can help to address the risks posed by the artificial intelligence they develop. They promised to invest in cybersecurity and watermarking of AI-generated content.
“The companies pitched a technology called watermarking, which embeds a secret message into the code of the content,” says Dr. Florian Kerschbaum, a professor of computer science and a member of the Cybersecurity and Privacy Institute at the University of Waterloo. “The idea is that the message cannot be removed unless the content is removed.”
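To make the idea concrete, here is a deliberately simplified sketch, not any of the companies' actual schemes, of one well-known style of text watermark: word choices are nudged toward a pseudorandom "green list" derived from a secret key, and a detector holding the same key checks whether green words appear far more often than chance. The key, the candidate words, and the fixed detection threshold below are all illustrative assumptions.

import hashlib
import hmac
import random

# Assumption: embedder and detector share this secret key (the value is illustrative).
SECRET_KEY = b"hypothetical-shared-secret"

def in_green_list(previous_word: str, candidate: str) -> bool:
    # Pseudorandomly place roughly half of all candidate words on a "green list",
    # keyed by the secret and the previous word (a toy stand-in for hashing the
    # generation context).
    digest = hmac.new(SECRET_KEY, f"{previous_word}|{candidate}".encode(), hashlib.sha256).digest()
    return digest[0] % 2 == 0

def embed_watermark(candidates_per_step, rng=None):
    # Toy "generator": at each step it prefers a green-listed candidate when one
    # exists, mimicking how a watermarked model biases its sampling.
    rng = rng or random.Random(0)
    words = ["<start>"]
    for candidates in candidates_per_step:
        green = [w for w in candidates if in_green_list(words[-1], w)]
        words.append(rng.choice(green) if green else rng.choice(candidates))
    return words[1:]

def detect_watermark(words, threshold=0.75):
    # Detector: fraction of words on the green list. Unwatermarked text should sit
    # near 0.5, watermarked text well above. (A real detector would use a proper
    # statistical test rather than a fixed threshold.)
    pairs = zip(["<start>"] + words[:-1], words)
    hits = sum(in_green_list(prev, cur) for prev, cur in pairs)
    fraction = hits / max(len(words), 1)
    return fraction, fraction >= threshold

if __name__ == "__main__":
    # Hypothetical candidate words the toy "model" could choose at each step.
    steps = [["quick", "fast", "rapid", "swift"],
             ["fox", "dog", "cat", "hare"],
             ["jumps", "leaps", "hops", "vaults"],
             ["over", "across", "past", "beyond"]] * 5
    text = embed_watermark(steps)
    print(detect_watermark(text))

The sketch also hints at why removal is a worry: the signal exists only because the generator cooperates, and rewording the text redraws the word choices, which can wash the bias back toward chance.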
But as Kerschbaum points out, the scientific foundations of watermarking still carry uncertainties. Digital watermarks have intrigued scientists for decades, yet it remains possible that malicious actors could remove one.
“The answers to some of the most important questions are somewhat unsatisfactory,” Kerschbaum continues.
Watermarking is a decades-old technique, and non-digital watermarks predate computers. Watermarking and the covert embedding of messages last drew major attention when state intelligence services worried that such techniques could be used to hide encrypted messages and make them undetectable.
Now, watermarking could help label benign uses of AI-generated content, since it requires the content creator to cooperate and embed the watermark.
“In fact, AI itself may help to strengthen watermarks,” Kerschbaum says. “AI’s greatest weakness is that humans do not understand how it works. But the fact that it outperforms humans at many tasks, such as image recognition, may help in designing more robust watermarks.”
Using AI, Kerschbaum continues, one can embed watermarks that only AI can detect. But again, is this reliable enough for deployment at the scale required by the companies that visited the White House?
“Scientists cannot yet answer this question,” he says. “To see the companies making this promise is encouraging, but too many of the important questions are still open.”