Artificial intelligence (AI) is becoming more common in education, both inside and outside the classroom, according to a report by FOX News.
Current methods for labeling AI-generated content may not be reliable enough, and reliable labeling may not even be achievable with today's technology, warns Michael Wilkowski, Chief Technology Officer of AI-driven bank compliance platform Silent Eight. In his view, distinguishing AI-generated content from human-generated content is nearly impossible.
Watermarking has traditionally been used to identify AI-generated material, though the method has evolved from visible physical marks to codes embedded invisibly in the content itself. Companies such as OpenAI and Meta have championed digital watermarking to address concerns that AI-generated content blurs the line between real and synthetic material.
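As a rough illustration of what an "embedded code" means, the sketch below hides and then recovers a short message in the least significant bits of an image's pixels. This is not the scheme OpenAI or Meta actually use; it is only a minimal toy example, assuming NumPy and Pillow are available.

```python
# Toy digital watermark: hide a short message in the least significant bit of
# each pixel byte, then read it back. Purely illustrative; real AI watermarks
# use far more robust, statistical embedding schemes.
import numpy as np
from PIL import Image


def embed_watermark(image: Image.Image, message: str) -> Image.Image:
    """Write the message's bits into the least significant bits of the pixel data."""
    bits = np.array(
        [int(b) for byte in message.encode() for b in f"{byte:08b}"], dtype=np.uint8
    )
    pixels = np.array(image.convert("RGB"), dtype=np.uint8)
    flat = pixels.flatten()
    if len(bits) > flat.size:
        raise ValueError("message too long for this image")
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return Image.fromarray(flat.reshape(pixels.shape))


def extract_watermark(image: Image.Image, length: int) -> str:
    """Read `length` characters back out of the least significant bits."""
    flat = np.array(image.convert("RGB"), dtype=np.uint8).flatten()
    bits = flat[: length * 8] & 1
    data = bytes(
        int("".join(str(b) for b in bits[i : i + 8]), 2) for i in range(0, len(bits), 8)
    )
    return data.decode()


if __name__ == "__main__":
    original = Image.new("RGB", (64, 64), color="white")
    marked = embed_watermark(original, "AI-generated")
    print(extract_watermark(marked, len("AI-generated")))  # -> "AI-generated"
```

A watermark like this is invisible to the eye but trivially fragile, which is exactly the weakness the studies described below exploit.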
Challenges Arise
Several studies from American universities have shown that it's possible for users to remove or "break" digital watermarks. Researchers from the University of California, Santa Barbara, and Carnegie Mellon have demonstrated that these watermarks are "provably removable," rendering them ineffective.
The "regeneration attack" method, used to remove watermarks, introduces noise into the image with the watermark and then reconstructs the image, making the watermark undetectable. The University of Maryland conducted a similar study, reaching the same conclusion, stating that "we don't have any reliable watermarking at this point."
Security Concerns
Watermarks can not only be removed but also spoofed, fooling detection algorithms into giving the wrong answer. Because the tools involved are widely available, bad actors can use similar algorithms to undermine a watermark's effectiveness.
According to Wilkowski, the difficulty of distinguishing AI-generated text or trade transactions from human-created content is a significant concern: AI-generated text is derived from the existing, human-made materials used to train AI models, which is precisely what makes it so hard to tell apart.
A Constant Battle
Security firms face an ongoing challenge: as methods of detection improve, so do the methods for avoiding detection. Criminals adapt to avoid triggering investigations, keeping their activities just below the detection threshold.
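The sketch below shows, in the simplest possible terms, why a fixed detection threshold is easy to game: amounts kept just under the limit are never flagged. The threshold value and the data are made up for illustration; real monitoring systems rely on far richer signals than a single cutoff.

```python
# Toy illustration of gaming a fixed detection threshold: transactions kept
# just under the limit are never flagged. Threshold and amounts are invented
# for demonstration only.
FLAG_THRESHOLD = 10_000.0  # hypothetical reporting limit


def flag_suspicious(transactions: list[float]) -> list[float]:
    """Return only the transactions at or above the fixed threshold."""
    return [amount for amount in transactions if amount >= FLAG_THRESHOLD]


if __name__ == "__main__":
    activity = [12_500.0, 9_900.0, 9_950.0, 9_800.0]  # structured to stay under the limit
    print(flag_suspicious(activity))  # [12500.0] -- the rest slip through
```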
For example, Spanish police and Interpol recently dismantled a sports betting ring that manipulated bets by hijacking satellite signals to gain an unfair advantage. Investigators detected irregularities such as unusually large bets placed during a ping-pong tournament.
Wilkowski notes that the battle against sophisticated criminal activity depends on large-scale data processing and remains a constant effort to discover and prevent irregularities in financial transactions.
In conclusion, the educational use of AI is expanding, but challenges in identifying AI-generated content persist, making it essential to develop more robust methods for safeguarding against misuse.
Note: This article has been abridged for simplicity and clarity.