By Shadailing R., Joshua L., Christian T., and Kiosha S.
Generative AI can do some wild things. It can write essays, draw art, copy voices, and even create fake videos that look almost real. That sounds cool at first, but there is a dark side too: deepfakes, especially explicit ones, are hurting real people. To protect people and keep trust in what we see online, lawmakers should require strong, non-removable watermarks on AI-generated content and back that with real laws and penalties.
Explicit deepfakes are not just “fake videos.” They can destroy someone’s reputation and mental health. Survivors describe feeling ashamed, anxious, and powerless when their faces are used in sexual content they never agreed to, content that can spread across the internet within hours. Advocacy groups and mental-health professionals report that this can lead to long-term trauma, problems at school or work, and a constant fear that the content will never fully disappear. It is especially bad for teens and young adults, who may be targeted by classmates or exes and then bullied or harassed online and in person.
Deepfakes are also becoming a tool in bullying and school drama. There are cases where students, often girls, find fake nude images of themselves shared in group chats or posted without consent. That can make school feel unsafe, push students to skip classes, and seriously damage their mental health. On top of that, deepfakes can be used for blackmail: people threaten to post images unless the victim pays up or complies with their demands. Current laws are still catching up, and in many places it is not even clear what charges can be filed when someone’s face is used in synthetic porn without permission.
The problem is not only personal; it is political. Deepfakes can be used to spread lies about candidates, fabricate speeches, or make it look like someone did something they never did. That kind of content can make people stop trusting any video they see, even when it is real. Some researchers call this the “liar’s dividend”: once deepfakes exist, bad actors can just say “that’s fake” whenever they are caught on camera. That hurts democracy, journalism, and basic conversations about what is true.
Watermarking offers one possible solution. A watermark is like an invisible digital stamp that says “this was made by AI,” one that stays in place even if the image or video is edited or reposted. If watermarks are strong enough and hard to remove, platforms and investigators could quickly spot synthetic content and trace it back to a specific system. Victims of deepfakes could use that as evidence that the content is fake and nonconsensual. Some researchers are already testing “unremovable” watermarks for AI models, and big companies and governments are talking about making these tools standard.
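To make the watermark idea concrete, here is a deliberately simple sketch in Python. Everything in it is invented for this example: the embed and detect helpers and the AI-GENERATED tag are not from any real system. It hides a tag in an image’s least-significant bits, a classic teaching trick, not the robust, hard-to-remove kind of watermark the essay is arguing for.

```python
import numpy as np

# Toy illustration only: hide a machine-readable "made by AI" tag in the
# least-significant bit (LSB) of an image's pixels. The helper names and the
# tag string are made up for this sketch; real AI-provenance watermarks are
# spread across the whole image and designed to survive edits, unlike this one.

TAG = "AI-GENERATED"  # hypothetical provenance tag

def embed(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Write the tag's bits into the lowest bit of the first pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    out = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite each LSB
    return out.reshape(pixels.shape)

def detect(pixels: np.ndarray, tag: str = TAG) -> bool:
    """Read the LSBs back and check whether the tag is present."""
    n_bits = len(tag.encode()) * 8
    bits = pixels.flatten()[:n_bits] & 1
    return np.packbits(bits).tobytes() == tag.encode()

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    print(detect(embed(image)))  # True: the hidden tag is found
    print(detect(image))         # False (almost surely): no tag to begin with
```

The catch is that anyone can set those bits back, and simply re-saving the file as a JPEG or taking a screenshot wipes the tag out. That fragility is exactly why the researchers mentioned above are working on watermarks that survive editing, and why any law should require robustness testing, not just the presence of a mark.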
But watermarking cannot just be “optional” or based on trust. If it is voluntary, the people who are already breaking rules, like those making explicit deepfakes, will simply refuse to use it or try to strip it out. That is why laws are needed. Some states, like California, have started creating standards for AI-generated content and watermarking, and federal officials have pushed for labeling and provenance tools. However, critics argue that watermarking alone will not stop disinformation, especially if open-source models or foreign systems do not follow the rules.
Even with its limits, a strong, mandatory watermarking system is still worth fighting for. Laws should require AI companies to build in robust, tested watermarks, make it illegal to remove them on purpose, and demand clear labeling of AI content in sensitive areas like elections, news, and advertising. They should also give victims of nonconsensual deepfakes clear legal rights to get content taken down and to sue people who created or spread it.
As students, we find it scary to think that someone could fake our faces or voices and spread that around. That fear alone can silence people, especially girls and LGBTQ+ students, who already face a lot of online harassment. Lawmakers owe it to young people to treat this as a serious civil-rights issue, not just a tech problem. Mandatory watermarking is not a magic fix, but it is one important step toward making sure AI is used to create, not to crush people’s dignity and safety.