In May 2019, a video of Nancy Pelosi, the Speaker of the United States House of Representatives, spread across Facebook, showing her slurring her words and appearing disoriented. The video was not genuine: it had been doctored, simply slowed down to make her speech sound impaired. Strictly speaking it was a crude "cheapfake" rather than a "deepfake" (an AI-generated video that makes a person appear to say or do something they never did), but it previewed how readily manipulated video spreads, and how much more dangerous AI-generated fakes could be.
The potential of deepfakes to spread misinformation, manipulate elections, damage reputations, and even incite violence is alarming. In June 2019, the House Intelligence Committee held a hearing on deepfakes, featuring testimony from experts in the field, including law professor Danielle Citron and disinformation researcher Clint Watts. The hearing was an important step in raising awareness and exploring policy options to address this emerging threat.
During the hearing, a video was played that appeared to show Barack Obama speaking bluntly to the camera. What the audience saw and heard was not Obama at all, but a deepfake produced by BuzzFeed with the comedian Jordan Peele, whose impersonation supplied Obama's voice while face-swapping software mapped Peele's mouth movements onto footage of the former president.
The deepfake caused a stir in the hearing room and online, because it demonstrated how easy it has become to create convincing fake video and how difficult it can be to tell fakes from the real thing. Senator Ben Sasse, a Republican from Nebraska who had introduced the Malicious Deep Fake Prohibition Act, tweeted afterwards: "This is scary stuff. The tech is real and the danger is real."
The risks of deepfakes go beyond entertainment or satire. They can be used to spread fake news, defame people, blackmail individuals, or manipulate public opinion. In India, for instance, a deepfake video of Prime Minister Narendra Modi reportedly went viral during the 2019 general elections, in which he appeared to say, "I am ashamed to call myself a Hindu-Muslim leader."
Another example is the "peegate" scandal: a dossier alleged that Donald Trump had hired prostitutes to urinate on a hotel bed in Moscow, and that Russian intelligence had filmed the encounter for leverage. The story gained fresh momentum when a video circulated that purported to show the scene, but it later turned out to be a fake.
Similarly, deepfakes can be used to impersonate celebrities, politicians, or business executives in order to extract sensitive information or defraud people. In one widely reported 2019 case, the CEO of a UK energy firm was tricked into transferring roughly $243,000 to a fraudulent account by an AI-generated voice that mimicked his boss at the firm's German parent company.
Deepfakes pose a serious challenge to public trust, media credibility, and national security. The technology behind them is becoming more sophisticated, making fakes harder to detect with the naked eye or with standard software. Some experts predict that within a few years, deepfakes will be indistinguishable from real video.
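To make "standard software" concrete, here is a minimal sketch of how a baseline deepfake detector is often built: fine-tune an off-the-shelf image classifier to label individual video frames as real or fake. This is an illustrative Python (PyTorch/torchvision) example, not a production system; the model choice, the `score_frame` helper, and the training data it would need (for example, a corpus such as FaceForensics++) are all assumptions.

```python
# Minimal sketch of a frame-level deepfake detector: fine-tune a standard
# image classifier to label individual video frames "real" or "fake".
# The model choice, helper names, and training data are assumptions for
# illustration, not a production pipeline.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-18 with its classification head replaced by a
# two-class layer: index 0 = real, index 1 = fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

# Standard ImageNet preprocessing, matching the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_frame(path: str) -> float:
    """Return the model's probability that a single frame is fake."""
    frame = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(frame)
    return torch.softmax(logits, dim=1)[0, 1].item()

# A video-level verdict would average score_frame over many sampled
# frames. The new head outputs noise until it is fine-tuned on labeled
# real/fake data (for example, a corpus such as FaceForensics++).
```

Detectors like this chase surface artifacts such as blending seams, unnatural blinking, and compression inconsistencies, which is precisely why they degrade over time: each new generation method removes the artifacts the last detector learned.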
Furthermore, deepfakes can be created by anyone with a computer, an internet connection, and basic familiarity with AI tools. With no gatekeepers or filters to stop them, the potential for deepfakes to proliferate and cause harm is high.
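The core trick behind many consumer face-swap tools is surprisingly small: a single autoencoder with one shared encoder and two person-specific decoders. Below is a minimal, hypothetical PyTorch sketch of that architecture; the layer sizes, the random stand-in "faces", and the one-step training loop are illustrative assumptions, not any particular tool's code.

```python
# Minimal sketch of the classic face-swap architecture behind many
# consumer deepfake tools: one shared encoder, two person-specific
# decoders. Layer sizes and the random stand-in "faces" are
# illustrative assumptions, not any particular tool's code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face crop into an identity-agnostic code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Renders the shared code back into one specific person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

# Stand-in batches of aligned 64x64 face crops (hypothetical data);
# real training would repeat this step over thousands of images.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

# Each identity is reconstructed through its own decoder, so the shared
# encoder is forced to capture pose and expression rather than identity.
loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
        + loss_fn(decoder_b(encoder(faces_b)), faces_b))
opt.zero_grad()
loss.backward()
opt.step()

# The swap: encode person A's frame, decode it with person B's decoder,
# rendering A's expression on B's face.
swapped = decoder_b(encoder(faces_a))
```

That the whole idea fits in a few dozen lines is exactly why there are no gatekeepers: the hard parts (face alignment, large photo collections, training time) are inconvenient, not inaccessible.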
Lastly, deepfakes raise ethical and legal questions about privacy, free speech, and the responsibility to verify information. If a deepfake of a politician saying something offensive goes viral, who is responsible for fact-checking and correcting it: the platform that hosts it, the outlets that amplify it, or the viewers who share it?
The rise of deepfakes demands a multi-stakeholder response, and three areas in particular need attention: detection technology and provenance standards that can flag or authenticate media at scale; law and policy that penalize malicious synthetic media without chilling satire or legitimate speech; and media literacy, so that the public learns to verify before sharing.
The congressional hearing on deepfakes was a wake-up call about the power of AI-generated video to disrupt our society. There is no silver bullet that eliminates the risks of deepfakes, but there are measures that can mitigate them. By working together, we can build a resilient and trustworthy media ecosystem that values truth, transparency, and accountability.