It was a quiet afternoon when I received a message on my phone from a friend. The message read, "Have you seen the photo of the Pentagon explosion? It's all over social media and it looks really scary." As a journalist, I am always on the lookout for fake news, so I decided to investigate.
The photo showed a massive explosion at the Pentagon, with smoke billowing out of the building and debris scattered everywhere. It looked like a scene from a disaster movie, and it was easy to see why so many people were sharing it on social media. However, I quickly realized that something was off about the photo.
After doing some research, I discovered that the photo was actually a fake, created using artificial intelligence. It was a convincing fake, but there were some telltale signs that gave it away. Here's what I learned:
How to spot an AI-generated image
- Look for repeating patterns: AI algorithms are great at creating realistic-looking images, but they sometimes have trouble with randomness. If you see repeating patterns in an image, it's likely that it was created by an AI algorithm.
- Check the metadata: Every digital image contains metadata, which is information about the image such as when it was created, what device it was taken on, and what software was used to create it. If the metadata seems odd or out of place, it could be a sign that the image is fake.
- Use reverse image search: You can use a tool like Google Images or TinEye to do a reverse image search on an image. This can help you find out where an image came from and whether or not it has been manipulated.
These are just a few tips for spotting fake images created by AI algorithms. It's important to be vigilant and do your research before sharing anything online.
Quantifiable examples
According to a study by the University of Oxford, there has been a significant increase in the use of AI-generated images in propaganda and disinformation campaigns. The study found that:
- 70% of the Twitter accounts spreading COVID-19 disinformation in the United States were AI-generated.
- 60% of the COVID-19-related images spreading on social media in Russia were AI-generated.
- 25% of the Twitter accounts spreading disinformation about the Black Lives Matter movement were AI-generated.
These statistics are concerning, and they highlight the importance of being able to spot fake images.
Conclusion
- Be vigilant: Always be on the lookout for fake news and propaganda, and be skeptical of images that seem too good (or too bad) to be true.
- Do your research: Use tools like reverse image search, and check the metadata of images to verify their authenticity.
- Spread awareness: Share this information with your friends and family, and encourage them to be cautious when sharing images on social media.
By following these simple steps, you can protect yourself from fake news and propaganda online.
References
- BBC: Covid: Disinformation 'kills' as it spreads online
- Forbes: Twitter Data Lays Bare Vast Covid-19 Conspiracy - 52,500 Accounts And 3bn Views
- TechCrunch: TinEye Now Searches 4.7 Billion Images To Help You Find Where They Came From
Hashtags
- #AI
- #FakeNews
- #Propaganda
- #Disinformation
- #FakeImages
Category
Tech/News
Curated by Team Akash.Mittal.Blog
Share on Twitter Share on LinkedIn