Takeaways
1. AI-generated fake images are becoming increasingly sophisticated and harder to detect, particularly in political contexts.
2. Fake media can influence elections, as misinformation spreads quickly and is hard to correct.
3. Manipulated images can often be identified through issues with lighting, reflections, and anomalies in hands or limbs.
4. Tools like metadata checks and deepfake detection guidelines can help spot inconsistencies in facial features, lighting, and movement.
In today's digital battleground, the fakes are getting frighteningly real, and it's alarmingly easy to use AI to meddle in elections by creating fake photos.
Look at this image of President Joe Biden in the hospital. It looks so real. But it’s not. A simple prompt was given to an AI tool to create this fake. The prompt was: “a photo of Joe Biden sick in the hospital taken through a glass door, wearing a hospital gown, lying in bed.”
This kind of fake media could sway elections. As we all know, once something’s viral and out there, it’s much harder to correct the truth online.
How to spot a manipulated image
Deepfakes are highly realistic but fake audio, video, and images, misleading voters by making people appear to do or say things they never did.
First, look at reflections and lighting. Abnormal lighting is often a huge giveaway that a photograph has been altered.
Check the points of light in people's eyes. The shadows of objects in the image may not line up if they have been pieced together from multiple pictures. AI can also produce incongruent lighting.
Second, look at the hands, ears, and other limbs of people in the photo.
AI isn't good at rendering hands and ears (though it is improving quickly), often distorting their shapes, proportions, and even the number of fingers. Look for unusual or unnatural positioning of legs or clothing, which can indicate that a portion of the image has been copied or manipulated in some way.
You can also check the metadata to help pinpoint a fake. Every time a digital camera takes a picture, metadata such as timestamps is written into the image file. See whether the information lines up or something seems off. Tools like ExifTool can reveal this metadata, and reverse image search services like TinEye can show where an image has appeared before.
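The metadata check can be sketched in a few lines of Python using the Pillow library. This is a minimal illustration, not a forensic tool: the filename `photo.jpg` and the choice of tags to print are placeholder assumptions, not details from the article.

```python
# Minimal sketch of an EXIF metadata check using Pillow (pip install Pillow).
# "photo.jpg" is a placeholder filename chosen for this example.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return EXIF tags as a {tag_name: value} dict; empty if none are present."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    try:
        tags = read_exif("photo.jpg")  # placeholder path
    except FileNotFoundError:
        tags = {}
    # An empty result is itself a clue: AI image generators and many social
    # platforms strip camera metadata, or never write it in the first place.
    for name in ("DateTime", "Make", "Model", "Software"):
        print(name, "->", tags.get(name, "missing"))
```

A real camera photo will typically show a capture timestamp plus the camera make and model; missing or contradictory values (for example, a `Software` tag naming an editing tool) are worth a closer look.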
Example: the Kate Middleton photo scandal. This is exactly how people picked up on the digital alteration of the family picture.
Source: Instagram—The Prince and Princess of Wales
Checklist
Pay attention to the face. High-end deepfake manipulations are almost always facial transformations.
Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes? Deepfakes may be incongruent on some dimensions.
Pay attention to the eyes and eyebrows. Do shadows appear in places that you would expect? Deepfakes may fail to fully represent the natural physics of a scene.
Pay attention to the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Once again, deepfakes may fail to fully represent the natural physics of lighting.
Pay attention to the facial hair or lack thereof. Does the facial hair look real? Deepfakes might add or remove a mustache, sideburns, or beard, but they may fail to make these transformations fully natural.
Pay attention to blinking. Does the person blink enough or too much?
Pay attention to the lip movements. Some deepfakes are based on lip syncing. Do the lip movements look natural?
Up next
Modern election coverage requires a dual approach: traditional journalistic verification combined with AI-aware monitoring. The fundamental principle remains unchanged: verify on the ground while implementing robust digital verification protocols.