Suffice it to say, this mountain of direct source evidence outweighs flagged images from conservative commentators like Chuck Callesto and Dinesh D'Souza, both of whom have been caught spreading election disinformation in the past.
When it comes to accusations of AI fakery, the more diverse sources of information you have, the better. While a single source can easily generate a plausible-looking image of an event, multiple independent sources showing the same event from multiple angles are much less likely to be in on the same hoax. Photos that match video evidence are even better, especially since creating convincing long-form videos of people or complex scenes remains a challenge for many AI tools.
It’s also important to track down the original source of the supposed AI image you’re viewing. It’s incredibly easy for a social media user to create an AI-generated image, claim it came from a news report or live footage of an event, and then use obvious flaws in that fake image as “proof” that the event itself was fake. Links to original images from an original source’s own website or verified account are far more trustworthy than screenshots, which could have come from anywhere (and/or been modified by anyone).
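If you want to run this kind of provenance check yourself, the sketch below (in Python, using the Pillow and imagehash libraries) shows two basic steps: dumping whatever EXIF metadata survives in a downloaded file, and comparing a suspect social media screenshot against the image published on the original source's own site using a perceptual hash. The file names are placeholders, and keep in mind that metadata is easily stripped or forged, so its presence or absence is a weak hint, not proof.

```python
# pip install Pillow imagehash
from PIL import Image
from PIL.ExifTags import TAGS
import imagehash


def dump_exif(path):
    """Print any EXIF metadata that survives in the file.

    Social platforms usually strip EXIF on upload, so an empty
    result is common and proves nothing; intact camera and date
    fields are a (forgeable) hint that you may have an original."""
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF metadata found")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")


def looks_like_same_image(original_path, screenshot_path, threshold=10):
    """Compare a suspect screenshot against the source image with a
    perceptual hash, which tolerates re-encoding and resizing but
    flags substantive changes to the image content."""
    h1 = imagehash.phash(Image.open(original_path))
    h2 = imagehash.phash(Image.open(screenshot_path))
    distance = h1 - h2  # Hamming distance between the two hashes
    print(f"hash distance: {distance}")
    return distance <= threshold


# Placeholder file names: substitute the file downloaded from the
# outlet's own website and the screenshot circulating on social media.
dump_exif("original_from_news_site.jpg")
dump_exif("social_media_screenshot.jpg")
if looks_like_same_image("original_from_news_site.jpg",
                         "social_media_screenshot.jpg"):
    print("Screenshot closely matches the source image")
else:
    print("Screenshot differs substantially (cropped or altered?)")
```

A low hash distance only tells you the screenshot matches what the source published; it says nothing about whether the source image itself is authentic, which is why the chain back to a verified account or website matters.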
Detecting the fakes
While tracking down original and/or corroborating sources is useful for a major news event like a presidential rally, confirming the authenticity of images and videos from a single source can be trickier. Tools like the Winston AI Image Detector or IsItAI.com claim to use machine learning models to figure out whether or not an image is AI-generated. But while detection techniques continue to evolve, these tools are generally based on unproven theories that haven't been shown to hold up in broad studies, making false positives and false negatives a real risk.
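To make concrete what these detectors do, the sketch below runs an image classifier from Hugging Face's transformers library against a local file. The checkpoint named here is only an example of the kind of community-trained AI-image detector published on the Hub; the commercial tools above don't release their models, so this is an assumption, not their method, and the same caveat applies to its output.

```python
# pip install transformers torch pillow
from transformers import pipeline

# Example community-trained detector checkpoint on the Hugging Face
# Hub (an assumption for illustration; the tools named in this
# article do not publish their models).
detector = pipeline(
    "image-classification",
    model="umm-maybe/AI-image-detector",
)

# Returns label/score pairs; the exact labels depend on the model,
# e.g. [{'label': 'artificial', 'score': 0.93},
#       {'label': 'human', 'score': 0.07}]
results = detector("suspect_photo.jpg")
for result in results:
    print(f"{result['label']}: {result['score']:.2%}")

# Treat a high "artificial" score as one weak signal, not a verdict:
# these classifiers can misfire on compressed, edited, or simply
# unusual real photos, which is exactly the false positive/negative
# risk described above.
```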
Writing on LinkedIn, UC Berkeley professor Hany Farid cited two GetReal Labs models that showed “no evidence of AI generation” in the Harris rally photos that Trump posted. Farid then cited specific parts of the image that pointed to its authenticity.
“The text on the signs and the plane do not show any of the usual signs of generative AI,” Farid writes. “However, the absence of evidence of manipulation is not proof that the image is real. We find no evidence that this image was generated by AI or digitally altered.”
And even if parts of a photo seem like nonsensical signs of AI manipulation (à la the deformed hands produced by some AI image models), remember that there may be a simple explanation for an apparent optical illusion. The BBC notes that the lack of a reflection of the crowd on the plane in some Harris rally photos could be explained by a large, empty stretch of tarmac between the plane and the crowd, as seen in reverse angles of the scene. Simply circling odd-looking things in a photo with a red marker isn't strong evidence of AI manipulation on its own.