The Trump administration has actively shared and endorsed AI-generated images online through its official channels.
However, a manipulated — and highly realistic — picture of a civil rights attorney crying following an arrest is causing fresh concern about how the administration is obscuring the distinction between reality and fabrication.
Homeland Security Secretary Kristi Noem’s account posted the original image from Nekima Levy Armstrong’s arrest, after which the official White House account shared a modified version depicting her in tears. The altered photograph is one example of a flood of AI-modified images that have circulated widely since the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol officers in Minneapolis.
This use of artificial intelligence by the White House has alarmed misinformation experts, who worry that the proliferation of AI-created or altered imagery undermines the public’s grasp of the truth and fosters distrust.
When criticized for the edited image of Levy Armstrong, White House officials reinforced their stance. A deputy communications director stated that the “memes will continue,” and Deputy Press Secretary Abigail Jackson also shared a post supporting the content.
According to David Rand, a professor of information science at Cornell University, labeling the altered picture a meme “certainly appears to be an effort to frame it as a joke or humorous post, similar to their previous cartoons. This likely aims to protect them from criticism for distributing manipulated media.” He noted that the intent behind sharing the modified arrest image seems “much more ambiguous” than that of previous content shared by the administration.
Zach Henry, a Republican communications consultant and founder of the influencer marketing firm Total Virality, explained that memes have always contained layered meanings that are humorous or informative to those in the know, but confusing to others. He said AI-enhanced or edited visuals are simply the newest instrument the White House is using to connect with the part of Trump’s base that is highly active online.
“Individuals who are constantly online will see it and immediately identify it as a meme,” he said. “Your grandparents might see it and not grasp the meme, but because it appears authentic, it encourages them to inquire with their children or grandchildren.”
Henry, who spoke favorably of the White House social media team’s efforts, added that provoking a strong reaction is even better, as it aids the content’s spread.
Michael A. Spikes, a Northwestern University professor and news media literacy researcher, said the production and sharing of manipulated images, particularly by trusted sources, “solidifies a perception of events, rather than depicting what is truly occurring.”
“The government ought to be a source of reliable information, a place where you can be confident it’s accurate, because they have that duty,” he stated. “By distributing and producing this kind of material … it is diminishing the trust — though I am often somewhat skeptical of the word trust — but the trust we ought to place in our federal government to provide us with accurate, verified information. It’s a significant loss, and it concerns me greatly.”
Spikes mentioned he already observes “institutional crises” related to distrust in news outlets and higher education, and believes this conduct from official sources worsens these problems.
Ramesh Srinivasan, a UCLA professor and host of the Utopias podcast, said numerous individuals are now unsure where to find “trustworthy information.” “AI systems will only worsen, intensify, and speed up these issues of a lack of trust, a lack of even comprehending what could be deemed reality, truth, or evidence,” he commented.
Srinivasan expressed that when the White House and other officials share AI-generated content, it not only encourages ordinary citizens to post similar material but also authorizes other credible and powerful figures, such as policymakers, to distribute unlabeled synthetic content. He further noted that since social media platforms often “algorithmically favor” extreme and conspiratorial material — which AI tools can produce effortlessly — “we are facing a major set of challenges.”
A surge of AI-generated videos concerning Immigration and Customs Enforcement operations, protests, and encounters with citizens has been spreading on social media. After a woman was shot by an ICE officer while in her car, several AI-generated clips began circulating that showed women driving away from ICE officers who ordered them to stop. Numerous fabricated videos depicting immigration raids and confrontations with ICE officers are also being shared, often showing people shouting at the officers or throwing food at them.
Jeremy Carrasco, an expert in media literacy and debunking viral AI videos, said most of this content probably originates from accounts that are “engagement farming,” or attempting to profit from clicks by creating material with trending keywords like ICE. However, he also suggested the videos attract viewers who oppose ICE and DHS, who might watch them as a form of “fan fiction” or “wishful thinking,” hoping to see genuine resistance against these agencies.
Still, Carrasco contends that most viewers cannot tell whether what they are watching is fake, and he wonders whether they could determine “what is real or not in critical situations, when the consequences are far greater.”
Even when there are obvious indicators of AI creation, such as nonsensical street signs or other clear mistakes, he said, it is only in the “ideal situation” that a viewer is knowledgeable or attentive enough to notice the use of AI.
This problem, of course, extends beyond news related to immigration enforcement and protests. Earlier this month, falsified and distorted images emerged online following the capture of a deposed Venezuelan leader. Experts, including Carrasco, anticipate that the dissemination of AI-generated political content will only increase.
Carrasco believes that broad adoption of a watermarking system, which encodes data about a media file’s origin into its metadata, could be a step in the right direction. The Coalition for Content Provenance and Authenticity has created such a system, but Carrasco doubts it will see widespread use for at least another year.
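To illustrate the general idea, the sketch below shows how a signed provenance manifest can bind an image to its claimed origin so that any later edit breaks verification. It is a minimal illustration, not the Coalition for Content Provenance and Authenticity’s actual specification; the signing key, field names, and helper functions are hypothetical.

```python
# Illustrative sketch only: a real provenance system such as C2PA uses
# public-key certificates and a standardized manifest format, not a shared
# secret. The key, field names, and helpers here are hypothetical.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-publisher-key"

def make_manifest(image_bytes: bytes, source: str) -> dict:
    """Bind an image to its claimed origin with a signed digest."""
    claim = {"source": source, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the claim is untampered and the image is unmodified."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and manifest["claim"]["sha256"] == hashlib.sha256(image_bytes).hexdigest())

# Any edit to the image bytes after signing causes verification to fail.
original = b"raw image bytes from the camera"
manifest = make_manifest(original, source="official account")
print(verify_manifest(original, manifest))               # True
print(verify_manifest(original + b" edited", manifest))  # False
```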
“This is a permanent issue now,” Carrasco said. “I don’t believe people grasp the severity of the situation.”
__
Associated Press writers Jonathan J. Cooper in Phoenix and Barbara Ortutay in San Francisco contributed to this report.
