LOS ANGELES (AP) — The Trump administration has not been shy about sharing artificial intelligence-generated images online, embracing cartoon-like images and memes and promoting them on official White House channels.
But the doctored — and realistic — photo of civil rights attorney Nekima Levy Armstrong crying after her arrest raises new alarms about how the administration is blurring the lines between what’s real and what’s fake.
Homeland Security Secretary Kristi Noem’s account posted the original photo from Levy Armstrong’s arrest before the official White House account posted an edited version showing her crying. The manipulated image is part of a deluge of AI-edited images that have been shared across the political spectrum since the killings of Rene Judd and Alex Peretti by U.S. Border Patrol agents in Minneapolis.
However, the White House’s use of AI has alarmed disinformation experts who fear that the spread of AI-generated or edited images will erode public perception of the truth and sow mistrust.
Read more: The response to Peretti’s killing highlights the challenges the Trump administration faces on trust and credibility
In response to criticism of the edited photo of Levy Armstrong, White House officials doubled down on the post, with Deputy Communications Director Caelan Doerr writing on X that “the memes will continue.” White House Deputy Press Secretary Abigail Jackson also shared a post mocking the criticism.
Calling the edited image a meme “certainly seems like an attempt to pass it off as a joke or a humorous post, like their previous cartoons,” says David Rand, a professor of information science at Cornell University. “This is presumably to protect them from criticism for publishing manipulated media.” He said the purpose of sharing the doctored arrest photo seemed “more ambiguous” than that of the caricatured images the administration has shared in the past.
Memes have always carried multi-layered messages that are funny or useful to people who understand them, but cannot be understood by outsiders. Enhanced or modified images using artificial intelligence are just the latest tool the White House is using to engage a segment of Trump’s base that spends a lot of time online, said Zach Henry, a Republican communications consultant who founded Total Virality, an influencer marketing company.
“People who are permanently online will see it and instantly recognize it as a meme,” he said. “Your grandparents might see it and not understand the meme, but because it looks real, it leads them to ask their children or grandchildren about it.”
Read more: EU is investigating Musk’s chatbot Grok for sexual deepfakes
Henry, who generally praised the work of the White House social media team, said that even a fierce backlash works in a post’s favor, helping it spread more widely.
Michael Spikes, a professor at Northwestern University and a researcher in news media literacy, said that the creation and dissemination of altered images, especially when shared by trusted sources, “crystallizes an idea of what is happening, rather than showing what is actually happening.”
“Government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do that,” he said. “By sharing this type of content and creating this type of content… it erodes trust — although I’m always skeptical of the term trust — but the trust that we have in our federal government to provide us with accurate, verified information. It’s a real loss, and it really worries me a lot.”
Spikes said he already sees “institutional crises” around mistrust in news organizations and higher education, and feels this behavior from official channels is fueling those issues.
Ramesh Srinivasan, a professor at the University of California and host of the Utopia podcast, said many people are now wondering where they can turn for “reliable information.” He said: “Artificial intelligence systems will only exacerbate, amplify and accelerate these problems of lack of trust, and lack of even understanding of what can be considered truth, fact or evidence.”
Srinivasan said he feels that when the White House and other officials share AI-generated content, they are not only inviting ordinary people to keep posting similar material but also giving permission to others in positions of credibility and authority, such as policymakers, to share unlabeled synthetic content. He added that given that social media platforms tend to “algorithmically privilege” extremist and conspiratorial content — which AI tools can easily generate — “we have a very large set of challenges on our hands.”
An influx of AI-generated videos related to immigration enforcement, protests, and encounters with officers has already gone viral on social media. After Rene Judd was shot by an ICE officer while in her car, several AI-generated videos began circulating of women driving away from ICE officers who had asked them to stop. Many fabricated videos of immigration raids and of people confronting ICE officers — often yelling at them or throwing food in their faces — are also circulating.
The bulk of these videos likely comes from “engagement farming” accounts looking to cash in on clicks by creating content around popular keywords and search terms like ICE, said Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos. But he also said the videos are watched by people who oppose ICE and DHS and who may treat them as “fan fiction” or engage in “wishful thinking,” hoping to witness a real setback for the two agencies and their officers.
Watch: ‘You can’t go in with guns,’ Trump says of Alex Peretti’s killing
However, Carrasco also believes that most viewers can’t tell if what they’re watching is fake, and wonders whether they’ll know “what’s real or not when it really matters, like when the stakes are much higher.”
Even when there are glaring signs of AI generation, such as street signs filled with gibberish or other obvious errors, only in a “best-case scenario” is a viewer savvy or attentive enough to register the use of AI.
This issue, of course, is not limited to news surrounding immigration enforcement and protests. Fabricated and distorted photos spread online earlier this month following the arrest of ousted Venezuelan leader Nicolas Maduro. Experts, including Carrasco, believe the proliferation of AI-generated political content will become more common.
Carrasco believes that widespread adoption of a watermarking system that embeds information about a piece of media’s origin into its metadata could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn’t expect it to be widely adopted for at least another year.
“This is going to be a problem forever now,” he added. “I don’t think people understand how bad it is.”
Associated Press writers Jonathan J. Cooper in Phoenix and Barbara Ortutay in San Francisco contributed to this report.