Military Pursues AI Systems To Suppress Online Dissent Abroad



Authored by José Niño via Headline USA,

The U.S. military wants artificial intelligence to do what human propagandists cannot: create and spread influence campaigns at internet speed while systematically suppressing opposition voices abroad, according to internal Pentagon documents obtained by The Intercept.

The classified wishlist reveals the ambition of U.S. Special Operations Command (SOCOM) to deploy “agentic AI or multi-LLM agent systems” that can “influence foreign target audiences” and “suppress dissenting arguments” with minimal human oversight. The command is seeking contractors who can provide automated systems that operate at unprecedented scale and speed.

“The information environment moves too fast for military remembers [sic] to adequately engage and influence an audience on the internet,” the document said.

“Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”

As reported by The Intercept, the proposed AI systems would extend far beyond simple content generation. SOCOM envisions technology that can “scrape the information environment, analyze the situation and respond with messages that are in line with MISO objectives” (military information support operations, the Pentagon’s term for psychological operations). More controversially, the systems would “suppress dissenting arguments” and “access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages.”

The Pentagon plans to use these capabilities for large-scale social manipulation, creating “comprehensive models of entire societies to enable MISO planners to use these models to experiment or test various multiple scenarios.”

The systems would generate targeted messaging designed to “influence that specific individual or group” based on gathered intelligence.

SOCOM spokesperson Dan Lessard reportedly defended the initiative, declaring that “all AI-enabled capabilities are developed and employed under the Department of Defense’s Responsible AI framework, which ensures accountability and transparency by requiring human oversight and decision-making.”

The Pentagon’s move comes as adversaries deploy similar technology. Chinese firm GoLaxy has developed AI systems that can “reshape and influence public opinion on behalf of the Chinese government,” according to recent reporting by The New York Times. The company has “undertaken influence campaigns in Hong Kong and Taiwan, and collected data on members of Congress and other influential Americans.”

However, experts question how effective AI-generated propaganda actually is. Emerson Brooking of the Atlantic Council noted that “Russia has been using AI programs to automate its influence operations. The program is not very good.” He warned that “AI tends to make these campaigns stupider, not more effective.”

The Pentagon has previously conducted covert influence operations with mixed results.

In 2022, researchers exposed a network of social media accounts operated by U.S. Central Command that pushed anti-Russian and anti-Iranian messaging but failed to gain traction, becoming what Brooking called “an embarrassment for the Pentagon.”

Critics worry about the broader implications of automated propaganda systems. Heidy Khlaaf, a former OpenAI safety engineer, cautioned that “framing the use of generative and agentic AI as merely a mitigation to adversaries’ use is a misrepresentation of this technology, as offensive and defensive uses are really two sides of the same coin.”

Tyler Durden
Thu, 09/04/2025 – 20:55
