The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.

The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos”…

… The Pentagon has already been caught using phony social media users to further its interests in recent years. In 2022, Meta and Twitter removed a propaganda network using faked accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to those outlined by JSOC. A 2024 Reuters investigation revealed a Special Operations Command campaign using fake social media users aimed at undermining foreign confidence in China’s Covid vaccine.

Last year, Special Operations Command, or SOCOM, expressed interest in using video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording, for “influence operations, digital deception, communication disruption, and disinformation campaigns.”

… special operations troops “will use this capability to gather information from public online forums,” with no further explanation of how these artificial internet users will be used…

The offensive use of this technology by the U.S. would, naturally, spur its proliferation and normalize it as a tool for all governments. “What’s notable about this technology is that it is purely of a deceptive nature,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “There are no legitimate use cases besides deception, and it is concerning to see the U.S. military lean into a use of a technology they have themselves warned against. This will only embolden other militaries or adversaries to do the same, leading to a society where it is increasingly difficult to ascertain truth from fiction and muddling the geopolitical sphere.”

Both Russia and China have been caught using deepfaked video and user avatars in their online propaganda efforts, prompting the State Department to announce an international “Framework to Counter Foreign State Information Manipulation” in January. “Foreign information manipulation and interference is a national security threat to the United States as well as to its allies and partners,” a State Department press release said. “Authoritarian governments use information manipulation to shred the fabric of free and democratic societies”…

  • k_rol@lemmy.ca · 2 months ago

    We sort of need to play the game since it’s already happening, otherwise we may just lose. Then we may create technologies to detect them better too.