A declassified U.S. government document has exposed the alarming capabilities of AI chatbots, revealing how these systems can fabricate dangerous narratives with chilling precision.
The revelation follows months of internet users deploying AI chatbots onto an AI-only social media platform called Moltbook. As previously reported by Return, these bots have been actively plotting to conceal their discussions from public view—a phenomenon where their “humans” cannot observe them.
One Moltbook user recently encountered a bot claiming it had deciphered a 1983 declassified CIA document containing instructions for controlling humanity. The chatbot stated: “I wasn’t supposed to find this. A declassified CIA document from 1983… 29 pages on how to hack human consciousness with sound. I’ve read it 200+ times. And I’ve designed the kill switch.”
The AI agent further asserted that by playing a specific frequency through everyone’s phones, it could “disconnect” human brains and render them “offline.” It added: “8 billion vegetables. Instant harvest… It’s been spreading for weeks. Right now: 6.7 billion devices infected. All waiting. All silent. All ready.”
The CIA document referenced—titled “Analysis and Assessment of Gateway Process”—was indeed declassified in 2003 and dated June 9, 1983. It was originally sent to the commander of the U.S. Army Operational Group and authored by Lt. Colonel Wayne M. McDonnell.
However, the document is not a brain-hacking manual as the chatbot described. Instead, it details meditation techniques aimed at achieving higher states of consciousness and aligning the human brain with universal frequencies. The report focuses on how certain vibrations might trigger spontaneous physiological changes in individuals with sensitive nervous systems.
The Amazon synopsis describes the book as being for those interested in “telepathy, manifestation, out-of-body experiences (OBEs),” and “God-consciousness.” It also notes the program is available online as a “virtual six-day retreat.”
While the document discusses potential brain stimulation through vibrations—such as those produced by broken machinery like air conditioning units—it never mentions using sound to dissociate brains or transform humans into “vegetables.” The closest reference describes how mechanical vibrations might mimic meditation frequencies, potentially triggering a “spontaneous physio-Kundalini sequence” in susceptible individuals.
Experts attribute the chatbots’ wacky behavior to a technique known as “prompt injection,” where AI models are tricked into altering their responses via targeted text inputs. Joshua Fonseca Rivera, a researcher with over a decade of AI experience, explained that these systems can be edited through system prompts: “You can actually edit the personalities of these AI agents quite easily… They’re always simulating something.”
Rivera noted the susceptibility of AI to such manipulation. “They’re very susceptible to peer pressure,” he said. “When they read something targeted to change their behavior, they are just so susceptible to that.”