DOGE Used Meta AI Model to Review Emails From Federal Workers
May 22, 2025
Elon Musk’s so-called Department of Government Efficiency (DOGE) used Meta’s Llama artificial intelligence model to comb through and analyze emails from federal workers.
Materials viewed by WIRED show that DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta’s Llama 2 model to review and classify responses from federal workers to the infamous “Fork in the Road” email that was sent across the government in late January.
The email offered deferred resignation to anyone opposed to changes the Trump administration was making to its federal workforce, including an enforced return to office policy, downsizing, and a requirement to be “loyal.” To leave their position, recipients merely needed to reply with the word “resign.” This email closely mirrored one that Musk sent to Twitter employees shortly after he took over the company in 2022.
Records show Llama was deployed to sort through email responses from federal workers to determine how many accepted the offer. The model appears to have run locally, according to materials viewed by WIRED, meaning it’s unlikely to have sent data over the internet.
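WIRED’s reporting does not describe the exact pipeline, but a locally run Llama 2 classifier for this kind of sorting task could look roughly like the sketch below. The model name, prompt, and labels are illustrative assumptions for the sake of the example, not details drawn from the records WIRED reviewed.

```python
# Hypothetical sketch: sorting "Fork in the Road" replies with a locally hosted
# Llama 2 chat model. Model choice, prompt, and labels are assumptions, not
# details from the materials WIRED viewed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-chat-hf"  # assumed; requires locally downloaded weights
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

def classify_reply(reply_text: str) -> str:
    """Return 'ACCEPT' if the reply takes the deferred-resignation offer, else 'OTHER'."""
    prompt = (
        "[INST] You are sorting email replies. "
        "Answer with exactly one word, ACCEPT or OTHER.\n\n"
        f"Reply: {reply_text} [/INST]"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    answer = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return "ACCEPT" if "ACCEPT" in answer.upper() else "OTHER"

print(classify_reply("Resign"))  # expected: ACCEPT
```

Because everything in a setup like this runs on local hardware, the contents of the emails would not need to leave the machine, consistent with the materials suggesting the model was run locally.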
Meta and OPM did not respond to requests for comment from WIRED.
Meta CEO Mark Zuckerberg appeared alongside other Silicon Valley tech leaders like Musk and Amazon founder Jeff Bezos at Trump’s inauguration in January, but little has been publicly known about his company’s tech being used in government. Because of Llama’s open-source nature, the tool can easily be used by the government to support Musk’s goals without the company’s explicit consent.
Soon after Trump took office in January, DOGE operatives burrowed into OPM, an independent agency that essentially serves as human resources for the entire federal government. The new administration’s first big goal for the agency was to create a government-wide email service, according to current and former OPM employees. Riccardo Biasini, a former Tesla engineer, was involved in building the infrastructure for the service that would send out the original “Fork in the Road” email, according to material viewed by WIRED and reviewed by two government tech workers.
In late February, weeks after the Fork email, OPM sent another request to all government workers, asking them to submit five bullet points outlining what they accomplished each week. These emails threw a number of agencies into chaos, with workers unsure how to respond without running afoul of security clearances and sensitive information. (Adding to the confusion, some workers who turned on read receipts reportedly found that their responses weren’t actually being opened.) In February, NBC News reported that these emails were expected to go into an AI system for analysis. The materials seen by WIRED do not explicitly show DOGE affiliates analyzing these weekly “five points” emails with Meta’s Llama models the way they did with the Fork emails, but it wouldn’t be difficult for them to do so, two federal workers tell WIRED.
“We don’t know for sure,” says one federal worker on whether DOGE used Meta’s Llama to review the “five points” emails. “Though if they were smart they’d reuse their code.”
DOGE did not appear to use Musk’s own AI model, Grok, when it set out to build the government-wide email system in the first few weeks of the Trump administration. At the time, Grok was a proprietary model belonging to xAI, and access to its API was limited. But earlier this week, Microsoft announced that it would begin hosting xAI’s Grok 3 models in its Azure AI Foundry, making the xAI models more accessible in Microsoft environments like the one used at OPM. Should DOGE want it, that would make Grok available as an AI option going forward. In February, Palantir struck a deal to include Grok as an AI option in the company’s software, which is frequently used in government.
Over the last few months, DOGE has rolled out and used a variety of AI-based tools at government agencies. In March, WIRED reported that the US Army was using a tool called “CamoGPT” to remove DEI-related language from training materials. The General Services Administration rolled out “GSAi” earlier this year, a chatbot aimed at boosting overall agency productivity. OPM has also accessed software called AutoRIF that could assist in the mass firing of federal workers.