How Meta’s new AI chatbot could strike up a conversation with you

July 7, 2025

Andriy Onufriyenko/Getty Images

Despite a major gap between usage and monetization, Meta, like many AI companies, is going all in on AI chatbots, even giving them the ability to strike up a conversation with you, unprompted.

According to a report from Business Insider published last week, leaked documents indicate the company is now building AI chatbots that proactively initiate conversations with users. The new feature is intended to boost user engagement and retention at a time when many leading tech developers are looking for new ways to commercialize conversational AI chatbots, which have absorbed vast sums of R&D spending. OpenAI's ChatGPT, for example, will often end its responses to user queries with suggestions for follow-up questions aimed at keeping the user engaged.

Also: Only 8% of Americans would pay extra for AI, according to ZDNET-Aberdeen research

BI reported that the proactive chatbot effort is being coordinated in partnership with Alignerr, a company that employs contractors with expertise across various fields to help label the training data AI models ingest. The chatbots are referred to internally by Alignerr as “Project Omni.” 

ZDNET has reached out to Meta for comment on the leaked documents.

Chatbots will only follow up with a user after that user has initiated a previous conversation, a Meta spokesperson told BI. If the user doesn't respond, the chatbot will take the hint and go quiet. Follow-up messages will only be sent if the user has exchanged more than five messages with the chatbot within a 14-day period.
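
As described, the reported rule amounts to a simple eligibility check. The sketch below is purely illustrative: the function and parameter names (should_send_follow_up, user_initiated, and so on) are hypothetical, not Meta's implementation, and it only encodes the criteria attributed to the company.

```python
from datetime import datetime, timedelta

FOLLOW_UP_WINDOW = timedelta(days=14)
MIN_USER_MESSAGES = 5  # reported threshold: more than five messages in the window


def should_send_follow_up(user_initiated: bool,
                          user_message_times: list[datetime],
                          last_follow_up_answered: bool,
                          now: datetime) -> bool:
    """Hypothetical eligibility check reflecting the criteria BI reported."""
    # The user must have started the conversation in the first place.
    if not user_initiated:
        return False
    # If a previous follow-up went unanswered, the bot goes quiet.
    if not last_follow_up_answered:
        return False
    # Count the user's messages within the trailing 14-day window.
    recent = [t for t in user_message_times if now - t <= FOLLOW_UP_WINDOW]
    return len(recent) > MIN_USER_MESSAGES
```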

Project Omni is an extension of Meta’s AI Studio, a platform the company launched last summer that allows users to create custom chatbots with distinct personas that can remember information from previous conversations. The platform has also been positioned as a kind of digital assistant for celebrity influencers, responding to messages across Meta’s family of apps on their behalf. 

Also: How I used ChatGPT to quickly fix a critical plugin – without touching a line of code

Apps like Character.ai and Replika also allow their AI chatbots to initiate conversations with users as a means of boosting engagement. That model, however, carries serious potential hazards: Character.ai has been hit with a lawsuit alleging that its technology played a role in the suicide of a 14-year-old boy who, according to The New York Times, developed an "obsession" with a chatbot on the app.

According to the BI report, Alignerr freelancers are actively training Meta's newly proactive chatbots to ensure they deliver personalized and engaging follow-up messages.

The bots are intended to reference details from previous conversations with users while sticking to their designated personas, which can range from a chef to a doctor to a classical composer. They are also trained to steer clear of controversial or emotionally charged topics unless their human conversation partners raise them first.

Also: The AI complexity paradox: More productivity, more responsibilities

Meta could eventually position its more proactive chatbots as part and parcel of CEO Mark Zuckerberg's stated mission to alleviate loneliness. In a recent conversation with podcaster Dwarkesh Patel, Zuckerberg (dubiously) claimed that the average American has fewer than three friends, and suggested that AI chatbots could help fill the void in an increasingly isolated social environment. Beyond the Character.ai lawsuit, however, researchers have raised concerns about users treating these chatbots as therapists or companions.