Facebook’s former top executive and now OpenAI’s CEO of Apps has a ‘message’ for Mark Zuck

November 20, 2025

Fidji Simo, the CEO of Apps at ChatGPT-maker OpenAI, has revealed a few shortcomings of her previous employer, Meta. Simo, formerly Head of the Facebook app at the social media giant, said the Mark Zuckerberg-led company failed to do one thing well: adequately anticipate the societal risks created by its products. In an interview with Wired, Simo stated that this failure is a primary area of focus and responsibility in her new role at the AI firm.

She said, “I would say the thing that I don’t think we did well at Meta is actually anticipating the risks that our products would create in society.”

At OpenAI, however, she quickly started projects to address these potential problems. Her first two areas of focus are mental health and jobs, as she sees both being changed by AI.

“Mental health and jobs were my first two initiatives when I came into the company. I was looking at the landscape and being like, ‘Yep, immediately, mental health is something that we need to address. Jobs are clearly going to face some disruption, and we have a role to play to help minimise that disruption,’” Simo explained.

Simo also acknowledged how difficult these issues are to address, because there isn’t a clear path forward. Even so, she feels OpenAI is set up to handle this important responsibility.

“That’s not going to be easy, because the path is uncharted. So it is a very big responsibility, but it’s one that I feel like we have both the culture and the prioritisation to really address up-front,” she added.

Asked how she feels OpenAI is doing on mental health right now, Simo said: “Just in the span of the last few months, we have massively reduced the prevalence of negative mental health responses. We have launched parental controls with leading protections. And we are working on age prediction to protect teens.

“At the same time, when you have 800 million people [per week], when we know the prevalence of mental health illnesses in our society, of course, you are going to have people turn to ChatGPT during acute distress moments. And doing the right thing every single time is exceptionally hard. So what we’re trying to do is catch as much as we can of the behaviours that are not ideal and then constantly refine our models.

“It’s not as if we’re ever going to reach that point where we’re done. Every week, new behaviours emerge with features that we launch where we’re like, ‘Oh, that’s another safety challenge to address.’ A good example is mania. You look at the transcripts, and sometimes people say, ‘I feel super great. I haven’t slept in two days, and I feel on top of the world.’ A clinical psychologist would understand that that’s not normal; that’s mania. But if you look at the words, it seems fine. So we work with psychologists to detect the signal that this isn’t someone being super excited, this is a sign of mania, and have a strategy to intervene.

“Getting it wrong is also really annoying. If you’re a normal adult who is simply excited and ChatGPT tells you, ‘Hey, you might be having a manic episode,’ that’s not great. It is a very subtle area, and we’re trying to do it with as much care and as much external input as possible.”
