What’s behind the AI talent gold rush?

June 20, 2025

[Photo] Alexandr Wang, co-founder of Scale AI, at the company’s San Francisco offices. Meta has invested $15bn in the start-up and hired Wang in one of the most expensive so-called ‘acqui-hires’ © Jeff Chiu/AP

The writer is a former editor-in-chief of Wired magazine and writes Futurepolis, a newsletter on the future of democracy

Even by Silicon Valley’s historically rarefied standards, Big AI is spending stratospheric amounts on talent this year. Meta has invested $15bn in Scale AI, a data labelling start-up that claims just 900 employees. Scale’s 28-year-old chief executive, Alexandr Wang, will take up a job at a new Meta lab devoted to creating AI “superintelligence”. His cash and equity in the deal are reported to be worth some $5bn, making him one of the most expensive so-called “acqui-hires” on record.

Meta is also reportedly offering $100mn sign-on bonuses to lure researchers from other artificial intelligence companies to its lab. OpenAI, meanwhile, has paid $6.4bn for io, the boutique design firm led by Apple’s former top designer Jony Ive. And bidding wars from rivals keen to hire its top researchers have led it to pay up to $2mn in retention bonuses to employees whose existing packages already reach eight figures.

What’s behind this gold rush? It’s not a shortage of talent, per se. The San Francisco Bay Area is awash with unemployed software engineers, a result of the industry shedding jobs after the pandemic and adopting AI coding tools.

Rather, the eye-watering figures are a marker of how hard it is for the biggest AI companies to build an unassailable “moat”, or competitive advantage. Their models jostle for the top spot in performance while scrappier, cheaper alternatives from rivals like China’s DeepSeek nip at their heels. Data centres and the chips that fill them are a commodity, albeit a very expensive one. That leaves two areas in which companies can hope to steal a march: data and talent.

Scale AI is both a talent and a data play. The company’s main business is providing high-quality annotated data for training AI models. Now that the big AI companies have scraped most of the internet, Scale’s labelling work can help them improve the quality of their models. Meta’s Mark Zuckerberg must be desperate for his company to remain a serious player in AI after its most recent big model release, Llama 4, was underwhelming.

But the war for talent is not just about output — it’s about perception. A start-up’s ability to attract investors and a listed company’s ability to keep its stock price up are both aided by the buzz generated by a few superstar minds.

Relative to its projected revenue and the size of its team, the Scale deal is among the costliest major tech acquisitions ever. But it is not a one-off. In 2014, Facebook acquired WhatsApp for $21.8bn when the messaging company had just 55 people. Its founders, Brian Acton and Jan Koum, both joined Facebook as part of the deal.

Still, the amounts on offer for AI talent are unusual. The money comes from three sources. The first is the relentless pursuit of profits by Big AI and chip companies, abetted by the US government’s determination to maintain America’s lead in AI over China.

Second is the blistering pace of advances in AI tools, and the rush by other industries to sprinkle AI dust over everything for fear of falling behind. Many of these investments have not yet yielded productivity gains, but Fomo is a powerful force.  

The third factor is that Big AI, for better or worse, has locked itself into a race for artificial general intelligence, or AGI. This is the notional point at which AI reaches and then surpasses human capabilities. In reality, some of the world’s most legendary AI experts, including Meta’s own Yann LeCun, argue that large language models won’t do the trick and that research into new approaches will be needed. AGI itself may be a mirage; definitions of it vary widely, and it may well be that what the future holds is not humanlike AI but many different, highly capable, more specialised kinds.

This is why this race for talent requires more than just cash. Culture and mission matter, too. Anthropic and Safe Superintelligence, both created by former OpenAI employees, put emphasis on creating “safe” AI, for example. Anthropic, which is reported to give its researchers more autonomy, does well at retention.

OpenAI, on the other hand, has lost many of its best people in recent years after rifts with the top leadership. At Meta, researchers have cited its neglect of blue-sky research among their reasons for leaving. This year, it lost Joelle Pineau, the head of Fair, its prestigious AI research lab. 

These talent wars show no sign of slowing down. The chieftains of AI have staked their reputations on being the first to AGI. As long as greed, fear and the dream of superintelligence persist, vast riches for top talent will keep flowing.

 
