Meta vs. OpenAI: The $100M Battle for AI’s Brightest Brains
True stat: Top AI researchers are worth 50x their weight in gold, at least according to Meta.
Good morning AI entrepreneurs & enthusiasts,
Meta’s recruiting blitz just scored a major haul, with four OpenAI researchers jumping ship to Zuck’s new superintelligence team.
OpenAI CEO Sam Altman has shown confidence in retaining staff despite $100M offers, but Meta’s deep pockets are clearly talking — and its new unit is starting to take shape in a big way.
IN TODAY’S AI NEWS:
Meta poaches four OpenAI researchers
Google’s Gemma 3n brings powerful AI to devices
Anthropic studies Claude’s emotional support
Today's Top Tools & Quick News
Meta poaches four OpenAI researchers
The News: Meta has reportedly recruited four OpenAI researchers for its new superintelligence unit, including three from OpenAI’s Zurich office and one key contributor to the AI leader’s o1 reasoning model.
The details:
Zuckerberg personally recruited Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai, the trio that established OpenAI’s Zurich operations last year.
Meta also landed Trapit Bansal, a foundational contributor to OpenAI's o1 reasoning model who worked alongside co-founder Ilya Sutskever.
Sam Altman said last week that Meta had offered $100M bonuses in poaching attempts, but “none of OpenAI’s best people” had taken the offer.
Meta’s hiring spree comes after its $15B investment in Scale AI and its hiring of Scale CEO Alexandr Wang to lead the new division.
Why it matters: Meta’s new superintelligence team is taking shape — and despite Altman’s commentary last week, at least four of his researchers are willing to make the move. With an influx of new talent from top labs and a clear willingness to spend at all costs, Meta’s first release from the new unit will be a fascinating one to watch.
Google’s Gemma 3n brings powerful AI to devices
The News: Google launched the full version of Gemma 3n, its new family of open AI models (in E2B and E4B sizes) designed to bring powerful multimodal capabilities to mobile and consumer edge devices.
The details:
The new models natively understand images, audio, video, and text, while being efficient enough to run on hardware with as little as 2GB of RAM.
Built-in vision capabilities analyze video at 60 fps on Pixel phones, enabling real-time object recognition and scene understanding.
Gemma’s audio features translate across 35 languages and convert speech to text for accessibility applications and voice assistants.
Gemma’s larger E4B version becomes the first model under 10B parameters to surpass a 1300 score on the competitive LMArena benchmark.
Why it matters: The full Gemma release is another extremely impressive launch from Google, with models continuing to get more powerful even as they shrink to fit consumer hardware. These small, open models open up a huge range of intelligent on-device use cases.
Anthropic studies Claude’s emotional support
The News: Anthropic published new research on how Claude is used for emotional support and affective conversations, finding such use is far less common than media narratives suggest, with companionship and roleplay accounting for under 0.5% of interactions.
The details:
Researchers analyzed 4.5M Claude conversations using Clio, a tool that aggregates usage patterns while anonymizing individual chats.
The analysis found that only 2.9% of conversations involved emotional support, with most of those focused on practical concerns like career transitions and relationship advice.
Despite media narratives, the study showed that conversations seeking companionship or engaging in roleplay made up less than 0.5% of total use.
Researchers also noted that users' expressed sentiment often grew more positive over the course of a chat, suggesting AI didn’t amplify negative spirals — though this reflects short-term sentiment changes rather than lasting outcomes.
Why it matters: While recent media has highlighted extreme cases of AI romance and dependency, the data shows these are rare among Claude users. Anthropic’s audience skews toward developers and more technical use cases compared to platforms like ChatGPT or Character AI — so usage patterns may differ across models. Still, this research provides a much-needed corrective to exaggerated narratives around AI companionship.
Today's Top Tools:
Gemini CLI - Open-source terminal agent with high free usage limits
Higgsfield Soul - New high-aesthetic photo model with advanced realism
AlphaGenome - DeepMind’s new AI model for DNA analysis
Voice Design V3 - Create any voice you can imagine with a prompt
Quick News:
Black Forest Labs released FLUX.1 Kontext [dev], an open-weight, SOTA image editing model that can efficiently run on consumer hardware.
DeepSeek’s R2 model has faced setbacks, with export controls creating Nvidia chip shortages and CEO Liang Wenfeng reportedly unhappy with the model’s performance.
OpenAI rolled out a series of updates, including Deep Research via API and Web Search in o3 and o4-mini, and announced its next DevDay event, slated for Oct. 6 in San Francisco.
HeyGen introduced HeyGen Agent, a “Creative Operating System” that creates video content with scripts, actors, edits, and more from a simple text prompt, image, or video.
Google launched Doppl, a new experiment on its Labs platform, allowing users to create AI-generated try-on videos from a photo and a product.
Meta became the latest AI company to earn a favorable “fair use” ruling in court, winning a lawsuit brought by authors over copyright infringement.
Suno announced the acquisition of WavTool, bringing the startup’s browser-based digital audio workstation to the platform for more advanced music creation.
Thanks for reading this far! Stay ahead of the curve with my daily AI newsletter—bringing you the latest in AI news, innovation, and leadership every single day, 365 days a year. See you tomorrow for more!
The headline numbers around Meta’s hires are eye-catching, but I’m more intrigued by what it means for knowledge diffusion. Will pulling senior scientists into one “super-team” accelerate deployment, or create new silos? What’s your take?
Zuckerberg’s personal courtship of the trio that founded OpenAI’s Zurich office highlights how relational this competition is becoming. Are we at risk of turning AI research into a superstar economy, with all its distortions? Interested to hear whether you think this helps or hinders the pace of innovation.