AI Week in Review: Is This China's Next DeepSeek Moment?
303,792 AI headlines dropped this week. 99% were noise, but these signals cut through: MiniMax shattered the context ceiling with a million-token open model, while its sibling Hailuo 02 unlocked director-level 1080p video at bargain prices. Midjourney joined the motion parade with its first video-generation model, and OpenAI staked a claim in Washington with a $200M Pentagon deal, just as watchdogs cracked open its governance files.
In the labs, MIT’s SEAL showed models can now tutor themselves, and Stanford reminded us that workers want copilots, not replacements. The signal is unmistakable: the race is shifting from brute horsepower to longer attention spans, richer modalities, and human‑first alignment. Let's dive in...
MINIMAX 🧠 1M Token Context Open Reasoner Debuts
The News: MiniMax, a rising Chinese AI firm, has launched M1 — an open-source reasoning model with a 1 million token context window and performance rivaling leading models at lower cost.
The details:
M1 supports 1M input tokens with an 80K-token output budget, well beyond OpenAI's GPT-4o or DeepSeek R1.
Excels at software-engineering reasoning and tool use; leads on long-context benchmarks.
Trained using CISPO, a new reinforcement-learning technique that MiniMax credits with roughly 2x faster training (a hedged sketch follows this list).
The entire reinforcement-learning run cost just $535K over three weeks.
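For the technically curious: MiniMax describes CISPO as clipping the importance-sampling weight itself rather than the per-token update, so no token's gradient gets thrown away the way PPO's clipping discards it. Below is a minimal PyTorch sketch of that idea; the function name, tensor shapes, and hyperparameter value are illustrative assumptions, not MiniMax's published code.

```python
import torch

def cispo_loss(logp_new, logp_old, advantages, eps_high=2.0):
    """Minimal CISPO-style objective (illustrative sketch, not MiniMax's code).

    logp_new / logp_old: per-token log-probs under the current / behavior policy.
    advantages: per-token advantage estimates. eps_high is a placeholder value.
    """
    ratio = torch.exp(logp_new - logp_old)  # per-token importance-sampling weight
    # Clip the IS weight and detach it: unlike PPO, tokens whose weight gets
    # clipped still contribute a policy gradient through logp_new below.
    weight = torch.clamp(ratio, max=1.0 + eps_high).detach()
    # REINFORCE-style term: constant weight x advantage x log-probability.
    return -(weight * advantages * logp_new).mean()
```

Read this as a sketch of the mechanism, not a recipe for reproducing the $535K run.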
Why it matters: Think of this as China’s next DeepSeek moment: a moonshot that shatters the price-to-power curve. By vaulting to a million-token runway for the cost of a seed round, MiniMax proves scale no longer belongs solely to trillion-parameter titans. It signals an open-source juggernaut red-lining past Western cost curves and announcing that Beijing’s AI engine isn’t just back in the race; it’s fighting in a new weight class.
Is this China’s next DeepSeek Moment?
OpenAI secures $200M Pentagon contract
The News: OpenAI has launched "OpenAI for Government", consolidating its federal operations and announcing a $200M deal with the Department of Defense focused on AI for national security and administrative support.
The details:
This marks OpenAI’s official entry as a DoD contractor, with the work centered in Washington, D.C.
ChatGPT Enterprise and ChatGPT Gov will help military personnel navigate administrative processes, while custom models will tackle cybersecurity and operational support.
Existing collaborations with NASA, NIH, the Air Force, and Treasury are now unified under this new initiative.
The contract’s scope covers developing “prototype frontier AI” capabilities for both enterprise and warfighting needs, delivered through performance-based task orders.
Why it matters: Governments are stepping deeper into the AI arms race. This move cements OpenAI’s position as a major federal AI supplier, positioning it within the U.S. national security tech stack and increasing its influence. At the same time, it raises ethical concerns around military applications of generative AI, surveillance, and the evolving role of AI companies in geopolitics. As rivals like China continue advancing military AI, this contract reflects the intensifying global race for AI dominance.
Do you support AI military applications? Yes/no → why?
AI watchdogs publish OpenAI governance files
The News: Two oversight groups, The Midas Project and the Tech Oversight Project, have published the OpenAI Files — a transparency-focused archive uncovering internal tensions, leadership issues, and structural risks within OpenAI.
The details:
The archive compiles testimonies, org charts, and internal documents to expose structural risks in OpenAI’s leadership.
It identifies four core issues: CEO behavior, restructuring concerns, safety gaps, and organizational opacity.
The documents trace OpenAI’s evolution from nonprofit to capped-profit entity to its planned public benefit corporation (PBC) structure.
A reform blueprint titled "Vision for Change" proposes accountability standards for AGI companies.
Why it matters: As AGI development accelerates, the question of who governs and benefits from these technologies becomes urgent. The OpenAI Files bring internal politics and risk into public view, amplifying calls for stronger oversight and ethical governance at AI labs building transformative systems.
Claude 4 Opus Rebuts Apple’s Reasoning Claim
The News: Apple’s research paper, “The Illusion of Thinking,” argued that advanced LLMs collapse on reasoning tasks. But a rebuttal co-authored by Claude 4 Opus and Alex Lawsen, titled "The Illusion of the Illusion of Thinking" challenges those conclusions.
The details:
Apple ignored output-token limits: models hit their budgets mid-solution, and Claude Opus explicitly noted it was truncating to save tokens yet was still marked as failing.
Apple’s tests included mathematically unsolvable River Crossing instances, penalizing models that correctly identified the impossibility.
Grading scripts required exhaustive move-by-move listings, failing models even when they provided correct programmatic solutions (see the sketch after this list).
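To make that last point concrete: Tower of Hanoi was among Apple’s test puzzles, and a few lines of code generate the provably optimal move sequence without spelling it out token by token. This snippet is my illustration of the rebuttal’s argument, not code from either paper.

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Yield the optimal Tower of Hanoi move sequence for n disks (2**n - 1 moves)."""
    if n == 0:
        return
    yield from hanoi(n - 1, src, dst, aux)  # move the n-1 smaller disks aside
    yield (src, dst)                        # move the largest disk
    yield from hanoi(n - 1, aux, src, dst)  # restack the n-1 disks on top

# 15 disks means 32,767 moves: a compact, verifiably correct answer that a
# grader demanding every move written out in the transcript scores as a failure.
print(len(list(hanoi(15))))  # 32767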
Why it matters: This is the AI world's subtweet. Apple, still trailing in the model race, took a swing at reasoning models; Claude 4 Opus, one of the models in its crosshairs, fired back with a co-authored rebuttal that did more than defend itself: it flipped the script. It turns out the only real failure was Apple’s testing setup, not the models’ reasoning.
Thanks for reading this far! Stay ahead of the curve with my daily AI newsletter—bringing you the latest in AI news, innovation, and leadership every single day, 365 days a year. See you tomorrow for more!
Seeing MiniMax crash the cost curve makes open-source suddenly look inevitable. Could that undercut closed subscriptions like ChatGPT Enterprise? I’d love to hear thoughts on how the Pentagon contract influences pricing power.
The article frames a pivot from horsepower to attention span—MiniMax delivered exactly that. I’m curious whether U.S. labs mirror the trend or counter with stricter oversight angles given the governance spotlight on OpenAI.