DeepMind Marks 10 Years Since AlphaGo Changed AI Forever
Google DeepMind CEO Demis Hassabis has published a retrospective marking ten years since AlphaGo defeated world Go champion Lee Sedol, outlining how that 2016 victory spawned breakthroughs from Nobel Prize-winning protein prediction to what he calls a clear path toward artificial general intelligence.
The March 2016 match in Seoul—where AlphaGo won 4-1 against the 18-time world champion—wasn't supposed to happen for another decade. Go's complexity, with more possible board positions than atoms in the observable universe, had long been considered AI's Mount Everest. Move 37 in game two, a play so unconventional that commentators initially thought it was a mistake, became the moment many researchers point to as AI's creative awakening.
From Board Games to Biology
The search and reinforcement learning techniques that powered AlphaGo's victory have since been repurposed for problems of real scientific consequence. AlphaFold 2, which cracked the 50-year protein folding challenge in 2020, drew on similar architectural principles. The system has now predicted structures for all 200 million known proteins, with over 3 million researchers accessing the free database for work ranging from malaria vaccines to plastic-degrading enzymes.
That work earned Hassabis and colleague John Jumper the 2024 Nobel Prize in Chemistry—a rare instance of AI research receiving science's highest honor.
Mathematical Reasoning Hits New Heights
AlphaProof, described as AlphaGo's "most direct descendant," combines language models with the original system's reinforcement learning to prove formal mathematical statements. Alongside AlphaGeometry 2, it achieved silver-medal performance at the 2024 International Mathematical Olympiad—the first AI system to reach that benchmark.
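For readers unfamiliar with formal mathematics: AlphaProof works in the Lean proof assistant, where statements and proofs are written in a machine-checkable language. A trivially small example of what such a formal statement looks like (this toy theorem is illustrative, not one of the Olympiad problems):

```lean
-- A formal statement and its proof in Lean 4.
-- The proof checker verifies this mechanically; there is no room
-- for the hand-waving a human-written proof might get away with.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Olympiad-level proofs are vastly longer, but the principle is the same: a proof either checks or it doesn't, which gives the reinforcement learning loop an unambiguous reward signal.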
Gemini's Deep Think mode pushed further, hitting gold-medal standard at the 2025 IMO. That same approach now tackles open-ended scientific and engineering challenges.
AlphaEvolve, DeepMind's coding agent, had what Hassabis calls its own "Move 37 moment" when it discovered a novel matrix multiplication method—a fundamental operation underlying virtually all modern neural networks. The system is currently being tested on data center optimization and quantum computing problems.
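AlphaEvolve's actual scheme (for 4×4 complex matrices) isn't reproduced here, but the underlying idea has a classic precedent: trading scalar multiplications, the expensive operation, for extra additions. As an illustration only, Strassen's well-known 1969 construction multiplies two 2×2 matrices with 7 multiplications instead of the naive 8:

```python
# Illustration of the multiplication-saving idea, NOT AlphaEvolve's algorithm.
# Strassen (1969): multiply 2x2 matrices with 7 scalar multiplications.

def strassen_2x2(A, B):
    """Multiply 2x2 matrices (lists of lists) using 7 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

def naive_2x2(A, B):
    """Standard 8-multiplication product, for comparison."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)
```

Applied recursively to block matrices, the one-multiplication saving compounds, which is why discoveries of this kind matter for workloads, like neural network training, that are dominated by matrix products.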
The AGI Roadmap
Hassabis's post makes DeepMind's AGI strategy explicit: combine Gemini's multimodal world models, AlphaGo's search and planning techniques, and specialized AI tools like AlphaFold into a unified system. The goal isn't just an AI that can devise a winning Go strategy, but one capable of inventing "a game as deep and elegant, and as worthy of study as Go."
An AI co-scientist system, currently in validation studies at Imperial College London, already demonstrates this potential. By having AI agents "debate" hypotheses, the system independently reproduced antimicrobial resistance findings that took human researchers years to develop.
Ten years ago, AlphaGo proved machines could master a game humans had played for 2,500 years. The techniques it pioneered are now being applied to fusion energy, weather prediction, and genomics. Whether that path leads to AGI remains uncertain, but DeepMind is betting that Move 37 was just the opening.