Deep Dive: Google’s AI Power Moves Explained
From PageRank to Gemini, the full story of how Google built (and nearly lost) its AI edge.
Move 37, Gemini, and the Making of Google’s AI Empire
Google has been on an absolutely massive winning streak over the last few weeks. A great reason to dive a bit deeper into how they got here: a fascinating road of the right acquisitions, long-term vision, and bumpy starts.
Let’s start at the very beginning. Google began as a Stanford project in 1996, when Larry Page and Sergey Brin built BackRub, an indexing bot that scanned the web and ranked pages with a new algorithm called PageRank. They registered google.com in September 1997 and incorporated in September 1998 after a $100,000 check from Sun Microsystems cofounder Andy Bechtolsheim, setting up in Susan Wojcicki’s Menlo Park garage. (Susan was employee #16 at Google and later the longtime YouTube CEO, before passing away from cancer in 2024.)

Wojcicki rented her garage out to Page and Brin for $1,700 a month. A year later, she joined Google as a marketing manager.
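Back to PageRank for a second, because its core idea is simple enough to sketch: a page matters if pages that matter link to it. Here’s a minimal toy version in Python, my own illustration of the idea rather than anything resembling Google’s production code:

```python
# Toy PageRank via power iteration: a page's score is the chance a random
# surfer lands on it, following links with probability d and jumping to a
# random page with probability 1 - d. Just the core idea, nothing more.
def pagerank(links, d=0.85, iters=50):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)  # p shares its score across its links
        rank = new
    return rank

# A and B both link to C, so C ends up with the highest score.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```

Every ranking signal Google added since builds on this same intuition: let the structure of the web itself tell you what is important.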
From day one the mission stayed the same and still guides the AI push today: organize the world’s information and make it universally accessible and useful.
We all know that Google didn’t remain “just a search engine”, although that product alone truly revolutionized the way we used the web. For the first time you could use one website to find other websites, instead of typing in a specific www address directly. “Google it / I Googled it” became a verb and, interestingly enough, it’s a verb that may well disappear again: I don’t think the kids growing up now will really use it in the future.
To understand how Google became the AI giant it is now, let’s look at some critically important moments and decisions, leading up to maybe the most important one: the acquisition of DeepMind.
Ad engine and measurement
AdWords 2000, AdSense 2003, Google Analytics via Urchin 2005, DoubleClick acquired 2008. Advantage: a cash machine, real-time demand signals, and deep ties with brands and agencies.
YouTube (2006 acquisition)
Advantage: the world’s video attention and a creator economy under one roof.
Android (2005 acquisition; first phones 2008)
Advantage: default presence on billions of devices and a direct path to ship new features.
Chrome (launched 2008)
Advantage: a desktop control point where one browser update can change user experience at scale.
Knowledge Graph (launched 2012; Metaweb acquired 2010)
Advantage: a structured catalog of people, places, and things that improves result quality and keeps users in the Google experience.
Maps and Waze
Google Maps launched 2005; Waze acquired 2013. Advantage: daily-use utility with local business ties and high switching costs.
Gmail and Workspace
Gmail launched 2004. Google Apps for Your Domain 2006, rebranded G Suite 2016, rebranded Google Workspace 2020. Advantage: recurring enterprise footprint and a trusted channel to roll out new capabilities.
Google Cloud (key early pillars)
App Engine 2008, BigQuery GA 2011, Compute Engine GA 2013. Advantage: enterprise relationships and infrastructure revenue that pair well with the rest of the stack.
Google stacked deals, products, and launches that plugged into everyday life. Ads and Analytics captured what people wanted, YouTube captured what people watched, Android and Chrome became the doorway on phone and desktop, Maps and Waze mapped how people move, Gmail and Workspace anchored how teams communicate, and the Knowledge Graph organized the world’s facts. Layer Cloud on top and you get pipes into enterprise data as well. The result is a company fully wired into the moments people work, watch, move, and buy, with a steady flow of real-world signals at global scale. For more than a decade that breadth and depth have given Google arguably the densest feed of intent and behavior on the internet, plus the distribution to turn those insights into products quickly.
Why this data matters for AI
All those touchpoints create the three things AI needs most: variety, freshness, and context. Variety comes from the wide range of moments Google sits in, from search and video to maps and mail. Freshness comes from products people use daily, which turns user behavior into a constant feedback loop. Context comes from seeing actions across devices and sessions, which helps systems interpret intent rather than just match keywords. The effect is simple: better signals in, better results out.
In practice this data powers four loops:
• Training: so systems learn the patterns of language, media, and behavior.
• Tuning: so results feel helpful for specific products and use cases.
• Evaluation: so teams see quickly what works and what falls short.
• Personalization and safety: so experiences adapt to the user while filtering spam and abuse.
Put together, these four loops are how Google converts that unmatched mix of signals into products, and ships improvements fast.
Google Brain: Google’s first modern AI team
Google Brain began in 2011 as Google’s first true AI team in the modern sense, led by Jeff Dean, Greg Corrado, and Andrew Ng to see what very large neural networks could do with Google’s data and compute. In 2012 the team ran a now-famous experiment where a huge system trained on YouTube frames learned to spot patterns like cats without labels, a proof that scale changes the game.
The Atlantic shared an article in 2012, covering the AI breakthrough, with the title: “The Triumph of Artificial Intelligence! 16,000 Processors Can Identify a Cat in a YouTube Video Sometimes”. Damn, we have come a long way!

But in what is being hailed as a triumph in machine learning, Google researchers turned 16,000 processors loose on 10 million thumbnails from YouTube videos to see what they could (machine) learn. This is a vast data set that's an order of magnitude larger than what had been attempted before, according to The New York Times.
What they found was that even without humans training the computers to know certain objects ("This is a cat"), the machines were able to teach themselves the features of a cat face, as you can dimly see above, among many other objects. As one of the researchers told The Times, "[The system] basically invented the concept of a cat" by looking at all those photos and looking for patterns.
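To make the “no labels” idea concrete, here’s a toy sketch in Python. It’s nothing like the 16,000-processor network, but it shows the same principle: give a system unlabeled data and let it discover the structure on its own (here with k-means clustering, a far simpler technique):

```python
# Unsupervised learning in miniature: k-means discovers groups in data
# that carries no labels at all. Google Brain's network learned far richer
# features (like cat faces) at vastly larger scale; this toy shows only
# the "no labels" principle.
import random

def kmeans(points, k=2, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        # assign every point to its nearest center
        clusters = [[] for _ in range(k)]
        for x, y in points:
            nearest = min(range(k),
                          key=lambda c: (x - centers[c][0])**2 + (y - centers[c][1])**2)
            clusters[nearest].append((x, y))
        # move each center to the mean of its cluster
        centers = [(sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two unlabeled blobs of 2-D points; k-means recovers their centers unaided.
blob_a = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)]
blob_b = [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]
print(kmeans(blob_a + blob_b))  # roughly (0, 0) and (5, 5), in some order
```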
That same year in Toronto, Geoffrey Hinton’s group with Alex Krizhevsky and Ilya Sutskever won the ImageNet photo-recognition competition by a wide margin. It was the moment industry noticed this approach could leap ahead. Sutskever later co-founded OpenAI and is now building his own company, Safe Superintelligence.
In 2013 Hinton, often called a godfather of AI, joined Google through the DNNresearch acquisition and helped spread deep-learning know-how across teams. Google Brain moved from side project to core research engine, setting up the path to TensorFlow and, soon after, Google’s custom AI chips.

The Godfather of AI, Geoffrey Hinton, left Google in 2023 for a concerning reason: he had new fears about the technology he helped usher in and wanted to speak openly about them. He has said a part of him now regrets his life’s work.
With Google Brain established and Hinton on board, Google had a working AI engine inside the company. The next step was to add a second one: DeepMind.
DeepMind: from London lab to Google’s most important AI engine
DeepMind began in London in 2010, founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman (now CEO of Microsoft AI) with a bold ambition from day one: build general-purpose learning systems on the path to AGI. “Solve intelligence, then use that to solve everything else.” Google acquired the company in early 2014 for a reported sum just over $500 million. As part of the acquisition, DeepMind’s founders insisted on a joint Google–DeepMind ethics board to oversee sensitive projects, such as AGI.
AlphaGo vs Lee Sedol: Seoul 2016 & Move 37
The first major breakthrough came fast after the acquisition, introducing: AlphaGo. DeepMind decided to build a model to play the board game Go because it was the hardest classic board game for computers. The game’s branching factor is huge and positions are hard to judge, so the brute-force tricks that worked in chess do not scale. They say there are more possible Go board positions than atoms in the observable universe. If you can make a system plan well in Go, you probably have something general enough to matter beyond games.
Their AlphaGo approach started simple: first learn from thousands of expert human games to get a feel for good moves, then improve by playing itself over and over, while a second network learned to estimate who was likely winning from any position. During play it used this “intuition plus look-ahead” combo to explore only the most promising lines rather than everything. 
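As a rough illustration of that “intuition plus look-ahead” combo, here’s a toy sketch in Python. It plays tic-tac-toe rather than Go, and it fakes both networks with cheap heuristics, but the shape is the same: a policy ranks a handful of promising moves, a value score judges positions, and the search only explores the pruned candidates:

```python
# Toy "intuition plus look-ahead" search, illustrated on tic-tac-toe.
# The real AlphaGo used deep policy/value networks plus Monte Carlo tree
# search; here cheap heuristics stand in for both networks.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def policy(board):
    """Stand-in for the policy network: rank legal moves by cheap
    'instinct' (center first, then corners) and keep only the best few."""
    prefs = {4: 3, 0: 2, 2: 2, 6: 2, 8: 2}
    moves = sorted((i for i, c in enumerate(board) if c == " "),
                   key=lambda m: -prefs.get(m, 1))
    return moves[:4]  # prune: explore a handful of lines, not all of them

def value(board, me):
    """Stand-in for the value network: who looks likely to win here?"""
    w = winner(board)
    return 1.0 if w == me else -1.0 if w else 0.0

def search(board, player, me, depth):
    """Selective look-ahead: minimax, but only over the policy's picks."""
    if winner(board) or depth == 0 or " " not in board:
        return value(board, me)
    other = "O" if player == "X" else "X"
    scores = [search(board[:m] + player + board[m+1:], other, me, depth - 1)
              for m in policy(board)]
    return max(scores) if player == me else min(scores)

def best_move(board, player):
    other = "O" if player == "X" else "X"
    return max(policy(board),
               key=lambda m: search(board[:m] + player + board[m+1:],
                                    other, player, 4))

print(best_move(" " * 9, "X"))  # picks the center (square 4) under this heuristic
```

The design point: neither the “instinct” nor the look-ahead is enough on its own. The policy keeps the search tree small, and the search checks the policy’s hunches before trusting them.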
To test it, they began privately. In October 2015 AlphaGo beat Europe’s champion, Fan Hui, 5–0 in a closed match at DeepMind’s London office.

Fan Hui did not stand a chance against the AI, losing 5–0.
Fan then worked closely with the team, advising and training in their office as AlphaGo was strengthened for a bigger stage. That stage was set for Lee Sedol. A 9-dan legend nicknamed “The Strong Stone,” Lee ranked second all-time in international titles with 18 as of early 2016 and was widely seen as the defining player of the 2000s. Many expected him to win comfortably, and Lee himself predicted a 5–0 or 4–1 result in his favor. I invite you to watch the documentary to see what happened. It’s a phenomenal thing to watch, completely free on YouTube:
For the people who will not watch it, I’ll explain what happened in short:
March 2016, Four Seasons Seoul: the room was pin-drop quiet as AlphaGo faced Lee Sedol. Game 1 set the tone. Lee (Black) played a close, steady game, but AlphaGo (White) pulled ahead late and Lee resigned. Afterward Lee said he’d made an early mistake and that AlphaGo’s early strategy was “excellent,” including an unusual move no human would likely choose.
Game 2 brought Move 37, a play so unexpected that commentators called it a blunder; Lee stepped away from the board before returning with a long, careful reply. It turned out to be brilliant. AlphaGo’s calm, whole-board control built a 3–0 lead, Lee answered with his own flash of genius in Game 4’s “God move” 78, but AlphaGo closed the series 4–1. The ripple effects were immediate: Go pros rewrote their playbooks, the “AlphaGo” documentary brought the story to a global audience, and policymakers treated it as a before-and-after moment for AI.
Move 37 showed us something very particular and very interesting though. DeepMind said a human would pick that move roughly 1 in 10,000 times. It showed the system wasn’t just copying human play or brute-forcing lines. AlphaGo combined learned “instinct” with selective look-ahead to find a line pros had overlooked, then converted the idea into a win. That moment proved AI could generate original, high-level strategy, not just mimic databases.
A question that gets asked a lot about AI is “can it create something new, or will it always just copy?” The answer might lie in Move 37. 😉

Lee Sedol vs AlphaGo - live on TV all over the world.
AGI or Generative AI?
From the start, Google was focused on the long-horizon bet: build general-purpose AI, which is why bringing DeepMind into the fold in 2014 made so much sense. Then OpenAI’s ChatGPT, powered by GPT-3.5, landed in late 2022, and the public reaction, and its success, moved faster than anyone expected.
Sundar Pichai later said the generative-AI boom “took [Google] by surprise.” Google triggered a company-wide “code red,” reassigning teams and accelerating product plans, even as leaders like Jeff Dean reminded staff that the company had held back similar chatbots in 2022 over reputational risk.
It was clear, though, that they had to pivot. When the AI hype train started in 2022, I had always believed Google was the clear, clear favorite to build the best AI possible, considering everything they had and everything they had already achieved. But damn, what a rocky start it was.
From the wobble to the win
In February 2023 Google demoed its new chatbot Bard and it, euuuh, kinda sucked: a promotional video showed Bard giving an incorrect answer(!), wiping $100 billion off Alphabet’s stock price in a day. A year later Google rebranded Bard to Gemini. A fresh start? It all went sideways again when Google had to pause its image generation of people after historically inaccurate outputs like “diverse” Nazis and Vikings.

It was clear that Google was caught off guard by the impact of GenAI and was playing catch-up. I kinda thought this might even cost Sundar Pichai his job, as the backlash on Google was enormous. Fast forward two years and the opposite is true. Sundar has shown incredible resilience and many of his decisions have proven to be the right ones.
Sundar Pichai collapsed the split system into one unit called Google DeepMind in April 2023, put Demis Hassabis in charge, and moved Jeff Dean into a company-wide Chief Scientist role. The new group kept research stars but added clearer product ownership: Koray Kavukcuoglu leading research, Eli Collins running product, and a scientific board to steer priorities. The mandate was simple: one roadmap, faster decisions, and tighter handoffs from lab to product.   
The direction showed up fast in the Gemini era. Google framed Gemini 1.0 in December 2023 as the first realization of the newly unified team, then pushed 1.5 with a breakthrough long-context window two months later.
An early strategic decision now seems to be bringing a lot of success to the Google team: Gemini would be multimodal from day one. One family of models, built in different sizes, all wired back into the same spine. Search, Workspace, Android, YouTube: every surface would run on Gemini. That bet is paying off.
Veo 3 – embedded in Gemini and Vertex AI, generates video with realistic motion, physics, and synchronized audio.
Genie 3 – acts as a “world model,” producing playable environments in real time, a glimpse of interactive media’s future.
Gemini 2.5 Flash Image (aka Nano Banana) – delivers quick, consistent image generation and editing inside AI Studio.

Nano Banana relighting: RAW, ISO 100, f/2.8, 1/200s, 24mm
Three products, three different domains, all drawing strength from the same Gemini backbone. The effect is clear: instead of scattered experiments, Google ships a coordinated wave of models that improve together.
This is also where Google’s approach differs from OpenAI’s, which took a different route. GPT-3 and GPT-3.5 were text-only, GPT-4 added vision, and true real-time multimodality only arrived with GPT-4o in 2024. And multimodality came together differently than it did for Gemini: for images OpenAI used DALL·E, for speech Whisper, and for video Sora, separate model families that are gradually being woven together. In other words, Google built “one spine, many surfaces,” while OpenAI has built “many models, stitched together.” Both work, but they create very different paths for scale and speed.
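A rough way to picture the difference in code, purely my own illustration (neither company’s real API looks like this): one object whose single model accepts every modality, versus a pipeline that routes each modality to a specialist model and stitches the outputs together:

```python
# Two architectures for multimodality, sketched as plain Python.
# Hypothetical classes for illustration only.

class UnifiedModel:
    """'One spine, many surfaces': a single model ingests every modality,
    so text, images, and audio are reasoned about jointly."""
    def generate(self, prompt, images=None, audio=None):
        parts = [prompt] + (images or []) + ([audio] if audio else [])
        return f"one model reasons jointly over {len(parts)} input part(s)"

class StitchedPipeline:
    """'Many models, stitched together': specialist models per modality,
    glued at the application layer with hand-offs between them."""
    def __init__(self, text_model, image_model, speech_model):
        self.text, self.image, self.speech = text_model, image_model, speech_model

    def generate(self, prompt, images=None, audio=None):
        caption = self.image(images) if images else ""     # image specialist
        transcript = self.speech(audio) if audio else ""   # speech specialist
        # each hand-off flattens rich inputs into text before the LLM sees them
        return self.text(f"{prompt} {caption} {transcript}".strip())

# Toy usage with stub "models":
pipe = StitchedPipeline(lambda p: f"text-model({p})",
                        lambda i: "[caption of image]",
                        lambda a: "[transcript of audio]")
print(UnifiedModel().generate("describe this", images=["img.png"]))
print(pipe.generate("describe this", images=["img.png"]))
```

The trade-off in one line: the stitched pipeline lets each specialist improve independently, while the unified spine means every surface inherits every improvement at once.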
Beyond products: the big promises of AI
It’s easy to get caught up in chatbots, image models, and the product race with OpenAI. But Google, through DeepMind, has also stayed close to the bigger promise of AI: using these systems to push science forward. The most famous is AlphaFold, which cracked the 50-year protein-folding puzzle, released a public database of more than 200 million protein structures, and earned Demis Hassabis and John Jumper a share of the 2024 Nobel Prize in Chemistry (alongside David Baker, honored for computational protein design).
And AlphaFold is just one of many. Over the past few years DeepMind has delivered a string of scientific breakthroughs that show what AI can do when pointed at the hardest problems in the world:
• AlphaMissense (2023): Predicted the effects of nearly all possible missense mutations in human DNA, helping researchers pinpoint which genetic changes might cause disease.
• Weather models (2021–2023): GraphCast produced more accurate medium-range weather forecasts than traditional systems, now used for early warnings on extreme weather.
• Fusion energy (2022): Reinforcement learning models trained to stabilize plasma inside nuclear fusion reactors, a step toward viable clean energy.
• Materials discovery (2022–2023): Identified structures for 2.2 million new crystalline materials, opening pathways to stronger batteries, electronics, and solar cells.
• Medical AI (2016–2021): Systems that can spot eye disease from scans, predict kidney injury hours in advance, and assist radiologists in breast-cancer detection.

Demis Hassabis (left) and John Jumper (right) receiving their Nobel Prizes in Chemistry (2024)
Closing thoughts
I’ll admit it: from the very beginning I thought Google was destined to be the clear winner in this AI race. They had the data, the talent, and the history. That’s why their early f*** ups shocked me so much. But the turnaround since then has been nothing short of remarkable. Sundar’s call to unify Brain and DeepMind, the discipline of building Gemini as one multimodal spine, and the pace of recent releases have put Google right back at the front.
I’m in absolute awe of how quickly they’ve reset the narrative, and I have massive respect for the resilience it took to pull that off inside a company as large as Google. With Gemini 3 rumored to be around the corner, I’ll be honest: if it’s as good as we hope, I’ll probably end my ChatGPT Plus subscription and go all in on Google.
The race is heating up. OpenAI was the clear frontrunner for a long time, but Google and xAI, as I wrote about recently, have now officially caught up. The next year will be the most exciting yet. Let’s see where it takes us.
PS, some study material from the DeepMind legends below:
The following book by Mustafa Suleyman is a great read but I must warn you, when I read it I had a moral crisis and an incredible urge to help with what is coming:
PS... If you’re enjoying my articles, will you take 6 seconds and refer this to a friend? It goes a long way in helping me grow the newsletter (and help more people understand our current technology shift). Much appreciated!
PS 2... and if you’re really loving it and want to buy me a coffee to show support, feel free! 😉
Thank you for reading and until next time!

Who am I and why you should be here:
Over the years, I’ve navigated industries like advertising, music, sports, and gaming, always chasing what’s next and figuring out how to make it work for brands, businesses, and myself. From strategizing for global companies to experimenting with the latest tech, I’ve been on a constant journey of learning and sharing.
This newsletter is where I’ll bring all of that together—my raw thoughts, ideas, and emotions about AI, blockchain, gaming, Gen Z & Alpha, and life in general. No perfection, just me being as real as it gets.
Every week (or whenever inspiration hits), I’ll share what’s on my mind: whether it’s deep dives into tech, rants about the state of the world, or random experiments that I got myself into. The goal? To keep it valuable, human, and worth your time.