The Next Data Centers Won't Be on Earth: AI Update #7
Plus: What you need to know about today's launch of ChatGPT-5.2
👋 Hey there, I’m Aakash. In this newsletter, I cover AI, AI PM, and getting a job. This is your weekly AI update. For more: Podcast | Cohort
Annual subscribers get a free year of 9 premium products: Dovetail, Arize, Linear, Descript, Reforge Build, DeepSky, Relay.app, Magic Patterns, and Mobbin (worth $28,336).
Welcome back to the AI Update.
The most interesting thing in AI this week wasn’t GPT-5.2 or Cursor’s new features. It was the debate around AI data centers in space.
It sounds like science fiction. But Elon Musk, Jeff Bezos, and Sundar Pichai have all said it’s the future of AI compute.
So today, I present the web’s first deep dive on AI data centers in space, including an exclusive interview with the CEO of Starcloud, the first company to build a data center in space.
Plus, in today’s weekly update, I cover all the AI news you need to know.
Finally, I end with a recap of my sit-down with Wes Bush on AI and PLG.
Bolt.new: AI Prototyping Like the Best
Did you know that I built my cohort’s site on bolt.new? That’s right: ever since I had their CEO Eric Simons on my podcast, I have become a super fan of the product.
For PMs, it’s the ultimate tool to build high-fidelity prototypes fast. In multiple head-to-head battles, I have found it wins on ease of use.
I personally recommend this tool. If you’ve only used Lovable, use my link to try Bolt!
Top News this Week: GPT-5.2 Released
The result of Sam Altman’s “code red” after the Gemini 3 launch is finally here. It’s GPT-5.2, and it predictably puts OpenAI back on the throne for having the best model:
But that’s just the eval performance. The biggest thing you should know about GPT-5.2 is its amazing gains in cost-efficient performance.
Look at this chart on ARC-AGI-1:
A year ago, hitting 88% on ARC-AGI-1 cost an estimated $4,500 per task. Today, 90.5% costs $11.64. That’s 390x cheaper in 12 months.
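The multiple checks out; here’s the trivial arithmetic using the two figures above:

```python
# Quick check of the cost-per-task improvement cited above
old_cost_per_task = 4500.00   # est. cost to hit ~88% on ARC-AGI-1 a year ago
new_cost_per_task = 11.64     # cost to hit 90.5% today
print(f"{old_cost_per_task / new_cost_per_task:.0f}x cheaper")  # ≈ 387x, i.e. roughly 390x
```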
As product builders, we can continue to expect frontier-level performance to get cheaper and cheaper.
There’s a million AI news articles, resources, tools, and fundraises every week. Here’s what mattered:
News
Nvidia received approval to sell H200s to China with a 25% cut going to the US government. Beijing is reportedly limiting access anyway to push domestic chip self-sufficiency. Days earlier, the DOJ uncovered $160M in illegal chip smuggling through relabeled “Sandkyan” chips.
The Pentagon launched GenAI.mil, a custom Gemini chatbot for 3M+ military personnel through a $200M Google deal. It’s one of the largest enterprise Gemini deployments yet.
Mistral shipped Devstral 2 (72.2% on SWE-bench, nearly matching DeepSeek V3.2 at 5x smaller) plus Vibe CLI, their first terminal-native coding agent.
Claude Code now works directly in Slack: tag @Claude in any channel and coding tasks route to a new Claude Code session.
Cursor just released the ability to design in your codebase, and some say the news is even bigger than GPT-5.2.
Resources
How Google’s $900M SpaceX bet became worth $111B (my breakdown)
Why Boom Supersonic pivoted to AI power
New Tools
Dapple: run submission-based programs (collect & pick winners); hit #1 on PH
Incredible: Deep work AI Agents hit #1 on PH
Market
Unconventional AI confirmed a $475M seed round at $4.5B valuation, led by a16z and Lightspeed. Founder Naveen Rao (former head of AI at Databricks, previously sold MosaicML for $1.3B and Nervana to Intel for $400M+) is building energy-efficient AI hardware “as efficient as biology.”
Harness hit a $5.5B valuation with a $240M raise to automate AI’s ‘after-code’ gap
IBM announced plans to acquire data streaming company Confluent for $11B to connect real-time data to AI systems
And now on to today’s deep dive:
Everything You Need to Know About AI Data Centers in Space
What do Elon Musk, Jeff Bezos, and Sundar Pichai agree on?
That we should be building AI data centers in space.
But now, the critics are out in full force:
What’s the truth? Are AI data centers in space really the future?
I have done all the research for you. I talked to the CEO of Starcloud who yesterday trained the first LLM in space with an Nvidia H100. I researched the physics. I watched the Elon videos.
And that’s today’s deep dive:
The first-principles case for orbital data centers
What the skeptics get right (and wrong)
Inside Starcloud: what the pioneers are building
The real timeline and who’s paying attention
What this means for AI infrastructure
1. The first-principles case for orbital data centers
Gavin Baker is one of the most respected technology investors of his generation. He ran Fidelity’s flagship technology fund, now leads Atreides Management, and has been right about AI infrastructure longer than most people have been paying attention.
On a recent episode of Invest Like the Best, he made a claim that sounds like science fiction:
In every way, data centers in space from a first principles perspective are superior to data centers on Earth.
Baker’s argument is disarmingly simple. What are the fundamental inputs to running a data center?
Power. Cooling. Chips.
Space is structurally better at two of the three.
Power: You can keep a satellite in the sun 24 hours a day. Sunlight is roughly 30% more intense above the atmosphere, and with no night or weather to fight, a panel in the right orbit collects about 6x more energy than it would on Earth (a rough back-of-envelope is below). And because you’re in constant sunlight, you don’t need batteries. Starcloud’s CEO Philip Johnston told CNBC that orbital data centers will have 10x lower energy costs than terrestrial ones.
Cooling: On Earth, cooling is the majority of rack mass and cost. The HVAC systems, the liquid cooling, the CDUs. In space, you point a radiator at deep space on the dark side of the satellite, where the background is near absolute zero. All that complexity disappears.
Networking: In terrestrial data centers, racks communicate via fiber optics. That’s a laser going through a cable. The only thing faster than a laser through fiber? A laser through vacuum. If you link satellites with lasers, you get a faster, more coherent network than on Earth.
Inference latency: When you ask Claude a question, your phone sends a signal to a cell tower, through fiber to a metro facility, and on to a data center, which computes the answer and sends it back. With direct-to-cell satellites (Starlink has demonstrated this), you skip most of that path.
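On the power point, here’s a rough back-of-envelope. Every number below is my own illustrative assumption (solar constant, capacity factors), not Baker’s or Starcloud’s figures:

```python
# Rough back-of-envelope for the "about 6x more solar energy" claim.
# All assumptions are mine and only illustrative.
solar_constant_orbit = 1361    # W/m² above the atmosphere
peak_irradiance_ground = 1000  # W/m² on a clear day at noon
ground_capacity_factor = 0.22  # typical utility-scale solar (night, weather, sun angle)
orbit_capacity_factor = 0.99   # a well-chosen orbit stays in sunlight almost continuously

ground_avg = peak_irradiance_ground * ground_capacity_factor  # ~220 W/m² averaged over time
orbit_avg = solar_constant_orbit * orbit_capacity_factor      # ~1,347 W/m² averaged over time
print(f"Orbit collects ~{orbit_avg / ground_avg:.1f}x more energy per m² of panel")  # ~6.1x
```

That lands right around the 6x figure; the point is that constant sunlight, not just brighter sunlight, drives most of the advantage.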
2. What the skeptics get right (and wrong)
Not everyone is persuaded. When Patrick O’Shaughnessy posted Baker’s clip, the replies filled with physicists and engineers pointing out complications.
Their core objection: cooling in space is much harder than it sounds in a sound bite.
The physics problem
Jen Zhu, cofounder of Power Dynamics, captured the misconception well (as I shared above): the idea that space “automatically” cools hardware because it is cold is wrong.
In a vacuum, there is no air, so there is no conduction or convection to carry heat away. The only way to reject heat is through thermal radiation: emitting infrared energy from a surface into space.
This is governed by the Stefan-Boltzmann law (radiated power P = εσAT⁴), which dictates how much power you can radiate as a function of surface area and temperature.
At typical electronics operating temperatures, radiating kilowatts or megawatts of heat is slow and inefficient unless you have an enormous radiative surface.
The math gets extreme
Mikhail Klassen (Senior AI Engineer @ Planet) ran the numbers for modern AI hardware:
A single Nvidia H100 GPU (roughly 700W) needs about 1.1 m² of radiator area
A DGX H100 system (8 GPUs, around 10.2 kW) needs at least 16 m² of radiator surface
The radiator might need to be roughly half the size of the satellite’s solar arrays
Scale that up to gigawatt-era AI data centers, and the numbers get extreme. For comparison, the International Space Station (which consumes only 75-90 kW) already relies on large, articulated radiator wings and an elaborate thermal control system.
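Those figures are easy to sanity-check. Here’s a minimal Stefan-Boltzmann sketch, assuming a radiator running at about 60°C with emissivity 0.9, radiating from one side only and ignoring absorbed sunlight (all assumptions are mine):

```python
# Stefan-Boltzmann sanity check of the radiator areas quoted above.
# Assumptions (mine, for illustration): ~60°C radiator, emissivity 0.9,
# one-sided radiation, no absorbed sunlight or Earthshine.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/(m²·K⁴)
emissivity = 0.9
T_radiator = 333.0     # kelvin, about 60°C

flux = emissivity * SIGMA * T_radiator**4  # ~628 W radiated per m² of surface

for name, watts in [("H100 GPU", 700), ("DGX H100 (8 GPUs)", 10_200), ("1 GW data center", 1e9)]:
    print(f"{name}: ~{watts / flux:,.1f} m² of radiator")

# H100: ~1.1 m²; DGX: ~16 m²; 1 GW: ~1.6 million m² (~1.6 km²), the same order of
# magnitude as Johnston's "about 1 km²" below; a two-sided radiator roughly halves it.
```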
Here’s the thing: the skeptics are right about the physics. But “hard” doesn’t mean “impossible.”
3. Inside Starcloud: what the pioneers are building
To move beyond abstractions, I spoke with Philip Johnston, the founder of Starcloud, to understand how they see the physics and the business case.
a. The radiator problem, quantified
When I asked about radiator requirements for a 1 GW data center, Johnston’s answer: “About 1 km square.” But how do you build something that large? “You assemble it in space,” he said. You don’t launch a kilometer-wide radiator as a single object. You launch compact, modular components and assemble or deploy them in orbit.
This is where Starcloud’s background matters: Johnston’s co-founder Ezra holds a PhD in engineering and spent a decade designing deployable structures for NASA, including solar arrays for the Lunar Pathfinder mission.
Johnston addressed the criticism directly:
The criticism usually is: in order to dissipate that heat, you need a large surface area. And they think for some reason that’s super impractical. You just have to build a large surface area. That’s what we’re doing. Half our engineering team is building very large, low-cost, low-mass deployable radiators. That is the core IP of our company.
b. The economics: waiting for launch costs to fall
At the end of the day, this comes down to cost per kilogram to orbit. Starcloud has done the math on multiple space-based business models:
Space-based solar power (beaming energy back to Earth): makes sense around $50/kg to orbit
Data centers in space (monetizing computation directly): break even around $500/kg
SpaceX is marching toward that number with Falcon 9 and rideshare missions. Starship, if it reaches anything like projected performance, should drive launch costs well below that threshold.
Starcloud is making a timed bet on that trajectory.
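To see why those dollars-per-kilogram thresholds matter so much, here’s a toy calculation. The only grounded figures are Starcloud-1’s 60 kg and the H100’s ~700 W (both mentioned in this piece); the future mass density and the $5,000/kg price point are purely hypothetical placeholders of mine:

```python
# Toy illustration: launch capex per kW of GPU power, as a function of $/kg and mass density.
# Grounded: Starcloud-1 is a 60 kg satellite carrying one ~700 W H100 (see next section).
# Hypothetical: the "future" 10 kg/kW density and the $5,000/kg price point.

def launch_cost_per_kw(mass_kg_per_kw: float, launch_price_per_kg: float) -> float:
    """Launch cost attributable to each kW of GPU power on board."""
    return mass_kg_per_kw * launch_price_per_kg

starcloud_1_density = 60 / 0.7   # ~86 kg per kW of GPU power today
future_density = 10.0            # kg/kW a scaled-up design might target (hypothetical)

for price in (5_000, 500, 50):   # illustrative high price, then Starcloud's two thresholds
    print(f"${price}/kg: ~${launch_cost_per_kw(starcloud_1_density, price):,.0f}/kW today, "
          f"~${launch_cost_per_kw(future_density, price):,.0f}/kW at 10 kg/kW")
```

The point of the toy is that two levers have to move together: the price per kilogram to orbit and how many kilograms of satellite you need per kilowatt of compute.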
c. What’s in orbit today
Starcloud already has hardware in space:
Starcloud-1: A 60 kg satellite (about the size of a small refrigerator) carrying a single Nvidia H100 GPU. Launched November 2 on a SpaceX rideshare mission.
Firsts in orbit: They’re running what they describe as the first AI model training in space, the first high-powered inference in space, and the first deployment of Google’s Gemma model in orbit.
Speed to orbit: Starcloud went from founding to satellite in space in 15 months. The previous fastest timeline for any space startup to reach orbit was roughly four years.
Their next launch is planned for October 2026. That satellite is slated to carry Blackwell-architecture GPUs with roughly 10x the compute of today’s H100, plus optical terminals for always-on, high-bandwidth connectivity.
When Johnston reposted the Gavin Baker clip on X with the caption, “We have the first Nvidia H100 operating in space,” it drew 4.8 million views.
4. The real timeline and who’s paying attention
Let’s strip away the hype and the reflexive skepticism. Here’s where things actually stand:
a. Training vs. inference
Training frontier AI models in space is still a long way off. Modern training runs demand extreme interconnect bandwidth and ultra-low latency between thousands of GPUs, plus the ability to move petabytes of training data. Shipping that data to orbit doesn’t make economic sense yet.
Inference is different. Once a model is trained, serving it (especially at scale) is far more bandwidth- and latency-flexible. The near-term opportunity for orbital data centers is running inference workloads: answering queries, processing video streams, and handling edge AI tasks for users and devices below.
b. The radiator challenge is hard, not fatal
The ISS is a functioning proof of concept that you can manage significant thermal loads in space for decades with high availability.
The question has shifted from “Can you cool in space?” to “Can you do it with mass- and cost-efficient modular radiators at data center scale?”
Starcloud’s core bet is that advances in deployable structures plus falling launch costs and in-space assembly will push the answer toward yes within 5-10 years.
c. This isn’t a fringe idea anymore
Perhaps the clearest signal is who else is paying attention:
Google has announced Project Suncatcher, an initiative to deploy TPUs in orbit via Planet Labs
Amazon, SpaceX, and other major players are exploring similar avenues
This is no longer a single startup with a contrarian deck. Orbital compute is a line item on hyperscalers’ long-term infrastructure roadmaps.
d. The next proof point: 2026
The real inflection point will be Starcloud’s second satellite, targeting October 2026. If Blackwell-generation GPUs can run sustained, high-utilization workloads in orbit (powered by solar, cooled by deployable radiators, and connected via high-bandwidth optical links) then space stops being a demo environment and starts looking like a new tier of core AI infrastructure.
5. What this means for AI infrastructure
Baker’s broader framing is worth sitting with:
“Whenever there’s a bottleneck that might slow AI down, everything accelerates. We’re running up on boundaries of power on Earth. All of a sudden, data centers in space.”
We’re already seeing the outlines of those constraints. Grid interconnections are delayed for years. Communities are pushing back against new data centers that strain local power and water. Renewable build-out is fast but not fast enough to painlessly support exponential curves in AI compute demand.
At some point, we exhaust the “easy” power and cooling capacity on Earth’s surface and start to look up.
From a physics standpoint, the cheapest, most abundant solar energy in our solar system is not on the ground. It’s in space, above the atmosphere, where the sun shines nearly all the time and every photon can be captured.
The open question isn’t whether that energy exists. It’s how quickly we can learn to harvest it, convert it into computation, and integrate orbiting infrastructure into the broader AI ecosystem.
Whether Baker’s “3-4 years” timeline proves precisely right matters less than the direction of travel. The combination of falling launch costs, improving deployable structures, and insatiable demand for AI compute points us toward a world where a nontrivial share of humanity’s computation happens off-planet.
The question now isn’t if data centers move to space.
It’s how quickly and which companies will be ready when the physics, economics, and demand curves finally intersect.
Finally, onto insights from a webinar I did this week:
How to do PLG in the Era of AI
I sat down with Wes Bush (ProductLed) to break down how AI changes PLG. We covered a lot of ground: the tools you should actually be using, how pricing is shifting, and why taste is becoming the only defensible moat. Here’s the rundown.
Most conversations about AI and PLG focus on one thing: adding AI features to your product. That’s only half the equation.
The other half, and often more impactful, is how AI helps you personally succeed at PLG. Before you can build better activation flows or optimize pricing, you need to be a 10x practitioner yourself.
We mapped out both sides. On the job side: sole-context LLMs like Notebook LM, Claude Projects as your co-pilot, agent platforms (Lindy, Relay, Zapier), dictation tools, and AI prototyping with Bolt or Lovable. On the PLG side: activation, pricing, model, expansion, GTM, data, team, and strategy:
That’s all for today. See you next week,
Aakash
P.S. You can pick and choose to only receive the AI update, or only receive Product Growth emails, or podcast emails here.