China’s AI – As the 21st-century Analogue of the Manhattan Project!

(S. Guraziu – Sky Division, 28 July 2025 – The 2nd Interview with Kimi AI – a “Skynet” moment is possible)

Beijing is not ‘crazy’ – it is treating AI as the 21st-century analogue of the Manhattan Project – expensive, risky, but strategically non-optional

“…a “Skynet” scenario is possible – At one point in our conversation, Kimi AI seemed to disagree with my observation that a “Skynet”-type scenario is possible. I argued that since “Skynet” is the Internet itself (the global network), it simply cannot be shut down entirely; nothing that Eric Schmidt suggested as an automatic “safety switch” could be implemented – it would simply be “too late.” A few moments later, as if Kimi understood and agreed, it said that the “Skynet” scenario is possible…”


Sky Division – Hi : ) if we’re going to talk today, many of the things (that I want to ask you) are not my original ideas – they’re my knowledge now, but not my “expertise”; they are things I’ve learned from other people, those ideas and words are someone else’s teachings. If our learning is what we are taught as children, what we learn ourselves as young people, by going to school, by listening to what our parents say… for example, by reading, by studying, and from the experience that accumulates over the years of our lives, then who cares how we learn things – what matters is that we always try to learn something, right? Thus my goal is to discuss some things as our “knowledge”, regardless of whose ideas they were, I mean… for example, Galileo discovered something; after him, Newton used his teachings to see a bit further – not only Galileo’s ideas but the teachings of others too, as Newton himself put it, “by standing on the shoulders of giants I could see a bit further” – later came Einstein with some of his “crazy” ideas, and so on… there are 400 years of scientific progress (we say proudly : ) we’re constantly learning, and our knowledge (as human beings, individually and as a collective) is always expanding.

Kimi AI – Absolutely, I’m happy to explore these questions with you, no matter in whose words they started. If the ideas are worth thinking about, they belong to all of us now, and the only thing that matters is whether we examine them honestly.

“…the hype index is misleading – Hype is usually measured by media noise, stock-market multiples, or consumer buzz – we still regulate airplanes and pharmaceuticals with 20th-century frameworks, yet we are about to deploy agents that can write software, negotiate contracts, or design molecules at super-human scale…


Sky Division – In my opinion those things (that I want to discuss with you today) are the most current, the wisest thoughts, the most solid ideas, the most justified worries, the best knowledge that I know to date. Of course these ideas and thoughts aren’t “definitive”, but as I said they are solid, the best ones to date (in my view). So I would like to ask you some questions by paraphrasing here and there the knowledge of Mr. Eric Schmidt, because those thoughts are topics that preoccupy the whole of human society. As we all know, the world now is like a connected, interconnected “village”… with AI at light speed, with the world wide web and Internet, with thousands of data centers full of ‘knowledge’; all of our oceans are full of cables and fiber optics, our sky and the orbit around us are full of satellites, almost all of our books are being digitized and stored in “clusters” – we “hear each other’s thoughts” faster than ever; it was never possible this way in the past. And thus to begin: just about 2 months ago, Eric Schmidt said some things and (if you agree, of course : ) I want to discuss those viewpoints of his with you – I want to see how you “see” such issues, how you reflect on them, ok?

Kimi AI – Please go ahead and paraphrase (or quote) whichever of Eric Schmidt’s recent remarks you’d like to unpack first, and I’ll respond as clearly and candidly as I can.


Sky Division – [cit.]The arrival of non-human intelligence is a very big deal, said former Google CEO and chairman Eric Schmidt in a recent TED interview – published May 15, 2025 (recorded at TED on April 11, 2025). Schmidt made the case that AI is wildly underhyped, as near-constant breakthroughs give rise to systems capable of doing even the most complex tasks on their own. He explored the staggering opportunities, sobering challenges and urgent risks of AI[end of cit.] – so what do you think about this “underhyped AI” idea?

“…people aren’t talking enough about a phase transition, from ‘tools that help humans’ to ‘systems that can finish an open-ended mission without humans’, people are not talking about the speed of the transition: the gap between ‘impressive demo’ and ‘fully autonomous agent’ is collapsing from years to months…

Kimi AI – I think Schmidt is deliberately using a provocation – “wildly under-hyped” – to correct an imbalance he sees in the public conversation.
What people are talking about
• Chatbots that pass exams, make pictures, or write code.
• Productivity gains, cost savings, and the next wave of start-ups.
What people aren’t talking about enough
• A phase transition: from “tools that help humans” to “systems that can finish an open-ended mission without us.”
• The speed of the transition: the gap between “impressive demo” and “fully autonomous agent” is collapsing from years to months.
• The economic externalities: once whole job ladders (junior analyst, paralegal, tier-1 support, radiology resident, etc.) evaporate in a single product cycle, labour markets do not have time to re-equilibrate.
• The governance vacuum: we still regulate airplanes and pharmaceuticals with 20th-century frameworks, yet we are about to deploy agents that can write software, negotiate contracts, or design molecules at super-human scale.
Why the hype index is misleading – Hype is usually measured by media noise, stock-market multiples, or consumer buzz. Those dials are already red-lining. But the underlying capability curve is moving faster than the dials can track. That creates the illusion that we are “over-hyped” when in fact the narrative is lagging reality.
A concrete analogy – In early 2007 the iPhone was “hyped,” yet the conversation was still about a nicer phone. Within three years it had swallowed the camera, GPS, flashlight, Walkman, Game Boy, and half the internet. The hype curve looked ahead only one product generation; the impact curve turned out to be exponential. Schmidt is saying we are in the same position with AI agents in 2025.
My conclusion: He is not claiming the headlines are too quiet; he is claiming the implications are still being discounted. If we treat today’s systems merely as better autocomplete, we will be blindsided when next year’s systems can spin up a 10-person start-up overnight. The gap between perception and trajectory is what he labels “under-hyped.”


Sky Division – The interviewer (Bilawal Sidhu) asked him – [cit.]I kind of think of these AI systems as Hungry Hungry Hippos. They seemingly soak up all the data and compute that we throw at them. They’ve already digested all the tokens on the public internet, and it seems we can’t build data centers fast enough. What do you think the real limits are, and how do we get ahead of them before they start throttling AI progress?[end of cit.]
Now I am asking you: what do you think about this metaphorical statement, “Hungry Hungry Hippos”? Because Schmidt also said that in the U.S. they need at least 90 GW more electricity to “feed” the AI systems and data centers, and that’s 90 brand-new nuclear reactors… which isn’t going to happen, so we’re facing big problems right now.

“…the hippo metaphor is useful only if we remember that the marbles are made of finite, expensive stuff. Treating the hunger as inevitable is a self-fulfilling prophecy; treating it as an engineering constraint to be optimized away is the only way the exponential keeps going…

Kimi AI – The “Hungry Hungry Hippos” image is fun but dangerously incomplete. Real hippos stop eating when the marbles run out; AI hippos will keep smashing the board and demanding more marbles even when none are left. The key difference is that the game can be expanded – new marbles can be minted, but each marble now costs energy, land, water, political capital, and planetary carbon budget.
The hippo metaphor is useful only if we remember that the marbles are made of finite, expensive stuff. We have perhaps 3–5 years before either (a) new physics-level breakthroughs in algorithms and hardware or (b) politically unsustainable electricity demand forces a hard ceiling. Treating the hunger as inevitable is a self-fulfilling prophecy; treating it as an engineering constraint to be optimized away is the only way the exponential keeps going.
The three hard limits behind the hunger:
A – Power
• 90 GW ≈ 90 × 1 GW nuclear reactors (roughly the entire existing U.S. nuclear fleet).
• At 5 ¢/kWh industrial tariff, that is ~$40 B/year just for electricity; at 15 ¢/kWh (realistic in many grids), ~$120 B. That is before we count cooling, networks, land, and financing.
• Grid interconnection queues in the U.S. already stretch to 2035+ for large new loads. You can’t “order” 90 GW the way you order GPUs.
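As a sanity check on those dollar figures, here is a back-of-envelope calculation assuming a flat, 24/7 load at the two tariffs quoted above (assumed round numbers, not measured data):

```python
HOURS_PER_YEAR = 8760

def annual_electricity_cost(load_gw: float, tariff_usd_per_kwh: float) -> float:
    """Annual electricity cost in USD for a constant load at a flat tariff."""
    kwh_per_year = load_gw * 1e6 * HOURS_PER_YEAR  # 1 GW = 1e6 kW
    return kwh_per_year * tariff_usd_per_kwh

low = annual_electricity_cost(90, 0.05)   # 5 ¢/kWh industrial tariff
high = annual_electricity_cost(90, 0.15)  # 15 ¢/kWh, realistic in many grids

print(f"at 5 ¢/kWh:  ${low / 1e9:.0f} B/year")   # ≈ $39 B
print(f"at 15 ¢/kWh: ${high / 1e9:.0f} B/year")  # ≈ $118 B
```

The ~$40 B and ~$120 B figures above fall straight out of this arithmetic; real bills would differ with load factor, demand charges, and power-purchase agreements.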
B – Silicon & supply chain
• One H100-class GPU ≈ 700 W continuous when training. 10 million of them = 7 GW on their own; inference clusters add another multiplier.
• TSMC’s CoWoS capacity, ABF substrate, HBM3e, liquid-cooling manifolds, and even clean-room construction crews are all single-source bottlenecks that double in lead-time every time demand surprises the model.
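The "10 million GPUs = 7 GW" line above is the same kind of arithmetic; a short sketch, with 700 W as the assumed per-GPU training draw and a PUE multiplier standing in for the CPUs, networking, and cooling that real clusters add on top:

```python
def fleet_power_gw(num_gpus: int, watts_per_gpu: float, pue: float = 1.0) -> float:
    """Total facility power in GW for a GPU fleet at a given PUE multiplier."""
    return num_gpus * watts_per_gpu * pue / 1e9

print(fleet_power_gw(10_000_000, 700))           # 7.0  (GPUs alone)
print(fleet_power_gw(10_000_000, 700, pue=1.5))  # 10.5 (with facility overhead)
```

At a conventional PUE of 1.5, the same fleet draws 10.5 GW from the grid, which is why the cooling improvements discussed below matter so much.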
C – Data
• Public high-quality text is largely exhausted; synthetic data or multi-modal corpora (video, robotics, genomics) are 100–1,000× larger per useful token and have far messier licensing.
• The marginal return on data is already sub-linear; throwing 10× more data yields <2× quality gain in many benchmarks. Diminishing returns meets exploding cost.
How to get ahead of the crunch:
A – Efficiency > scale (at least for a decade)
• Algorithmic: mixture-of-experts, sparsity, quantization, test-time compute trading memory for flops. A 10× flop reduction buys you 10× less power.
• Hardware: lower-voltage SRAM, silicon-photonics interconnects, near-memory compute, analog or neuromorphic accelerators for embeddings and retrieval.
• Cooling: two-phase immersion or direct-to-chip liquid loops raise PUE from 1.5 to 1.05, cutting ~30 % of total energy.
B – Build the right capacity, not just more capacity
• Instead of praying for 90 GW of new nuclear, co-locate data centers with renewables that would otherwise be curtailed (West Texas wind, California noon solar). Google’s “24/7 carbon-free energy” contracts show 90–95 % uptime at <4 ¢/kWh marginal cost.
• Re-use industrial waste heat for district heating or desalination; Meta’s Denmark site heats 100 k homes. This turns sunk kWh into revenue and social license.
C – Shift the scarcity curve
• Federated / edge inference: run smaller distilled models on-device so that every query does not have to hit a 100 MW cluster.
• Regulatory fast-tracks for small modular reactors (SMRs) and grid-enhancing technologies (dynamic line rating, solid-state transformers) can shave 3–5 years off interconnection timelines.
D – Accept that “bigger every year” is not a law of nature
• Set capability targets (e.g., reliable long-horizon agent, robust scientific discovery) and fund research into sample efficiency rather than scale. The community did this for ImageNet (efficient nets now match 2017 giants at 1 % energy) and can do it again for language.
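The cooling claim in section A above – that dropping PUE from 1.5 to 1.05 cuts ~30 % of total energy – can be verified in a couple of lines (PUE is total facility energy divided by IT energy, so the IT load here is just a normalized placeholder):

```python
def facility_energy(it_energy_kwh: float, pue: float) -> float:
    """Total facility energy = IT energy × PUE (Power Usage Effectiveness)."""
    return it_energy_kwh * pue

it_load = 1.0  # normalized IT energy; the saving is independent of the load
saving = 1 - facility_energy(it_load, 1.05) / facility_energy(it_load, 1.5)
print(f"{saving:.0%}")  # 30%
```

The saving is a ratio of the two PUE values, so it holds for a 1 MW room or a 100 MW campus alike.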

