Episode 63

Dear Neil deGrasse Tyson, You're Wrong About the Singularity

We respond to Neil deGrasse Tyson and Adam Becker's StarTalk arguments against the Singularity, examining where their five core claims go wrong.

After our Episode 56 coverage of the AI-2027 paper — which lays out a timeline to artificial superintelligence by December 2027 — Neil deGrasse Tyson dropped a StarTalk episode called “Why the Singularity Is Probably Wrong.” We love Neil, but on this topic, we think he and guest Adam Becker are making some real mistakes.

Their five core arguments:

1. Exponential growth always hits a ceiling. Becker argues that singularity believers project exponential curves without acknowledging S-curves. He’s right that every individual technology follows an S-curve — vacuum tubes maxed out, transistors are approaching atomic limits. But the singularity hypothesis isn’t about one technology. It’s about paradigm shifts. Each S-curve tops out, but a new one starts before the old one finishes. And in AI specifically, the rate of paradigm-level breakthroughs has been accelerating (see the toy sketch after this list).

2. Intelligence isn’t a single dial. This is Becker’s strongest point — AI can be superhuman at chess while having zero social awareness. But nobody serious in AI claims you “just turn up one dial.” Modern development involves architecture innovations, training methodology, RLHF, multi-modal integration, and tool use. More importantly, the trend shows AI becoming more general, not less. The “intelligence is too complex” argument has been the retreat position for decades as goalposts keep moving.

3. The brain is not a computer. True — the brain isn’t a von Neumann machine. But you don’t need to simulate a brain to achieve general intelligence. Planes don’t flap their wings. The question isn’t whether AI works like a brain, but whether it can achieve comparable cognitive outputs.

4. Moore’s Law is dead. Traditional transistor scaling is slowing, but computing capability continues advancing through new architectures, specialized hardware, algorithmic efficiency gains, and neuromorphic computing. The computational resources available for AI training have grown by roughly 10x per year even as Moore’s Law plateaus (a back-of-the-envelope comparison follows the list).

5. The real problems are political, not technical. Tyson argues we should worry about inequality and climate change instead. But this is a false dichotomy — AI could be the most powerful tool for addressing those very problems. And responsible development requires understanding what’s coming, not dismissing it.
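
On point 1, the stacked S-curve picture is easy to make concrete. Below is a toy Python sketch (entirely our own illustration, with arbitrary numbers, not anything from the episode) that sums successive logistic curves, each new “paradigm” starting before the previous one saturates, and compares the result to a pure exponential.

```python
import numpy as np

def logistic(t, midpoint, ceiling, steepness=1.5):
    """One technology's S-curve: slow start, rapid growth, saturation."""
    return ceiling / (1.0 + np.exp(-steepness * (t - midpoint)))

t = np.linspace(0, 30, 301)

# Toy assumptions: a new paradigm arrives every ~6 years, and each one's
# ceiling is 8x the previous (think vacuum tubes -> transistors -> ...).
paradigms = [logistic(t, midpoint=5 + 6 * k, ceiling=8.0 ** k) for k in range(5)]
stacked = np.sum(paradigms, axis=0)

# A pure exponential for comparison.
exponential = np.exp(0.35 * t)

for year in (0, 10, 20, 30):
    i = np.searchsorted(t, year)
    print(f"t={year:2d}  stacked S-curves={stacked[i]:10.1f}  exponential={exponential[i]:10.1f}")
```

Each individual curve flattens, yet the sum keeps climbing as long as new curves keep arriving. Whether new paradigms will in fact keep arriving is the real empirical question, and a fairer target for skepticism than the exponential-versus-S-curve framing.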
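And on point 4, the gap between the two growth rates compounds quickly. Here’s a back-of-the-envelope comparison over five years, using the rough rates cited above (the 10x/year figure is approximate, and classic Moore’s Law is taken as ~2x every two years):

```python
# Compound five years of growth at the two rough rates cited above.
years = 5
ai_training_compute = 10 ** years         # ~10x per year
transistor_scaling = 2 ** (years / 2)     # ~2x every 2 years (classic Moore's Law)

print(f"AI training compute: ~{ai_training_compute:,}x over {years} years")
print(f"Transistor scaling:  ~{transistor_scaling:.1f}x over {years} years")
```

That’s roughly 100,000x versus 5.7x. Even if the 10x/year rate slows substantially, “Moore’s Law is dead” does not settle the question by itself.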

We respect Tyson enormously, but his platform means his AI skepticism shapes how millions think about the most transformative technology of our era. Getting this conversation right matters.

Tyson’s Five Core Arguments (And Why They Fall Short)

Neil deGrasse Tyson and guest Adam Becker made five main arguments against the singularity in their StarTalk episode. Each has intuitive appeal but significant weaknesses when examined against current evidence.

Argument 1: “Intelligence has diminishing returns.” Tyson argues that doubling intelligence doesn’t double capability — there are fundamental limits to how much intelligence helps. The counterpoint: such diminishing returns haven’t yet been observed in AI systems. Each major scale-up of language models (GPT-2 → GPT-3 → GPT-4) has brought roughly proportional capability gains. More importantly, intelligence applied recursively to the problem of improving intelligence could compound in ways that biological intelligence never could.
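
That compounding claim can be made precise with a standard toy model (our gloss, not an argument made on the show): let I(t) be a system’s capability and suppose the rate of improvement scales with capability itself.

```latex
\[
\frac{dI}{dt} = k I^{p}
\quad\Longrightarrow\quad
I(t) =
\begin{cases}
I_0\, e^{kt} & p = 1 \quad \text{(plain exponential growth)} \\[6pt]
\dfrac{I_0}{\bigl(1 - (p-1)\,k\,I_0^{\,p-1}\,t\bigr)^{1/(p-1)}} & p > 1 \quad
\text{(diverges at } t^{*} = \tfrac{1}{(p-1)\,k\,I_0^{\,p-1}}\text{)}
\end{cases}
\]
```

Diminishing returns correspond to p < 1, which yields only polynomial growth. Which regime real AI development sits in is exactly the open empirical question; the point is that “recursive compounding” names a concrete mathematical regime, not hand-waving.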

Argument 2: “We don’t understand consciousness, so we can’t build it.” This conflates consciousness with intelligence. You don’t need to be conscious to be superintelligent — you need to solve problems better than humans. AlphaFold predicts protein structures better than any human scientist without being conscious. The singularity hypothesis doesn’t require machine consciousness, just machine capability.

Argument 3: “Exponential growth always hits a ceiling.” True in nature, but misleading here. The question isn’t whether AI improvement has any ceiling — it’s whether that ceiling is above human-level intelligence. Given that human brains are biological computers constrained by skull size, metabolic limits, and evolutionary baggage, there’s strong reason to think silicon-based intelligence can exceed human capabilities even if it eventually plateaus.

Argument 4: “The brain is not a computer.” Becker argues that the brain uses fundamentally different principles than digital computers, implying AI can’t replicate intelligence. But this is a category error. You don’t need to replicate the brain’s architecture to match its capabilities. Airplanes don’t fly like birds, but they fly better. Modern AI doesn’t process information like neurons, but it’s increasingly matching and exceeding human performance on cognitive benchmarks.

Argument 5: “Previous predictions of imminent AI breakthroughs were wrong.” The “boy who cried wolf” argument. Yes, AI researchers in the 1960s and 1980s made overoptimistic predictions. But those predictions were wrong because the hardware didn’t exist and the algorithms were primitive. Today’s predictions are backed by actually existing systems that pass bar exams, write research papers, and design novel proteins. Dismissing current predictions because previous ones failed is like dismissing the Wright Brothers because Icarus failed.

The Astrophysicist’s Blind Spot

Tyson’s skepticism reveals a broader pattern: domain experts often underestimate transformative technologies outside their expertise. Physicists in the 1930s couldn’t predict the impact of transistors. Telephone engineers in the 1990s couldn’t predict the smartphone revolution. Tyson’s expertise in astrophysics gives him powerful analytical tools, but it doesn’t give him privileged insight into the dynamics of AI development.

The people closest to frontier AI development — researchers at OpenAI, Anthropic, Google DeepMind, and Meta’s FAIR — are consistently more concerned about rapid capability gain than external observers. This isn’t because they’re credulous or self-interested (many of them are working on safety precisely because they’re worried). It’s because they see internal capability gains months before the public does.

The Real Debate We Should Be Having

The productive question isn’t “will the singularity happen?” but “what do we do if it might?” Tyson’s dismissiveness, while understandable, contributes to complacency. If there’s even a 10% chance that artificial superintelligence emerges in the next 20 years, the rational response is vigorous preparation: alignment research, governance frameworks, and international coordination.

The existential risk community — researchers at Oxford’s Future of Humanity Institute, Cambridge’s Centre for the Study of Existential Risk, and organizations like MIRI — argues that the expected value calculation demands action even at low probabilities, because the stakes (human extinction or permanent disempowerment) are so high. Tyson’s confident dismissal, if wrong, carries asymmetric risk: if singularity skeptics are right, we’ve wasted some research effort; if they’re wrong, we’re unprepared for the most transformative event in human history.
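
To make that asymmetry explicit, here is the stylized decision rule behind the expected-value argument (our simplification, with the 10% figure from above used purely as an illustration):

```latex
\[
\text{Prepare if}\quad
C_{\text{prep}} \;<\; p \cdot L_{\text{unprepared}}
\]
```

Here C is the cost of alignment and governance work, p the probability of transformative AI arriving, and L the loss from facing it unprepared. With p = 0.1 and L on the order of “the most transformative event in human history,” the inequality holds even for a very large C, which is the sense in which confident dismissal carries the riskier side of the bet.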

The Emotional vs. Analytical Response

What’s most striking about Tyson’s episode is how emotional his response is compared to his usual analytical rigor. When discussing black holes or dark energy, Tyson carefully distinguishes between established physics and speculation. With the singularity, he dismisses the concept wholesale without engaging with the specific technical arguments. This suggests the singularity triggers something that pure physics doesn’t: a visceral discomfort with the idea that human intelligence might be surpassable.

This is a common and entirely human reaction. The singularity threatens our species’ most fundamental self-concept — that we are the most intelligent beings in existence. Rejecting the singularity is psychologically comforting in the same way that rejecting mortality is psychologically comforting. But comfort and truth don’t always align.

Frequently Asked Questions

What does Neil deGrasse Tyson think about the singularity?

Tyson has expressed skepticism about the technological singularity, arguing that intelligence may have physical limits and that AI progress will encounter diminishing returns. He emphasizes that consciousness and general intelligence are poorly understood, making predictions about superintelligence premature.

Is the singularity possible according to physics?

There’s no known physical law preventing artificial superintelligence, but significant theoretical and practical barriers exist. The brain’s computational principles aren’t fully understood, and it’s unclear whether silicon-based systems can replicate emergent properties of biological neural networks. The debate remains open.

If you enjoyed this episode, check out these related deep dives:

Related Articles

Episode 1 (Jul 18)

Creatine: From Discovery to Health Benefits

Discover the science behind creatine supplementation: muscle growth, brain health benefits, exercise performance, and safety. Learn how this natural compound powers your cells and enhances both physical and cognitive function.

Episode 10 (Jul 31)

The Health and Science of Heat Therapy

Discover the science of heat therapy: sauna benefits, heat shock proteins, cardiovascular health, and mental wellness. Learn optimal protocols, temperature settings, and safety guidelines for maximum benefits.
