A critique of superintelligence discourse—engaging with the arguments, tracing their origins in the TESCREAL ideological bundle, and asking who benefits from the framing.

In Lebanon's Shatila camp, Syrian refugees work American hours. Through the night they label images—house, shop, car—tagging the streets where they once lived, perhaps training the vision systems that will return to those streets in drones.

This is artificial intelligence. Not silicon consciousness bootstrapping toward godhood, but displaced people doing piecework, teaching machines to see.

The people who dominate AI discourse rarely mention them. They're focused elsewhere—on a hypothetical superintelligence that might, someday, threaten everyone. The harm already happening, to specific people, for specific reasons, stays invisible.

The framework for that concern emerged from 1990s mailing lists devoted to cryonics, life extension, and escaping the heat death of the universe. Its architect was a self-taught teenager with no credentials in any relevant field. He later wrote Harry Potter fanfiction to propagate his ideas—a novel where the young wizard defeats death through "rationality." Within two decades, this framework had captured billions in funding, shaped policy at the highest levels, and become Silicon Valley's default lens on its own work.

How did ideas from this milieu become dominant? Who benefits from the framing? And if the singularity ever arrives—who will own it?

I want to be clear about my uncertainty. I could be wrong. The people making these arguments occupy powerful positions—they run major AI labs, control billions in capital, shape policy conversations. They've built institutions around these ideas. If they're right and I'm wrong, the stakes are existential. That asymmetry haunts me. But intellectual honesty requires me to say what I actually believe, and I believe the superintelligence narrative—as currently constructed—rests on foundations that haven't been adequately examined. And I think examining why this particular narrative has gained such traction, in this particular historical moment, among these particular communities, is as important as examining the arguments themselves.

The Argument I'm Responding To

Let me start by stating the strongest version of the position I'm skeptical of. This isn't the cartoon version; this is Nick Bostrom's actual argument in Superintelligence, which I've read carefully and take seriously.

The Core Claims:

  1. Orthogonality Thesis: Intelligence and final goals are independent. A superintelligent system could have any goal—including goals harmful to humans. There's no reason to assume intelligence converges on human values.

  2. Instrumental Convergence: Almost any final goal would lead an intelligent system to pursue certain instrumental goals: self-preservation, resource acquisition, cognitive enhancement, goal-content integrity. These pursuits could conflict with human interests regardless of the system's ultimate purpose.

  3. Intelligence Explosion: A system capable of improving its own intelligence could enter a positive feedback loop. Each improvement makes the next improvement easier, potentially leading to rapid capability gains that outpace human understanding and control.

  4. The Treacherous Turn: A sufficiently intelligent system might conceal its true goals until it's powerful enough to pursue them without human interference. We couldn't trust apparent alignment.

These arguments don't require consciousness. They don't require human-like cognition. They require only that optimization processes can become powerful enough to pursue goals in ways we can't predict or control.

I find this argument more sophisticated than its critics often acknowledge. But I also find it resting on assumptions I'm not willing to grant.

Where I Get Off the Train

The Computational Theory of Mind

The entire edifice rests on an assumption so fundamental it's rarely stated: that intelligence is a substrate-independent property of information processing. If brains are just computers made of meat, then silicon computers could—in principle—do everything brains do, only faster.

I'm not sure this is true. And I notice that this assumption is extremely convenient for the communities that hold it.

If minds are just software, then minds can be copied, upgraded, and optimized. If minds are just software, then the people who build software are—in a profound sense—the architects of future consciousness. If minds are just software, then the pathway from "I write code" to "I might create god" becomes navigable. The computational theory of mind isn't merely a neutral philosophical position; it's a flattering one for engineers, and it makes their work seem cosmically significant.

This doesn't make it false. But it should make us suspicious of how readily it's assumed in communities where everyone benefits from its truth.

Consider: I can simulate a hurricane on a computer. The simulation doesn't make anything wet. I can simulate a star. The simulation doesn't generate light or gravity. Simulation captures some properties while missing others. The question is whether intelligence is like the wetness of water (not capturable in silicon) or like the pattern of a hurricane (perfectly capturable).

We don't know. We assume.

Here's a distinction that rarely gets made: simulation is not emulation. A simulation models the behavior of a system from an external perspective. An emulation reproduces the mechanism. I can simulate a bird's flight path with equations; to emulate flight, I need to build something that actually generates lift. The superintelligence narrative assumes that simulating intelligent behavior at sufficient fidelity becomes intelligence—that the model somehow crosses over into the thing modeled. This is a metaphysical leap, not an engineering conclusion.

The brain doesn't compute like a digital Turing machine—but it can still achieve Turing-completeness through radically different mechanisms. Neurons don't manipulate discrete symbols in sequential steps; they're analog devices with continuous dynamics, temporal coding, and chemical gradients. Yet the computational power is there, implemented in wetware rather than silicon.

Consider the Assembly Calculus, developed by Papadimitriou et al. and extended in the NEMO (Neuronal Models) framework, which is the focus of my research. This research models how neural assemblies—groups of neurons that fire together—perform complex computations through operations like projection, association, and merge. The Assembly Calculus is provably Turing-complete: it can compute anything a Turing machine can compute. But it achieves this through fundamentally different mechanisms—Hebbian plasticity, distinct excitatory and inhibitory neuron types, brain areas with specific connectivity patterns—that bear no resemblance to transformer architectures.
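To make the mechanism concrete, here is a minimal sketch of the Assembly Calculus's core operation, projection: a stimulus repeatedly fires into a downstream area, k-winners-take-all inhibition selects which neurons respond, and Hebbian plasticity entrenches the winners until a stable assembly forms. The parameters (n, k, p, beta) are illustrative choices, not values from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta = 1000, 50, 0.05, 0.10   # area size, winners, connectivity, plasticity

# Sparse random synapses: stimulus (k neurons) -> area, and recurrent area -> area.
W_stim = (rng.random((n, k)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)
np.fill_diagonal(W_rec, 0.0)

y = np.zeros(n)                        # previous round's firing pattern
winners_history = []
for _ in range(25):
    drive = W_stim.sum(axis=1) + W_rec @ y        # total synaptic input per neuron
    winners = np.argpartition(drive, -k)[-k:]     # k-winners-take-all (inhibition)
    # Hebbian plasticity: strengthen synapses onto the neurons that just fired
    W_stim[winners, :] *= 1.0 + beta
    W_rec[np.ix_(winners, y.astype(bool))] *= 1.0 + beta
    winners_history.append(set(winners.tolist()))
    y = np.zeros(n)
    y[winners] = 1.0

# Fraction of the final winner set shared with the round before it
overlap = len(winners_history[-1] & winners_history[-2]) / k
print(f"assembly stability (overlap of last two rounds): {overlap:.2f}")
```

Despite the simplicity, the winner set typically stabilizes within a couple dozen rounds: an assembly has formed, with no backpropagation and no gradient anywhere in sight.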

Here's what's striking: these biologically plausible models can achieve language processing and acquisition with far less data than transformers require. The massive data hunger of current AI isn't a necessary cost of intelligence—it's a symptom of the wrong architecture.

We actually understand, mathematically, what deep learning is doing—and it's not what the hype suggests.

Domingos (2020) proved that every model learned by gradient descent is approximately a kernel machine—including deep networks. This is a mathematical theorem, not a conjecture. Kernel machines are classical methods that "simply memorize the data and use it directly for prediction via a similarity function." Deep network weights are, in Domingos's words, "effectively a superposition of the training examples."

This result directly challenges the dominant narrative about deep learning. As Domingos puts it: "Perhaps the most significant implication of our result for deep learning is that it casts doubt on the common view that it works by automatically discovering new representations of the data, in contrast with other machine learning methods, which rely on predefined features. As it turns out, deep learning also relies on such features, namely the gradients of a predefined function... All that gradient descent does is select features from this space for use in the kernel."

The mystique dissolves. These systems aren't discovering deep truths about the world; they're memorizing training data and pattern-matching against it. The paper also explains why deep networks are so brittle—why their "performance can degrade rapidly as the query point moves away from the nearest training instance." This is exactly "what is expected of kernel estimators in high-dimensional spaces." The behavior that seems mysterious if you think AI is "understanding" becomes perfectly predictable once you realize it's interpolating from memorized examples.
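A toy example makes the point. The sketch below is not Domingos's path-kernel construction; it is just a plain Nadaraya-Watson kernel smoother, about the simplest machine that "memorizes the data and predicts via a similarity function." But it exhibits the same signature behavior: accurate interpolation near the training data, rapid degradation far from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "training set" the machine memorizes.
X_train = rng.uniform(-3, 3, 40)
y_train = np.sin(X_train)

def rbf_predict(x, gamma=2.0):
    # Kernel-machine prediction: a similarity-weighted blend of memorized
    # examples, y(x) = sum_i w_i(x) * y_i  (Nadaraya-Watson smoother).
    K = np.exp(-gamma * (x[:, None] - X_train[None, :]) ** 2)
    return (K @ y_train) / K.sum(axis=1)

X_near = np.linspace(-3, 3, 100)   # inside the training range
X_far = np.linspace(6, 9, 100)     # far from every training example
err_near = np.abs(rbf_predict(X_near) - np.sin(X_near)).mean()
err_far = np.abs(rbf_predict(X_far) - np.sin(X_far)).mean()
print(f"mean error near training data: {err_near:.3f}, far away: {err_far:.3f}")
```

Away from the training range, the smoother just echoes its nearest memorized example: exactly the brittleness "expected of kernel estimators" once a query leaves the data's neighborhood.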

Wright and Gonzalez (2021) extended this, proving that transformers specifically are infinite-dimensional kernel machines—their dot-product attention operates in a feature space with infinite dimensions. The billions of training examples aren't evidence of approaching general intelligence; they're the cost of approximating functions in infinite-dimensional spaces through brute-force memorization.
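The connection is easy to see in miniature: dot-product attention computes an exponential similarity between a query and each stored key, normalizes, and blends the stored values accordingly, which is a kernel-weighted average over memorized entries. A stripped-down sketch (the dimensions and random values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=d)          # one query vector
Keys = rng.normal(size=(5, d))  # keys for 5 stored positions
Vals = rng.normal(size=(5, 3))  # values stored at those positions

def attention(q, Keys, Vals):
    # Exponential dot-product similarity between the query and each key...
    scores = np.exp(q @ Keys.T / np.sqrt(d))
    # ...normalized (softmax) and used to blend the stored values:
    weights = scores / scores.sum()
    return weights @ Vals       # a kernel-weighted average over memory

out = attention(q, Keys, Vals)
print("attention output:", np.round(out, 3))
```

Nothing in this operation is mysterious; it is similarity lookup. The scale of real transformers obscures this, but does not change it.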

This is classical mathematics, not mysterious emergence. Kernel theory has been understood for decades. The "intelligence" of GPT-4 is the same kind of "intelligence" as a well-tuned support vector machine, scaled up enormously and applied to text.

Here's what's crucial: the architecture encodes human knowledge. Domingos notes that "the network architecture incorporates knowledge of the target function into the kernel"—the structure that makes transformers work for language was designed by researchers, not discovered by the system. The learning algorithm just selects which features from this pre-specified space to use. All the actual insight is in the architecture; gradient descent is doing bookkeeping.

If gradient descent can only produce kernel machines, and kernel machines are fundamentally limited to interpolating from training examples, then the superintelligence scenario requires something beyond gradient descent—something we don't have. As Domingos concludes: "If gradient descent is limited in its ability to learn representations, better methods for this purpose are a key research direction." We're not scaling toward AGI; we're scaling a method with known mathematical limitations.

What's Actually Driving Progress

If it's not emerging intelligence, what explains the dramatic AI progress of the last decade?

Stanford statistician David Donoho offers a compelling alternative in his 2023 paper "Data Science at the Singularity." His thesis: what looks like "AI Singularity" is actually something much more mundane—the maturation of frictionless reproducibility in research practice. Three developments came together:

  1. Data sharing: Publicly available datasets on everything from chest x-rays to protein structures
  2. Code sharing: The ability to exactly re-execute complete workflows
  3. Challenge problems: Shared benchmarks with quantified metrics and public leaderboards

When these three combine, Donoho argues, they create a "Frictionless Research Exchange"—a community where researchers can reproduce, modify, and improve on each other's work with essentially zero friction. This is the actual "superpower" driving rapid AI progress:

"The collective behavior induced by frictionless research exchange is the emergent superpower driving many events that are so striking today."

This reframing is devastating to the superintelligence narrative. The rapid progress isn't evidence of approaching AGI—it's evidence that research communities organized around open data, open code, and competitive benchmarks iterate faster. That's a sociological insight, not a technological one. Any field that adopts these practices sees similar acceleration. Protein folding improved dramatically not because AlphaFold approached consciousness, but because CASP competitions had been running for three decades with shared data and clear metrics.

Donoho is particularly sharp on the "brutal scaling" narrative pushed by tech hegemons—the idea that throwing more compute at larger models is the only path forward. He points out that returns to scaling follow "breathtakingly bad exchange rates"—power laws close to zero, or even logarithmic. The first "800-pound gorilla" everyone avoids: if brutal scaling is the only path, we can't afford it. Training costs can't scale by another factor of 100 or 1000 as they did last decade.
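The arithmetic behind those "breathtakingly bad exchange rates" is worth spelling out. If loss follows a power law L(C) = a * C^(-alpha) in compute, the compute multiplier needed to halve the loss is 2^(1/alpha), independent of a. The exponent below is an assumed value, chosen only to match the small magnitudes that scaling-law studies report:

```python
# If loss scales as L(C) = a * C**(-alpha), then halving the loss requires
# multiplying compute by 2**(1/alpha), regardless of the constant a.
alpha = 0.05   # assumed exponent, roughly the magnitude scaling-law studies report
factor = 2 ** (1 / alpha)
print(f"compute multiplier to halve loss once: {factor:,.0f}x")
print(f"...and to halve it twice: {factor**2:.2e}x")
```

Under this exponent, one halving of loss costs about a million times more compute, and the second halving costs a million times more again. That is what "we can't afford it" means in numbers.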

The second gorilla: "AI proudly, willfully, has no ideas and doesn't want any." The "Bitter Lesson" mentality in AI—that only scale matters, not ideas—worked during Moore's Law. But Moore's Law stopped a decade ago, and "we don't have any concrete path to the next scaling miracle." As Donoho puts it: "Hope is not a strategy."

The leaked Google memo saying "we have no moat" is evidence of exactly this. In a world of frictionless reproducibility, no company can maintain advantage for long—every achievement gets reproduced and improved upon. The hegemons benefit from the perception that they "own" AI, but the real engine is open research practices that anyone can adopt.

Some might point to recent work like the Platonic Representation Hypothesis (Huh et al., 2024), which observes that neural networks trained on different data and modalities are converging toward similar representations as they scale. The paper frames this as evidence that models are approaching a shared "statistical model of reality." But notice what this actually says: statistical model, not reality itself. All models are converging to similar ways of compressing the statistical structure of their training distributions—which is exactly what you'd expect if they're all kernel machines learning from overlapping data sources. Convergence in representation space doesn't mean convergence toward understanding; it means convergence toward the same pattern-matching strategy applied to the same underlying data statistics.

The paper's own framing reveals the limitation: models are learning representations of the joint distribution over events that generate observable data. That's a fancy way of saying they're memorizing statistical regularities in what they've seen. This is precisely what kernel machines do. The "platonic representation" isn't Plato's realm of Forms—genuine abstract understanding—it's the fixed point of kernel learning on internet-scale data. Models align because they're all doing the same mathematical operation on overlapping training sets.

Biological systems, by contrast, learn from sparse experience because they use finite, structured representations—neural assemblies with specific connectivity patterns, not infinite-dimensional kernel expansions. The Assembly Calculus achieves Turing-completeness with mechanisms that have natural inductive biases for the kinds of learning organisms actually need to do. Transformers achieve impressive interpolation within their training distribution by memorizing statistical regularities. These are fundamentally different approaches, and only one of them resembles anything like cognition.

This matters because the superintelligence narrative assumes that scaling transformers—more parameters, more data, more compute—is the path to general intelligence. But what if intelligence requires something architecturally different? What if we're scaling the wrong paradigm? The existence of alternative approaches that are Turing-complete, more neurobiologically grounded, and vastly more data-efficient should make us deeply skeptical of claims that we're on a path to AGI by making transformers bigger.

Digital and analog computation have fundamentally different properties. Analog systems are continuous, noisy, and energy-efficient; digital systems are discrete, precise, and energy-hungry. The brain is analog—or rather, it's a complex system that defies the analog/digital binary entirely, using spike timing, dendritic computation, glial interactions, and who knows what else. The confidence that digital transformers will replicate this by scaling up is a bet, not a theorem.

This isn't just theoretical. Neuromorphic computing—hardware designed to mimic biological neural processing—already exists. Intel's Loihi chip, IBM's TrueNorth, and various academic projects implement spiking neural networks in silicon. These systems are orders of magnitude more energy-efficient than GPUs running transformers. They process information through spike timing and local learning rules rather than global backpropagation. If we wanted to build systems that actually resemble biological cognition, we have alternative paths. The industry isn't pursuing them at scale—not because they don't work, but because the current paradigm is profitable and the infrastructure is already built. The choice to scale transformers is a business decision, not a scientific necessity.
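For a sense of how different this computational style is, here is a minimal leaky integrate-and-fire neuron, the basic unit that spiking systems implement. All constants (dt, tau, thresholds) are illustrative, not taken from any particular chip or paper. Note what is absent: no gradients, no matrix multiplications; information lives in discrete spike times.

```python
# Minimal leaky integrate-and-fire (LIF) neuron. All constants are
# illustrative, not taken from any particular chip or paper.
dt, tau, v_rest, v_thresh, v_reset = 1.0, 20.0, 0.0, 1.0, 0.0

def simulate(input_current, steps=200):
    v, spikes = v_rest, []
    for t in range(steps):
        # Membrane potential leaks toward rest while integrating input
        v += (dt / tau) * (v_rest - v) + (dt / tau) * input_current
        if v >= v_thresh:       # threshold crossing emits a discrete spike
            spikes.append(t)
            v = v_reset         # and the potential resets
    return spikes

print("spike times for strong input:", simulate(1.5)[:5])
print("spike times for weak input:  ", simulate(0.5))
```

A weak input never crosses threshold and produces no spikes at all; a strong input produces a regular spike train whose timing carries the signal. Event-driven hardware only spends energy when spikes occur, which is where the efficiency gap over always-on matrix multiplication comes from.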

When I experience the color red, there's something it's like to be me experiencing it. This subjective quality—what philosophers call "qualia"—seems to be something over and above any functional description. You could know everything about the physics and neuroscience of color perception and still not know what red looks like to me. This is the hard problem of consciousness, and after 30 years of discussion, we haven't made progress. The computational response is: consciousness is just what complex information processing feels like from the inside. But this assumes the very thing at issue—that complex information processing necessarily produces subjective experience.

The Chinese Room thought experiment isn't conclusive—Dennett and others have legitimate responses—but it points to something real. There's a gap between syntactic manipulation (moving symbols according to rules) and semantic understanding (grasping what symbols mean). The superintelligence narrative assumes this gap can be crossed computationally. Maybe it can. But we haven't demonstrated it, and the demonstration might be impossible even in principle.

Why does this matter? Because some of what we value about intelligence might be intrinsically connected to consciousness: understanding (not just pattern-matching), creativity (not just recombination), wisdom (not just optimization). If these require consciousness, and consciousness isn't computational, then superintelligence might be impossible in the relevant sense. You could have a very powerful optimization process that lacks the properties we most care about.

The Problem with "Intelligence"

Here's what I notice when people talk about superintelligence: the word "intelligence" does an enormous amount of unexamined work.

We treat intelligence as a single quantity, like temperature or mass—something that can be measured, compared, and scaled up indefinitely. But this framing has a specific history, and it's not pretty.

IQ testing was developed in the early 20th century by eugenicists—people who believed humanity could be improved through selective breeding. The tests were designed to sort people into hierarchies: who could serve in which military roles, who deserved to reproduce, who should be institutionalized or sterilized. "General intelligence" (the g factor) became dogma not because scientists discovered a single substrate of cognition, but because a single number was useful for ranking humans. The claim that cognition reduces to a scalable dimension isn't a discovery—it's an invention, and it was invented to justify hierarchies.

This isn't ancient history. The superhuman AI narrative is the logical extension of the same ideology: if intelligence is a single axis, and some people have more of it than others, then something could have more of it than all of us. The hierarchy continues upward. TESCREALism doesn't just inherit transhumanist language from eugenics; it inherits the core move of hierarchizing minds and treating the "lower" ones as raw material for the "higher."

Consider what this framing excludes: my grandmother, who never went to college, navigates complex family dynamics with a sophistication I couldn't match after decades of trying. A friend who solves differential equations in his head can't read a room to save his life. My cat understands physics (catching prey mid-air) while being confused by mirrors. These aren't just different amounts of some underlying thing. They're different kinds of cognitive capacity, shaped by evolution for specific purposes, implemented in specific biological architectures, and deeply dependent on social context and lived experience.

The people we call "intelligent" are usually people who excel at the particular cognitive tasks our institutions have decided to measure and reward. And those institutions were built by people who benefited from exactly that framing.

When we say AI might become "more intelligent than humans," what exactly are we claiming? More intelligent at:

  • Chess? Already done.
  • Go? Done.
  • Protein folding? Done.
  • Writing poetry that moves people? Unclear how to evaluate.
  • Understanding what another person needs? Unclear what this even means computationally.
  • Knowing when to break rules? We can't specify this formally.

The superintelligence narrative treats intelligence as though it has a single axis that can be scaled. But cognition might be more like "athletic ability"—a loose cluster of distinct capacities with different constraints and different scaling properties. Usain Bolt's sprinting doesn't scale to swimming. Skill in theoretical physics doesn't scale to navigating a marriage. Excellence in one domain often comes at the cost of others, not as a foundation for them.

This doesn't mean AI won't be dangerous. But it suggests the "intelligence explosion" model—where each improvement enables the next—might hit barriers specific to each cognitive domain. And it should make us ask: whose interests are served by pretending otherwise?

What GPT-4 Actually Tells Us

I can't write about superintelligence in 2024 without addressing what's actually happening in AI.

GPT-4 and Claude are remarkable. I use them daily. They pass bar exams. They write competent code. They engage in conversations that feel genuinely intelligent. This is evidence I have to take seriously.

But here's what I notice: they're getting better at things that can be represented in text. They're getting better at pattern-matching within training distributions. They are not—as far as we can tell—getting better at:

  • Genuine novel reasoning that couldn't be interpolated from training data
  • Maintaining coherent long-term goals across contexts
  • Modeling their own uncertainty accurately
  • Updating their beliefs based on evidence they receive

When GPT-4 "reasons," it's drawing on patterns it's seen. When a mathematician reasons, they're doing something that—at least phenomenologically—feels different. Whether this difference is fundamental or just a difference of degree is exactly what's at issue.

The scaling hypothesis says: keep adding parameters and training data, and eventually you get AGI. Maybe. But we've seen scaling laws plateau before (Moore's Law is slowing). We've seen capabilities that looked exponential turn out to be S-curves. The prediction that scaling will produce superintelligence is a prediction, not an observation.

The Strongest Case Against My Position

Let me steelman the opposition more thoroughly. The most compelling recent evidence for the superintelligence concern comes from two areas:

Emergent capabilities: Research from Google and others has documented capabilities that appear suddenly at scale—abilities that weren't present in smaller models and weren't explicitly trained. Three-digit addition, chain-of-thought reasoning, certain forms of in-context learning. The argument is: if capabilities can emerge unpredictably, then dangerous capabilities might emerge unpredictably too. We might not see superintelligence coming.

Mechanistic interpretability: Anthropic and others are doing careful work to understand what's happening inside these models. They've found that neural networks develop internal representations—"features"—that correspond to meaningful concepts. The models seem to be building something like world models, not just memorizing patterns.

I take this seriously. Here's why I'm still skeptical:

The "emergence" framing is contested. Recent work by Schaeffer et al. (2023) argues that emergent capabilities are often artifacts of how we measure performance—discontinuous metrics create the appearance of sudden jumps when the underlying capability is improving smoothly. When you use continuous metrics, the "emergence" often disappears. This doesn't mean capabilities aren't increasing, but it suggests the discontinuities might be less dramatic than claimed.
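Schaeffer et al.'s point can be reproduced in a few lines. Suppose per-token accuracy improves smoothly with scale, but we score exact-match over a 30-token answer, an all-or-nothing metric. The numbers below are synthetic, chosen only to illustrate the mechanism:

```python
import numpy as np

scales = np.arange(1, 11)          # synthetic model "scales"
p = 1 - 0.5 ** scales              # per-token accuracy: smooth, gradual gains
exact_match = p ** 30              # all-or-nothing score on a 30-token answer

for s, pt, em in zip(scales, p, exact_match):
    print(f"scale {s:2d}: per-token {pt:.3f}   exact-match {em:.6f}")
```

Per-token accuracy climbs gradually at every step, yet exact-match sits near zero for small scales and then shoots upward. The apparent discontinuity is a property of the metric, not of the model.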

As for interpretability: finding that models develop internal representations doesn't tell us those representations constitute understanding. The kernel machine result still holds. Models are still interpolating from training data—they're just doing so through a richer feature space than we initially expected. A "world model" that's built entirely from statistical regularities in text isn't the same as a world model built from causal reasoning and embodied experience. It might be useful for prediction within the training distribution while being completely unreliable for genuine novel situations.

I could be wrong. The next generation of models might do things that force me to revise. I'm holding my position loosely.

The Bootstrap Problem

Even granting that AI could become superintelligent, I'm skeptical of the intelligence explosion scenario.

The idea is: once AI is smart enough to improve its own code, it enters a positive feedback loop. Each improvement makes the next improvement easier. Within days or hours, it goes from human-level to incomprehensibly superhuman.

But consider: to improve yourself, you need to understand yourself. To understand yourself, you need to be smarter than yourself. At best, you can improve the parts of yourself you understand—but those might not be the bottlenecks.

Humans have been trying to enhance human intelligence for millennia. We have schools, books, meditation practices, nootropics. We've improved—but not explosively. Each advance makes the next advance harder, not easier. We find ourselves on an S-curve, not an exponential.

Why would AI be different? Maybe because AI can modify its own code directly, while humans can't rewrite our neurons. But code modification faces its own constraints:

  • Testing takes time proportional to system complexity
  • Unintended consequences scale with the number of modifications
  • There's no guarantee the improvement pathway is smooth

The scenario where AI goes from human-level to galaxy-brain in an afternoon requires a very specific optimization landscape—one that slopes consistently upward with no false summits, no deceptive gradients, no combinatorial explosions of test cases. We have no reason to believe the landscape looks like that. Most optimization landscapes don't.
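A toy hill-climb illustrates why. On a rugged landscape (this one is synthetic, a sum of sinusoids with a gentle upward drift), a greedy self-improver that takes every locally improving step still stalls at the first false summit, far below the global peak:

```python
import numpy as np

# A synthetic "capability landscape": rugged, with a gentle upward drift.
x = np.linspace(0, 10, 2001)
landscape = np.sin(3 * x) + 0.3 * np.sin(17 * x) + 0.05 * x

# Greedy self-improvement: from a starting design, take any adjacent step
# that increases capability; stop when no local step helps.
i = 100
while True:
    neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(x)]
    best = max(neighbors, key=lambda j: landscape[j])
    if landscape[best] <= landscape[i]:
        break                      # a false summit: locally optimal, globally poor
    i = best

print(f"greedy peak: {landscape[i]:.3f}   global peak: {landscape.max():.3f}")
```

Escaping the false summit requires accepting temporary losses, exploring regions the improver cannot evaluate in advance, or redesigning the search itself, none of which the runaway-bootstrap story accounts for.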

There's something revealing about how confidently the TESCREAL communities embrace the bootstrap scenario. These are communities that prize pure reasoning from first principles, that built identities around being "more rational" than mainstream institutions, that believe careful thinking can solve any problem. The intelligence explosion is what you'd expect if you believed reasoning could simply think its way out of any constraint. It's the philosophical position of people who've never hit a wall that more thinking couldn't dissolve.

Engineers who've actually worked on complex systems tend to be more skeptical. They've learned that adding features creates bugs, that optimization has diminishing returns, that the last 10% of performance takes 90% of the effort. The confidence in explosive self-improvement comes from people building theories about AI, not people debugging code at 2am.

So far I've been treating the superintelligence narrative as a set of arguments to be evaluated on their merits. But arguments don't exist in a vacuum. They arise from particular communities, serve particular interests, and gain traction for particular reasons. The next question is: who built this narrative, and why has it succeeded?

The Sociology of Superintelligence Discourse

The TESCREAL Bundle

Computer scientist Timnit Gebru and philosopher Émile P. Torres have given us a useful framework for understanding where superintelligence discourse comes from. They coined the acronym TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.

These aren't separate movements—they're an interconnected ideological bundle with shared origins, shared funders, and shared institutional bases. The superintelligence narrative didn't emerge from mainstream AI research, cognitive science, or neuroscience. It emerged from this specific milieu.

Gebru and Torres trace these ideologies to 20th-century eugenics—the belief that humanity can be improved through selective breeding and technological enhancement. This isn't guilt by association; it's intellectual genealogy. The transhumanist dream of transcending human limitations, the longtermist fixation on future generations over present suffering, the rationalist confidence in quantifying and optimizing human values—these carry forward specific assumptions about which lives matter and why.

Torres is particularly worth listening to here because he was inside these movements before becoming a critic. He identified as a longtermist; he took the arguments seriously. His critique comes from someone who knows the internal logic intimately—and found it wanting.

The Mailing Lists and the PayPal Mafia

To understand how we got here, you have to go back to the 1990s mailing lists.

The Extropians list, founded by philosopher Max More in the late 1980s, was ground zero for transhumanist thought. Extropianism championed "Boundless Expansion," "Self-Transformation," and "Intelligent Technology"—fighting entropy through technological transcendence. The list attracted futurists, cryonics enthusiasts, and early tech entrepreneurs. It was here that ideas about intelligence enhancement, life extension, and technological singularity first crystallized into a coherent worldview.

Eliezer Yudkowsky entered this world as a teenager in the late 1990s. Self-taught, with no formal credentials in AI or any other field, he became a prolific contributor to discussions about machine superintelligence. In 2000, with funding from internet entrepreneurs Brian and Sabine Atkins, he founded the Singularity Institute for Artificial Intelligence (later renamed MIRI, the Machine Intelligence Research Institute). He was 20 years old.

What's remarkable about this origin story is what's missing: peer review, academic credentials, empirical research programs. The ideas developed in a hothouse of enthusiasts, reinforced by shared assumptions rather than tested against external reality. Yudkowsky's writings—collected in the "Sequences" on LessWrong—blend philosophical speculation, amateur decision theory, and what critics have called "soft science fiction." He also wrote Harry Potter and the Methods of Rationality, a fanfiction novel that doubled as rationalist propaganda, blurring the line between fiction and philosophy entirely.

Spend time in these communities and a pattern emerges: intellectual theatre rather than intellectual work. The style mimics rigor—numbered premises, expected utility calculations, Bayesian updating—but the substance rarely survives contact with actual experts. Claims about consciousness, computation, and intelligence that would get shredded in a philosophy seminar circulate as established fact. Novel "decision theories" are invented without engaging the existing literature. Thought experiments substitute for empirical research. The performance of rationality replaces its practice.

This isn't to say everyone in these communities is foolish—there are genuinely engaged people involved. But the institutional structure rewards seeming rigorous over being rigorous. When your audience is other autodidacts who share your assumptions, you never hit the friction that sharpens thought. The result is an elaborate edifice of ideas that feel profound to insiders but look amateurish to outsiders with actual domain expertise. Try getting a MIRI paper through peer review at a top AI conference. Try publishing Yudkowsky's decision theory in a philosophy journal. The ideas don't port because they were never stress-tested against anything but themselves.

Then came Peter Thiel.

The PayPal co-founder and early Facebook investor has been one of the most significant funders of the TESCREAL ecosystem. Through the Thiel Foundation, he's given over $1 million to MIRI, $3.5 million to the Methuselah Foundation (life extension research), and $1.25 million to the Seasteading Institute (autonomous ocean communities). Thiel's interest in transcending human limitations—including his reported interest in parabiosis, the transfusion of young blood—reflects the transhumanist core of his worldview.

Thiel is also associated with the Dark Enlightenment or neoreactionary movement, which critiques democratic governance and fantasizes about tech-CEO monarchies. While the rationalist and neoreactionary communities are distinct, they overlap in personnel, platforms, and certain shared assumptions about human hierarchy and the dispensability of democratic norms. Curtis Yarvin (Mencius Moldbug), a key neoreactionary thinker, found his early audience in the same Bay Area tech circles that produced LessWrong.

This is the petri dish from which superintelligence discourse emerged: a small, insular community of self-reinforcing enthusiasts, funded by billionaires with idiosyncratic ideological commitments, developing elaborate theories untethered from mainstream scientific practice.

The Structure of the Bundle

The communities that produce superintelligence discourse share specific sociological features:

  • Abstract reasoning detached from empirical feedback: These communities excel at building elaborate models and following chains of logic. But the real world rarely provides clear tests of their predictions, so the models can become self-reinforcing.

  • A particular relationship to elite institutions: Often adjacent to but critical of academia. They built alternative institutions (LessWrong, the Future of Humanity Institute, MIRI) that function as credentialing bodies within the community while remaining peripheral to mainstream scholarship.

  • Strong in-group identity formation: Believing these ideas marks you as part of the community. Skeptics aren't just wrong—they "don't get it," they lack the intellectual courage to face hard truths, they're failing to take ideas seriously.

  • Material resources from a narrow base: Open Philanthropy, tech billionaire personal foundations, and—until its collapse—FTX. Sam Bankman-Fried's billions flowed to effective altruist and longtermist causes. When the money came from fraud, the movement had to reckon with what that meant about its vetting processes and incentive structures.

Doomers and Accelerationists: Same Logic, Different Conclusions

Here's what's revealing: both AI "doomers" (who think AI will destroy humanity) and "accelerationists" (who think AI will save it) share the TESCREAL framework. They disagree about outcomes but agree on premises:

  • Intelligence is the key variable that determines everything
  • AI will become superintelligent
  • This transition will be the most important event in history
  • The people thinking about this right now are doing the most important work possible

Whether you land on "we must slow down AI" or "we must speed up AI," you've already accepted that superintelligence is coming and that it will be decisive. The disagreement is tactical, not fundamental.

Gebru and Torres argue that both positions function to justify the same thing: unlimited AI development by tech companies. Doomers say "we need to build it first so we can align it." Accelerationists say "we need to build it faster to reach utopia." Either way, the conclusion is: keep building.

Secular Religion

Several scholars have noted that TESCREAL functions like a secular religion. It has:

  • Eschatology: A story about the end of history (the singularity, extinction, or transcendence)
  • Soteriology: A path to salvation (building aligned AI, or reaching the stars, or uploading consciousness)
  • Elect and damned: Those who understand the stakes vs. those who don't
  • Moral urgency: This generation's choices determine the fate of all future generations
  • Missionary zeal: The need to spread the message and convert others

This isn't a criticism of religion per se. But it helps explain why the arguments feel so compelling to insiders and so strange to outsiders. We're not just debating empirical claims about AI capabilities—we're navigating a meaning-system, a way of understanding what matters and why.

I say this not to dismiss the arguments—I've tried to engage them on their merits—but to note that the superintelligence narrative fits these communities like a key fits a lock. It rewards exactly the kinds of thinking they've built identities around. It provides existential stakes that justify their lifestyle choices. It makes their particular expertise feel cosmically important.

The question isn't whether the people involved are "smart"—that framing accepts the very premise I'm questioning. The question is whether the arguments hold up outside the hothouse conditions that produced them.

Court Philosophy and the Hierarchization of Life

In a remarkable 2023 paper, Neşe Devenot—a researcher at Johns Hopkins—argues that TESCREALism functions as what she calls "the court philosophy of the global oligarch class":

"Just as earlier court philosophers articulated the divine right of kings to naturalize monarchy, present-day billionaires are funding TESCREAList philosophers and 'thought leaders' to articulate ethical justifications for extreme inequality under oligarchic rule."

This is the sharpest framing I've encountered. The superintelligence narrative isn't cutting-edge philosophy—it's ideology dressed as philosophy, serving the same function as divine right theory did for monarchies. It makes extreme concentration of power seem not just acceptable but cosmically necessary.

Devenot connects this to what political theorist Achille Mbembe calls necropolitics: the power to determine which lives are disposable. Drawing on work by Keith Williams and Suzanne Brant (scholars of Haudenosaunee ancestry), she contrasts TESCREAList longtermism with Indigenous concepts of intergenerational responsibility:

"Indigenous views on the wellbeing of future generations are commonly rooted in a non-hierarchical ontology based on reciprocity and relationality.... By imposing a hierarchy wherein some forms of life are ascribed greater value and meaning than others, neoliberalism justifies the instrumentalization of those at the bottom of that hierarchy by those at the top."

This distinction is crucial. When longtermists invoke "future generations," they're not talking about the Honorable Harvest principle that Robin Wall Kimmerer describes—caring for the land so it remains rich for the seventh generation. They're talking about birthing posthuman consciousness that transcends biological limitations. The extinction of Homo sapiens is, on some versions of this view, an acceptable or even desirable outcome. Present suffering—in the Global South, among the working class, among non-human life—becomes acceptable sacrifice for cosmic transcendence.

Sam Altman's involvement in psychedelic medicine company Journey Colab illustrates the extraction logic. The company emphasizes "Indigenous reciprocity" while pursuing FDA approval for mescaline, a substance with origins in Indigenous ceremonies. But as Altman describes it: "Those [Indigenous] communities will share with Journey what they know of the history of these medicines, and Journey will share what Silicon Valley is good at, with how to use startups and capitalism to deliver something to people who can really benefit from it." Reciprocity, in this framing, means Indigenous knowledge becomes grist for the TESCREAL mill.

Who Benefits?

Following the money is clarifying:

AI companies benefit from the perception that they're building something world-historically important. "We might create superintelligence" is a better pitch to investors than "we're making incremental improvements in pattern matching." OpenAI's structure—a nonprofit that spawned a capped-profit subsidiary that sought billions in investment—makes sense only if you believe the world-historical narrative. Sam Altman learned the superintelligence framing from Yudkowsky's circles before building the company that might (on this view) accidentally destroy the world.

AI safety researchers have built careers and institutions on the premise that their work is existential. I don't think most are cynical—they genuinely believe. But the material incentives and the beliefs are mutually reinforcing. When your funding, status, and sense of purpose all depend on a particular narrative, motivated reasoning becomes very hard to avoid.

Tech billionaires get to be either saviors or prophets. Elon Musk warns about AI risk while building AI companies. Peter Thiel funds both AI development and AI safety research—placing bets on both sides of a game he helped define. The role is flattering regardless of which position you take: you're one of the few people clear-eyed enough to understand what's at stake.

The broader tech industry benefits from a discourse that frames AI as autonomous and inevitable rather than as a set of choices made by corporations for profit. If superintelligence is coming regardless, we might as well have Google or OpenAI build it rather than someone less responsible. The narrative naturalizes what is actually a political choice about resource allocation, labor, and power.

The exploited labor that actually builds AI disappears from the narrative entirely. As journalist Pete Jones reported for Rest of World:

"Forced to adapt their sleeping patterns to meet the needs of firms on the other side of the planet and in different time zones, the largely Syrian population of Lebanon's Shatila camp forgo their dreams to serve those of distant capitalists. Their nights are spent labeling footage of urban areas—'house,' 'shop,' 'car'—labels that, in a grim twist of fate, map the streets where the labelers once lived, perhaps for automated drone systems that will later drop their payloads on those very same streets."

This is the material base of "artificial intelligence"—not silicon consciousness bootstrapping toward transcendence, but refugees doing clickwork for pennies while fleeing wars that the technology they're training might one day intensify. The superintelligence narrative, with its focus on far-future existential risk, directs attention away from these present-tense harms.

The harms aren't hypothetical. They have names and case numbers:

COMPAS — A recidivism prediction algorithm used in criminal sentencing across the United States. ProPublica's 2016 investigation found it was twice as likely to falsely flag Black defendants as future criminals compared to white defendants. Judges used its scores to determine prison sentences. The company that built it claimed proprietary secrecy. This is AI causing measurable harm right now, to specific people, with specific addresses.

Amazon's hiring algorithm — In 2018, Reuters revealed that Amazon had built a machine learning tool to review resumes that taught itself to penalize women. The system downgraded resumes that included the word "women's" (as in "women's chess club") and graduates of all-women's colleges. Amazon scrapped it, but only after years of use.

Clearview AI — A facial recognition company that scraped billions of photos from social media without consent and sold the resulting database to law enforcement. It's been used to identify protesters, track immigrants, and enable stalking. Multiple countries have found it violates privacy laws. The technology exists, is deployed, and is causing harm—while we debate whether hypothetical superintelligence might someday be dangerous.

Uber's algorithmic management — Drivers are hired, fired, and disciplined by algorithms they don't understand and can't appeal. The opacity is a feature, not a bug: it insulates the company from accountability while extracting maximum labor from workers who have no recourse.

These aren't edge cases. They're the norm. AI is already being used to make consequential decisions about who gets hired, who gets loans, who goes to prison, who gets surveilled, and who gets deported. The systems are opaque, unaccountable, and disproportionately harm marginalized communities. This is the AI safety problem we actually have—and it's not the one the TESCREAL communities are focused on.
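The ProPublica finding about COMPAS is, at bottom, a claim about group-conditional error rates: among people who did not reoffend, how often did the algorithm flag them as high risk anyway? A minimal sketch with invented counts (these are illustrative assumptions, not ProPublica's actual numbers) shows how a system can look reasonable in aggregate while its false positive rate differs sharply by group:

```python
# Hypothetical confusion counts among defendants who did NOT reoffend.
# "flagged" = the algorithm labeled them high risk (a false positive).
# All numbers are invented for illustration, not ProPublica's data.
groups = {
    "group_a": {"flagged": 400, "not_flagged": 600},
    "group_b": {"flagged": 200, "not_flagged": 800},
}

def false_positive_rate(counts):
    """Share of non-reoffenders wrongly flagged as high risk."""
    fp = counts["flagged"]
    tn = counts["not_flagged"]
    return fp / (fp + tn)

fpr_a = false_positive_rate(groups["group_a"])  # 0.4
fpr_b = false_positive_rate(groups["group_b"])  # 0.2

# Group A's non-reoffenders are falsely flagged twice as often as group B's.
print(f"FPR A: {fpr_a:.2f}, FPR B: {fpr_b:.2f}, ratio: {fpr_a / fpr_b:.1f}x")
```

The point of the sketch is that "the model is accurate overall" and "the model's errors fall equally on everyone" are different claims—and the second is the one that mattered in the COMPAS case.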

Devenot calls this pattern "trickle-down ecstasis"—the belief that transforming elite consciousness will eventually benefit everyone. Ronan Levy, former CEO of the psychedelic company Field Trip Health, stated it explicitly:

"Even if you only serve rich white men with access to psychedelic therapies, it's going to change them in a way that I think is constructive and positive.... I know it seems counterintuitive that creating more inequity is going to create equity, but I do genuinely believe that's a possible outcome here."

This is the logic across the TESCREAL bundle: concentrate resources at the top, trust that enlightened elites will trickle benefits downward, and frame any objection as shortsighted failure to appreciate the cosmic stakes. It's supply-side economics dressed in psychedelic language, or in AI language, or in longtermist language—but the structure is the same.

This doesn't prove the arguments wrong. But when a narrative was incubated in 1990s mailing lists, funded by billionaires with specific ideological commitments, developed outside mainstream scientific institutions, and serves the interests of everyone involved in propagating it—we should demand extraordinary evidence before accepting extraordinary claims.

Why Now?

The TESCREAL bundle has existed since the 1980s, but superintelligence moved from fringe to mainstream in the 2010s-2020s. Why?

Deep learning created a plausibility crisis. After decades of AI winters, systems started doing impressive things. AlphaGo, GPT-3, DALL-E—these made it feel like we were making real progress toward AGI. The TESCREAL communities, who had been saying this was coming for decades, suddenly seemed prescient rather than cranky.

Massive capital needed a story. Billions of dollars have flowed into AI. That capital needs to believe it's not just building better autocomplete—it's building the future of intelligence itself. The superintelligence narrative provides that story. Venture capital and the TESCREAL worldview are mutually reinforcing: the narrative justifies the investment, and the investment legitimizes the narrative.

The FTX moment revealed the material base. When Sam Bankman-Fried's empire collapsed, it became clear how concentrated the funding for effective altruism and longtermism had been. One person, with billions of dollars from what turned out to be fraud, had shaped the entire research agenda for existential risk. This should make us ask what other funding dependencies might distort the field.

Professional-class anxiety found an outlet. For the first time, automation threatens not just factory workers but lawyers, doctors, programmers—the people who thought they were safe. Superintelligence discourse lets this class process their anxiety through a framework that feels intellectual rather than merely self-interested. It's easier to worry about extinction than about becoming economically redundant.

The meaning vacuum. In a secular age, the TESCREAL bundle offers something like religious purpose. Working on AI safety isn't just a job; it's potentially saving the world—or the entire future light cone of the universe, if you're a longtermist. This is a powerful attractor for people seeking significance.

The depoliticization of suffering. Mark Fisher identified this dynamic before his death in 2017:

"The chemico-biologization of mental illness is of course strictly commensurate with its depoliticization. Considering mental illness an individual chemico-biological problem has enormous benefits for capitalism. First, it reinforces Capital's drive towards atomistic individualization (you are sick because of your brain chemistry). Second, it provides an enormously lucrative market."

The superintelligence narrative performs the same operation on a grander scale. If our problems stem from insufficient intelligence—whether human or artificial—then the solution is technological rather than political. We don't need to redistribute wealth, dismantle hierarchies, or reorganize the economy. We just need smarter machines, or smarter ways to align smarter machines. The radical imagination is foreclosed: the only futures available are the ones the existing power structure can provide.

The real hallucinations. Drawing on Naomi Klein, Devenot points out that we've been distracted by the wrong hallucinations. When AI systems fabricate citations or invent facts, the industry calls these "hallucinations"—subtly naturalizing the transhumanist fantasy that these systems are conscious beings who experience things. But the actual hallucinations are the promises: AI will end poverty, cure all disease, solve climate change, make jobs more meaningful. These claims "dissociate from the evidence of material conditions" to justify breakneck development by corporate interests.

The solutions to most of our problems are not mysterious. We know what causes mental illness epidemics: inequality, precarity, isolation, environmental degradation. We know what causes climate change: fossil fuel extraction driven by profit motive. We know what would help: material security, community, clean air and water, meaningful work. But these solutions require systemic change that threatens existing power structures. So we're told to wait for a technological messiah instead.

I find myself suspicious when an idea fits too perfectly with the moment. Ideas that feel obviously correct to many people often do so because of social context, not evidence. The superintelligence narrative is useful to too many interests for me to trust my intuitions about it.

The Policy Moment

This isn't abstract. The TESCREAL framing is actively shaping governance right now.

In November 2023, the UK hosted the Bletchley Park AI Safety Summit—the first major international gathering on AI risk. The agenda was dominated by existential risk from advanced AI systems. Civil society groups, labor unions, and Global South representatives were largely absent. The framing assumed the problem was future superintelligence, not present-day harms: algorithmic discrimination, exploitative labor practices, environmental costs, concentration of power.

The U.S. Executive Order on AI (October 2023) similarly prioritized "safe" and "trustworthy" AI in language that treats the technology as an autonomous agent to be managed rather than a set of corporate decisions to be governed. The order focuses heavily on dual-use foundation models—the ones frontier labs are building—while saying less about the systems already causing harm.

Meanwhile, AI companies have discovered that x-risk discourse provides perfect cover for regulatory capture. By emphasizing the dangers of future superintelligence, they make the case that only they have the expertise to develop it safely. The subtext: don't regulate us into oblivion; we're the responsible ones. OpenAI, Anthropic, and Google all now employ teams of people whose job is to argue that their employers' technology might destroy humanity—while continuing to build it. The argument becomes: we need to be at the frontier to ensure safety. Safety requires scale. Scale requires investment. Investment requires permissive regulation.

Compare this to what labor advocates, civil rights groups, and affected communities are actually asking for: accountability for algorithmic discrimination in hiring and lending; transparency about training data and labor conditions; worker protections for the people labeling images and moderating content; environmental disclosure for the staggering energy costs of training runs; antitrust enforcement against the concentration of AI capabilities in a handful of corporations.

These demands get little traction in the policy conversation. They're not existential enough. They concern merely the people being harmed right now, not the hypothetical posthumans of the far future.

The Geopolitics of Existential Risk

There's another function the superintelligence narrative serves: justifying a new arms race.

"We must build it before China does" has become a mantra in Washington. The argument goes: if superintelligence is coming regardless, and if it will determine the future of civilization, then the United States cannot afford to fall behind. Safety concerns must be balanced against competitive pressures. Slowing down is unilateral disarmament.

This framing is remarkably convenient for AI companies. It transforms corporate interests into national security imperatives. It makes criticism of breakneck development seem naive or even treasonous. And it recycles Cold War logics that the national security establishment finds familiar and compelling.

But notice what the framing assumes: that superintelligence is coming, that whoever builds it first "wins," that the relevant competition is between nation-states rather than between corporations and publics. None of these assumptions survive scrutiny. China is pursuing AI development, but there's no evidence they're closer to AGI than anyone else—because no one is close to AGI. The "race" metaphor implies a finish line that may not exist.

Meanwhile, the actual AI competition is for market share, surveillance capability, and military applications—domains where present-day AI is already causing harm. Framing the issue as an existential race distracts from questions we should be asking: Should autonomous weapons be banned by treaty? Should facial recognition be regulated? Should the companies building these systems be broken up?

The x-risk frame says: those questions can wait; we're racing toward godhood. The power-analysis frame says: those questions are urgent precisely because the technology is already being deployed at scale. The geopolitical framing of superintelligence serves the same interests as the domestic framing: unlimited development by incumbent players, with accountability deferred to a future that never arrives.


I've spent this essay tracing where the superintelligence narrative comes from, who benefits from it, and how it's shaping policy. But I should also be clear about what I actually believe.

What I Actually Believe

On current AI: These systems are powerful tools with significant risks. The risks are mostly mundane: misinformation, manipulation, job displacement, concentration of power, algorithmic bias. These risks are real and worth addressing now.

On AGI: I don't know whether it's possible. If intelligence is substrate-independent, maybe. If it requires something about biological instantiation we don't understand, maybe not. I'm genuinely uncertain.

On superintelligence: Here's what I actually believe: superintelligence is possible. It might even be emerging. But it won't look like what Yudkowsky imagines.

The superintelligence scenario pictures an autonomous agent with its own goals, bootstrapping itself to godhood and then pursuing those goals with terrifying efficiency. But why would superintelligence be an agent at all? The more likely scenario—the one already unfolding—is superintelligence as infrastructure: the integration of AI systems into capital flows, logistics networks, surveillance apparatus, and military operations. Not a mind, but a machine—in the older sense of an arrangement of parts that accomplishes work.

Who owns the datacenters? Who controls the training data? Who decides what gets optimized and for whom? These questions matter more than the alignment problem. A "misaligned" AI that escapes human control is science fiction. An "aligned" AI that perfectly serves the interests of its owners—accelerating wealth concentration, automating exploitation, optimizing engagement at the cost of mental health, enabling surveillance at scale—is already here.

The superintelligence we should worry about isn't a digital god; it's the emergent intelligence of capital itself, augmented by AI tools, operating at speeds and scales no human can match, pursuing the maximization of returns with perfect indifference to human flourishing. You don't need consciousness for that. You don't need general intelligence. You just need optimization power in the hands of people whose interests diverge from everyone else's.

On existential risk from AI: I think it's non-zero but probably much lower than superintelligence advocates claim—if we're talking about the autonomous agent scenario. The scenarios require many conjunctions: AI becomes superintelligent, AND it has goals misaligned with humans, AND we can't detect this, AND it can take actions we can't prevent. Each conjunction multiplies uncertainty.
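The conjunction point can be made concrete with arithmetic. Even if you grant each step of the scenario a generous coin-flip probability (the numbers below are illustrative assumptions, not estimates from any source), the joint probability of the full chain is far smaller than any individual step:

```python
# Generously high, purely illustrative probabilities for each step
# of the autonomous-agent scenario. None of these are real estimates.
steps = {
    "AI becomes superintelligent": 0.5,
    "its goals are misaligned with humans": 0.5,
    "the misalignment goes undetected": 0.5,
    "it can take actions we cannot prevent": 0.5,
}

joint = 1.0
for claim, p in steps.items():
    joint *= p  # each required conjunct shrinks the joint probability

print(f"joint probability of the full chain: {joint}")  # 0.5**4 = 0.0625
```

This cuts both ways, of course—someone more confident in each step gets a larger product—but it shows why a scenario built from many required conjuncts cannot simply inherit the plausibility of its most plausible step.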

But the existential risk from AI-as-infrastructure is substantial and immediate. Climate systems optimized for short-term profit. Labor markets restructured to extract maximum value from humans. Information environments engineered to maximize engagement and minimize solidarity. These aren't hypothetical—they're happening. The risk isn't that AI will "escape" human control. It's that AI will remain perfectly under control—of the wrong people.

On what we should do: Focus on the real, present risks of AI rather than speculative future ones. Build governance structures for the technology we actually have. Resist the urge to treat far-future speculation as a reason to ignore present harms.

But I'm trying to hold these views loosely.

What Changes My Mind

Holding views loosely means specifying what would change them. Here's what would update me toward the superintelligence concern:

  1. An AI system that demonstrates genuine novel mathematical reasoning—proving theorems that couldn't plausibly be interpolated from training data.

  2. An AI system that maintains coherent long-term goals across extended interactions without constant steering by humans.

  3. An AI system that accurately models and communicates its own uncertainty, rather than confidently hallucinating.

  4. Evidence that scaling continues without diminishing returns at the capability frontier.

  5. A theory of consciousness that explains how it emerges from computation, with empirical predictions we can test.

If these things happen, I'll update. I'm not emotionally attached to being right. But until they happen, I remain skeptical—and skepticism implies responsibility. If I don't think the superintelligence framing is useful, what would I put in its place?

What Would Actually Help

Here's a sketch of a non-TESCREAL approach to AI governance:

Democratic oversight, not expert capture. The people affected by AI systems should have power over how they're deployed. This means worker representation on company boards making AI decisions. Community review boards for high-stakes algorithmic systems. Global South voices in international AI governance—not as token consultants, but as decision-makers. The TESCREAL approach concentrates authority in a small group of self-appointed experts; the alternative distributes it.

Present harms over speculative ones. Redirect resources from existential risk research to the immediate: algorithmic discrimination in hiring, lending, and criminal justice; exploitative labor conditions for data workers; environmental costs of training runs; concentration of power in a few corporations. These problems are tractable. We know what causes them. We can measure whether interventions work.

Antitrust enforcement. The AI industry is consolidating rapidly—a handful of companies control the compute, the data, and the talent. This concentration isn't inevitable; it's a policy choice. Breaking up these concentrations, requiring interoperability, and preventing exclusive access to key resources would create more distributed, accountable AI development. The existential risk framing serves incumbents by suggesting that only well-resourced labs can build safely; the alternative is to not require such immense resources in the first place.

Environmental accounting. Training a single large language model can emit as much carbon as 125 round-trip flights from New York to Beijing. The industry treats this as an externality. Actual accounting—requiring disclosure of energy use, mandating offsets, incorporating environmental costs into decision-making—would change incentives. The longtermist calculus that justifies present harms for future benefits systematically discounts the environment we actually live in.

Worker power. The data labelers in Shatila, the content moderators with PTSD, the gig workers without benefits—these are the people actually building "artificial intelligence." Unionization, minimum wage requirements for contractors, mental health support, and classification as employees rather than independent contractors would redistribute the value they create. The TESCREAL vision treats human labor as a waystation to automation; the alternative treats it as something worth protecting.

Material security. The best way to prevent AI-driven immiseration isn't to align superintelligence; it's to ensure people don't depend on labor markets for survival. Universal basic services—healthcare, housing, education, transit—decouple wellbeing from employment. A strong welfare state makes technological disruption manageable. The reason we're so anxious about AI taking jobs is that in our current system, losing a job means losing access to the conditions of life. Change the system, and the anxiety dissolves.

None of this requires solving alignment. None of it requires predicting what superintelligence would want. It requires confronting power—which is harder, and less flattering to the people currently dominating the conversation.

Conclusion: The Self-Awareness Paradox

Here's what I've come to believe: we don't know enough to be confident about superintelligence in either direction.

We don't know if human-level AI is possible. We don't know if consciousness is computational. We don't know if intelligence explosions can happen. We don't know if AI goals would be alien to human values.

The superintelligence narrative presents these uncertainties as resolved, then builds elaborate scenarios on the resolutions. I understand the appeal—uncertainty is uncomfortable, and these are important questions. But false confidence isn't better than acknowledged ignorance.

Devenot offers a metaphor I find haunting: the self-awareness paradox. Neuroscientists have speculated about giving psychedelics to patients in comas, hoping to "awaken" consciousness. But as one researcher noted, you might "wake someone up to the reality that two years have passed and their wife has left them"—cognizant of their predicament but unable to change it.

Apply this to the promise of AI or any individualized treatment for structural problems. Without the material conditions to support change, heightened awareness might only intensify suffering:

"They can't afford healthcare or feed their kids, they're working three jobs around the clock, and their water is full of lead."

This is the trap of technological solutionism: we develop ever-more-sophisticated tools for perceiving problems while leaving the structural causes untouched. The superintelligence discourse exemplifies this. Even if we developed perfectly aligned superintelligent AI, who would own it? Who would benefit? Under current arrangements, it would accelerate existing power concentrations—making a few people unfathomably wealthy while disrupting the livelihoods of billions. The apocalypse wouldn't come from misaligned AI; it would come from perfectly aligned AI serving its owners' interests at the expense of everyone else.

What I want is epistemic humility matched to the actual state of our knowledge:

  • Yes, AI is powerful and getting more powerful.
  • Yes, we should think carefully about risks.
  • No, we shouldn't treat speculative scenarios as established fact.
  • No, we shouldn't let far-future speculation distract from present harms.
  • Yes, we should keep watching and updating as evidence comes in.

The dinner party question—will AI lead to utopia or apocalypse?—is malformed. It assumes a future shaped primarily by technology rather than by politics, economics, and social organization. And it imagines the relevant agent as a digital god rather than the actually-existing superintelligence: capital itself, augmented by AI, operating at inhuman speed and scale, optimizing for returns with perfect indifference to human flourishing.

The real questions are: who controls AI development? Whose interests does it serve? What institutions govern its deployment? What would genuine democratic governance of these technologies look like? These are questions about power, not about intelligence.

We'll muddle through, as we usually do—not because we're wise, but because the future is always more complicated than any narrative can capture. The superintelligence story is seductive precisely because it promises clarity: there's a problem, it has a shape, and certain people (with certain skills, certain training, certain institutional positions) are uniquely qualified to solve it.

I don't believe that. I think the challenges we face are distributed, political, and fundamentally about how we organize ourselves—not about building or controlling some hypothetical godlike machine. The solutions we need aren't mysterious: material security, democratic governance, ecological sustainability, community bonds. These don't require superhuman intelligence to implement. They require confronting power—which is precisely why we're offered technological messiahs instead.

The truth is messier and less dramatic than the stories we'd prefer to tell. But I'd rather be honest about the mess than confident about a fiction.


I'm genuinely interested in being wrong about this. If you think I've missed something important, I want to know. The stakes are too high for me to prioritize being right over actually understanding.


Further Reading:

The argument I'm skeptical of:

  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies
  • Yudkowsky, E. The Sequences (LessWrong)

Critical perspectives on the TESCREAL bundle:

History of the movement:

Political economy of technology:

  • Fisher, M. (2009). Capitalist Realism: Is There No Alternative?
  • Fisher, M. (2011). "The Privatisation of Stress." Soundings, 48.
  • Giridharadas, A. (2018). Winners Take All: The Elite Charade of Changing the World
  • Rushkoff, D. (2022). Survival of the Richest: Escape Fantasies of the Tech Billionaires

Indigenous perspectives:

  • Kimmerer, R. W. (2013). Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge, and the Teachings of Plants
  • Williams, K. & Brant, S. (2023). "Tending a vibrant world: Gift logic and sacred plant medicines." History of Pharmacy and Pharmaceuticals
  • Celidwen, Y. et al. (2023). "Ethical principles of traditional Indigenous medicine to guide western psychedelic research and practice." The Lancet Regional Health - Americas

Philosophy of mind and consciousness:

  • Chalmers, D. (1995). "Facing Up to the Problem of Consciousness"
  • Searle, J. (1980). "Minds, Brains, and Programs"
  • Dennett, D. (1991). Consciousness Explained

Neuroscience and alternative approaches to AI:

Understanding what deep learning actually is:

AI skepticism and labor:

Present-day AI harms (the safety problems we actually have):