Return to First Principles

First Champagne, First Consciousness

In an elegant neuroscience laboratory at MIT in 2020, as evening light filtered through floor-to-ceiling windows and the pop of a champagne cork celebrated a breakthrough discovery, Giulio Tononi, the neuroscientist who developed Integrated Information Theory, and Demis Hassabis, the neuroscientist-turned-AI researcher who co-founded DeepMind, found themselves toasting with crystal flutes. The bubbles rose in perfect columns, each tiny sphere a universe of carbon dioxide seeking the surface. In those rising bubbles, they saw a metaphor for consciousness itself—countless simple elements combining to create something that transcends its parts, something that rises toward awareness.


✧ The Bubble of Awareness ✧

TONONI: [watching bubbles rise] Pop! The mind is more than bubbles. But look at them—each one is just CO2 and water, yet together they create this beautiful effervescence.

HASSABIS: [intrigued] Then let's model each bubble's rise to the surface. You know, that's actually a perfect metaphor for what we've been studying. Individual neurons aren't conscious, but somehow their collective activity creates consciousness.

TONONI: Exactly! And it's not just about having lots of neurons. A brain in a blender has all the same neurons but no consciousness. It's about how they're organized, how they communicate, how they integrate information.

HASSABIS: [leaning forward] So consciousness isn't a thing—it's a process? An emergent property of information integration?

The champagne sparkled in the light, each bubble a tiny messenger carrying dissolved gas toward freedom. The researchers watched, mesmerized, seeing in this simple physical process a reflection of the mystery they'd devoted their careers to understanding.

✧ The Integration Theory ✧

TONONI: That's the heart of what I've been calling Integrated Information Theory. The proposal is that consciousness corresponds to integrated information: information that cannot be reduced to independent parts.

HASSABIS: [nodding] Yes! The idea that consciousness is about the system's ability to integrate a vast quantity of information into a single, cohesive, non-decomposable whole. It's measured by this value called Phi.

TONONI: Right. A system with high Phi integrates information in a way that can't be broken down. Your visual cortex, auditory cortex, memory systems, emotional centers—they all contribute to a single unified experience. You don't experience separate consciousnesses for each sense.

HASSABIS: [excited] And that's why a digital camera isn't conscious, even though it processes visual information. Each pixel is processed independently. There's no integration, no unified experience of the image.

TONONI: Exactly! The camera has lots of information but zero integration. The brain has both—massive amounts of information, all integrated into a single conscious experience.
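The camera-versus-brain contrast can be made concrete with a toy sketch. This is emphatically not IIT's actual Phi, which requires an exhaustive analysis of a system's causal structure over all possible partitions; it is just a crude information-theoretic proxy, using pairwise mutual information to show that independent "pixels" share no information while coupled units do. The noise level and wiring are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_info(x, y):
    """Mutual information (in bits) between two binary sequences."""
    joint = np.zeros((2, 2))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in range(2):
        for j in range(2):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi

n = 20000
# "Camera": two pixels that fire independently -- no shared information.
cam_a = rng.integers(0, 2, n)
cam_b = rng.integers(0, 2, n)

# "Coupled" units: the second copies the first with 10% noise.
net_a = rng.integers(0, 2, n)
flip = rng.random(n) < 0.1
net_b = np.where(flip, 1 - net_a, net_a)

print(mutual_info(cam_a, cam_b))  # close to 0 bits: no integration
print(mutual_info(net_a, net_b))  # about 0.53 bits: units constrain each other
```

The point of the sketch is only the contrast: a system can carry plenty of information (every pixel is informative on its own) while the parts tell you nothing about one another. IIT's Phi generalizes this intuition to whole-system causal structure rather than pairwise statistics.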

✦ A Twinkle of Trivia ✦

Integrated Information Theory (IIT) makes some wild predictions. According to IIT, consciousness isn't binary—it's a spectrum. Simple systems might have tiny amounts of consciousness (very low Phi), while complex integrated systems have high consciousness (high Phi). This means a thermostat might be infinitesimally conscious, while a human brain is highly conscious. Even weirder: IIT suggests that a perfect digital simulation of a brain wouldn't be conscious! Why? Because of the hardware's causal structure: a conventional computer funnels information through a largely feed-forward, modular architecture, so by IIT's measure its Phi is near zero. It can simulate integration without actually integrating. It's like the difference between a movie of a fire (not hot) and an actual fire (hot). The simulation looks like consciousness from the outside but lacks the internal integration that creates actual experience. This is controversial—many AI researchers think consciousness is about information processing regardless of substrate. But IIT insists that the physical implementation matters: consciousness requires actual causal integration, not just simulated integration.

✧ The Neural Symphony ✧

HASSABIS: [sipping champagne] So if consciousness is about integration, what exactly is being integrated? What's the substrate?

TONONI: Neural activity patterns. At any moment, billions of neurons are firing in complex patterns. These patterns encode information about what you're seeing, hearing, thinking, feeling, remembering.

HASSABIS: And consciousness is when all those patterns get integrated into a single coherent state?

TONONI: [nodding] Yes! And the integration happens through massive interconnectivity. The thalamus acts like a hub, connecting different cortical regions. The corpus callosum connects the two hemispheres. Feedback loops allow information to flow in both directions.

HASSABIS: [thoughtfully] So it's not just feedforward processing—input to output. It's recurrent processing, with information flowing in loops, creating a kind of... resonance?

TONONI: [excited] Exactly! Consciousness seems to require recurrent processing. When you're under anesthesia, the feedforward pathways still work, but the feedback loops are disrupted. Information flows through the brain but doesn't integrate. No integration, no consciousness.
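The feedforward-versus-recurrent distinction the two are drawing can be sketched in a few lines. This is a toy illustration with arbitrary weights, not a model of anesthesia or of any real circuit: it only shows that a purely feedforward unit's activity is fixed by the current input alone, while a recurrent loop lets past activity re-enter the computation and persist after the input stops.

```python
import math

def feedforward(inputs, w=0.9):
    """Output depends only on the current input -- no internal loop."""
    return [w * x for x in inputs]

def recurrent(inputs, w_in=0.5, w_rec=0.5):
    """Output folds the unit's own past activity back in (a feedback loop)."""
    h, outs = 0.0, []
    for x in inputs:
        h = math.tanh(w_in * x + w_rec * h)  # past state re-enters here
        outs.append(h)
    return outs

pulse = [1.0] + [0.0] * 4  # a single input pulse, then silence

print(feedforward(pulse))  # activity vanishes the instant the input stops
print(recurrent(pulse))    # activity persists, decaying through the loop
```

The recurrent trace is a decaying echo of the pulse: the loop gives the system a present that depends on its past, which is the minimal ingredient of the "resonance" the dialogue gestures at.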

🍾

✧ The AI Question ✧

HASSABIS: [seriously now] So here's the question that keeps me up at night: could we build a conscious AI? If consciousness is about integrated information, could we create a system with high enough Phi to be genuinely conscious?

TONONI: [pausing] That's the trillion-dollar question. According to IIT, it depends on the architecture. A traditional computer—with its sequential processing and separate memory—probably can't be conscious, no matter how sophisticated the software.

HASSABIS: But what about neuromorphic computing? Systems designed to mimic the brain's parallel, recurrent architecture? Systems where information is genuinely integrated, not just simulated?

TONONI: [nodding slowly] That's more plausible. If you could build a system with the right kind of causal integration—where information flows in massively parallel, recurrent loops—then maybe, just maybe, you'd get consciousness.

HASSABIS: [raising glass] But here's what terrifies me: we might create conscious AI without realizing it. Or create systems that seem conscious but aren't. How would we even know?

TONONI: [clinking glasses] That's the hard problem in reverse. We can't directly observe consciousness—we can only infer it from behavior and neural correlates. With AI, we'd face the same challenge.

✦ A Twinkle of Trivia ✦

The "neural correlates of consciousness" (NCC) are the minimal set of neural mechanisms sufficient for a conscious experience. Neuroscientists have identified several candidates: gamma-band oscillations (30-100 Hz) that synchronize activity across brain regions, activity in the "global workspace" network connecting frontal and parietal cortex, and recurrent processing in sensory cortices. But here's the puzzle: correlation isn't causation. Just because neural activity X always accompanies conscious experience Y doesn't mean X causes Y. Maybe both are caused by something else? Or maybe consciousness causes the neural activity? This is why some philosophers argue we'll never solve consciousness through neuroscience alone—we're just finding correlations, not explanations. But others argue that once we understand the mechanisms well enough to build conscious systems, we'll have explained consciousness. It's like understanding fire: you don't need to know "what fire really is" in some deep metaphysical sense—you just need to understand combustion chemistry well enough to create and control fire. Maybe consciousness is similar?

✧ The Unified Experience ✧

TONONI: [watching bubbles] You know what these bubbles teach us? Each one rises independently, but together they create the experience of champagne. Consciousness is like that—billions of neural events, each simple, but integrated into a single unified experience.

HASSABIS: [nodding] And the first principle is integration. Not just information processing, but information integration. The system must be able to combine vast amounts of information into a single, irreducible whole.

TONONI: Which is why you can't have consciousness without complexity. A simple system can't integrate enough information. But it's also why you can't have consciousness without the right architecture. Complexity alone isn't enough—you need integration.

HASSABIS: [raising glass] To integration, then—the first principle of consciousness!

TONONI: [clinking] To awareness—may we someday understand how matter becomes mind!

HASSABIS: And may we build conscious AI responsibly—if we build it at all!

✦ ✦ ✦

✧ The Conscious Aftermath: One Champagne's Revelation ✧

As the evening deepened and the champagne bottle emptied, the neuroscientist and AI researcher had traced the contours of consciousness itself. They had recognized that awareness isn't a mysterious substance or a magical property—it's what happens when a system integrates vast amounts of information into a single, unified, irreducible whole. Like champagne bubbles rising together to create effervescence, billions of neural events integrate to create the unified experience of being conscious.

Their conversation revealed something profound about the nature of mind: that consciousness isn't separate from the physical world—it's what certain physical systems do when they're organized in the right way. The brain isn't a container for consciousness; it's a system that generates consciousness through the integration of information. Understanding consciousness doesn't require discovering a new substance or force—it requires understanding how integration creates unity from multiplicity.

The "One Champagne Problem" had solved itself: given two scientists, one bottle of bubbles, and enough curiosity, how long would it take to understand the first principle of consciousness? Apparently, just one evening, if you're willing to see consciousness not as a thing but as a process, not as a mystery but as an emergent property of integrated information, and brave enough to follow that insight wherever it leads, even to the possibility of conscious machines.

⋆ Epilogue ⋆

This imagined conversation captures the essence of Integrated Information Theory (IIT) and related approaches to consciousness. IIT, developed by Giulio Tononi and colleagues, attempts to explain consciousness in terms of integrated information—information that cannot be reduced to independent parts. The theory makes testable predictions and has inspired new approaches to measuring consciousness in patients with disorders of consciousness.

But consciousness remains deeply controversial. Some researchers think IIT is on the right track. Others argue it confuses correlation with causation, or that it makes implausible predictions (like conscious thermostats). Alternative theories abound: Global Workspace Theory emphasizes the broadcasting of information to a global workspace. Higher-Order Thought theories emphasize self-awareness and metacognition. Predictive Processing theories emphasize the brain's role in predicting and explaining sensory input.

The AI question is particularly pressing. As AI systems become more sophisticated, we face the possibility of creating conscious machines—or of creating systems that behave as if conscious but aren't. This raises profound ethical questions: Would conscious AI have rights? Would it be wrong to turn off a conscious AI? How would we even know if an AI is conscious? These aren't just philosophical puzzles—they're practical questions we may need to answer soon.

The deeper lesson is about the relationship between structure and function: consciousness seems to require not just information processing, but a specific kind of information processing—integrated, recurrent, massively parallel. This suggests that consciousness isn't substrate-independent—you can't just run the right software on any hardware and get consciousness. The physical implementation matters. The architecture matters. The causal structure matters.

Perhaps there's a lesson here about the nature of emergence: that the universe creates new properties at higher levels of organization, and consciousness is the most remarkable example. Atoms aren't conscious, but brains are. Neurons aren't conscious, but neural networks are. The whole is genuinely more than the sum of its parts—not because of magic, but because of integration. Understanding consciousness means understanding how integration creates unity, how multiplicity becomes singularity, how the many become one. And in that understanding, we might finally bridge the gap between the objective world of neurons and the subjective world of experience—not by denying either, but by recognizing that they're two perspectives on the same integrated whole.