Michael Gazzaniga on the Science of Mind Constraining Matter

Michael Gazzaniga. Can neuroscience explain it? You know – consciousness, being, the number 42. And if everything you thought you were turned out to be nothing more than an easily deceived heap of neurons, would that trouble “you”?

During October 2009, Michael Gazzaniga gave a fascinating series of Gifford lectures exploring how our brains process the information that gives us our sense of “I”. Gazzaniga drew extensively from neuropsychological studies of people with “split brains” (explained later) to develop the notion of a single “interpreter” within the brain – a part of the brain that analyses all the data available for meaning.

Michael Gazzaniga then attempted to rationalise the interpreter, concluding that our focus should be on the interactions of people, not the brain itself. This logic was then expanded to wider society – social structure, interaction, and law. Those later thoughts raised many more questions than were answered.

This article attempts to summarise the key themes in a non-technical manner, with a few naive attempts to interrogate the theories developed. This is my interpretation of 6 hours of lectures. Interpretation, because I tend to recreate Gazzaniga’s conclusions by re-analysing the information presented. With a complex topic such as this, it is likely that some of my interpretations will differ from his. Sections titled “Interlude” are entirely my analysis.

(I delayed publishing in the hope of learning slightly more – but the more I learn, the less I understand…)

Evolution of the Brain

Is the brain hardwired to do certain things, or does a person’s activity form the brain? Blank slate or structured? Natural selection vs instruction?

It’s an old debate. And the answer turns out to be “a bit of both”. Specifically: The early stages of the process tend to be hardwired, while the final stages tend to evolve. For example, the path taken by neurons (the cells that transmit information within the nervous system, including the brain) is consistent at the start, and more variable towards the end.

Some functions of the human brain come “built in”. Renée Baillargeon showed that babies are born with an understanding of basic physics. Other functions evolve. For example, take a simple verb generation test (association of words): When you don’t know what you are doing, your brain activity increases. As you become more familiar with the same task, brain activity decreases – you’ve learnt how to solve that task, so no longer need to “think about it” as intensely.

Within non-human brains such evolution is much more constrained. Peter Marler showed that specific species of sparrow could only learn songs made up of the syllables found in their own species’ song. Chimpanzees are born with their brains almost fully formed, in contrast to human brains, which take several years to develop after birth (for example, far more von Economo neurons are found at age 4 than at birth). So while animals like chimpanzees or parrots can do some fairly impressive things, their adaptations are far less impressive than humans’: The adaptations of non-humans are restricted to a single goal.

The human brain is in a “cognitive niche” – with far greater plasticity than other animals. Neurologically, why is the human brain different from other animals?

Human brains contain more neurons than other animals’ – there’s simply more going on. But what’s critical is the amount of connectivity within the brain: There is much more communication between different areas of the brain in humans than in other animals. This idea of transferring information between areas within the brain is key to understanding subsequent parts of Gazzaniga’s argument. It also explains why a smaller brain is an evolutionary advantage (our brains have been getting smaller, not larger): Information sent between areas of the brain has a shorter distance to travel in a smaller brain.

Why has neuroscience made so many discoveries about humans so recently? Historically it has been ethically far easier for scientists to experiment on non-human brains. However, over the last 20 years new brain scanning technologies (such as Magnetic Resonance Imaging) have allowed science to probe the human brain without the risk of physically damaging it.

Split Hemispheres

The human brain is divided into 2 regions called “cerebral hemispheres”. Right and left. Each side of the brain tends to control the opposite side of the body. Each side of the brain receives visual information from the opposite half of the visual field. Certain functions are only found in one side of the brain – for example, the left side of the brain is formulating these words, which the right side cannot do.

In a normal brain, these 2 sides are connected by the “corpus callosum”, which allows the 2 sides of the brain to communicate effectively. Some people have had their corpus callosum cut, normally to treat severe epilepsy. The hemispheres of these “split brains” cannot communicate effectively with one another, but in spite of this, these individuals continue to live normal lives. In contrast, people with parts physically missing from their brains (lesioned brains) tend to lose certain functions.

People with split brains have 2 completely different systems in their head. Gazzaniga devised various tests of split-brain patients to demonstrate the behaviour of each side: Certain sides of the brain can only do certain things (like write this), so by restricting a visual input to one half of the visual field, or requiring a task to be completed using a specific hand, we can see the result of one side of the brain acting alone.

These people don’t develop 2 personalities. Indeed, recent patients don’t even comment on obvious deficiencies, such as a lack of vision from one half of their visual field. Their apparent normality implies that their brains are naturally able to cope with many minds. Brains are not organised hierarchically. Instead they are parallel and distributed systems – “distributed networks of mind”.

The Interpreter

A person with a split brain can react correctly in 2 different ways to 2 different stimuli. For example, take a simple picture-object association test. A person is shown an image, and asked to select the object associated with that image. In one visual field they are shown the image of a chicken, and in the other, a snowy scene. Each hand makes a separate association: The chicken linked to a chicken leg, while the snow was linked to a shovel. Correct.

However, while the person could easily explain the first association (the chicken), they could not explain the second (the shovel). They eventually rationalised the selection of the shovel as being required to “shovel out the chicken excrement”. Or something.

The reason is that the side of the brain rationalising the action only saw the chicken, and not the snowy scene. At first, that side of the brain had no idea why the other hand selected a shovel. However, it quickly gathered together all the information it had available, and provided the most plausible reason it could.

The person isn’t actively telling lies. Rather, they are creating a theory to justify what happened based on what they know.

This is what Gazzaniga termed “post-hoc rationalization” – building a theory after the fact. This is “consciousness”. And it’s taking place in a specific part of the left hemisphere of the brain called “the interpreter” – the part of the brain that pulls all the information together to provide meaning.

The interpreter is different to the rest of the brain because of the time delay that occurs when it is used. The very act of monitoring a process (thinking about what you are doing) delays that process. The effect is familiar to sportspeople, who perform best when not actively analysing their actions.

Many other examples were given to support the notion of an interpreter function within the left side of the brain:

  • The split-brain person whose right side watches a scary movie, yet whose left side only knows that they “feel scared” – not why.
  • The chess Grand Master who displays near-perfect recall of a strategically sensible pattern of chess pieces, while having almost no recall of a random pattern: They struggle to explain this discrepancy because the patterns are on the right side of their brain, and the explanation is on the left.
  • How humans remember a series of photographs as a pattern, and can easily be fooled into accepting extra images that fit that pattern, but were not part of the original series.

Perhaps the most alarming thought was the ease with which the brain can be “hijacked”: From highly immersive virtual reality environments, to “mind altering” drugs. It’s a wonder we have any belief in ourselves at all. But the system itself is designed to make us “think we’re in charge”.

Interlude: Interpreting the Interpreter

A problem common to such theories is the “Homunculus argument”: The need to prove there isn’t simply a very small person within each person who acts as (in this case) the interpreter. Michael Gazzaniga’s defence against the Homunculus problem seems to rest on the multiple control systems inherent in the brain: These distributed networks of the mind are so inherently complex and fragmented that there is no one point in the brain that could control it all.

One could argue that the interpreter is itself a homunculus (the little person inside). I suppose that the interpreter’s unique relationship to all the other information in the brain means there is simply no point replicating it: The interpreter is the information filter, so anything deeper within that interpreter would have the same set of information.

Do other animals have an interpreter – and therefore, a consciousness? After initially describing the interpreter as “uniquely human”, the answer became less assured, with the acknowledgement that “we don’t know everything”. The core argument is that the interpreter processes the meaning of patterns, and only humans see patterns. For example, a rat will assess a “random” (but secretly biased) draw based on the probabilities it observes. So if the draw is biased, the rat will bias its responses to take advantage. In contrast, humans look for the logical pattern, and so fail to see the bias. Similarly, chimpanzees can think about what they see, but only as it appears – they cannot link different pieces of information together.

That discussion provides some further insight into the meaning of the “consciousness” generated by the interpreter. These points are solely my interpretations:

  • Consciousness is a rationalisation of patterns, not specifically what one sees, feels, or similar – although each of those stimuli contribute to the pattern. While the process is capable of resolving conflicting information, consciousness is not absolute truth: It’s inherently a “best guess”, and may be flawed or completely wrong. Consciousness would seem to closely parallel strongly held belief. Is there a difference?
  • Consciousness is constrained by what we can understand, not necessarily what actually is. An intriguing hypothesis: Over human evolution the growing complexity of our brains parallels the rise of ever-more complex explanations – from deterministic religion to the multiplicity of particle physics. A whole book could be written in this bullet point…
  • Consciousness is a problem-solving exercise, operating quite distinctly from the basic automated tasks that keep our bodies functioning. But why does the interpreter take longer to react? Less important? Computationally more challenging? Or perhaps intended to dissuade us from using it to try and solve infinitely complex dilemmas. Like this one.
  • Consciousness is somewhat distinct from biological “life”. Theoretically, the interpreter could be damaged, while the automated functions were unharmed, leaving someone “alive” but with no consciousness. Or the opposite. Should we redefine “dead” relative to the state of the interpreter, not relative to a pulse?
  • Consciousness resides within a physical organ, so when the brain dies, so does consciousness. That raises the possibility of medically “re-animating” people whose death has been caused by the failure of an organ other than the brain, extending “life” until their brain dies. Defining death at the interpreter becomes critical: There’s a remarkably fine line between waking up from sleep and necromancy. Yet Aunt Mavis’ brain-in-a-jar is still potentially useful in an information-based economy (assuming it can be connected to the rest of the world electronically). There’s an entirely rational dystopia that disembodies non-child-bearing humans to allow their minds to create wealth while minimising their use of physical resources.

Gazzaniga pinpoints a place within the body where “the mind” is. In several of the previous points I have been able to separate mind from the rest of the body, and suggest that the mind can exist separately from it. At first that may seem to develop a variation on conventional dualism (as famously formulated by René Descartes), which also separates the functions of mind and body, but locates the mind outside the body.

But consider this: The interpreter is (accepting Gazzaniga’s Homunculus defence) the ultimate arbiter of how the brain gains meaning from the information it has, yet we don’t fully understand what information the brain has: Potentially almost anything could influence our consciousness. In spite of attempting to demonstrate that functions such as consciousness exist within a physical part of the brain, the role of the interpreter does not specifically disprove external influences on the individual body: Separation of mind and body, gods, telepathy, spirits, or whatever else you may wish to believe.

All those things are merely being interpreted. Just like the woman who knows she is afraid, but does not know why (because only the other side of her split brain saw the scary movie), we could be subject to stimuli that we aren’t aware are influencing us. It might even be argued that the interpreter is just an elaborate form of relativism – emphasising the perspective or experience of the individual.

It does not help that the definition of “consciousness” isn’t. Isn’t agreed upon. Possibly isn’t definable. For example, mirror tests (whether something recognises itself in the mirror) are also used to demonstrate consciousness, and come up with a radically different set of conclusions: That human consciousness develops after several years of life, and that chimpanzees have consciousness for the best part of their lives.

I could go on, but the criticism may be unfair:

I don’t think Michael Gazzaniga was ever trying to answer the question. Rather he was trying to indicate where he thinks the answer might be lurking: Instead of studying how a specific neuron appears to cause a certain mental state, we should start the examination with the mental state. As the next sections explain, study downward causation, rather than upward. Consider interactions between people.

Free from What?

What’s the meaning of being free? Free will. The notion that you are in command, and therefore responsible for your actions.

Gazzaniga’s problem isn’t simply that the parallel distributed brain makes “you” more complex than your interpreter would have you believe. He’s challenging the deeply held (if irrational) view that cells in the human body can be automated processes, while the brain is somehow not automated. Why should the brain be different?

First consider “the bleak view”: Reductionism leads to determinism. Too many ‘isms. Reductionism is the idea that a complex thing can be explained by the sum of its parts. It is common to most classic scientific theory: Observe an apple falling to the ground, and then use it as part of an explanation for the rest of the universe. Determinism is the notion that future events are fixed by a natural law. The combination is a direct assault on free will: What’s the point of being in control if the outcome is already decided by a set of scientific laws?

Deterministic approaches impact on personal responsibility. For example, Kathleen Vohs demonstrated that a deterministic belief increases cheating.

Michael Gazzaniga developed the notion that the purpose of the brain is to make decisions by showing how activity moves between different areas of the brain over time. Specifically he discussed the way the interpreter seems to push the process back in time: Chronologically, by the time you’ve had a thought, it’s already happened. So to make the events “seem real”, the brain tweaks the timeline slightly.

He concluded the bleak view with a “causal claim chain”: The brain enables the mind. The brain is physical, thus determined. So thoughts are determined. Hence free will is an illusion.


Emergence means that the parts of a complex system collectively do different things from what they do individually. So understanding the components in isolation does not explain how the whole system behaves. And if that reads like gobbledygook, Yaneer Bar-Yam offers a simple introduction.

Much recent scientific thought has replaced reductionist theory with emergence. For example, in physics, contrast classic Newtonian mechanics (reductionism) with quantum mechanics (emergence). Emergence is resisted in neuroscience in spite of the evidence for it:

Consider a lobster. The lobster’s gut contracts via its nervous system. Even in this relatively simple creature, there are millions of possible combinations of neuron communication. And 100,000-200,000 of those paths communicate the same result. The system is “internet-like”: Multiple pathways may be used by neurons to communicate the same information. There is no fixed path. Communication within the brain cannot be studied with a “single electrode” that monitors one pathway: Analysing the path of just one neuron ignores most of the communication.
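Gazzaniga’s “internet-like” redundancy can be illustrated with a toy network (the graph and node names below are invented for illustration, not taken from the lobster data): many distinct routes can carry the same signal between the same two points, so monitoring any single pathway misses most of the communication.

```python
# Toy illustration of "internet-like" redundancy: many distinct pathways
# can carry the same signal between two points in a nervous system.
# The network below is invented purely for illustration.

def all_paths(graph, start, end, path=None):
    """Enumerate every simple path from start to end by depth-first search."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid revisiting a node
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

# A small layered network: a sensor neuron, interneurons, a gut muscle.
network = {
    "sensor": ["a", "b", "c"],
    "a": ["d", "e"],
    "b": ["d", "e", "f"],
    "c": ["e", "f"],
    "d": ["muscle"],
    "e": ["muscle"],
    "f": ["muscle"],
}

paths = all_paths(network, "sensor", "muscle")
print(len(paths), "distinct routes deliver the same 'contract' signal")
```

Even in this tiny sketch there are 7 routes; a single electrode on one interneuron would observe only a fraction of them.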

Gazzaniga restated Donald MacKay‘s notion that we do not control brains, we control behaviour. And Robert Laughlin’s comment that our aim is to understand how nature organises itself – so understanding how the brain works won’t, alone, answer the question. Cars are automated, but by understanding cars, you cannot understand traffic. The conclusion: People are free – so we should focus on the interactions of people, not the brain.


In a simple test, a patient is asked to talk about something they like, then something they dislike. After talking about each topic for a short time, something is sparked within their brain that causes them to comment on the smell. In the first case, they smell roses. In the second, they smell rotten eggs. What they smell is based on their wider mental state at the time of the spark.

That’s an example of how the mental state constrains the brain. What’s less clear is how. What sequence of events causes this?

Particles influencing mental state. The basic sequence of ever-larger physical things (particles, atoms, cells, etc) influencing mental state is called upwards causation. Alternatively, call the physical the “micro” level, and the mental state the “macro” level. Represent the physical as P and the mental state as M. (Actually, in a truly emergent system, each P comprises many possible physical states, and each M many possible mental states. But we’ll keep it simple.) Consider 2 “thoughts”, one at time T1 (P1 giving rise to M1), and one a little later, at time T2 (P2 giving rise to M2). In each case there is an upward causal relationship between P and M for each separate thought.

Michael Gazzaniga seemed to accept a relationship between P1 and P2 (entirely at the micro, physical level), and dismissed any direct relationship between M1 and M2 (entirely at the macro, mental-state level). The interesting unknown is the relationship between M1 and P2.

The theory for a link between M1 and P2 comes from David Krakauer. Consider the evolution of trees in a forest. The growth of a tree at time T1 changes the resources available to new trees growing at the later time, T2: Less light, different nutrients in the soil, etc. So whatever state dominates at time T1 constrains what occurs at the later time T2. This appears to be a form of downward causation.
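Krakauer’s forest example can be sketched as a toy simulation (every number below is invented for illustration): the macro state at T1 – the total canopy – constrains which micro events are possible at T2, namely how much each individual seedling can grow.

```python
# Toy sketch of downward causation in the forest example.
# All numbers are invented: the point is only that a macro property at T1
# (total canopy) constrains the micro events at T2 (individual seedlings).
import random

random.seed(42)

def step(trees, light_total=100.0):
    """One generation: the existing canopy (a macro property) limits how
    much light reaches the forest floor, which constrains new growth."""
    canopy = sum(trees)                       # macro state at T1
    light_left = max(light_total - canopy, 0.0)
    # Micro events at T2: seedlings can only grow into the remaining light.
    seedlings = [random.uniform(0, light_left / 4) for _ in range(4)]
    return trees + [s for s in seedlings if s > 1.0]  # tiny seedlings die

forest = [30.0, 25.0]                         # established trees at T1
forest = step(forest)
print(f"{len(forest)} trees; canopy now {sum(forest):.1f}")
```

No seedling “decides” anything at the macro level; yet the dominant state at T1 has already limited what any seedling can physically do at T2.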

Interlude: Free Yet Determined and Constrained?

Yes, but…

Gazzaniga’s free will and causality discussion is difficult to follow, precisely because it isn’t initially clear what he is trying to argue for. For example, he did not attempt to argue against determinism, in spite of introducing emergence. When asked why not, he replied that he was simply showing “where to look”: Instead of examining the brain, we should examine how groups of people behave.

So a reductionistic/deterministic approach is used to explain events within the brain at a given time, while an emergent approach explains the relationship between sequences of different events. Almost a theory of how automated objects ultimately descend into chaos. A chaos regulated by the (social) interaction between (human) objects.

At least, that’s what my interpreter concluded.

Subsequent lectures expanded on the nature of these social interactions and how they influence public policy and law. Yet Michael Gazzaniga continued to refer to a “social brain” and not a “shared consciousness”.

The distinction is presumably made because there is a reductionist, automated function within the individual’s brain, which at any given time is acting alone. Only the relationship between events is influenced by emergent, collective, forces. It therefore becomes very important to understand the point at which the system starts being emergent: If “consciousness” only starts after the event, then there is no notion of “I”. You are merely the physical thing, in which neurons convey unprocessed information. Much of the evidence presented (from post-hoc rationalisation, to the interpreter altering the timeline) suggests that conscious thought occurs afterwards.

If all consciousness is shared, other possibilities emerge. For example, an extreme form of relativism, where everything is defined by “other people”, up to and including “your” thoughts. (Of course referring to others is unhelpful, since others become you.)

Most interestingly, it opens up some possibilities for the multiple self – the idea that there isn’t just one version of you: You are defined relative to and by other people, so can be defined differently by different groups of people, thus creating multiple realities of you. This only starts to make sense once you acknowledge that individualistic notions (such as you) are not the sole arbiter of your being.

Limiting subsequent discussion to a vaguer sense of shared society has benefits: Sometimes it is easier to win an argument in which one is not also simultaneously trying to demonstrate that the sun doesn’t revolve around the earth, which in turn isn’t supported by turtles.


Humans are inherently social creatures from birth. Work by David and Ann Premack (who Gazzaniga often cites) showed that very young children preferred scenarios where someone was helped with a problem, rather than hindered.

We’ve grown from a relatively small population to densely populate the planet. Living in groups. Evolving by learning new methods of solving problems (the Baldwin effect). Regulated by a “policeman” who limits violence and chaos by making us accountable for our actions.

Humans are always thinking about other people – social issues dominate the brain’s cortex, unlike other animals:

“You have a theory about your dog, but your dog doesn’t have a theory about you.”


How did we become so empathetic?

Mirror neurons fire both when an animal acts itself, and when it sees another animal perform the same act. The animal understands the action, and knows its meaning for itself. We simulate what happens to others in our own brains: Being in pain can be much like seeing pain. Chimps don’t just mirror actions, but imitate them. Children tend to over-imitate.

Consider the classic ethical Trolley Problem: Do you act to save 5 people by sacrificing one? In the first case this is achieved by switching tracks (altering the points). In the second case it is achieved by pushing a large man off a footbridge. The total death toll is the same in each case, yet people are far more reluctant to push than they are to switch.

Based on brain-imaging, the explanation for the difference is emotional. A conclusion supported by the study of damaged (lesioned) brains, with no ability to process emotional information: Such brains quickly resolve the dilemma with a rational (utilitarian), non-emotional response.

Gazzaniga referred to Rebecca Saxe‘s work showing how our brains predict the belief of others. Specifically how different areas of the brain are responsible for explaining/predicting and perceiving/executing. Consider a false belief test (like that described here):

  1. Accidental harm occurs. Someone dies as an unintended consequence. Most people would think this action is “ok”.
  2. Harm is attempted but fails. The intention was to kill, but nobody died. Most people would think this action is “not ok”.

Most people are interpreting behaviour, but a couple of interesting variations emerge:

  • Young children (under age 5) simply cannot believe anyone can hold a false belief. This ability develops – we are not born with it.
  • Split-brain cases base their decision on the outcome. Did someone die?


The rules we live by are shaped by the way we think of ourselves. That thinking, and law, changes over time. And differs between cultures: Americans see the fish in an aquarium. Asians see the whole scene.

A simple model starts with a notion of fairness (in the brain), leading to judgement (behavioural), to moral rules (aggregate). However, at each stage there is a feedback loop back to the previous stage. The system is more complex than it first appears.

What is to blame – the person or the brain? Neuroscience has advanced considerably since notions of “insanity” were introduced into legal defenses, but law has failed to keep up. Yet without blame, there can be no retribution and punishment.

Certain (medical) conditions make certain kinds of behaviour more likely. For example, schizophrenia increases the probability of violent behaviour. However, this is not an absolute switch – schizophrenia does not automatically cause a person to behave violently. Recent advances in brain scanning have introduced further confusion: Variation occurs between people with “the same” condition. We can’t know the precise state of the brain at the time a crime was committed. Nor what other factors might have caused someone to misbehave.

Gazzaniga’s conclusion: That currently, brain-scanning “evidence” is likely to be misused in courts.

Similar scientific advances also raise questions about notions of neutrality: Inherent biases towards one’s own race, because we are poor at distinguishing features in another race; the subtle interplay of pity and pride, disgust and envy, changing how a suspect is regarded.

Michael Gazzaniga’s final comparison was between forgiveness and retribution. Babies understand equal distribution and reciprocity from about 18 months old. Humans are retributive from the beginning.

The question that emerges is whether forgiveness is a viable concept without accountability. The implication: Without accountability, we have only retribution.

Official video of the original lectures can be found on YouTube. Although I didn’t realize it when I started writing, my article Difference and the Same reflects on much of the evidence presented by Gazzaniga, while addressing a slightly different problem.

