
Moving Toward Understanding Consciousness

Noë argues that current neuroscience research, with its reductionist approach, remains far from understanding consciousness.

1. Introduction

Alva Noë is currently a professor of philosophy at the University of California, Berkeley. His research centers on cognitive science, the origins of analytic philosophy, phenomenology, philosophy of mind, theory of art, theory of perception, and Wittgenstein.

2. Ideas and Comments

In science, reductionism holds that one way to discover the mechanism underlying a complex phenomenon is to reduce it to simpler components. This methodology forms the basis of many well-established research fields, such as classical mechanics and chemistry. In the context of cognitive science, for instance, the hope is that we can finally unravel the mystery of cognition by studying the biological mechanisms of neurons. In engineering, state-of-the-art deep learning is likewise inspired by discoveries from neuroscience.

In this commentary, however, Noë argues that current neuroscience research, with its reductionist approach, is far from understanding consciousness. In my understanding, reductionism is also far from sufficient for building a machine with genuine artificial consciousness.

Holding an enactive view of the origin of consciousness, he also comments specifically on visual consciousness, which has been his active research topic for decades. According to his book [1], visual consciousness arises from interactions with our surroundings, rather than being something that simply happens inside our brains as a unidirectional visual stream. Because the latter view does not explain how subjective visual experience is rooted in objective electrical and chemical signals, we must consider the different parts of the human brain and body as a whole, for it is together that they allow people to have visual experience. Likewise, state-of-the-art deep learning methods are not enough to solve the mystery of visual consciousness, let alone to develop artificial consciousness.


The theory of brain modularity, which claims that different parts of the brain are responsible for separate tasks, has been widely accepted. For example, the visual system handles visual perception, and the motor areas handle action. This idea is often promoted unintentionally in everyday life, when people speak of the “area of the brain responsible for seeing.” This phrasing assumes that the area in question does nothing more than seeing. However, in the Hubel and Wiesel example that Noë mentions, one of their follow-up findings was that the dorsal stream of the “two visual streams” actually transforms visual inputs for the coordination of action. Conversely, the visual system (like the other senses) cannot be independent, because it also depends on bodily movement and its guidance [2].


However, when we look back at current AI applications such as self-driving cars, Facebook recommendations, or image recognition, they follow a simple input-output mapping, in which the algorithm learns to find the best mapping function between the input and the optimal output. We have deep Convolutional Neural Networks (CNNs) in which classification and perception are trained as parts of the same network. With enough training, these networks can link a (quasi-)symbolic representation (object classes) to the image pixels perceived from the camera. Yet if we build an artificial agent (e.g. a robot) with a CNN, the symbolic representation is never given any meaning for the agent itself. What do the words “tomato” or “baseball” mean? The two objects have similar shapes, but what can the agent do with them? The CNN is also a single-module network that deals only with visual information. If we want to build visual consciousness of these two objects, Noë’s point is that the symbolic representation only makes sense when the artificial agent understands the real-world context in which the visual objects are situated.
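The input-output mapping idea can be made concrete with a deliberately tiny sketch (plain Python, hand-picked toy features, not a real CNN): a classifier learns to attach a symbolic label to pixel statistics, but the label is just a string.

```python
# A minimal sketch of classification as input -> output mapping.
# The "images" are reduced to two hypothetical statistics,
# (redness, seam_texture); the "training" is just computing class centroids.

def centroid(samples):
    """Mean feature vector of a list of (redness, seam_texture) pairs."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

def train(dataset):
    """Learn one centroid per class label."""
    return {label: centroid(samples) for label, samples in dataset.items()}

def classify(model, features):
    """Map input features to the label of the nearest class centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Toy training data: red and smooth vs. pale and seamed.
dataset = {
    "tomato":   [(0.9, 0.1), (0.8, 0.2)],
    "baseball": [(0.2, 0.9), (0.1, 0.8)],
}
model = train(dataset)
print(classify(model, (0.85, 0.15)))  # prints "tomato"
```

The point of the sketch is that “tomato” here is nothing but a string the mapping returns; nothing in the model tells the agent what a tomato affords.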

More recently, we have AlphaGo, which has defeated world champions despite having no physical body. Although reinforcement learning itself is a promising method for exploring the outside world through physical interaction, the deep reinforcement learning inside AlphaGo has unfortunately been used only to explore optimal moves in Go. Therefore, in Noë’s view (and the author’s), it is nothing more than a desktop calculator. Reinforcement learning is a good tool for letting us explore the world through our behaviors and thereby develop perceptual consciousness. AlphaGo itself is just a very good test bed for advanced reinforcement learning algorithms, but without a body to explore the world through perception and action, it can never understand what each move of Go means.
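To make the “optimal moves only” point concrete, here is a minimal tabular Q-learning sketch on a hypothetical toy task (a five-cell board with a reward at one end), not AlphaGo’s actual algorithm: the agent learns a table of move values, but those numbers carry no meaning beyond their score.

```python
# Tabular Q-learning on a 1-D board: positions 0..4, reward at cell 4.
import random

N_STATES = 5               # positions 0..4; reaching 4 ends the episode
ACTIONS = (-1, +1)         # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            a = random.choice(ACTIONS)             # purely exploratory behavior
            s2 = min(max(s + a, 0), N_STATES - 1)  # clamp to the board
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Off-policy update: learn the value of acting greedily afterwards.
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy policy read off the table: the best move at each non-goal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # prints [1, 1, 1, 1]: always step right
```

The learned table correctly ranks “step right” above “step left” everywhere, yet the agent has no notion of what a board, a cell, or a step is; it has only optimized a number, which is exactly the commentary’s complaint about AlphaGo.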

Based on Noë’s sensorimotor contingency theory, we should understand our brain as a whole, not as separate entities. For example, perception and action have law-like regularities and cannot be separated. The perceptual world changes under the influence of our motor actions, and these changes help us really see the kinds of objects we perceive. In short, we act in order to give ourselves better perceptions, which improves our understanding of the world, thus increasing our capacity to act. By building up these sensorimotor skills, we acquire the intelligence to understand the objects and laws of the world. However, when we look at state-of-the-art AIs, we find they still lack this quality.
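As a toy illustration of this action-perception coupling (hypothetical classes and values, not any published model), consider two objects that passive vision cannot tell apart; only acting on them changes what is perceived:

```python
# A sketch of a sensorimotor contingency: the percept depends on the action.

class WorldObject:
    def __init__(self, name, softness):
        self.name = name          # ground truth, hidden from the agent
        self.softness = softness  # revealed only through action

    def look(self):
        # Passive perception: both objects project the same appearance.
        return {"shape": "round", "size": "palm-sized"}

    def squeeze(self):
        # Acting on the object changes the percept: a soft object deforms.
        return {"deforms": self.softness > 0.5}

def identify(obj):
    """Use an action, not appearance alone, to tell the objects apart."""
    if obj.look()["shape"] != "round":
        return "unknown"
    return "tomato" if obj.squeeze()["deforms"] else "baseball"

tomato = WorldObject("tomato", softness=0.9)
baseball = WorldObject("baseball", softness=0.1)
assert tomato.look() == baseball.look()   # vision alone cannot distinguish them
print(identify(tomato), identify(baseball))  # prints "tomato baseball"
```

The agent’s knowledge of the two objects lives in the regularity “if I squeeze and it deforms, it is the soft one,” that is, in a law-like link between an action and its perceptual consequence, rather than in a static visual snapshot.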

Therefore, perhaps we can build a robot that tries to catch a flying baseball and tests the softness of a tomato, in order to really understand (or to have visual consciousness of) the difference between a tomato and a baseball.

3. Original Article by Alva Noë

If you stop and think about it, the idea that you could understand a complex system through a detailed description of one of its parts sounds crazy on its face.

You are unlikely to gain much insight into the organizing principles of flocking behavior in birds by confining your attention to what’s going on with an individual bird. And you are unlikely to figure out how birds fly by studying the properties of a feather.

The second example is due to vision science pioneer David Marr, and was advanced by him in the context of his rejection of neural reductionism in the theory of vision. To understand how we see, he believed, you need to think about what an animal does when it sees. What is the task of vision? What is vision for? Only then, given a description of the phenomenon couched at the level of the animal and its needs and interests, can we intelligently ask: How might we (or how might nature) build an animal or a machine capable of performing or implementing this function? And only then would we be in a position to ask, of individual brain cells, what sort of contribution do they make, or do they fail to make, to the achievements of the whole.

It is amusing that Marr’s book was published just as David Hubel and Torsten Wiesel won the Nobel Prize for their work on information processing in the mammalian visual system. Their achievement — building on the work of generations of scientists — was in discovering receptive fields of cells in cats and monkeys. In lay terms, they found that different cells were tuned to be more responsive to one kind of stimulus rather than another (lines, bars, motion). They neither asked nor answered the looming question: How do circuits of individual neurons manage to give rise to conscious visual experience? The answer to that question — that some version of the reductionist story can be made out — was probably taken for granted not only by Hubel and Wiesel, but also by those who judged their work to be worthy of the highest prize in science.

However, we still don’t understand how visual consciousness exists in the brain. And we can return to Marr’s book for an understanding of why this might be. You just can’t read off the achievements of the whole — not the brain nor the whole organism — from facts about what’s going on with individual cells. A lot of conceptual spade work needs to be done before facts about receptive fields can contribute to the understanding or explanation of anything at all.

Others and I have been making this argument for some time, with little discernible influence on the general hype: the Year of the Brain, the Decade of the Brain, the Connectome, the Brain project, etc. So it is an event of considerable note — maybe one of genuine historical importance — that a group of top neuroscientists from around the world have recently come together to write an opinion piece in the journal Neuron calling on neuroscience to “correct its reductionist bias” and embrace a “more pluralistic neuroscience.”

Hubel and Wiesel, remember, won their prize for single-cell electrode studies on the brains of cats and monkeys. The only other tool for studying the brain then available was post-mortem autopsy. Since then, there has been an explosion of techniques and technologies for studying the brains of living humans and other animals: fMRI, PET and other imaging tools, along with even newer techniques of genetic manipulation and optogenetic circuit control. Combine these new technologies with big data and increased computing power and you have, in some ways, a perfect storm — or rather, a recipe for a theoretical or explanatory dead end. More information is not the same as more knowledge, and data untrammeled by understanding, by sound theory and by the big picture is just noise. Actually, it’s worse than noise. It’s noise masquerading as insight.

Everyone knows you can’t find consciousness in the individual cell. But we now have tools for modeling temporally and spatially distributed ensembles of cells. Surely there, in those larger groupings, we shall find the key to the mind! Until we know what questions to ask, we’re unlikely to find anything. (Or rather, we’ll find something but have no clue what it is, just as Columbus landed in America but thought he’d made it to India. I give this example because Hubel compared his research with Wiesel to Columbus’s explorations in his 1981 Nobel Prize lecture.)

John Krakauer et al., who wrote the piece in Neuron, are not pessimists — no more than Marr was. To move forward and understand the human mind, or the minds of nonhuman animals, they propose, we need to look outside the brain at the animal’s behavior. That is, at how animals live, what they do, what problems they face, and the circumstances in which they thrive. There’s more to biology than molecular biology, and there’s more to cognition and consciousness than neural activity. We won’t understand how the brain enables mind until we think more carefully about behavior.

Philosophy is not — and has never been — the cognitive property of philosophers. Science needs philosophy, both in the sense that scientists ought to pay attention to what philosophers are doing, and more importantly in the sense that scientists, at least sometimes or in moments of crisis, need to do philosophy themselves. They need to question their presuppositions and do the hard conceptual spade work to set themselves on more reliable foundations. I applaud these scientists for their appreciation of the value of their science, and of the need to frame and better contextualize their own research methods.

Science has never been just about information or data. Science aims for understanding and knowledge. By calling for a rejection of simple-minded reductionism, and by encouraging brain scientists to think about the conceptual puzzle of understanding the relationship between the life of an organism and what is going on around it as well as inside it, these neuroscientists are taking important strides towards setting up an adequate neuroscience of cognition and consciousness.

 

Reference: 
[1] Noë, Alva. Action in Perception. MIT Press, 2004.

[2] O’Regan, J. Kevin, and Alva Noë. “A sensorimotor account of vision and visual consciousness.” Behavioral and Brain Sciences 24.5 (2001): 939–973.

 


Author: Junpei Zhong | Editor: Ian | Localized by Synced Global Team: Xiang Chen
