Will We Ever Solve the Brain?

As I wrote in my last post, the Janelia workshop I recently attended included a series of debates designed to stimulate discussion of broad topics in neuroscience. These debates were thought-provoking and super fun, especially the part where my side crushed the opposition (kidding). So I wanted to write up a couple of posts about some of the issues we discussed that I found the most interesting. To be clear, these posts include not just my own ideas but also many points raised by my incredibly smart colleagues at the workshop.

One of the statements I was assigned to debate was: “Give me a high-density optrode and a large-field microscope, and I will solve the brain.”

For those of you who aren’t experts, this basically means having the ability to visualize, stimulate, and record the activity of lots of brain cells at a time. We interpreted the statement as having the power to manipulate and/or record the activity of every brain cell simultaneously while an animal is performing any task or behavior that you like. Of course current tools don’t have the resolution to do this, but we figured that a generous interpretation of the statement would make for a more interesting debate.

What does it mean to “solve the brain”?

Before we can debate which tools will enable us to solve the brain, we need to agree on what “solving the brain” actually means. I would argue that solving the brain requires having both a conceptual and mechanistic understanding of everything the brain does—every behavior, cognitive process, internal state, emotion, and so on.

Correlation vs. explanation

In my view, the minimal criteria for “solving the brain” would be 1) being able to predict exactly what an animal is doing or thinking just by observing the neural activity in its brain, and 2) being able to recreate any behavior by stimulating an animal’s brain cells in the right pattern.

This would provide pretty compelling evidence that we really understand which patterns of brain activity cause which behaviors. (And by “behaviors” I don’t just mean running or jumping or mating, but also more abstract things like learning a new skill, recalling a memory, feeling happy or sad, etc.)
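
To make criterion 1 a bit more concrete, here's a minimal sketch of what "predicting what an animal is doing just by observing its neural activity" could look like in practice: a simple decoder trained to classify behavior from a population activity vector. The fake spike counts, the made-up behavior labels, and the choice of a logistic-regression decoder are all just illustrative stand-ins of my own, not a claim about how real decoding is done.

    # Minimal decoding sketch with made-up data: predict behavior from neural activity.
    # Each row of `activity` holds the spike counts of every recorded neuron in one
    # time bin; each entry of `behavior` labels what the animal was doing in that bin.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_bins, n_neurons = 1000, 200
    activity = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)  # fake spike counts
    behavior = rng.integers(0, 3, size=n_bins)  # 0 = run, 1 = groom, 2 = rest (hypothetical labels)

    X_train, X_test, y_train, y_test = train_test_split(
        activity, behavior, test_size=0.2, random_state=0)
    decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Criterion 1 would demand near-perfect accuracy on held-out data; even then,
    # the decoder captures only a correlation, not an explanation.
    print("decoding accuracy:", decoder.score(X_test, y_test))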

Even if we did achieve those two goals, however, I’d argue that that doesn’t constitute a true understanding of the brain but instead just a correlation between neural activity and behavior. Identifying correlations doesn’t mean you understand why something works the way it does. This is never more apparent than in election season, when pollsters can determine things like “Trump voters tend to be white and less educated”, but identifying the reasons behind those correlations isn’t always so simple. (Well, in Trump’s case maybe it is, but you get my point.)

Multiple levels of understanding

To be more specific about what “solving the brain” means, our debate team turned to Marr’s three levels of analysis. In the 1970s David Marr and Tomaso Poggio proposed that solving the brain requires three levels of understanding.

The first level is computational: what problems is the brain trying to solve? In some cases the problem may be relatively straightforward, like recognizing your mom’s face or learning that touching a hot stove is a bad idea. In other cases it’s not so obvious. For example, it’s not immediately clear what specific computations the brain is performing when we’re learning language or navigating in the world. These are the questions often addressed by cognitive neuroscience as well as anyone studying complex aspects of animal behavior.

The second level is algorithmic: what algorithms does the brain employ in order to perform the desired computations? Understanding these algorithms requires studying how the brain represents specific types of information, such as external features of the world or internal states, as well as how these representations are created and transformed through neural processing.

The third level is implementational: what physical processes give rise to those algorithms? These processes include the biophysical properties of neurons and the connections between them, among many other factors.

I find Marr’s three levels of analysis to be a compelling framework for what it means to solve the brain. It emphasizes that we need to understand the brain at multiple levels, and that this understanding should be both mechanistic (what is the brain doing?) and conceptual (why is it doing this?).

Will we ever solve the brain?

Now that we have an idea of what it means to solve the brain, another question arises: will we ever get there?

Can science explain consciousness?

There are a few reasons you might argue that we will never solve the brain. First, you could make an argument along the lines of Thomas Nagel’s: that consciousness is subjective and therefore cannot be explained by objective, scientific means. Nagel argued that we each experience a unique, subjective perception of the world. Because these subjective percepts cannot be objectively observed or quantified by a third party, they are not accessible to scientific methods, and therefore science can never “solve the brain”.

Our debate team pondered this argument and wondered whether there really could be something about the subjective nature of experience that is forever inaccessible to scientific methods. Our conclusion was: maybe, maybe not. Perhaps there are some aspects of consciousness that we’ll never understand, but there are tons of questions about how the brain works that we know should have concrete answers. We decided to focus our energy on thinking about what would be needed to answer those questions.

Can we even understand networks that we created?

Another argument for why we’ll never solve the brain comes from our experience with artificial neural networks trained by machine learning algorithms.

Humans can program an artificial network to solve difficult problems, such as recognizing handwriting or speech, by training it on a large amount of data. For example, you provide the network with a bunch of handwriting samples (inputs), let the network guess what words are being written (output), and tell it what the correct answer is (desired output). Then you allow the network to change the strengths of the connections between its artificial neurons, which changes the output it produces, and you let it keep tweaking these connections millions of times until the actual output matches the desired output as closely as possible. Many networks will naturally find good solutions to the problem.
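
To ground that description, here's a minimal sketch of this training loop using one convenient off-the-shelf setup (scikit-learn's small built-in handwritten-digit dataset and a basic multilayer network); the particular library, network size, and settings are just my own illustrative choices, not the only way to do it.

    # Minimal sketch of training an artificial network on handwriting samples.
    # scikit-learn's built-in 8x8 digit images stand in for the handwriting data.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()  # inputs: pixel values; desired outputs: digit labels
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.2, random_state=0)

    # One hidden layer of 64 units; fitting repeatedly nudges the connection
    # strengths until the network's guesses match the desired outputs.
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    net.fit(X_train, y_train)

    print("test accuracy:", net.score(X_test, y_test))

    # The learned "solution" is just these weight matrices; staring at them gives
    # little insight into what features the network is actually using.
    print("weight matrix shapes:", [w.shape for w in net.coefs_])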

[Image: schematic of an artificial neural network (credit: www.explainthatstuff.com)]

The thing is, in many cases you can look at the final state of the network and have no real understanding of what it’s doing. It’s somehow solving this complicated problem, like recognizing handwriting, but you have no idea what features it’s picking out or what types of analyses it’s performing.

So here’s a network that you created and you know literally everything about it (its components, connections, activity pattern, and the problem that it’s solving)—and yet you don’t truly understand it. How can we ever hope to understand the brain, a far more complex system that we didn’t create and know far less about?

Some people might argue that the way you programmed the network in the first place constitutes an understanding of the system. You programmed the network to have a certain structure and “learning rule” (which specifies how the connections change when the actual output and desired output don’t match), and now it’s doing what you want. You could recreate the system at will. What greater understanding do you need?
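
(As a toy illustration of what a “learning rule” can look like, here's a sketch of one classic choice, the delta rule for a single linear unit: each connection strength is nudged in proportion to the mismatch between desired and actual output. This is just one simple rule among many, not necessarily what any particular network, let alone the brain, uses.)

    # Toy delta rule: adjust each weight in proportion to the error
    # (desired output minus actual output) times the input on that connection.
    import numpy as np

    rng = np.random.default_rng(1)
    inputs = rng.normal(size=(500, 3))
    true_w = np.array([1.5, -2.0, 0.5])   # hidden "correct" mapping (made up)
    desired = inputs @ true_w

    w = np.zeros(3)                        # the unit starts out knowing nothing
    learning_rate = 0.01
    for x, target in zip(inputs, desired):
        actual = x @ w                                  # current guess
        w += learning_rate * (target - actual) * x      # the learning rule itself

    print("learned weights:", w)           # gradually approaches true_w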

I think many of us don’t find this view to be very satisfying. I’d like to have a deeper understanding of how a network actually transforms an input into an output. So the idea that we can’t even understand networks we’ve created is kind of depressing. However, I’m not sure how strong the “artificial networks are inscrutable” claim actually is. If we were smarter and spent more time on the problem, maybe we could figure out how these networks operate.

Moreover, artificial networks aren’t the same as the brain, so even if we can’t figure them out that doesn’t mean the same is true of the brain. For example, the brain is more constrained than an artificial network. It doesn’t have an arbitrarily large number of neurons or connections to work with, since we only have so much energy to feed it and so much space in our skulls. And its structure and function aren’t created de novo, but instead reflect our evolutionary history. It’s possible that these constraints will actually make the brain easier to understand than a network where everything is connected to everything else.

If we solve the brain, will we be able to understand the solution?

Ok, so let’s say we believe that science will someday be able to figure out how the brain works. There’s still the possibility that this “solution” will be so complicated that it won’t provide an intuitive answer that we find satisfying.

We humans are limited in how much information we can hold in our minds at the same time. But the answer to how the brain works might require understanding a huge number of complicated, interconnected processes all operating at the same time. So even if we could figure out the solution and model it on a computer, we might not truly “understand” it at a conceptual level.

This is the “we’ll never understand the brain” argument I find most convincing, but it doesn’t worry me too much. I guess my only rationale for being optimistic is knowing that there are parts of the brain that we understand to some degree. And there’s no real evidence that what we don’t understand is too complicated to understand at all.

In addition, I’d be more pessimistic if I thought that the brain was just one giant network that processes all types of information everywhere. That would make it pretty difficult to understand. But there’s plenty of evidence that the brain consists of functional modules, with different regions or circuits processing different types of information in different ways, even if they also interact with one another. This makes it more likely that we’ll be able to understand the brain by understanding different parts of it separately and then putting things together.

So my personal conclusion is that someday, somehow, we will understand how the brain works. I’m hard-pressed to predict whether it’ll take 50 years or 100 years or 1000 years, but I think it’ll happen. In the next post I’ll summarize our discussions on what it will take to get there.

