What makes a good explanation?
A good explanation is one in which the explanatory target is shown to be a necessary outcome of something simpler (and often smaller). For example, when science explained what water was, it did so by showing how all the macroscopic behavior of water--boiling point, viscosity, not mixing with oil, etc.--proceeds from the structure of H2O. Importantly, this explanation not only shows that the behavior can proceed from the structure of H2O, but that it necessarily proceeds from that particular structure. Given this specific molecular structure, you will always get the properties of water, and if you change the structure, you won't get those properties. Thus we conclude a necessary identity between the explanatory target and the explanatory elements: water is H2O (Kripke). This is the hallmark of good science: a necessary identity. The same is true of heat. Even though heat can be a "fuzzy" concept, it is ultimately some macroscopic behavior that can be explained as a necessary outcome of something else: heat is molecular motion.
Do we have a good scientific explanation for consciousness?
No. While we can in some cases show neural correlates of conscious experiences, we cannot demonstrate a necessary identity--why this particular neural firing pattern necessarily leads to that particular conscious experience. In fact, it seems almost ridiculous to expect such a thing even in principle. Given the trajectory of scientific explanations, it's perfectly reasonable to expect to find necessary identities between neural firing patterns and macroscopic behaviors, such as moving your arm. And this is often what neuroscientists focus on, and demonstrate successfully (also why they are usually behaviorists :p). But how could you show that a particular neural firing pattern A necessarily leads to the smell of chocolate, as opposed to vanilla? This type of question is often referred to as the hard problem of consciousness (Chalmers) or the explanatory gap (Levine). Unlike heat, water, and moving your arm, it's not clear that conscious experiences would necessarily follow from any particular fine structure.
Why is consciousness different?
Unlike heat and water, it doesn't seem like conscious experiences have spatial extension. In the scientific examples, we show how a macroscopic behavior--which is by definition spatially extended--proceeds from a microscopic structure. The spatial extension of the explanatory target almost appears to be a prerequisite for being able to do physics on it--if you aren't extended in space, I can't obtain your physical structure and show how you proceed from it.
In the case of conscious experiences, we have a microscopic structure--the neural firing patterns. And these firing patterns have a well-defined spatial extension, localized to the brain. But what is the spatial extension of a conscious experience? When I taste cherries, or think about the number 5, there are spatially extended neural patterns in my brain, but what is the spatial extent of the experience? It almost appears as though conscious experiences don't have an external spatial extension. Rather, they are the internality of some physical process--they are what a specific neural firing pattern feels like, from the inside, so to speak (Nagel). There's no scientific or logical reason to believe that any given physical process should have something it feels like internally. We just happen to know that this is the case for human brains.
And so we conclude that consciousness cannot be explained by traditional physics, because it is not a spatially extended behavior that can be reduced to a microscale structure. It is rather the interiority of particular microstructures. And if it does not have spatial extension, then it cannot be shown to be a necessary outcome of microstructures by physics alone.
How can we get necessary identities for non-spatial objects?
If we need spatial extension to show necessary identity with physics, what can we use to show necessary identities for non-spatial objects? Is there anything that can instantiate abstract, non-extended objects in physical processes?
Suppose you were watching a red crab on your screen. And your friend, who had never seen a computer, said "whoa, what is that thing on your screen?" You respond, "it is a particular transistor firing pattern A." Similar to the explanatory gap problem, your friend responds, "but those two things can't be identical. There is no way you can show that this red crab, as opposed to a green crab, is a necessary outcome of transistor firing pattern A. You can only show that they happen to be correlated. Therefore there is no necessary identity, and you have not explained the red crab appropriately."
Your friend is right to have the intuition that there is no way, in principle, you could show how a random firing pattern leads to this red crab before him, and not a green one. If you didn't have the interpreter and operating system, it would truly seem like magical emergence that you can make almost anything happen on the screen just from random firing patterns. This is how I personally felt about computers for most of my childhood. They were baffling, like consciousness.
The purpose of a Turing machine--or its physical realization, a computer--is to encode abstract objects (like the number 5) in physical processes. In fact, Turing completeness guarantees that any computable abstract structure can be instantiated in a physical process that follows certain rules. And with this we have a great example of how a micro-scale firing pattern A will necessarily lead to a red crab on your screen and not a green one. The chain is just incredibly complex, running through an interpreter, operating system, graphics stack, etc.
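To see the flavor of this, here is a toy sketch of my own (a simplified 24-bit RGB "framebuffer" one pixel wide--an illustrative assumption, not any real graphics format). Given the bit pattern and the decoding rules, the color follows necessarily; flip the bits and you necessarily lose the red.

```python
# Toy sketch: a one-pixel "screen" decoded from a byte pattern.
# The encoding (24-bit RGB) and names are illustrative assumptions,
# not any particular real framebuffer format.

PATTERN_A = bytes([0xFF, 0x00, 0x00])  # the "transistor firing pattern"
PATTERN_B = bytes([0x00, 0xFF, 0x00])  # a different pattern

def decode_pixel(buf: bytes) -> tuple[int, int, int]:
    """Decode three bytes as an (R, G, B) pixel value."""
    return (buf[0], buf[1], buf[2])

# Given the pattern *and* the decoding rules, the color is necessitated:
assert decode_pixel(PATTERN_A) == (255, 0, 0)  # red, necessarily
assert decode_pixel(PATTERN_B) == (0, 255, 0)  # change the pattern, lose the red
```

The "interpreter" here is just decode_pixel; in a real machine that role is played by layers of drivers, compositors, and display hardware, but the logic is the same: pattern plus rules necessitates appearance.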
Closing
Thus, if we are to show how conscious experiences are necessary outcomes of neural firing patterns, we very likely need to leverage something like computation, in addition to physics/neuroscience. And possibly even blur the line between the two fields, to make a more unified explanatory framework that can handle entities both with and without spatial extent. Disciplines like quantum computing are examples of such blends. We would like to see more of these.
Cognitive science is another example of such a field intersection, where various computational models are attempted in order to replicate things like learning. Neural networks themselves were conceived by McCulloch and Pitts, a neurophysiologist and a mathematician. Although displays of intelligent behavior are not direct examples of conscious experience, this line of work builds a powerful epistemology for instantiating human-like responses in physical processes, and may shed light on the relation between physical flickering, intelligence, and conscious experience. We are sort of banking on the hope that once we figure out how to fully replicate our general intelligence, consciousness comes along for the ride.
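As a minimal sketch of where that lineage started: the McCulloch-Pitts unit is essentially a threshold gate over binary inputs (this simplified version omits the inhibitory inputs of their original model).

```python
def mp_neuron(inputs: list[int], threshold: int) -> int:
    """Simplified McCulloch-Pitts unit: fire (output 1) iff the number
    of active binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Logic gates fall out of a single unit by choice of threshold:
assert mp_neuron([1, 1], threshold=2) == 1  # AND fires
assert mp_neuron([1, 0], threshold=2) == 0  # AND stays silent
assert mp_neuron([1, 0], threshold=1) == 1  # OR fires
```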
Is it also possible that there are missing elements in addition to computation? When we look at things like radical leaps in creativity--creating something seemingly outside the existing ruleset of possibilities, causing a paradigm shift in worldview (e.g. from Newtonian to quantum mechanics)--these seem to be beyond what current rule-following LLMs are capable of. What if the brain leverages computation, but also some other stuff? If the brain can, in some instances, operate beyond the critical chaos threshold, that would explain our ability to mine novelty out of nothing, to reach outside our current set of rules into something more general, to be creative. The ordered mode of operation would be predominant, bringing critical inspection, logic, etc. to thinking. But the disordered mode would be wild and creative, and it comes down to brains being able to tune their level of disorder carefully enough to dance around the critical point without spiraling out of control (a toy illustration of this tuning follows below). Thus we conclude that cognitive science and AI should incorporate more disordered computation and learn to dance around the critical point.
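To make "dancing around the critical point" slightly more concrete, here is a toy sketch using the logistic map, a standard minimal system with an order-to-chaos transition. This is an analogy only; nothing here claims the brain implements this map.

```python
# A toy analogy (not a neural model): the logistic map
#   x_{n+1} = r * x_n * (1 - x_n)
# passes from order to chaos as r crosses a critical value near 3.5699.
# Tuning r is a cartoon of a brain tuning its level of disorder.

def logistic_tail(r: float, x0: float = 0.2, burn: int = 1000, keep: int = 6):
    """Iterate the map past transients and return the last few values."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print("ordered  (r=3.2): ", logistic_tail(3.2))   # locks into a 2-cycle
print("critical (r=3.57):", logistic_tail(3.57))  # intricate, barely periodic
print("chaotic  (r=3.9): ", logistic_tail(3.9))   # aperiodic, wildly sensitive
```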