If the brain isn't a computer, what is it?
This article summarizes Robert Epstein’s critique of the “computer metaphor” for the brain. I agree that the current breed of electronic devices we call “computers” is a poor analogy for the brain, which seems to work in entirely different ways. But I think he goes too far when he says that the brain does not process information.
For one thing, we CAN compute. We invented it. Computers only do it on our behalf, because we designed them to do so.
I also find the supposed alternative vision—an organ that changes based on experience—a bit short on specifics. I can think of other organs that change based on experience: muscles, for instance, grow and shrink depending on physical activity and food intake.
So the question arises: if brains aren’t computers, what are they?
______________________
The empty brain
Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer
by Robert Epstein
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
47 Answers
It’s a distributed control system. It processes feedback from various sensors (skin, internal sensations such as heart rate, lungs, “gut feelings” – including, literally, how your gut feels – input from eyes, ears, etc.) and sends commands to autonomous, semi-autonomous and voluntary musculo-skeletal systems to perform certain acts.
It also does numeric and verbal processing, “like a computer”.
Mr. Epstein wrote, “But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.”
Genes create the ability to assemble sensory input into a coherent, albeit approximate, representation of reality. The construction of neural connections organized into specialized functions that bidirectionally communicate is directed by genetic rules. Without the memory of events with associated emotion, learning to approach what is desirable and avoid the undesirable would not be possible. Establishing symbols permits the evolution of verbal and nonverbal language. Observing the behaviors of others creates models that can be replicated.
Most of the computer structures that Mr. Epstein denies are present in a brain do have corresponding structures. Although comparing brain functions with computer functions is a poor analogy, his belief that the “brain does not process information, retrieve knowledge or store memories” is incorrect. Even combining multiple computers into parallel systems does not approach the complexity of the human brain. However, quantum computation may perhaps equal our biological ability in the future.
A computer does more than compute. It can also transmit and receive.
More than a few great ideas and inventions have been initiated from dreams, or shamanic-style vision quests, or drug-induced hallucinatory states, or deep meditations. Some would suggest that our brains are capable of receiving and then processing information from a so-called quantum field, which exists completely independently from our brains.
“It can also transmit and receive.”
receiving, a.k.a. seeing, hearing, smelling, tasting, touch sensing, heat sensing, sense of balance
transmission, a.k.a. talking, grunting, laughing, crying, moaning, gesticulation, general body language.
You might say, “But loli, those require extra organs!”
To which I respond, “So does a computer. At its core, the parts that are required for the computation are the CPU and memory.
Everything else is specialised hardware. You need specialised controllers for handling input like mouse and keyboard. You need specialised hardware for direct output of sound and video, a.k.a. sound and video cards. You need specialised hardware for communication: network cards.”
This is actually a topic I might end up studying one day, although as of right now I haven’t taken any neuroscience classes yet (I’m planning to).
It depends a lot on how rigorously you define “computer.” If you’re using a mathematical definition of computer, then of course the brain is not a computer. We can’t simulate an arbitrary Turing machine in our mind, and we can’t parse Chomsky type-0 grammars with perfect accuracy. Then again, that’s all theoretical anyway; no physically real computer can do that stuff either.
Overall, we have some of the best audio and visual processing hardware in the known universe. It’s really good at special-purpose data processing. Unfortunately for us, these have nothing to do with our theoretical status as a computer. By ourselves, we really suck at actual computing. We’re good at processing queries (get asked a question, interpret it, access relevant memories, process them, make a decision, and speak the result); however, if given an algorithm to follow and some input data, we struggle. We have a small working memory, and thus act similarly to a computer with a really limited amount of RAM. If I were to perform a numerical algorithm, I would take on a role more similar to a CPU than the whole computer, since I would use external storage (like a piece of paper) to keep track of data in a rigorous fashion. We’re also very slow at such tasks, despite the fact that the brain operates really fast (the “speed of chemistry”) for its special-purpose data processing. This indicates to me that we don’t carry out those general-purpose algorithms on “bare metal,” but are actually doing something rather abstract on the inside. The good news is that this doesn’t negatively affect our status as a computer.
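To make that “human as CPU, paper as RAM” picture concrete, here is a minimal sketch (my own illustration, not anything from the post or the article) of column-by-column addition written the way we do it on paper: at any moment the only thing held “in working memory” is the carry digit, while every other digit sits on the external medium.

```python
# A minimal sketch of the "human as CPU, paper as RAM" idea: column-by-column
# addition the way we do it on paper, holding only one carry digit "in mind"
# while all the other digits live on the external medium (here, plain lists).

def add_on_paper(a_digits, b_digits):
    """Add two numbers given as lists of decimal digits, most significant first."""
    # Pad the shorter number with leading zeros, like aligning columns on paper.
    width = max(len(a_digits), len(b_digits))
    a = [0] * (width - len(a_digits)) + a_digits
    b = [0] * (width - len(b_digits)) + b_digits

    result = []   # the "paper": external storage for the answer
    carry = 0     # the only thing held in working memory at any moment

    for col in range(width - 1, -1, -1):   # rightmost column first
        total = a[col] + b[col] + carry
        result.append(total % 10)          # write this column's digit down
        carry = total // 10                # remember only the carry

    if carry:
        result.append(carry)
    return list(reversed(result))

# Example: 478 + 256 = 734
print(add_on_paper([4, 7, 8], [2, 5, 6]))  # [7, 3, 4]
```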
——
I believe the brain is a computer, it’s just that it functions in a radically different way. It’s hard to overstate how differently the brain works from man-made computers. We’re good at specialized data processing (much better than man-made computers), and at making decisions based on this data, although this doesn’t have anything to do with the formal concept of a computer. That’s more similar to a giant data processing system or information system. We’re really slow and error-prone at following general-purpose algorithms, but we can do it, which I think does fulfill the requirements for a physical computer (theoretical computers are physically impossible).
Maybe, as some additional evidence as to our status as a computer, consider tasks of the form “given problem X, write an algorithm to solve problem X for arbitrary inputs.” People (aka programmers) solve those kinds of tasks all the time, and I don’t believe there’s any way for something less powerful than a computer to be able to do it efficiently.
——
If you’re using man-made implementations of computers to define what a computer is, then of course you won’t consider the brain a computer either. I think this is a very wrong approach to take. Implementation has very little effect on whether or not the device is a computer. Data buses, ALUs (arithmetic and logic units), multiplexers, etc. don’t count for much.
Data isn’t stored, transmitted, or used in the brain the way it is in silicon computers. Brains are massively parallel, distributed, and pipelined. We don’t have neurons that correspond to binary numbers, and we don’t have an ALU. We don’t have shared data buses with multiplexers on one end and demultiplexers on the other. None of those are even needed.
Despite those differences, we definitely have a lot of hardware, and the pieces of hardware are definitely communicating amongst themselves in some manner. I want to bring up the concept of split-brain people, those with the two brain hemispheres separated. In a normal human, a bundle of nerve fibres called the corpus callosum transmits information between hemispheres. Here’s an exercise for the reader: look straight ahead at, say, the opposite wall of the room you’re in. Without moving your eyeballs, identify an object that is in the left half of your field of vision, and say that object’s name. The left half of your visual field is processed by your right brain hemisphere, while the speech center is in your left hemisphere. This indicates that “abstract information” (something representing the identity of the object, rather than raw visual data) crossed over the corpus callosum. We don’t know exactly how that works, but it obviously does.
@ragingloli “You need specialised hardware for communication”
Am I mistaken in considering that our posts are in agreement with one another? If you’re arguing against my post, I fail to see it.
I’m reasonably certain that the brain does not depend on any special quantum effects, especially not something that “exists completely independently” and from which we can receive information. I’d be interested to know where you read that.
The human brain is a meat computer with a high degree of parallelism and it is also a telemetry device.
@PhiNotPi “I’m reasonably certain…”
For your consideration, a few of the many competing theories for non-locality…
… and of course, the non-local implications of the mysterious Double Slit Experiment.
I’m not trying to convince anyone of anything. I don’t pretend to know. But there is considerable research being done on non-locality by competent scientists.
Interestingly, here are a couple of banned Ted Talks about the subject…
The Science Delusion
The War on Consciousness
Funny that these talks were banned from the TED platform entitled…
Challenging existing paradigms
Funny guy that TED.
I tend to agree that classic materialism is an outdated belief system. It’s also self-contradictory, in that it cannot account for its own emergence. Materialism IS an idea that its proponents say is true, and yet there’s no room for “true ideas” in the materialist world view.
This said, the material you shared is too “new age” for my taste. I don’t believe in shamanism or in the Star Wars force. So I see the brain as a pretty efficient information analyst and synthesizer, not as a cosmic radio receptor.
@RealEyesRealizeRealLies That’s some extraordinarily fringe stuff: telepathy, morphic resonance, shamanism, etc. I would categorize most of it as quantum mysticism, an attempt to reconcile quantum physics with the concept of spirits/souls, which tends to interpret the “non-locality” of quantum physics to mean there’s somehow spiritual “connections” between consciousness / between the body and an external consciousness.
I was very careful to use the phrase “special quantum effects” because, pretty much by definition, quantum physics is the underlying physics of the universe, so therefore everything is quantum physics. I think it’s a mistake to say, however, that consciousness is nonlocal because quantum physics is nonlocal. When it comes to chemistry, everything at a scale larger than molecules has undergone some very thorough quantum decoherence, and can be modeled very accurately with classical (local) models. To be more specific, quantum physics is needed to predict any given reaction rate, but once you know that rate, you don’t need quantum physics anymore to model it. If someone were to make a model of the human brain, complete with protein structures and chemical concentration gradients, all the way down to the cubic micron level (or smaller), I believe the end product would act just like a normal human brain. No quantum physics would have to be involved. That being said, physical simulations of the required accuracy might not be possible for a very long time.
The double slit experiments don’t need a human observer; they just require that the information be there. The act of measuring does not even collapse the wave function. They have run that experiment with the sensors on and the data logging turned off and got a diffraction pattern. When they turn on the recording, it yields a two-slit pattern. The delayed choice and delayed choice quantum eraser experiments had some trippy results too. The big takeaway for me was that the universe reacts when information is stored. It does not really require a conscious observer. Since this shows that the universe appears to have made a calculation based on a rule, it greatly favors a computed reality in my mind. Not that it has to be a simulation, just that the universe at least has a computable nature.
The words on your post represent your thoughts about this subject. Where are the thoughts that your words represent?
If I died tomorrow, my words here would still represent my thoughts just as well as they did today. Those thoughts can’t be in a dead brain. Where are they?
I suppose that my own pet explanation for the whole shebang may sound weird to others.
Briefly said, it is that life itself, from the first DNA onward, is about information bossing around matter, including the very matter on which DNA is written. There are thousands (millions?) of genes coding for the proteins necessary for mitosis and meiosis. Genes written on DNA, about how to duplicate, share and sex up DNA… That’s form (in-form-ation) bossing around matter.
From that perspective, the brain is just another way for life to boss matter around. Notice that most animal species have a brain. Cephalisation is a long-term evolutionary trend in the animal kingdom. That’s because locomotion—which defines animals as opposed to stationary plants—only makes sense if you know where you’re going. Hence animals’ need for real-time information gathering and processing, hence the senses and the brain.
Even single-celled eukaryotes can learn something from their environment:
—————————
Even with no brain, slime molds quickly learn bitter lessons
By Amina Khan
April 27, 2016, 12:00 p.m.
You don’t need a brain to learn something new – not if you’re a slime mold, anyway. Scientists who watched Physarum polycephalum search for food found that the slime mold could learn to ignore certain chemical threats.
http://touch.latimes.com/#section/-1/article/p2p-86857881/
————————————
If that’s what a primitive single-celled organism can do, imagine what a SPECIALIZED cell like a neuron can do. Imagine what a hundred billion neurons can do.
It was only a matter of time before cephalisation would result in a brain large enough for the information within it to KNOW ABOUT ITSELF. Consciousness is information processing that keeps tabs on itself. Self-awareness. It may have started with the octopus. A mollusc. By evolutionary standards, they’re pretty old, but very smart.
And for self-aware animals to evolve into symbolic thinkers was only a matter of time.
@olivier5 “from the first DNA onward, is about information bossing around matter”
Physics defines information as synonymous with matter.
I believe this is erroneous, and leads to much confusion when discussing these topics.
I prefer the formal definitions from the information theory advocates, which suggest that information is the value of a thing, rather than the thing itself.
“Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.” (Norbert Wiener, Cybernetics, p. 147)
… so your statement depends upon which definition of information you use.
_________
@olivier5 “Genes written on DNA… That’s form (in-form-ation) bossing around matter.”
Genes are physical form. But they represent the Genome, which is formless… pure information… the value of the genes… The Meaning. Meaning is immaterial. You cannot show me a handful of meaning. You can only show me a handful of something that represents meaning.
Like the words on this page. The words are not the meaning. The words only represent the meaning. Where is the meaning?
________
@olivier5 ”... only a matter of time”
Matter and time have never been proven mechanisms for creating information, or authoring meaning. Matter and time can be components in creating things like tornadoes… Wind + Pressure + Heat + Cold + Time = Tornado.
But that tornado will never write a poem.
The meaning is in the function, it is in what the genes do or control. It’s not a thing, it’s the way things work.
You seem to be closer to defining information the way Wiener suggests.
You seem to suggest belief in an immaterial realm that energy and matter alone cannot account for.
informare – Latin – to form the formless.
ation – Latin/French – the action or process of doing something.
Information – The action of forming the formless.
Thoughts become reality… And the word became flesh.
I am defining information as the forms that matter and energy and space and time take. The shape of things. So it’s not immaterial at all: on the contrary, it is intrinsically tied to matter. There’s no matter without form. Matter has structure all the way down to the quark level, and probably further. I suggest that, vice versa, there can be no form without matter.
It’s two sides of that same coin we call reality.
Not that we know what matter is, mind you.
I appreciate you putting your assessment out there. Not easy to do. Bravo. If you’re like me, your assessments of reality will change and modify over time. Good to speak it out, and get it thought out in your head. These discussions help that process.
@olivier5 “I am defining information as the forms that matter and energy and space and time take. The shape of things.”
We already have a word for that. Form. Surely you can see the problems with defining Information as synonymous with form.
@olivier5 “So it’s not immaterial at all. On the contrary it is intrinsically tied to matter. there’s no matter without form.”
Getting deep into the philosophies here. The Theory of Forms has been discussed since Plato… Platonic Forms. As we shape our ideas… we create form… Which is intrinsically tied to the etymology of In-Form-Ation described above. The idea… the meaning is formed with mind, and is represented by a physical object made of matter.
The keyboard you type on is a physical representation of the original idea of a keyboard from the first person who thought of it. The coffee mug you drink from is a physical representation of the idea of the first coffee mug from the person who first thought of it. The automobile was first a thought. It was formed on paper, as plans that represent the idea. The plans allowed it to be physically manifest into a car that we drive. But the car we drive is not Information. It represents the information that created it from the original design, which represents the original thought about it.
It is heavily dependent upon thought. The realm of ideas, which cannot be seen, or touched, or tasted with any natural human senses. Thoughts don’t spill onto the floor during brain surgery because there are no thoughts inside the brain.
Language breaks down at this level. It is difficult to overcome our natural tendency to describe these notions with analogy. We typically consider thought like water that is inside the brain bucket. But if thought is immaterial, then concepts of inside and outside don’t apply.
Television and folklore have taught us that supernatural dimensions must operate on the same principles that our material realm is bound to. It’s most difficult to break that indoctrination, and consider an immaterial realm that is not bound by the same principles of matter, energy, space and time.
I don’t believe our current language has words to describe the indescribable.
We need new words to discuss these realms. Especially when disciplines like Physics hijack words like Information to mean something entirely different from what all the other disciplines adhere to. We’re breaking the microscope and claiming the cracks in the glass must mean something important. The whole thing is reaching comic proportions.
This is why genes and the genome are not the same.
We can see and touch genes because they are physical objects. We cannot see or touch the genome because it is a non physical entity. One represents the other. Genes represent The Genome. Just like the words on this page represent my thoughts. But these words you see are not my thoughts.
Where are the thoughts that these words represent?
@RealEyesRealizeRealLies
disciplines like Physics hijack words like Information to mean something entirely different from what all the other disciplines adhere to. We’re breaking the microscope and claiming the cracks in the glass must mean something important. The whole thing is reaching comic proportions.
Sciences like biology are making a step in the right direction by recognizing information as a variable (the variable of all variables, so to speak), by recognizing knowledge as a fact, etc. These are steps made in the direction of understanding what you call “meaning”, but you are right that it’s not the whole journey.
A distinction has to be made between physical information à la Wiener and symbolic information, like sentences. Symbolic information is the lay meaning of the word “information”. It cannot (yet?) be measured by science. It can be said that any sentence could have an infinite number of meanings, depending on the circumstances and interpretations. The phenomenon of glossing is in theory infinite, for any text. Another way to say the same thing is: there’s no end to the beauty of some paintings; their aesthetic value cannot be measured. It is literally immense.
Even bytes and megabytes of the computer are symbolic, and physical at the same time of course. Information is always written ON something, ideally something durable. Even symbolic information has to be on some support or another. Even in the brain, that rule (a basic rule of my own non-naïve materialism: there’s no info without a medium) must be true.
Ergo the brain is like a sheet of paper. Make that 100,000,000,000 sheets of paper.
@realeyes
But if thought is immaterial, then concepts of inside and outside don’t apply.
The notions of “matter” and “material” are very vague. Does that include energy? I suppose it does. What about space? Is spacetime material? Not really. Yet it must exist. It’s part of reality.
There’s no doubt in my mind that minds and ideas are part of our reality. It’s all we know and experience daily, in fact. We don’t really experience “matter”, only a sensation of it, i.e. some matter interacts with our matter and it creates a signal which ends up coded as “red” in your mental world. So thoughts exist.
The question you ask seems to be: do thoughts happen in spacetime, or outside of it? I, as a realist, answer yes, it is here and now that I think this. And you can’t have any idea of what I think without some material link between us, by way of the interwebs… Sorry to break it to you, but telepaths are all fakes.
I really need to break down your comments to understand you better.
@olivier5 “A distinction has to be made between physical information à la Wiener and symbolic information, like sentences.”
This really concerns me: the claim that Wiener considers information physical. He does not. “Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.” (Norbert Wiener, Cybernetics, p. 147)
I don’t know how much more clearly he could state that information is not “energy or matter”.
@olivier5 “Symbolic information is the lay meaning of the word “information”. It cannot (yet?) be measured by science.”
It can indeed be measured to a high degree of accuracy. Down to the bit. Our entire modern world depends upon quantifying it with precision. We couldn’t carry on this conversation otherwise.
Shannon’s protocols are the groundwork for our entire computer age; see his paper, A Mathematical Theory of Communication.
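As a concrete illustration of “measured down to the bit”, here is a minimal sketch of my own (not code taken from Shannon’s paper) that computes the Shannon entropy of a message: the average information content per symbol, in bits, based purely on symbol frequencies. Note that the meaning of the message plays no part in the measure.

```python
# A minimal sketch of Shannon's measure: the average information content of a
# message in bits per symbol, computed only from symbol frequencies.
# (My own illustration of the idea, not code from the 1948 paper.)

import math
from collections import Counter

def shannon_entropy_bits(message: str) -> float:
    """Average information per symbol, in bits: H = -sum(p * log2(p))."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

msg = "time flies like an arrow"
h = shannon_entropy_bits(msg)
print(f"{h:.2f} bits per symbol, roughly {h * len(msg):.0f} bits for the whole message")
```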
So you must understand… I’m having trouble accepting your claims, and therefore having trouble understanding your comments.
@olivier5 “Sorry to break it to you but telepaths are all fakes.”
Language is the tool that allows telepathy. We use it to know each other’s thoughts. It’s the one and only tool to read another person’s mind with. You’re reading my mind right now. In fact, that’s all language does. That’s its purpose. It allows us to share thoughts.
The distinction between physical and symbolic information is a classic one. I did not invent it. Much confusion and paradoxes are generated once people neglect this distinction. They start to think that human minds are like electronic computers, for instance.
The key difference, I think, is that symbolic information is meant to convey meaning, which is something present-day computers entirely ignore. Symbols stand for something else. They can SAY something and MEAN something else, e.g. in sarcasm. You can indeed measure the signs, count the words and the letters, etc., but you can’t measure the meaning, because human symbols are polysemic: they can mean several things. There’s ambiguity in them, always. Hence a poem will be richer in meaning than computer code.
Yes, Google doesn’t care about the meaning of a search for “funny cat videos”. It is meaningless to Google. It’s just a data query. But that so-called physical information can always be traced back to the genesis of a sentient author who imbued the term (their symbolic query) with genuine meaning. It did not, because it could not, arise by chance.
This is different from the physical information of the physics disciplines, which is mistakenly equated with observable phenomena. They are not synonymous. The information doesn’t exist until an observer describes the phenomenon. Then we have symbolic information, a.k.a. red, round, large, which, as a description, is completely independent of the phenomenon being described, and completely independent of the medium it was encoded upon.
Is this closer to what you’re talking about?
I appreciate your feedback, Real. Maybe I need more precise vocabulary, rather than lumping everything under “information”. Let’s try this:
As an example, let’s take a sheet of paper on which is written the sentence “Time flies like an arrow.”
1. The ink on the page follows certain patterns, or FORMS: characters for instance—some look like one another, while some don’t. Let’s call that level “objective forms”. They are there whether someone sees them or not.
2. Once someone sees them, we reach another level: DATA, or “observations of forms”, if you prefer. Here we enter the realm of mental events. A little subjectivity is introduced but not much.
3. Once the observer notes that the black forms on the page are probably letters, we can call them SIGNS. This implies a symbolic signification and would lead almost anyone to try and decode the signs.
4. Once the observer manages to group the letters into words and decode the words, we reach another level: the reader is presented with a PROPOSITION where the words interact with one another through grammatical rules. There’s a subject (time), a verb (flies), etc.
Note that this step requires a greater level of subjectivity and interpretation than prior steps. One could read the sentence differently, eg the subject could be “time flies” (there could exist a species of insect called “time flies”), and the verb could be “like”. Etc.
5. Finally, if one manages to interpret the statement, one arrives at the level of MEANING. That step requires a lot of subjectivity. Most if not all propositions are ambiguous. There could be several layers of meaning, e.g. “time goes so fast” + “time goes only in one direction, there’s no turning back” + “time kills us all in the end” (an arrow is a weapon).
To me, all these levels are part of one big set called “information”. I see a lot of commonalities between them. But there are other categorizations. A frequent one is: Pattern / data / information / knowledge. In this view, knowledge (about the same as “meaning” in my categorisation above) is different from information, more distilled.
But semantics matter only so much. What I mean in layman’s terms is that life, right from the first evolutionary steps, has always been about the genetic code shaping matter, structuring it, what I called “bossing matter around”. And further, that all the animals’ nervous systems are a way to base behavior on some knowledge of the environment. Which explains why our brain is not good at algorithms: no algorithm ever helped our ancestors survive. What the system is after is just data integration and analysis + decision making in real time, as fast as possible + some redundancies and a capacity to self-repair.
@olivier5 “To me, all these levels are part of one big set called “information””
Olivier, I think it’s great that you have so much interest in the information sciences. Really awesome. I wish more people did. Consider that you may be attempting to redefine the wheel without realizing that the wheel has already been well defined.
Your above list (1 through 5) is more like a communication protocol than a real definition of information. Compare your list to the Shannon model that Wiener utilized for his Cybernetics. Hubert Yockey discovered that DNA/RNA transcription fits the identical protocol. In fact, every communication known to mankind runs through the Shannon protocol. This model should illustrate for you the processes behind what you call “bossing matter around”.
Patterns do not, and cannot, “boss matter around”. Only a genuine code, like English, or DNA, or binary, can “boss matter around”. A genuine code is one that conforms to Perlwitz, Burks and Waterman’s formal definition of code. It could be drum beats, or colors, or the figure-eight waggle dance of the honey bee, or hand sign language. There are many other subtle forms of code that qualify which aren’t so easily recognized as codes.
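To show what a genuine code amounts to in this Shannon-style picture, here is a minimal sketch of my own (a toy Morse table, not anything from Yockey or from the formal definition cited above): an agreed mapping between two symbol sets, with an encoder on one end and a decoder on the other.

```python
# A minimal sketch of a "genuine code": an agreed mapping between two symbol
# sets, plus an encoder and a decoder that share the same codebook.
# (Only a handful of Morse letters here, purely for illustration.)

MORSE = {"S": "...", "O": "---", "E": ".", "T": "-", "A": ".-", "N": "-."}
INVERSE = {v: k for k, v in MORSE.items()}  # the decoder's copy of the codebook

def encode(text):
    # Source -> encoder: map each source symbol to its codeword.
    return " ".join(MORSE[ch] for ch in text)

def decode(signal):
    # Channel -> decoder: map each received codeword back to the source alphabet.
    return "".join(INVERSE[codeword] for codeword in signal.split(" "))

sent = encode("SOS")
print(sent)          # ... --- ...
print(decode(sent))  # SOS
```

Swap the dots and dashes for nucleotide triplets, or for voltages on a wire, and the structure of the chain stays the same: source, encoder, channel, decoder, destination.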
_____
You are free to define Information any way you like. You may choose to disregard the links I provided a few posts above from the different disciplines. I was not looking for another definition personally. More specifically, I’ve already accepted what other information professionals have put forward in the Journal of the American Society for Information Science. Consider the Losee paper available at the UNC School of Information and Library Science. Unfortunately, you must now sign in to get the paper, but I downloaded it a long time ago when it used to be free. Try Losee’s paper, A Discipline Independent Definition of Information.
With respect Olivier, if you have any issues with my assessment, then your argument is not with me, but instead with the authors of the links I’ve provided. I’d be happy to continue this conversation if we can come to agreement on terms of communication. My understanding is guided by these principles. We can talk more precisely if you understand them. Or if you have challenges to the links provided, then I can consider your sources on this topic as well. Just provide me some source material that supports your premise, and I’ll consider it honestly.
@olivier5 I reread the article in your OP. I must say, it is rare that I come across such blatant nonsense.
“But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.”
The author should consider speaking with a geneticist or molecular biologist… since every cell inside the brain, and the entire body, utilizes an extremely advanced quaternary code comparable to binary, and houses a full information-processing transcription and translation machinery for converting specific genes into proteins… “bossing matter around”.
One clue to how biased and confrontational the article is: the author intentionally avoids using the word “code”. Not once is the word “code” used in his article. It’s easy to promote any silly argument by leaving out the most important discovery of our time, and the entire mechanism that evolution functions upon.
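To make the “quaternary code” point concrete, here is a minimal sketch of my own using a tiny fragment of the standard genetic code: triplets over a four-letter alphabet decoded into amino acids, read the way a decoder reads codewords. The few codon assignments shown are real, but the table is deliberately incomplete and the code is only an illustration.

```python
# A toy fragment of the standard genetic code: codons (triplets over the
# four-letter RNA alphabet A, U, G, C) mapped to amino acids or STOP.
# These particular assignments are real, but this is only a sketch.

CODON_TABLE = {
    "AUG": "Met",   # methionine, also the start signal
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UGG": "Trp",   # tryptophan
    "UAA": "STOP",  # one of the three stop codons
}

def translate(rna):
    """Read an RNA string three letters at a time and decode each codon."""
    out = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino == "STOP":
            break
        out.append(amino)
    return out

print(translate("AUGUUUGGCUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Trp']
```

The full table has 64 codons; the point here is only that it is a mapping read by molecular machinery, not a mere pattern.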
Again, as the author claims:
“Not only are we not born with such things, we also don’t develop them – ever.”
If we don’t develop “symbols” or any other items on his list above, then how did he write his article? How did we interpret it?
_____
“The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’).”
“Patterns of ones and zeroes” is called CODE when speaking of computers. Call it what it really is. Otherwise “patterns of ones and zeroes” could also be a kindergartner’s art project. They call it CODE because it is MUCH MORE than a simple pattern that any neanderthal could scratch into a tree by accident with a spear while taking a shit in the woods.
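As a minimal sketch of my own (not from the article): the same “patterns of ones and zeroes” carry a message only because an agreed mapping, here UTF-8/ASCII, assigns each byte to a character. Strip the mapping away and what is left really is just a pattern.

```python
# A minimal sketch: the "ones and zeroes" only carry a message because an
# agreed mapping (here UTF-8/ASCII) assigns meaning to each byte. Without
# that code, they are just a pattern.

text = "code"
raw = text.encode("utf-8")                       # characters -> bytes
bits = " ".join(f"{byte:08b}" for byte in raw)   # bytes -> a pattern of ones and zeroes
print(bits)                                      # 01100011 01101111 01100100 01100101

decoded = bytes(int(b, 2) for b in bits.split()).decode("utf-8")
print(decoded)                                   # "code" again, once the mapping is applied
```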
_____
“No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain”
You also won’t find it inside a computer.
But both the brain and a computer can have physiological representations of B5.
The music isn’t INSIDE anything. But it can be represented by anything… Sheet music, vinyl grooves, optical laser etching, magnetism… Any physical medium can be used to represent B5. But B5 is never “IN” any of them.
______
Sorry, but I think the title of the article “The Empty Brain” is closer to a projection of the author himself than any contribution to neuroscience.
@RealEyesRealizeRealLies I have no issue with any of these links or definitions you shared. I was just trying to show the similarities between “physical information” and “symbolic information”, or whatever you want to label these categories.
I do agree that the article “The Empty Brain” is polemical and (in my view at least) goes too far in saying the brain does not process information. I think it evidently does process information, on top of other things like INVENTING new information, something computers can’t really do very well (so far). But the article has the advantage of pointing to the reductionist dimension of the “computer metaphor”. It’s simplistic and naive to think of the brain as a mechanical, predetermined system. It’s much more capable than that in certain areas and much less in others. So indeed the metaphor introduces more confusion than clarity.
@RealEyesRealizeRealLies [“No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain.”
You also won’t find it inside a computer.
But both the brain and a computer can have physiological representations of B5.]
The article postulates there is no physical code or representation of our memories in the brain; otherwise, people would be able to draw a very accurate picture of a dollar bill from memory. I think that’s a very strong argument against the computer metaphor (there’s no fuzziness in computers’ RAM, while our memories are fuzzy).
@olivier5 “But the article has the advantage of pointing to the reductionist dimension of the “computer metaphor””
No doubt. Thanks for pointing that out.
_
“like INVENTING new information”
I write songs and poetry. I used to think I created them. Currently, I feel more like an archaeologist, simply uncovering what was already there, sweeping entropy, sifting through the chaos, the noise. Revealing meaning on the scavenger hunt for what McKenna calls “linguistic objects”. It no longer feels like creation to me. More of a coming to conclusions rather than a fabrication of them.
Not to say ideas cannot truly be invented. But those invented ideas are awkward, thorny, filled with religion, dogma, and pyramid schemes. Nothing elegant about them.
Did Einstein create E=mc²… or did he discover it? Did Gamow create codified genetics, or did he discover it? I believe they dis-covered and came to the concepts, the essence, the spook… and could have used any of a myriad of symbolic representations to identify them to the rest of us.
“The article postulates… otherwise people would be able to draw a very accurate picture of a dollar bill from memory… a very strong argument against the computer metaphor”
Seemingly so. But two things the author did not account for.
If Jinny brought her sisters, and a thousand people beyond that, each reproduction of the dollar bill would have been different in some way, yet all (or at least most) of them would be fully representative of the spook.
Jinny is asked to draw $1 from memory. She did not study it beforehand. She was not allowed to be a dedicated scanner for the task. Her second drawing of the bill was far more accurate when in the presence of the original. The inconsistencies can be attributed to Jinny’s lacking art skill. Her quality of vision. The quality of her drawing implements and medium (chalk and chalkboard)... The real experiment should occur one month later, from pure memory, to see how accurately Jinny would reproduce the object. I’d bet it is closer to her second attempt than her first.
Scanners have improved tremendously over the years. A street camera recording a hot dog sale might capture the image of a dollar bill, but without the clarity of a drum scanner. Give Jinny enough time and tools to replicate the bill, and see how accurately she reproduces it from memory… Like a golf swing, a speech, learning a song. No computer or human will ever reproduce the $1 bill perfectly true to the original. But the results will always improve relative to the quality of the input/output tools.
Secondly, consider that Jinny’s first attempt was perfectly adequate to represent the value (see Losee), or the meaning, of the object. In fact, her representation was closer to the traditional representation than the symbol I use here as $1. Yet this symbol $1 and Jinny’s sketch still represent the meaning perfectly well. The copy is one of value.
Billions of $1 transactions occur every day without anyone ever pulling the paper version from their wallet. Ones and zeroes have turned out to be a far more accurate and robust representation than the old paper version.
Let us also consider that a dedicated high-quality computer scanner will also make an imperfect copy, because it can be so detailed that it sees past the original. It will actually reproduce the folds and valleys of the paper. The inconsistencies in the ink. Items that have nothing to do with the essence of a $1 bill.
From memory, I recalled his subject’s name as Jinny. But the author uses a different font than I can use on fluther. Shall we consider that my recall of Jinny is inaccurate? Or did I reproduce her name exactly enough with the tools I have on fluther?
@RealEyesRealizeRealLies
The $1 bill example is more interesting than just a simple lack of drawing skill. It points to the fact that our way to store information is not, in fact, to store it. It is to store (somehow) the capacity to recognise a $1 bill and to reconstruct what you call the “essence” or the “spook” of the bill: its main characteristics.
In that sense, our memories are re-creations of the original perception. They are produced, created, and often partly invented. Much more dynamic, changing and fuzzy than computer RAM.
If Jinny had been shown a $1 bill with the face of George Washington wearing a red clown nose, she would instantly recognise it as fake, or at least as different from her memory of the bill. She is incapable of drawing his face from memory and YET she’s able to recognise his face, and significant alterations to his representation (the red nose).
Likewise, how many lovers have complained that they can’t SEE the face of their beloved in their mind, in the person’s absence, in spite of having marvelled at his/her face so often. And yet as soon as they see their beloved, they recognise him or her… To my knowledge, Epstein is the first to have pointed out that seeming paradox, although it may also be in Proust, now that I think of it.
@olivier5 ”...It points to the fact that our way to store information is not, in fact, to store it. It is to store (somehow) the capacity to recognise a 1$ bill and to reconstruct what you call the “essence” or the “spook” of the bill: Its main characteristics.”
That’s why I’ve consistently stated represent rather than store. I know this flies in the face of all information science, but again, I don’t think any info, brain or hard drive, is ever stored. I think it’s simply represented.
Not IN the brain… like water in a bucket.
It is OF the mind… both human and computer info… available from immaterial realm, non physical, to be accessed by, and represented by, language/code.
The code we type here is mapped to binary, then mapped again to English. All of it represents thought, which does not reside IN any physical medium. Language is just a key. A passcode, opening a door to thought/spirit of mind/soul.
Makes no difference. The point is that there’s no precise representation of George Washington’s face in your brain, since if there was, you could summon it and draw it accurately enough for someone else to recognise him.
And yet, you are able to recognise a picture of him… And when you do that, you don’t compare the picture shown to you to a few thousand mental pictures of different people and conclude: “It looks most like Washington.” No, you just recognise him. Somehow.
Note that one can recognise other people even after they change their hairdo, put on weight or tattoo their face. Even more surprising: when my wife gets a haircut, I hardly notice it—it takes some effort to spot the change. I’d say something like: “Hmmm, you changed something, but what?” None of this could happen if we were recognizing people by comparing what we see with an image of them stored in our memory. So that’s not how it works.
Perhaps ”...you are able to recognise a picture of him”
Because ”...there’s no precise representation of George Washington’s face”
We get the basics, what’s needed, to make the connection. A haircut, a tattoo… may not be needed, and thus isn’t noteworthy.
Objects are dimensional. A flattened 2D image will never be considered a precise representation. A view from the north side will never represent the entire object. Nothing can be fully accurately represented because nothing can be fully accurately observed.
Yes, that makes sense… Maybe the only meaning of the word “essence”—a word I always had a problem with—is that synthetic kernel of noteworthy details that encapsulates a person or an object in our memory.