Read the book: «Hardware and software of the brain»
Copyright (c) I. Volkov, January 5, 2017 – November 12, 2023
Modern computers have already surpassed the brain in complexity. More importantly, in the process of their development, theoretical cybernetics has elaborated numerous concepts and solutions which may be applied to living systems. The process is mutually beneficial: you explain how biological organisms operate and simultaneously get hints about the further development of machines.
The human brain is a living automatic control system, hence it may be described in terms of modern cybernetics. It is very different from common computers, but the main concepts are applicable. A typical workable computer consists of two main parts: hardware and software, that is, a material and a non-material half. Accordingly, for humans we talk about the body and the soul. The term hardware is not very well suited because half of the human body is water. Also, the objective phenomena behind the soul are much wider than just the algorithms learned by a person. Nevertheless, the terms will be retained for compatibility.
Programmers of traditional computers know that software is heavily dependent on hardware. With the development of the computer industry, large efforts were applied to achieve portability, but that concerns a program which should run on two computers of the same type made by different manufacturers. It is obvious that if a different hardware lacks some feature which is crucial for the program, then they are incompatible in principle. So we should begin with the functional architecture of the brain, and only then proceed to software which may run on this device.
Several Latin words which are often used in medical literature
Lateral – located at the side.
Medial – located in the middle.
Rostral – shifted from the centre to the head.
Caudal – shifted from the centre to the tail.
Dorsal – back (humans) or upper (animals).
Ventral – front (humans) or lower (animals).
A coronal plane dissects a structure into ventral and dorsal parts.
A sagittal plane separates the right from the left.
Some notes about anatomy
When you disassemble an electronic device, it usually contains several blocks which are functionally different and also well separated. They may be mounted on different printed circuit boards or even in separate boxes. There is no such separation in the brain. It is a smooth 3D mass which may be structured only by morphology, that is, the microstructure of nerve cells and fibers. Moreover, if you look at a cross-section of the brain, you will see almost nothing: thin serial slices used for reconstruction are just transparent. Only after special staining does the microstructure become visible. The next question: suppose you have singled out some part of the brain as structurally different. Who can guarantee that it is functionally different too? And if it is functionally different, is this function confined within this structure only? Such questions resulted in several systems of anatomical terminology. Different brain subdivisions may overlap, and all of them have only a remote relation to functionality. Nevertheless this knowledge is crucial, because without it you won't be able to understand the location of a certain point from its description in specialized literature.
Anatomy of the brain
Fig. 1.
The roughest division is: the hindbrain, midbrain, and forebrain. In Latin: rhombencephalon, mesencephalon, and prosencephalon. Going bottom-up, the rhombencephalon is further subdivided into the myelencephalon and metencephalon, and the prosencephalon into the diencephalon (the intermediate brain) and telencephalon. Latin terms sound terrible, but fortunately they are encountered mainly in very specialized literature. The myelencephalon is also called the medulla oblongata (the oblong brain).
Another frequent term is the brain stem. It begins at the spinal cord and includes the medulla oblongata, the pons of the hindbrain, the midbrain, and sometimes the diencephalon too.
The next tier of anatomy is more relevant to functionality.
Fig. 2.
The reticular formation is an elongated structure, or a chain of nuclei, spreading from the medulla oblongata into the diencephalon. It is located in the middle of the brain stem and may be considered its core.
Fig. 3.
The metencephalon consists of the cerebellum and the pons. The former is also called the small brain because its structure is a simplified variant of the big brain. Pons means a bridge; it is formed by axons going to and from the hemispheres of the cerebellum. Important ascending and descending pathways obviously travel through the pons. It also includes the reticular formation and several specialized nuclei.
The dorsal part of the midbrain is formed by the tectum, that is, the roof. It consists of the inferior and superior colliculi (two pairs of hillocks). The superior colliculus implements low-level visual processing; the inferior colliculus does the same for hearing.
If we need the line which separates peripheral devices from the computer case, it is here. With a few exceptions, what was named previously corresponds to controllers of peripherals, while the thalamus of the diencephalon may be regarded as several expansion cards of a PC.
Fig. 4. The thalamus.
Below the thalamus resides the hypothalamus, which plays a similar role, only with regard to internal bodily functions.
The remaining telencephalon is the motherboard of the neurocomputer. It consists of two almost symmetrical hemispheres. Each hemisphere is covered with the cerebral cortex, its visible surface. The basal ganglia are a complex of subcortical nuclei hidden beneath.
Fig. 5. Coronal cut of the anterior section of the brain showing the basal ganglia.
Finally, anatomy names a complex which is defined not by the proximity or similarity of its parts, but by strong connections between them.
Fig. 6. Schematic briefly summarizing neural systems proposed to process emotion, highlighting structures that are visible on the medial surface of the brain. Papez's (1937) original circuit (A) was expanded upon in the concept of the limbic system (B) to include a variety of subcortical and cortical territories (MacLean, 1952; Heimer and Van Hoesen, 2006).
This is the limbic system. Its components are located in the midbrain and forebrain.
Hardware
Computers are made of electronic components, once also known as radio parts because before the advent of microprocessors computers were built of the same parts as radios, TVs, and other consumer electronics. The elementary components of the brain are neurons, specialized nerve cells which can generate electric pulses and conduct them over long distances of tens of centimetres. There are different types of radio parts: resistors, capacitors, transistors, and a few others. Likewise, there are a few (of the order of 10) different types of neurons which are repeatedly encountered in different parts of the nervous system. The most substantial difference between them is that some are excitatory and others inhibitory. That is, firing of the first neuron may force the second to fire too, or instead suppress its background firing rate. Neurons are not the only cells of the brain. There is also glia (which serves as a damper for neurons and an insulator for long wires, the nerves) and blood vessels, whose walls are in fact a special type of muscle.
A computer is a device that processes information. How is information represented in the brain? We can't determine directly how our ideas are represented, but we can draw conclusions by watching what happens when they come out to the periphery and convert themselves into physical actions. It was definitely established that muscular tension depends on the average firing rate in the nerve that ends on this muscle. Hence, we can suppose that a single spike of a single neuron in the central nervous system doesn't matter, and our ideas are encoded by the pulse activity averaged over a group of adjacent cells (called a cluster) and over a certain period of time. The typical number of elements in such clusters is of the order of 1000. As to time parameters, the duration of one spike is approximately 1 millisecond, so the firing rate of one neuron can't exceed 1 kilohertz. Multiplying it by the number of elements in the cluster, we get 1 megahertz, but keep in mind that the reaction time still can't be less than 1 millisecond because spikes are not rectangular.
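This rate code can be sketched in a few lines of Python. The spike trains below are randomly generated toy data, not recordings; the point is only that the signal value is the spike count averaged over a cluster of ~1000 neurons and over a time window:

```python
import random

def cluster_rate(spike_trains, window_ms):
    """Population-averaged firing rate (Hz) over a cluster and a time window."""
    total_spikes = sum(len(train) for train in spike_trains)
    return total_spikes / len(spike_trains) / (window_ms / 1000.0)

# Simulate a cluster of 1000 neurons over a 100 ms window.
# A spike lasts ~1 ms, so one neuron can fire at most ~100 times here.
random.seed(0)
cluster = [sorted(random.sample(range(100), k=random.randint(5, 15)))
           for _ in range(1000)]

rate = cluster_rate(cluster, window_ms=100)
print(f"population-averaged rate: {rate:.0f} Hz")  # around 100 Hz per neuron
```

Note that averaging over 1000 neurons raises the information throughput of the channel, but not its reaction time: the window still cannot shrink below the duration of a single spike.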
At this point you might realize the shocking truth: our brain is so different from our computers that it is, after all, an analog (more exactly, digital-analog) device. Meanwhile there is something that unites them. It is very symbolic that computer programs and musical recordings may be stored on the same type of media, such as optical disks.
Anatomically, the brain consists of several parts which may be clearly distinguished and which reproduce themselves in all humans. Their cell structure differs from that of adjacent regions, or they are simply visible from the surface. The inner space may be of two types, gray and white matter: the former is composed of cell bodies, the latter of nerve fibers. Different parts of the brain are heavily interconnected. This supports the hypothesis that regions which look different are functionally different as well. All in all, anatomy distinguishes a couple of dozen different parts, but how do we arrange them into a functionally meaningful construct?
The basic principles of a live neurocomputer are different from the Von Neumann architecture. Computer operating memory changes data by an instruction (the differential principle) and keeps data while the power is on; regeneration is a separate, unconditional process. In a live neural net, dynamic memory is a pattern of neural activity which must be explicitly supported by the system of nonspecific activation or by reverberation. In the second case, the circulation of activity is also controlled by the nonspecific system. That is, in a computer, instructions are quick and their results remain forever. In a live neurocomputer, actions are lengthy and continue while the activation signal remains; retaining the results also requires continuing activation.
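The difference can be illustrated with a minimal sketch. The leak and drive constants below are arbitrary illustration values, not physiological ones; the point is that the stored pattern survives only while the nonspecific activation signal keeps re-injecting it:

```python
def step(activity, activation_on, leak=0.2, drive=0.2):
    """One time step: activity leaks away; the nonspecific activation
    signal, while present, keeps re-injecting drive."""
    activity *= (1.0 - leak)
    if activation_on:
        activity += drive
    return activity

a = 1.0                          # a pattern of activity has just been set up
for _ in range(50):              # activation on: the pattern is maintained
    a = step(a, activation_on=True)
sustained = a                    # settles at drive/leak = 1.0

for _ in range(50):              # activation off: the state slips to zero
    a = step(a, activation_on=False)
decayed = a
print(f"{sustained:.3f} {decayed:.5f}")
```

In a computer RAM cell, the second loop would change nothing; here, "relaxing" erases the state, exactly the zero-state problem discussed later for the finite-state view of the brain.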
Neurocomputing may be studied by purely mathematical methods. We can take a 2D (or 3D) image as the main data unit, take associative instead of linear memory, and design a completely different computational model. The Von Neumann processor retrieves data from memory by the address of a memory cell; associative memory uses keys instead. Also, a single neural net usually keeps multiple images superimposed in distributed storage. There are two types of such memory: autoassociative and heteroassociative. In the first case, the goal is simply to memorize many images and then recall one of them using some hint. In the second, associations between images of different types are remembered. This may be used to implement stimulus-reaction or event-handler pairs.
Two types of continuous computation are visible right away. In the second case, the system is placed into real-world conditions where external events arrive permanently, so reactions will be generated permanently too. In the first, we can use an image retrieved from autoassociative memory as a key for the next operation. This implements "free thinking" or "flight of ideas".
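Autoassociative recall from a hint can be sketched with a classic Hopfield-style net (one standard way to build such memory, used here only as an illustration). Two tiny "images" are superimposed in one weight matrix; a corrupted hint retrieves the whole original:

```python
def train(patterns):
    """Hebbian outer-product learning; zero diagonal (no self-connections)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=5):
    """Synchronous threshold updates until the pattern settles."""
    x = list(cue)
    for _ in range(steps):
        x = [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1
             for row in w]
    return x

p1 = [1, 1, 1, 1, -1, -1, -1, -1]     # "image" A
p2 = [1, -1, 1, -1, 1, -1, 1, -1]     # "image" B, stored in the same net
w = train([p1, p2])

hint = [-1, 1, 1, 1, -1, -1, -1, -1]  # image A with one element corrupted
print(recall(w, hint) == p1)           # the whole image is restored
```

The "flight of ideas" mode would simply feed each retrieved image, perhaps slightly perturbed, back in as the next cue.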
A computer comes with a ready set of instructions. The set of elementary, hardware-supported actions of a neurocomputer is smaller. This resembles the situation with fonts: the first typewriters had fixed sets; then they were replaced by graphics, and fonts are now usually generated by software.
Science doesn't use the term software in application to living systems; researchers study behavior instead. There is even a separate branch called behaviorism, which regards the nervous system as a black box and tries to formulate laws linking its inputs and outputs. The goal of brain research is seemingly to establish how different structures participate in the generation of complex behavior. The task turned out to be tricky. The primitive approach is to determine a correspondence between elementary actions and various parts of the nervous system. This encountered resistance from the opposite camp, which claimed that such mapping is impossible and any function is equally distributed over the brain as a whole. For a computer engineer, the solution is obvious: mapping is possible, but it is internal rather than external actions which should be mapped, such as memory read/write operations.
The brain as a whole may be best approximated as a finite-state automaton, but this approach has one problem. Neural activity is highly dynamic and requires energy consumption. If you change the state and relax, it will always slip into the zero state. This issue is resolved by a specific architecture of neural nets. Karl Pribram in his "Languages of the Brain" highlighted that many connections in the nervous system are reciprocal (bidirectional). If the feedback is negative, this serves for stabilization. If it is positive, it creates a generator which can maintain an activity once it has been launched. As a result, you may look at an object, then close your eyes, and the image will remain; you will even be able to examine its details.
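The two kinds of reciprocal loop can be contrasted in a toy simulation of two mutually connected units (the gains and leak rates are invented for illustration). With a positive loop, a brief stimulus leaves behind self-sustained activity; with a negative loop, the same stimulus dies out:

```python
def run(feedback_sign, total_steps=60, stim_steps=5, leak=0.2, coupling=0.25):
    """Two reciprocally connected units; feedback_sign = +1 models a positive
    (reverberating) loop, -1 a negative (stabilizing) loop."""
    clip = lambda v: min(1.0, max(0.0, v))       # firing rates stay in [0, 1]
    a = b = 0.0
    for t in range(total_steps):
        stim = 1.0 if t < stim_steps else 0.0    # brief "look at the object"
        a, b = (clip((1 - leak) * a + feedback_sign * coupling * b + stim),
                clip((1 - leak) * b + coupling * a))
    return a

held = run(+1)    # the "afterimage": activity persists without the stimulus
faded = run(-1)   # stabilizing loop: activity returns to the zero state
print(held, faded)
```

The positive loop works because its round-trip gain exceeds the leak, so any launched activity regenerates itself until something actively suppresses it.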
Abstract neurocomputer
Our computers are based on the Turing machine. Its principle is very simple; the complexity of computers comes from software, not from hardware. The same should be true for the human brain. The problem is that there is no tape and no read/write head inside the human skull. Then what is the prototype of the Turing model? Thorough consideration reveals that Turing formalized the work of a human who uses some external storage such as paper. This approach resembles what is known in neuroscience as behaviorism: it doesn't try to penetrate into the head and regards the brain as a black box with inputs and outputs. What we need now is to describe how this box operates inside.
The first hint comes from the input signal itself. Turing supposed that it is text, while for the brain it is video. Speech and written textual input emerge only later, in biological evolution and in human civilization. Neurophysiological study shows that the topology of an input image is retained well into the deep parts of the visual analyzer, so we can conclude that 2D images are the main data format of brain hardware. The next principle is determined by the type of memory used. For any processor, memory input/output is one of the most important operations, simply because a processor should process data and data is stored in memory. The Von Neumann processor uses linear memory, that is, a sequence of bytes accessed by their addresses. The human brain has associative memory instead. An elementary block uses a 2D (or 3D) array of millions of neurons where information is represented by the pattern of activity. An image may be retrieved only as a whole, but this storage is still efficient because a single block can retain many different images. Retrieval uses a key which is itself some image.
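The address-versus-key distinction is easy to show side by side. The "patterns" and addresses below are made up; the essential contrast is that linear memory demands an exact address, while associative memory tolerates an imperfect key and returns the best-matching whole image:

```python
# Linear (Von Neumann) memory: retrieval by exact address.
linear = {0x1000: "pattern A", 0x1004: "pattern B"}
by_address = linear[0x1000]                      # must be exact

# Associative memory: retrieval by a (possibly corrupted) key image.
stored = {
    (1, 1, 0, 0): "pattern A",
    (0, 0, 1, 1): "pattern B",
}

def recall_by_key(key):
    """Return the image whose stored key best matches the given one."""
    score = lambda k: sum(a == b for a, b in zip(k, key))
    return stored[max(stored, key=score)]

print(by_address, recall_by_key((1, 0, 0, 0)))   # corrupted key still works
```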
These two principles, a 2D image instead of a byte and associative instead of linear memory, may be used as an axiomatic foundation of mathematical neurocomputing. The science may theoretically explore all the possible methods of data processing and all the possible constructs of neuro-machines. Meanwhile, the brain is a ready working solution optimized by nature, so we can add further details from data gathered in biological experiments. The next striking difference between common computers and a neurocomputer is the absence of a clock generator: a live neurocomputer is asynchronous. Brain rhythms are well described, but the alpha rhythm of the visual cortex emerges when the eyes are closed; activation is manifested by desynchronization instead. This creates a major problem. At each step of its operation, a computer should decide what to do next. A neurocomputer should in addition decide when. On the other hand, this feature creates additional flexibility, and it is used. Human computations are based on insight. It is well known as a source of great discoveries, but in fact it is used on a routine basis, tens or maybe hundreds of times per day. The principle of a computer: take the next instruction from memory at the next pulse of the clock generator. In a neurocomputer, associative memory provides an appropriate idea at an appropriate moment. This sounds like the Holy Grail, but unfortunately this formula contains a lot of uncertainty. More specifically, insight is: 1. generated by hardware, that is, it has no psychological explanation; 2. heavily dependent on previous experience. There are two types of insight: sensory and motor, that is, related to input (perception) and output (action). These images are generated in different parts of the neocortex, but subjectively we feel them similarly because all of the neocortex has almost the same structure, like computer memory. Insights of the first type are especially interesting because they have an explanation from information theory.
Visual input conveys digital-analog data which, according to Shannon's formula, contains a virtually infinite amount of information. The nervous system is simply unable to process it all, so only some portions are captured when appropriate.
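The formula in question is the Shannon-Hartley capacity, C = B log2(1 + S/N). A short computation shows why an ideal analog signal is "virtually infinite": as the signal gets cleaner (SNR grows), the information it carries grows without bound (the SNR values below are arbitrary sample points):

```python
from math import log2

def capacity(bandwidth_hz, snr):
    """Shannon-Hartley channel capacity, in bits per second."""
    return bandwidth_hz * log2(1 + snr)

# Per unit bandwidth, capacity keeps climbing with signal quality --
# a noiseless analog image would carry unlimited information.
for snr in (10, 1e3, 1e6, 1e9):
    print(f"SNR {snr:>12g}: {capacity(1.0, snr):6.1f} bits/s per Hz")
```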
How are insights generated? This is related to another problem: the brain has memory for sure, but does it have a processor?
Theory of neurocomputing
Hardware and software are just parts of another entity, a whole computational system, and it is obvious that a computer based on the Von Neumann architecture and the human brain are very different. The paradox is that computers have been thoroughly developed for decades and we know them in detail, while the evolution of the brain spans millions of years, we use it permanently, yet we have no clue about its operation. Substantial efforts were made during the 19th and 20th centuries to fill this gap. Let's try to formulate explicitly the most basic principles of neurocomputing. First, we need to determine our goal. What is a neurocomputer? Any construct created of neural nets? Their range would be as wide as the diversity of live nervous systems. Let's confine our interest to the human brain, but keep in mind that it isn't the pinnacle of perfection. Maybe the brain of some animal is better in some aspect. Maybe there are better solutions unknown in nature. The prototype is only a hint for the theory.
Let's formulate the answer at the very beginning, then explain it in detail. Thorough consideration shows that the following three concepts are workable for both types of computing, but they have different implementations. Moreover, the distribution of overall computing between memory, a processor, and software is different. A neurocomputer uses associative memory implemented as blocks of homogeneous neural nets. Even in the simplest form with two layers, they are already capable of some processing, such as branching or simple arithmetic (addition and subtraction). That's it: we have processing without a processor. Add the difference in data representation. Computer memory stores data immediately. In neural nets, data is an image represented as a pattern of neural activity, but memory is kept in modifiable synapses. Nets don't store images immediately; they store the ability to generate particular images in response to particular keys. This entails another difference. In a computer, data is loaded from memory into a processor, undergoes modification, then is stored back. In a neurocomputer, images are processed directly in the memory. Elementary processor instructions are replaced by the hard-wired ability of a particular block of memory to perform particular image transformations. This ability is defined by local interconnections within the same layer or between different layers of the same net. In a computer, it corresponds to some procedure or algorithm which implements a particular method of data processing. On the other hand, each homogeneous net working as heteroassociative memory keeps associations between input and output images. This corresponds to the rules of traditional rule-based programming, only blocks of those rules are kept in different, genetically predetermined hardware locations. You see that in a neurocomputer the main paradigm is rule-based, and the types of rules are predetermined at the hardware level.
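The claim that a two-layer net already does branching and addition/subtraction can be made concrete with a single rate-coded unit (weights and thresholds invented for the example). An inhibitory synapse gives subtraction; a threshold gives branching:

```python
def unit(inputs, weights, threshold=0.0):
    """A rate-coded unit: weighted sum, rectified at a threshold.
    Positive weights are excitatory synapses, negative ones inhibitory."""
    s = sum(x * w for x, w in zip(inputs, weights)) - threshold
    return max(0.0, s)

# Subtraction: one excitatory and one inhibitory synapse onto one unit.
diff = unit([7.0, 3.0], [1.0, -1.0])           # computes 7 - 3

# Branching: two output units; only one clears its threshold for a given x.
x = 0.8
go_left = unit([x], [1.0], threshold=0.5)      # fires, since x > 0.5
go_right = unit([x], [-1.0], threshold=-0.5)   # stays silent
print(diff, go_left > 0, go_right > 0)
```

No processor is involved: the "instruction" is wired into the pattern of synaptic signs and thresholds, exactly the point made above.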
So does a neurocomputer have a processor? Seemingly yes, but it is stripped of many functions known for its counterpart in a computer. Basically, if we regard the neocortex as the memory, the role of the processor is to activate or suppress different areas and the links between them. A good example is attention. That is, the processor of a neurocomputer accepts various signals from outside and inside the brain, but its output is just an internal "turn on" or "turn off". Meanwhile there is something else. Software may be further subdivided into applications, system-level software, and firmware (hardware emulation). While the first part is associated with the neocortex and constitutes various externally visible human abilities, the last is stored in the processor. Here we encounter an interesting turn. We have already seen that the memory of a neurocomputer is capable of processing. On the other hand, its processor is made of similar blocks of associative memory, only now processing is regarded as the main function while memory provides storage for system programming.
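The "turn on / turn off" role can be sketched as pure gating. The two "areas" and their transformations below are entirely hypothetical placeholders; the point is that the processor never touches the data itself, it only selects which memory blocks are active:

```python
# Hypothetical cortical "areas": each is a hard-wired image transformation.
areas = {
    "visual": lambda img: [v * 2 for v in img],
    "auditory": lambda img: [v + 1 for v in img],
}

enabled = set()   # the processor's entire output: which areas are on

def attend(area):           # internal "turn on"
    enabled.add(area)

def ignore(area):           # internal "turn off"
    enabled.discard(area)

def process(img):
    """Only enabled areas transform the data; the gate carries no data."""
    for name, transform in areas.items():
        if name in enabled:
            img = transform(img)
    return img

attend("visual")
result = process([1, 2])    # auditory area contributes nothing
print(result)
```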
Subcortical nuclei of the brain are well described both anatomically and functionally. Many of them provide low-level control unrelated to the psyche and behavior. Others are parts of motor and sensory systems. For example, the thalamus is a major relay station which conducts signals from sensory organs to the neocortex. It is the remainder that may be regarded as a candidate for the role of a processor. Two main parts may be named: the basal ganglia and the limbic system. Functional study of the basal ganglia hints that they too are in fact memory. Seeking an analogy with computers, one can say that the basal ganglia are the BIOS, the memory which contains the most basic subroutines. So only one part remains, the limbic system, and its blocks are well connected into a functionally complete unit. Only not all of this complex may be regarded as a CPU. The limbic system is usually associated with motivation and emotions. Human motivation is subdivided into two absolutely different types which come from different parts of the brain: high-level goals and long-term plans are generated in the frontal neocortex, while biological desires such as hunger come from the hypothalamus. If we put aside such blocks, two major structures remain: the hippocampus and the amygdala.
Let's try to figure out how this processor operates in general.
A Turing machine is claimed to be universal. The human brain is even more universal: it can work in a completely analog mode, as when a person monitors some object and follows its movements. The first thing to do is to determine the class of tasks which the brain's processor is used for. The brain as a whole is a regulator, so if everything is normal, it may stay in a state of idle run. Even while not sleeping, it may do nothing special, just perform random actions without particular use. The alternative is goal-oriented behavior. That's what is interesting for us: that's when activity becomes highly structured and the processor operates at full power. A typical example is the task of reaching some place walking in the city. Other tasks may be described by analogy. Consider manufacturing: the goal is to assemble some product from parts; the performed operations are separate steps or crossroads on the route, only the motion happens in a virtual space. The method of generating goal-oriented behavior is problem solving. That's the difference: creativity. A Turing machine is designed to perform ready algorithms; the processor of the brain, to generate algorithms.
The main approach to creativity remains the ancient trial-and-error method. For computers, more orderly variants such as a full scan of possible solutions are used. In any case, the processor should prompt some actions, then assess their results. That's exactly what the limbic system does. It looks like the amygdala is a block of system-level memory similar to the basal ganglia, only the latter keep elementary programs for sensorimotor coordination while the amygdala keeps genetically predetermined states of the brain itself, such as fear or aggression. Human emotions correspond to the processor instructions of computers, only the brain doesn't use the Von Neumann architecture. It uses a finite-state machine which has elementary states rather than actions. This approach may be very powerful. It is well known that brain parts tend to be connected according to the all-to-all principle. Not all of the connections are used at once; instead, only a fraction is employed when necessary. This means that you can dynamically create different working machines for different circumstances.
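A state-based rather than action-based machine can be sketched as follows. The states, stimuli, and reactions below are toy labels, not a claim about actual amygdala circuitry; the point is that each global state wires in a different set of stimulus-reaction rules, so the same stimulus produces different behavior:

```python
# Each global brain "mode" (an elementary state, not an action) selects
# which stimulus -> reaction rules are currently wired in.
RULES = {
    "calm":       {"food": "approach", "threat": "freeze"},
    "fear":       {"food": "ignore",   "threat": "flee"},
    "aggression": {"food": "approach", "threat": "attack"},
}

# Stimuli can also switch the mode itself.
TRANSITIONS = {
    ("calm", "threat"): "fear",
    ("fear", "food"): "calm",
}

def step(state, stimulus):
    reaction = RULES[state][stimulus]
    state = TRANSITIONS.get((state, stimulus), state)
    return state, reaction

state = "calm"
state, r1 = step(state, "threat")   # freeze, then enter the fear state
state, r2 = step(state, "threat")   # the same stimulus now produces flee
print(r1, r2, state)
```

Switching the state is like dynamically rebuilding the working machine: one table swap changes every reaction at once.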
On the other hand, the hippocampus has all the necessary means to assess the results. It receives input from major sensory channels and can generate sharp pulses of activation at its output. Again, this works differently. When you write a control program, you would create a variable to hold the assessment, say, in the range [-1, 1]. The program will input data, process it, set that variable, then use it for decision making. It looks like the brain has no such separate variable. Instead, associative memory immediately links an input situation to the appropriate emotional reaction, such as attraction or aversion. That's why negative emotions are harmful: you get tense, and if this tension has no exit, you must contain it, which leads to double tension and quick tiredness.
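The two styles of assessment can be contrasted in code. Both fragments below are made-up illustrations (the distance scale, the situation labels): the first routes everything through an explicit scalar in [-1, 1], the second maps a situation straight to a reaction with no intermediate variable:

```python
# Style 1: a control program with an explicit assessment variable.
def assess(distance_to_goal, max_distance=10.0):
    """Map progress to a score in [-1, 1]: +1 at the goal, -1 at max range."""
    return 1.0 - 2.0 * min(distance_to_goal, max_distance) / max_distance

def decide(score):
    return "continue" if score >= 0.0 else "change course"

near = decide(assess(2.0))    # score 0.6  -> continue
far = decide(assess(9.0))     # score -0.8 -> change course

# Style 2: associative memory -- the situation itself is the key,
# and the emotional reaction is the retrieved image. No score variable.
reactions = {"goal in sight": "attraction", "obstacle ahead": "aversion"}
felt = reactions["obstacle ahead"]
print(near, far, felt)
```

In style 2 the assessment and the reaction are fused, which matches the observation above: a negative assessment arrives already as bodily tension, not as a number one can quietly discard.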