Massively modular vs Marr-Chomsky layers

How was the GOLEM+NEODE design conceived? What reason exists for its wholesale adoption of the same broad functional plan as that of a digital computer (except for the emotionality, which is supplied by the user)? GOLEM+NEODE is a bioplausible version of a GOFPC, based on a horizontally layered architecture. MIT neuroscientist David Marr argued that the visual cortex is structured in a horizontally layered manner, almost as if it were made by a Silicon Valley design house. It is, as a direct consequence, a general-purpose information processor whose main task is the governance of an autonomous, socially embedded, conditionally obedient apex bioborg. Its theoretical 'arch nemesis' is the conceptually orthogonal evolutionary psychology (EP) paradigm attributed to Tooby & Cosmides. Their paradigmatic brain is highly modular, consisting of vertically integrated, specialised functional 'columns'. The authors claim this architecture is a logical consequence of applying EP principles to the issues of animal and human cognition and behaviour. They state explicitly that, while computationalism was the first wave of post-behaviourist psychology, evolutionism constitutes its natural and legitimate successor. GOLEM theory, as an exemplar of uber-computationalism, begs to differ.[1]

A simplistic reading of von Uexküll at first seems to support EP. He presents the example of the Weltanschauung of the flea, and seems to imply that this vertically integrated, highly specialised conception can be applied to larger, more complex creatures. However, when von Uexküll introduces his circkreis and Funktionskreis biocircuits, we clearly recognise them as homeostatic loops existing within the unambiguously general-purpose dataflow subsystems called the Merkorgan (generalised input channel) and the complementary Werkorgan (generalised output channel).

In a 2021 paper, Pietraszewski & Wertz [2] claim to have identified widespread confusion concerning which level or levels of analysis are modular (what they call the 'modularity mistake'). Unfortunately, they seem also to have misunderstood what Fodor means by 'massive modularity'. The figure below compares a massively modular 'vertical' architecture (right-hand diagram) with a Marr-inspired 'horizontal' architecture based on general-purpose computation (left-hand diagram).

When Chomsky demonstrated the (putative) existence of a Language Acquisition Device (LAD) in human infant brains, this was widely interpreted as supporting the modular brain model: the LAD was assumed to be a functionally dedicated module located in the left cerebral hemisphere. The implication of a 'cookbook' solution is clear: take one monkey brain, add a language module, then cook for around 18 months, after which time the child will be able to speak whatever language the local culture uses. It is relatively easy to account for dedicated, functionally specialised modules in evolutionary terms. The explanation is in essence no different for a talking monkey (or octopus or whale or parrot) than for a talking human baby.

GOLEM theory is quite unambiguous on this point: there is ample evidence that the human brain is a general-purpose computer. One of the main lines of evidence is that an almost identical sequence of functional prototypes marks the stages of development of both brains and computers. For example, the evolution of brains and of the dominant computer paradigm must both pass through a massively modular phase of development. In the early decades of computing, scientists and engineers used computers by interacting with them one line of text at a time, via the so-called command line interface (CLI). The user's view of the computer was simply as a repository of executable functions, each taking its data from input files and from the function name and argument string typed by the user on the command line. Each available function was implemented by its own binary executable file. This situation is graphically depicted by the vertically organised, massively modular 'functional specialist' architecture in the right-hand diagram of the figure below. In more modern computer designs, functionality is incorporated into 'applications' ('apps'), and a pointing device (tablet and stylus, or mouse) is used to interact with each app's GUI. This situation is depicted by the horizontally layered 'functional generalist' architecture in the left-hand diagram of the figure below.
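The contrast between the two architectures can be sketched in code. This is a toy illustration only (all class and function names are hypothetical, invented for this sketch): in the 'vertical' CLI-era design, each capability is a self-contained unit, while in the 'horizontal' design, many apps share the same general-purpose layers beneath them.

```python
# Vertical / massively modular: each capability is its own self-contained
# unit, analogous to a standalone command-line executable.
def sort_tool(data):           # like a dedicated `sort` binary
    return sorted(data)

def count_tool(data):          # like a dedicated `wc` binary
    return len(data)

# Horizontal / layered: one general-purpose substrate, many apps on top.
class StorageLayer:            # shared bottom layer (a filesystem analogue)
    def __init__(self):
        self.blobs = {}
    def put(self, key, value):
        self.blobs[key] = value
    def get(self, key):
        return self.blobs[key]

class ComputeLayer:            # shared middle layer (general-purpose compute)
    def apply(self, fn, value):
        return fn(value)

class App:                     # any 'app' reuses the same layers below it
    def __init__(self, storage, compute):
        self.storage, self.compute = storage, compute
    def run(self, key, fn):
        return self.compute.apply(fn, self.storage.get(key))

storage, compute = StorageLayer(), ComputeLayer()
storage.put("doc", [3, 1, 2])
sorter = App(storage, compute)
print(sorter.run("doc", sorted))   # [1, 2, 3]
```

The point of the sketch is that in the layered design, adding a new 'app' reuses the generalist layers rather than duplicating them, which is the integration pressure the following paragraphs describe.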

However, as time went on, the pressure on computers (and brains) to improve their functional performance just kept increasing. In the biological world, evolution provided that optimisation pressure, while in the computer world, consumer awareness and then marketplace demand provided a similar kind of drive, albeit on a much shorter time base (decades rather than thousands of millennia). Consider the humble word processor, such as Microsoft Word. Initially this function required rooms full of typists, but it evolved over the middle decades of the 20th century. At first, text editors and page layout programs were used in a sequential, modularised manner. Then Wang brought out the first dedicated word processing software and hardware combination. The user felt as though she worked within her own little world, where all her editing, lookup and page formatting needs were met without ever needing to exit the program into the computer's operating system and filesystem environment. Microsoft then translated Wang's idea into a stand-alone software 'app' that would run on anyone's PC.

This process of integrated expansion of generalised functionality is, according to GOLEM theory, what has also occurred during the evolution of the human brain. Further evidence that language is not just a modularised 'add-on' comes from analysis of the linguistic function itself. Research (eg Asoulin [3]) clearly demonstrates that language is more suited to the internal needs of cognition than to the external needs of speech production and comprehension. The implications are clear and far-reaching: the LAD is our entire brain! Asoulin [3] divides the proof of this assertion into two kinds, quoted directly as follows: "the first is the argument from linguistics, according to which the externalisation of language - in, say, verbal communication - is a peripheral phenomenon because the phonological features of expressions in linguistic computations are secondary (and perhaps irrelevant) to the conceptual-intentional features of the expressions. The second is the design-features argument, according to which the design features of language, especially when seen from the perspective of their internal structure, suggest that language developed and functions for purposes that are not primarily those of communication".

In the GOLEM model, each channel (ie both input and output) contains a hierarchical, recursive data representation. The subjective state (called a Self Situation Image or SSI) is represented by the dominant-side double hierarchy, while the objective state (called an Other Situation Image or OSI) is represented by the non-dominant-side double hierarchy. Within each sided double hierarchy, the input-side hierarchy is responsible for computing the ongoing flow of contemporaneous semantic states (so-called sigma semantics), while the output-side hierarchy is responsible for computing the ongoing flow of semantic transitions, ie behaviour plans.
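The layout just described can be made concrete as a data structure. This is a minimal sketch of the nesting only; the class and field names are my own illustrative choices, not part of the GOLEM specification.

```python
from dataclasses import dataclass, field

@dataclass
class Hierarchy:
    # 'input' computes contemporaneous semantic states (sigma semantics);
    # 'output' computes semantic transitions, ie behaviour plans.
    role: str
    levels: list = field(default_factory=list)  # recursive levels, coarse to fine

@dataclass
class SituationImage:
    # One sided double hierarchy: an input-side and an output-side hierarchy.
    name: str  # 'SSI' (dominant side) or 'OSI' (non-dominant side)
    input_side: Hierarchy = field(default_factory=lambda: Hierarchy("input"))
    output_side: Hierarchy = field(default_factory=lambda: Hierarchy("output"))

ssi = SituationImage("SSI")  # subjective state, dominant-side double hierarchy
osi = SituationImage("OSI")  # objective state, non-dominant-side double hierarchy
```

The sketch only encodes the containment relationships (channel, side, double hierarchy); the actual computations within each hierarchy are beyond its scope.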

It is within the scope of every adult human to allocate the 'other' hemisphere as they see fit. They can use it for the purpose for which it evolved: the modelling of friendly or hostile conspecific minds. That is, we can use it to try to predict the next move our enemies will make, or to buy an appropriate gift for a friend's birthday. We can also use it to predict the behaviour of complex systems by modelling those systems as if they contained a singular human intelligence, eg anthropomorphic judgments of animal behaviours, political machinations or even adverse weather events.

Let's go back to basics. One of the insights of the 20th century was that all information-processing devices, including the behavioural control systems within organisms, can be characterised as a set of mechanistic if/then (cause/effect) contingency rules (Turing, 1950). At lower levels, these IF-THEN rules are known as reflexes: IF stimulus THEN response. But at higher levels, the IF condition and the THEN consequence are not so easily identified, located or measured. Turing and others understood one thing clearly: complex systems use hierarchically organised state machines to manage their structural and operational complexity. As explained in other parts of the GOLEM-NEODE manifesto, finite machine states are essentially static, veridical snapshots of internal and external realities, which can create highly plausible dynamic mental representations by virtue of their ability to be animated (like a movie) and then smoothly interpolated. Otherwise, the computational problems inherent in all dynamical systems would be prohibitive [4].
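The two ideas in this paragraph, behaviour as if/then contingency rules and static state snapshots that are smoothly interpolated, can be sketched together. All names and values here are hypothetical, chosen purely for illustration.

```python
# IF (current state, stimulus) THEN next state: a contingency-rule table.
RULES = {
    ("resting", "threat"): "alert",
    ("alert", "threat_gone"): "resting",
    ("alert", "contact"): "fleeing",
}

# Each state is a static, veridical snapshot (here, a single arousal level).
SNAPSHOTS = {
    "resting": 0.0,
    "alert": 0.5,
    "fleeing": 1.0,
}

def step(state, stimulus):
    # Unmatched stimuli leave the state unchanged.
    return RULES.get((state, stimulus), state)

def interpolate(a, b, t):
    # Animate smoothly between two static snapshots (0 <= t <= 1),
    # yielding a plausible dynamic representation from discrete states.
    return SNAPSHOTS[a] + t * (SNAPSHOTS[b] - SNAPSHOTS[a])

s0 = "resting"
s1 = step(s0, "threat")            # -> 'alert'
print(interpolate(s0, s1, 0.5))    # arousal halfway between snapshots: 0.25
```

The design point is that the machine only ever stores discrete snapshots; continuity is recovered cheaply by interpolation, rather than by solving the full dynamical system.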

Pulvermüller et al (2014) [5] seem to support the massively modular paradigm, as well as rejecting the separation of hardware (ie neuronal mechanisms) from software (ie mental processes): "A new perspective on cognition views cortical cell assemblies linking together knowledge about actions and perceptions not only as the vehicles of integrated action and perception processing but, furthermore, as a brain basis for a wide range of higher cortical functions, including attention, meaning and concepts, sequences, goals and intentions, and even communicative social interaction". Like Tooby & Cosmides [6], Pulvermüller et al opine that cognition models with massive modularity are inherently more credible from a Darwinian viewpoint. If each functional module were independent, this would be a plausible stance to adopt. However, within the evolutionary record itself, we observe functional modules to be not only non-independent, but highly interactive and flexibly 'crosslinked'.

Finally we arrive at a realistic viewpoint, the familiar and inevitable compromise between the two extremes. True, the separate input and output channels of the GOLEM model clearly resemble Fodorian modules. However, this view must be tempered with the knowledge that within these channels, the computations are organised into horizontal, Marrian layers, such as the subjective animation layer (frontal and parietal lobes) and the objective automation layer (basal ganglia and cerebellar cortex).

1. I rather suspect that Tooby & Cosmides, not having the benefit of a Computer Science 101 background, have allowed themselves to be pulled down the sort of notorious ideological rabbit hole made infamous by Behaviourism. Down such a hole, you don't notice that someone has surreptitiously put blinkers around your eyes. And who can blame them for attempting a coup in this manner, if the best idea the computationalists have had is 'backprop all the way down' (a fair summary of deep learning)?

2. Pietraszewski, D. & Wertz, A.E. (2021) Why Evolutionary Psychology Should Abandon Modularity. Perspectives on Psychological Science, 1-26.

3. Asoulin, E. (2016) Language as an instrument of thought. Glossa: A Journal of General Linguistics, 1(1): 46, 1-23.

4. A world in which such problems could not be easily solved would be populated by simple, unconscious creatures at best.

5. Pulvermüller, F., Moseley, R.L., Egorova, N., Shebani, Z. & Boulenger, V. (2014) Motor cognition-motor semantics: Action perception theory of cognition and communication. Neuropsychologia, 55, 71-84.

6. Anthropologist John Tooby and psychologist Leda Cosmides believe that evolution can (and therefore should) be used to explain psychological traits in the same way that it is (successfully) used to explain physiological features.




copyright M.C.Dyer 2022
theorygolem@gmail.com 