Sociophysics – The last science

‘Truth’ is context-dependent
In the context of my studies to date, and in particular my newfound understanding of the common good, I have experienced a surprising insight. I can best describe the occurrence as a spontaneous emergence, in my mind, of a conception of sociophysical phenomena. I was not yet aware of sociophysics and thought that I had coined the term to help define a path of study. Simply, I wanted a word to help me focus more closely upon the physical phenomena that emerge from social interactions. Searching the literature of sociophysics, I was initially surprised to find a sparse population of recent mathematical probabilistic treatments and models, stemming from quantum physics early in the twentieth century, game theory in the mid-twentieth century, and analyses of computer modeling of adaptive networks early in our twenty-first century. My search soon led me near to the origin of the sociophysical concept – an absolute origin escapes me, though sociophysics seems closely tied to Aristotelian animism. Nevertheless, I now realize that sociophysics has presented itself in a variety of apparitions to many a kindred spirit. If it is a science, then it is the strangest, vaguest, and widest of them – indeed, it has been called “the science that comes after all the others” – and, fascinatingly, the men who have studied it knowingly were, and for the greatest part still are, outcast by orthodoxy. I certainly am no stranger to their ranks; perhaps that is part of the reason why I feel a sense of familiarity and belonging among the concepts exposed in the current exploration of ‘the last science’.

Previously, I have argued that abstract modeling (theorizing) simplifies reality, allowing only fractionated (quantized), and thus unreal, understandings. Historically, fractionation (specialization; division of labor) has been the cost of good quality knowledge. In The Common Good: Part I, I introduced Robert Rosen, a theoretical biologist who suggested that studies of biology would bring new knowledge to physics, and would change our understanding of science in a broad manner. The study and modeling of complex systems appears to drive in this direction; by my intuitive reckoning, increasingly complex modeling (interaction of theories) approaches ever closer to a good quality representation of reality, and thus a truer understanding of reality. It is for this reason that I have chosen to focus the current exploration upon the histories [NOTE A] of understanding and modeling of social interaction, which shall lead us to an integrated understanding of the current state of the art.

Two classes
Abstract: The abstract form of sociophysics is fundamentally dependent upon human knowledge, which has been composed of necessarily subjective experiences (observations) of an assumed objective reality. It is a science stemming from and attempting to formalize intuitive understandings of social phenomena, by use of mathematical tools developed and used in statistical physics.

Real: We must assume that in reality the physical phenomena that emerge from social interaction are independent of human knowledge; that they occur regardless of observation. Sociophysical phenomena are synergistic (non-additive effects resulting from individual acts) manifestations of the dynamic, physical interaction, consequence and feedback, occurring among networked actors. Examples of phenomena that emerge from social interaction include: ant and termite colonies, bacterial colonies, cities, brains, genetic networks, mycelial networks, glial networks, multicellular organisms, ecosystems, physical and abstracted knowledge, road systems, postal systems, the world wide web (internet).

A true false start: true within context of the me-generation; false within a deeper historical context
Galam (2004) tells us that during the late 1970s statistical physics was gripped by the theory of phase transitions.(1) In 1982, despite the scandal of a university faculty’s retraction of researchers’ academic freedom due to political fears of institutional disrepute, S. Galam et al. managed to publish a set of assumed “founding papers” on Sociophysics.(2) In reference to the first in the set, Galam himself comments that “in addition to modeling the process of strike in big companies using an Ising ferromagnetic model in an external reversing uniform field, the paper contains a call to the creation of Sociophysics. It is a manifesto about its goals, its limits and its danger. As such, it is the founding paper of Sociophysics although it is not the first contribution per se to it.” During the following decade, Galam published a series of papers on Sociophysics, to which he received no feedback. He tells of other physicists “turning exotic” during the mid-nineties, developing the closely related Econophysics, the purpose of which was to analyze financial data. Econophysics quickly gave rise to the so-called “quants” of Wall Street – young physicists employed by investment bankers to develop algorithms for the trading of complex derivatives, the abuse of which, by the pathological social milieu of the international finance trade, contributed substantially to the global economic crisis of 2008. Fully fifteen years after his initial publications and the assumed inception of the science of Sociophysics, Galam claimed some gratification in the recognition that a “few additional physicists at last started to join along it”. I deeply sympathize with his statement: “I was very happy to realize I was not crazy, or at least not the only one.” Nevertheless, Galam was and remains incorrect in regard to his position in the history of sociophysics; a history that began centuries before the me-generation.

Reading Galam’s personal testimony, I felt a crystallization of my intuition that the institutionalized position of a career academic scientist makes for a very poor springboard from which to develop novel ideas and concepts, even if, as in Galam’s case, the ideology is not actually novel. Indeed, I myself have felt, and seen in colleagues, active restraint from pursuing interesting, albeit unorthodox, ideas while bound by the rites of the ivory tower. Shameful though this situation is, it certainly is not a modern problem.

Halley, Quetelet and Comte
In his review of the sociophysics literature, Stauffer (2012) reports that the idea of applying knowledge of physical phenomena to studies of social behavior reaches at least two millennia into the past, naming a Sicilian, Empedokles, as the first to suggest that people behave like fluids: some people mix easily like water and wine, while others, like water and oil, do not mix.(3) Vague and philosophical as it is, I hesitate to categorize this conception as sociophysics, though admittedly it does attempt, at least metaphorically, to fuse social and physical phenomena. Rather more accurate examples of sociophysics were Halley’s calculations of celestial mechanics and annuity rates, Quetelet’s Physique Sociale, and Comte’s Sociophysics. Let us now step through these chronologically.

[Portrait: Edmund Halley]

In 1682 Edmund Halley observed an object visible to the naked eye – a conglomerate of rock and ice, now known as Halley’s comet – and computed an elliptical orbit for it. He reasoned that it was the same comet as the one reported 75 years earlier, in 1607.(4) He communicated his opinion and calculations to Sir Isaac Newton, who disagreed on account of both the geometry of the object’s orbit and its recurrence. Nevertheless confident of his theory, Halley predicted that the object would reappear after his death, in 1759; he was proven correct by the comet’s timely visit. Since then, the orbital path followed by Halley’s comet has been confirmed as elliptical, passing beyond Neptune before returning to the vicinity of Earth and Sun with an average periodicity of 75 to 76 years, with variational extremes of 74 and 79 years due to the gravitational perturbations of the giants Jupiter and Saturn.
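
The claim that the comet’s ellipse reaches beyond Neptune can be checked with a back-of-the-envelope calculation from Kepler’s third law (for a body orbiting the Sun, a^3 = T^2, with a in astronomical units and T in years). The following minimal Python sketch uses commonly cited values for the comet’s period and eccentricity; these inputs are my own assumptions, not figures taken from the cited sources.

```python
# Back-of-the-envelope check of the comet's geometry using Kepler's third law.
# Assumed (illustrative) values: mean period ~75.5 years, eccentricity ~0.967,
# Neptune's semi-major axis ~30.1 AU.

def semi_major_axis_au(period_years: float) -> float:
    """Kepler's third law for a body orbiting the Sun: a^3 = T^2 (a in AU, T in years)."""
    return period_years ** (2.0 / 3.0)

period = 75.5                      # mean orbital period, years (assumed)
e = 0.967                          # orbital eccentricity (assumed literature value)
a = semi_major_axis_au(period)     # ~17.9 AU
perihelion = a * (1 - e)           # ~0.59 AU, inside Earth's orbit
aphelion = a * (1 + e)             # ~35 AU, beyond Neptune (~30.1 AU)

print(f"semi-major axis ~ {a:.1f} AU")
print(f"perihelion      ~ {perihelion:.2f} AU")
print(f"aphelion        ~ {aphelion:.1f} AU (Neptune ~ 30.1 AU)")
```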

Astronomy, massive bodies and gravitation are relevant to our exploration of sociophysics for three reasons, to be expounded later. For the time being, it is important to point out a fact about Halley that is much less well known, though perhaps more readily recognized as relevant to our current exploration.

In 1693 Halley constructed a mortality table from individual dates of birth and death; data collected by the German city of Breslau. Based upon this tabulation, Halley went on to calculate annuity rates for three individuals. In his application of probability theory to social reality – now known as the actuarial profession – it seems Halley had been preceded, in 1671, by a Dutchman, Johan de Witt. Still, to his credit, Halley was the first to calculate annuity rates upon correct probabilistic principles.
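
The actuarial principle at work is simple to state: the fair price of a life annuity paying one unit per year is the sum, over all future years, of the probability that the annuitant is still alive multiplied by the discount factor for that year. The sketch below illustrates this with a hypothetical miniature mortality table and an assumed 6% interest rate; neither the figures nor the function are Halley’s Breslau data or method, only an illustration of the principle.

```python
# Minimal sketch of the actuarial principle behind annuity pricing:
# value = sum over future years of (survival probability) x (discount factor).
# The survival numbers below are hypothetical, not Halley's Breslau figures.

def annuity_value(l_x, age, interest=0.06):
    """Expected present value of an annuity of 1 per year for a life now aged `age`.
    l_x[a] = number of people in the mortality table still alive at age a."""
    alive_now = l_x[age]
    value = 0.0
    for future_age in sorted(a for a in l_x if a > age):
        t = future_age - age
        survival_prob = l_x[future_age] / alive_now   # chance of surviving to future_age
        discount = (1 + interest) ** (-t)             # present value of 1 paid in t years
        value += survival_prob * discount
    return value

# Hypothetical mini-table: number alive at ages 60..70
l_x = {60: 1000, 61: 960, 62: 915, 63: 865, 64: 810,
       65: 750, 66: 685, 67: 615, 68: 540, 69: 460, 70: 375}

print(f"Annuity value at age 60 ~ {annuity_value(l_x, 60):.2f} years' purchase")
```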

[Portrait: Adolphe Quetelet]

Adolphe Quetelet was a Belgian astronomer and teacher of mathematics, a student of meteorology and of probability theory; the latter leading to his study of social statistics in 1830. Stigler (1986) tells us that astronomers had used the ‘law of error’ derived from probability theory to gain more accurate measurements of physical phenomena.(5) Quetelet argued that probabilistic theory could be applied to human beings, rendering the average physical and intellectual features of a population by sampling “the facts of life”. A graphical plot of sampled quantities renders a normal distribution – the Gaussian bell-shaped curve – with the “average man” located at its center. In theory, individual characteristics may then be gauged against an average, “normal character”. Quetelet also suggested the identification of patterns common to both normal and abnormal behaviors. Quetelet’s “social mechanics” thus assumed a mapping of human physical and moral characteristics, allowing him to argue that probability influences the course of human affairs, and hence that the human capacity for free-will – or at least the capacity to act upon free-will – is reduced, while social determinism is increased. Quetelet believed that statistical quantities of measured physical and mental characteristics were not just abstract ideas, but real properties representative of a particular people, a nation or ‘race’. In 1835, he published A Treatise on Man, and the Development of His Faculties, and so endowed the culture of nineteenth century Europe with a worldview of racial differences, of an “average man” for each subspecies of Homo sapiens, and hence scientific justification (logical soundness) for slavery and apartheid. Furthermore, Quetelet’s “average man” was presented as an ideal type, with deviations from the norm identified as errors.
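
To make the ‘law of error’ concrete: if a measured trait is treated as normally distributed, the “average man” sits at the sample mean, and any individual can be gauged by a standardized deviation (a z-score) from it. The following sketch uses randomly generated, hypothetical heights; the population parameters are my own assumptions, not Quetelet’s data.

```python
# Illustrative sketch (not Quetelet's data): a sampled human trait treated as
# normally distributed, with the "average man" at the mean and individuals
# gauged by their standardized deviation (z-score) from it.
import random
import statistics

random.seed(1)
# Hypothetical sample of adult heights (cm), drawn from an assumed normal population.
heights = [random.gauss(mu=170.0, sigma=7.0) for _ in range(10_000)]

mean = statistics.fmean(heights)          # the "average man" for this trait
sd = statistics.stdev(heights)            # spread of the "errors" around the average

individual = 184.0
z = (individual - mean) / sd              # deviation from the average, in standard units
print(f"average ~ {mean:.1f} cm, sd ~ {sd:.1f} cm, z-score of {individual} cm ~ {z:.2f}")
```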

[Portrait: Auguste Comte]

Between 1830 and 1842 Auguste Comte formulated his Course of Positive Philosophy (CPP). From within our modern ‘global’ cultural milieu it is difficult to appreciate how widely accepted (‘globalized’) the ideology of positive philosophy was two hundred years ago, during the height of Eurocentric colonial culture, as positivism has received virtually no notice since the re-organizational events [NOTE B] imposed upon the politico-economic and cultural affairs of Europe after the Russian revolution and first world war.(6) The eclipse of positivistic ideology began with neo-positivism in philosophy of science, which led to post-positivism. Strangely, it appears that the two later schools (neo- and post-positivism) have forgotten both positive philosophy itself and the man who initiated and defended it, and who even coined the term positivism [NOTE C]. However, Bourdeau (2014) tells us that Comtean studies have seen “a strong revival” in the past decade, with agreement between modern philosophers of science and sociologists upon the ideologies propagated over 170 years ago. Points which were well established in positivism, but subsequently forgotten, have re-emerged in the modern philosophical milieu.

Re-emergent ‘truths’:
i) Scientific justification (logical soundness) is context-dependent.
ii) Science has a social dimension; science is necessarily a social activity with vertical (inter-generational) as well as horizontal (intra-generational) connections and thus also epistemic influences. Simply, science is a human activity, and humans are social animals.
iii) Positive philosophy is not philosophy of science, but philosophy of social interaction; Aristotelian political philosophy. Also, positivism does not separate philosophy of science from political philosophy.
iv) Cooperative wholeness; unity of acts and thoughts; unity of genes and memes; unity of dynamism and state.

“Being deeply aware of what man and animals have in common, Comte […] saw cooperation between men as continuous with phenomena of which biology gives us further examples.”
– Bourdeau (2014)

Comte made the purpose of CPP clear: “Now that the human mind has grasped celestial and terrestrial physics, – mechanical and chemical; organic physics, both vegetable and animal, – there remains one science, to fill up the series of sciences of observation, – Social physics. This is what men have now most need of: and this it is the principal aim of the current work to establish”.(7) He continued, saying that “it would be absurd to pretend to offer this new science at once in a complete state. [Nevertheless, Sociophysics will possess the same characteristic of positivity exposed in all other sciences.] This once done, the philosophical system of the moderns will in fact be complete, as there will be no phenomenon which does not naturally enter into some one of the five great categories. All our fundamental conceptions having become homogeneous, the Positive state will be fully established. It can never again change its character, though it will be forever in course of development by additions of new knowledge.”

In 1832 Comte was named tutor of analysis and mechanics at École Polytechnique. However, during the following decade he experienced two unsuccessful candidacies for professorship; he began to see ties severed between himself and the academic establishment after releasing a preface to CPP. In 1843 he published Elementary Treatise on Analytic Geometry, then in 1844 Discourse on the Positive Spirit, as a preface to Philosophical Treatise on Popular Astronomy (also 1844). By this time he was at odds with the academic establishment – essentially, Comte had dropped out of university. The reason for this does not seem to have been a lack of curiosity, nor of capacity, imagination, vision, or even simple effort. Indeed, the situation resonates strongly with Einstein’s early academic situation, with Dirac’s late academic situation, and with Binet’s life-long academic situation. Galam’s experiences during the early 1980s echo the same unfortunate, if not pathological, phenomenon of academic institution – interesting and curious, broad-reaching minds are generally met with hostile opposition from a fearful and mediocre orthodoxy.

Comte’s second great work – often referred to in the literature as Comte’s second career – was written between 1851 and 1854. It was regarded by Comte himself as his seminal work, and was titled First System of Positive Polity (FSPP). Its goal was a politico-economic reorganization of society, in accordance with scientific methods (techniques for investigating phenomena based upon gathering observable, empirical and measurable evidence, subject to inductive and deductive logical reasoning and argument), with the purpose of increasing the wellbeing of humankind – i.e. adaptation of political life based upon political episteme with the purpose of increasing the common good. This is precisely the Aristotelian argument (see The Common Good: Part I, under the heading Politikos). Though the sciences (epistemes) collectively played a central role in FSPP, positivism is not just science. Rather, with FSPP Comte placed the whole of positive philosophy under the ‘continuous dominance of the heart’, with the motto ‘Love as principle, order as basis, progress as end’. Bourdeau (2014) assures us that this emphasis “was in fact well motivated and […] characteristic of the very dynamics of Comte’s thought”, though it seems as anathema to the current worldview as it did to Comte’s contemporaries, who “judged severely” – admirers of CPP turned against Comte, and publicly accused him of insanity.

Much like Nikola Tesla, Comte is reported to have composed, argued, and archived for periods of decades, periodically ‘observing the function of’ his systematic works, all in his mind. His death, in 1857, came too early for him to draft works that he had announced 35 years prior:
Treatise of Universal Education – intended for publishing in 1858;
System of Positive Industry, or Treatise on the Total Action of Humanity on the Planet – planned for 1861;
Treatise of First Philosophy – planned for 1867.

Polyhistornauts predicted
“Early academics did not create regular divisions of intellectual labour. Rather, each student cultivated an holistic understanding of the sciences. As knowledge accrued however, science bifurcated, and students devoted themselves to a single branch of the tree of human knowledge. As a result of these divisions of labor – the focused concentration of whole minds upon a single department – science has made prodigious advances in modernity, and the perfection of this division is one of the most important characteristics of Positive philosophy. However, while admitting the merits of specialization, we cannot be blind to the eminent disadvantages which emerge from the limitation of minds to particular study”.(7)

In surprising harmony with my own thoughts and words, Comte opined “it is inevitable that each [specialist] should be possessed with exclusive notions, and be therefore incapable of the general superiority of ancient students, who actually owed that general superiority to the inferiority of their knowledge. We must consider whether the evil [of specialization] can be avoided without losing the good of the modern arrangement; for the evil is becoming urgent. […] The divisions which we establish between the sciences are, though not arbitrary, essentially artificial. The subject of our researches is one: we divide it for our convenience, in order to deal the more easily with its difficulties. But it sometimes happens – and especially with the most important doctrines of each science – that we need what we cannot obtain under the present isolation of the sciences, – a combination of several special points of view; and for want of this, very important problems wait for their solution much longer than they otherwise need to”.(7)

Comte thus proposed “a new class of students, whose business it shall be to take the respective sciences as they are, determine the spirit of each, ascertain their relations and mutual connection, and reduce their respective principles to the smallest number of general principles.”

While reading this passage I was struck by the obvious similarity of its meaning to my own situation. I remain dumbfounded and humbled by the scale of foresight, so lucidly expressed by this great mind. For Comte had not simply suggested multi-disciplinary study, but a viewing through, and faithful acceptance of, the general meanings rendered by the various scientific disciplines, together allowing for an intuitive, ‘heartfelt’ condensation of human knowledge.

Five fundamental sciences:
1) Mathematics
2) Astronomy
3) Physics
4) Chemistry
5) Biology

Sociology, then, is the sixth and final science. Each of these may be seen as a node in the network of human knowledge. Sociology, according to Comte, is the body of knowledge which will eventually allow for the networking of all human epistemes into a great unified field of human ideas.

Generalization: uneasy unification
Generalizing the laws of “active forces” (energy) and of rational mechanics, Comte suggested that the same principle of interaction is true for celestial bodies and for molecules. Specifically, the center of gravity of either a planet or a molecule is focused upon a geometrical point, and though massive bodies may interact with each other dynamically, thus affecting each other’s relative positions and velocities, the center of gravity of each is conserved as a point-state.

“Newton showed that the mutual action of the bodies of any system, whether of attraction, impulsion, or of any other nature, – regard being had to the constant equality between action and reaction, – cannot in any way affect the state of the center of gravity; so that if there were no accelerating forces besides, and if the exterior forces of the system were reduced to instantaneous forces, the center of gravity would remain immovable, or would move uniformly in a right line. D’Alembert generalized this property, and exhibited it in such a form that every case in which the motion of the center of gravity has to be considered may be treated as that of a single molecule. It is seldom that we form an idea of the entire theoretical generality of such great results as those of rational Mechanics. We think of them as relating to inorganic bodies, or as otherwise circumscribed, but we cannot too carefully remember that they apply to all phenomena whatever; and it is in virtue of this universality alone that they are the basis of all real science.”
– It should not escape the reader’s attention that in this passage Comte has effectively, albeit figuratively, plotted a graph of dynamically interacting point-states. The interactivity and cooperativity of massive bodies within a solar system or chemical reactants within a flask, both represent physically complex systems of dynamic social interaction – i.e. both are sociophysical systems. Implicit in this epistemological condensation is the fact that sociophysical systems are not necessarily alive, or biotic, or even organic.
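
The conserved center of gravity that the passage leans on is easy to verify numerically. The following minimal sketch (my own illustration, with arbitrary units and G = 1) integrates two mutually gravitating point masses in one dimension and confirms that, while each body moves, their common center of mass does not.

```python
# Minimal numerical sketch of the quoted claim: internal (action-reaction) forces
# cannot move the center of gravity of an isolated system. Two point masses attract
# each other; their individual positions change, their center of mass does not.
# Units and values are arbitrary (G = 1).

G = 1.0
m1, m2 = 1.0, 3.0
x1, v1 = -3.0, 0.0        # one-dimensional positions and velocities
x2, v2 = 1.0, 0.0
dt = 0.001

def center_of_mass(x1, x2):
    return (m1 * x1 + m2 * x2) / (m1 + m2)

com_start = center_of_mass(x1, x2)
for _ in range(2000):
    r = x2 - x1
    f = G * m1 * m2 / (r * r) * (1 if r > 0 else -1)   # force on m1, directed toward m2
    a1, a2 = f / m1, -f / m2                           # equal and opposite (Newton's third law)
    v1 += a1 * dt; v2 += a2 * dt
    x1 += v1 * dt; x2 += v2 * dt

drift = abs(center_of_mass(x1, x2) - com_start)
print(f"center of mass drift after internal interaction: {drift:.2e}")   # ~0, up to rounding
```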

After the completion of FSPP and his complete break with orthodox academia, Comte is said to have “overcome modern prejudices”, allowing him to “unhesitatingly rank art above science”.(6) Like Comte, I take the Aristotelian view that the arts are combinations of knowledge and skill; habitus and praxis; theory and method. Thus in a very real and practical sense the sciences are arts, from which it logically follows that Art ranks above Science. A rather more difficult pill to swallow has been Comte’s Religion of Humanity, which he founded in 1849. Like Bourdeau (2014), I believe “this aspect of Comte’s thought deserves better than the discredit into which it has fallen”. My personal stance is due specifically to a previous uncomfortable encounter with an article on the topic of common goods, which was published by The Journal of Religious Ethics under the auspices of the United Nations.(8) I had hesitated to include the paper and its contents in my previous work, due simply to fear – a fear of reprimand by my peers, and a personal fear of straying from the “scientifically correct and peer reviewed path of learning”. As will become obvious, I have since realized that exclusion of study materials on the basis of fear alone is unreasonable, and that I should, and shall, attempt a rather more inclusive, better rounded education; critical thinking and good quality arguments remain of utmost importance.

“Reforms of society must be made in a determined order: one has to change ideas, then morals, and only then institutions.”
– Comte (ca. 1840)

The Religion of Humanity was defined with neither God(s) nor supernatural forces – as a “state of complete harmony peculiar to human life […] when all the parts of Life are ordered in their natural relations to each other […] a consensus, analogous to what health is for the body”. Personally, I understand this concept as the Tao, and more recently as deep ecology; inclusive of humanity but not exclusive to it. For Comte however, worship, doctrine and moral fortitude were oriented solely toward humanity, which he believed “must be loved, known, and served”.

Three components associated with the positivist religion:
i) Worship – acts; praxis; methods.
ii) Doctrine – knowledge; habitus; theories.
iii) Discipline (moral fortitude) – self-imposed boundaries, simultaneously conforming to, affirming, and defining the system of belief.

Two existential functions of the positivist religion:
i) Moral function – via which religion governs an individual.
ii) Political function – via which religion unites a population.

Ghetto magnetism
In this section we begin to explore the modern science of macro-scale physical phenomena, which result from micro-scale social interactions. The reader may find it useful to refer to the appended glossary of terms [NOTE D].

During the birthing period of quantum mechanical theory, “the concept of a microscopic magnetic model consisting of elementary [atomic] magnetic moments, which are only able to take two positions “up” and “down” was created by Wilhelm Lenz”.(9) Lenz proposed that spontaneous magnetization in a ferromagnetic solid may be explained by interactions between the potential energies of neighboring atoms. Between 1922 and 1924, Ernst Ising, a student of Lenz, studied the Lenz model of ferromagnetism as a one-dimensional chain of magnetic moments, each atom’s field interacting with its closest neighbors. Ising’s name seems to have become attached to the Lenz model by accident, in a 1936 publication titled On Ising’s Model of Ferromagnetism.

[Portrait: Ernst Ising]

Three energetic components of the Ising model:
i) Interaction between neighboring magnetic moments (atomic spins).
ii) Entropic forcing (temperature).
iii) Action of an externally applied magnetic field, affecting all individual spins.

Social interaction between neighboring atoms induces parallel alignment of their magnetic moments, resulting in a more favorable energetic situation (lower energy) when neighbors are self-similar; both +1, or both −1. Conversely, a less favorable situation results from opposing moments (+1 next to −1).(10)
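
As an illustration of how the three components listed above combine, here is a minimal sketch (in Python) of the two-dimensional Ising model driven by Metropolis dynamics, with energy E = −J Σ s_i s_j − h Σ s_i. The lattice size, coupling, field and temperature are my own illustrative choices, not values from the cited sources.

```python
# Minimal sketch of the two-dimensional Ising model with Metropolis dynamics:
# neighbour coupling J, temperature T, and an external field h.
import math
import random

random.seed(0)
L = 10                                    # lattice is L x L with periodic boundaries
J, h, T = 1.0, 0.0, 2.0                   # coupling, external field, temperature (k_B = 1)
spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def delta_energy(i, j):
    """Energy change from flipping spin (i, j) in E = -J * sum_nn s_i s_j - h * sum_i s_i."""
    s = spins[i][j]
    neighbours = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return 2 * s * (J * neighbours + h)

for step in range(100_000):
    i, j = random.randrange(L), random.randrange(L)
    dE = delta_energy(i, j)
    if dE <= 0 or random.random() < math.exp(-dE / T):   # Metropolis acceptance rule
        spins[i][j] *= -1

magnetisation = sum(sum(row) for row in spins) / (L * L)
print(f"mean magnetisation per spin ~ {magnetisation:+.2f}")
```

With the temperature chosen below the critical value (roughly 2.27 in these units), clusters of aligned spins grow until one orientation dominates, which is the spontaneous ordering the figures below depict.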

[Figure: Example of the Ising model on a two-dimensional (10 x 10) lattice. Each arrow represents a spin, which represents a magnetic moment that points either up (−1, black) or down (+1, red). The model is initially configured as a ‘random’ distribution of spin vectors.]

[Figure: The same initial ‘random’ distribution of magnetic moments, showing ‘unfavorable’ alignments (circled in green).]

[Figure: Clusters of spins begin to form (positive clusters circled in green, negative clusters circled in yellow) as a result of neighbor interaction, temperature, and the action of an externally applied magnetic field. As a result of energy-reducing vector flipping, new ‘unfavorable’ spin alignments arise (circled in light blue), which will also tend to flip polarity.]

In sociology, the term tipping point(11) refers to a rapid and dramatic change in group behavior – i.e. the adoption by the general population of a behavior that was rare prior to the change. The term originated in physics, where it refers to the addition of a small weight to a balanced object (a system previously in equilibrium), causing the object to topple or break, thus effecting a large-scale change in the object’s stable state (a change of the system’s equilibrium); a change of stable state is also known as a phase transition.

The relation between cause and effect is usually abrupt in complex systems. A small change in the neighborhood of a subsystem can trigger a large-scale, or even global reaction. “The [network] topology itself may reorganize when it is not compatible with the state of the nodes”
– Juan Carlos González Avella (2010)

In relation to social phenomena, Morton Grodzins is credited with having first used the term tipping point during his 1957 study of racial integration in American neighborhoods.(11) Grodzins learned that the immigration of “black” households into a previously “white” neighborhood was generally tolerated by inhabitants as long as the ratio of black to white households remained low. If the ratio continued to rise, a critical point was reached, resulting in the en masse emigration of the remaining white households, due to their perception that “one too many” black households populated the neighborhood. Grodzins dubbed this critical point the tipping point; sociologist Mark Granovetter labeled the same phenomenon the threshold model of collective behavior.
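
Granovetter’s threshold model can be stated in a few lines: each actor joins a collective behavior once the fraction of the population already participating meets or exceeds that actor’s personal threshold, and the resulting cascade either runs to completion or stalls at a tipping point. The sketch below is my own minimal rendering of that rule, using a uniform threshold distribution of the kind Granovetter used for illustration; it is not code from his paper.

```python
# Minimal sketch of a threshold model of collective behaviour: each actor joins
# once the fraction already participating reaches that actor's personal threshold.
# Thresholds below are hypothetical.

def cascade(thresholds):
    """Return the final fraction of adopters for a given threshold distribution."""
    n = len(thresholds)
    adopters = sum(1 for t in thresholds if t == 0)     # unconditional initiators
    while True:
        fraction = adopters / n
        new_adopters = sum(1 for t in thresholds if t <= fraction)
        if new_adopters == adopters:                    # no one else tips: equilibrium reached
            return fraction
        adopters = new_adopters

# Uniformly spread thresholds 0/100, 1/100, ..., 99/100 produce a full cascade;
# replacing the single actor with threshold 1/100 by one with 2/100 stalls it immediately.
uniform = [i / 100 for i in range(100)]
perturbed = [t for t in uniform if t != 0.01] + [0.02]
print(cascade(uniform), cascade(perturbed))             # 1.0 versus 0.01
```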

Between 1969 and 1972, economist Thomas Schelling published articles on the topic of racial dynamics, specifically segregation. Expanding upon the work of Grodzins, Schelling suggested the emergence of “a general theory of tipping”. It is said that Schelling used coins on a graph paper lattice to demonstrate his theory, placing ‘pennies’ (copper-alloy one cent pieces, representing African-American households) and ‘dimes’ (nickel-alloy ten cent pieces, representing Caucasian households) in a random distribution, while leaving some free places on the lattice. He then moved the pieces one by one, based upon whether or not an individual ‘household’ was in a “happy situation” – i.e. whether enough of its eight nearest neighbors (its Moore neighborhood) were self-similar.(12) Dissatisfied ‘households’ were moved, one at a time, to more agreeable Moore neighborhoods, over time rendering a complete segregation of households, even with low valuation of individual neighbor preferences. In 1978 Schelling published a book titled Micromotives and Macrobehavior, in which he helped to explain how modest differences in individual preferences tend over time to display a self-sustaining momentum of segregation. In 2005, aged 84, Schelling was awarded a share of the Nobel prize in economics, for analyses of game theory leading to increased understandings of conflict and cooperation.(13)

[Portrait: Thomas Schelling]

“People get separated along many lines and in many ways. There is segregation by sex, age, income, language, religion, color, taste, accidents of historical location. Some segregation results from the practices of organizations; some is deliberately organized; and some results from the interplay of individual choices that discriminate. Some of it results from specialized communication systems, like different languages. And some segregation is a corollary of other modes of segregation: residence is correlated with job location and transport”.(14)
– Schelling (1971)

Under the heading Linear Distribution, in Schelling’s 1971 publication on the subject of social segregation, we find a direct analog to the original one-dimensional Lenz-Ising model. Schelling seems either to have appropriated the concept, citing neither Lenz nor Ising, or to have designed the model independently. His involvement in American foreign policy, national security, nuclear strategy, and arms control(13) certainly would have granted Schelling access to knowledge of theoretical works, including the so-called Monte Carlo methods, undertaken at Los Alamos during and after the second world war.(15) However, for the purpose of our current exploration it is irrelevant how exactly Schelling arrived at his understanding, and indeed, as I have mentioned previously, sociophysics has emerged in a variety of apparitions, to studious individuals with widely differing perspectives.

[Figure: Schelling’s one-dimensional ‘line of stars and zeros’.]
“The line of stars and zeros […] corresponds to the odd and even digits in a column of random numbers. […] We interpret these stars and zeros to be people spread out in a line, each concerned about whether his neighbors are stars or zeros. […] Suppose, now, that everybody wants at least half his neighbors to be like himself, and that everyone defines ‘his neighborhood’ to include the four nearest neighbors on either side of him. […] I have put a dot over each individual whose neighborhood does not meet his demands. […] A dissatisfied member moves to the nearest point at which half his neighbors will be like himself at the time he arrives there. […] Two things happen as they move. Some who were content will become discontent, because like members move out of their neighborhoods or opposite members move in. And some who were discontent become content, as opposite neighbors move away or like neighbors move close. The rule will be that any originally discontented member who is content when his turn comes will not move after all, and anyone who becomes discontent in the process will have his turn after the 26 original discontents have had their innings.”
– Schelling (1971)

Under the heading Area Distribution, Schelling (1971) introduces a two-dimensional (13 x 16) lattice, commenting that “patterning – departure from randomness – will prove to be characteristic of integration, as well as of segregation, if the integration results from choice and not chance.” Clearly, Schelling’s model of social segregation bears great similarity to Ising’s ferromagnetic model.
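
To make the analogy tangible, here is a rough sketch of a Schelling-style simulation on a small two-dimensional lattice: two kinds of agents plus empty cells, with any agent whose Moore neighborhood contains less than a threshold fraction of like neighbors relocating to a random empty cell. The lattice size, agent counts and threshold are my own assumptions, and the relocation rule is a common simplification rather than Schelling’s exact procedure.

```python
# A rough sketch of a two-dimensional Schelling-style model: two groups of agents
# ('*' and '0', echoing Schelling's stars and zeros) plus empty cells; any agent with
# fewer than `threshold` like-minded Moore neighbours moves to a random empty cell.
import random

random.seed(0)
L, threshold = 20, 0.5
cells = ['*'] * 150 + ['0'] * 150 + [None] * 100       # two groups plus empty space
random.shuffle(cells)
grid = [cells[i * L:(i + 1) * L] for i in range(L)]

def unhappy(i, j):
    """True if the share of like neighbours (Moore neighbourhood) is below threshold."""
    me = grid[i][j]
    like = unlike = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            n = grid[(i + di) % L][(j + dj) % L]
            if n is None:
                continue
            like += (n == me)
            unlike += (n != me)
    return (like + unlike) > 0 and like / (like + unlike) < threshold

for sweep in range(50):
    movers = [(i, j) for i in range(L) for j in range(L)
              if grid[i][j] is not None and unhappy(i, j)]
    empties = [(i, j) for i in range(L) for j in range(L) if grid[i][j] is None]
    for (i, j) in movers:
        ti, tj = empties.pop(random.randrange(len(empties)))
        grid[ti][tj], grid[i][j] = grid[i][j], None     # relocate the unhappy agent
        empties.append((i, j))

remaining = sum(1 for i in range(L) for j in range(L) if grid[i][j] is not None and unhappy(i, j))
print(f"unhappy agents remaining after 50 sweeps: {remaining}")
```

Even with the mild requirement that only half of one’s neighbors be alike, the lattice typically settles into clearly segregated clusters, which is Schelling’s central point.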

Stauffer (2012) reminds us that the formation of urban ghettos is a well known phenomenon, and suggests that New York’s Harlem is the most famous black district,(3) with a history stretching well over a hundred years. From 1658, Harlem was a Dutch settlement (or ghetto) named after Haarlem, the capital of North Holland. African-Americans began to arrive during the ‘great migration’, from about 1905, when former slaves and their descendants from the rural southern United States migrated to mid-western, north-eastern and western regions of the US. Harlem became identified as a ‘black’ district of the borough of Manhattan during the early 1920s.

Indirectly, Stauffer poses an interesting question: Why is it that we spontaneously self-organize into groups of self-similar individuals? – or, in the specific case of “ghetto formation”: Why is it that we like to live in communities of like-minded, ethnically and culturally similar individuals? The simplest and clearest answer to this question is surely that we are social animals, and that it is easier to socialize with self-similar individuals than with strangers. However, stemming from this is the truly fascinating question: If it is true that we like to live in communities of self-similar individuals, then why do we not like to live in communities of self-similar individuals when forced to do so? As an example of the latter, Stauffer reminds us of the uprising, in 1943, of the Warsaw Ghetto, which did not self-assemble but was formed under command of Nazi Germany. Again, the simplest and clearest answer must be that we are social animals, though I cannot think of a good reason in support of this example, other than revolutionary pressure due to innate principles of self-regulation and self-organization. Regardless, it would be nice to assume that precisely this kind of ambiguity, apparently intrinsic to sociology, has been at the root of the epistemological rift between physics and sociology, given the long-standing ideological tradition of determinism in physics. In reality, a deeper and rather more vexing explanation haunts us; it has become obvious that the ambiguity of social interaction is not restricted to messy life systems, but governs inorganic physical phenomena also.

Statistical physics, and the quantum theory that grew partly out of it, have put a definitive end to physical determinism. The renormalization technique, ushered in during the mid-1970s, seems to have been an attempt to conserve physical determinism, at least tentatively. However, renormalization is a theoretical hack – an attempt to abstractly force fundamentally complex, infinite, random, and thus fundamentally indeterminate phenomena to appear as if they were simple, precisely calculable, determinable facts. Physically, experimentally, reality is not clear. In fact, reality is fundamentally uncertain, and so remains non-understood; mysterious. Stauffer confirms the validity of Comte’s thoughts, suggesting that “cooperation of physicists with sociologists could have pushed research progress by many years”.

State of the Art
“The concept of Complex Systems has evolved from Chaos, Statistical Physics and other disciplines, and it has become a new paradigm for the search of mechanisms and an unified interpretation of the processes of emergence of structures, organization and functionality in a variety of natural and artificial phenomena in different contexts. The study of Complex Systems has become a problem of enormous common interest for scientists and professionals from various fields, including the Social Sciences, leading to an intense process of interdisciplinary and unusual collaborations that extend and overlap the frontiers of traditional Science. The use of concepts and techniques emerging from the study of Complex Systems and Statistical Physics has proven capable of contributing to the understanding of problems beyond the traditional boundaries of Physics.”
– Juan Carlos González Avella (2010)

In an interdisciplinary review of the literature defining adaptive co-evolutionary networks (AcENs), Gross & Blasius (2007) have listed five dynamical phenomena common to AcENs:
i) emergence of classes of nodes from an initially heterogeneous population
ii) spontaneous division of labor – in my opinion the same as (i)
iii) robust self-organization
iv) formation of complex topologies
v) complex system-level dynamics (complex mutual dynamics in state and topology)

We are to understand that the mechanisms giving rise to these emergent phenomena themselves emerge from the dynamical interplay between state and topology. Divisions of labor, for example, spontaneously emerge (self-organize) as a result of information feedback within an AcEN.(16) This fact bolsters an argument that I have made previously, for a strong similarity between the epiphenomena of bacteria, gregarious insects and humans, in their respective cultures. Also supported by studies of AcENs is my hitherto intuitive understanding that a diverse set of actors is fundamental to the production of common goods. In fact, it is now clear that cultural diversity is so fundamental to the dynamics of social phenomena that divisions of labor necessarily and spontaneously emerge from an initially homogeneous population, due to random variations of nodal state (entropic forcing), degree and homophily.
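
The interplay of state and topology is easy to see in a minimal ‘rewire or adopt’ model in the spirit of the adaptive networks that Gross & Blasius review (this is a generic coevolving voter model of my own construction, not a model taken from their paper): at a randomly chosen discordant link, the link is either rewired toward a like-minded node (topology follows state) or one endpoint adopts the other’s opinion (state follows topology).

```python
# Minimal sketch of state-topology coevolution (a generic "rewire or adopt" voter model;
# an illustration in the spirit of adaptive networks, not a model from the cited review).
import random

random.seed(0)
N, E, p = 200, 400, 0.4
opinions = [random.choice([0, 1]) for _ in range(N)]
edges = set()
while len(edges) < E:                                   # random graph, no self-loops
    a, b = random.sample(range(N), 2)
    edges.add((min(a, b), max(a, b)))
edges = list(edges)

for step in range(20_000):
    idx = random.randrange(len(edges))
    a, b = edges[idx]
    if opinions[a] == opinions[b]:
        continue                                        # concordant edge: nothing to do
    if random.random() < p:
        # topology follows state: a drops the discordant link, reattaches to a like mind
        candidates = [c for c in range(N) if c != a and opinions[c] == opinions[a]]
        if candidates:
            c = random.choice(candidates)
            edges[idx] = (min(a, c), max(a, c))
    else:
        opinions[b] = opinions[a]                       # state follows topology: b adopts a's opinion

discordant = sum(1 for a, b in edges if opinions[a] != opinions[b])
print(f"discordant edges remaining: {discordant} of {len(edges)}")
```

Depending on the rewiring probability, the network either reaches consensus or fragments into internally uniform communities, which is the kind of state-topology feedback the review describes.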

Gross & Blasius (2007) have reported that self-organization is observed in Boolean and in biological networks, occurring within a narrow region of transition between an area of chaotic dynamics and an area of stationary dynamics. Metaphorically, one might say that between the vast and chaotic field of the unknown and the relatively large steady state of knowledge lies a narrow field – a phase space of self-organizing possibility – i.e. intuition. Not at all surprisingly, life systems, like all complex adaptive systems, necessarily occupy this theoretically defined phase space. Further, Gross & Blasius talk of the “ubiquity of adaptive networks across disciplines”, specifying technical distribution networks such as power grids, postal networks and the internet; biological distribution networks such as the vascular systems of animals, plants and fungi; neural or genetic information networks; immune system networks; social networks such as opinion propagation/formation, social media and market-based socio-economics; ecological networks (food webs); and of course biological evolution offers an historical depth of literature on the subject of AcENs. The authors mention that examples are also reported from chemistry and physics, but do not provide examples. Based upon our current exploration it seems fair to suggest at least the following: astronomical gravitational networks, molecular chemical reactant networks, geological networks (the interactive cycling of carbon, water, nitrogen, minerals, etc.), and of course quantum mechanical networks.

For me personally, the most difficult to fathom of these examples has been the astronomical gravitational network. However, I am now able to imagine the gravitational interaction of massive bodies at their various scales – planets, moons and comets within a solar system; solar systems within a galaxy; galaxies within local groups; local groups within clusters, etc – as nodes, with gravitation comprising the set of edges (connections) between massive bodies.

[Figure: Network geometry is obvious in models of Universal mass distribution.]

Tabulated nomenclature of static and dynamic elements, for a selection of epistemes.

EPISTEME | STATIC ELEMENT | DYNAMIC ELEMENT
Metaphysics | actor | action
Graph theory | node | edge
Complex systems theory | vertex | link
Quantum theory | particle | wave
Electrodynamic theory | field | vector
Economic theory | agent | behavior
Astrophysics | massive body | gravitation
Chemistry | reactant | reaction
Molecular biology (central dogma) | DNA | transcription
Molecular biology (central dogma) | mRNA | translation
Biology | organism | survival
Evolutionary theory | species | adaptation

According to J. Avella (2010), the modeling of network dynamics has revealed a complex relationship between actor heterogeneity and the emergence of diverse cultural groups.(19) Network structure and cultural traits co-evolve, rendering qualitatively distinct network regions or phases. Put in more familiar terms: patterns of social interaction and processes of social influence change or differ in tandem, and network patterns and processes feed back upon each other. Thus social interactions exist as a dynamic flux in which distinct channels of interactivity form, sever, and re-form. From the collective interaction of agents emerge temporary, sequential non-equilibria – known as network states. The formation of network states is controlled by early-forming actors, whereas the later formation and continued rapid reformation of cultural domains comprises the geometry – or ‘architecture’ – of a mature network; a network whose dynamics have reached a dynamic steady state.

Furthermore, the ordered state of a finite system under the action of small perturbations is not a fixed, homogeneous configuration, but rather a dynamic and diversified, chaotic steady state. Over the long term, such a system sequentially “visits” a series of monocultural configurations; one might imagine a systemic analogue to serial monogamy. Slow-forming monocultures emerge under stable environmental conditions (low entropic forcing). Under less stable environmental conditions (high entropic forcing), monocultural domains undergo fragmentation and are replaced by a variety of rapidly forming and re-forming cultural domains, thus rendering a dynamic steady state. As noted above, the relation between cause and effect is usually abrupt in complex systems; indeed, “the [network] topology itself may reorganize when it is not compatible with the state of the nodes.”

Avella tells of a study by Y. Shibanai et al., published in 2001, analysing the effects of global mass media upon social networks. Shibanai et al. treated global mass media messages as an external field of influence – analogous to the external magnetic field in the Ising model – with which network actors (individual, and/or groups of, nodes in a network) interact. The external field was interpreted “as a kind of global information feedback acting on the system”. Two mechanisms by which global media act upon society were identified:
i) The influential power of the global media message field is equal to that of real (local) neighbors.
ii) Neighborly influence is filtered by the feedback of global information, taking effect only if and/or when an individual network node is aligned with a global media message.
Shibanai et al. concluded that “global information feedback facilitates the maintenance of cultural diversity” – i.e. the propagation of messages promoting a state of global order and cultural unity simultaneously enables and maintains a dynamic steady state of global disorder and multiculturalism.
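
A concrete way to picture the first mechanism is an Axelrod-style culture model in which, with some probability, an agent interacts with a fixed global ‘media’ vector exactly as it would with a local neighbor. The sketch below is my own minimal rendering in that spirit; the lattice size, number of features and traits, and the media-strength parameter B are assumptions, not the settings of Shibanai et al.

```python
# Rough sketch of a mass-media "external field" acting on an Axelrod-style culture model
# (an illustration in the spirit of the study described above; parameters are assumed).
# With probability B an agent interacts with a fixed global media message instead of a
# local neighbour; interaction proceeds with probability equal to cultural similarity.
import random

random.seed(0)
L, F, q, B = 15, 5, 10, 0.1                      # lattice size, features, traits, media strength
media = [0] * F                                  # the global message: one fixed cultural vector
culture = [[[random.randrange(q) for _ in range(F)] for _ in range(L)] for _ in range(L)]

def interact(agent, source):
    """Axelrod rule: with probability = similarity, copy one differing feature."""
    shared = sum(a == s for a, s in zip(agent, source))
    if 0 < shared < F and random.random() < shared / F:
        k = random.choice([k for k in range(F) if agent[k] != source[k]])
        agent[k] = source[k]

for step in range(200_000):
    i, j = random.randrange(L), random.randrange(L)
    if random.random() < B:
        interact(culture[i][j], media)           # the field acts like one more neighbour
    else:
        di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        interact(culture[i][j], culture[(i + di) % L][(j + dj) % L])

aligned = sum(culture[i][j] == media for i in range(L) for j in range(L))
print(f"agents fully aligned with the media message: {aligned} of {L * L}")
```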

Generally, considerations of equilibrium assume that the application of a field enhances order in a system. However, this is not always the case. To the contrary, Avella (2010) tells us that “an ordered state different from the one imposed by the external field is possible, when long-range interactions are considered” and fascinatingly, that “a spatially nonuniform field of interaction may actually produce less disorder in the [social] system than a uniform field.”

“While trends toward globalization provide more means of contact between more people, these same venues for interaction also demonstrate the strong tendency of people to self-organize into culturally defined groups, which can ultimately help to preserve overall diversity.”
– J. Avella (2010)

Respectfully, I urge the reader to allow themselves a few moments of meditation upon this rather subversive finding.

A dynamic steady state exists in a network until a process of social influence, such as an external environmental perturbation or an internal social perturbation, exceeds some threshold (tipping point); the current network steady state is then eroded, the ongoing network dynamics reform, and a new dynamic steady state is rendered. Put another way: above some threshold, a given perturbation causes an abrupt change in social interactions, leading to a new (though ultimately temporary) dynamic steady state. Co-evolution implies that the processes of social influence change as the result of multilateral feedback mechanisms between social interactions, environmental forcing, and/or the eccentric actions of some individual or group.

Three distinct phases of complex (adaptive, co-evolutionary) networks:
Phase I) A large component of the network remains connected and co-evolutionary dynamics lead to a dominant monocultural state.
Phase II) Fragmentation of the monocultural state begins, as various cultural groups form in the dynamic network. However, these smaller groups remain stable in the presence of ongoing stochastic shocks; peripheral actors are either absorbed into a social group or are forced out. “Social niches are not produced through competition or selection pressure but through the mechanisms of homophily and influence in a co-evolutionary process.[…] Thus, even in the absence of selection pressures, a population can self-organize into stable social niches that define its diverse cultural possibilities.”
Phase III) Fragmentation of cultural domains leads to high levels of heterogeneity. Avella (2010) teaches that the very high levels of heterogeneity observed in network models are “empirically unrealistic in most cases; however, they warn of a danger that comes with increasing options for social and cultural differentiation, particularly when the population is small or there is modest cultural complexity. Unlike cultural drift, which causes cultural groups to disappear through growing cultural consensus, a sudden flood of cultural options can also cause cultural groups to disappear; but instead of being due to too few options limiting diversity, it is due to excessive cultural options creating the emergence of highly idiosyncratic individuals who cannot form group identifications or long-term social ties.”

Confirming what we have learned from Ising and Schelling, Avella tells that “[actors] have a preference for interacting with others who share similar traits and practices”, and this fact “naturally diversifies the population into emergent social clusters.” However, we have also learned that a highly idiosyncratic actor, who is either unrecognized or even disconnected from a local area network, may still play an influential role upon the greater network (society). Thus, highly idiosyncratic individuals, devoid of group identifications and/or long-term social ties, rather than posing a danger, may be potentially highly relevant to social processes, if only in the sense that collective idiosyncrasy exists as a reservoir of unused or even unknown options and opportunities – a pool of potential, perhaps similar to that of genomic mutants; a diverse set of resources from which may emerge novel solutions to challenges and previously un-encountered situations.

Indeed, precisely this scenario appears to have been the case at the emergence of life on Earth (see: LUCA and the progenotes, in Part II: Empirical observations and meta-analyses, of The Common Good), during which the progenote population represented a collective, albeit semi-disconnected, network of highly idiosyncratic individuals with no strong group identification or long-term social ties. As we also learned in Empirical observations and meta-analyses, a local area network catastrophe is catastrophic only for a highly adapted (specialized) monoculture, and may be problematic for small 'satellite' cultural groups that are to a lesser extent adapted to the current network topology. However, highly idiosyncratic, even disenfranchised actors in the current dynamic network steady state may experience homophilic pressure, and thus social connectivity, in the dynamic steady state which emerges from a phase transition of the network topology.

Avella (2010) has confirmed that cultural heterogeneity (multicultural dynamics, and even outright anarchy) is a deep aspect of reality. Anarchy and chaos appear to be near the source, or indeed to be the source of physical and social order. That is to say a variety of ordered states spontaneously emerge from anarchical, chaotic systems. "Social diversity can be maintained even in highly connected environments" – i.e. Even under intense pressure to conform, diversification and hence diversity, emerge and persist.

Vinkovic & Kirman (2006) remind us that the purpose of the Schelling model is “to study the collective behavior of a large number of particles”,(12) and that the model illustrates the emergence of aggregate phenomena that are not predictable from the behaviors of individual actors. In economic theory individual agents make decisions based upon a “utility function” (personal preference), an idea that can be interpreted in physical terms as: particle interactions are driven by changes of internal energy. A direct analogy is made between the interactions of life systems (humans, insects, fungi, plants, bacteria, etc.) and physical systems (gases, liquids, solids, colloids, solutions, etc.) by treating agents as particles. “In the Schelling model utility depends on the number of like and unlike neighbors. In the particle analogue the internal energy depends on the local concentration […] of like or unlike particles. This analogue is a typical model description of microphysical interactions in dynamical physical systems […]. Interactions between particles are governed by potential energies, which result in inter-particle forces driving particles’ dynamics.”

It is understood, then, that from the collective behaviour of individual agents emerge clusters of self-similar agents. Fascinatingly, Vinkovic & Kirman report finding that aggregates of empty space play a “role” in the dynamics of agent clustering, stressing the importance of the number of empty spaces in the initial, random configuration of an experimental lattice. Specifically, “an increase in the volume of empty space results in more irregular cluster shapes and slower evolution because empty space behaves like a boundary layer”. Clearly, in their analytical study, the authors assume that aggregates of empty space express a “behavior”, thus implying that “empty space” has some capacity to act; specifically, stabilizing nearby clusters by preventing them from coming into direct contact with each other. Simply, we are to acknowledge the collective agency of aggregations of agentless locations on the lattice; the collective action of actorless, “free” space.
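
One simple way to probe such ‘empty-space aggregates’ is to label connected clusters of empty cells on a lattice and measure their sizes; the flood-fill sketch below does exactly that for a randomly initialized grid. This is my own illustrative measurement, not the method of Vinkovic & Kirman, and the lattice size and empty fraction are assumed values.

```python
# Small sketch (not Vinkovic & Kirman's method): label 4-connected clusters of empty
# cells on a random lattice with a flood fill and report the largest cluster size.
import random
from collections import deque

random.seed(0)
L, empty_fraction = 50, 0.2
grid = [[None if random.random() < empty_fraction else random.choice(['A', 'B'])
         for _ in range(L)] for _ in range(L)]

def empty_clusters(grid):
    """Sizes of 4-connected clusters of empty (None) cells."""
    seen, sizes = set(), []
    for i in range(L):
        for j in range(L):
            if grid[i][j] is not None or (i, j) in seen:
                continue
            size, queue = 0, deque([(i, j)])
            seen.add((i, j))
            while queue:
                x, y = queue.popleft()
                size += 1
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < L and 0 <= ny < L and grid[nx][ny] is None and (nx, ny) not in seen:
                        seen.add((nx, ny))
                        queue.append((nx, ny))
            sizes.append(size)
    return sizes

sizes = empty_clusters(grid)
print(f"{len(sizes)} empty-space clusters; largest holds {max(sizes)} cells")
```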

[Figure: Plots of an agent-based (Schelling) model, adapted from Vinkovic & Kirman (2006). The two-dimensional experimental lattice is composed of (100 x 100) = 10000 cells. Each cell is either empty (white) or occupied by one agent (red or blue). The numbers of empty cells in the initial random configurations are shown. Increased cluster size correlates with decreased value of x. Increased sizes of empty-space clusters are shown (circled in green) for both initial configurations.]

I have managed to find only a tiny scattering of scientific works attributing some significance to empty space. One example comes from the statistical analysis of graphical data plots; Forina et al. (2003) have introduced an empty space index, the purpose of which is to quantify the fraction of information space on a given graph that does not hold any “experimental objects”.(17) However, the authors are careful to point out that the empty space index should not be confused with a clustering index. Another, perhaps more commonly known example stems from astronomy: voids.

Like Serge Galam, Stephen Wolfram is also a self-proclaimed hobbyist exploring sociophysics. In his philosophical treatment of space-time(18) Wolfram (2015) suggests that “maybe in some sense everything in the universe is just made of space.” Wolfram speaks of what I choose to call aether (see: A Spot of Bother and Aether), saying:
“As it happens, nearly 100 years [before Special Relativity, people] still thought that space was filled with a fluid-like ether. (Ironically enough, in modern times we’re back to thinking of space as filled with a background Higgs field, vacuum fluctuations in quantum fields, and so on.)”

Conclusion
It must be stressed that the epistemic condensation of sociology and physics applies to any of the periodic elements; to the sub-atomic scale as well as the astronomic scale; to mathematical and theoretical, albeit complex, models of reality; and of course to life systems.

We have viewed, through empirically observable phenomena, some aspect of reality that is more fundamental than the phenomena we have observed.

Critically, this cannot be science, as the absolute boundary of the scientific method, and thus science itself, is empiricism (sensual observation and manipulation). Anything that we think we see and do beyond or through what we actually observe and affect is not science. We are left with only one logical possibility: that our newfound knowledge of reality is metaphysical. Ultimately we must categorize it as Art.

Notes
A) There are at least three separate histories of sociophysics; one stemming from philosophy, one from quantum physics, and one from sociology.
B) In the vocabulary of complex systems modeling and co-evolutionary adaptive networks theory one may rightly define such reorganizational events as a change of topological dynamics.
C) As well as positivism, Comte coined the words sociology and altruism.(6)
D) Glossary of terms relevant to network models (a toy illustration of degree, mean degree and neighbors follows the list):
Node: The node is the principal unit of a network. A network consists of a number of nodes connected by links. Depending on context, nodes are sometimes also called vertices, agents, actors, or attractors.
Link: A link is a connection between two nodes in a network. Depending on context, links are also called edges, connections, actions or interactions.
Degree: The degree of a node is the number of nodes to which it is connected; i.e. degree = links/node. The mean degree of the network is the mean of the individual degrees of all nodes in the network.
Neighbors: Two nodes are said to be neighbors if they are connected by a link.
Dynamics: Depending on context, dynamics refers to a temporal change of either the state or the topology of a network.
Evolution: Depending on context, evolution refers to a temporal change of either the state or the topology of a network.
Frozen node: A node is said to be frozen if its state does not change in the long-term behavior of the network. In certain systems the state of frozen nodes can change nevertheless on an even longer topological time scale.
Topology: Refers to a specific pattern of connections between the nodes in a network.
State: Depending on context, state refers to either the state of a networked node or the state of the network as a whole – including the nodes and the topology.
Small-world: Refers to a network state in which distant, indirectly connected, nodes are linked via a short average path length.
Scale-free: Refers to a network state in which the distribution of node degrees follows a power law.
Homophily: Refers to spontaneous attraction between self-similar nodes; literally, love of the same.
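
As promised above, a toy illustration of the basic quantities, for a hypothetical five-node network stored as an adjacency (neighbor) list; the network itself is invented for the example.

```python
# Toy illustration of the glossary terms: a hypothetical five-node network stored as
# an adjacency (neighbour) list, with per-node degree and the network's mean degree.
network = {
    'a': {'b', 'c'},
    'b': {'a', 'c', 'd'},
    'c': {'a', 'b'},
    'd': {'b', 'e'},
    'e': {'d'},
}

degree = {node: len(neighbours) for node, neighbours in network.items()}
mean_degree = sum(degree.values()) / len(network)   # equals 2 * links / nodes for an undirected network

print(degree)        # {'a': 2, 'b': 3, 'c': 2, 'd': 2, 'e': 1}
print(mean_degree)   # 2.0
```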

Bibliography
1) S. Galam, “Sociophysics: a personal testimony”, (2004), Laboratoire des Milieux Désordonnés et Hétérogènes, arXiv, http://arxiv.org/abs/physics/0403122
2) S. Galam, Y. Gefen and Y. Shapir, “Sociophysics: A mean behavior model for the process of strike”, (1982), Journal of Mathematical Sociology, 9, p. 1-13.
3) D. Stauffer, “A Biased Review of Sociophysics”, (2012), Institute for Theoretical Physics, Cologne University, arXiv, http://arxiv.org/abs/1207.6178
4) G. Heywood, “Edmond Halley: Astronomer and Actuary”, (1985), ???
5) S. Stigler, “Adolphe Quetelet (1796-1874)”, (1986) Encyclopedia of Statistical Sciences, John Wiley & Sons, http://mnstats.morris.umn.edu/introstat/history/w98/Quetelet.html
6) M. Bourdeau, “Auguste Comte”, (2014), Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/archives/win2015/entries/comte/
7) H. Martineau, “The Positive Philosophy of Auguste Comte”, (1896), Batoche Books (2000), http://socserv2.socsci.mcmaster.ca/econ/ugcm/3ll3/comte/Philosophy1.pdf
8) J. O’Connor, “Making a Case for the Common Good in a Global Economy: The United Nations Human Development Reports [1990-2001]”, (2002), The Journal of Religious Ethics, Vol. 30, No. 1, p. 155-173, http://www.jstor.org/stable/40017930
9) S. Kobe, “Ernst Ising 1900-1998”, (2000), Technische Universität Dresden, Institut für Theoretische Physik, http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97332000000400003
10) J. Selinger, “Ising Model for Ferromagnetism”, Chapter 2 of Introduction to the Theory of Soft Matter: From Ideal Gases to Liquid Crystals, (2016), http://www.springer.com/978-3-319-21053-7
11) “Tipping point”, https://en.wikipedia.org/wiki/Tipping_point_%28sociology%29
12) D. Vinkovic and A. Kirman, “A physical analogue of the Schelling model”, (2006), Proceedings of the National Academy of Science, http://www.pnas.org/content/103/51/19261.full
13) “Thomas Schelling”, https://en.wikipedia.org/wiki/Thomas_Schelling
14) T. Schelling, “Dynamic Models of Segregation”, (1971), Journal of Mathematical Sociology, Vol. 1, p. 143-186, http://www.tandfonline.com/doi/abs/10.1080/0022250X.1971.9989794
15) “Monte Carlo method”, https://en.wikipedia.org/wiki/Monte_Carlo_method
16) T. Gross & B. Blasius, “Adaptive coevolutionary networks: a review”, (2007), Journal of The Royal Society, http://rsif.royalsocietypublishing.org/content/5/20/259.short
17) M. Forina, S. Lanteri, C. Casolino, “Cluster analysis: Significance, empty space, clustering tendency, non-uniformity. II – Empty space index”, (2003), https://www.researchgate.net/publication/10619726_Cluster_analysis_Significance_empty_space_clustering_tendency_non-uniformity_II_-_Empty_space_index
18) S. Wolfram, “What Is Spacetime, Really?”, (2015), http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/
19) J. Avella, “Coevolution and local versus global interactions in collective dynamics of opinion formation, cultural dissemination and social learning”, (2010), Institute of Interdisciplinary Physics and Complex Systems, http://digital.csic.es/handle/10261/46275

The Common Good: a semi-rational emergent property of complex collective interaction between diverse actors – Part II

The common good invariably requires diversification, manifest as random fluctuations within the biological phase space from which emerge divisions of labour, and thus necessarily, inequalities among individuals comprising a social collective. Entropic forcing drives increases of the common good, via increased diversity, to an apparent limit.

Explorations are made of philosophical (Part I) and empirical (Part II) studies in politics, biology, and economics.

Cooperation via collective divisions of labour is a necessary prerequisite to biological metabolism and reproduction. A collective comprising diverse actors is thus assumed fundamental to the planetary biome. The preponderance of benefit (here designated ‘the common good’) that emerges for actors (individuals and groups), is mediated by Woesean collective cooperation, defined as “a diverse community of cells(note A) surviving and evolving as a biological unit.”(1)
– see Part I for (note A) and reference (1).

“Diversity is an asset with which to confront uncertainty.”
– Groschl, 2013

Part II: Empirical observations and meta-analyses

Diversified-specialized: a modern economical perspective
The concept of diversified specialization is introduced and discussed in some detail by Farhauer & Kröl (2012), in an empirical study of German kreisfreie Städte (cities with county status).(28) The study speaks of Marshall-Arrow-Romer (MAR) externalities and of Jacobs externalities; both are forms of knowledge spillover, the former generating advantages from specialization in the local environment, the latter generating advantages from diversification in the local environment.

A diversified sector structure fosters cross-sectoral (‘Jacobs’) spillovers and lessens the impact of sector-specific demand shocks upon the regional economy. However, cities specializing in several sectors profit from both MAR and Jacobs knowledge spillovers. Diversified-specialised cities combine the benefits of higher productivity due to specialization with the advantages of a diversified structure, such as cross-fertilization among differing sectors, and thus exhibit higher growth rates than either specialized or diversified cities.

Specialization is risky. When a highly specialized local economy is exposed to a negative demand shock, local unemployment tends to increase dramatically, resulting in a local economic recession, or possibly even leading to an economic, and eventually cultural, collapse of the entire region. In an extreme case the industry sector itself begins to collapse wholesale, causing a widespread cascading shockwave.(29)

Sector-specific demand shocks are better absorbed by a diversified economy. It is reasonable to assume that a diversified economic environment, or indeed the diversified skill-set of an individual, generally allows for greater stability; or biologically speaking, greater fitness via increased adaptive capacity. The viability of a culture surely is in the common interest of all individuals comprising it, whether they are directly or indirectly integrated into the local culture (economy and/or ecology). Thus economic and cultural stability (viability) may reasonably be viewed as a common good.

Farhauer & Kröl report that diversified cities are generally larger, more crowded and chaotic, rendering a business environment that is less efficient and more costly than that found in a specialized city. Interestingly then, diversification requires more space than specialization, not simply geographically but also potentially; a larger realm of possibility (a larger phase space) defines diversified actors.

“Smaller cities tend to be specialised and, as a result, more productive which indicates a negative influence of city size on productivity. However, in large cities inputs can be utilised more efficiently – i.e. put to the best possible use – by means of which productivity is higher.”
– Farhauer & Kröl, 2012

Squarely hitting the predictions rendered by the hypothesis upon which the current thesis rests(note F), the diversified-specialized theory appears inconclusive and ambiguous. Yet if population number (city size) makes no clear difference to productivity, then a diversified approach is better, if only because it renders a more stable and viable situation for all stakeholders. And indeed Farhauer & Kröl do report that numerous empirical studies correlating regional sector structure (either diversified or specialized) with economic growth have found greater employment rates in diversified regions. Critically though, the study promotes the concept of ‘diversified-specialization’ as more productive, more innovative and more stable than either diversified or specialist structures are on their own. Thus a “region specializing in a certain combination of related sectors is likely to experience higher growth rates than a region specializing in an unrelated portfolio or in one sector only.”

An indeterminate confusion has been reported in the literature relevant to the empirical study of local economies; some studies conclude that a city is specialized, while others say the same city is diversified. Farhauer & Kröl tell that “many cities exhibit multiple specialisations, but – apart from specialization in a few sectors – they show a diversified structure at the same time.” One could easily assume that Farhauer & Kröl are fence-sitting on their suggestion of diversified-specialized cities. Rather, I would suggest they have taken a pragmatic perspective, indicative of diversity and diversification as fundamental to local economies; that is to say, specializations cannot exist in the absence of diversity, and specializations emerge from a milieu of diverse actors. Arguably, the same may be said of local ecologies.

Furthering the economy/ecology analogy, the authors tell that “companies benefit from proximity to upstream and downstream firms […]” – a statement that is strikingly reminiscent of biological commensal symbiosis between upstream and downstream metabolisms, and of the current best guess regarding the origin of life on Earth; the constitution of the last universal common ancestor. Most fascinating of all, due to its similarity with the inefficient process of photosynthetic primary production, is the statement “cities with lower productivity levels are characterised by higher growth rates.”

LUCA and the progenotes
The idea that any group of modern organisms inherited their genes from a single common ancestor is naive. Much more likely is that the last universal common ancestor (LUCA) was a complex and diverse, sophisticated global community.(30) Early life forms were particularly promiscuous, sharing their genes in a process called horizontal gene transfer (HGT); moving genetic materials, signals, metabolic components, and other resources between cells without necessarily reproducing the entire cell.

“Most researchers now believe we should think of LUCA as a pool of genes shared among a host of primitive organisms [though] some biologists believe that horizontal gene transfer makes LUCA unknowable.”
– Whitfield, 2004

Whitfield (2004) proposes that individual cellular components of the LUCA collective may have independently learned how to solve similar problems, such as membrane construction, or the extraction of energy from certain organic molecules, and that HGT allowed for promiscuous sharing of genes coding such solutions with other cells in the commune.

The cellular functions of modern organisms rely on complex enzymatic machinery. Generally enzymatic components are encoded by several noncontiguous genes, which may be located in different regions of the genome. In contrast, the earliest genes would each have encoded an enzymatic product able to function as a stand-alone functional module – “like cassettes that can be loaded, removed and replaced. Antibiotic-resistance genes are like that today.”

The Darwinian threshold, estimated to have occurred 3.5 billion years ago, represents the point in biological history when inheritance and mutation of genes replaced HGT as the dominant mode of evolution; individual cells became more complex and their functions became less interchangeable.

Carl Woese (1998) proposed that the LUCA was not a discrete entity, but a diverse community of cells surviving and evolving as a collective.(31) “This communal ancestor has a physical history but not a genealogical one. The [LUCA] cannot have been a particular organism, a single organismal lineage. It was communal, a loosely knit, diverse conglomeration of primitive cells that evolved as a unit, and it eventually developed to a stage where it broke into several distinct communities, which in their turn become the three primary lines of descent. – The universal ancestor is not an entity, not a thing. It is a process […]. Progenotes(note G) were very unlike modern cells. Their component parts had different ancestries, and the complexion of their componentry changed drastically over time. All possessed the machinery for gene expression and genome replication and at least some rudimentary capacity for cell division. But even these common functions had no genealogical continuity, for they too were subject to the confusion of lateral gene transfer. Progenotes are cell lines without pedigrees, without long-term genetic histories. With no organismal history, no individuality or “self-recognition,” progenotes are not “organisms” in any conventional sense.”

Individually, progenotes differed metabolically, their small genomes necessitating individual metabolic simplicity. Collectively however, the diverse and noncontiguous genome of the progenote population was totipotent, and HGT greatly facilitated the spread of innovations through the population, endowing the progenote community with an enormous evolutionary potential.

“not individual cell lines but the community of progenotes as a whole […] survives and evolves”
– Woese, 1998
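The claim that lateral exchange spreads innovations through a population far faster than vertical inheritance alone can be caricatured with a toy expectation model (a sketch of my own, in Python, not derived from Woese or Whitfield; population size, selective advantage and transfer rate are arbitrary illustrative values):

def generations_to_fixation(pop_size=1000, hgt_rate=0.0, growth_advantage=0.1):
    # Toy model: the fraction of cells carrying a beneficial gene grows each
    # generation by a small selective advantage (vertical inheritance), plus
    # lateral pick-up by non-carriers at rate hgt_rate (horizontal transfer).
    fraction, generations = 1.0 / pop_size, 0
    while fraction < 0.99:
        fraction = min(1.0, fraction * (1 + growth_advantage))     # vertical
        fraction = min(1.0, fraction + hgt_rate * (1 - fraction))  # horizontal
        generations += 1
    return generations

print("vertical inheritance only:", generations_to_fixation(hgt_rate=0.0))
print("with horizontal transfer: ", generations_to_fixation(hgt_rate=0.05))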

Glansdorff et al (2008) teach that “the origin of viruses and their possible role in evolution have opened new perspectives on the emergence and genetic legacy of LUCA”.(32) Order and its corollary, organization, have increased during the evolution of biological systems. Complexity remains a rather poorly defined concept, except in the abstract sense of non-computability; irrationality.

Molecular genetic studies have allowed researchers to infer a sophisticated genomic and metabolic capacity for the LUCA. Generally, the view is one of a diversified and promiscuous community, collectively housing a wide spanning genetic redundancy. “It is indeed very likely that most cells in an ancestral community having engendered the diversity of metabolic functions found in the three Domains possessed more than a single copy of every essential gene as well as numerous paralogous genes. This redundancy could have been selected for as an important survival factor for cells with a still primitive, not fail-safe division mechanism.” As we shall see later, functional redundancy, and an apparent ceiling thereof, is documented as an aspect of the relationship between diversity and productivity.
[Figure: Schematic representation of hypothetical emergence and legacy of the LUCA.(33)]

Promiscuous and multiphenotypic, dynamic and unstable, LUCA existed as a continual process of unregulated (or poorly regulated) incorporation and/or rejection of innovations via lateral exchanges of genomic and/or catalytic components, presumably via a merging process similar to phagocytosis, between cells devoid of rigid envelopes, living as a community in a broad range of temperatures and chemical environments. The community concept allows for the explanation of major transitional events in evolution, via genetic exchanges within an ancestral and promiscuous community, generating a large variety of forms from which new classes of entities may independently emerge at a new level of complexity. “The emergence of the first Domain must have been the outcome of a crisis rather than a progressive development.”

“Above a certain level of diversification and catalytic interconnections, the [prebiotic] system would undergo ‘catalytic closure’, thereby becoming capable of self-replication.” Catalytic closure refers to a situation in which all catalysts (enzymes) required for metabolism are synthesized within a cellular system. However, catalytic closure does not require all the catalysts to be enclosed within an individual cell membrane, as evidenced by the many and varied examples of obligate symbiosis, including for example our own human state of obligate syntrophy, facilitated by the microbiome of our digestive tract.

The picture painted here is one of LUCA and the progenotes as metabolically and morphologically overlapping heterogeneous communities, continually shuffling around genetic material, which may have been composed of RNA, or DNA, or even a combination of the two. A great but not completely localized conglomeration of biologically diverse actors, collectively producing a common good. Taking a broad view, it may not be terribly unrealistic to assume that the modern planetary biome, driven by a vast variety of symbioses, still exists in this more-or-less promiscuous and evolvable state of nature.

Collective divisions of labour: biological multi-dimensionalism
Clonal populations of wild type Bacillus subtilis can diversify to express at least five (documented) distinct cell types, each associated with a specialized function.
1) Motile cells express flagella, which propel cells in low viscosity environments.
[Figure: Schematic diagram of flagellar structure.]

2) Surfactin-producing cells secrete an amphiphilic surfactant compound that acts to reduce the surface tension of water, as well as functioning as a communication signal, and as an antimicrobial agent (anti-bacterial, anti-viral, anti-fungal, anti-mycoplasmal, and hemolytic). The various services rendered by Surfactin are embedded within the communal micro-habitat, thus bettering the living conditions for all cells comprising the local cellular collective; for this reason Surfactin is considered to be a public good.
[Figure: Structural formula of a surfactant.]

3) Matrix-producing cells secrete extracellular polymeric substances (EPS), the structural protein TasA, and a variety of antimicrobial compounds. EPS acts in a similar manner to the extracellular matrix in higher animals; a biotic medium surrounding and binding cells, facilitating temporary storage and transfer of information and resources between cells, and generally functioning to buffer the cellular collective from environmental stressors. As a component of the EPS, TasA assembles into amyloid-like fibers that attach to cell walls and play a critical role in the formation of various colony morphologies, and in some modes of colonial expansion. The EPS, including the various functional compounds and morphologies embedded within it, is considered to be a public good.
[Figure: Scanning electron micrograph of biofilm produced by collective secretion of EPS by B. subtilis.]

4) Protease-producing cells secrete enzymes that facilitate nutrient acquisition. Secreted proteases are considered public goods.
[Figure: Schematic diagram of protease function.]

5) Sporulating cells produce stress-resistant bodies (spores) that can survive extended periods of adverse environmental condition.
[Figure: Electron micrograph showing an endospore held within a cell body.]

Here then is a tentative list of possible states – the phase space of evolutionarily stable strategies of B. subtilis. Importantly, relative proportions of the various specializations observed in any individual colony develop as a result of the environmental condition(s) experienced by the cell collective, and are geared to propagate and increase the common good. Specifically, Gestel et al (2015) have shown that migration of B. subtilis over a solid surface is dependent upon differentiation of cells in a clonal colony into two distinct phenotypes: surfactin-producing cells and matrix-producing cells. Collectives of these cell types form highly organized structures that the authors have named ‘van Gogh bundles’; tightly aligned, elastic filamentous loops; chains of cells that push themselves away from the colony edge. The geometries of van Gogh bundles are mediated via mechanical cellular interactions, with small-scale local changes (cell elongation, division, orientation, and polar interactions) at the level of individual cells determining the collective properties of expanding filamentous loops, emergent at the colony level.(33)

[Figure: Two distinct cellular phenotypes arising from differentiation of a clonal population of wild type B. subtilis. Surfactin-producing cells (red), matrix-producing cells (green).(34)]

Though migration surely is a good strategy for cells living in a limiting environment, we cannot rightly assume that individual bacterial cells are aware of colony-level (organismal) behaviors. In the specific example studied by Gestel et al, cells live on a solid surface, making individual ‘selfish’ action (flagellar motility) impossible. Apparently the only manner in which individual cells can migrate away from such an environment is via diversified and cooperative, collective action. Though environmental stimuli are important determinants of the differing growth phases of cell collectives, cell differentiation is also inherently stochastic. Gestel et al tell that “under constant environmental conditions, cells can spontaneously differentiate [metabolically switching] into matrix-producing cell chains that are preserved for a number of generations due to a regulatory feedback loop.”

B. subtilis is not the only ‘unicellular’ or ‘single-celled’ species to exhibit a multicellular lifestyle. “Filamentous structures also occur during the colony growth of Paenibacillus vortex and B. mycoides.” Also B. cereus has been shown to switch to a multicellular lifestyle when grown on filter-sterilized soil-extracted soluble organic matter (SESOM) or artificial soil microcosm (ASM) – physical models of environmental conditions that cells encounter in soils. In all four microbial species, multicellularity allows for and facilitates migration via emergent common goods. Interestingly, the domesticated strain B. subtilis 168, which is documented as defective in surfactin production, cannot make the switch to a multicellular lifestyle when grown on SESOM or ASM.

There is an interesting observation to be made here in regard to ESS theory. This mathematical, logical descendant of game theory is depicted in the literature essentially as a binary system, comprising cooperative and altruistic ‘dove’ actors versus selfish and aggressive ‘hawk’ actors. In contrast, B. subtilis is presumed to be a quinary system of evolutionarily stable strategies, comprising five expressible types of actor, as well as the higher-level collective actor(s) that emerge from synergy between groups of cellular actors – “the formation of van Gogh bundles depends critically on the synergistic interaction of surfactin-producing and matrix-producing cells.”

“Some problems can be solved only when individuals act together. This applies to bacteria in the same way that it applies to humans.”
– Gestel et al, 2015
[Figure: Stigmergic ants cooperate to move a large food article to the nest. Individuals lifting the load cannot ‘see’ where the nest is; a ‘driver’ (bottom of image) nudges the ‘lifters’ in the direction of the nest.]

The diversity-productivity relationship
Difficulties in finding or creating metrics of the common good are widespread. Bouter (2010) has professed that “knowledge is a common good”, pointing out that “finding good indicators of scientific quality is no easy task”. Recognizing that “research is becoming less and less the exclusive province of the universities”, Bouter calls for “co-operation in a variety of changing contexts”. In specific regard to evaluation of the societal relevance of scientific research, he has suggested there is “plenty of room for discussion about the validity of the indicators, the optimum level of detail and weighing up the relative importance of its various aspects. […] However, it is clearly too early to adopt a strong quantitative approach.”(34) In fact, there is no standard metric of the common good.

Standardized quantification of diversification and specialization processes, and of diversified or specialized states, has also proven largely intractable, with various researchers using, or creating, differing working definitions and tools. Nevertheless, studies of diversity have been endowed with a probabilistic metric called the diversity index. This theoretical object has been interpreted in a variety of ways; relatives of the diversity index have been used by ecologists in studies of the relationship between plant diversity and ecosystem function, generally showing that “productivity increases with diversity”(35). From these studies has emerged a statistical model of “a fundamentally important ecological pattern”(36) called the diversity-productivity relationship (DPR).
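The studies surveyed here do not commit to one particular index, but the Shannon index and Pielou’s evenness are common choices in the ecological literature; a minimal sketch (in Python, with made-up abundance counts) shows how the same species richness can yield very different diversity scores depending on evenness:

from math import log

def shannon_index(abundances):
    # Shannon diversity H' = -sum(p_i * ln p_i), where p_i is the relative
    # abundance of species i; it rises with richness and with evenness.
    total = sum(abundances)
    return -sum((n / total) * log(n / total) for n in abundances if n > 0)

def pielou_evenness(abundances):
    # Pielou's evenness J' = H' / ln(S): 1.0 when all S species are equally
    # abundant, approaching 0 when a single species dominates.
    richness = sum(1 for n in abundances if n > 0)
    return shannon_index(abundances) / log(richness) if richness > 1 else 0.0

even_plot = [25, 25, 25, 25]   # four species, equally abundant
skewed_plot = [85, 5, 5, 5]    # same richness, one species dominates
print(shannon_index(even_plot), pielou_evenness(even_plot))      # ~1.39, 1.0
print(shannon_index(skewed_plot), pielou_evenness(skewed_plot))  # ~0.59, ~0.42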

Zhang et al (2012) tell that the DPR “has received considerable attention during the past two decades”, and that numerous grassland experiments have demonstrated positive DPRs; that is, production of biomass increases with increased biodiversity.(37) A positive DPR coexists with increases of resource use, nutrient retention and cycling, niche differentiation and inter-species facilitation. Generally, the greater the diversity of organisms in an ecosystem, the better each organism (or group) is able to survive and reproduce, due to increases of nutrient abundance, resource availability, habitat partitioning and mutualistic symbioses. Critically, the DPR body of knowledge includes insignificant and negative, as well as positive, effects of biodiversity on productivity. These should be expected, however, as results of physical (environmental) limitation and of differences of assumption and quantification in individual studies.

DPR studies tend not to show direct links between ecological mechanisms and positive DPRs. This failure, or inability, results partially from the form of scientific inquiry; a necessarily narrow field of view, focused upon one, or a very few, specific aspect(s) of the object or process being studied. In a meta-analysis of global forest productivity, Zhang et al have commented that the majority of “DPR studies have chosen species richness as the measure of species diversity to define and interpret DPRs. However, richness alone cannot fully represent species diversity in relation to ecosystem functioning because it ignores the influence of species evenness (relative abundance) on [interspecies] interactions. The lack of understanding of species evenness in DPRs is presumably limited by traditional experimental and statistical methods.”

Zhang et al chose three dimensions of productivity for their DPR meta-analysis.
1) Biomass: kg of cellulose, though in reality a great deal more and varied biological material is present.
2) Volume: m³ of forest canopy.
3) Basal area: m² of forest floor.

The former two (biomass and volume) vary with biological activity; the latter is invariant. All three represent limited common goods. It is important to realize that none of these dimensions, neither individually nor collectively, accounts for actual forest ecosystem productivity, because a great deal of biological activity crucial to aboveground production of biomass and volume occurs below the forest floor, in the shallow layer of topsoils ignored by the global meta-analysis. Similarly, other obvious environmental factors, such as solar radiation and meteorological water, have been excluded, presumably along with a vast array of less obvious or unknown factors. Even so, Zhang et al have concluded, in agreement with the majority of DPR studies, that positive DPRs are a global phenomenon in forest ecosystems, commenting that “polycultures are generally more productive than mono-cultures”, and that evenness of the canopy volume, as well as contrasting traits between various organisms, are central components of positive diversity-productivity relationships. Furthermore, they report the existence of a diversity plateau at the high end of the species richness range, resulting from functional redundancies among species cohabiting an ecosystem. Thus, ecosystemic synergy is driven toward a diversity-productivity ‘ceiling’, imposed by functional redundancy, which we may well define as homeostasis of the common good.
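The reported plateau can be caricatured with a toy model of functional redundancy (an illustrative sketch of my own, in Python, not Zhang et al’s statistical method): if each species performs one of a fixed pool of ecosystem functions, a ‘productivity’ measured as the fraction of functions covered rises steeply with richness and then saturates once most functions are redundantly supplied.

import random

def toy_productivity(richness, n_functions=20, trials=200, seed=0):
    # Each species performs one of n_functions ecosystem functions, chosen at
    # random; 'productivity' is the fraction of distinct functions covered by
    # the community, averaged over repeated random assemblies.
    rng = random.Random(seed)
    covered = 0.0
    for _ in range(trials):
        functions = {rng.randrange(n_functions) for _ in range(richness)}
        covered += len(functions) / n_functions
    return covered / trials

for richness in (1, 2, 5, 10, 20, 40, 80):
    print(richness, round(toy_productivity(richness), 2))  # rises, then plateaus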

This last point exposes what I believe to be a fundamental sociophysical phenomenon of critical importance to the understanding of common goods and of sustainable development: natural limits are imposed upon all complex systems. Interestingly, if shade is viewed as a phenomenon emerging from the metabolic activities of plant growth, and if the shade so produced drives speciation, then we may rightly consider shade to be a limited common good.

Trogisch (2012) has focused upon processes occurring below the forest floor, specifically the states of nitrogen and leaf litter decomposition in soil samples from a subtropical forest. He has suggested that primary productivity and nutrient cycling be considered common goods, and has confirmed a consensus regarding the reduced vulnerability of diversified ecosystems to environmental stress. Furthermore, he has proposed functional redundancy among diverse species as a systemic stabilizer, allowing ecosystem functions and services to remain unchanged, or less affected, after environmental perturbation.(38)

“Forests account for 80% of the world’s plant biomass and are therefore a main driver and component of the Earth’s biogeochemical cycles. Their versatile services such as climate regulation and protection of soil resources, denotes them as one of the most important terrestrial ecosystems for human wellbeing.” Indeed one may justly argue that forest ecosystems are common goods that propagate wellbeing for a vast, uncounted, number of species.

A most remarkable passage in Trogisch’s thesis teaches that “decomposition dynamics in mixed leaf litter often show non-additive effects so that [nitrogen] is released at a faster rate than predicted from decomposition rates of corresponding single-species leaf litter. Such litter diversity effects during decomposition can lead to a feedback reaction positively influencing plant productivity”. Thus, species diversity affects irrational, non-computable, synergistic processes that act to increase and stabilize the common good.

Jacobs knowledge spillover: relating the DPR with the common good in an economic context
Jane Jacobs questioned why some cities grow and others decay. Her theory of agricultural origin, published in 1969, proposed that agricultural knowledge and practical technologies emerged from a diversified human collective. Jacobs concluded that “high and sustained levels of innovative behavior and entrepreneurship inevitably result in the increased diversification and complexity of the local economic base over time and that a diversified urban economy provides the best setting for entrepreneurial and innovative behavior”. Thus, increases in the number and diversity of divisions of labor endow an economy with an increased capacity for production of goods and services.(40)

Reviewing Jacobs, Desrochers & Hospers (2007) list four characteristics of economic systems(39) that are also common to biological systems:
1) Development is dependent upon the self-organization of numerous and various complex relationships, from which differentiations emerge, giving rise to an organ from which further differentiations emerge.

2) Expansion (growth) is dependent upon the capture and use of energy. The greater the diversity of means for capturing, using, recapturing, and reusing energy before its discharge from the system, the more resilient the system is.

3) Self-maintenance (constitutive self-regulation) is an intrinsic systemic process, incorporating positive and negative feedback, along with aspects of development and growth.

4) Evasion of systemic collapse incorporates self-maintenance, bifurcation, positive and negative feedback, and emergency adaptations, together helping to ensure systemic longevity. However, entropic effects are certain to impact upon any system, as a gradual increase of disorder (disorganization) in internal (systemic) and external (environmental) structures.

The similarities between ecology and economy in regard to the relationship between diversity and productivity are striking. Critically however, the economic literature ignores, or fails to identify, the presence of natural limits to productivity imposed by a diversity plateau; a functional redundancy among local actors. Building upon Desrochers & Hospers (2007), I propose that the emphasis of economics in modern culture has switched from natural diversity and complexity to artificial specialty and simplicity; from a natural stable-state driven by dynamism, to an unnatural unstable-state propagated by statism; from divergent inefficient creativity, to convergent efficient monotony.

As seems to be the case with all research attempting to relate diversity and productivity, Desrochers & Leppala have admitted that the frequency and relative importance of Jacobs spillovers (a diversity index of knowledge sharing) could not be measured satisfactorily, commenting that “simply because something is immeasurable does not mean that it is necessarily unobservable, unintelligible or unimportant.”(40)

The synergistic function of complex systems, identified here as the Jacobs spillover and the DPR, is reminiscent of the messy workspace phenomenon – in which the current project(s) may ‘shake hands’ with past works and even future hopefuls, allowing for greater capacities of creative problem solving, insight, adaptation and innovation. Vohs et al (2013) have reported that “disorderly environments […] can produce highly desirable outcomes, […] encourage novelty-seeking and unconventional routes, [thus stimulating] creativity, which has widespread importance for culture, business, and the arts.”(41) Strangely, and rather irrationally, Vohs et al have omitted the sciences from their list of beneficiaries, thus apparently denying scientific pursuits the privilege of “disorderly environments”.

In 1945, the economist and Nobel laureate Friedrich Hayek suggested that “any approach, such as that of mathematical economics with its simultaneous equations, which in effect starts from the assumption that people’s knowledge corresponds with the objective facts of the situation, systematically leaves out what is our main task to explain.” He believed that “objective or scientific knowledge is not the sum of all knowledge”, that there are other unorganized kinds of knowledge. Critical of economic theory, Hayek proposed that, in reality, no one has perfect information, only the capacity and skill to find information.(42) Thus the reality of economics is not, as commonly held by economists, a pure logic of choice, but rather “knowledge relevant to actions and plans”.(40)

“Unfortunately for mathematical economists, this kind of knowledge [relevant to actions and plans] cannot enter into statistics: it is mostly subjective”.(40)
– Friedrich Hayek, 1945

“There is something deadening to the human mind in uniformity; progress comes through variation.”(40)
– Malcolm Keir, 1919

Desrochers & Leppala (2011) describe an essential aspect of creativity (divergent thinking) as “the capacity to look beyond the normal application context of artifacts and ideas”. Creative, inventive and innovative progress, leading to increases in diversity, knowledge and productivity, is facilitated by opportunities for specialists to explore areas in which they are not experts, and to work on several different projects simultaneously, by means of a variety of familiar and unfamiliar methods. This pair of practical concepts is the path to polymathy. Unsurprising then, that polymaths are viewed by history as individuals who have produced the greatest common good – in the sense that they have given, most often at no cost, greatly useful intellectual gifts to humankind.

Common uncertainty: the diversity index
In a meta-analysis of global economic development, aimed at drawing generic conclusions for all countries with available data, Kaulich (2012) echoes the concerns of Farhauer & Kröl (2012), Bouter (2010), Zhang et al (2012), and Desrochers & Leppala (2011), reporting that “different and sometimes conflicting definitions and measurements of diversification/specialization have been used, together with different datasets”.

The economies of all countries are based upon agriculture, with the successful export of agricultural goods allowing for diversification away from primary production, via the manufacture of initially simple products, leading to increasingly sophisticated activities. Diversification, claims Kaulich, is intrinsic to, and is the driving force of, economic development.

Kaulich has also found a positive relationship specifically between the diversity of products exported by an economy and its per capita level of income.(46) At “quite a high level of income per capita” (~ $22,000 / year) economic diversification of the average country slows down, led by the manufacturing sector toward a plateau. Thus, as a country transitions from a developing to a developed economy, it simultaneously encounters a diversity ‘ceiling’, which limits its economic growth. This pattern is very similar to the ecological DPR, in which productivity is driven toward a diversity ‘plateau’ imposed by functional redundancies among species cohabiting an ecosystem. Is it fair, then, to speak of an economic diversity-income relationship, and of economic homeostasis?

“A country’s economic growth may be defined as a long-term rise in capacity to supply increasingly diverse economic goods to its population.”(43)
– Kuznets, 1971

“Whatever it is that serves as the driving force of economic development, it cannot be the forces of comparative advantage as conventionally understood. The trick seems to be to acquire mastery over a broader range of activities, instead of concentrating on what one does best.”(44)
– Rodrik (2004)

“The common notion to specialize in “what one does best” as a means to achieve economic prosperity and hence poverty reduction seems to be fundamentally wrong.”(45)
– Kaulich, 2012

Kaulich cites an earlier report, UNIDO (2009), suggesting that re-specialization may occur at the high-income end of economic development. This affords a diplomatic position within the diversity vs. specialization debate, which Kaulich makes masterful use of, posing that economic theories arguing exclusively for or against economic specialization appear contradictory, but may both be correct, albeit identifiable at differing points in the economic development of a country. However, his own analysis of global trade data does not conclusively show a U-curve, suggestive of a decrease in economic diversification at the high-income end in combination with continued increase of income. Instead, Kaulich has confidently reported an L-curve.
[Figure: Sketch graph showing economic diversification increasing with product sophistication and income per capita, leading to a diversity-income plateau. Adapted from UNIDO (2012).]
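The U-curve versus L-curve question is, at bottom, a question about the slope of diversification against income at the high-income end of the sample. The following sketch (in Python with NumPy, using invented data points rather than Kaulich’s UN trade data or his actual estimation method) shows one simple way to pose it:

import numpy as np

# Hypothetical (income per capita, export-diversification index) observations;
# the actual analyses use UN trade data and purpose-built indices.
income = np.array([1, 2, 5, 10, 20, 30, 40, 60], dtype=float) * 1000
diversification = np.array([0.15, 0.30, 0.55, 0.75, 0.85, 0.87, 0.88, 0.88])

# Fit a line to the high-income half of the sample: a clearly negative slope
# would suggest re-specialization (the U-curve reading), while a slope near
# zero suggests a plateau (the L-curve reading).
high_income = income >= np.median(income)
slope = np.polyfit(np.log(income[high_income]), diversification[high_income], 1)[0]
print("high-income slope:", round(slope, 3))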

In stating that “successful policies for economic diversification cannot consist of a top-down process with a static set of rules for the private sector”, the UNIDO working paper clearly advocates a policy admissive of complexity; reliant upon self-regulation, and based upon bottom-up self-organization of diverse actors.

Discussion:
The use of various diversity indices in empirical studies of ecologies and of economies has produced a pattern among observations. A generally positive relationship is identified between quantitative measures of diversity and productivity, leading to a plateau at the high-diversity end of abundance and evenness.

One must ask: is the observed limit a physical, entropic, phenomenon, or an artifact of the diversity index? Irrationally, I prefer the former, and suggest that various independent empirical studies have collectively identified an apparent homeostatic epiphenomenon of sociophysical dynamism; steady-state animism on a macro scale, perhaps even a planetary scale. A common-good-state-of-nature.

It should be appreciated that the terms ‘synergy’, ‘epiphenomenon’ and ‘sociophysics’ sit rather uncomfortably within the envelope of science, because their meanings act as signposts toward an understanding of metaphysics. Perhaps Rosen intuited correctly that relational studies of living systems may produce new knowledge of physics and result in profound changes for science?

At the very least, scientific understandings of economics and politics appear to be fundamentally incorrect, requiring revisions permitting the inclusion of non-computable phenomena, emerging from interactions between diverse actors to produce common goods.

Notes:
F) Hypothesis:
i) Universally, the collective efficiency of a diverse set of actors is greater than that of a specialized set of actors.
η(ΣAd) > η(ΣAs) → U

ii) Locally, the collective efficiency of a specialized set of actors is greater than that of a diverse set of actors.
η(ΣAs) > η(ΣAd) → L

Where U denotes universal (i.e. global) effect, L local effect, η efficiency, Σ sum (collective), Ad a diverse actor, and As a specialized actor.

Hypothetical predictions:
A diverse set of actors is a necessary prerequisite for the emergence of specialized actors.
A diverse set of actors is a necessary prerequisite for the emergence of common goods.

G) Progenotes are defined as the organic elements comprising the communal ancestor of the lineages now assumed to form the phylogenetic ‘tree of life’.

Bibliography:
28) O. Farhauer & A. Kröl, “Diversified Specialisation – Going One Step Beyond Regional Economics’ Specialisation-Diversification Concept”, (2012), Jahrbuch für Regionalwissenschaft, Vol. 32, Number 1, p. 63-84, http://www.uni-passau.de/fileadmin/dokumente/fakultaet/wiwi/VWL/Agglo-Text_120110_Homepage.pdf

29) “The collapse of manufacturing”, (February, 2009), The Economist, http://www.economist.com/node/13144864

30) J. Whitfield, “Origins of life: Born in a watery commune”, (2004), Nature Vol. 427, p. 674-676, abstract: http://www.nature.com/nature/journal/v427/n6976/full/427674a.html

31) C. Woese, “The Universal Ancestor”, (1998), Proceedings of the National Academy of Sciences of the USA, 95(12): 6854–6859, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC22660/

32) N. Glansdorff, Y. Xu & B. Labendan, “The Last Universal Common Ancestor: emergence, constitution and genetic legacy of an elusive forerunner”, (2008), Biology Direct, http://www.biologydirect.com/content/3/1/29

33) J. Gestel, H. Vlamakis, R. Kolter, “From Cell Differentiation to Cell Collectives: Bacillus subtilis Uses Division of Labor to Migrate”, (2015), PLOS Biology, http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002141

34) L. Bouter, “Knowledge as a common good: the societal relevance of scientific research”, (2010), Higher Education Management and Policy, Vol. 22/1, http://www.keepeek.com/Digital-Asset-Management/oecd/education/knowledge-as-a-common-good_hemp-v22-art8-en#page1

35) J. van Ruijven and F. Berendse, “Diversity-productivity relationships: Initial effects, long-term patterns, and underlying mechanisms”, (2004), Vol. 102.3, PNAS, abstract http://www.pnas.org/content/102/3/695.abstract

36) H. Hillebrand and B. Cardinale, “A critique for meta-analyses and the productivity-diversity relationship”, (2010), Ecology, Vol. 91.9, p. 2545-2549, http://snre.umich.edu/cardinale/wp-content/uploads/2013/02/Hillebrand_Cardinale_Ecology_2010.pdf

37) Y. Zhang, H. Chen, P.Reich, “Forest productivity increases with evenness, species richness and trait variation: a global meta-analysis”, (2012), Journal of Ecology, Vol.100, p.742–749, http://forestecology.cfans.umn.edu/prod/groups/cfans/@pub/@cfans/@forestecology/documents/article/forestproductivityincreases.pdf

38) S. Trogisch, “The functional significance of tree diversity for soil N-pools, leaf litter decomposition and N-uptake complementarity in subtropical forests in China”, (2012), ETH ZURICH, http://e-collection.library.ethz.ch/eserv/eth:6313/eth-6313-02.pdf

39) P. Desrochers & S. Leppala, “Opening up the ‘Jacobs Spillovers’ black box: local diversity, creativity and the processes underlying new combinations”, (2011), Journal of Economic Geography, Vol 11, p. 843–863, abstract only http://joeg.oxfordjournals.org/content/11/5/843

40) P. Desrochers and G-J. Hospers, “Cities and the Economic Development of Nations: An Essay on Jane Jacobs’ Contribution to Economic Theory”, (2007), Canadian Journal of Regional Science, Vol. 3(1), p. 115-130, http://geog.utm.utoronto.ca/desrochers/CJRS_Jacobs.pdf

41) K. Vohs et al, “Physical Order Produces Healthy Choices, Generosity, and Conventionality, Whereas Disorder Produces Creativity”, (2013), Psychological Science Vol 24(9), p. 1860–1867, abstract http://pss.sagepub.com/content/early/2013/08/01/0956797613480186.abstract

42) B. Godin, “The Knowledge Economy: Fritz Machlup’s Construction of a Synthetic Concept”, (2008), http://www.csiic.ca/pdf/godin_37.pdf

43) S. Kuznets, “Modern Economic Growth: Findings and Reflections. Prize Lecture”, (1971), Lecture to the memory of Alfred Nobel, http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/1971/kuznets-lecture.html

44) D. Rodrik, “Industrial Policy for the Twenty-First Century”, (2004), Harvard University, https://www.sss.ias.edu/files/pdfs/Rodrik/Research/industrial-policy-twenty-first-century.pdf

45) F. Kaulich, “Diversification vs. specialization as alternative strategies for economic development: Can we settle a debate by looking at the empirical evidence?”, (2012), Development Policy, Statistics and Research Branch, UNIDO, http://www.unido.org//fileadmin/user_media/Publications/Research_and_statistics/Branch_publications/Research_and_Policy/Files/Working_Papers/2012/WP032012_Ebook.pdf

The Common Good: a semi-rational emergent property of complex collective interaction between diverse actors – Part I

The common good invariably requires diversification, manifest as random fluctuations within the biological phase space from which emerge divisions of labour, and thus necessarily, inequalities among individuals comprising a social collective. Entropic forcing drives increases of the common good, via increased diversity, to an apparent limit.

Explorations are made of philosophical (Part I) and empirical (Part II) studies in politics, biology, and economics.

Cooperation via collective divisions of labour is a necessary prerequisite to biological metabolism and reproduction. A collective comprising diverse actors is thus assumed fundamental to the planetary biome. The preponderance of benefit (here designated ‘the common good’) that emerges for actors (individuals and groups), is mediated by Woesean collective cooperation, defined as “a diverse community of cells(note A) surviving and evolving as a biological unit.”(1)

“Diversity is an asset with which to confront uncertainty.”
– Groschl, 2013

Part I: Philosophical observations, models and theoretical analyses

Politikos: definition and mediation of the common good
Commenting on Aristotle’s political theory, F. Miller (2011) tells that “the modern word ‘political’ derives from the [Ancient Greek πολιτικός] ‎politikós, ‘of, or pertaining to the polis’ [polis translates as ‘city-state’, or city]. City-states like Athens and Sparta were relatively small and cohesive units, in which political, religious, and cultural concerns were intertwined. The extent of their similarity to modern nation-states is controversial.”(2)

As a point of interest, Amish culture, described in a previous post titled The Worldly and The Amish, represents a modern, relatively small and cohesive population unit, in which political, religious, and cultural concerns are intertwined. Presumably, the world’s remaining populations of ‘primitive’ peoples (nations) would also fit this description, so Miller’s controversy appears to exist principally between modern globalized (‘worldly’) culture and what one might loosely term ‘old school cultures’, or perhaps the ‘old world order’.

Edward Jenks’ well informed comment, describing a founding and central aspect of political states, seems much less controversial: “[Evidently,] all political communities of the modern type owe their existence to successful warfare. As a natural consequence, they are forced to be organized on military principles […].”(3)

Referring to warring as “sad”, Jenks (1909) posed that plunder is easier, or at least quicker, than working to build up and equip a household, and that men would be unwilling to give up a household; property. The resulting conflict, more than feudalism, developed the practical knowledge of plunder – how best to get stuff with a minimal input of work, and how best to protect the stuff you have worked to accumulate. War, then, is a result of ownership and property.

[Figure: Jacques Callot, “Plundering a Large Farmhouse”, (1633), plate 5, The Miseries of War. Inscribed: Here are the fine exploits of these inhuman hearts. They ravage everywhere. Nothing escapes their hands. One invents tortures to gain gold, another encourages his accomplices to perform a thousand heinous crimes, and all with one accord viciously commit theft, kidnapping, murder and rape.]

Warfare and military organization were surely intrinsic to city-states existing during Aristotle’s lifetime, which he described as comprising a collection of parts (natural resources, households, and individual citizens), together taking a compound form, and a certain order, defining the constitution of the state. For Aristotle, state constitution was not just a theoretical, ‘on paper’, statement of cultural ideals, but an immanent organizing principle analogous to the soul (spirit or genius) of an organism. Thus the Aristotelian constitution of the polis is the way of life of the citizens.(2)

In accordance with Aristotle’s political naturalism, political episteme (from Ancient Greek ἐπιστήμη, epistḗmē, ‘knowledge’) incorporates various practical sciences, such as the art of war (military), the art of household management (economy: from Ancient Greek οἰκονομία, oikonomia, ‘management of a household’, ‘administration’), and the art of language (rhetoric: from Ancient Greek ῥητορικός, ‎rhētorikós, ‘concerning public speech’). Critically, all practical sciences are means of rendering a collective human good. “Even if the end is the same for an individual and for a city-state, that of the city-state seems at any rate greater and more complete to attain and preserve. For although it is worthy to attain it for only an individual, it is nobler and more divine to do so for a nation or city-state”.(2)

“The needs of the many outweigh the needs of the few – or the one.”
– Spock & Kirk, Star Trek II: The Wrath of Khan, 1982

Aristotelian political episteme refers to knowledge of how, why, when and where among the citizenry, noble acts and happiness occur, leading to an understanding of how, where and when to act; implementing policy in order to promote general goodness (a common good quality of life) for the state.

Modern political science does not inspire a great deal of noble action or happiness in citizens; if it did, then commoners would surely all hold more respect for careering politicians – a role of state that each of us plays, either by direct action or indirectly by deference of action. The fact that so many modern citizens tend to believe that deferring their individual governing responsibility to an unknown group of ‘representatives’ is better for them as individuals, as well as for the commons, than collective self-governance clearly shows a lack of political episteme, and hence a faith in political science – a faith in systematized governance, definable as technocracy.

A culture of faith in technocracy renders an equivalence between the church (spiritual affairs) and the state (affairs of governance), which is inescapable even if – or perhaps particularly if – one assumes oneself to be a divine ruler. This common faith of modernity invalidates the controversy suggested by Miller (2011), regarding the extent to which ancient city-states and modern states are (dis)similar; political, spiritual and cultural affairs are as intertwined in modernity as they were in antiquity.

Groschl (2013) propagates the Aristotelian meaning of political episteme, as concerning collective life for its own sake, and he suggests that modern political science acts to prevent people from accessing an understanding of what politics means.(4)

Here then is a guide:
Political life renders a constitution; the socio-physical epifunction of a population that emerges from a cultural milieu. It is not attributable to any individual or group, but comprises a collection of individual and/or group interactions within and between a population and its local environment.

Political episteme is a collection of arts; the practical and theoretical knowledge of noble action and happiness of citizens, the purpose of which is to ensure a good constitution; a common good quality of life for a population.

Political science is the practical and theoretical knowledge of distribution and management of power and resources.

Better worded definitions do not detract from the difference in meaning between the latter two. Epistemes are outrospective, mostly open and giving. Sciences are introspective, mostly closed and reductive.

Political science – indeed science of any kind – is attributable solely to humans, and in particular to modern, ‘western’ (now ‘global’) affluent culture. Groschl teaches that political science has been tuned to Hobbesian political philosophy, leading into an era of misconception, or possibly preconception, about the meaning of economy – which is now assumed to be an intrinsic, if not central, aspect of politics. In modernity, both politics and economy have been redirected to face inward, targeting individual private interests as their primary beneficiaries. So it is due to the moral of modern society (the modern worldview) that the rights and ambitions of the individual are elevated to a near holy status. We assume genius to be ‘proprietary’ to an individual, rather than being the result of the commons; emerging gracefully from the cultural milieu – the complex and uncertain interactions of many and varied actors. Interestingly though, products of genius (generally forms of knowledge) are appropriated by society as common goods.

In emphasis of this last point it seems prudent to assume, as do Bibard & Groschl (2013), that goods and goodness are defined almost ubiquitously among our past and present cultures as shared phenomena. As an example, they pose that there is little good in owning the most beautiful painting in the world if no one but the painter ever experiences it. Indeed, without sharing experiences of the painting, how can the painter know that it is the most beautiful painting in the world? Goods are necessarily shared, and are thus, to a greater or lesser extent, common.

Modernity holds the misinformed consensus that common goods, indeed goods of any kind, are necessarily made; that goods do not exist without the expenditure of energy by some individual or group. This interpretation has most likely resulted from our cultural fixation upon business, in which goods are produced, traded, bought, sold, and finally consumed. Critically, solar radiation and water seem obvious candidate common goods, yet neither can reasonably be assumed to be a product of expenditure of energy by some individual or some group. Also critically, goods are not necessarily good; it is possible to trade bad goods, or a bad lot of otherwise good goods. The word good appears to have a vaguer meaning stemming from the Germanic word gōd.

Orthodox biologists claim that common goods (termed ‘public goods’ in the technical dialect of biology) are invariably products of metabolic activity, and thus require work to produce. The word public is derived from the Latin publicus, which is a blend of poplicus ‘of the people’ (from populus ‘people’) and pubes ‘adult’. In contrast the word common is derived from the Latin communis, which is itself derived from the old Latin comoenus ‎’shared’, ‘general’. Thus the misunderstanding of common good, held by biologists, appears to be due to uncritical confusion of the meanings of the words ‘public’ and ‘common’, and in particular to a propagated misuse of the word ‘public’(note B).

However, this view is not ubiquitous among scientists. In private correspondence, an ecologist and forest ecosystem conservationist from the University of Wageningen in the Netherlands, G. Havik, has suggested that we should “distinguish common goods from limited common goods”, as the latter poses important consequences for evolution. “Sunlight” he has said “will not be a limited common good for as long as we are around on this planet – except when you’re in someone’s shade, which has driven speciation”. From Havik’s perspective, sunlight is an unlimited common good that is shared and used, but not produced, by metabolic activities. As we shall learn later during exploration of the diversity productivity relationship (DPR), increased diversity of life systems (speciation) may itself be considered a common good. Thus, in an ecological context, shade is an emergent property of biological metabolism, rendering a limiting condition upon the use of an unlimited common good, and shade is also itself a limited common good, due to its diversification effect upon organisms.

A similar example may be made of water. Orthodoxy says that water can be a common (or public) good only if energy is expended in order to create a good, such as a distribution and/or filtration facility rendering potable water. However, we shall assume a wider, more inclusive and more natural interpretation:
Water is a common good if it is available for use.(note C)

Wealth-getting: profiteering vs. sustaining
Non in depravatis, sed in his quae bene secundum naturam se habent, considerandum est quid sit naturale.
What is natural has to be investigated not in beings that are depraved, but in those that are according to nature.
– Aristotle, Politics, Book 1 (5)

Business undertakes to share the goodness (profit) produced by its activities with a select group of actors (the shareholders), but not with the wider ecological sphere (the stakeholders). Simply, business is conducted for the good of an individual legal person: a corporation. In accordance with political science, the purpose of human social interaction – our political lives – is to serve private interests as exclusively as possible. Another way of saying this is that modern human social interaction is geared toward rendering and increasing private goods.

From my own perspective at the time of writing this essay, a cultural moral of self-fulfillment rather than social responsibility seems to have peaked in the 1970s among the post-WWII American baby boomer culture; the “Me generation”. Twenge & Campbell (2009) have identified and exposed a generational aftershock; a “destructive spread of narcissism”.(6)

Bibard & Groschl suggest that private profiteering, exemplified by the corporate sector under the umbrella of political science, stands in full contradiction to a possible common good. They relate that ancient political philosophy respected private interests to some degree, and thus allowed business to occur, to some extent, as a result of political life. Profiteering, however, was viewed as a manner of managing private, familial, household affairs. The commons (community, city-state or nation), while requiring wealth-getting activities, does not necessitate a profit-motivated attitude. Aristotle further dissected wealth-getting, defining a necessary branch, which is related to sustenance, is limited, and is by nature a part of household management; and an unnecessary branch, which is unlimited, unnatural (abstract) and addictive.

“[Some] people suppose that it is the function of economy (household management) to increase property, and they are continually under the idea that it is their duty to be either safeguarding their substance in money or increasing it to an unlimited amount. The cause of this state of mind is that their interests are set upon life but not upon the good life. [Even] those who fix their aim on the good life seek the good life as measured by bodily enjoyments, so that inasmuch as this also seems to be found in the possession of property, all their energies are occupied in the business of getting wealth; and owing to this the second kind of the art of wealth-getting has arisen. For as their enjoyment is in excess, they try to discover the art that is productive of enjoyable excess; and if they cannot procure it by the art of wealth-getting, they try to do so by some other means, employing each of the faculties in an unnatural way.”(7)
[Image: Lead characters in the film The Wolf of Wall Street (2013).]

“[The] business of drawing provision from the fruits of the soil and from animals is natural to all. But, […] this art is twofold, one branch being of the nature of trade while the other belongs to the household art; and the latter branch is necessary and in good esteem, but the branch connected with exchange is justly discredited (for it is not in accordance with nature, but involves men’s taking things from one another). As this is so, usury is most reasonably hated, because its gain comes from money itself and not from that for the sake of which money was invented. For money was brought into existence for the purpose of exchange, but interest increases the amount of the money itself; consequently this form of the business of getting wealth is of all forms the most contrary to nature.”(7)
– Aristotle ca. 350 BC

Earning money, or any manner of profiteering for its own sake, tends to lead people astray from the good life. Aristotelian political philosophy does not assume the Hobbesian primacy of private freedoms, but is oriented toward a common good life via the collective functions of community. Likewise, Bibard & Groschl suggest that the ultimate ends of our actions, in business as in political life, should be directed outward, toward the good of the commons, and that the common good should be understood as fulfilling human ends; producing a good quality of life.

Politics should be geared for and directed toward human ends, simple biological needs, not toward vanity or enrichment for their own sake. This message is echoed by the words and meanings of wizards and sages, stretching from ancient times through to modernity. They teach that the path toward intellectual fulfillment via a good quality education, leading to holistic contemplation, is a far healthier human pursuit than is simple material, or worse still, monetary acquisition.

Clearly, ancient philosophies of political episteme and of household management are more relevant to human nature than are their modern theoretic counterparts, political science and economics, respectively. Apparently, people hang onto the modern habit unreasonably; faithfully doing damage.

Spontaneous politic
Aristotle viewed humans as spontaneously political animals, and indeed human nature is fundamentally social. However, social behaviors of some kind or other may be observed throughout the known biome. Organisms are necessarily embedded within life-systems, thus living as parts of collectives (communities, ecosystems) that are formed and maintained via continual biosemiosis.
[Image: Consciously or not, we continually measure and compare ourselves and our acts against those of our peers – be they members of our own, or another species.]

[Image: Schematic diagram showing potential bacterial interspecies interactions.(8)]

The natural state currently proposed is a spontaneously occurring, complex, anarchic, self-organizing and self-regulating, adaptive milieu, fundamental to life-systems. The ‘state of nature’ is thus understood as an emergent sociophysical epifunction.

Homo sapiens is nestled symbiotically within the wholeness of the planetary biota. A similar natural state may be assumed to exist for all organisms and life systems on Earth, from the lowly kitchen sponge microbe(9)(10), through the great ocean mammal(11), to the mighty forest dendron(12).

Let us venture the supposition that no organism is capable of sustaining life in the absence of interactions with other organisms. Orthodox biologists would disagree with this umbrella definition, arguing that individual unicellular organisms (such as bacterial or archaeal cells, some protozoa and algae) are capable of surviving in isolation, as chemotrophic or photosynthetic primary producers. Here, then, stands a challenge: to provide an unambiguous example as proof of biotic independence in situ – naturally. In vitro attempts at sustaining an individual cell, isolated from sources of organic nutrients as well as from the mineral products of biotic processes, fail rapidly. If access to organic nutrients and biotic mineral cycling is made available to the cell, then metabolism can continue, invariably leading to colonization of the habitat, by invasion of other species and/or clonal (vegetative) reproduction giving rise to genetic mutants. In either case the result is a form of diversified symbiotic collective; a culture.

The interaction imperative is expressed clearly by Cowden (2012), “the organism with the best interaction strategy has the highest fitness [and] stable payoff equilibriums have been shown for cooperation and altruism, behaviors that seem contradictory to the strongly supported individualistic, survival of the fittest mode of evolution”.(13)

Models of social behavior: informatory and unreal
Computer models of social behavior are fundamentally flawed due to their necessarily rational (computational) basis. Natural systems of social behavior are in part necessarily logical, but are just as necessarily irrational (non-computable), due to the fundamentally uncertain nature of nature itself. In order to be understandable, a model can only ever approximate nature in a simplistic manner, and in accordance with the state of knowledge (theory) at the time of the model’s construction. The sciences are model-based activities, in theory. In practice, the sciences necessarily incorporate, and then, so far as technically possible, deny the influences of irrational factors.

Models, whether computerized or not, represent a truncation of reality. Scientific knowledge thus also represents a truncation of reality. Fascinating and awesome it is to begin to grasp the scale of modern moral and knowledge lock-in.(14)(15)

Game theory teaches that “cooperation results in the highest mutual benefit”. An offshoot of game theory, evolutionary stable strategy (ESS) theory, assumes that “a uniform environment, and resources are available everywhere”.(13)

ESS theory is an example of modeled social behavior. The theory is originally attributed to John Maynard Smith, a former aeronautical engineer turned geneticist and theoretical biologist who also developed signaling theory (biosemiotics), and to George Price, a physical chemist turned population geneticist and theoretical biologist, turned devout Christian and altruist. Price eventually committed suicide following depression, perhaps in part due to an inability to show in practice what was provable in theory.
[Image: Clockwise from top left: William D. Hamilton, John Maynard Smith, George Price, John Nash, John von Neumann.]

Maynard Smith & Price followed the works of the evolutionary biologist and geneticist turned mathematician and logician William D. Hamilton, the polyhistor John von Neumann, and the mathematician, logician and schizophrenic John Nash, the latter two known for their work on game theory. Much like game theory, ESS theory comprises logical manipulation of rational, albeit abstract, mathematical characterizations. The subject of ESS theory was popularized by Richard Dawkins in 1976, with his book The Selfish Gene, in which Dawkins made frequent use of the phrase “all other things being equal”; of course, in natural environmental circumstances all other things are often not equal. To his credit, Dawkins did make reference to this fact, commenting that the environment tends toward radical and sudden change, thus allowing for the displacement of an existing ESS, which gives way to the emergence of new strategic patterns, before eventual re-stabilization of the biotic system into a new ESS; a new steady state.(16)
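To make the notion of an evolutionary stable strategy concrete, here is a minimal sketch – my own construction, not drawn from Dawkins or Maynard Smith & Price, and with arbitrary payoff values – of the textbook Hawk-Dove game. Iterating a simple replicator-style update drives the population toward the mixed equilibrium at a hawk frequency of V/C, which no rare invading strategy can better; that frequency is the ESS of this toy model.

```python
# A toy Hawk-Dove model (illustration only; payoff values are arbitrary).
# With cost of injury C greater than resource value V, neither pure strategy
# is stable; the ESS is a mixed population with hawk frequency V/C.

V, C = 2.0, 4.0  # assumed resource value and fighting cost (C > V)

def expected_payoffs(p_hawk):
    """Expected payoff to a hawk and to a dove when hawks occur at frequency p_hawk."""
    hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V   # fight a hawk, or take V from a dove
    dove = p_hawk * 0.0 + (1 - p_hawk) * (V / 2)     # retreat from hawks, share with doves
    return hawk, dove

p = 0.1  # initial hawk frequency (assumed)
for _ in range(500):
    hawk, dove = expected_payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p += 0.2 * p * (hawk - mean)  # replicator-style step: above-average strategies spread
print(f"hawk frequency settles near {p:.3f}; analytic ESS V/C = {V / C:.3f}")
```

The sketch also makes Dawkins’ caveat visible: change V or C (the ‘environment’) and the old equilibrium dissolves, after which the population settles toward a new mixed frequency – the toy-model analogue of the re-stabilization he describes.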

In the nascent literature of economics, environmentalism, and political theory, which together form the bulk of serious theoretical work on the topic of sustainable development, the emergence and stabilization of a novel ESS following the breakdown of an existing ESS is termed a “paradigm shift” – nothing less than a change of cultural moral; a change of worldview.

In his popularization of genetic fundamentalism, Dawkins propagated arguments against the existence of altruistic behaviors and group selection, saying that both are common misunderstandings of phenomena that benefit individual genes. Dawkins knowingly skipped over a closely related concept, Hamiltonian inclusive fitness, which must have seemed as likely then as it does now to disrupt the foundation of the gene-centric orthodox theoretical edifice. In fact, Dawkins’ text mentions inclusive fitness only in a footnote of the (2006) 30th anniversary edition, referring to his colleague and collaborator Alan Grafen, whose work (Grafen, 1984) reported “the widespread misuse of Hamilton’s concept of ‘inclusive fitness’.” Grafen himself seems to have been considerably broader of mind, admitting that Hamilton’s rule (note D), upon which kin selection theory, inclusive fitness theory, and ESS theory are founded, “holds good only under certain assumptions”. There are “different definitions of [relatedness], and the scope of the rule depends on the definition of [relatedness] employed”. Grafen interpreted inclusive fitness as “a device that simplifies the calculation of conditions for the spread of certain alleles”, and suggested that the expression of those alleles affects the number of offspring produced by other organisms in a population.(17)

This last point brings us to the controversial idea of group selection, which makes intuitive sense (species are born, reproduce and become extinct, just as organisms are born, reproduce and die) but is vague and difficult to rationalize, particularly from the gene-centric perspective. However, empirical evidence of higher-level selection (selection of traits above the level of individual organisms) was published by Wade (1976). In his initial study of group fitness among populations of flour beetles, Wade concluded that a genetic bottlenecking “process of random extinctions with recolonization can establish conditions favorable to the operation of group selection.”(18) In a continuation of his experimental work, Wade (1980) reported that “under many circumstances, a species performance in competition is not predictable from its performance in single-species culture”, and that “competitive ability can be viewed as an indirect but general measure of the nature of population response to group and individual selection for increased and decreased population size.”(19)

Unclear and slight, the group selection idea is perhaps too easily dismissed. We shall not dwell upon it further here, except to point out that it bears the markings of an emergent phenomenon, and to respectfully remind the reader that epigenetic phenomena (the potentially heritable alteration of genetic traits, environmentally affected above the level of DNA code) are a relatively recent discovery.(20)

In regard to altruistic behaviors, Reuter et al (2010) have reported that in humans “oxytocin promotes interpersonal trust by inhibiting defensive behaviours and by linking this inhibition with the activation of dopaminergic reward circuits, enhancing the value of social encounters.”(21) Furthermore, a handful of genetic association studies have linked polymorphisms of the oxytocin receptor gene (OXTR) and the vasopressin 1a receptor gene (AVPR1A) to prosocial behaviors, while concurrently implicating the dopaminergic system. Thompson et al (2013) report two candidate genes for human altruism, OXTR and cluster of differentiation 38 (CD38); both genes are active in the regulation of blood plasma concentrations of oxytocin. They suggest that OXTR and CD38 mediate trade-offs between self-focused cognition and behaviors, versus prosocial cognition and altruistic behaviors.(22)

“Inclusive fitness is often associated with kin selection, as more closely related organisms more likely share the same alleles – such alleles are referred to as ‘identical by descent’ as they are from a common ancestor. However, altruism genes may be found in non-related individuals, thus relatedness is not a strict requirement of inclusive fitness [which is widely quoted as an explanation for the evolution of altruistic behaviors]” – Cowden (2012).

[Image: a puppy]
I continue to feed and care for an organism similar to the one pictured here. Dawkins would say that my expressions of care toward my pet Uma are not altruistic but selfish, that Uma somehow increases my own reproductive capacity, or at least that I am pushing my own feel good button. That may be so, I openly admit that my quality of life is bettered by Uma’s company, though Uma tends to enjoy a good quality of life also.
[Image: a kitten]
We’re not yet sure about the cat, who has been invited into the household to manage a population of mice. Apparently I am incapable of altruism toward mice.

Jesting aside, wild biomes (natural states) are not all red in tooth and claw, but they are all complex and diversified, symbiotic and synergistic systems, defined by divisions of labour and collective actions, producing an emergent common good. Inclusive fitness does not describe a Hobbesian war of each against all, but refers to the indirect reproduction of identical copies of traits (behaviors or phenotypes linked to environmental or genetic components), parallel to the vertical gene transfer achieved from parents to their offspring; horizontal gene transfer, as documented by microbiologists, comes closer but still does not fully hit the mark of indirect reproduction. Essentially, distant relatives within a species, as well as siblings, even twins, exemplify indirect reproduction. A wider exemplary scope might expose the various and diverse hemoproteins.

[Image: Hemoglobin is a tetrameric protein (left), comprising four heme groups (right).]

“If iron is nature’s favorite essential metal, then heme is its Swiss Army knife: a versatile, indispensible tool that, in the company of its protein sheath, can do seemingly anything. The power of heme is particularly evident in the prokaryotes, where diversity in the catalytic activities of heme proteins, as well as proteins involved in the uptake, trafficking and sensing of heme, appears to be vast”.(23)
– Mayfield et al (2011)

Dawkins paved his approach to the subject of biological collectivism, altruism, and social behavior with logic and computer models. He was confident that he saw clearly a single formal system, operating invariant rules written by men – the theoretical evolutionary stable strategy (ESS). In all honesty, I admit to seeing rather less clearly, more vaguely and uncertainly, a set of complex and interacting systems. Biological processes are changeable, adaptable; they are not written; they are not rules, but malleable agreements and necessary compromises.

The theoretical biologist R. Rosen argued that a living organism is not a machine, and thus cannot have a computer-simulable model. Furthermore, Rosen opined that the current reductionistic state of science – “sacrificing the whole in order to study the parts” – is inadequate to create a coherent theory of biological systems, as life is not observed after dissection of a biological organization. Rosen held what seems to be a mystical belief – that biology is not a subset of known physics, and that relational studies of living systems (how the parts of living systems relate to each other) may produce new knowledge of physics and result in profound changes for science generally. Inspired by Gödel’s incompleteness theorems and the limitations of Turing-computability, he suggested that “we should widen our concept of what models are”.(24)

The assumption of strict empiricism is fundamentally untenable, as any observation is necessarily dependent upon subjective experience. Thus the ’empirical sciences’, as well as those bodies of knowledge best termed ‘epistemes’ – including politics, psychology and the ‘arts’ – are principally subjective, intuitive understandings, leading to the formation and execution of practical arts, which in turn allow for the acquisition of empirical knowledge. Rationalizations of irrational processes, such as politics and the (inter)actions of political states, are conducive to modeling in a manner similar to the modeling of physical phenomena, those models being necessarily based upon truncations of empirical measurement, to render computable data.

That markets are composed of individual rational actors is a fundamental supposition upon which modern economic theory is built, allowing for precise computational modeling of economic activity. However, this founding assumption is clearly incorrect; markets are composed of people (individuals and groups), and people are not invariably rational actors. Simply, people are not machines: they do not always Turing-compute, or act in accordance with expectation (theoretical or otherwise); people do not always do the right thing. Thus real market behaviors tend not to conform tightly to statistical, theoretical prediction. This observation is communicated succinctly by Bibard & Groschl, who have said that “the economic assumption of pure and perfect rationality is not an empirical, but a theoretical one”.
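As a crude illustration of the gap between the rational-actor assumption and observed behavior, here is a toy sketch of my own (all numbers hypothetical): the demand predicted when every actor buys exactly when the price falls below their private valuation, versus the demand observed when an assumed fraction of actors ignores the price signal altogether.

```python
# A toy sketch (hypothetical numbers): predicted versus observed demand
# when a fraction of market actors does not behave "rationally".
import random

random.seed(1)
N = 10_000
values = [random.uniform(0, 100) for _ in range(N)]  # private valuations
price = 40.0
irrational_share = 0.3  # assumed fraction of non-rational actors

# Theoretical demand: every actor buys if and only if their valuation exceeds the price.
rational_demand = sum(v > price for v in values) / N

def noisy_demand():
    """Demand when some actors flip a coin instead of comparing value to price."""
    buys = 0
    for v in values:
        if random.random() < irrational_share:
            buys += random.random() < 0.5   # coin-flip decision
        else:
            buys += v > price               # rational decision
    return buys / N

print(f"theoretical demand: {rational_demand:.3f}")
print(f"observed demand:    {noisy_demand():.3f}")
```

The point is not the particular numbers, which are invented, but that the realized aggregate drifts away from the theoretically exact prediction as soon as any portion of the population stops computing.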

The complete failure of economic theory, and of the data-driven models built upon it, to predict – even imprecisely and inaccurately – black swan events such as the global finance sector catastrophe of 2007–8 and the ensuing global monetary crisis, is the result of both the truncations of empirical measurement data used in theoretical modeling, and the indoctrination of modern global culture into a system of theoretical and mechanical naivete.

In a very real sense, modern economic theory and models comprise a simplistic interpretation of the realities of political life; and generally, people place near-complete trust and reliance upon technologies that they misunderstand, or outright do not understand.

Groschl (2013) reports that recent annual meetings of the World Economic Forum at Davos have begun to recognize that sustainable development is not merely a mechanical, technical process. Increasingly, behavior is seen as the missing link between analyses (providing knowledge of what is at stake) and implementation (doing something about it). He suggests that a transformation is occurring – or needs to occur – and calls upon his readers to realize that “not everything that counts can be counted and not everything that can be counted counts […]. One cannot rely too much on models and calculations. Instead one must rely on one’s intuition, and trust the intuitions of others”. In so saying, Groschl corroborates my own view, published as part of a previous post titled iconoclast, which ends with a call for the realization that the greater part of reality is irrational – “irrationality is the denominator, and rationality the numerator”.

The mechanization of governance: expert systems – not even idiots
Hackett & Groschl speak of a transnational capitalist class (TCC) – the principal shareholders and managers of large corporations. These private businesses do not reside within a single nation, and thus are not bound by the laws and customs of any one nation; rather, they are spread across several nations, the governing policies of which they tend to influence. In fact, Hackett & Groschl claim that transnational corporations have grown to become the core actors in governance discourse. Increasingly, developed states conduct peripheral, enabling roles, while developing countries have been entirely disenfranchised from the global agenda. Transnational corporations exert their influence upon the economies of most countries, and seem to play an ever increasing, albeit private and hidden, role in international relations, together resulting in economic activities the scale of which is beyond the capacity of any one nation state. It is said that the power and reach of transnational business has in many ways surpassed the power and capacity of the United Nations.
Based upon the knowledge that people irrationally trust models, the understanding that government policy is strongly influenced by corporate interests, and the recognition that the governance of corporations is strongly influenced by economic theory and computer modeling, it seems reasonable to take the view that policy is increasingly being conducted by technological systems, most of which still employ people – albeit with the unrealistic assumption that the human components of the politico-technological system are devoid of humanity; that they are perfectly rational actors.

The modern political state is thus modular, and most correctly defined as a technocracy. Herein, warn Hackett & Groschl, lies a looming crisis of accountability. Knowing that corporate shareholders are not legally liable for the actions of the corporate person they own, and assuming the TCC to be the global elite, economically governing group, who will hold the TCC and its individual members accountable? – and how?

One answer to this quandary is as predictable as it is incapable: artificial intelligence. Not the ‘general’ or ‘strong’ AI of science fiction, but decidedly unintelligent expert systems. The convergence of governance and expert systems is termed e-government – defined by the United Nations Global E-Government Readiness Report 2004 as “the use of [information and communication technology (ICT)] and its application by the government for the provision of information and public services to the people.”

Several aspects of governance, in business and in government, have already been delegated to expert systems, as shown by the broader definition given in a more recent UN document, titled “E-Government for the Future We Want”:
“E-government can be referred to as the use and application of information technologies in public administration to streamline and integrate workflows and processes, to effectively manage data and information, enhance public service delivery, as well as expand communication channels for engagement and empowerment of people. The opportunities offered by the digital development of recent years, whether through online services, big data, social media, mobile apps, or cloud computing, are expanding the way we look at e-government. While e-government still includes electronic interactions of three types – i.e. government-to-government (G2G); government-to-business (G2B); and government-to-consumer (G2C) – a more holistic and multi-stakeholder approach is taking shape.”(25)

The Encyclopedia of Digital Government (2007), provides concrete examples of governance tasks performed by expert systems. “Increasingly, government organizations in the Netherlands use expert systems to make judicial decisions in individual cases under the Dutch General Administrative Law Act […]. Examples of judicial decisions made by expert systems are tax decisions, decisions under the Traffic Law Act (traffic fines), decisions under the General Maintenance Act (maintenance grants), and decisions under the Housing Assistance Act.

There are two categories of judicial expert systems. Expert systems in the first category support the process of judicial decision making by a civil servant. The decision is taken in “cooperation” between a computer and the civil servant. Expert systems in the second category draft judicial decisions without any human interference. In these cases the decision making process is fully automatic.”(26)
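To make concrete what a ‘category two’ system amounts to, here is a deliberately crude, hypothetical sketch of rule-based decision drafting. The rules, fields and thresholds are invented for illustration; they are in no way a description of the actual Dutch systems cited above. Its relevance to the argument is that such a system can only ever act upon the fields encoded in the case record: anything else, such as a letter a citizen has sent, simply does not exist for it.

```python
# A crude, hypothetical sketch of fully automatic ("category two") expert-system
# decision drafting. Rules and thresholds are invented for illustration only;
# they do not describe any real administrative system.

RULES = [
    # (condition over the case record, drafted decision)
    (lambda c: c.get("declared_income", 0) > 70_000,
     "No maintenance grant: income above threshold"),
    (lambda c: c.get("declared_income", 0) <= 70_000 and c.get("enrolled", False),
     "Maintenance grant approved"),
]
DEFAULT = "Rejected: case does not match any rule"

def draft_decision(case: dict) -> str:
    """Return the first drafted decision whose rule matches the case record."""
    for condition, decision in RULES:
        if condition(case):
            return decision
    return DEFAULT

# The system only 'knows' the fields present in the record. Information supplied
# by the citizen is invisible unless someone encodes it as a field the rules test.
print(draft_decision({"declared_income": 30_000, "enrolled": True}))
print(draft_decision({"declared_income": 30_000}))  # proof of enrollment never entered
```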

In 1989, J. Weintraub authored an article published in AI Magazine (note E), in which he lists twelve possible uses for expert systems in federal, state, and municipal governments.
1) Forecasting – financial planning and cash management
2) Labor relations
3) Document and archive retrieval
4) Regulatory compliance advice
5) Office automation
6) Capital assets analysis
7) Personnel employment assessment
8) Legal advice
9) Instruction
10) Bid and proposal preparation assistance
11) Natural language querying of database
12) Auditing

Further, Weintraub stated that “the applicability of expert systems and AI to government administration can be seen in a careful ‘between the lines’ reading of the Information Systems Plan (ISP). Although not explicitly stated, many of the systems and projects defined in ISP are driven by extensive and complex logic processes and would benefit from AI technology.”(27) This is more than a little humorous, as expert systems are thoroughly incapable of reading “between the lines”, in a sense proving the necessity of humans, whether expert or not, for interpreting real-world situations and for proposing solutions that better, or at least maintain, a decent quality of life.

In this regard I speak from personal experience, having been subjected, rather frustratingly, to the stress-inducing ridiculousness of the expert system employed by the royal Dutch tax department. In regular correspondence with the Dutch tax system, it failed to remind me of a chat bot only twice during the course of six years – due on both occasions to the intervention of a (human) civil servant. The expert governor (the Dutch tax bot) consistently appraised the situation incorrectly, whereas a layman (myself) and a civil servant (a tax inspector) appraised the situation correctly. The Dutch computerized expert governor, a rational specialist, managed very well only to reduce the quality of my life, by failing to incorporate into its appraisal of the situation the information that I had sent to it.

Apparently, the current culture of deferring individual responsibilities of governance to a group of ‘representative’ strangers is not dysfunctional enough. Modern culture seeks to defer individual responsibilities of governance even further, feeding them to unintelligent expert systems. While I can imagine the presumed attraction of this course of action, if viewed superficially and from a disinterested distance, my own experiences have proven that deference of governance to machine systems makes for singularly poor policy, resulting in absurd decision making. Expert systems have no understanding of the knowledge they house, nor of how the implementation of that knowledge impacts upon the quality of people’s lives. Indeed, this is part of the attraction – we hope to better our lives by employing selfless, unbiased, ‘incorruptible’, perfectly rational machines as civil servants; as our governors. A warning! Expert (governing) systems are not intelligent; in fact they are not even idiots.

There may be a glimmer of hope, however, in the incorporation and interrelation of several expert systems representing a diversity of specializations, thus synthesizing a multi-expert system; a diversified-specialized system; a computerized polymath. Such a system would not be intelligent, but it might be capable of more rounded, complex decision making, which in turn may lead to more livable forms of governance for humans. However, the only sure way to attain a good quality of life is to personally, individually, abandon the current culture of technocratic lock-in (‘representative democracy’), and to begin to govern oneself in association with one’s local group, resources, and territory.

Notes
A) For the purpose of this essay, the word cell is assumed to be synonymous with actor, and the latter may refer to molecular as well as systemic agents of action.

B) Take for example the report by Cordero (2012), in which is stated: “A common strategy among microbes living in iron-limited environments is the secretion of siderophores, which can bind poorly soluble iron and make it available to cells via active transport mechanisms. Such siderophore-iron complexes can be thought of as public goods that can be exploited by local communities and drive diversification […]” – italicized emphasis is mine.

C) Of course ‘water’ may be replaced with any object or process.

D) Hamilton’s rule (rB > C) was published in 1964, building upon the mathematical treatments of kin selection by Fisher and Haldane in the 1930s; a further formal mathematical treatment, a theorem, was later composed by Price. A minimal worked check of the inequality follows the definitions below.
r = genetic relatedness of the recipient to the actor.
B = benefit gained by the recipient as a result of the act.
C = cost of the act to the actor.
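With hypothetical benefit and cost values, and the standard coefficients of relatedness for full siblings (0.5) and first cousins (0.125), the inequality can be checked directly:

```python
# Hamilton's rule, rB > C: an altruistic act is favored when the benefit to the
# recipient, discounted by relatedness, exceeds the cost to the actor.
# The benefit and cost values below are hypothetical, purely for illustration.

def hamilton_favors(r: float, B: float, C: float) -> bool:
    """True if rB > C, i.e. the altruistic trait can spread."""
    return r * B > C

print(hamilton_favors(r=0.5, B=3.0, C=1.0))    # full sibling: 1.5 > 1.0 -> True
print(hamilton_favors(r=0.125, B=3.0, C=1.0))  # first cousin: 0.375 > 1.0 -> False
```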

E) Elsevier publishes an entire journal devoted to expert systems and their applications, including applications in governance, titled “Expert Systems with Applications” [http://www.journals.elsevier.com/expert-systems-with-applications/]. Here are two recent (2012 and 2015) citations:
i) “Evaluation and ranking of risk factors in public–private partnership water supply projects in developing countries using fuzzy synthetic evaluation approach” http://www.sciencedirect.com/science/article/pii/S0957417415001487
ii) “An unstructured information management system (UIMS) for emergency management” http://www.sciencedirect.com/science/article/pii/S0957417412002813

Bibliography
1) C. Woese, “The universal ancestor”, (1998), Proceedings of the National Academy of Sciences of the United States of America, vol. 95(12), p. 6854-9, (abstract) http://www.ncbi.nlm.nih.gov/pubmed/9618502

2) F. Miller, “Aristotle’s Political Theory”, (2012), The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/aristotle-politics/#Aca

3) E. Jenks, “A History of Politics”, (1909), p.73, https://archive.org/stream/ahistorypolitic01jenkgoog#page/n88/

4) S. Groschl et al, “Uncertainty, Diversity and The Common Good”, (2013), Gower, http://www.gowerpublishing.com/isbn/9781409453390

5) J. Scott, “Critical Assessments of Leading Political Philosophers”, (2006), p. 421, Routledge, https://books.google.si/books?id=vayp8jxcPr0C&pg=PA153&lpg=PA153&dq=Non+in+depravatis,+sed+in+his+quae+bene+secundum+naturam+se+habent,+considerandum+est+quid+sit+naturale&source=bl&ots=vLYRv0Xyl-&sig=JMpCOrRPx15W-We6lS-cvR1s9pE&hl=sl&sa=X&ved=0CC4Q6AEwAmoVChMIuNj85ce1xwIVCaZyCh339QR_#v=onepage&q=Non%20in%20depravatis%2C%20sed%20in%20his%20quae%20bene%20secundum%20naturam%20se%20habent%2C%20considerandum%20est%20quid%20sit%20naturale&f=false

6) J. Twenge & W. Campbell, “The Narcissism Epidemic: Living in the Age of Entitlement”, (2009), Free Press, http://www.narcissismepidemic.com/

7) Aristotle, “Politics (Book 1)”, (1957), Aristotle in 23 Volumes, Vol. 21, translated by H. Rackham, Cambridge, MA, Harvard University Press, http://www.perseus.tufts.edu/hopper/text?doc=urn:cts:greekLit:tlg0086.tlg035.perseus-eng1:1.1252a

8) F. Short et al, “Polybacterial human disease: the ills of social networking”, (2014), Vol. 22-9, p. 508-518, Trends in Microbiology, Elsevier, http://www.cell.com/action/showImagesData?pii=S0966-842X%2814%2900116-4

9) http://www.internetjfs.org/articles/ijfsv6-4.pdf

10) http://www.scirp.org/Journal/PaperDownload.aspx?paperID=20492

11) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.472.6659&rep=rep1&type=pdf

12) http://www.mycologia.org/content/104/5/988.full

13) C. Cowden, “Game Theory, Evolutionary Stable Strategies and the Evolution of Biological Interactions”, (2012), Nature – Education, http://www.nature.com/scitable/knowledge/library/game-theory-evolutionary-stable-strategies-and-the-25953132

14) P. David, “Path Dependence – A Foundational Concept for Historical Social Science”, (2006), University of Oxford, http://ecohist.history.ox.ac.uk//readings/david-pathdependent206.pdf

15) T. Foxon, “Technological and institutional ‘lock-in’ as a barrier to sustainable innovation”, (2002), Imperial College London, http://www3.imperial.ac.uk/pls/portallive/docs/1/7294726.PDF

16) R. Dawkins, “The Selfish Gene”, (1976), Oxford University Press.

17) A. Grafen, “Natural Selection, Kin Selection and Group Selection”, (1984), Behavioural ecology: an evolutionary approach, Vol. 2, http://scholar.google.si/scholar_url?url=http://users.ox.ac.uk/~grafen/cv/KandD2ed.pdf&hl=en&sa=X&scisig=AAGBfm0OvPYvo4EIUQjEhmJPdVvThJAo9g&nossl=1&oi=scholarr

18) M. Wade, “Group selection among laboratory populations of Tribolium”, (1976), Proceedings of the National Academy of Sciences, Vol. 73-12, p. 4604-4607, http://www.pnas.org/content/73/12/4604.short

19) M. Wade, “Group Selection, Population Growth Rate, and Competitive Ability in the Flour Beetles, Tribolium Spp.”, (1980), Ecology, Vol. 61-5, p. 1056-1064, Ecological Society of America, abstract http://www.jstor.org/stable/1936824

20) V. Hughes, “Epigenetics: The sins of the father”, (2014), Nature, Vol. 507-7490, http://www.nature.com/news/epigenetics-the-sins-of-the-father-1.14816

21) M. Reuter, et al, “Investigating the genetic basis of altruism: the role of the COMT Val158Met polymorphism”, (2010), Social Cognitive and Affective Neuroscience, http://scan.oxfordjournals.org/content/early/2010/10/28/scan.nsq083.full

22) G. Thompson, et al, “Genes underlying altruism”, (2013), Biology Letters, The Royal Society, http://rsbl.royalsocietypublishing.org/content/9/6/20130395

23) J. Mayfield et al, “Recent advances in bacterial heme protein biochemistry”, (2011), Current Opinion in Chemical Biology, Vol. 15, p. 260–266, Science Direct, https://www3.nd.edu/~dlab/Mayfield_2011.pdf

24) “Rosennean Complexity and other interests”, (2008), Panmere, https://web.archive.org/web/20100819071558/http://www.panmere.com/?page_id=16

25) “UNITED NATIONS E-GOVERNMENT SURVEY 2014 – E-Government for the Future We Want”, (2014), United Nations, New York, http://unpan3.un.org/egovkb/Portals/egovkb/Documents/un/2014-Survey/E-Gov_Complete_Survey-2014.pdf

26) M. Groothuis, “Applying ICTs in Judicial Decision Making by Government Agencies”, (2007), Encyclopedia of Digital Government, p. 87-96, https://books.google.si/books?id=iDrTMazYhdkC&pg=PA87&hl=sl&source=gbs_toc_r&cad=3#v=onepage&q&f=false

27) J. Weintraub, “Expert Systems in Government Administration”, (1989), AI Magazine Vol. 10/1, Association for the Advancement of Artificial Intelligence, https://www.aaai.org/ojs/index.php/aimagazine/article/download/730/648

Insanity of Genius

Nobly reasonable? Infinitely facultative? Angelic and Godly?!
Wow! What a piece of work was Shakespeare!
But was he a genius? Was Bach, Da Vinci, or Einstein?
What do we mean by use of the word genius?

In this, the final of four posts, now well away from the comfort and normalcy of home, we stumble and fall into a broken house of mirrors. Our journey began in a deep contextual fog, of historical, theistic, and social themes; tricky navigation to be sure! A stiff wind carried us on to explore the isles of intelligent behavior – Cellular Biology and Micro Anatomy, and on, to re-animate matter via reunification of matter and mind.
Currently, there is melancholia.

Archetype of the mad genius
“Lovers and madmen have such seething brains, such shaping fantasies, that apprehend more than cool reason ever comprehends. The lunatic, the lover and the poet are of imagination all compact […]”

Long before Shakespeare’s time, Book XXX of the Aristotelian Problemata (note 1), titled “Problems Connected with Prudence, Intelligence, and Wisdom”, here translated by Forster (1927), seems to have been the first written work to definitively associate exceptional cognitive ability with mental illness. It begins by asking “Why is it that all those who have become eminent in philosophy or politics or poetry or the arts are clearly of an atrabilious temperament, and some of them to such an extent as to be affected by diseases?” Examples are made of three heroes, citing the epileptic affliction and atrabilious temperament of Heracles, the insanity of Ajax, and the self-imposed exile and isolation of Bellerophon.

Northwood (1989) analytically interprets Book XXX, rendering “the moderate overheatedness of melancholic geniuses ensures that they are more susceptible to bouts of imaginative fancy – i.e. divergent thinking – but this [disposition] is not always present, nor […] would it be beneficial if it were. The ideas and fancies would remain undeveloped if the melancholic were not then able to look at these ideas [rationally] with a critical eye – a sober eye – a cool eye. As one hears of the creative process today, there are moments of inspiration and moments of rational analysis, editing, criticism. It is only the melancholic who will naturally have both.”

She points out that “a temperament that is full of change” refers to mood swings, which in today’s culture would likely be diagnosed as bipolar disorder.

“That dramatic mood swings are beneficial is an idea that is alien to late 20th century psychiatry; it is an illness to be cured by pills and electric shock treatment.”

“Greek authors believed that external climatic variability (with the result of internal character variability) was extremely beneficial to one’s character, and that it led to intellectual outstandingness.” – a fascinating thought, if taken from our modern perspective of global climate change.

“In ancient Greek theories of health, it was the equal balance or mixing of the humors or elements (i.e., the isonomic mean) that comprised the ideal healthy state.”

The ancient humors might, in modernity, be viewed metaphorically. Perhaps as good-humor, dry-humor, bad-humor, and dark-humor?

The Problema XXX describes “a form of melancholic constitution that is both 1) itself characterized as a mean, and 2) thought to lead to intellectual outstandingness. This is theoretically problematic since the melancholic constitution was by definition a constitution in which there was a natural preponderance of black bile. Thus, there appear to be two incompatible means that are descriptive of the ideal in ancient Greek medicine: the isonomic mean that underlies the ideal healthy state, and the melancholic mean that describes the melancholic who is capable of greatness”.(1)

“Men differ in appearance not because they possess faces but because they possess certain kinds of faces, some handsome, others ugly, others with nothing remarkable about them (those, that is, who are naturally ordinary); so those who possess an atrabilious temperament in a slight degree are ordinary, but those who have much of it are quite unlike the majority of people. For, if their condition is quite complete, they are very atrabilious; but, if they possess a mixed temperament, they are men of genius.

If they neglect their health, they have a tendency towards the atrabilious diseases, the part of the body affected varying in different people; in some persons epileptic symptoms declare themselves, in others apoplectic, in others violent despondency or terrors, in others over-confidence, […]. The force which gives rise to such a condition is the temperament according as it contains heat or cold. If it be cold beyond due measure, it produces groundless despondency; hence suicide by hanging occurs most frequently among the young, but sometimes also among older men.

Since it is possible for an abnormal state to be well attempered and in a sense [become] a favourable condition, and since it is possible for the condition to be hotter and then again cold [i.e. ‘bipolar’], when it should be so (note 2), the result is that all atrabilious persons have remarkable gifts, not owing to disease but from natural causes”.(2)

Probably referring to the Problemata, Andreasen (2014) states, “The first attempted examinations of the connection between genius and insanity were largely anecdotal.” Andreasen continues by describing the work of an Italian physician, Cesare Lombroso, who in 1891 published “The Man of Genius”: “a gossipy and expansive account of traits associated with genius – left-handedness, celibacy, stammering, precocity, and, of course, neurosis and psychosis and he linked them to many creative individuals, including Jean-Jacques Rousseau, Sir Isaac Newton, Arthur Schopenhauer, Jonathan Swift, Charles Darwin, Lord Byron, Charles Baudelaire, and Robert Schumann. Lombroso speculated on various causes of lunacy and genius, ranging from heredity to urbanization to climate to the phases of the moon. He proposed a close association between genius and degeneracy and argued that both are hereditary”.(3)

Correlative genetics of madness, creativity and g
A genetic study by Kéri et al (2009) suggests that “there is an association between psychotic features and creativity, which may explain the retention of genes related to psychosis”, and reports neuregulin 1 as a candidate gene for psychosis; the gene also affects neuronal development, synaptic plasticity, glutamatergic neurotransmission, and glial function. The promoter of this gene exists as a polymorphism:
– C/C –> “lowest creativity scores”;
– C/T –> “middle-ranking scores”;
– T/T –> “highest creativity scores”.

“[…] the biologically relevant promoter polymorphism of the neuregulin 1 gene has a significant impact on creativity: The T/T genotype, which has previously been shown to be related to psychosis risk and altered brain structure and function, was associated with the highest creativity scores when lifetime achievement or laboratory scores of creative thinking were taken into consideration. […] The prefrontal cortex is important in cognitive inhibition and creativity, and there is evidence that the promoter polymorphism of the neuregulin 1 gene affects the functioning of this brain region. Indeed, it has been reported that the reduction of prefrontal functions [reduced cognitive inhibition, related to schizotypal features] may lead to creative peaks in highly functioning people, even if they are in the presymptomatic stage of severe neurodegenerative illnesses.”

Both Diamond and Witelson concluded that the parietal lobes of Einstein’s brain (parts of the cerebrum primarily concerned with processing sensory information and with spatial orientation) were anatomically unique. They used markedly different methodologies, however: Diamond counted the cells in the cerebral cortex; Witelson studied the brain’s gross anatomy. Diamond compared cell counts of parts of Einstein’s cerebrum with those from former Veterans’ Administration hospital patients. The specimens were stained to distinguish two types of brain cell: neurons and glial cells. In Einstein’s left parietal cortex, Diamond noted a significant increase in glial cells but not neurons. She proposed that the “differential cell counts constituted a potentially meaningful measure of the functional status of the brain” in general, and, in particular, “neuronal:glial ratios in selected regions of Einstein’s brain might reflect the enhanced use of this tissue in the expression of his unusual conceptual powers”.(4)

“Unfortunately we have [only] one brain of an Einstein. Scientific certainty tends to be confirmed by multiple subjects or experiments, so it is hard to draw definite conclusions from a single specimen, no matter how exceptional”.(5)

Dwelling on this point, it seems wise to include the critical rhetoric of Hines (2014), who, in his approach to the field of studies representing Einstein’s brain, has coined the ironical term neuromythology (note 3), and has discounted the findings of various authors. In particular, his criticism is of the propagation of assumed, but not known, meanings of certain phenomena.(6) Indeed, the current state of affairs is one of very little knowledge about the physical and anatomical aspects of cognition; less still, if anything at all, is known with certainty.

A genetic study by Chorney et al (1998) reported that general cognitive ability (g) “presents three challenges for molecular genetic analysis: It is a quantitative trait with a roughly normal distribution; it is multifunctional, involving environmental as well as genetic sources of variance; and its heritability is likely due to groups of genes of varying size and effect, rather than a few genes of major effect.”
– i.e., a small probabilistic effect upon g is more likely conferred by groups of genes with interchangeable properties than by individual genes with specific properties (illustrated in the sketch below).
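A small illustrative simulation of this point – entirely my own, with arbitrary parameters – shows how summing many small, independent allelic effects, plus an environmental term, yields the roughly normal distribution of a quantitative trait, with no single gene of major effect:

```python
# Illustrative only: a trait built from many loci of small, varying effect.
# Parameters are arbitrary; the point is that the sum of many small
# contributions is roughly normally distributed (central limit theorem),
# with no single locus dominating the outcome.
import random

random.seed(0)
N_LOCI = 200
effects = [random.uniform(0.0, 0.5) for _ in range(N_LOCI)]  # small effect sizes

def trait_value() -> float:
    # Each locus contributes its effect with probability 0.5 (one of two alleles),
    # plus a modest environmental term.
    genetic = sum(e for e in effects if random.random() < 0.5)
    environment = random.gauss(0.0, 2.0)
    return genetic + environment

scores = [trait_value() for _ in range(5_000)]
mean = sum(scores) / len(scores)
var = sum((s - mean) ** 2 for s in scores) / len(scores)
print(f"mean ≈ {mean:.1f}, sd ≈ {var ** 0.5:.1f}")  # bell-shaped spread around the mean
```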

The Chorney study was a search for group phenomena, rather than familial phenomena, proposing to associate g with quantitative trait loci (QTL). The cohort comprised 51 experimental subjects (children scoring high IQ) and 51 control subjects (children scoring average IQ), all Caucasian, living and schooling within a hundred-kilometer radius of Cleveland, Ohio.

Insulin-like Growth Factor 2 Receptor (IGF2R), and by association Insulin-like Growth Factor 2 (IGF2), are mentioned repeatedly in the QTL-for-g study. The function of this pair of DNA-coded molecules is communication (signal and receiver), and they form a necessary part of intercellular (paracrine) vesicle transport, occurring in virtually all tissues. In the central nervous system, vesicular trafficking is a mode of volume transmission (VT), introduced in earlier explorations, titled Refraction of the State of Nature and Meta-matricity.

Agnati and Fuxe (2014) say that “so-called exosomes appear to be the major vesicular carrier for intercellular communication but the larger microvesicles also participate. Extracellular vesicles are released from cultured cortical neurons and different types of glial cells and modulate the signalling of the neuronal–glial networks of the CNS. This type of VT has pathological relevance, and epigenetic mechanisms may participate in the modulation of extracellular-vesicle-mediated VT”.(7)

[Images: molecular models of IGF2R and IGF2.]

IGF2 appears to be linked to neurogenesis and memory creation, via promotion of survival of hippocampal neurons. That said, the authors are careful to point out that “such QTL’s are not genes for genius; moreover, genius involves much more than genes”. This statement is highly reminiscent of Binet’s point of view, noted in a previous post of this series, titled Quantity of Genius?, in which we learned that Binet emphasized qualitative, as opposed to quantitative measures, and stressed that intelligence was not based on genetics alone; that intellectual development progressed at variable rates, was influenced by environmental factors, and was malleable rather than fixed.

Trepidation
Mindful of lofty historic ideology, and the deep shadow cast by eugenics,
now laden with a hypothetical neuro-glial syncytium;
an heritable, albeit variable effect upon g,
via an indirect (infrastructural) link between allele variation of IGF2 and IGF2R,
possibly modulating the capacity of glial volume transmission in our population…
…does genius stand therein?

Apparently irrelevant to the QTL study, which focussed upon chromosome 6, the gene for apolipoprotein E (on chromosome 19) is mentioned in association with late-onset Alzheimer’s disease. This I found interesting, in connection with a study of the glia of Einstein’s brain (Colombo, 2006), in which Alzheimer’s disease is also mentioned.

“Comparison between samples of Einstein’s brain with those of four other men, the geometries (parallelism, relative depth, tortuosity) of primate-specific interlaminar glial processes were not individually distinctive. However, Einstein’s astrocytic processes showed larger sizes and higher numbers of interlaminar terminal masses (bulbous endings), which are of unknown significance but known to occur in some cases of Alzheimer’s disease”.(16)

The dope on cognitive disorganization
In a study linking creativity and psychopathology via dopamine transmission, de Manzano et al (2010) have concluded that highly creative individuals have lower concentrations of a particular type of dopamine receptor (D2) in the thalamus. The study proposes that “a lower D2 [binding potential] in the thalamus may be one factor that facilitates performance on divergent thinking tasks [by decreasing filtering and autoregulation of information flow, and by increasing] excitation of cortical regions through decreased inhibition of prefrontal pyramidal neurons, [thus allowing neuronal networks of the prefrontal cortex] to more easily switch between representations and process multiple stimuli across a wider association range. This state, [of creative bias may increase] performance on tasks that involve continuous generation and (re-)combination of mental representations and switching between mind-sets. […] A decreased signal-to-noise ratio (i.e less signal and more noise) in cortical regions should better enable flexibility and switching between representations; similarly, the associative range should be widened and selectivity should be decreased which might spur originality and elaboration. [However], creative bias may also bring a risk of excessive excitatory signals from the thalamus overwhelming cortical neurotransmission, with ensuing cognitive disorganization and positive symptoms”.(9)

The National Institute of Mental Health defines positive symptoms as psychotic behaviors not seen in healthy people, often causing the schizophrenic to “lose touch with reality”.

Positive symptoms:
I) Hallucinations – things that a person sees, hears, smells, or feels that no one else can see, hear, smell, or feel. “Voices” are the most common type of hallucination in schizophrenia. Many people with the disorder hear voices. The voices may talk to the person about his or her behavior, order the person to do things, or warn the person of danger. Sometimes the voices talk to each other.

II) Delusions – false beliefs that are not part of the person’s culture and do not change. The person believes delusions even after other people prove that the beliefs are not true or logical. People with schizophrenia can have delusions that seem bizarre, such as believing that neighbors can control their behavior with magnetic waves. They may also believe that people on television are directing special messages to them, or that radio stations are broadcasting their thoughts aloud to others. Sometimes they believe they are someone else, such as a famous historical figure. They may have paranoid delusions and believe that others are trying to harm them, such as by cheating, harassing, poisoning, spying on, or plotting against them or the people they care about. These beliefs are called “delusions of persecution.”

III) Thought disorders – unusual or dysfunctional ways of thinking. One form of thought disorder is called “disorganized thinking.” This is when a person has trouble organizing his or her thoughts or connecting them logically. They may talk in a garbled way that is hard to understand. Another form is called “thought blocking.” This is when a person stops speaking abruptly in the middle of a thought. When asked why he or she stopped talking, the person may say that it felt as if the thought had been taken out of his or her head. Finally, a person with a thought disorder might make up meaningless words, or “neologisms” (note 4).

IV) Movement disorders – agitated body movements. A person with a movement disorder may repeat certain motions over and over. In the other extreme, a person may become catatonic. Catatonia is a state in which a person does not move and does not respond to others.(10)

Paragon of disorganization
Anderson & Harvey (1996) have suggested that Einstein’s great intellectual abilities were due to a higher neuronal density, resulting in more rapid processing of information.(11) Hines (2014) refers to this, commenting that increased neuronal density is in no way indicative of a neuroanatomical basis for superior information processing.(6) Yet the ecological relationships between neuronal and glial populations, as well as the physiology of biological information processing, remain unclear. Processing speed is relevant, and positively correlated with increased scores on standardized IQ tests. However, Terman’s longitudinal study showed clearly that higher than average IQ is not a predictor of greater than average life achievement: “intelligence alone doesn’t guarantee achievement”.(12)

Rapid processing does not seem to be the correct aspect of cognition with which to attempt a characterization of Einstein’s intellectual achievements. The 20th century’s most famous genius is not reported as having been particularly swift; indeed, to the contrary, he has been called “dull-witted”.(19)

Selemon et al (1998) report that “Overall neuronal density was 21% greater in brains from schizophrenic patients in comparison to normal controls. Significant elevations in neuronal density were observed in layers II, III, IV, and VI. […] In brains from Huntington’s diseased patients, increases in neuronal (35%) and glial (61%) density with substantial cortical thinning (30%) were observed”.(13)

Most intriguing is the lack of a clear division between the process of ideation in the genius and the process of ideation in the schizophrenic; both psycho-types are marked by fantastic syntheses.
“I have been feeding pigeons, thousands of them for years. But there was one, a beautiful bird, pure white with light grey tips on its wings; that one was different. It was a female. I had only to wish and call her and she would come flying to me.
I loved that pigeon as a man loves a woman, and she loved me. As long as I had her, there was a purpose to my life.”

– Nikola Tesla

In 2006, the Colombo investigation compared the brain of Einstein with those of four healthy “age-matched” subjects (note 5). This study mathematically defined and compared geometrical features of primate-specific interlaminar glial processes (note 6). No distinctive geometrical characteristics of interlaminar processes were observed; however, Einstein’s astrocytic processes were shown to be greater in number and larger. Some aspects of glial process morphology, specifically the enlarged terminal masses (bulbous endings), were deemed of unknown significance, though the authors did propose the possibility of “a potential increase in the local numbers of glial channels and receptors [representing] – in healthy conditions – a functional upgrading of the cortical neuropil”. In conclusion, however, Colombo et al state: “incongruities between the supposedly special structural ‘attributes’ of [Albert Einstein’s] brain and current interpretations of their meaning raise doubts as to the exact contribution of these types of analyses, besides spurring a provocative discussion in scientific and laymen literature. In a species with a heavily socially molded brain and mind, such as human, the full expression of an individual special aptitude depends on multiple genetic and environmental factors – which could cancel or potentiate the former. Perhaps individuals with “special” brains (and minds) are more frequent than suspected. They just may go unnoticed due to sociocultural conditions or their early potential being cancelled [due to environmental factors]”.(14)

Having been steeped in more recent findings, exposed in the previous post titled “Anatomy of Genius”, one is tempted to propose that the structures defined by Colombo et al have a function similar to neuronal pre-synaptic terminals (i.e. storage, expression and re-uptake of molecular signaling compounds and trophic factors), and that Einstein’s enlarged glial processes allowed for a greater capacity of conveyance and communication of information – a greater volume of transmission. Or, as proposed by Colombo et al, it may be that by the time of his passing, Einstein’s brain displayed micro-scale degradations, similar to those observed in the infantile brains of Down’s syndrome cases, and in late onset Alzheimer’s disease.16

However, “if we regard an astrocytic domain as an elementary unit of brain that monitors, integrates, and potentially modifies the activity of a contiguous set of synapses, this glio-neuronal unit in human brain [in comparison to rodent brain] contains far-larger numbers of synapses, thus capable of carrying out more complex processing per glio-neuronal unit, than any other species. It is tantalizing to propose that the computational power of cortex increases as a function of its size of astrocytic domains. If so, glio-neuronal-based processing increases the intelligence of primates further than the mere increase in brain size”.16

Quiet spot – neural efficiency hypothesis
Grabner et al (2006) studied the impact of expertise and intelligence at the neurophysiological level, testing tournament chess players to determine whether cortical activation is reduced during expert performance. It turned out that activation relating to figural intelligence (note 7), rather than general intelligence, was significantly reduced.

“[Recent] research has provided considerable evidence of the neural efficiency hypothesis of intelligence, indicating lower and more focussed brain activation in brighter individuals. […] Based on numerous findings of negative correlations between participants’ intelligence and the amount of brain activation during cognitive performance it was postulated that intelligence is not a function of how hard, but rather how efficiently the brain works, indicated by a more focussed use of specific task-relevant areas. […] Correlations between measures of [working memory] capacity and intellectual performance, [suggest] that brighter individuals have a larger mental workspace at hand to perform mental operations and are capable of allocating their attentional resources more effectively than less intelligent individuals. […] In line with the neural efficiency hypothesis, participants with higher figural intelligence, displayed a lower amount of cortical activation than the figurally less intelligent participants. […] This finding nicely conforms to previous studies in the framework of the neural efficiency hypothesis, showing that the largest activation differences between lower and higher intelligence participants emerged over the frontal cortices”17

The executive processes (selective attention, inhibition, and mental manipulation of information) are regarded as being of utmost importance for intellectual functions. Since figurally brighter individuals have been observed displaying lower activation in cortical areas, with no simultaneous increase of activation in other cortical areas, one is tempted to assume that their brains function more efficiently than those of less intelligent individuals, who rely more strongly on the prefrontal cortex. In this light it is rather puzzling to consider the case of verbal IQ, in which increased cortical activation is observed in correlation with verbal tasks, suggesting that verbally brighter individuals display less neural efficiency during task performance.17
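To make the reported negative correlation concrete, here is a toy calculation; the scores and activation values are invented for illustration and are not data from Grabner et al.

```python
# Toy illustration (fabricated numbers): higher test scores paired with lower
# task-related activation yield a Pearson correlation coefficient below zero.
from statistics import mean, stdev

iq_scores  = [95, 100, 105, 110, 115, 120, 125]    # hypothetical figural-intelligence scores
activation = [1.8, 1.7, 1.5, 1.4, 1.2, 1.1, 0.9]   # hypothetical frontal activation (arbitrary units)

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(round(pearson_r(iq_scores, activation), 2))   # close to -1: brighter, less activated
```

A coefficient near -1 is simply the statistical signature of the pattern described: the brighter the participant, the lower the task-related activation.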

My own assumption is that figural processing fundamentally differs from linguistic (verbal) processing; the former is much older and more deeply rooted in the tree of life than the latter. A fitting example may be made of the proposed externalized spatial memory of Physarum polycephalum, exposed in the previous post “Anatomy of Genius”. Behaviors such as that of P. polycephalum are considerably closer to the root of the tree of life than is verbal language.

Postmortem paragon
The www is awash with reports suggesting that Einstein was dyslexic and performed poorly in school. However, according to Wolff & Goodman (2001) writing for the foremost repository of Einstein’s works, the Albert Einstein Archives at the Hebrew University of Jerusalem, though there seems little doubt that young Albert was possessed of slightly unusual, even rebellious characteristics, dyslexia and poor scholarship were not among them.

“If dyslexia is defined as a neurological condition which causes problems translating language to thought or thought to language and therefore presents difficulties with reading, writing and spelling, speaking or listening, Einstein can certainly not be diagnosed with this defect.

The strongest argument that Einstein was not dyslexic is that he mastered the German language perfectly and his ability to express himself in writing and speech showed high skills of comprehension, discrimination and precision.

A different aspect may be Einstein’s social behavior. It prompted some specialists to place him among those afflicted with autism, or its milder form, a developmental disorder called Asperger’s Syndrome (AS). Children suffering from AS are characterized as aloof and emotionally detached; their socially inappropriate behavior and their extreme egocentricity prevent them from interacting successfully with their peers. They appear to have little empathy for others and to lack social or emotional reciprocity. Other symptoms include motor clumsiness, non-verbal communication problems, repetitive routines and stereotyped mannerisms and the idiosyncrasy for loud or sudden noises. One of the most interesting aspects of their personality is the “perseveration,” an obsessive interest in a single object or topic to the exclusion of any other.

Some of the characterizations of AS described in the paragraph above actually apply well to the young Albert as we know him from Maja’s and Max Talmey’s recollections.

Both Maja and Talmey describe a boy who took little interest in boisterous games and, in general, in his peers, a boy who would concentrate patiently on elaborate constructions with building blocks or playing cards, delve into books and tricky arithmetic problems or play the violin. A sort of glass pane, as he called it many years later, separated him from his fellow human beings. Had such “social phobia” then been classified as a personality disorder, and had his parents and doctors felt the need to ‘heal’ the boy by making him conform to some norm, Albert might not have become Einstein.

Self-sufficiency, autonomy, a certain shyness and an extraordinary power of concentration, are traits that still characterized the adult scientist. He never felt comfortable with the obligation to deliver addresses and speeches and to mingle with people. The man who attracted women “like a magnet attracts filings”, who was not afraid of having more than one love affair alongside his marriage and who stuck by his friends and lovers “in his way”, this man nevertheless considered himself a lone wolf: “I never belonged to my country, my home, my friends, or even my immediate family, with my whole heart.” Music was the portal into the place where Einstein sealed his emotions in order to avoid dealing with interpersonal relationships.”18

Silberman (2003) reports that autism was once considered a subset of childhood schizophrenia, and calls Asperger’s syndrome “the engineers’ disorder.” Geneticists call those who don’t fit into the diagnostic pigeonholes “broad autistic phenotypes”.19

Geek_Madness
This hierarchy, in addition to the meaning associated with the previous image, seems a concise manner of describing Nikola Tesla – a largely uncelebrated genius of the 20th century.

Enigma
While we are comforted by the ability to solve intractable problems, our love of science is tinged with apprehension.
Holmes
An unlikely hero, Sherlock Holmes is callous, arrogant, and shuns society. Sir Arthur Conan Doyle described his character as “a calculating machine”. Holmes is not just a solver of mysteries, but a mystery himself, a superhuman intellect – an artificial intellect? Surely a scientist. The character is a conglomeration of popular stereotypes of the archetypal scientist: solitary, introverted, daring, reckless, slightly inhuman, cruel, obsessive, imaginative and brilliant.

The world Holmes was originally created for was one obsessed with science. The Victorian era saw the birth of Charles Babbage’s own “calculating machine”. Yet fascinatingly, the character is also a recluse, an eccentric bohemian, who relies upon intuition and flashes of insight.

“See the value of imagination, […] We imagined what might have happened, acted upon the supposition, and find ourselves justified. Let us proceed”.20

On occasion Holmes abandons cerebral methods altogether, opting instead for good old-fashioned fisticuffs, “[…] a straight left against a slogging ruffian” – this I recognize as just plain hard physical work.
He also describes himself as “the most incurably lazy devil that ever stood in shoe leather.” – a sure recognition of neural efficiency!

Cultural apprehension of such characters stems from an irrational fear of not knowing for certain how far one of them might go in the pursuit of truth…21

Conclusion
We set out to explore, and perhaps help to define, genius. While an unambiguous definition is not on the visible horizon, we have reasonably argued:
– that intelligence is a somatic function, only loosely related to genius;
– that a surprisingly strong set of correlations is emerging between genius, creativity, psychosis, and other disorders of the brain linked to information processing, such as bipolar disorder, the dyslexic spectrum, and the autistic spectrum.

Furthermore, it is clear that the phenomena measured by the intelligence quotient, as specified by the Stanford-Binet scale, continue to be poorly understood. Rather clearer is that the Stanford-Binet scale is a good predictor of an individual’s aptitude for schooling, for ‘in the box’ thought and action, and that the Binet-Simon scale has been thoroughly bastardized in the name of rational money making. The IQ test as we know it today seems to be a reliable manner of identifying Homo economicus, and the school system, such as it is today, seems a reliable manner of propagating the skill-set of Homo economicus.

Personally, I was surprised by the links to deep history, the jinn and other mythological creatures, guardian spirits and the tree of life / tree of knowledge. Also in this vein, I am now deeply fascinated by bifurcating, branching, web-like geometries and their relation to adaptive, physical systems of knowledge.

A short list of characteristics related to genius:
– stubbornness
– single-mindedness
– creativity (divergent thinking)
– a willingness to ignore social normalcy
– a willingness to bend and break rules
– decreased dopamine transmission
– (and by association) risk taking and addictive behaviors

We have begun to conceive of neurons, rather than glia, as support cells, and of neuronal networks as extensions of glial networks. Neuronal networks facilitate rapid information transfer and processing, but are created and maintained by the slower glia, which create and interpret meaning. There is a metaphor to be seen in the relationship between machines and humans on the www, and the relationship between neurons and glia in the brain.

It seems intuitively reasonable to pose the following immature hypothesis:
Mediated by greater-capacity infrastructure, an increase in volume transmission leads to a greater capacity for subconscious information processing, and thus to greater power to interpret meaning.

However, there is to date little evidence in support of this set of assumptions, and increases in carrying capacity also seem to give rise to schizophrenic states. Closer to realization, due to imminent testability, is the hypothesis:
Genius is a pathological state.

Indeed, to what extent is Homo economicus systematically, by rational and theoretically profitable means, eradicating genius from the nascent cultural milieu?

An important philosophical question presents itself:
Is it in the common good to normalize human behavior?

Bibliography and Notes
Note 1) The Aristotelian Problemata is most likely a collection of pseudepigraphs, with some portion originating from the Lyceum, where Aristotle taught after his return to Athens. The Lyceum under Aristotle was nicknamed the Peripatetic school, immortalized during the Renaissance in a fresco by Raphael, titled Scuola di Atene (School of Athens). The English word peripatetic is derived from the Greek περιπατητικός. Scuola di Atene depicts many members of the Peripatetic school standing and walking.
Aristotel_Plato
An enlargement of the central area of the fresco portrays an aged Plato and a young Aristotle walking while arguing.

Note 2) According to Aristotelian thinking, geniuses are able to regulate their madness, allowing sometimes for the open associations of creative freedom (i.e. divergent thinking) and at other times for a narrowly focussed, rational and analytical approach to the fruits of free association. The mechanism of this self-regulation is not discussed, or if it is, the subject has escaped me – the text as translated by Forster (1927) makes for difficult reading, presumably due to the many differences of worldview between ancient (Aristotelian) and modern times.

Note 3) It had crossed my mind that the concept of genius may be the myth in reference, though upon reflection, Hines’ meaning in neuromythology is almost certainly synonymous with the type of uncritical collective delusion which I have come to understand as default culture.

Note 4) It is certainly noteworthy that William Shakespeare was a master neologist!

Note 5) Four normal brains and one of an alleged genius hardly make for a conclusive study; however, this is what is available to us, and it is better than pure speculation.

Note 6) Interlaminar astrocytes are specific to the cerebral cortex of higher primates. Their characteristic peculiarity is a very long single process (up to 1 mm) that extends from the soma located within the supragranular layer to cortical layer IV.15

Note 7) Figural refers to a dimension of J. P. Guilford’s Structure of Intellect represented by real-world objects; environmental aspects. The figural dimension is defined as:
visual information perceived through seeing; auditory information perceived through hearing; and kinesthetic information perceived through one’s physical actions.

1) H. Northwood, “The Melancholic Mean: the Aristotelian Problema XXX.1”, (1998), Twentieth World Congress of Philosophy, https://www.bu.edu/wcp/Papers/Anci/AnciNort.htm

2) E.S. Forster et al (translation), “Problemata – Book XXX”, (1927), THE WORKS OF ARISTOTLE, Oxford University Press, http://archive.org/stream/worksofaristotle07arisuoft/worksofaristotle07arisuoft_djvu.txt

3) N. Andreasen, “Secrets of the Creative Brain”, (2014), THE ATLANTIC, http://www.theatlantic.com/features/archive/2014/06/secrets-of-the-creative-brain/372299/

4) S. Kéri, “Genes for Psychosis and Creativity”, (2009), PSYCHOLOGICAL SCIENCE, https://www.researchgate.net/publication/26663675

5) F. E. Lepore, “Dissecting Genius: Einstein’s Brain and the Search for the Neural Basis of Intellect”, (2001), Dana Foundation, http://www.dana.org/Cerebrum/Default.aspx?id=39337

6) T. Hines, “Neuromythology of Einstein’s brain”, (2014), Brain and Cognition, Vol. 88, p. 21-25, http://www.sciencedirect.com/science/article/pii/S0278262614000669

7) L. Agnati & K. Fuxe, “Extracellular-vesicle type of volume transmission and tunnelling-nanotube type of wiring transmission add a new dimension to brain neuro-glial networks”, (2014), Philosophical Transactions of the Royal Society B, http://rstb.royalsocietypublishing.org/content/369/1652/20130505

8) M. J. Chorney et al, “A Quantitative Trait Locus Associated with Cognitive Ability in Children”, Psychological Science, Vol. 9, No. 3 (1998), p. 159-166, http://www.jstor.org/discover/10.2307/40063273?sid=21105308375691&uid=2&uid=4&uid=2129&uid=70&uid=3739008

9) Ö. de Manzano et al, “Thinking Outside a Less Intact Box: Thalamic Dopamine D2 Receptor Densities Are Negatively Related to Psychometric Creativity in Healthy Individuals”, (2010), PLoS ONE, http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0010670#s3

10) “Schizophrenia”, (2009), National Institutes of Health, http://www.nimh.nih.gov/health/publications/schizophrenia/index.shtml

11) B. Anderson, T. Harvey, “Alterations in cortical thickness and neuronal density in the frontal cortex of Albert Einstein”, (1996), Neuroscience Letters, vol. 210, p. 161-164, http://ac.els-cdn.com/0304394096126938/1-s2.0-0304394096126938-main.pdf?_tid=a46f43ee-9a5c-11e4-84d3-00000aacb35d&acdnat=1421068521_fd1115a0565a10574a1bb90a418fff6d

12) M. Leslie, “The Vexing Legacy of Lewis Terman”, (2000), Stanford Alumni, https://alumni.stanford.edu/get/page/magazine/article/?article_id=40678

13) L. Selemon, G. Rajkowska, P. Goldman-Rakic, “Elevated neuronal density in prefrontal area 46 in brains from schizophrenic patients: Application of a three-dimensional, stereologic counting method”, (1998), http://onlinelibrary.wiley.com/doi/10.1002/%28SICI%291096-9861%2819980316%29392:3%3C402::AID-CNE9%3E3.0.CO;2-5/abstract

14) J. Colombo, et al, “Cerebral cortex astroglia and the brain of a genius: a propos of A. Einstein’s”, (2006), Brain Research Reviews, Vol. 52(2), p. 257-263, http://www.sciencedirect.com/science/article/pii/S0165017306000130

15) “Astrocytes”, Network Glia, (sponsored by the Journal) Glia, http://www.networkglia.eu/en/astrocytes

16) N. A. Oberheim et al, “Uniquely hominid features of adult human astrocytes”, (2010), vol. 29(10), p. 3276-3287, Journal of Neuroscience, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2819812/

17) Grabner, Neubauer, Stern, “Superior performance and neural efficiency: The impact of intelligence and expertise”, (2006), http://www.ifvll.ethz.ch/people/sterne/grabner_neubauer_stern_2006.pdf

18) B. Wolff & H. Goodman, “The Legend of the Dull-Witted Child Who Grew Up to Be a Genius”, (2001), The Albert Einstein Archives – The Hebrew University of Jerusalem, http://www.albert-einstein.org/article_handicap.html

19) S. Silberman, “The Geek Syndrome”, (2003), Wired, http://www.wired.com/wired/archive/9.12/aspergers_pr.html

20) A. C. Doyle, “Silver Blaze”, (1892), http://www.eastoftheweb.com/short-stories/UBooks/SilvBlaz.shtml

21) S. Day, “Sherlock Holmes is the archetypal scientist – brilliant but slightly scary”, (2014), The Guardian, http://www.theguardian.com/science/blog/2014/jan/01/sherlock-holmes-archetypal-scientist

Anatomy of Genius

Nobly reasonable? Infinitely facultative? Angelic and Godly?!
Wow! What a piece of work was Shakespeare!
But was he a genius? Was Bach, Da Vinci, or Einstein?
What do we mean by use of the word genius?

In this, the third of four posts, the intent is to bravely push on, away from the comfort and normalcy of home. Our journey began in a deep contextual fog, of historical, theistic, and social themes; tricky navigation to be sure! Now, with a stiff wind in our sails, we carry on to explore the isles of intelligent behavior – Cellular Biology and Micro Anatomy, and to reunite matter with genius.

A tall order! Or a tall tale?

Indistinct morphologies and a rebel arbiter
In opposition to the “prestigious authorities” of his time, Rudolf Ludwig Karl Virchow argued, during the mid-eighteen-hundreds, that the brain contains connective tissue. He described what he assumed to be connective tissue as “a sort of putty [nervenkitt], in which the nervous elements are embedded”.1 The German term nervenkitt – coined and first used by Virchow to identify the sticky mass he had observed enveloping neural cells – is translated into English as neuroglia; the word glia is derived from the Greek γλοία (“glue”). Shortly thereafter, in 1865, Otto Friedrich Karl Deiters reasoned that any cell that does not have an axon (“Hauptaxencylinderfortsatz”, roughly “main axis-cylinder process” in English) cannot be a nerve cell. Deiters’ hypothesis was confirmed in 1886 by Golgi’s superior staining technique.

Camillo_Golgi black_reaction

Camillo Golgi invented silver staining (“the black reaction”) in 1873, which enabled the visualization of nervous tissue with light microscopy.

Golgi called attention to the relationship between the endfeet of glial “protoplasmic prolongations” (known today as processes) and blood vessels. Nerve cells were very rarely observed to contact blood vessels, so glial cells were assumed to be the principal suppliers of nutrients, sugars and oxygen, to the “noble elements” (neurons) of the brain. Ernesto Lugaro first proposed that perivascular glial endfeet may serve as a detoxifying filter of substances that might enter from the blood into the brain, and that glial cells might remove toxic waste products of neuronal metabolism. And so it was that glia were assigned to the low caste of the cellular population of the central nervous system, and given the status of “support cells”, which is what the canon of neurobiology has taught until very recently.

Current studies are showing that astroglia, or astrocytes (star-like cells), display a remarkable heterogeneity in morphology and function.2 Not all astrocytes exhibit a star-like morphology, not all express glial fibrillary acidic protein (GFAP – the compound currently used as a specific marker of astrocytes), and not all contact the vasculature. And apparently, it was never true that we use only one tenth of our brains – a conjecture based upon the assumption that glia outnumber neurons 10 to 1, and that only the “noble elements” do the thinking. A quantitative study performed by Bahney et al (2013) has concluded that “the ratio of glia to neurons in the human brain is approximately 1:1 rather than the 10:1 or 50:1 ratio previously assumed”.3 So, although astroglia comprise the most diverse cell type in the central nervous system, they are not, as Oberheim (2012) and many before her suggest, the most numerous.

In the absence of an unambiguous definition of this most fascinating of cell types, and in an attempt to contextualize what may be a key anatomical aspect of genius, let us take a little swim in the relevant literature.

Dominium & Syncytium
Protoplasmic astrocytes stake out, occupy, and maintain their own individual territories, thus creating micro-anatomical domains2 within the limits of the tentacle-like processes extended by each astrocyte cell body. Within an individual cell’s domain, the astrocyte membrane interacts with the membranes of several neurons, may envelop as many as two million synapses, and extends to a neighboring blood vessel and to neighboring glia. Such interactions are mediated by the flattened endfeet of the elongated processes.4
neurovascular_unit
The entire complex (astrocyte, multiple neurons, other glia, and blood vessel) is known as a neurovascular unit.

Individual astroglial domains are integrated into a local superstructure called an astroglial syncytium, as a result of cytoplasmic sharing and of cytoplasmic streaming, mediated by gap junctions on endfeet. Astroglial syncytia are also anatomically segregated, each syncytium forming an anatomical computation structure. This is not to be confused with computational anatomy (note 1); the term anatomical computation is used here to describe a multicellular network capable of physical computation. There is some resemblance, in meaning at least, to the macro-scale anatomical computation described by Valero-Cuevas et al (2006) for the tendon network of the fingers5, and by Milo et al (2004) for various evolved and artificial networks6.
tendon_network_fingers
Schematic diagram of the tendon network of the fingers – an example of anatomical computation.

gap_junction
Schematic diagram of a gap junction and its subunits – the first slide (left side) shows a transmembrane protein (connexin), the second slide shows the (self)assembled connexin hexamer (hemichannel), the third slide shows three gap junctions formed by the (self)assembly of pairs of hemichannels.

astroglial_syncytium_gaps
Schematic diagram of an idealized 2-dimensional slice of an astroglial syncytium, showing gap junctions (green) at the ends of processes, cell nuclei (red).

astroglial_syncytium_paths
“Dancing Suns” – Schematic diagram of an idealized 2-dimensional slice of an astroglial syncytium, showing two (of very many) possible signalling pathways – a direct route (yellow), and the long way ’round (blue).

“Protoplasmic astrocytes of the cortex are highly coupled cells. After a single cell injection of biocytin, a gap junction-permeable dye, an average of 94 cells spanning a radius of approximately 400 μm can be visualized and hence appear networked through gap junctions”.2 Glia are known to receive and transmit molecular gliotransmitters, propagated, possibly bidirectionally, via the cellular processes. Furthermore, processes are capable of propagating ionic waves and have been reported to conduct action potentials (the principal form of neuronal signal transduction), and thus are capable, in principle, of high-speed electrical communication.7 It is conceivable that the somatic pulse generated by rhythmic cardiac function is used by glial syncytia to drive cytoplasmic compounds, selectively and/or diffusively, through a brain-wide-web (neuro-glial syncytium).
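As a loose topological illustration of the two routes in the “Dancing Suns” diagram above, the sketch below treats astrocytes as nodes and gap junctions as edges of a small, made-up graph; the coupling pattern is hypothetical and nothing here models real biophysics.

```python
# Astrocytes as nodes, gap junctions as edges; a signalling route is a path
# through the coupled network. The network below is invented for illustration.
from collections import deque

gap_junctions = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C", "F"],
    "E": ["C", "F"],
    "F": ["D", "E"],
}

def direct_route(start, goal):
    """Breadth-first search: the shortest chain of gap junctions between two cells."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in gap_junctions[path[-1]]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])

def all_routes(start, goal, path=None):
    """Every loop-free signalling route, the long ways 'round included."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    routes = []
    for neighbour in gap_junctions[start]:
        if neighbour not in path:
            routes.extend(all_routes(neighbour, goal, path))
    return routes

print("direct route A -> F:", direct_route("A", "F"))
print("all loop-free routes A -> F:", all_routes("A", "F"))
```

Even this six-cell toy admits several loop-free routes between a pair of cells; scaled to the roughly 94 coupled cells reported above, the number of alternative pathways becomes enormous.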
syncytium_capillaries
Astrocytes (red), blood vessels (green), and ganglion cell nuclei (blue).8

The conscious pilot
“Glial cells […] have gap junction connections with neurons and other glia, extend great distances throughout the brain, and could enable brain-wide gap junction networks. […] Neurons connected by gap junctions have continuous internal cytoplasm and synchronized membranes and behave like one giant neuron. Membrane potentials on one side of a dendritic–dendritic gap junction induce a spikelet or prepotential into the opposite side, integrating dendritic potentials (along with axonal inputs) to drive synchrony. […] Neurons and glia connected by gap junctions may be viewed as subsets of Golgi’s threaded-together reticulum, described as syncytia […]. Placement, openings, and closings of gap junctions are regulated by intra-neuronal calcium ions, cytoskeletal microtubules, and/or phosphorylation via G-protein metabotropic receptor activity. As various gap junctions open and close, form and disappear, the topology, location, and extent of synchronized dendritic web syncytia can change and move sideways through input/integration layers of axonal–dendritic networks throughout the brain. [The] billions of brain neurons and glia provide a near-infinite variety of topological syncytia, representational Turing structures which may be isomorphic with cognitive or conscious content. Fleetingly shifting […] spatiotemporal envelopes of dendritic synchrony [may] correlate with conscious scenes and frames. […] Any fine-scale [i.e. subcellular] process mediating consciousness occurring on membrane surfaces or within neuronal interiors could be structurally unified and temporally synchronized by gap junctions and dendritic webs”.9
fine_scale_Hamerhoff
Schematic diagram of two neural dendrites, or glial processes (or one of each), connected by a gap junction. Within each cytoplasmic interior, microtubules (spotted bars) are connected by microtubule-associated proteins (lines interconnecting the spotted bars). Curved lines and interference patterns represent possible fine-scale processes underlying consciousness (e.g., electromagnetic fields, calcium ion gradients, molecular reaction–diffusion patterns, actin sol-gel dynamics, glycolysis, classical microtubule information processing, and/or microtubule quantum computation with entanglement and quantum coherence). These processes can extend through gap junctions and in principle throughout brain-wide webs. Thus, cellular integration webs may unify (on a brain-wide basis) fine-scale processes comprising consciousness.9

In addition to gap junction coupling, Oberheim et al (2012) propose that hemichannel (half a gap junction) formation allows for regulated release or uptake of gliotransmitters to and from the extracellular matrix.

Tripartite Synapse
“Astroglia can affect neuronal excitability, possibly modulate synaptic transmission and synchronise synaptic events. It should be stated, however, that the role and relevance of gliotransmission for information processing in the brain remains controversial”.10

Synapses comprise three parts: a presynaptic terminal, the postsynaptic neuronal membrane, and an enveloping astrocyte. Neurotransmitters released from the presynaptic terminal activate receptors that are embedded in the membranes of the postsynaptic neuron and the local astrocyte, thus potentiating the postsynaptic neuron and creating a calcium ion (Ca2+) signal in the astrocyte. The latter can propagate through the astroglial cell body and through the astrocytic syncytium, thus allowing for the triggering of gliotransmitter release from the astrocyte, which in turn may signal the pre- and postsynaptic neurons.10
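The loop just described (presynaptic release, postsynaptic excitation plus an astrocytic calcium signal, then gliotransmitter feedback) can be caricatured in a few lines; the rates, threshold and decay factors below are invented solely to show the order of events, not to model real kinetics.

```python
# A minimal sketch of the tripartite-synapse feedback loop, in arbitrary units.
def step(state, presynaptic_release):
    """One coarse time step of the neuron -> astrocyte -> neuron signalling loop."""
    post, ca, glio = state["post"], state["ca"], state["glio"]

    # neurotransmitter from the presynaptic terminal excites the postsynaptic
    # membrane and raises astrocytic calcium
    post += 1.0 * presynaptic_release
    ca += 0.5 * presynaptic_release

    # calcium propagating through the astrocyte (and its syncytium) triggers
    # gliotransmitter release once a threshold is crossed
    if ca > 1.0:
        glio += 0.2 * ca

    # gliotransmitter feeds back onto the synapse
    post += 0.3 * glio

    # decay back toward rest (the astrocytic variables decay more slowly)
    return {"post": post * 0.5, "ca": ca * 0.9, "glio": glio * 0.95}

state = {"post": 0.0, "ca": 0.0, "glio": 0.0}
for t in range(10):
    state = step(state, presynaptic_release=1.0 if t < 5 else 0.0)
    print(t, {k: round(v, 2) for k, v in state.items()})
```

Note the slower decay applied to the astrocytic variables, echoing the time-scale difference described in the next paragraph.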

The generation and maintenance of trophic molecules, signalling molecules, various second messengers, metabolic substrates, and the other compounds that comprise inter- and intracellular, and inter- and intrasyncytial, signal waves is complex, involving selective diffusion through gap junctions, and endo- and exocytosis via vesicles, to and from astrocytes, neurons and the extracellular matrix. Importantly, astroglial signalling operates on a much slower time scale than neuronal signalling: the former ranges over seconds to minutes, the latter over milliseconds.

Far from playing the lowly role of support cells, astrocytes, and the syncytia in particular, should probably be considered integrators, modulators, and interpreters of molecular signalling cascades. My intuition is that the glia do the messy and difficult part of thinking, and that they create, regulate and keep the rapid, rational, machine-like neural networks. In a sense it is the neural networks which play the supporting role, much as we humans do the messy work of real intelligent thinking, and have created, regulate and keep the rapid, rational networks of machine systems in order to aid our endeavors.

– a messy little aside –
The interconnected brain, a system of syncytia, surely forms a meta-syncytium. This concept seems a much better fit with Aristotelian animism than with Cartesian substance dualism and “nonphysical substance”(?!) – an abhorrent conception if ever I encountered one! No, the mind (spirit, soul, consciousness, sapience, genius…) is a somatic function. It is time for us to bravely correct Descartes’ erroneous vision.

We think because we are.
– or, in Descartes’ native French, J’existe, donc je pense.

Anatomy of animism
The following section presents an argument for Aristotelian (pagan) animism, by way of two sets of experimental findings:
a) intelligent behaviors of slime molds.
b) intelligent behaviors of plant roots.

a) Physarum polycephalum develops as a multinucleate syncytium, and has been reported to display behaviors indicative of intelligence, such as learning, memory formation and anticipation. In an experimental study reminiscent of Skinner’s operant conditioning (described in an earlier post, titled “The Worldly and The Amish“), Saigusa et al (2008), exposed the plasmodium of P. polycephalum to periodic changes of humidity and temperature, thus producing a temporary adverse environmental state at regular intervals. As is the case with all life forms, P. polycephalum responds to adverse environmental change, in this case by slowing its growth and territorial exploration, here termed “spontaneous in-phase slowdown” (SPS). After a set of three regularly timed exposures to the adverse condition, the organism is reported to have anticipated a fourth and fifth exposure, in both cases observed as a significant SPS. However, the fourth and fifth SPSs were not induced by the experimenters (i.e. there was no adverse environmental condition). Thus, the authors “conclude that the Physarum plasmodium can perform a primitive version of brain function (that is, memory and anticipation)”.11
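One crude way to see what such anticipation requires is to sketch it as a simple internal clock: estimate the interval between past adverse pulses and slow down when the next pulse is due. This is only an analogy under that assumption, not the authors’ model, and the times and tolerance below are arbitrary.

```python
# A minimal "internal clock" sketch of anticipation; all numbers are made up.
stimulus_times = [60, 120, 180]          # three regularly timed adverse pulses (minutes)
period = (stimulus_times[-1] - stimulus_times[0]) / (len(stimulus_times) - 1)

def expects_stimulus(t, tolerance=5):
    """True when the learned period predicts an adverse pulse near time t."""
    next_expected = stimulus_times[-1] + period
    while next_expected < t - tolerance:
        next_expected += period
    return abs(t - next_expected) <= tolerance

# the fourth and fifth pulses are never applied, yet slowdowns are predicted
for t in (240, 300, 270):
    print(t, "slowdown expected" if expects_stimulus(t) else "normal locomotion")
```

The learned period predicts slowdowns at the fourth and fifth expected times even though no stimulus is applied there, which is the essence of the reported behavior.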

C. Reid et al (2012) have commented that “when solving a maze or connecting several food sources using the most efficient network, the slime mold first explores its entire environment, with cytoplasm simultaneously covering all exploration space, before retracting cytoplasm from areas that do not contain food. The result is the construction of a single tubule when connecting two food sources only, or an efficient tubule network between food-source nodes”.12

This proliferative exploration prior to the retraction of non-reinforced connections bears great similarity to the synaptic pruning discussed in an earlier post, titled “Meta-matricity”.
Tetsu_Saigusa
Tetsu Saigusa holding two petri dishes containing Physarum polycephalum. In the dish on the right the slime mold has covered the entire exploration space available to it. In the dish on the left, after self-pruning a single cytoplasmic tubule connects two nutrient sources.
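The explore-everything-then-retract strategy visible in the two dishes above can be caricatured algorithmically: cover a small grid completely, then keep only the cells lying on a shortest route between two food sources. The grid, the food positions and the use of a breadth-first search are my own simplifications, not the slime mold’s decentralized mechanism.

```python
# Explore the whole space, then retract everything not on a food-to-food tubule.
from collections import deque

WIDTH, HEIGHT = 7, 5
food_a, food_b = (0, 2), (6, 2)

def neighbours(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
            yield (nx, ny)

# exploration phase: cytoplasm covers the whole space
explored = {(x, y) for x in range(WIDTH) for y in range(HEIGHT)}

# retraction phase: keep only a shortest tubule between the food sources
def shortest_tubule(start, goal):
    parents, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return set(path)
        for nxt in neighbours(cell):
            if nxt in explored and nxt not in parents:
                parents[nxt] = cell
                queue.append(nxt)
    return set()

tubule = shortest_tubule(food_a, food_b)
for y in range(HEIGHT):
    print("".join("#" if (x, y) in tubule else "." for x in range(WIDTH)))
```

The printout shows the field after retraction: dots where cytoplasm has withdrawn, ‘#’ along the single remaining tubule.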

During exploratory foraging the plasmodium leaves behind a mat of nonliving, extracellular polymeric substance (EPS). Reid et al suggest that the organism’s tendency to avoid the EPS in future foraging indicates the use of EPS by P. polycephalum as an externalized spatial memory. The authors also deem the avoidance behavior to be a choice, because if no previously unexplored territory is available, the avoidance stops. I assume that having exhausted all options for the acquisition of food, the starving organism is forced to eat the EPS which it had previously secreted. Whether the behavior described by Reid et al constitutes “choice” is, I think, debatable. But then, in all fairness, choice, preference, and indeed free will are all rather poorly understood.
Pp_externalized_memory
Photograph of P. polycephalum plasmodium showing (A) extending pseudopod, (B) search front, (C) tubule network, and (D) extracellular slime deposited where the cell has previously explored. The food disk containing the inoculation of plasmodial culture is depicted at (E).12
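The externalized-memory idea lends itself to a similarly minimal sketch, under the assumption that foraging can be caricatured as a walk that marks every cell it leaves and prefers unmarked cells, falling back on marked ones only when nothing fresh remains (mirroring the forced re-crossing of EPS noted above). Grid size, starting point and step count are arbitrary.

```python
# A walk with an externalized memory: deposit "slime" behind you, avoid it when possible.
import random

WIDTH, HEIGHT = 5, 5
slime, position = set(), (2, 2)

def moves(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < WIDTH and 0 <= y + dy < HEIGHT]

random.seed(1)
for _ in range(40):
    slime.add(position)                     # deposit extracellular slime behind us
    options = moves(position)
    fresh = [c for c in options if c not in slime]
    position = random.choice(fresh if fresh else options)   # avoid slime when possible

print(f"cells marked with slime: {len(slime)} of {WIDTH * HEIGHT}")
```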

b) Concurrent with these findings, advances in molecular and cell biology, and in ecology, have begun to identify plants as sensing, communicative, cooperative, problem-solving organisms. However, it was Charles Darwin, in collaboration with his son Francis, who first proposed that plants behave intelligently. Darwin spent the last twenty years of his life studying plant roots, and described them as behaving like the lower animals, principally the soft-bodied invertebrates such as worms and slugs, with the root apex seated at the anterior pole (front end) of the plant body, where it acts as a “brain-like organ”. Darwin’s root-brain hypothesis13 was forgotten, or ignored, for well over one hundred years, but recent experimental findings have collectively created a conceptual scaffold in its support. The higher plants, in particular, can no longer be placed outside the realm of cognitive, animated, animal living systems – a dichotomy traceable to Aristotle – which possibly arose from a failure to appreciate the varying time-scales of living organisms; plant movements, due to their greatly reduced velocity, are not as readily observable as animal movements.
Mycorrhizae
Plant roots, shown here as part of a mycorrhizal, mutualistic symbiotic relationship.

“The common descent of all organisms is the central pillar of Charles Darwin’s theory of evolution and […] the unity of life implied thereby is a revelation of both beauty and simplicity. By the same token, the existence of a plant neurobiology harmonises with the neurobiology of animals. […] In keeping with Charles Darwin’s theory of common descent of all organisms, a unification of animals/humans and plants according to their body polarity is possible and thereby removes from view the Aristotelian dichotomy between plant and animal organisms. [Plants] possess a sensory-based cognition which leads to behavior, decisions and even displays of prototypic intelligence”.14

Bibliography and Notes
note 1. In contrast to anatomical computation (aka: somatic computation), computational anatomy is defined as a field of neuroanatomy utilizing various imaging and computational techniques to model and quantify the spatiotemporal dynamics of neuroanatomical structures.

1) G. Somjen, “Nervenkitt: Notes on the History of the Concept of Neuroglia”, (1988), GLIA, Vol. 1, p. 2-9, PDF available, https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CCYQFjAB&url=https%3A%2F%2Fwiki.brown.edu%2Fconfluence%2Fdownload%2Fattachments%2F74847169%2FNervenkitt%2BNotes%2Bon%2Bthe%2BHistory%2Bof%2Bthe%2BConcept%2Bof%2BNeuroglia.pdf&ei=HZTHVN67J4OuPInrgJAL&usg=AFQjCNH8k7st6lX1MiuLhZ2TJB9SBwBKEQ&bvm=bv.84607526,d.ZWU

2) N. Oberheim et al, “Heterogeneity of Astrocytic Form and Function”, (2012), Vol. 814, p. 23-45, Methods in Molecular Biology, https://www.urmc.rochester.edu/labs/Nedergaard-Lab/publications/pdfs/Heterogeneity-of-Astrocytic-Form.pdf

3) J. Bahney et al,”Validation of the isotropic fractionator: Comparison with unbiased stereology and DNA extraction for quantification of glial cells”, (2013), Journal of Neuroscience Methods, Vol. 222, p. 165–174, http://www.sciencedirect.com/science/article/pii/S0165027013003786

4) M. Merlini et al, “In vivo imaging of the neurovascular unit in CNS disease”, vol. 1(2), p. 87-94, IntraVital, http://www.tandfonline.com/doi/full/10.4161/intv.22214#.VMooxMbYdFU

5) Valero-Cuevas et al, “The tendon network of the fingers performs anatomical computation at a macroscopic scale”, (2006), TRANSACTIONS ON BIOMEDICAL ENGINEERING, http://bme.usc.edu/assets/005/55499.pdf

6) Milo et al, “Superfamilies of Evolved and Designed Networks”, (2004), Science, http://www.sciencemag.org/content/303/5663/1538.abstract

7) T. Otis & M. Sofroniew, “Glia get excited”, (2008), Nature Neuroscience Vol. 11, p. 379 – 380, http://www.nature.com/neuro/journal/v11/n4/full/nn0408-379.html

8) Fernández-Sánchez and Cuenca, (2010), Vision Research Picture Competition, http://www.vision-research.eu/index.php?id=581

9) S. Hameroff, “The conscious pilot – dendritic synchrony moves through the brain to mediate consciousness”, (2009), Journal of Biological Physics, http://www.pubfacts.com/fulltext/19669425/The

10) Adapted from: H. Kettenmann, A. Verkhratsky, “Neuroglia – Living Nerve Glue”, (2011), Fortschritte der Neurologie und Psychiatrie, vol. 79, p. 588-597, http://www.networkglia.eu/en/astrocytes (sponsored by the journal Glia, http://onlinelibrary.wiley.com/doi/10.1002/glia.v63.2/issuetoc)

11) T. Saigusa et al, “Amoebae Anticipate Periodic Events”, (2008), Physical Review Letters, 100(1): 018101, http://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/33004/1/PhysRevLett_100_018101.pdf

12) C. Reid et al, “Slime mold uses an externalized spatial “memory” to navigate in complex environments”, (2012), vol. 109(43), p. 17490–17494, Proceedings of the National Academy of Sciences, http://www.pnas.org/content/109/43/17490.full

13) U. Kutschera & K. Niklas, Darwin’s root-brain hypothesis (p. 1343), in “Evolutionary plant physiology: Charles Darwin’s forgotten synthesis”, (2009), Springer-Verlag, http://www.evolutionsbiologen.de/media/files/pdfs/darwin/2009KutNiklas.pdf

14) F. Baluška et al, “The ‘root-brain’ hypothesis of Charles and Francis Darwin: Revival after more than 125 years”, (2009), Plant Signaling & Behavior, Vol. 4, p 1121-1127, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2819436/

Quantity of Genius?

Nobly reasonable? Infinitely facultative? Angelic and Godly?!
Wow! What a piece of work was Shakespeare!
But was he a genius? Was Bach, Da Vinci, or Einstein?
What do we mean by use of the word genius?

In this, the second of four posts, the intention is to push on, away from the comfort and normalcy of home port. Our journey began in a deep contextual fog, of historical, theistic, and social themes; tricky navigation to be sure! Now sailing out of the ‘fog’, our objective, or bias, is to explore the cultural isle of Intelligence testing. Beware! For things are not what they seem…

An unmeasurable quantity tentatively defined for the good of all children
binet
“All beings in the process of development are distinguished by and characterized by their involvement in play.” – Alfred Binet (1909)

The only son of a wealthy physician and an artist mother, Alfred Binet was raised by his mother as a result of his parents’ early separation. By the age of 15, Alfred and his mother had moved from Nice to Paris, in order that young Binet might attend law school there. Six years later, in 1878, he was awarded a degree in law, but never practiced, perhaps due to his life of privilege and independent wealth. His intent, or perhaps his father’s, was to study medicine, so Binet attended the Sorbonne, where an interest in psychology, self-propagated and mediated by books in the national library, soon overwhelmed the Sorbonne’s standard curriculum in natural sciences. He did not finish formal study.1 After five introverted years of independent study, Binet was introduced to Jean Charcot, then director of the neurological clinic in the Parisian hospital La Pitié-Salpêtrière, where Binet worked for eight years before resigning. In 1891 he was offered work at the Laboratory of Physiological Psychology at the Sorbonne, of which he became director in 1894. Despite the prestigious title, this position, like his prior one at the Salpêtrière, was unpaid. Indeed, throughout his career, Binet relied upon his independent income in order to conduct his research.2

Described as a shy but critical person, he had little patience for activities that he judged unworthy of his time. In description of Binet, his collaborator Simon wrote: “to examine patients with him was always an extreme pleasure, for he brought to the situation so much imagination” and recalled “What afternoons we passed with these subjects. What delicious conversations we had with them. And what laughs too.” A less sympathetic coworker described him as “difficult, dominant, even domineering, and that he alienated many collaborators.” Binet’s daughter Madeleine described him as “a lively man, smiling, often very ironical, gentle in manner, wise in his judgments, a little skeptical of course. . . . Without affectation, straightforward, very good-natured, he was scornful of mediocrity in all its forms. Amiable and cordial to people of science, pitiless toward bothersome people who wasted his time and interrupted his work.”

Binet’s ironical satire was tinged with darkness, evidenced by the following statement, designed to test critical thinking in child subjects: “Yesterday the body of an unfortunate young woman, cut into eight pieces, was found on the fortifications. It is believed she killed herself.”
– from a 1909 version of the intelligence scale.

Several members of the Free Society for the Psychological Study of the Child, of which Binet was a member, were appointed to the Commission for the Retarded, as the result of a law mandating that all children aged six to fourteen attend school. The question under investigation was:
“What should be the test given to children thought to possibly have learning disabilities, that might place them in a special classroom?”

Binet took it upon himself to establish, and where possible, define and measure the differences separating normal and abnormal children. A first draft of L’Etude experimentale de l’intelligence (“Experimental Studies of Intelligence”) was published in 1903. Two years later, after collaboration with Theodore Simon, an assistant from the medical school, a new test called the Binet-Simon scale of intelligence was published.

Between 1905 and 1911, Binet spent substantial amounts of time on test revision. However, this was far from his only professional activity. In the same period, he wrote books on the relation of mind and brain, on children’s ideas, on retarded children, and on the theater. He also published more than 100 articles, only a few of which focused upon the intelligence scale; many others examined psychotic patients, residents of mental hospitals, courtroom testimonies, the relation between language and thought, experts at chess and mental calculation; professional actors, directors, authors, and artists, the effects of mental fatigue on intellectual performance, and a host of other loosely related topics.

Examined in Binet’s explorations were various populations, including typical children and adults, as well as children and adults with varying degrees of mental retardation. As Binet was relatively unrestricted in his choice of study material, he eagerly pursued any area that he thought might shed light upon individual differences in mental function – including at least: consciousness, will, attention, sensation, perception, esthetics, creativity, suggestibility, hypnotism, cognitive styles, love fetishes, pain thresholds, mental fatigue, language development, memory development, and conceptual development. However, as Binet did not hold a professorship, or even a formal degree in the sciences, there was no possibility of him attracting the students and funds which might facilitate the continuation of his works. Sadly, his contemporaries seem to have been unable to recognize value in his many ideas, possibly because as undeveloped concepts, they did not evoke the obvious practical utility ascribed to the intelligence scale.

Expounding the remarkable diversity of intelligence, Binet and Simon made clear the limitations of the intelligence scale, saying that it did not yield an absolute measure of intelligence. Unlike the measure of length yielded by a ruler, the intelligence scale “yielded an ordinal classification in which the measure of intelligence was entirely relative to that of other individuals of the same age.” Throughout his career, Binet emphasized the necessity of studying intelligence by use of qualitative, as opposed to quantitative measures, and stressed that intelligence was not based on genetics alone; that intellectual development progressed at variable rates, was influenced by environmental factors (i.e. milieu), and was malleable rather than fixed.

Understanding intelligence to be a complex, relative and variable phenomenon, Binet and Simon did not issue a definition of precisely what their scale attempted to measure. They did, however, argue the central role of judgment:
“In intelligence there is a fundamental faculty, the alteration or the lack of which is of the utmost importance for practical life. This faculty is judgment, otherwise called good sense, practical sense, initiative, the faculty of adapting one’s self to circumstances. To judge well, to comprehend well, to reason well, these are the essential activities of intelligence. A person may be a moron or an imbecile if he is lacking in judgment; but with good judgment he can never be either.”

In 21 years, Binet published more than 200 times (books, articles, and reviews) in fields that are now called experimental, developmental, educational, social, and differential psychology. The diversity of topics his studies addressed is perhaps best visible from one typically productive year, 1894: two books (one an introduction to experimental psychology methods and one on the psychology of expert calculators and chess masters); four articles on children (three involving their memory for words, prose, and visual information and one on their suggestibility); two studies of professional dramatists; one article on spatial orientation (published in volume 1 of the new American journal Psychological Review); and a description of a graphical method for recording piano-playing techniques. He also found time to co-found and edit the first French psychological journal, “L’Année Psychologique”. In addition to this, Binet wrote four plays that were produced on the Paris stage, the common theme of which was the horrifying consequence of mistakes made by stupid bureaucrats, pompous physicians, and greedy businessmen.

outlier
“Individual differences have been an annoyance rather than a challenge to the experimenter. His goal is to control behavior, and variation within treatments is proof that he has not succeeded. Individual variation is cast into that outer darkness known as “error variance”. For reasons both statistical and philosophical, error variance is to be reduced by any possible device.”
– Binet

Intuitively, I understand that Binet saw himself in the children he was helping the French government to identify. The concept and term (retarded) was, I have little doubt, for Binet a working title; an officially sanctioned label that he could use until a better understanding of special children was reached. It seems clear that what he hoped to be able to identify, by application of some device such as the intelligence scale, were extraordinary young minds who perform a kind of cognition that appears alien, if not frightening, to (neurotypical) people who populate a span in the median range of the normal distribution of human cognizance. In light of the fact that the recurring theme of his research was the remarkable diversity of intelligence, it is highly ironic that Binet’s name should be so strongly associated with reducing intelligence to a narrow range of statistically standardized numbers; the IQ scores.

Subversion of extraordinary goodness:
normalcy and the dark side of human nature

Lewis Terman was also fascinated by intelligence, but his promotion of “gifted” children (a term he himself coined and identified with) was based upon elitist ideology. He was a proponent of eugenics, a social movement that arose from Galton’s conception of hereditary genius and aimed to improve the human ‘breed’ by perpetuating particular traits while eliminating others.3 In direct opposition to Binet, Terman felt that general intelligence was a quantifiable capacity. As a eugenicist, he believed that genetics dictated general intelligence, and that one’s “original endowment” of intelligence, which he termed the intelligence quotient, was not altered by education, home environment or practice.
lewis_terman
Lewis Terman
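For readers unfamiliar with the arithmetic behind the term, the early Stanford-Binet expressed the intelligence quotient as the ratio of mental age to chronological age multiplied by 100 (later tests replaced this with a deviation score standardized around a mean of 100); the ages below are invented purely for illustration.

```python
# Classic ratio formulation: IQ = (mental age / chronological age) * 100.
def ratio_iq(mental_age, chronological_age):
    return round(100 * mental_age / chronological_age)

print(ratio_iq(mental_age=10, chronological_age=8))    # 125: performing "ahead" of age
print(ratio_iq(mental_age=8, chronological_age=10))    # 80: performing "behind" age
print(ratio_iq(mental_age=10, chronological_age=10))   # 100: the definitional average
```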

Terman’s moral milieu was a product of contemporary business, civic, and educational leaders in the United States of America, who were attempting to “accommodate the needs of a diversifying population, while continuing to meet the demands of society. There arose the call to form a society based on meritocracy while continuing to underline the ideals of the upper class. In 1908, H.H. Goddard, a champion of the eugenics movement, found utility in mental testing as a way to evidence the superiority of the white race. After studying abroad, Goddard brought the Binet-Simon Scale to the United States and translated it into English. Following Goddard, [the mental testing movement in the U.S.] was led by Terman, who took the Binet-Simon Scale and standardized it using a large American sample. The new Stanford-Binet scale was no longer used solely for advocating education for all children, as was Binet’s objective. The new American objective of intelligence testing was illustrated in the Stanford-Binet manual, with testing ultimately resulting in curtailing the reproduction of feeble-mindedness and in the elimination of an enormous amount of industrial inefficiency. When Binet became aware of the foreign ideas being grafted on his instrument he condemned those who, with “brutal pessimism” and “deplorable verdicts”, were promoting the concept of intelligence as a single, unitary construct.”

Regardless, in 1907 Indiana had become the first of the United States to enact a law allowing sterilization on eugenic grounds. In 1914, Harry Laughlin at the Eugenics Record Office published a Model Eugenical Sterilization Law that proposed sterilization of the “socially inadequate” – people supported in institutions or maintained wholly or in part by public expense. The law encompassed the “feebleminded, insane, criminalistic, epileptic, inebriate, diseased, blind, deaf, deformed, and dependent” – including “orphans, ne’er-do-wells, tramps, the homeless and paupers.” By the time the Model Law was published in 1914, twelve states had enacted sterilization laws. Despite these early statutes, sterilization did not gain widespread popular approval until the late 1920s.

Perhaps due to his bullied Midwestern childhood, Terman made use of his influential professional position in order to push for the forced sterilization of thousands of Americans who scored below average on the Stanford-Binet scale. By 1924, approximately 3,000 people had been involuntarily sterilized in America; the vast majority (2,500) in California. That year Virginia passed a Eugenical Sterilization Act, which was adopted as part of a cost-saving strategy to relieve the tax burden in a state where public facilities for the “insane” and “feebleminded” had experienced rapid growth. The law asserted that “heredity plays an important part in the transmission of insanity, idiocy, imbecility, epilepsy and crime…” It focused on “defective persons” whose reproduction represented “a menace to society”.4

Hitler much admired the eugenics practices in America and, after becoming the German chancellor in 1933, empowered Nazi emulation and application of Americanized eugenics on anyone deemed to be a degenerate. By October of 1946, due to the atrocities imposed by the efficient systematization of eugenic (“genetic cleansing”) practices during the Second World War, the Nuremberg trials had declared forced sterilization a crime against humanity. Terman backed away from eugenics professionally, but personally maintained his belief.

Without doubt, Terman’s America and Hitler’s Germany formed an unfortunate and uncomfortable milieu. An infamous example may be made of the cooperation between the German government and the German subsidiary of American owned International Business Machines. “It was legal for IBM to service the Third Reich directly, but only until America entered the war in December 1941”. IBM technology increased the efficiency of Germany’s “Final Solution“, defined as the systematic extermination of the Jewish population in Nazi occupied Europe. It is perhaps insignificant in the current exploration, yet noteworthy, that America did not drop Einstein’s bastard children, Little Boy and Fat Man, on Germany.

Society identifies and attempts to define both genius and pathos.

Terman’s Termites, it seems, excelled in ability to perform within the educational system5, though none proved to be a genius, none won a Pulitzer or a Nobel, and none have left behind extraordinary lifetime achievements. It is certainly noteworthy that Einstein would have failed to enter Terman’s group, as did William Shockley and Luis Alvarez, both winners of the Nobel prize in physics, in 1956 and 1968, respectively.

Bibliography and Notes
1) T. Imhoff, “Alfred Binet (1857 – 1911)”, (2000), http://www.muskingum.edu/~psych/psycweb/history/binet.htm

2) R. Siegler, “The Other Alfred Binet”, (1992), Developmental Psychology, Vol. 28, p. 179 – 190, American Psychological Association, http://psycnet.apa.org/journals/dev/28/2/179/

3) M. Leslie, “The Vexing Legacy of Lewis Terman”, (2000), https://alumni.stanford.edu/get/page/magazine/article/?article_id=40678

4) P. Lombardo, “Eugenic Sterilization Laws”, (cca 2011), http://www.eugenicsarchive.org/html/eugenics/essay8text.html

5) W. E. Benet, “Genius: An Overview”, (2005), Assessment Psychology Online, http://www.assessmentpsychology.com/genius2.htm

Fog of Genius

Nobly reasonable? Infinitely facultative? Angelic and Godly?!
Wow! What a piece of work was Shakespeare!
But was he a genius? Was Bach, Da Vinci, or Einstein?
What do we mean by use of the word genius?

In this, the first of four posts, the intention is to again push off from the comfort and normalcy of home port. This time to explore the most slippery of eels! Our journey begins in a deep contextual fog, of historical, theistic, and social themes; tricky navigation to be sure! Once out of the ‘fog’, our objective, or bias, will be to explore the cultural islands Intelligence, Creativity, and Mental illness, and to interrelate them via cellular biology generally, and astroglial function specifically.

The subject matter is seemingly impossible; a vast ocean of reported (known), unreported (unknown), and living (tenacious and malleable) ideas. A rather less than more reliable volume! Aware of the bias – of our direction – mediated intentionally by sails, rudder, and navigator, and as much again, and more, without intent, by swells and gales and monsters unknown…

Qui vive?

Deep cultural time
The word genius is derived from the Latin gigno, cognate with the Ancient Greek γίγνομαι (gígnomai, “to come into being, to be born, to take place”). Oddly, the word genius is also recognized as a Romanized version of the Arabic jinni or djinni (“hidden from sight”), from which is derived majnūn (“one whose intellect is hidden”, “mad”), and which relates also to the pre-Islamic jinnaye (“good and rewarding gods”) – a sentiment similar to that apprehended by pagan animism.

The jinn or djinn are Arabian mythological creatures who inhabit an unseen world beyond the known universe. Apparently, in the Quran the jinn are composed of a smokeless, scorching fire, while in the Torah (the Christian “Old Testament”) they appear as seraphim and cherubim (“burning/fiery ones”). The jinn, humans, and angels are said to make up the three sapient creations of God. Like humans, the jinn have free will note A, and so may have a good, evil, or neutral disposition. The mischievous or evil spirits were called shaytan jinn, and are also described as demons. Interestingly, phenomena that in ancient times were thought to result from possession by a shaytan are in modernity defined as psychosis, schizotypy, or outright schizophrenia – all as stigmatizing now as they were a millennium ago, during the intellectual darkness, indeed exorcism, of the Middle Ages.

Closely related in form and function to the jinn, seraphim, and cherubim are the Karabu, Shedu, and Lamassu of Assyria, Babylon, and Phoenicia, respectively. All are depicted as sapient hybrid animals, variously described as:
– the likeness of four living creatures;
– each with four faces (the face of a man, the face of a lion on the right side, the face of an ox on the left side, and the face of an eagle) and four wings, with straight feet with a sole like the sole of a calf’s foot, and hands of a man under their wings;
– six-winged beings that fly around the Throne of God crying “holy, holy, holy”;
– a lion or bull with eagles’ wings and a human face;
– having a king’s head, a bull’s body, and an eagle’s wings;
– human-headed winged lion (Sphinx);
– eagle-headed winged lion (Griffin).

[Image: Ezekiel’s vision – the labeling of this image is incorrect; according to the literature, the two objects labeled “Angels” are cherubim]
[Image: Another representation of cherubim flying with the throne of God]
[Image: A representation of a seraph]
[Image: A pair of lamassu]
[Image: Shedu]
[Image: Sphinx]

Interestingly, all of these fantastical creatures are described, almost exclusively, as powerful protective deities. For example, cherubim first appear in the Christian Bible, in the Garden of Eden, as guards of the way to the Tree of Life – possibly also the Tree of Knowledge.

Classical and Enlightened souls
Animism represents a unity of spirit and matter. Thus, in pagan antiquity, souls or spirits were not restricted to the human condition, but were also intrinsic to other animals, as well as plants, rocks, mountains, rivers, even thunder, wind, shadows, the sun, the moon and planets, etc… In his treatise on the nature of living things, “De Anima”, Aristotle pointed to the soul as the form and essence of a living thing. Thus the soul was assumed not to be a substance distinct from the body, and a body without a soul was unintelligible.

For love of empire, all good Romans worshiped the genius of Rome – the divine power protecting the Roman empire, Rome itself, and its heroic leaders and emperors. Likewise, the genius of a family, or house, was a protector and guide, an inspiring supernatural spirit which took care of house and family, and was the object of worship.1

Early in the 17th century, believing that a divine spirit had revealed new philosophies to him, Descartes suggested that the mind, spirit, or soul is composed of a nonphysical substance, which he identified with consciousness and self-awareness. Unlike Aristotle, Descartes distinguished these spiritual aspects from the physical (material) brain. Hence the dualism in modern philosophy of mind, and the resulting mind-body problem. Substance dualism is famously defended by Descartes’ “Je pense, donc je suis”, arguing that the mental substance can exist outside of the body and that the body cannot think. This philosophical stance, a belief held almost ubiquitously in modern culture, has allowed for the conception of artificial intelligence, hypothetically assumed to come into existence as a product of machine-mediated computation of abstract (nonphysical) algorithms.

Late in the 18th century, expanding upon a theme related to substance dualism, Immanuel Kant argued that experience is structured by the mind. He published “Critique of Pure Reason” in response to the philosophical rift that had developed between empiricists – who believe that knowledge is fundamentally rooted in sensory experience – and rationalists – who believe that knowledge is fundamentally rooted in reasoning. In attempting to resolve the issue, Kant leaned toward the latter: a priori knowledge and reasoning. He suggested that the mind comprises necessary structures whose function is to internalize physical sensations, which are comprehended via synthesis with reasoned structures. It is this understanding of mental primacy in comprehension and conceptualization that gives rise to what he called Anschauung, which we may call intuition, imagination, visualization, or insight.

Arthur Miller (2000) has identified this concept in his exploration of the role of insight in the sciences. In particular, Miller focuses upon Kantian Anschauung and Anschaulichkeit (“visualizability”) in physics, proposing that “Anschaulichkeit refers to properties of an object, which exist whether or not we look at it or make measurements on it”, and that “anschaulichkeit is immediately given to the perceptions or what is readily graspable in the anschauung.” Furthermore, Anschauung is raised up from Anschaulichkeit, and “visual imagery” (Anschaulichkeit) is inferior to visualization (Anschauung).2 It is possibly this Kantian epistemic quagmire that has frightened so many quantum physicists into a strictly calculating corner, from which one often hears comments such as “Don’t look for meaning, just do the math and you’ll get the right answer.”

The Prussian general and military theorist Carl Philipp Gottfried von Clausewitz emphasized the importance, in war, of immeasurable “moral forces” (i.e. all influences on events not material in nature: the morale and experience of the troops, or the skill of the general, for example, as opposed to the number of troops, the quality and quantity of arms, etc…). He posited that the immeasurability of moral factors created a difficult dilemma: “theoretical calculations would either have to be inaccurate (excluding moral forces) or impossible to carry through rationally […], since they included indeterminate quantities.”

This rational ignorance is precisely the strategy of modernity, most clearly visible in business, in economics, and in the sciences. Generally, we calculate based upon measurable phenomena, assuming that the immeasurable (irrational) phenomena will magically balance, rendering no net influence. Simplified, abstracted, and linear, rational theories attempt to describe non-linear and complex real-world phenomena. The analysis of models is useful, but immersive indoctrination in them tends toward a belief that the model (theory) is reality.
[Image: Carl von Clausewitz]
“The very nature of genius is to rise above rules. However, any theory proposing rules not good enough for genius – rules which genius can disregard – would conflict with reality, for it would set theory in conflict with genius, and the successful actions of geniuses are part of the reality which theory ought to help us understand and explain.” note B
– Carl Philipp Gottfried von Clausewitz (ca. 1820)

However, “one never rises above the rules, if the rules are correct”. note C
The decisions to which a person is led by genius will be entirely consistent with correct rules (theory), but theories which exclude genius (i.e. the moral, irrational, immeasurable) from the rule are reprehensible. Von Clausewitz attempted a definition of genius, positing it to be the application of rules in situations where key data are not apparent. Expanding upon this, one might say genius is recognized as action taken in accordance with an intuitive conceptualization (that is, an unknown or hidden, and thus irrational, mental model), based upon fragmentary evidence. Here neither intuition nor genius is taken as synonymous with ‘rising above the rules’, but rather with the extraction of meaning from current patterns, based upon experience, yet without necessarily being able to define those patterns.

Here, Herr von Clausewitz suggests subconscious computation – a form of signal processing and information ‘chunking’ that occurs in our minds without our direct awareness. He alludes to the same concept again, saying: “The rules of war, like those of grammar, can be derived inductively. This may provide the ability to reach the right conclusions without the need to learn the rules abstractly or apply them analytically.” Indeed, he used the word “subrational” to describe a significant component of genius, one allowing for a rapid recognition of truth that the mind would normally miss. The majority of genius he ascribed to stubbornness, “tremendous determination”, an overcoming of fear and social friction.3

In concord with Clausewitzian rule-breaking, and with Henri Poincaré’s “special aesthetic sensibility”, Miller writes “just as in art, discoveries in science are made by breaking the rules”, and suggests that “in network thinking, concepts from apparently disparate disciplines are combined by proper choice of mental image or metaphor to catalyze the nascent moment of creativity. This necessarily nonlinear thought process can occur unconsciously, and not necessarily in real time.” Miller also reminds us of Poincaré’s description of scientific creativity as “the process in which the human mind seems to borrow least from the exterior world, in which it acts, or appears to act, only by itself and on itself.”

An example of a similar phenomenon may be made of learning that is not by rote (i.e. learning not through mechanical memorization, not by hearing and repeating aloud, but with full attention to comprehension and thought for the meaning). Poincaré and Einstein both expressed difficulty in memorizing material that had no clear patterns, or that was not inducible from first principles (a priori reasoning).

Learning by rote seems to be for those who cannot see the truth.

Hereditary genius
Francis Galton, cousin of Charles Darwin, published the bestselling Hereditary Genius in 1869. Robert Nisbet (1976) has defined Galton’s concept as “a special intellectual and spiritual power that is inherent in a given person’s nature and that transmits itself to succeeding generations through the germ plasm – until or unless, that is, this genealogy becomes corrupted through interbreeding with inferior physical and mental types.”

The meaning here (i.e. a eugenic caste) fascinates me as much as it frightens me note D, because I hold an inexorable faith in free will, and because I closely relate the reality of the human condition to biology generally, and particularly to the sociobiology of certain insects and microbes.
[Image: The Thinker]
“Capacity for intense and sustained concentration of the mind is also one of the qualities seen oftener in the great than in other people.”
– Nisbet (1976)

Nisbet’s interpretation is one of temporal, rather than hierarchical, displacement. He quotes Goethe:
“If a talent is to develop quickly and joyously, it is essential that there be in circulation throughout the scene an abundance of productive genius and of sound culture…. We admire the tragedies of the ancient Greeks, but upon proper examination we should admire the period more than the individual author. [No doubt, there is] the occasional exception, the mind of great creative force. […] This rare individual through reading, fantasy, and sheer imagination creates his own milieu. […] Great ages in the history of culture are made by their great component individuals, but the reverse is also true, that in large degree great individuals are made by great ages and by all the intellectual circuits.”

“The intellectual and moral milieu created by multitudes of self-centered, cultivated personalities was necessary for the evolution of that spirit of intelligence… that formed the motive power of the Renaissance. […] Ages of genius have truth, beauty, and goodness emblazoned on them, not modernism, post-modernism, and futurism.”

Nisbet describes the concept of milieu as a fusion of consciousness with environment.4 More specifically, the meaning of milieu is interpreted as that part of the larger environment which is simultaneously participated in, shaped by, and swept into the individual’s consciousness. He continues, saying “Every individual above the [intellectual] level of moron is from time to time excited emotionally and intellectually by the people and things around him. It is a fair statement that the highly talented are the most excited in this way, and whether it is a poem or a scientific theory, what we witness is the capacity to internalize a social experience and to make the product socially available. […] Galton did not err in his linking of geniuses by family and genealogy; where he went wrong was in limiting family to physical genealogy rather than seeing it as […] the whole social order – social, cultural, moral, and intellectual entities, as well as a continuity of germ plasm. Heredity, yes, but that word is also properly used when prefaced by the word social.”

Rather than affluence, what matters is “a closeness of the generations”: an intellectual and moral intimacy between parent and child, a form of apprenticeship, an “assimilation of the many psychological and social insights, understandings, skills and techniques”. These, Nisbet posited, are of vital importance in the formation of genius.

“What is true of individuals is also true for peoples. By common assent the three most talented peoples of the past two and a half millenniums have been the Chinese, the Greeks, and the Jews. […] In all three, the family extended itself into all aspects of the individual mind, becoming the nursery of education, moral precept, citizenship, piety, and craft skill.”

There are darker aspects of the familial milieu also. Nisbet lists greed, fratricide, incest, and other evils, and, interestingly, argues that “murder is the price to be paid, along with incest, blood feud, and other linked evils, for the uniquely intimate atmosphere of family, and it is, on the evidence of history, a price that should be paid. Better a society in which these specific evils will always exist as the consequence of the family tie than one in which, in order to abolish the evils, the family itself is abolished.”

“One can somehow live with the evils, but civilization could hardly exist without the nurturing ground of its geniuses.”

Bibliography and Notes
note A) Angels are not reported to have been endowed with free will, making them seem rather machinelike. For fear of persecution, and because I feel the issue has little of significance to offer our current exploration, I shall refrain from commenting on this curious finding.

note B) A similar argument can be made for authority – it is unnecessary to challenge authority if the authority is correct.

note C) The meaning of this passage is strikingly similar to that of Gödel’s incompleteness theorems, authored a century later, in 1931.

note D) A good argument might be made for the existence of social castes within our current world population; a topic not explored here.

1) R. Rushdoony, “The Idea of Genius”, (1972), Chalcedon Report, Vol. 78, http://chalcedon.edu/research/articles/the-idea-of-genius/

2) A. Miller, “Insights of Genius”, (2000), MIT Press.

3) C. Rogers, “Clausewitz, Genius and the Rules”, (2002), The Journal of Military History, Vol. 66, pp. 1167–1176, http://www.jstor.org/stable/3093268

4) R. Nisbet, “Genius”, (1976), The Wilson Quarterly, Vol. 6, pp. 98–107, Woodrow Wilson International Center for Scholars, http://www.jstor.org/stable/40256393