Sociophysics – The last science

‘Truth’ is context-dependent
In the context of my studies to date, and in particular my newfound understanding of the common good, I have experienced a surprising insight. I can best describe the occurrence as a spontaneous emergence, in my mind, of a conception of sociophysical phenomena. I was not yet aware of sociophysics and thought that I had coined the term to help define a path of study. Simply, I wanted a word to help me focus more closely upon the physical phenomena that emerge from social interactions. Searching the literature of sociophysics, I was initially surprised to find a sparse population of recent mathematical probabilistic treatments and models, stemming from quantum physics early in the twentieth century, game theory in the mid-twentieth century, and analyses of computer modeling of adaptive networks early in our twenty-first century. My search soon led me near to the origin of the sociophysical concept – an absolute origin escapes me, though sociophysics seems closely tied to Aristotelian animism. Nevertheless, I now realize that sociophysics has presented itself in a variety of apparitions to many a kindred spirit. If it is a science, then it is the strangest, vaguest, and widest of them – indeed, it has been called “the science that comes after all the others” – and fascinatingly, the men who have studied it knowingly were, and for the greatest part still are, outcast by orthodoxy. I certainly am no stranger to their ranks; perhaps that is part of the reason why I feel a sense of familiarity and belonging among the concepts exposed in the current exploration of ‘the last science’.

Previously, I have argued that abstract modeling (theorizing) simplifies reality, allowing only fractionated (quantized), and thus unreal, understandings. Historically, fractionation (specialization; division of labor) has been the cost of good quality knowledge. In The Common Good: Part I, I introduced Robert Rosen, a theoretical biologist who suggested that studies of biology would bring new knowledge to physics, and would change our understanding of science in a broad manner. The study and modeling of complex systems appears to drive in this direction; by my intuitive reckoning, increasingly complex modeling (interaction of theories) approaches ever closer to a good quality representation of reality, and thus a truer understanding of reality. It is for this reason that I have chosen to focus the current exploration upon the histories [NOTE A] of understanding and modeling of social interaction, which shall lead us to an integrated understanding of the current state of the art.

Two classes
Abstract: The abstract form of sociophysics is fundamentally dependent upon human knowledge, which has been composed of necessarily subjective experiences (observations) of an assumed objective reality. It is a science stemming from and attempting to formalize intuitive understandings of social phenomena, by use of mathematical tools developed and used in statistical physics.

Real: We must assume that in reality the physical phenomena that emerge from social interaction are independent of human knowledge; that they occur regardless of observation. Sociophysical phenomena are synergistic (non-additive effects resulting from individual acts) manifestations of the dynamic, physical interaction, consequence and feedback, occurring among networked actors. Examples of phenomena that emerge from social interaction include: ant and termite colonies, bacterial colonies, cities, brains, genetic networks, mycelial networks, glial networks, multicellular organisms, ecosystems, physical and abstracted knowledge, road systems, postal systems, the world wide web (internet).

A true false start: true within context of the me-generation; false within a deeper historical context
Galam (2004) tells us that during the late 1970s statistical physics was gripped by the theory of phase transitions.(1) In 1982, despite the scandal of a university faculty’s retraction of researchers’ academic freedom due to political fears of institutional disrepute, S. Galam et al. managed to publish a set of assumed “founding papers” on Sociophysics.(2) In reference to the first in the set, Galam himself comments that “in addition to modeling the process of strike in big companies using an Ising ferromagnetic model in an external reversing uniform field, the paper contains a call to the creation of Sociophysics. It is a manifesto about its goals, its limits and its danger. As such, it is the founding paper of Sociophysics although it is not the first contribution per se to it.” During the following decade, Galam published a series of papers on Sociophysics, to which he received no feedback. He tells of other physicists “turning exotic” during the mid-nineties, developing the closely related Econophysics, the purpose of which was to analyze financial data. Econophysics quickly gave rise to the so-called “quants” of Wall Street – young physicists employed by investment bankers to develop algorithms for the trading of complex derivatives, the abuse of which, by the pathological social milieu of the international finance trade, was responsible for the global economic crisis of 2008. Fully fifteen years after his initial publications and the assumed inception of the science of Sociophysics, Galam claimed some gratification in the recognition that a “few additional physicists at last started to join along it”. I deeply sympathize with his statement: “I was very happy to realize I was not crazy, or at least not the only one.” Nevertheless, Galam was and remains incorrect in regard to his position in the history of sociophysics; a history that began centuries before the me-generation.

Reading Galam’s personal testimony, I felt a crystallization of my intuition that the institutionalized position of a career academic scientist makes for a very poor springboard from which to develop novel ideas and concepts, even if, as in Galam’s case, the ideology is not actually novel. Indeed, I myself have felt, and seen in colleagues, active restraint from pursuing interesting, albeit unorthodox, ideas while bound by the rites of the ivory tower. Shameful though this situation is, it certainly is not a modern problem.

Halley, Quetelet and Comte
In his review of the sociophysics literature, Stauffer (2012) reports that the idea of applying knowledge of physical phenomena to studies of social behavior reaches at least two millennia into the past, naming a Sicilian, Empedokles, as the first to suggest that people behave like fluids: some people mix easily, like water and wine, while others, like water and oil, do not.(3) Vague and philosophical, this conception is one I hesitate to categorize as sociophysics, though admittedly it does attempt, at least metaphorically, to fuse social and physical phenomena. Rather more accurate examples of sociophysics were Halley’s calculations of celestial mechanics and annuity rates, Quetelet’s Physique Sociale, and Comte’s Sociophysics. Let us now step through these chronologically.

Edmond Halley

In 1682 Edmond Halley computed an elliptical orbit for an object visible to the naked eye; a conglomerate of rock and ice, now known as Halley’s comet. He reasoned that it was the same comet as the one reported 75 years earlier, in 1607.(4) He communicated his opinion and calculations to Sir Isaac Newton, who disagreed on account of both the geometry of the object’s orbit and its recurrence. Nevertheless confident of his theory, Halley predicted that the object would reappear after his death, in 1759; he was proven correct by the comet’s timely visit. Since then, the orbital path followed by Halley’s comet has been confirmed as elliptical, passing beyond Neptune before returning to the vicinity of Earth and Sun with an average periodicity of 75 to 76 years, with variational extremes of 74 and 79 years due to the gravitational perturbations of the giants Jupiter and Saturn.

Astronomy, massive bodies and gravitation are relevant to our exploration of sociophysics for three reasons, to be expounded later. For the time being, it is important to point out a fact about Halley that is much less recognized, though perhaps more obviously relevant to our current exploration.

In 1693 Halley constructed a mortality table from individual dates of birth and death; data collected in the city of Breslau. Based upon this tabulation Halley went on to calculate annuity rates for three individuals. In his application of probability theory to social reality – now known as the actuarial profession – it seems Halley had been preceded, in 1671, by a Dutchman, Johan de Witt. Still, to his credit, Halley was the first to calculate annuity rates correctly, upon correct probabilistic principles.
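Halley’s actuarial logic translates directly into a few lines of modern code. The sketch below is illustrative only, using an invented miniature life table rather than the Breslau data: the fair price of a life annuity paying one unit per year is the sum of the buyer’s discounted survival probabilities.

```python
# Illustrative sketch of Halley's annuity reasoning. The life table below
# is invented for demonstration; it is NOT the Breslau data.

# survivors[t] = number of people (out of the initial cohort) still alive
# t years from now; index 0 is the annuitant's current age.
survivors = [1000, 950, 890, 820, 740, 650, 550, 440, 320, 190, 0]

def annuity_price(survivors, rate=0.06, payment=1.0):
    """Fair price of a yearly life annuity: the sum, over future years,
    of (probability of surviving to that year) x (discounted payment)."""
    alive_now = survivors[0]
    price = 0.0
    for t in range(1, len(survivors)):
        p_survive = survivors[t] / alive_now            # chance of collecting in year t
        price += payment * p_survive / (1 + rate) ** t  # discount at the going rate
    return price

price = annuity_price(survivors)
```

With no discounting (rate of zero) the price is simply the expected number of payments, so raising the interest rate always lowers the fair price.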

Adolphe Quetelet

Adolphe Quetelet was a Belgian astronomer and teacher of mathematics, a student of meteorology and of probability theory; the latter led to his study of social statistics in 1830. Stigler (1986) tells us that astronomers had used the ‘law of error’ derived from probability theory to gain more accurate measurements of physical phenomena.(5) Quetelet argued that probabilistic theory could be applied to human beings, rendering the average physical and intellectual features of a population by sampling “the facts of life”. A graphical plot of sampled quantities renders a normal distribution, the Gaussian bell-shaped curve; hence the “average man” is determined at the normal position. In theory, individual characteristics may then be gauged against an average, “normal character”. Quetelet also suggested the identification of patterns common to both normal and abnormal behaviors. Thus Quetelet’s “social mechanics” assumed a mapping of human physical and moral characteristics, allowing him to argue that probability influences the course of human affairs: the human capacity for free-will – or at least the capacity to act upon free-will – is reduced, while social determinism is increased. Quetelet believed that statistical quantities of measured physical and mental characteristics were not just abstract ideas, but real properties representative of a particular people, a nation or ‘race’. In 1835, he published A Treatise on Man, and the Development of His Faculties, and so bequeathed to the culture of nineteenth-century Europe a worldview of racial differences, of an “average man” for each subspecies of Homo sapiens, and hence scientific justification (logical soundness) for slavery and apartheid. Furthermore, Quetelet’s “average man” was presented as an ideal type, with deviations from the norm identified as errors.
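Quetelet’s procedure is simple to reproduce in miniature. The sketch below uses invented measurements, not Quetelet’s data: pool a sample, estimate the “average man” at the center of the bell curve, and gauge an individual’s deviation from the norm.

```python
# Minimal sketch of Quetelet-style reasoning, with invented sample values.
import statistics

heights_cm = [162, 168, 171, 165, 174, 169, 170, 166, 173, 167]

mean = statistics.mean(heights_cm)   # the "average man" for this trait
sd = statistics.stdev(heights_cm)    # spread of "errors" about the norm

def z_score(x):
    """Standard deviations by which an individual deviates from the average."""
    return (x - mean) / sd
```

An individual is then judged against the norm by their z-score; in Quetelet’s ideal-type reading, a large deviation was itself an “error”.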

Auguste Comte

Between 1830 and 1842 Auguste Comte formulated his Course of Positive Philosophy (CPP). From within our modern ‘global’ cultural milieu it is difficult to appreciate how widely accepted (‘globalized’) the ideology of positive philosophy was two hundred years ago, during the height of Eurocentric colonial culture, as positivism has received virtually no notice since the re-organizational events [NOTE B] imposed upon the politico-economic and cultural affairs of Europe after the Russian revolution and first world war.(6) The eclipse of positivistic ideology began with neo-positivism in philosophy of science, which led to post-positivism. Strangely, it appears that the two later schools (neo- and post-positivism) have forgotten both positive philosophy itself and the man who initiated and defended it, and who even coined the term positivism [NOTE C]. However, Bourdeau (2014) tells us that Comtean studies have seen “a strong revival” in the past decade, with modern philosophers of science and sociologists agreeing upon ideologies propagated over 170 years ago. Points which were well established in positivism, but subsequently forgotten, have re-emerged in the modern philosophical milieu.

Re-emergent ‘truths’:
i) Scientific justification (logical soundness) is context-dependent.
ii) Science has a social dimension; science is necessarily a social activity with vertical (inter-generational) as well as horizontal (intra-generational) connections and thus also epistemic influences. Simply, science is a human activity, and humans are social animals.
iii) Positive philosophy is not philosophy of science, but philosophy of social interaction; Aristotelian political philosophy. Also, positivism does not separate philosophy of science from political philosophy.
iv) Cooperative wholeness; unity of acts and thoughts; unity of genes and memes; unity of dynamism and state.

“Being deeply aware of what man and animals have in common, Comte […] saw cooperation between men as continuous with phenomena of which biology gives us further examples.”
– Bourdeau (2014)

Comte made the purpose of CPP clear: “Now that the human mind has grasped celestial and terrestrial physics, – mechanical and chemical; organic physics, both vegetable and animal, – there remains one science, to fill up the series of sciences of observation, – Social physics. This is what men have now most need of: and this it is the principal aim of the current work to establish”.(7) He continued, saying that “it would be absurd to pretend to offer this new science at once in a complete state. [Nevertheless, Sociophysics will possess the same characteristic of positivity exposed in all other sciences.] This once done, the philosophical system of the moderns will in fact be complete, as there will be no phenomenon which does not naturally enter into some one of the five great categories. All our fundamental conceptions having become homogeneous, the Positive state will be fully established. It can never again change its character, though it will be forever in course of development by additions of new knowledge.”

In 1832 Comte was named tutor of analysis and mechanics at École Polytechnique. However, during the following decade he experienced two unsuccessful candidacies for professorship; he began to see ties severed between himself and the academic establishment after releasing a preface to CPP. In 1843 he published Elementary Treatise on Analytic Geometry, then in 1844 Discourse on the Positive Spirit, as a preface to Philosophical Treatise on Popular Astronomy (also 1844). By this time he was at odds with the academic establishment – essentially, Comte had dropped out of university. The reason does not seem to have been a lack of curiosity, nor of capacity, imagination, vision, or even simple effort. Indeed, the situation resonates strongly with Einstein’s early academic situation, with Dirac’s late academic situation, and with Binet’s life-long academic situation. Galam’s experiences during the early 1980s echo the same unfortunate, if not pathological, phenomenon of academic institutions – interesting, curious, broad-reaching minds are generally met with hostile opposition from a fearful and mediocre orthodoxy.

Comte’s second great work – often referred to in the literature as Comte’s second career – was written between 1851 and 1854. Regarded by Comte himself as his seminal work, it was titled First System of Positive Polity (FSPP). Its goal was a politico-economic reorganization of society, in accordance with scientific methods (techniques for investigating phenomena based upon gathering observable, empirical and measurable evidence, subject to inductive and deductive logical reasoning and argument), with the purpose of increasing the wellbeing of humankind – i.e. adaptation of political life based upon political episteme, with the purpose of increasing the common good. This is precisely the Aristotelian argument (see The Common Good: Part I, under the heading Politikos). Though the sciences (epistemes) collectively played a central role in FSPP, positivism is not just science. Rather, with FSPP Comte placed the whole of positive philosophy under the ‘continuous dominance of the heart’, with the motto ‘Love as principle, order as basis, progress as end’. Bourdeau (2014) assures us that this emphasis “was in fact well motivated and […] characteristic of the very dynamics of Comte’s thought”, though it seems as much anathema to the current worldview as it did to Comte’s contemporaries, who “judged severely” – admirers of CPP turned against Comte, and publicly accused him of insanity.

Much like Nikola Tesla, Comte is reported to have composed, argued, and archived for periods of decades, periodically ‘observing the function of’ his systematic works, all in his mind. His death, in 1857, came too early for him to draft works that he had announced 35 years prior:
Treatise of Universal Education – intended for publishing in 1858;
System of Positive Industry, or Treatise on the Total Action of Humanity on the Planet – planned for 1861;
Treatise of First Philosophy – planned for 1867.

Polyhistornauts predicted
“Early academics did not create regular divisions of intellectual labour. Rather, each student cultivated an holistic understanding of the sciences. As knowledge accrued however, science bifurcated, and students devoted themselves to a single branch of the tree of human knowledge. As a result of these divisions of labor – the focused concentration of whole minds upon a single department – science has made prodigious advances in modernity, and the perfection of this division is one of the most important characteristics of Positive philosophy. However, while admitting the merits of specialization, we cannot be blind to the eminent disadvantages which emerge from the limitation of minds to particular study”.(7)

In surprising harmony with my own thoughts and words, Comte opined “it is inevitable that each [specialist] should be possessed with exclusive notions, and be therefore incapable of the general superiority of ancient students, who actually owed that general superiority to the inferiority of their knowledge. We must consider whether the evil [of specialization] can be avoided without losing the good of the modern arrangement; for the evil is becoming urgent. […] The divisions which we establish between the sciences are, though not arbitrary, essentially artificial. The subject of our researches is one: we divide it for our convenience, in order to deal the more easily with its difficulties. But it sometimes happens – and especially with the most important doctrines of each science – that we need what we cannot obtain under the present isolation of the sciences, – a combination of several special points of view; and for want of this, very important problems wait for their solution much longer than they otherwise need to”.(7)

Comte thus proposed “a new class of students, whose business it shall be to take the respective sciences as they are, determine the spirit of each, ascertain their relations and mutual connection, and reduce their respective principles to the smallest number of general principles.”

While reading this passage I was struck by the obvious similarity of its meaning to my own situation. I remain dumbfounded and humbled by the scale of foresight, so lucidly expressed by this great mind. For Comte had not simply suggested multi-disciplinary study, but a viewing through, and faithful acceptance of, the general meanings rendered by the various scientific disciplines, together allowing for an intuitive, ‘heartfelt’ condensation of human knowledge.

Five fundamental sciences:
1) Mathematics
2) Astronomy
3) Physics
4) Chemistry
5) Biology

Sociology, then, is the sixth and final science. Each of these may be seen as a node in the network of human knowledge. Sociology, according to Comte, is the body of knowledge which will eventually allow for the networking of all human epistemes into a great unified field of human ideas.

Generalization: uneasy unification
Generalizing the laws of “active forces” (energy) and of statistical mechanics, Comte suggested that the same principle of interaction holds for celestial bodies and for molecules. Specifically, the center of gravity of either a planet or a molecule is focused upon a geometrical point, and though massive bodies may interact with each other dynamically, thus affecting each other’s relative positions and velocities, the center of gravity of each is conserved as a point-state.

“Newton showed that the mutual action of the bodies of any system, whether of attraction, impulsion, or any other nature, – regard being had to the constant equality between action and reaction, – cannot in any way affect the state of the center of gravity; so that if there were no accelerating forces besides, and if the exterior forces of the system were reduced to instantaneous forces, the center of gravity would remain immovable, or would move uniformly in a right line. D’Alembert generalized this property, and exhibited it in such a form that every case in which the motion of the center of gravity has to be considered may be treated as that of a single molecule. It is seldom that we form an idea of the entire theoretical generality of such great results as those of rational Mechanics. We think of them as relating to inorganic bodies, or as otherwise circumscribed, but we cannot too carefully remember that they apply to all phenomena whatever; and in virtue of this universality alone are they the basis of all real science.”
– It should not escape the reader’s attention that in this passage Comte has effectively, albeit figuratively, plotted a graph of dynamically interacting point-states. The interactivity and cooperativity of massive bodies within a solar system or chemical reactants within a flask, both represent physically complex systems of dynamic social interaction – i.e. both are sociophysical systems. Implicit in this epistemological condensation is the fact that sociophysical systems are not necessarily alive, or biotic, or even organic.
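Newton’s result, as Comte generalizes it, is easy to verify numerically. The hypothetical two-body sketch below (toy units and a spring-like mutual force chosen purely for illustration) shows that an internal action-reaction pair leaves the velocity of the system’s center of gravity unchanged at every step.

```python
# Two bodies on a line exert equal and opposite forces on one another.
# Internal forces cannot alter the motion of the center of gravity:
# its velocity stays constant (up to floating-point rounding).

def simulate(steps=1000, dt=0.001):
    m1, m2 = 1.0, 3.0            # masses (toy units)
    x1, x2 = 0.0, 1.0            # positions on a line
    v1, v2 = 0.5, -0.1           # initial velocities
    com_velocities = []
    for _ in range(steps):
        f = 10.0 * (x2 - x1)     # spring-like mutual attraction, acting on body 1
        v1 += (f / m1) * dt      # action on body 1 ...
        v2 += (-f / m2) * dt     # ... equal and opposite reaction on body 2
        x1 += v1 * dt
        x2 += v2 * dt
        com_velocities.append((m1 * v1 + m2 * v2) / (m1 + m2))
    return com_velocities

vels = simulate()
```

The two bodies oscillate about one another, yet every recorded center-of-gravity velocity equals the initial value, exactly as the passage asserts.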

After the completion of FSPP and his complete break with orthodox academia, Comte is said to have “overcome modern prejudices”, allowing him to “unhesitatingly rank art above science”.(6) Like Comte, I take the Aristotelian view that the arts are combinations of knowledge and skill; habitus and praxis; theory and method. Thus in a very real and practical sense the sciences are arts, from which it logically follows that Art ranks above Science. A rather more difficult pill to swallow has been Comte’s Religion of Humanity, which he founded in 1849. Like Bourdeau (2014), I believe “this aspect of Comte’s thought deserves better than the discredit into which it has fallen”. My personal stance is due specifically to a previous uncomfortable encounter with an article on the topic of common goods, which was published by The Journal of Religious Ethics under the auspices of the United Nations(8). I had hesitated to include the paper and its contents in my previous work, due simply to fear – a fear of reprimand by my peers, and a personal fear of straying from the “scientifically correct and peer reviewed path of learning”. As will become obvious, I have since realized that exclusion of study materials on the basis of fear alone is unreasonable, and that I should, and shall, attempt a rather more inclusive, better rounded education; critical thinking and good quality arguments remain of utmost importance.

“Reforms of society must be made in a determined order: one has to change ideas, then morals, and only then institutions.”
– Comte (ca. 1840)

The Religion of Humanity was defined with neither God(s) nor supernatural forces – as a “state of complete harmony peculiar to human life […] when all the parts of Life are ordered in their natural relations to each other […] a consensus, analogous to what health is for the body”. Personally, I understand this concept as the Tao, and more recently as deep ecology; inclusive of humanity but not exclusive to it. For Comte however, worship, doctrine and moral fortitude were oriented solely toward humanity, which he believed “must be loved, known, and served”.

Three components associated with the positivist religion:
i) Worship – acts; praxis; methods.
ii) Doctrine – knowledge; habitus; theories.
iii) Discipline (moral fortitude) – self-imposed boundaries, simultaneously conforming to, affirming, and defining the system of belief.

Two existential functions of the positivist religion:
i) Moral function – via which religion governs an individual.
ii) Political function – via which religion unites a population.

Ghetto magnetism
In this section we begin to explore the modern science of macro-scale physical phenomena, which result from micro-scale social interactions. The reader may find it useful to refer to the appended glossary of terms [NOTE D].

During the birthing period of quantum mechanical theory, “the concept of a microscopic magnetic model consisting of elementary [atomic] magnetic moments, which are only able to take two positions “up” and “down” was created by Wilhelm Lenz”.(9) Lenz proposed that spontaneous magnetization in a ferromagnetic solid may be explained by interactions between the potential energies of neighboring atoms. Between 1922 and 1924, Ernst Ising, a student of Lenz, studied the Lenz model of ferromagnetism, as a one-dimensional chain of magnetic moments; each atom’s field interacting with its closest neighbors. Ising’s name seems to have become attached to the Lenz model by accident, in a 1936 publication, titled On Ising’s Model of Ferromagnetism.

Ernst Ising

Three energetic components of the Ising model:
i) Interaction between neighboring magnetic moments (atomic spins).
ii) Entropic forcing (temperature).
iii) Action of an externally applied magnetic field, affecting all individual spins.

Social interaction between neighboring atoms induces parallel alignment of their magnetic momenta, resulting in a more favorable energetic situation (lower energy) when neighbors are self-similar; both +1, or both −1. Conversely, a less favorable, higher-energy situation results from opposing momenta (+1 next to −1).(10)

Example of the Ising model on a two-dimensional (10 x 10) lattice. Each arrow represents a spin, which represents a magnetic moment that points either up (-1, black) or down (+1, red). The model is initially configured as a ‘random’ distribution of spin vectors.

The same initial ‘random’ distribution of magnetic moments, showing ‘unfavorable’ alignments (circled in green).

Clusters of spins begin to form (positive clusters circled in green, negative clusters circled in yellow) as a result of neighbor interaction, temperature, and the action of an externally applied magnetic field. As a result of entropy-reducing vector flipping, new ‘unfavorable’ spin alignments arise (circled in light blue), which will also tend to flip polarity.
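The lattice dynamics illustrated above can be reproduced with a minimal Metropolis-style Monte Carlo sketch of the two-dimensional Ising model. The parameter values (coupling J, external field H, temperature T, lattice size) are illustrative assumptions of mine, not taken from the cited works.

```python
# Minimal Metropolis simulation of the 2D Ising model with periodic boundaries.
# Energy: E = -J * sum(s_i * s_j over neighbor pairs) - H * sum(s_i).
import math
import random

def ising_sweep(spins, L, J=1.0, H=0.0, T=1.5):
    """One Monte Carlo sweep: attempt L*L single-spin flips."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        s = spins[i][j]
        # Sum over the four nearest neighbors (periodic boundaries).
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * s * (J * nb + H)          # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -s               # accept: 'unfavorable' spins tend to flip

random.seed(1)
L = 10
spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

# Run 200 sweeps; below the critical temperature, clusters grow and order sets in.
for _ in range(200):
    ising_sweep(spins, L)
magnetization = abs(sum(map(sum, spins))) / (L * L)
```

Raising T above the critical temperature destroys the ordering, which is precisely the phase-transition behavior discussed below.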

In sociology, the term tipping point(11) refers to a rapid and dramatic change in group behavior – i.e. the adoption by the general population of a behavior that was rare prior to the change. The term originated in physics, where it refers to the addition of a small weight to a balanced object (a system previously in equilibrium), causing the object to topple or break, thus effecting a large-scale change in the object’s stable state (a change of the system’s equilibrium); a change of stable state is also known as a phase transition.

The relation between cause and effect is usually abrupt in complex systems. A small change in the neighborhood of a subsystem can trigger a large-scale, or even global reaction. “The [network] topology itself may reorganize when it is not compatible with the state of the nodes”
– Juan Carlos González Avella (2010)

In relation to social phenomena, Morton Grodzins is credited with having first used the term tipping point during his 1957 study of racial integration in American neighborhoods.(11) Grodzins learned that the immigration of “black” households into a previously “white” neighborhood was generally tolerated by inhabitants as long as the ratio of black to white households remained low. If the ratio continued to rise, a critical point was reached, resulting in the en masse emigration of the remaining white households, due to their perception that “one too many” black households populated the neighborhood. Grodzins dubbed this critical point the tipping point; sociologist Mark Granovetter labeled the same phenomenon the threshold model of collective behavior.

Between 1969 and 1972, economist Thomas Schelling published articles on the topic of racial dynamics, specifically segregation. Expanding upon the work of Grodzins, Schelling suggested the emergence of “a general theory of tipping”. It is said that Schelling used coins on a graph-paper lattice to demonstrate his theory, placing ‘pennies’ (copper-alloy one-cent pieces, representing African-American households) and ‘dimes’ (nickel-alloy ten-cent pieces, representing Caucasian households) in a random distribution, while leaving some free places on the lattice. He then moved the pieces one by one, based upon whether or not an individual ‘household’ was in a “happy situation” – i.e. whether enough of its eight nearest (Moore) neighbors were self-similar.(12) One at a time, discontented ‘households’ were moved to neighborhoods of self-similar pieces, over time rendering a complete segregation of households, even with low valuation of individual neighbor preferences. In 1978 Schelling published a book titled Micromotives and Macrobehavior, in which he showed how small differences in individual motives can build a self-sustaining momentum of segregation. In 2005, aged 84, Schelling was awarded a share of the Nobel prize in economics, for analyses of game theory leading to increased understanding of conflict and cooperation.(13)

Thomas Schelling

“People get separated along many lines and in many ways. There is segregation by sex, age, income, language, religion, color, taste, accidents of historical location. Some segregation results from the practices of organizations; some is deliberately organized; and some results from the interplay of individual choices that discriminate. Some of it results from specialized communication systems, like different languages. And some segregation is a corollary of other modes of segregation: residence is correlated with job location and transport”.(14)
– Schelling (1971)

Under the heading Linear Distribution, in Schelling’s 1971 publication on the subject of social segregation, we find a direct analog to the original one-dimensional Lenz-Ising model. Schelling seems either to have appropriated the concept, citing neither Lenz nor Ising, or to have designed the model independently. His involvement in American foreign policy, national security, nuclear strategy, and arms control(13) certainly would have granted Schelling access to knowledge of theoretical works, including the so-called Monte Carlo methods, undertaken at Los Alamos during and after the second world war.(15) However, for the purpose of our current exploration it is irrelevant how exactly Schelling arrived at his understanding; indeed, as I have mentioned previously, sociophysics has emerged in a variety of apparitions, to studious individuals with widely differing perspectives.

“The line of stars and zeros […] corresponds to the odd and even digits in a column of random numbers. […] We interpret these stars and zeros to be people spread out in a line, each concerned about whether his neighbors are stars or zeros. […] Suppose, now, that everybody wants at least half his neighbors to be like himself, and that everyone defines ‘his neighborhood’ to include the four nearest neighbors on either side of him. […] I have put a dot over each individual whose neighborhood does not meet his demands. […] A dissatisfied member moves to the nearest point at which half his neighbors will be like himself at the time he arrives there. […] Two things happen as they move. Some who were content will become discontent, because like members move out of their neighborhoods or opposite members move in. And some who were discontent become content, as opposite neighbors move away or like neighbors move close. The rule will be that any originally discontented member who is content when his turn comes will not move after all, and anyone who becomes discontent in the process will have his turn after the 26 original discontents have had their innings.”
– Schelling (1971)
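The linear model Schelling describes above can be sketched in a few lines of code. The following is a minimal, simplified sketch only: it moves one dissatisfied agent at a time to the nearest satisfying slot, and does not reproduce Schelling’s exact turn-taking (“innings”) rule; the population size and iteration cap are arbitrary choices of mine.

```python
import random

def neighbors(line, i, k=4):
    """Up to k neighbors on each side of position i (fewer at the ends)."""
    return line[max(0, i - k):i] + line[i + 1:i + 1 + k]

def satisfied(line, i, k=4):
    """An agent wants at least half of its neighbors to share its type."""
    nbrs = neighbors(line, i, k)
    like = sum(1 for n in nbrs if n == line[i])
    return like * 2 >= len(nbrs)

def step(line, k=4):
    """Move the first dissatisfied agent to the nearest satisfying slot.
    Returns True if a move was made, False if no agent can (or need) move."""
    for i, agent in enumerate(line):
        if not satisfied(line, i, k):
            rest = line[:i] + line[i + 1:]          # line with the agent removed
            # candidate insertion points, ordered by distance from i
            for j in sorted(range(len(rest) + 1), key=lambda j: abs(j - i)):
                trial = rest[:j] + [agent] + rest[j:]
                if satisfied(trial, j, k):
                    line[:] = trial
                    return True
    return False

random.seed(1)
line = [random.choice("*0") for _ in range(40)]   # stars and zeros, as in Schelling
moves = 0
while step(line) and moves < 500:                 # cap guards against cycling
    moves += 1
# After settling, like agents sit in contiguous runs (segregated clusters).
runs = sum(1 for a, b in zip(line, line[1:]) if a != b) + 1
print("".join(line), "runs:", runs)
```

Even with each agent tolerating a half-unlike neighborhood, the line typically settles into a small number of long homogeneous runs, which is precisely Schelling’s point: mild individual preferences aggregate into strong segregation.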

Under the heading Area Distribution, Schelling (1971) introduces a two-dimensional (13 x 16) lattice, commenting that “patterning – departure from randomness – will prove to be characteristic of integration, as well as of segregation, if the integration results from choice and not chance.” Clearly, Schelling’s model of social segregation bears a great similarity to Ising’s ferromagnetic model.

Stauffer (2012) reminds us that the formation of urban ghettos is a well-known phenomenon, and suggests that New York’s Harlem is the most famous black district,(3) with a history stretching back well over a hundred years. From 1658, Harlem was a Dutch settlement (or ghetto) named after the capital of North Holland. African-Americans began to arrive during the ‘great migration’, from about 1905, when former slaves from the rural southern United States migrated to mid-western, north-eastern and western regions of the US. Harlem became identified as a ‘black’ district of the borough of Manhattan during the early 1920s.

Indirectly, Stauffer poses an interesting question: Why do we spontaneously self-organize into groups of self-similar individuals? Or, in the specific case of “ghetto formation”: Why do we like to live in communities of like-minded, ethnically and culturally similar individuals? The simplest and clearest answer is surely that we are social animals, and that it is easier to socialize with self-similar individuals than with strangers. Stemming from this, however, is the truly fascinating question: If it is true that we like to live in communities of self-similar individuals, then why do we not like to do so when forced? As an example of the latter, Stauffer reminds us of the 1943 uprising of the Warsaw Ghetto, which did not self-assemble but was formed under the command of Nazi Germany. Again, the simplest and clearest answer must be that we are social animals, though I can think of no good reason in support of this example other than revolutionary pressure due to innate principles of self-regulation and self-organization. Regardless, it would be nice to assume that precisely this kind of ambiguity, apparently intrinsic to sociology, has been at the root of the epistemological rift between physics and sociology, as the result of physics’ long-standing ideological tradition of determinism. In reality, a deeper and rather more vexing explanation haunts us; it has become obvious that the ambiguity of social interaction is not restricted to messy life systems, but governs inorganic physical phenomena also.

Statistical physics, deeply entwined with quantum theory, has put a definitive end to physical determinism. The renormalization group technique, ushered in during the mid-1970s, seems to have been an attempt to conserve physical determinism, at least tentatively. However, renormalization is a theoretical hack – an attempt to abstractly force fundamentally complex, infinite, random, and thus fundamentally indeterminate phenomena to appear as if they were simple, precisely calculable, determinable facts. Physically, experimentally, reality is not clear. In fact, reality is fundamentally uncertain, and so remains non-understood; mysterious. Stauffer confirms the validity of Comte’s thoughts, suggesting that “cooperation of physicists with sociologists could have pushed research progress by many years”.

State of the Art
“The concept of Complex Systems has evolved from Chaos, Statistical Physics and other disciplines, and it has become a new paradigm for the search of mechanisms and an unified interpretation of the processes of emergence of structures, organization and functionality in a variety of natural and artificial phenomena in different contexts. The study of Complex Systems has become a problem of enormous common interest for scientists and professionals from various fields, including the Social Sciences, leading to an intense process of interdisciplinary and unusual collaborations that extend and overlap the frontiers of traditional Science. The use of concepts and techniques emerging from the study of Complex Systems and Statistical Physics has proven capable of contributing to the understanding of problems beyond the traditional boundaries of Physics.”
– Juan Carlos González Avella (2010)

In an interdisciplinary review of the literature defining adaptive co-evolutionary networks (AcENs), Gross & Blasius (2007) have listed five dynamical phenomena common to AcENs:
i) emergence of classes of nodes from an initially heterogeneous population
ii) spontaneous division of labor – in my opinion the same as (i)
iii) robust self-organization
iv) formation of complex topologies
v) complex system-level dynamics (complex mutual dynamics in state and topology)

We are to understand that the mechanisms giving rise to these emergent phenomena themselves emerge from the dynamical interplay between state and topology. Divisions of labor, for example, spontaneously emerge (self-organize) as a result of information feedback within an AcEN.(16) This fact bolsters an argument that I have made previously, for a strong similarity between the epiphenomena of bacteria, gregarious insects and humans, in their respective cultures. Also supported by studies of AcENs is my hitherto intuitive understanding that a diverse set of actors is fundamental to the production of common goods. In fact, it is now clear that cultural diversity is so fundamental to the dynamics of social phenomena that divisions of labor necessarily and spontaneously emerge from an initially homogeneous population, due to random variations of nodal state (entropic forcing), degree and homophily.
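The interplay between state and topology can be made concrete with a minimal “adaptive voter” sketch, in the spirit of the mechanisms Gross & Blasius review: at a disagreeing link, either the topology adapts (an agent rewires to a like-minded partner) or the state adapts (an agent adopts its neighbor’s opinion). All parameter values below are arbitrary assumptions of mine, chosen only for illustration.

```python
import random

def adaptive_voter(n=60, m=150, p=0.6, steps=4000, seed=2):
    """Minimal adaptive voter model: node states (opinions) and the
    topology (edge set) co-evolve via a rewire-or-adopt rule."""
    rng = random.Random(seed)
    opinion = [rng.randint(0, 1) for _ in range(n)]
    edges = set()
    while len(edges) < m:                        # random initial topology
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    for _ in range(steps):
        discordant = [(a, b) for a, b in edges if opinion[a] != opinion[b]]
        if not discordant:                       # frozen: consensus clusters
            break
        a, b = rng.choice(discordant)
        if rng.random() < p:                     # adapt the TOPOLOGY: rewire
            like = [c for c in range(n)
                    if opinion[c] == opinion[a] and c != a
                    and (min(a, c), max(a, c)) not in edges]
            if like:
                edges.remove((a, b))
                c = rng.choice(like)
                edges.add((min(a, c), max(a, c)))
        else:                                    # adapt the STATE: adopt
            opinion[a] = opinion[b]
    return opinion, edges

opinion, edges = adaptive_voter()
discordant = sum(opinion[a] != opinion[b] for a, b in edges)
print("remaining discordant edges:", discordant)
```

When the rewiring probability p is high, the network tends to fragment into internally homogeneous components – classes of nodes emerging from an initially undifferentiated population, much as in phenomena (i) and (ii) of the list above.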

Gross & Blasius (2007) have reported that self-organization is observed in Boolean and in biological networks, occurring within a narrow region of transition between an area of chaotic dynamics and an area of stationary dynamics. Metaphorically, one might say that between the vast and chaotic field of the unknown and the relatively large steady state of knowledge lies a narrow field – a phase space of self-organizing possibility – i.e. intuition. Not at all surprisingly, life systems, like all complex adaptive systems, necessarily occupy this theoretically defined phase space. Further, Gross & Blasius speak of the “ubiquity of adaptive networks across disciplines”, specifying technical distribution networks such as power grids, postal networks and the internet; biological distribution networks such as the vascular systems of animals, plants and fungi; neural and genetic information networks; immune system networks; social networks such as opinion propagation/formation, social media and market-based socio-economics; and ecological networks (food webs) – and of course biological evolution offers an historical depth of literature on the subject of AcENs. The authors mention that examples are also reported from chemistry and physics, but do not provide them. Based upon our current exploration it seems fair to suggest at least the following: astronomical gravitational networks, molecular chemical reactant networks, geological networks (the interactive cycling of carbon, water, nitrogen, minerals, etc.), and of course quantum mechanical networks.
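The transition between stationary and chaotic dynamics in Boolean networks can be probed with a toy Kauffman-style random Boolean network. The sketch below measures “damage spreading”: the Hamming distance, after t steps, between a trajectory and a copy with one flipped bit. In the ordered regime (k = 1 inputs per node) damage tends to stay small; in the chaotic regime (k ≥ 3) it tends to spread, with the narrow transition region near k = 2 – though any single random network may deviate. Network size, time horizon and seeds are my own arbitrary choices.

```python
import random

def rbn(n, k, rng):
    """Random Boolean (Kauffman NK) network: each node has k random
    inputs and a random Boolean function (truth table of size 2**k)."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n))
    return step

def damage(k, n=50, t=30, seed=3):
    """Hamming distance after t steps between a trajectory and a copy
    that starts with one flipped bit."""
    rng = random.Random(seed)
    step = rbn(n, k, rng)
    a = tuple(rng.randint(0, 1) for _ in range(n))
    b = (1 - a[0],) + a[1:]                      # single-bit perturbation
    for _ in range(t):
        a, b = step(a), step(b)
    return sum(x != y for x, y in zip(a, b))

print({k: damage(k) for k in (1, 2, 3)})
```

The qualitative lesson matches the passage above: self-organizing, life-like behavior is found not deep in either regime, but in the narrow band between frozen order and chaos.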

For me personally, the most difficult to fathom of these examples has been the astronomical gravitational network. However, I am now able to imagine the gravitational interaction of massive bodies at their various scales – planets, moons and comets within a solar system; solar systems within a galaxy; galaxies within local groups; local groups within clusters, etc – as nodes, with gravitation comprising the set of edges (connections) between massive bodies.

Network geometry is obvious in models of Universal mass distribution.

Tabulated nomenclature of static and dynamic elements, for a selection of epistemes.

Episteme                            Static element    Dynamic element
Metaphysics                         actor             action
Graph theory                        node              edge
Complex systems theory              vertex            link
Quantum theory                      particle          wave
Electrodynamics                     field             vector
Economic theory                     agent             behavior
Astrophysics                        massive body      gravitation
Molecular biology (central dogma)   DNA               transcription
Molecular biology (central dogma)   mRNA              translation
Chemistry                           reactant          reaction
Biology                             organism          survival
Evolutionary theory                 species           adaptation

According to J. Avella (2010), the modeling of network dynamics has revealed a complex relationship between actor heterogeneity and the emergence of diverse cultural groups.(19) Network structure and cultural traits co-evolve, rendering qualitatively distinct network regions or phases. Put in more familiar terms: patterns of social interaction and processes of social influence change, or differ, in tandem, and network patterns and processes feed back upon each other. Thus social interactions exist as a dynamic flux in which distinct channels of interactivity form, sever, and re-form. From the collective interaction of agents emerge temporary, sequential non-equilibria, known as network states. The formation of network states is controlled by early-forming actors, whereas the later formation and continued rapid reformation of cultural domains comprises the geometry – or ‘architecture’ – of a mature network; a network whose dynamics have reached a dynamic steady state.

Furthermore, the ordered state of a finite system under the action of small perturbations is not a fixed, homogeneous configuration, but rather a dynamic and diversified, chaotic steady state. Over the long term, such a system sequentially “visits” a series of monocultural configurations; one might imagine a systemic analogue of serial monogamy. Slow-forming monocultures emerge under stable environmental conditions (low entropic forcing). Under less stable environmental conditions (high entropic forcing), monocultural domains undergo fragmentation and are replaced by a variety of rapidly forming and re-forming cultural domains, rendering a dynamic steady state. In complex systems, the relation between cause and effect is usually abrupt. Indeed, “the [network] topology itself may reorganize when it is not compatible with the state of the nodes.”

Avella tells of a study by Y. Shibanai et al., published in 2001, analysing the effects of global mass media upon social networks. Shibanai et al. treated global mass media messages as an external field of influence – analogous to the external magnetic field in the Ising model – with which network actors (individual and/or groups of nodes in a network) interact. The external field was interpreted “as a kind of global information feedback acting on the system”. Two mechanisms by which global media interact with society were identified:
i) The influential power of the global media message field is equal to that of real (local) neighbors.
ii) Neighbourly influence is filtered by feedback of global information, taking effect only if and/or when an individual network node is aligned with a global media message.
Shibanai et al. concluded that “global information feedback facilitates the maintenance of cultural diversity” – i.e. the propagation of messages promoting a state of global order and cultural unity simultaneously enables and maintains a dynamic steady state of global disorder and multiculturalism.
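The Ising analogy invoked above can be sketched directly. The following is not Shibanai et al.’s actual (Axelrod-style) model, only a hedged illustration of the underlying physics: a one-dimensional Ising chain in which every spin feels both its neighbors and a uniform external field h, playing the role of the global media message. Temperature, field strength and chain length are arbitrary assumptions of mine.

```python
import math, random

def ising_1d(n=200, beta=1.0, h=0.5, sweeps=200, seed=5):
    """1-D Ising chain with an external field h, under Metropolis dynamics.
    The field acts on every node in addition to neighbor influence,
    as the global media field does in Shibanai et al.'s interpretation."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        nb = s[(i - 1) % n] + s[(i + 1) % n]     # sum of the two neighbors
        dE = 2 * s[i] * (nb + h)                 # energy cost of flipping spin i
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]
    return sum(s) / n                            # magnetization, in [-1, 1]

print(ising_1d(h=0.5), ising_1d(h=-0.5))
```

In this simplest caricature the field does enhance uniform order, which is exactly why Shibanai et al.’s finding – that global feedback can maintain diversity – is so counter-intuitive; their result depends on the filtering mechanism (ii), which this sketch deliberately omits.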

Generally, considerations of equilibrium assume that the application of a field enhances order in a system. However, this is not always the case. To the contrary, Avella (2010) tells us that “an ordered state different from the one imposed by the external field is possible, when long-range interactions are considered” and fascinatingly, that “a spatially nonuniform field of interaction may actually produce less disorder in the [social] system than a uniform field.”

“While trends toward globalization provide more means of contact between more people, these same venues for interaction also demonstrate the strong tendency of people to self-organize into culturally defined groups, which can ultimately help to preserve overall diversity.”
– J. Avella (2010)

Respectfully, I urge the reader to allow themselves a few moments of meditation upon this rather subversive finding.

A dynamic steady state exists in a network until some process of social influence, such as an external environmental perturbation or an internal social perturbation, exceeds a threshold (tipping point), whereupon the current network steady state is eroded and the ongoing network dynamics reform, rendering a new dynamic steady state. Put another way: above some threshold, a given perturbation causes an abrupt change in social interactions, leading to a new (though ultimately temporary) dynamic steady state. Co-evolution implies that the processes of social influence change as the result of multilateral feedback mechanisms between social interactions, environmental forcing, and/or the eccentric actions of some individual or group.

Three distinct phases of complex (adaptive, co-evolutionary) networks:
Phase I) A large component of the network remains connected and co-evolutionary dynamics lead to a dominant monocultural state.
Phase II) Fragmentation of the monocultural state begins, as various cultural groups form in the dynamic network. However, these smaller groups remain stable in the presence of ongoing stochastic shocks; peripheral actors are either absorbed into a social group or are forced out. “Social niches are not produced through competition or selection pressure but through the mechanisms of homophily and influence in a co-evolutionary process.[…] Thus, even in the absence of selection pressures, a population can self-organize into stable social niches that define its diverse cultural possibilities.”
Phase III) Fragmentation of cultural domains leads to high levels of heterogeneity. Avella (2010) teaches that the very high levels of heterogeneity observed in network models are “empirically unrealistic in most cases; however, they warn of a danger that comes with increasing options for social and cultural differentiation, particularly when the population is small or there is modest cultural complexity. Unlike cultural drift, which causes cultural groups to disappear through growing cultural consensus, a sudden flood of cultural options can also cause cultural groups to disappear; but instead of being due to too few options limiting diversity, it is due to excessive cultural options creating the emergence of highly idiosyncratic individuals who cannot form group identifications or long-term social ties.”

Confirming what we have learned from Ising and Schelling, Avella tells us that “[actors] have a preference for interacting with others who share similar traits and practices”, and this fact “naturally diversifies the population into emergent social clusters.” However, we have also learned that a highly idiosyncratic actor, who is either unrecognized or even disconnected from a local area network, may still play an influential role upon the greater network (society). Thus, highly idiosyncratic individuals, devoid of group identifications and/or long-term social ties, rather than posing a danger, may be potentially highly relevant to social processes, if only in the sense that collective idiosyncrasy exists as a reservoir of unused or even unknown options and opportunities – a pool of potential, perhaps similar to that of genomic mutants; a diverse set of resources from which may emerge novel solutions to challenges and previously un-encountered situations.

Indeed, precisely this scenario appears to have been the case at the emergence of life on Earth (see: LUCA and the progenotes, in Part II: Empirical observations and meta-analyses, of The Common Good), during which the progenote population represented a collective, albeit semi-disconnected, network of highly idiosyncratic individuals with no strong group identification or long-term social ties. As we also learned in Empirical observations and meta-analyses, a local area network catastrophe is catastrophic only for a highly adapted (specialized) monoculture, and may be problematic for small ‘satellite’ cultural groups that are to a lesser extent adapted to the current network topology. However, highly idiosyncratic, even disenfranchised actors in the current dynamic network steady state may experience homophilic pressure, and thus social connectivity, in the dynamic steady state which emerges from a phase transition of the network topology.

Avella (2010) has confirmed that cultural heterogeneity (multicultural dynamics, and even outright anarchy) is a deep aspect of reality. Anarchy and chaos appear to be near the source, or indeed to be the source, of physical and social order. That is to say, a variety of ordered states spontaneously emerge from anarchical, chaotic systems. “Social diversity can be maintained even in highly connected environments” – i.e. even under intense pressure to conform, diversification, and hence diversity, emerge and persist.

Vinkovic & Kirman (2006) remind us that the purpose of the Schelling model is “to study the collective behavior of a large number of particles”,(16) and that the model illustrates the emergence of aggregate phenomena that are not predictable from the behaviors of individual actors. In economic theory, individual agents make decisions based upon a “utility function” (personal preference), an idea that can be interpreted in physical terms: particle interactions are driven by changes of internal energy. A direct analogy is made between the interactions of life systems (humans, insects, fungi, plants, bacteria, etc.) and physical systems (gases, liquids, solids, colloids, solutions, etc.) by treating particles as agents. “In the Schelling model utility depends on the number of like and unlike neighbors. In the particle analogue the internal energy depends on the local concentration […] of like or unlike particles. This analogue is a typical model description of microphysical interactions in dynamical physical systems […]. Interactions between particles are governed by potential energies, which result in inter-particle forces driving particles’ dynamics.”

It is understood, then, that from the collective behaviour of individual agents emerge clusters of self-similar agents. Fascinatingly, Vinkovic & Kirman report finding that aggregates of empty space play a “role” in the dynamics of agent clustering, stressing the importance of the number of empty spaces in the initial, random configuration of an experimental lattice. Specifically, “an increase in the volume of empty space results in more irregular cluster shapes and slower evolution because empty space behaves like a boundary layer”. Clearly, in their analytical study, the authors assume that aggregates of empty space express a “behavior”, thus implying that “empty space” has some capacity to act; specifically, stabilizing nearby clusters by preventing them from coming into direct contact with each other. Simply, we are to acknowledge the collective agency of aggregations of agentless locations on the lattice; the collective action of actorless, “free” space.

Plots of an agent-based (Schelling) model.
The two-dimensional experimental lattice is composed of (100 x 100) = 10000 cells. Each cell is either empty (white) or occupied by one agent (red or blue). Numbers of empty cells in the initial random configurations are shown.
Increased cluster size correlates with decreased value of x. Increased sizes of empty space clusters are shown (circled in green) for both initial configurations.
– graph adapted from Vinkovic & Kirman (2006)

I have managed to find only a tiny scattering of scientific works attributing some significance to empty space. One example is from the statistical analysis of graphical data plots: Forina et al. (2003) have introduced an empty space index, the purpose of which is to quantify the fraction of information space on a given graph that does not hold any “experimental objects”.(17) However, the authors are careful to point out that the empty space index must not be confused with a clustering index. Another, perhaps more commonly known, example stems from astronomy: voids.

Like Serge Galam, Stephen Wolfram is a self-proclaimed hobbyist explorer of sociophysics. In his philosophical treatment of space-time,(18) Wolfram (2015) suggests that “maybe in some sense everything in the universe is just made of space.” Wolfram speaks of what I choose to call aether (see: A Spot of Bother and Aether), saying:
“As it happens, nearly 100 years [before Special Relativity, people] still thought that space was filled with a fluid-like ether. (Ironically enough, in modern times we’re back to thinking of space as filled with a background Higgs field, vacuum fluctuations in quantum fields, and so on.)”

It must be stressed that the epistemic condensation of sociology and physics may be ascribed to any of the periodic elements; to the sub-atomic scale as well as the astronomic scale; to mathematical and theoretical, albeit complex, models of reality; and of course to life systems.

Through empirically observable phenomena, we have glimpsed some aspect of reality that is more fundamental than the phenomena we have observed.

Critically, this cannot be science, for the absolute boundary of the scientific method, and thus of science itself, is empiricism (sensory observation and manipulation). Anything that we think we see and do beyond, or through, what we actually observe and affect is not science. We are left with only one logical possibility: that our newfound knowledge of reality is metaphysical. Ultimately we must categorize it as Art.

A) There are at least three separate histories of sociophysics; one stemming from philosophy, one from quantum physics, and one from sociology.
B) In the vocabulary of complex systems modeling and co-evolutionary adaptive networks theory one may rightly define such reorganizational events as a change of topological dynamics.
C) As well as positivism, Comte coined the words sociology and altruism.(6)
D) Glossary of terms relevant to network models:
Node: The node is the principal unit of a network. A network consists of a number of nodes connected by links. Depending on context, nodes are sometimes also called vertices, agents, actors, or attractors.
Link: A link is a connection between two nodes in a network. Depending on context, links are also called edges, connections, actions or interactions.
Degree: The degree of a node is the number of nodes to which it is connected; i.e. degree = links/node. The mean degree of the network is the mean of the individual degrees of all nodes in the network.
Neighbors: Two nodes are said to be neighbors if they are connected by a link.
Dynamics: Depending on context, dynamics refers to a temporal change of either the state or the topology of a network.
Evolution: Depending on context, evolution refers to a temporal change of either the state or the topology of a network.
Frozen node: A node is said to be frozen if its state does not change in the long-term behavior of the network. In certain systems the state of frozen nodes can change nevertheless on an even longer topological time scale.
Topology: Refers to a specific pattern of connections between the nodes in a network.
State: Depending on context, state refers to either the state of a networked node or the state of the network as a whole – including the nodes and the topology.
Small-world: Refers to a network state in which distant, indirectly connected, nodes are linked via a short average path length.
Scale-free: Refers to a network state in which the distribution of node degrees follows a power law.
Homophily: Refers to spontaneous attraction between self-similar nodes; literally ‘love of the same’.

1) S. Galam, “Sociophysics: a personal testimony”, (2004), Laboratoire des Milieux Désordonnés et Hétérogènes, arXiv.
2) S. Galam, Y. Gefen and Y. Shapir, “Sociophysics: A mean behavior model for the process of strike”, (1982), Journal of Mathematical Sociology, 9, p. 1-13.
3) D. Stauffer, “A Biased Review of Sociophysics”, (2012), Institute for Theoretical Physics, Cologne University, arXiv.
5) S. Stigler, “Adolphe Quetelet (1796-1874)”, (1986), Encyclopedia of Statistical Sciences, John Wiley & Sons.
6) M. Bourdeau, “Auguste Comte”, (2014), Stanford Encyclopedia of Philosophy.
7) H. Martineau, “The Positive Philosophy of Auguste Comte”, (1896), Batoche Books (2000).
8) J. O’Connor, “Making a Case for the Common Good in a Global Economy: The United Nations Human Development Reports [1990-2001]”, (2002), The Journal of Religious Ethics, Vol. 30, No. 1, p. 155-173.
9) S. Kobe, “Ernst Ising 1900-1998”, (2000), Technische Universität Dresden, Institut für Theoretische Physik.
10) J. Selinger, “Ising Model for Ferromagnetism”, Chapter 2 of Introduction to the Theory of Soft Matter: From Ideal Gases to Liquid Crystals, (2016).
11) “Tipping point”.
12) D. Vinkovic and A. Kirman, “A physical analogue of the Schelling model”, (2006), Proceedings of the National Academy of Sciences.
13) “Thomas Schelling”.
14) T. Schelling, “Dynamic Models of Segregation”, (1971), Journal of Mathematical Sociology, Vol. 1, p. 143-186.
15) “Monte Carlo method”.
16) T. Gross & B. Blasius, “Adaptive coevolutionary networks: a review”, (2007), Journal of The Royal Society Interface.
17) M. Forina, S. Lanteri, C. Casolino, “Cluster analysis: Significance, empty space, clustering tendency, non-uniformity. II – Empty space index”, (2003).
18) S. Wolfram, “What Is Spacetime, Really?”, (2015).
19) J. Avella, “Coevolution and local versus global interactions in collective dynamics of opinion formation, cultural dissemination and social learning”, (2010), Institute of Interdisciplinary Physics and Complex Systems.

The Church of Reason

Too Much Credit?
In modern developed cultures, the words “beyond human knowledge” conjure all manner of nasty connotations and categorizations, such as “aberrant”, “pointless”, “why bother”, “just do the math and you’ll get the right answer”, “not normal”, “unscientific”, etc…

Perhaps we give science a little too much credit? Please do not misunderstand, I am not suggesting that science is not a valuable tool, only that people (including scientists) have a tendency to believe, and thus tend to hold science in good faith. Big Bang theory is a fine example of this phenomenon. Nobody can honestly say that they understand it, but for the most part it fits reasonably well with the knowledge we have acquired, via observation and measurement, about the physical universe. In some cases it fits extremely well. Where it did not fit well, we tailored it so that it does. So now, we modern rational folk can look on admiringly at our intelligent accomplishment. It’s not perfect, but it is really very good! It must be good; after all, it is high science.

Our knowledge of physical reality, of psychology, of mind, even of life, contains huge gaps and has very fuzzy boundaries. Indeed, what we think we know about these subjects is riddled with inconsistencies of logic, ambiguities, assumptions, outright errors (known and unknown), and vast regions of philosophical difficulty. Ironically, the fact that we can identify very fuzzy boundaries may be a sign that we are close to seeing the TRUTH. Or it may not be; there is simply no way to tell.

A clear and fundamental example of what we think we know is the boundary separating objective facts from subjective experiences. Another example, though much better hidden under the auspices of science, is the standardized value of the electron, which is defined theoretically by renormalization.(1) Briefly, as I understand it, renormalization involves truncating the infinite but real electromagnetic cloud of high energies associated with an electron at extremely short distances, so that a finite, albeit arbitrary, value may be used in calculations. This renormalized value comprises a consolidation of measured mass and charge values for the electron. In effect, an infinite nuisance is cleverly hidden from view.
The Feynman diagram on the left shows an electron-photon interaction. On the right, the same interaction comprises more complicated interactions, including an infinite loop.
– image by Matt McIrvin
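The cutoff idea described above can be illustrated with a toy calculation. This is my own simplification, not the actual QED procedure; the low-energy scale \mu and the cutoff \Lambda are illustrative symbols only.

```latex
% A logarithmically divergent momentum integral, of the kind that
% appears in electron self-energy loops:
\[
\int_{\mu}^{\infty} \frac{dk}{k} \;\to\; \infty
\qquad\text{but}\qquad
\int_{\mu}^{\Lambda} \frac{dk}{k} \;=\; \ln\frac{\Lambda}{\mu} ,
\]
% Truncating the "infinite cloud" at the cutoff \Lambda gives a finite
% answer, but one that depends on the arbitrary scale \Lambda;
% renormalization absorbs that dependence into the measured (renormalized)
% mass and charge of the electron.
```

The result is finite, but only because the infinity has been hidden inside an arbitrarily chosen scale, which is precisely the sleight of hand objected to in this section.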

Very few people bother to question the validity of renormalization in physics, or of Big Bang theory,(2) though the former was meant only as a temporary solution to keep infinite values from popping up and making a mess of the mathematics, and the latter began as a joke.(3)
Fred Hoyle coined the term big bang during a BBC radio interview in 1949, and he meant it sarcastically: [to assume that the universe had a beginning is pseudoscientific, resembling arguments for a creator] “it’s an irrational process, and can’t be described in scientific terms”.(4)

A quarter of a century later, in 1975, Paul Dirac, whom you may remember from an earlier episode, criticized renormalization, saying: “Most physicists are very satisfied with the situation. They say: ‘Quantum electrodynamics is a good theory and we do not have to worry about it any more.’ I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!”.(5)
– Oh! I just really like this guy! Too bad he is dead.

Thinking critically about the intractable conditions at the boundary of human knowledge has historically been politically problematic, sometimes simply unacceptable, and on occasion deadly. By the 1970s, good science had become almost entirely lost to the politics of research funding, the marketing of science,(6) and various problems of the peer review process.(7)

Both Big Bang theory and renormalization are examples of the many unquestioned, or forgotten, assumptions upon which scientific knowledge of physical reality is based. Consequently, our knowledge does not conform with physical reality, but with a model of reality. Currently that is the Standard Model of particle physics(8).

What is Science?
If you are a physicist, or a scientist in any other field, you may buck and snort in contempt at what I am about to say; I urge you to think critically.
Big bang theory is not a scientific theory, nor is it a scientific hypothesis. There is a very simple reason for this: scientific theories, and the hypotheses which build them, must be empirically testable. The same is true for string theory(9) in all its various flavors; none are empirically testable. These examples represent extrapolations of scientific knowledge, but are in no way themselves scientific. The indoctrinated may attempt a defense by saying something like: modern science studies many phenomena that are not obvious or visible. Flatly, this is incorrect. If the subject of study is not measurable (within a reasonable margin of error), then it cannot be studied scientifically. Clearly then, big bang and string “theories” are pseudoscientific.

We may correctly categorize them as philosophy and as creation myths. They are plausible stories told by the priests of the high order of the Church of Reason(10), “creation myths speak to deeply meaningful questions held by the society that shares them, revealing of their central worldview and the framework for the self-identity of the culture and individual in a universal context”(11). The authority of the high priests of the Church of Reason is respected and their stories are believed, generally. But critically, no one, not even the high priests themselves, holds any physically verifiable knowledge about the stories.
The products of the Church of Reason (good or bad) are theories; tentative explanations which may never be interpreted as truths. Presumably, truth exists only in the domain of God, whatever that is(12). Down here we are left to guess, measure and argue.

Value judgments, ranging from negativism, pessimism, and skepticism, to devotion, optimism, and positivism, are forms of bias. Bias does not play a role in the execution of scientific methodology, though people do tend to drag it along. It is precisely bias that leads to the negative connotations and categorizations (skepticism) of criticisms directed at the stories produced by the Church of Reason, and God forbid criticism of the Church itself!

Two good reasons why criticism may appear to be skepticism
1) Objectively, the environment (world) is generally antagonistic, if not outright hostile, to the set of intricate networks of complex, highly specific electro-chemical interactions which we call life. The ultimate antagonist is entropy. In order to reduce local entropy, and so allow for an environment that is more conducive to directed self-organization, life invariably exists in association with one or more physical boundaries, each composed of a variety of selective, semi-permeable materials. In the parlance of Star Trek, and also in fact, life may be defined as: a low-entropy improbability bubble in space-time.
– life is fundamentally a doubtful prospect.

2) Subjectively, more people are less critical about more things. It would seem that we generally use our brains less now than we did pre-WWII, and specifically since about the mid-1950s, when lifestyle and consumerism became central themes in our popular culture. We now consume agents of instant gratification rather than think about what it means to do so, and what consequences may emerge as a result. As a contemporary meme, analysis is nearly as ugly as criticism, the former having a scientific flavor, while the latter seems more connected to the arts. Unfortunately, both analysis and criticism are commonly mistaken for skepticism, or pessimism, or even hostility.
– has doubtfulness become impolite? – politically incorrect?

An Aside
When we hear about development and progress, precisely what is it that is supposed to be developing and progressing? More importantly, what meanings do we hang onto the words development and progress? Equally importantly, what is wrong with the state of affairs as they are now, that urges us to strive for development and progress?

A very interesting and telling example is the progress and development of medicines for incurable diseases. Recently, I was granted the privilege of hearing presentations given at a joint congress of medical and microbiological associations.

Several talks were given regarding novel treatments for various cancers. One of these was particularly interesting and quite elegant. Briefly, the self-destruct mechanism in cancer cells has failed, so an artificial molecule is engineered to specifically target and tag the cancer cells, which are then visible to, and attacked by, killer cells of the host’s immune system. Apparently there were good experimental results with mice.

One single talk was given regarding antibiotics, and it described a development in informatics. Upon enquiry, I learned that the markets for novel antibiotics are not large enough for large institutions and corporations to risk the necessary cost of investment. Some say that we are in transition to a “post-antibiotic era”(13).

The distribution of resources and research funding between these two fields is vastly out of proportion, and it defies common sense. Why does our system of development and progress find great new ways to treat cancer in mice, while ignoring our near-future antibiotic crisis?