The Common Good: a semi-rational emergent property of complex collective interaction between diverse actors – Part I

The common good invariably requires diversification, manifest as random fluctuations within the biological phase space, from which emerge divisions of labour and thus, necessarily, inequalities among the individuals comprising a social collective. Entropic forcing drives increases in the common good, via increased diversity, to an apparent limit.

Explorations are made of philosophical (Part I) and empirical (Part II) studies in politics, biology, and economics.

Cooperation via collective divisions of labour is a prerequisite to biological metabolism and reproduction. A collective comprising diverse actors is thus assumed fundamental to the planetary biome. The preponderance of benefit (here designated ‘the common good’) that emerges for actors (individuals and groups) is mediated by Woesean collective cooperation, defined as “a diverse community of cells(note A) surviving and evolving as a biological unit.”(1)

“Diversity is an asset with which to confront uncertainty.”
– Groschl, 2013

Part I: Philosophical observations, models and theoretical analyses

Politikos: definition and mediation of the common good
Commenting on Aristotle’s political theory, F. Miller (2011) tells that “the modern word ‘political’ derives from the [Ancient Greek πολιτικός] ‎politikós, ‘of, or pertaining to the polis’ [polis translates as ‘city-state’, or city]. City-states like Athens and Sparta were relatively small and cohesive units, in which political, religious, and cultural concerns were intertwined. The extent of their similarity to modern nation-states is controversial.”(2)

As a point of interest, Amish culture, described in a previous post titled The Worldly and The Amish, represents a modern, relatively small and cohesive population unit, in which political, religious, and cultural concerns are intertwined. Presumably, the world’s remaining populations of ‘primitive’ peoples (nations) would also fit this description, so Miller’s controversy appears to exist principally between modern globalized (‘worldly’) culture and what one might loosely term ‘old school cultures’, or perhaps the ‘old world order’.

Edward Jenks’ well-informed comment, describing a founding and central aspect of political states, seems much less controversial: “[Evidently,] all political communities of the modern type owe their existence to successful warfare. As a natural consequence, they are forced to be organized on military principles […].”(3)

Referring to warring as “sad”, Jenks (1909) posed that plunder is easier, or at least quicker, than working to build up and equip a household, and that men would be unwilling to give up a household – property. The resulting conflict, more than feudalism, developed the practical knowledge of plunder: how best to get stuff with a minimal input of work, and how best to protect the stuff you have worked to accumulate. War, then, is a result of ownership and property.

Jacques Callot, “Plundering a Large Farmhouse”, (1633), plate 5, The Miseries of War.
Inscribed: Here are the fine exploits of these inhuman hearts. They ravage everywhere. Nothing escapes their hands. One invents tortures to gain gold, another encourages his accomplices to perform a thousand heinous crimes, and all with one accord viciously commit theft, kidnapping, murder and rape.

Warfare and military organization were surely intrinsic to city-states existing during Aristotle’s lifetime, which he described as comprising a collection of parts (natural resources, households, and individual citizens), together taking a compound form, and certain order, defining the constitution of the state. For Aristotle, state constitution was not just a theoretical, ‘on paper’, statement of cultural ideals, but an immanent organizing principle analogous to the soul (spirit or genius) of an organism. Thus the Aristotelian constitution of the polis is the way of life of the citizens.(2)

In accordance with Aristotle’s political naturalism, political episteme (from Ancient Greek ἐπιστήμη, epistḗmē, ‘knowledge’) incorporates various practical sciences, such as the art of war (military), the art of household management (economy: from Ancient Greek οἰκονομία, oikonomia, ‘management of a household’, ‘administration’), and the art of language (rhetoric: from Ancient Greek ῥητορικός, ‎rhētorikós, ‘concerning public speech’). Critically, all practical sciences are means of rendering a collective human good. “Even if the end is the same for an individual and for a city-state, that of the city-state seems at any rate greater and more complete to attain and preserve. For although it is worthy to attain it for only an individual, it is nobler and more divine to do so for a nation or city-state”.(2)

“The needs of the many outweigh the needs of the few – or the one.”
– Spock & Kirk, Star Trek II: The Wrath of Khan, 1982

Aristotelian political episteme refers to knowledge of how, why, when and where noble acts and happiness occur among the citizenry, leading to an understanding of how, where and when to act; implementing policy in order to promote general goodness (a common good quality of life) for the state.

Modern political science does not inspire a great deal of noble action or happiness in citizens; if it did, then commoners would surely hold more respect for careering politicians – a role of state that each of us plays, either by direct action or indirectly by deference of action. That so many modern citizens tend to believe that deferring their individual governing responsibility to an unknown group of ‘representatives’ serves them, and the commons, better than collective self-governance would, clearly shows a lack of political episteme, and hence a faith in political science – a faith in systematized governance, definable as technocracy.

A culture of faith in technocracy renders an equivalence between the church (spiritual affairs) and the state (affairs of governance), which is inescapable even if – or perhaps particularly if – one assumes oneself to be a divine ruler. This common faith of modernity invalidates the controversy suggested by Miller (2011), regarding the extent to which ancient city-states and modern states are (dis)similar; political, spiritual and cultural affairs are as intertwined in modernity as they were in antiquity.

Groschl (2013) propagates the Aristotelian meaning of political episteme, as concerning collective life for its own sake, and he suggests that modern political science acts to prevent people from accessing an understanding of what politics means.(4)

Here then is a guide:
Political life renders a constitution: the socio-physical epifunction of a population that emerges from a cultural milieu. It is not attributable to any individual or group, but comprises a collection of individual and/or group interactions within and between a population and its local environment.

Political episteme is a collection of arts; the practical and theoretical knowledge of noble action and happiness of citizens, the purpose of which is to ensure a good constitution; a common good quality of life for a population.

Political science is the practical and theoretical knowledge of distribution and management of power and resources.

Better-worded definitions would not detract from the difference in meaning between the latter two. Epistemes are outrospective: mostly open and giving. Sciences are introspective: mostly closed and reductive.

Political science – indeed science of any kind – is attributable solely to humans, and in particular to modern, ‘western’ (now ‘global’) affluent culture. Groschl teaches that political science has been tuned to Hobbesian political philosophy, leading into an era of misconception, or possibly preconception, about the meaning of economy – which is now assumed to be an intrinsic, if not central, aspect of politics. In modernity, both politics and economy have been redirected to face inward, targeting individual private interests as their primary beneficiaries. So it is due to the moral of modern society (the modern worldview) that the rights and ambitions of the individual are elevated to a near-holy status. We assume genius to be ‘proprietary’ to an individual, rather than a result of the commons, emerging gracefully from the cultural milieu – the complex and uncertain interactions of many and varied actors. Interestingly though, products of genius (generally forms of knowledge) are appropriated by society as common goods.

In emphasis of this last point it seems prudent to assume, as do Bibard & Groschl (2013), that goods and goodness are defined almost ubiquitously, among our past and present cultures, as shared phenomena. As an example, they pose that there is little good in owning the most beautiful painting in the world if no one but the painter ever experiences it. Indeed, without shared experiences of the painting, how could the painter know that it is the most beautiful painting in the world? Goods are necessarily shared, and are thus, to a greater or lesser extent, common.

Modernity holds the misinformed consensus that common goods, indeed goods of any kind, are necessarily made; that goods do not exist without the expenditure of energy by some individual or group. This interpretation has most likely resulted from our cultural fixation upon business, in which goods are produced, traded, bought, sold, and finally consumed. Critically, solar radiation and water seem obvious candidate common goods, yet neither can reasonably be assumed to be a product of expenditure of energy by some individual or some group. Also critically, goods are not necessarily good; it is possible to trade bad goods, or a bad lot of otherwise good goods. The word good appears to have a vaguer meaning stemming from the Germanic word gōd.

Orthodox biologists claim that common goods (termed ‘public goods’ in the technical dialect of biology) are invariably products of metabolic activity, and thus require work to produce. The word public is derived from the Latin publicus, which is a blend of poplicus ‘of the people’ (from populus ‘people’) and pubes ‘adult’. In contrast the word common is derived from the Latin communis, which is itself derived from the old Latin comoenus ‎’shared’, ‘general’. Thus the misunderstanding of common good, held by biologists, appears to be due to uncritical confusion of the meanings of the words ‘public’ and ‘common’, and in particular to a propagated misuse of the word ‘public’(note B).

However, this view is not ubiquitous among scientists. In private correspondence, an ecologist and forest ecosystem conservationist from the University of Wageningen in the Netherlands, G. Havik, has suggested that we should “distinguish common goods from limited common goods”, as the latter poses important consequences for evolution. “Sunlight” he has said “will not be a limited common good for as long as we are around on this planet – except when you’re in someone’s shade, which has driven speciation”. From Havik’s perspective, sunlight is an unlimited common good that is shared and used, but not produced, by metabolic activities. As we shall learn later during exploration of the diversity productivity relationship (DPR), increased diversity of life systems (speciation) may itself be considered a common good. Thus, in an ecological context, shade is an emergent property of biological metabolism, rendering a limiting condition upon the use of an unlimited common good, and shade is also itself a limited common good, due to its diversification effect upon organisms.

A similar example may be made of water. Orthodoxy says that water can be a common (or public) good only if energy is expended in order to create a good, such as a distribution and/or filtration facility rendering potable water. However, we shall assume a wider, more inclusive and more natural interpretation:
Water is a common good if it is available for use.(note C)

Wealth-getting: profiteering vs. sustaining
Non in depravatis, sed in his quae bene secundum naturam se habent, considerandum est quid sit naturale.
What is natural has to be investigated not in beings that are depraved, but in those that are according to nature.
– Aristotle, Politics, Book 1(5)

Business presumes to share the goodness (profit) produced by its activities with a select group of actors (the shareholders), but not with the wider ecological sphere (the stakeholders). Simply, business is conducted for the good of an individual legal person; a corporation. In accordance with political science, the purpose of human social interaction – our political lives – is to serve private interests as exclusively as possible. Another way of saying this is that modern human social interaction is geared toward rendering and increasing private goods.

From my own perspective at the time of writing this essay, a cultural moral of self-fulfillment rather than social responsibility seems to have peaked in the 1970s among the post-WWII American baby boomer culture; the “Me generation”. Twenge & Campbell (2009) have identified and exposed a generational aftershock; a “destructive spread of narcissism”.(6)

Bibard & Groschl suggest that private profiteering, exemplified by the corporate sector under the umbrella of political science, stands in full contradiction to a possible common good. They tell that ancient political philosophy respected private interests to some degree, and thus allowed business to occur to some extent, as a result of political life. Profiteering, however, was viewed as a manner of managing private, familial, household affairs. The commons (community, city-state or nation), while requiring wealth-getting activities, does not necessitate a profit-motivated attitude. Aristotle further dissected wealth-getting by defining a necessary branch that is related to sustenance, is limited, and is by nature a part of household management; and an unnecessary branch that is unlimited, unnatural (abstract) and addictive.

“[Some] people suppose that it is the function of economy (household management) to increase property, and they are continually under the idea that it is their duty to be either safeguarding their substance in money or increasing it to an unlimited amount. The cause of this state of mind is that their interests are set upon life but not upon the good life. [Even] those who fix their aim on the good life seek the good life as measured by bodily enjoyments, so that inasmuch as this also seems to be found in the possession of property, all their energies are occupied in the business of getting wealth; and owing to this the second kind of the art of wealth-getting has arisen. For as their enjoyment is in excess, they try to discover the art that is productive of enjoyable excess; and if they cannot procure it by the art of wealth-getting, they try to do so by some other means, employing each of the faculties in an unnatural way.”(7)
Lead characters in the film The Wolf of Wall Street (2013).

“[The] business of drawing provision from the fruits of the soil and from animals is natural to all. But, […] this art is twofold, one branch being of the nature of trade while the other belongs to the household art; and the latter branch is necessary and in good esteem, but the branch connected with exchange is justly discredited (for it is not in accordance with nature, but involves men’s taking things from one another). As this is so, usury is most reasonably hated, because its gain comes from money itself and not from that for the sake of which money was invented. For money was brought into existence for the purpose of exchange, but interest increases the amount of the money itself; consequently this form of the business of getting wealth is of all forms the most contrary to nature.”(7)
– Aristotle ca. 350 BC

Earning money or any manner of profiteering for its own sake, tends to lead people astray from the good life. Aristotelian political philosophy does not assume the Hobbesian primacy of private freedoms, but is oriented toward a common good life via collective functions of community. Likewise, Bibard & Groschl suggest that the ultimate ends of our actions, in business as in political life, should be directed outward, toward the good of the commons, and that the common good should be understood as fulfilling human ends; producing a good quality of life.

Politics should be geared for and directed toward human ends, simple biological needs, not toward vanity or enrichment for their own sake. This message is echoed by the words and meanings of wizards and sages, stretching from ancient times through to modernity. They teach that the path toward intellectual fulfillment via a good quality education, leading to holistic contemplation, is a far healthier human pursuit than is simple material, or worse still, monetary acquisition.

Clearly, ancient philosophies of political episteme and of household management are more relevant to human nature than are their modern theoretic counterparts, political science and economics, respectively. Apparently, people hang onto the modern habit unreasonably; faithfully doing damage.

Spontaneous politic
Aristotle viewed humans as spontaneously political animals, and indeed human nature is fundamentally social. However, social behaviors of some kind or other may be observed throughout the known biome. Organisms are necessarily embedded within life-systems, thus living as parts of collectives (communities, ecosystems) that are formed and maintained via continual biosemiosis.
Consciously or not, we continually measure and compare ourselves and our acts against those of our peers – be they members of our own, or another species.

Schematic diagram showing potential bacterial interspecies interactions.(8)

The natural state currently proposed is a spontaneously occurring, complex, anarchic, self-organizing and self-regulating, adaptive milieu fundamental to life-systems. The ‘state of nature’ is thus understood as an emergent sociophysical epifunction.

Homo sapiens is nestled symbiotically within the wholeness of the planetary biota. A similar natural state may be assumed to exist for all organisms and life systems on Earth, from the lowly kitchen sponge microbe(9)(10), through the great ocean mammal(11), to the mighty forest dendron(12).

Let us venture the supposition that no organism is capable of sustaining life in the absence of interactions with other organisms. Orthodox biologists would disagree with this umbrella definition, arguing that individual unicellular organisms (such as bacterial or archaeal cells, some protozoa and algae) are capable of surviving in isolation, as chemotrophic or photosynthetic primary producers. Here, then, stands a challenge to provide an unambiguous example as proof of biotic independence in situ – naturally. In vitro attempts at sustaining an individual cell, isolated from sources of organic nutrients as well as from mineral products of biotic processes, fail rapidly. If access to organic nutrients and biotic mineral cycling is made available to the cell, then metabolism can continue, invariably leading to colonization of the habitat, by invasion of other species and/or clonal (vegetative) reproduction giving rise to genetic mutants. In either case the result is a form of diversified symbiotic collective; a culture.

The interaction imperative is expressed clearly by Cowden (2012), “the organism with the best interaction strategy has the highest fitness [and] stable payoff equilibriums have been shown for cooperation and altruism, behaviors that seem contradictory to the strongly supported individualistic, survival of the fittest mode of evolution”.(13)

Models of social behavior: informatory and unreal
Computer models of social behavior are fundamentally flawed due to their necessarily rational (computational) basis. Natural systems of social behavior are in part necessarily logical, but are just as necessarily irrational (non-computable), due to the fundamentally uncertain nature of nature itself. In order to be understandable, a model can only ever approximate nature in a simplistic manner, and in accordance with the state of knowledge (theory) at the time of the model’s construction. The sciences are model-based activities, in theory. In practice, the sciences necessarily incorporate, and then so far as technically possible deny, the influences of irrational factors.

Models, whether computerized or not, represent a truncation of reality. Scientific knowledge thus also represents a truncation of reality. Fascinating and awesome it is to begin to grasp the scale of modern moral and knowledge lock-in.(14)(15)

Game theory teaches that “cooperation results in the highest mutual benefit”. An offshoot of game theory, evolutionarily stable strategy (ESS) theory, assumes that “a uniform environment, and resources are available everywhere”.(13)
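
The quoted claim can be illustrated with the classic Prisoner’s Dilemma; the following Python sketch uses conventional textbook payoffs (the specific numbers are illustrative assumptions, not values taken from the cited text):

```python
# Classic Prisoner's Dilemma payoffs, satisfying T > R > P > S.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (reward, R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # mutual defection (punishment, P)
}

def joint_payoff(a, b):
    """Total benefit to the pair of players for one round."""
    return PAYOFF[(a, b)] + PAYOFF[(b, a)]

# Mutual cooperation maximizes the *collective* payoff, even though
# defection is the individually rational (dominant) move.
outcomes = {(a, b): joint_payoff(a, b) for a in "CD" for b in "CD"}
best = max(outcomes, key=outcomes.get)
print(best, outcomes[best])  # ('C', 'C') 6
```

The tension the essay returns to below – individually rational defection versus collectively optimal cooperation – is visible directly in these four numbers.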

ESS theory is an example of modeled social behavior. The theory is originally attributed to John Maynard Smith, a former aeronautical engineer turned geneticist and theoretical biologist who also developed signaling theory (biosemiotics), and to George Price, a physical chemist turned population geneticist and theoretical biologist, turned devout Christian and altruist. Price eventually committed suicide due to depression, perhaps in part owing to an inability to show in practice what was provable in theory.
Clockwise from top left: William D. Hamilton, John Maynard Smith, George Price, John Nash, John von Neumann.

Maynard Smith & Price followed the works of evolutionary biologist and geneticist turned mathematician and logician William D. Hamilton, the polyhistor John von Neumann, and the mathematician and logician John Nash (who suffered from schizophrenia), the latter two both known for their work on game theory. Much like game theory, ESS theory comprises logical manipulation of rational, albeit abstract, mathematical characterizations. The subject of ESS theory was popularized by Richard Dawkins in 1976, with his book The Selfish Gene, in which Dawkins made frequent use of the phrase “all other things being equal” – though of course, in natural environmental circumstances, all other things are often not equal. To his credit, Dawkins did make reference to this fact, commenting that the environment does tend to radical and sudden change, thus allowing for the displacement of an existing ESS, which gives way to the emergence of new strategic patterns, before eventual re-stabilization of the biotic system into a new ESS; a new steady state.(16)

In the nascent literature of economics, environmentalism, and political theory, which together form the bulk of serious theoretical work on the topic of sustainable development, the emergence and stabilization of a novel ESS following the breakdown of an existing ESS, is termed a “paradigm shift”, which is nothing less than a change of cultural moral; a change of worldview.

In his popularization of genetic fundamentalism, Dawkins propagated arguments against the existence of altruistic behaviors and group selection, saying that both are common misunderstandings of phenomena that benefit individual genes. Dawkins knowingly skipped over a closely related concept, Hamiltonian inclusive fitness, which must have seemed as likely then as it does now to disrupt the foundation of the gene-centric orthodox theoretical edifice. In fact, Dawkins’ text mentions inclusive fitness only in a footnote of the (2006) 30th anniversary edition, referring to his colleague and collaborator Alan Grafen, whose work (Grafen, 1984) reported “the widespread misuse of Hamilton’s concept of ‘inclusive fitness’.” Grafen himself seems to have been considerably broader of mind, admitting that Hamilton’s rule (note D), upon which kin selection theory, inclusive fitness theory, and ESS theory are founded, “holds good only under certain assumptions”. There are “different definitions of [relatedness], and the scope of the rule depends on the definition of [relatedness] employed”. Grafen interpreted inclusive fitness as “a device that simplifies the calculation of conditions for the spread of certain alleles”, and suggested that the expression of those alleles affects the number of offspring produced by other organisms in a population.(17)
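
Hamilton’s rule, in its standard textbook form rb > c, is easy to state mechanically; the sketch below uses illustrative fitness values of my own, and, per Grafen’s caution, the verdict holds only under the rule’s assumptions about how relatedness r is defined:

```python
def altruism_favored(r, b, c):
    """Hamilton's rule (standard form): an allele for an altruistic act
    can spread when relatedness r times the benefit b conferred on the
    recipient exceeds the fitness cost c borne by the actor."""
    return r * b > c

# Full siblings (r = 0.5): sacrificing 1 unit of fitness is favored only
# if it yields more than 2 units of benefit to the sibling. For first
# cousins (r = 0.125) the same act fails the test.
print(altruism_favored(r=0.5, b=3, c=1))    # True  (0.5 * 3 = 1.5 > 1)
print(altruism_favored(r=0.125, b=3, c=1))  # False (0.125 * 3 = 0.375 < 1)
```

Grafen’s point survives the arithmetic: the inequality is trivial, but everything of substance hides inside how r, b and c are measured.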

This last point brings us to the controversial idea of group selection, which makes intuitive sense (species are born, reproduce and become extinct, just as organisms are born, reproduce and die) but is vague and difficult to rationalize, particularly from the gene-centric perspective. However, empirical evidence of higher-level selection (selection of traits above the level of individual organisms) was published by Wade (1976). In his initial study of group fitness among populations of flour beetles, Wade concluded that a genetic bottlenecking “process of random extinctions with recolonization can establish conditions favorable to the operation of group selection.”(18) In a continuation of his experimental work, Wade (1980) reported that “under many circumstances, a species performance in competition is not predictable from its performance in single-species culture”, and that “competitive ability can be viewed as an indirect but general measure of the nature of population response to group and individual selection for increased and decreased population size.”(19)

Unclear and slight, the group selection idea is perhaps too easily dismissed. We shall not dwell upon it further here, except to point out that it bears the markings of an emergent phenomenon, and to respectfully remind the reader that epigenetic phenomena (the potentially heritable alteration of genetic traits, environmentally affected above the level of DNA code) are a relatively recent discovery.(20)

In regard to altruistic behaviors, Reuter et al (2010) have reported that in humans “oxytocin promotes interpersonal trust by inhibiting defensive behaviours and by linking this inhibition with the activation of dopaminergic reward circuits, enhancing the value of social encounters.”(21) Furthermore, a handful of genetic association studies have linked polymorphisms of the oxytocin receptor gene (OXTR) and the vasopressin 1a receptor gene (AVPR1A) to prosocial behaviors, while concurrently implicating the dopaminergic system. Thompson et al (2013) report two candidate genes for human altruism, OXTR and cluster of differentiation 38 (CD38); both genes are active in the regulation of blood plasma concentrations of oxytocin. They suggest that OXTR and CD38 mediate trade-offs between self-focused cognition and behaviors, versus prosocial cognition and altruistic behaviors.(22)

“Inclusive fitness is often associated with kin selection, as more closely related organisms more likely share the same alleles – such alleles are referred to as ‘identical by descent’ as they are from a common ancestor. However, altruism genes may be found in non-related individuals, thus relatedness is not a strict requirement of inclusive fitness [which is widely quoted as an explanation for the evolution of altruistic behaviors]” Cowden (2012).

I continue to feed and care for an organism similar to the one pictured here. Dawkins would say that my expressions of care toward my pet Uma are not altruistic but selfish, that Uma somehow increases my own reproductive capacity, or at least that I am pushing my own feel good button. That may be so, I openly admit that my quality of life is bettered by Uma’s company, though Uma tends to enjoy a good quality of life also.
We’re not yet sure about the cat, who has been invited into the household to manage a population of mice. Apparently I am incapable of altruism toward mice.

Jest aside, wild biomes (natural states) are not all red in tooth and claw, but they are all complex and diversified, symbiotic and synergistic systems, defined by divisions of labour and collective actions, producing an emergent common good. Inclusive fitness does not describe a Hobbesian war of each against all, but implies the indirect reproduction of identical copies of traits (behaviors or phenotypes linked to environmental or genetic components), parallel to the vertical gene transfer achieved by parents to their offspring; horizontal gene transfer, as documented by microbiologists, comes closer but still does not fully hit the mark of indirect reproduction. Essentially, distant relatives within a species, as well as siblings, even twins, exemplify indirect reproduction. A wider exemplary scope might expose the various and diverse hemoproteins.

Hemoglobin is a tetrameric protein (left), comprising four heme groups (right).

“If iron is nature’s favorite essential metal, then heme is its Swiss Army knife: a versatile, indispensable tool that, in the company of its protein sheath, can do seemingly anything. The power of heme is particularly evident in the prokaryotes, where diversity in the catalytic activities of heme proteins, as well as proteins involved in the uptake, trafficking and sensing of heme, appears to be vast”.(23)
– Mayfield et al, (2011)

Dawkins paved his approach to the subject of biological collectivism, altruism, and social behavior with logic and computer models. He was confident that he saw clearly a single formal system, operating invariant rules, written by men – the theoretical evolutionarily stable strategy (ESS). In all honesty, I admit to seeing rather less clearly, more vaguely and uncertainly, a set of complex and interacting systems. Biological processes are changeable and adaptable; they are not written; they are not rules, but malleable agreements and necessary compromises.

Theoretical biologist R. Rosen argued that a living organism is not a machine, and thus cannot have a computer-simulable model. Furthermore, Rosen opined that the current reductionistic state of science – “sacrificing the whole in order to study the parts” – is inadequate to create a coherent theory of biological systems, as life is not observed after dissection of a biological organization. Rosen held what seems to be a mystical belief: that biology is not a subset of known physics, and that relational studies of living systems (how parts of living systems relate to each other) may produce new knowledge of physics and result in profound changes for science generally. Inspired by Gödel’s theorems of incompleteness, and the limitations of Turing-computability, he suggested that “we should widen our concept of what models are”.(24)

The assumption of strict empiricism is fundamentally untenable, as any observation is necessarily dependent upon subjective experience. Thus the ‘empirical sciences’, as well as those bodies of knowledge best termed ‘epistemes’ – including politics, psychology and the ‘arts’ – are principally subjective, intuitive understandings, leading to the formation and execution of practical arts, which in turn allow for the acquisition of empirical knowledge. Rationalizations of irrational processes, such as politics and the (inter)actions of political states, are conducive to modeling in a manner similar to the modeling of physical phenomena, those models being necessarily based upon truncations of empirical measurement, rendering computable data.

That markets are composed of individual rational actors is a fundamental supposition upon which modern economic theory is built, allowing for precise computational modeling of economic activity. However, this founding assumption is clearly incorrect; markets are composed of people (individuals and groups), and people are not invariably rational actors. Simply, people are not machines: they do not always Turing-compute, or act in accordance with expectation (theoretical or otherwise); people do not always do the right thing. Thus real market behaviors tend not to conform tightly with statistical, theoretical prediction. This observation is communicated succinctly by Bibard & Groschl, who have said that “the economic assumption of pure and perfect rationality is not an empirical, but a theoretical one”.

The complete failure of economic theory, and of the data-driven models built upon it, to predict – even imprecisely and inaccurately – black swan events such as the global finance-sector catastrophe of 2007-8 and the ensuing global monetary crisis, is the result of both the truncations of empirical measurement data used in theoretical modeling, and the indoctrination of modern global culture into a system of theoretical and mechanical naivety.

In a very real sense, modern economic theory and models comprise a simplistic interpretation of the realities of political life; and generally, people place near-complete trust and reliance upon technologies that they misunderstand, or outright do not understand.

Groschl (2013) reports that recent annual meetings of the World Economic Forum at Davos have begun to recognize that sustainable development is not merely a mechanical, technical process. Increasingly, behavior is seen as the missing link between analysis (providing knowledge of what is at stake) and implementation (doing something about it). He suggests that a transformation is occurring – or needs to occur – and calls upon his readers to realize that "not everything that counts can be counted and not everything that can be counted counts […]. One cannot rely too much on models and calculations. Instead one must rely on one's intuition, and trust the intuitions of others". In so saying, Groschl corroborates my own view, published as part of a previous post titled iconoclast, which ends with a call to realize that the greater part of reality is irrational – "irrationality is the denominator, and rationality the numerator".

The mechanization of governance: expert systems – not even idiots
Hackett & Groschl speak of a transnational capitalist class (TCC) – the principal shareholders and managers of large corporations. These private businesses do not reside within a single nation, and so are not bound by the laws and customs of any one nation; rather, they are spread across several nations, whose governing policies they tend to influence. In fact, Hackett & Groschl claim that transnational corporations have grown to become the core actors in governance discourse. Increasingly, developed states play peripheral, enabling roles, while developing countries have been entirely disenfranchised from the global agenda. Transnational corporations exert their influence upon the economies of most countries, and seem to play an ever-increasing, albeit private and hidden, role in international relations, together conducting economic activities on a scale beyond the capacity of any one nation state. It is said that the power and reach of transnational business has in many ways surpassed the power and capacity of the United Nations.
Given that people irrationally trust models, that government policy is strongly influenced by corporate interests, and that the governance of corporations is strongly influenced by economic theory and computer modeling, it seems reasonable to take the view that policy is increasingly being conducted by technological systems. Most of these systems still employ people – albeit under the unrealistic assumption that the human components of the politico-technological system are devoid of humanity; that they are perfectly rational actors.

The modern political state is thus modular, and most correctly described as a technocracy. Herein, warn Hackett & Groschl, lies a looming crisis of accountability. Knowing that corporate shareholders are not legally liable for the actions of the corporate person they own, and assuming the TCC to be the global elite, the economically governing group, who will hold the TCC and its individual members accountable – and how?

One answer to this quandary is as predictable as it is incapable: artificial intelligence. Not the 'general' or 'strong' AI of science fiction, but decidedly unintelligent expert systems. The convergence of governance and expert systems is termed e-government, defined by the United Nations Global E-Government Readiness Report 2004 as "the use of [information and communication technology (ICT)] and its application by the government for the provision of information and public services to the people."

Several aspects of governance, in business and government, have already been delegated to expert systems, as shown by the broader definition given in a more recent UN document, titled "E-Government for the Future We Want":
“E-government can be referred to as the use and application of information technologies in public administration to streamline and integrate workflows and processes, to effectively manage data and information, enhance public service delivery, as well as expand communication channels for engagement and empowerment of people. The opportunities offered by the digital development of recent years, whether through online services, big data, social media, mobile apps, or cloud computing, are expanding the way we look at e-government. While e-government still includes electronic interactions of three types – i.e. government-to-government (G2G); government-to-business (G2B); and government-to-consumer (G2C) – a more holistic and multi-stakeholder approach is taking shape.”(25)

The Encyclopedia of Digital Government (2007) provides concrete examples of governance tasks performed by expert systems. "Increasingly, government organizations in the Netherlands use expert systems to make judicial decisions in individual cases under the Dutch General Administrative Law Act […]. Examples of judicial decisions made by expert systems are tax decisions, decisions under the Traffic Law Act (traffic fines), decisions under the General Maintenance Act (maintenance grants), and decisions under the Housing Assistance Act.

There are two categories of judicial expert systems. Expert systems in the first category support the process of judicial decision making by a civil servant. The decision is taken in “cooperation” between a computer and the civil servant. Expert systems in the second category draft judicial decisions without any human interference. In these cases the decision making process is fully automatic.”(26)
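A system of the second, fully automatic category can be sketched in a few lines of rule-based code. The rules, speed thresholds, and fine amounts below are invented for illustration and do not reflect actual Dutch law:

```python
# A minimal, hypothetical rule-based "expert system" of the
# second category: it drafts a decision from the recorded facts
# alone, with no human in the loop. A first-category system
# would instead hand this draft to a civil servant to confirm.

def draft_traffic_decision(speed: int, limit: int) -> dict:
    """Draft a fine decision from a measured speed and the limit.
    Thresholds and amounts are illustrative assumptions."""
    excess = speed - limit
    if excess <= 0:
        return {"decision": "no fine", "amount": 0}
    if excess <= 10:                      # minor excess: flat base + rate
        return {"decision": "fine", "amount": 50 + 8 * excess}
    return {"decision": "fine", "amount": 130 + 12 * excess}

print(draft_traffic_decision(62, 50))
```

Note what the sketch makes plain: the "decision" is a pure function of whatever facts were fed in. Anything the system was not programmed to consider – context, mitigating circumstances, a letter from the citizen – simply does not exist for it.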

In 1989, J. Weintraub authored an article published in AI Magazine (note E), in which he lists twelve possible uses for expert systems in federal, state, and municipal governments.
1) Forecasting – financial planning and cash management
2) Labor relations
3) Document and archive retrieval
4) Regulatory compliance advice
5) Office automation
6) Capital assets analysis
7) Personnel employment assessment
8) Legal advice
9) Instruction
10) Bid and proposal preparation assistance
11) Natural language querying of databases
12) Auditing

Further, Weintraub stated that "the applicability of expert systems and AI to government administration can be seen in a careful 'between the lines' reading of the Information Systems Plan (ISP). Although not explicitly stated, many of the systems and projects defined in ISP are driven by extensive and complex logic processes and would benefit from AI technology."(27) This is more than a little humorous, as expert systems are thoroughly incapable of reading "between the lines" – in a sense proving the necessity of humans, expert or not, for interpreting real-world situations and for proposing solutions that better, or at least maintain, a decent quality of life.

In this regard I speak from personal experience, having been subjected, rather frustratingly, to the stress-inducing ridiculousness of the expert system employed by the royal Dutch tax department. In regular correspondence with that system over the course of six years, it failed to remind me of a chat bot only twice – on both occasions due to the intervention of a (human) civil servant. The expert governor (the Dutch tax bot) consistently appraised my situation incorrectly, whereas a layman (myself) and a civil servant (a tax inspector) appraised it correctly. The computerized expert governor, a rational specialist, managed very well only to reduce my quality of life, by failing to incorporate into its assessment the information I had sent to it.

Apparently, the current culture of deferring the individual's responsibilities of governance to a group of 'representative' strangers is not dysfunctional enough. Modern culture seeks to defer those responsibilities even further, feeding them to unintelligent expert systems. While I can imagine the presumed attraction of this course of action, viewed superficially and from a disinterested distance, my own experiences have shown that deferring governance to machine systems makes for singularly poor policy and results in absurd decision making. Expert systems have no understanding of the knowledge they house, nor of how the implementation of that knowledge impacts the quality of people's lives. Indeed, this is part of the attraction – we hope to better our lives by employing selfless, unbiased, 'incorruptible', perfectly rational machines as civil servants; as our governors. A warning! Expert (governing) systems are not intelligent; in fact, they are not even idiots.

There may be a glimmer of hope, however, in the incorporation and interrelation of several expert systems representing a diversity of specializations, thus synthesizing a multi-expert system; a diversified-specialized system; a computerized polymath. Such a system would not be intelligent, but it might be capable of more rounded, complex decision making, which in turn may lead to more livable forms of governance for humans. However, the only sure way to attain a good quality of life is to personally, individually, abandon the current culture of technocratic lock-in ('representative democracy'), and to begin to govern oneself in association with one's local group, resources, and territory.

A) For the purpose of this essay, the word cell is assumed to be synonymous with actor, and the latter may refer to molecular as well as systemic agents of action.

B) Take for example the report by Cordero (2012), in which is stated: “A common strategy among microbes living in iron-limited environments is the secretion of siderophores, which can bind poorly soluble iron and make it available to cells via active transport mechanisms. Such siderophore-iron complexes can be thought of as public goods that can be exploited by local communities and drive diversification […]” – italicized emphasis is mine.

C) Of course ‘water’ may be replaced with any object or process.

D) Hamilton's rule (rB > C), published in 1964, popularized the mathematical treatment of kin selection sketched by Fisher and Haldane in the 1930s; Price later supplied a further formal mathematical treatment, a theorem.
r = genetic relatedness of the recipient to the actor.
B = benefit gained by the recipient as a result of the act.
C = cost of the act to the actor.
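With r, B and C defined as above, the rule translates directly into code. The relatedness coefficients used in the example are the standard values for full siblings (0.5) and first cousins (0.125):

```python
def altruism_favoured(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: an altruistic act is favoured by
    kin selection when r * B > C, where r is relatedness,
    B the recipient's benefit, and C the actor's cost."""
    return r * b > c

# Sacrificing 1 unit of fitness to give a full sibling
# (r = 0.5) 3 units is favoured: 0.5 * 3 = 1.5 > 1.
print(altruism_favoured(0.5, 3.0, 1.0))
# The same act toward a first cousin (r = 0.125) is not:
# 0.125 * 3 = 0.375 < 1.
print(altruism_favoured(0.125, 3.0, 1.0))
```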

E) Elsevier publishes an entire journal devoted to the field of expert systems in governance, titled "Expert Systems with Applications". Here are two recent (2012 and 2015) citations:
i) “Evaluation and ranking of risk factors in public–private partnership water supply projects in developing countries using fuzzy synthetic evaluation approach”
ii) “An unstructured information management system (UIMS) for emergency management”

1) C. Woese, “The universal ancestor”, (1998), Proceedings of the National Academy of Sciences of the United States of America, vol. 95(12), p. 6854-9, (abstract)

2) F. Miller, “Aristotle’s Political Theory”, (2012), The Stanford Encyclopedia of Philosophy,

3) E. Jenks, “A History of Politics”, (1909), p.73,

4) S. Groschl et al, “Uncertainty, Diversity and The Common Good”, (2013), Gower,

5) J. Scott, "Critical Assessments of Leading Political Philosophers", (2006), p. 421, Routledge,

6) J. Twenge & W. Campbell, “The Narcissism Epidemic:Living in the Age of Entitlement”, (2009), Free Press,

7) Aristotle, “Politics (Book 1)”, (1957), Aristotle in 23 Volumes, Vol. 21, translated by H. Rackham, Cambridge, MA, Harvard University Press,

8) F. Short et al, “Polybacterial human disease: the ills of social networking”, (2014), Vol. 22-9, p. 508-518, Trends in Microbiology, Elsevier,





13) C. Cowden, “Game Theory, Evolutionary Stable Strategies and the Evolution of Biological Interactions”, (2012), Nature – Education,


15) T. Foxon, “Technological and institutional ‘lock-in’ as a barrier to sustainable innovation”, (2002), Imperial College London,

16) R. Dawkins, “The Selfish Gene”, (1976), Oxford University Press.

17) A. Grafen, “Natural Selection, Kin Selection and Group Selection”, (1984), Behavioural ecology: an evolutionary approach, Vol. 2,

18) M. Wade, “Group selection among laboratory populations of Tribolium”, (1976), Proceedings of the National Academy of Sciences, Vol. 73-12, p. 4604-4607,

19) M. Wade, “Group Selection, Population Growth Rate, and Competitive Ability in the Flour Beetles, Tribolium Spp.”, (1980), Ecology, Vol. 61-5, p. 1056-1064, Ecological Society of America, abstract

20) V. Hughes, "Epigenetics: The sins of the father", (2014), Nature, Vol. 507-7490,

21) M. Reuter, et al, “Investigating the genetic basis of altruism: the role of the COMT Val158Met polymorphism”, (2010), Social Cognitive and Affective Neuroscience,

22) G. Thompson, et al, “Genes underlying altruism”, (2013), Biology Letters, The Royal Society,

23) J. Mayfield et al, “Recent advances in bacterial heme protein biochemistry”, (2011), Current Opinion in Chemical Biology, Vol. 15, p. 260–266, Science Direct,

24) “Rosennean Complexity and other interests”, (2008), Panmere,

25) "UNITED NATIONS E-GOVERNMENT SURVEY 2014 – E-Government for the Future We Want", (2014), United Nations, New York,

26) M. Groothuis, “Applying ICTs in Judicial Decision Making by Government Agencies”, (2007), Encyclopedia of Digital Government, p. 87-96,

27) J. Weintraub, “Expert Systems in Government Administration”, (1989), AI Magazine Vol. 10/1, Association for the Advancement of Artificial Intelligence,

The Laws of Thought

In this and the following episode we shall explore four assumptions:

a) For individuals living in modern developed economies, the frequency and duration of human-machine interaction are greater than those of human-human interaction and human-animal interaction combined.

b) Human behaviors are altered (conditioned) by human-machine interaction.

c) We are beginning to expect each other to behave in a more machine-like manner, particularly in work situations (predictable performance) but also in general public situations, such as in traffic.

d) Our biology is beginning to form psychological and physiological unions with machine-states and machines, respectively.

It has become a popular belief that human-machine interaction is ushering in a new age. Geologists hesitantly refer to the nascent epoch as the Anthropocene; philosophers refer to it as posthumanism and transhumanism. The word humankind will no longer strictly apply, but the term posthumankind seems clumsy. By the marriage of art, fashion, and popular culture with technologies such as genetic engineering, synthetic biology, robotics, electronics and information processing, humans are becoming posthuman; it has been hypothesized that Homo sapiens will evolve to become Homo evolutis. I'm sure an apt term will be coined or adopted to describe the arrival of the first anthropogenic, self-targeted speciation event, and I think it likely that H. sapiens and H. evolutis will coexist for some period, as seems to have been the case with other species of the genus Homo.

Human-machine interaction (HMI) is almost ubiquitously referred to as Human-Computer Interaction (HCI) by the academic community, because the vast majority of research funding, and thus study, focuses on the interface between humans and electronic devices (including robots). In the current exploration, I have chosen to speak more broadly of HMI, which is to include novel technologies as well as seemingly mundane ones, such as automobiles, telephones, televisions, powered wood saws, and bread toasters.

Let us begin by diving overboard to explore the mating of animals with machines, and vice versa.

The mechanization of logic
In the history of cybernetics, "the influence of mathematical logic is a recurring element. The philosophy of Gottfried Leibniz [circa 1680] revolves about two closely related concepts – universal symbolism and a calculus of reasoning. The calculus of arithmetic lends itself to mechanization, progressing through the abacus and step reckoners [step drum] to the desktop computing machine [mechanical calculator], and on to the ultra-rapid computing machines [analog and digital computers] of the present day. The calculus ratiocinator of Leibniz contains the germs of the machina ratiocinatrix, the reasoning machine. Leibniz, like his predecessor Pascal, was interested in the construction of computing machines. So the same intellectual impulse which led to the development of mathematical logic has at the same time led to the ideal, and eventually to the actual, mechanization of thought processes."(1a)

Gottfried Wilhelm Leibniz (1646 to 1716)

Leibniz(2), along with Descartes and Spinoza, was one of the three great advocates of rationalism in the 17th century. His works anticipated modern logic and analytic philosophy. His philosophy drew on the scholastic tradition, in which conclusions were produced by applying reason to first principles or prior definitions rather than to empirical evidence (that is to say, conclusions were reached by thinking, rather than by observation alone). Leibniz made major contributions to physics and technology, and anticipated notions that surfaced much later in philosophy, probability theory, biology, medicine, geology, psychology, linguistics, and computer science. He wrote works on philosophy, politics, law, ethics, theology, history, and philology (the study of historical texts).
The Staffelwalze (literally: step drum) machine is named after its operating mechanism. Invented by Leibniz in 1672, the device took more than 20 years to construct and was the first non-human calculator that could perform all four arithmetic operations.

Curta mechanical calculator (1948) – Count on Curta! (simulator)

Desktop computing machine (circa 1960) – pictured without housing

Modern culture commonly uses the words cyberspace, cybernetic, and cyborg, but as happens all too often to lay people and academics alike, the origin of these terms has receded from consciousness. Norbert Wiener coined the term cybernetics, and described a novel field of scientific study in his book of the same name.

"We have decided to call the entire field of control and communication theory, whether in machine or in the animal, by the name Cybernetics, which we form from the Greek κυβερνήτης or steersman. In choosing this term, we wish to recognize that the first significant paper on feed-back mechanisms is an article on governors, which was published by Clerk Maxwell in 1868, and that governor is derived from a Latin corruption of κυβερνήτης. We also wish to refer to the fact that the steering engines of a ship are the earliest and best developed forms of feed-back mechanisms."(1b)

In the mid-to-late 1940s, the new science of cybernetics found a sympathetic home at MIT. It began as a multidisciplinary pursuit and has to date continued as such, attracting people from various fields of study, including but not restricted to mathematics, electronics, electrical engineering, physiology, biophysics, psychology, sociology, anthropology, economics, and philosophy.

“[Quite early-on, work began] on problems concerning the union of nerve fibers by synapses into systems with given overall properties. The technique of mathematical logic was used for the discussion of what were after all switching problems. […] The vocabulary of the engineers soon became contaminated with the terms of the neurophysiologist and the psychologist.”(1c)

Twenty years later, Sherry Turkle (also at MIT) wrote “[Computers provided legitimation for a radically different way of seeing mind. Computer scientists had of necessity developed a vocabulary for talking about what was happening inside their machines, the internal states of general systems. If machine minds had inner states, surely people had them too.]”(3)
So biology has fed into engineering, which has fed back into biology.

In the summer of 1946 experiments were begun to elucidate aspects of the feedback phenomenon of the nervous system. "We chose the cat as our experimental animal, and the quadriceps extensor femoris as the muscle to study. We cut the attachment of the muscle, fixed it to a lever under known tension, and recorded its contractions isometrically or isotonically. We also used an oscillograph to record the simultaneous electrical changes in the muscle itself. […] The muscle was loaded to the point where a tap would send it into a periodic pattern of contraction, which is called clonus in the language of the physiologist. We observed this pattern of contraction, paying attention to the physiological condition of the cat, the load on the muscle, the frequency of the oscillation, and its amplitude. These we tried to analyse as we should analyse a mechanical or electrical system exhibiting the same pattern of hunting. We employed, for example, the methods of McColl's book on servo-mechanisms."(1d) These experiments led to the physical implementation of cybernetic (feedback) theory, which took the form of a phototropic mechanism.

Norbert Wiener (circa 1950) pictured with phototropic cart. The mechanism, nicknamed “moth”, would propel and steer itself toward a light source.

“It has long been clear […] that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control; and that its input and output need not be in the form of numbers or diagrams, but might very well be, respectively, the readings of artificial sense-organs such as photo-electric cells or thermometers, and performance of motors or solenoids. With the aid of strain-gauges or similar agencies to read the performance of these motor organs and to report [to feed-back] to the central control system as an artificial kinaesthetic sense, we are already in a position to construct artificial machines of almost any degree of elaborateness of performance. [This development] has unbounded possibilities for good and for evil. For one thing, it makes the metaphorical dominance of the machines […] a most immediate and non-metaphorical problem, [while giving] the human race a most effective collection of slave-labourers. [The industrial revolution devalued the human arm by out-competing it with machines; in the developed world there is no rate of pay low enough for a pick-and-shovel labourer to sustainably compete with a tractor for excavation work]. The modern industrial revolution [information technology] is similarly bound to devalue the human brain at least in its simpler and more routine decisions.”(1e)
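The feed-back principle Wiener describes – a sense-organ reading reported back to a central control, which corrects the motor output – can be caricatured in a few lines. The one-dimensional "moth", the gain value, and the step count below are illustrative assumptions, not Wiener's actual circuitry:

```python
def steer_toward_light(position: float, light: float,
                       gain: float = 0.5, steps: int = 20) -> float:
    """One-dimensional caricature of the phototropic 'moth':
    at each step the error (light - position) plays the role of
    the artificial sense-organ reading; scaled by a gain, it is
    fed back as a correction to the motor output."""
    for _ in range(steps):
        error = light - position   # sense-organ report
        position += gain * error   # motor correction (feed-back)
    return position

# The mechanism converges on the light wherever it starts,
# because the feedback loop continually shrinks the error.
final = steer_toward_light(position=0.0, light=10.0)
print(f"final position: {final:.4f}")
```

The essential cybernetic point survives even this caricature: the system's behavior is governed not by a stored plan but by the continual reporting of its own performance back into its control.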

The Cyberplasm project and BTBI (pictured here) are examples of the current state of this nascent field of endeavor.

Why pursue artificial intelligence?
What is intelligence? is an age-old question(4), perhaps even the original philosophical query. Artificial intelligence (AI) is our most recent attempt to model the mind, in the hope of edging closer to self-awareness and self-realization. However, to date there is still no consensus definition of what intelligence is, where it is seated (if in the mind, then what and where is that?), or whether mind exists at all.

Turkle has commented on the latter, saying “[Inherent in the prospect of artificial intelligence is a threatening challenge: If mind is a program, where is the self? AI puts into question not only whether the self is free-willed but whether there is a self at all.]”(5)

Sherry Turkle (circa 1990)

The Laws of Thought
It has been said of George Boole's algebra – the "reasoned and self-consistent system of high-school algebra" that Boole himself called The Laws of Thought, and which led to the Boolean algebra of inferential logic – that it was attained by "incomprehensible", "magical", and "quasi-mathematical" methods(6). It is somewhat comical that Boole's Laws of Thought(7) require a computational operator, whether human or machine, necessarily external to the system. This fact alone renders the self-consistency of Boole's system (and of all subsequent attempts at producing AI via logic) irrelevant, as an external user must be involved to do the real thinking of interpretation. That is not to say that Boole's algebra or Boolean algebra are irrelevant, only that they are in principle insufficient to accurately model thinking, or intelligence, or mind.
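The mechanical, interpretation-free character of Boolean inference can be shown in a short sketch. The program below verifies modus ponens by grinding through a truth table; what p and q actually mean remains entirely with the external operator, which is the point being argued:

```python
from itertools import product

def entails(premises, conclusion, n_vars=2):
    """Check semantic entailment by brute force: the conclusion
    must hold in every valuation that satisfies all premises.
    The machine only enumerates truth values; it attaches no
    meaning whatsoever to the propositions."""
    return all(
        conclusion(*vals)
        for vals in product([False, True], repeat=n_vars)
        if all(p(*vals) for p in premises)
    )

implies = lambda p, q: (not p) or q

# Modus ponens: from "p implies q" and "p", infer "q".
print(entails([lambda p, q: implies(p, q), lambda p, q: p],
              lambda p, q: q))
```

The function happily certifies the inference whether p means "Socrates is a man" or nothing at all; the interpretation never enters the calculation.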

In his book "The Emperor's New Mind", Roger Penrose suggests that strong AI is impossible in principle, because the processes of any artificial system necessarily occur within the bounds of a set (or set of sets) of logical rules. (Strong AI, also known as artificial general intelligence, is defined as the ability to perform general intelligent action, as opposed to some specific set of specialized skills, such as chess playing or medical diagnosis, both of which would be categorized as expert systems; strong AI is associated with the perception of consciousness, sentience, sapience and self-awareness.) This position mirrors the argument above; Penrose makes clear that an operator external to the rule set must be present in order to interpret the end result.

Top-down, controlled systems do not provide much opportunity for innovation, though they are fully understandable. Bottom-up, self-organizing systems do in principle allow for vast innovation, but at the expense of control and understandability. In my opinion the former defines an expert system, whereas the latter, via emulation of neuronal networks, may eventually give rise to strong AI.
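The contrast can be sketched with a single artificial neuron. Unlike the top-down expert system, no rule for the OR function is ever written down here; the behavior emerges bottom-up from repeated weight corrections. The learning rate and epoch count are arbitrary illustrative choices:

```python
def train_perceptron(data, lr=0.1, epochs=20):
    """Train a single perceptron by the classic error-correction
    rule: nudge each weight in proportion to its input and the
    prediction error. The 'rule' the network ends up obeying is
    encoded only implicitly, in the learned weights."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table of OR, presented only as examples, never as a rule.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The trade-off named above is visible even at this tiny scale: the behavior is correct, but nothing in `w` and `b` reads as an inspectable rule; understanding has been traded for emergence.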

Critically, a self-organized strong AI is just as unlikely to be able to understand itself as we are, and should thus be unable to provide us with any significant increase in understanding of its own, or of our own, intelligence. We would have learned nothing, or precious little, about consciousness, and would still be unable to pinpoint or define intelligence. It is perhaps noteworthy that this last thought is not the product of logical deduction, but of intuitive induction.

1a-e) N. Wiener, “CYBERNETICS – or Control and Communication in Animal and Machine”, (1948), pages 7 to 39, The Technology Press, M.I.T.


3) S. Turkle, “Artificial Intelligence and Psychoanalysis: A New Alliance”, (1988), Daedalus, Vol 117, number 1, Artificial Intelligence, pages 241-268, MIT press,

4) J. Plucker, “History of Influences in the Development of Intelligence Theory”, interactive history map, (2012), University of Indiana,

5) S. Turkle, “Artificial Intelligence and Psychoanalysis: A New Alliance”, (1988), Daedalus, Vol 117, number 1, Artificial Intelligence, pages 241-268, MIT press,

6) S. Burris (2000), “The Laws of Boole’s Thought”, University of Waterloo,