In this and the following episode we shall explore four assumptions:
a) For individuals living in modern developed economies, the frequency and duration of human-machine interaction are greater than that of human-human interaction and human-animal interaction combined.
b) Human behaviors are altered (conditioned) by human-machine interaction.
c) We are beginning to expect each other to behave in a more machine-like manner, particularly in work situations (predictable performance) but also in general public situations, such as in traffic.
d) Our biology is beginning to form psychological and physiological unions with machine-states and machines, respectively.
It has become a popular belief that human-machine interaction is ushering in a new age. Geologists hesitantly refer to the nascent epoch as the Anthropocene; philosophers refer to it as posthumanism and transhumanism. The term humankind will no longer strictly apply, but the term posthuman kind seems clumsy. By the marriage of art, fashion, and popular culture with technologies such as genetic engineering, synthetic biology, robotics, electronics, and information processing, humans are becoming posthuman; it has been hypothesized that Homo sapiens will evolve to become Homo evolutis. I am sure an apt term will be coined or adopted to describe the arrival of the first anthropogenic, self-targeted speciation event, and I think it likely that H. sapiens and H. evolutis will coexist for some period, as seems to have been the case with other species of the genus Homo.
Human-machine interaction (HMI) is almost ubiquitously referred to as Human-Computer Interaction (HCI) by the academic community, because the vast majority of research funding, and thus study, focuses on the interface between humans and electronic devices (including robots). In the current exploration, I have chosen to speak more broadly of HMI, which includes novel technologies as well as seemingly mundane ones, such as automobiles, telephones, televisions, powered wood saws, and bread toasters.
Let us begin by diving overboard to explore the mating of animals with machines, and vice versa.
The mechanization of logic
In the history of cybernetics, “the influence of mathematical logic is a recurring element. The philosophy of Gottfried Leibniz [circa 1680] revolves about two closely related concepts – universal symbolism and calculus of reasoning. The calculus of arithmetic lends itself to mechanization, progressing through the abacus and step reckoners [stepped drum] to the desktop computing machine [mechanical calculator], and on to the ultra-rapid computing machines [analog and digital computers] of the present day. The calculus ratiocinator of Leibniz contains the germs of the machina ratiocinatrix, the reasoning machine. Leibniz, like his predecessor Pascal, was interested in the construction of computing machines. So the same intellectual impulse which has led to the development of mathematical logic has at the same time led to the ideal, and eventually to the actual, mechanization of thought processes.”(1a)
Leibniz(2), along with Descartes and Spinoza, was one of the three great advocates of rationalism in the 17th century. His works anticipated modern logic and analytic philosophy. His philosophy reflected the scholastic tradition, in which conclusions were produced by applying reason to first principles or prior definitions rather than to empirical evidence (that is to say, conclusions were reached by thinking rather than by observation alone). Leibniz made major contributions to physics and technology, and anticipated notions that surfaced much later in philosophy, probability theory, biology, medicine, geology, psychology, linguistics, and computer science. He wrote works on philosophy, politics, law, ethics, theology, history, and philology (the study of historical texts).
The Staffelwalze (literally: stepped drum) is named after its operating mechanism. Invented by Leibniz in 1672, the device took more than 20 years to construct and was the first non-human calculator that could perform all four arithmetic operations.
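The machine had no multiplication table; it multiplied by repeated addition, turning its crank once per unit of each multiplier digit and shifting its carriage one decimal place between digits. A minimal Python sketch of that procedure (the function name and structure are mine, an illustration of the principle rather than a description of the mechanism's internals):

```python
def stepped_reckoner_multiply(multiplicand: int, multiplier: int) -> int:
    """Multiply the way Leibniz's machine did: repeated addition,
    with a carriage shift (x10) for each digit of the multiplier."""
    accumulator = 0
    shift = 1  # carriage position: units, tens, hundreds, ...
    while multiplier > 0:
        digit = multiplier % 10
        for _ in range(digit):              # one crank turn per unit
            accumulator += multiplicand * shift
        multiplier //= 10
        shift *= 10                         # shift the carriage left
    return accumulator
```

Multiplying 37 by 24 in this way takes 4 + 2 = 6 crank turns and one carriage shift, reducing multiplication entirely to the addition the hardware could already perform.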
Modern culture commonly uses the words cyberspace, cybernetic, and cyborg, but, as happens all too often to lay people and academics alike, the origin of these terms has receded from consciousness. Norbert Wiener and his colleagues coined the term cybernetics, and Wiener described the novel field of scientific study in his book of the same name.
“We have decided to call the entire field of control and communication theory, whether in machine or in the animal, by the name Cybernetics, which we form from the Greek κυβερνήτης or steersman. In choosing this term, we wish to recognize that the first significant paper on feed-back mechanisms is an article on governors, which was published by Clerk Maxwell in 1868, and that governor is derived from a Latin corruption of κυβερνήτης. We also wish to refer to the fact that the steering engines of a ship are the earliest and best developed forms of feed-back mechanisms.”(1b)
In the mid-to-late 1940s, the new science of cybernetics found a sympathetic home at MIT. It began as a multidisciplinary pursuit and has continued as such to date, attracting people from various fields of study, including but not restricted to mathematics, electronics, electrical engineering, physiology, biophysics, psychology, sociology, anthropology, economics, and philosophy.
“[Quite early-on, work began] on problems concerning the union of nerve fibers by synapses into systems with given overall properties. The technique of mathematical logic was used for the discussion of what were after all switching problems. […] The vocabulary of the engineers soon became contaminated with the terms of the neurophysiologist and the psychologist.”(1c)
Forty years later, Sherry Turkle (also at MIT) wrote “[Computers provided legitimation for a radically different way of seeing mind. Computer scientists had of necessity developed a vocabulary for talking about what was happening inside their machines, the internal states of general systems. If machine minds had inner states, surely people had them too.]”(3)
So biology has fed into engineering, which has fed back into biology.
In the summer of 1946, experiments were begun to elucidate aspects of the feedback phenomenon of the nervous system. “We chose the cat as our experimental animal, and the quadriceps extensor femoris as the muscle to study. We cut the attachment of the muscle, fixed it to a lever under known tension, and recorded its contractions isometrically or isotonically. We also used an oscillograph to record the simultaneous electrical changes in the muscle itself. […] The muscle was loaded to the point where a tap would send it into a periodic pattern of contraction, which is called clonus in the language of the physiologist. We observed this pattern of contraction, paying attention to the physiological condition of the cat, the load on the muscle, the frequency of the oscillation, and its amplitude. These we tried to analyse as we should analyse a mechanical or electrical system exhibiting the same pattern of hunting. We employed, for example, the methods of MacColl’s book on servo-mechanisms.”(1d) These experiments led to the physical implementation of cybernetic (feedback) theory, which took the form of a phototropic mechanism.
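The “hunting” that servo-mechanism theory analyzes can be illustrated with a toy discrete-time feedback loop: a controller that corrects toward a target using a delayed measurement settles smoothly when its gain is low, but breaks into sustained, growing oscillation when the gain is too high. This is an illustrative sketch under my own assumed names and parameter values, not the original servo analysis:

```python
def simulate_feedback(gain, delay, steps=60, target=1.0):
    """Discrete-time servo loop: each step applies a proportional
    correction based on a position reading that is 'delay' steps old.
    Delay plus excessive gain yields 'hunting' -- oscillation of the
    kind the experimenters compared to clonus."""
    history = [0.0] * (delay + 1)   # delayed measurements of position
    position = 0.0
    trace = []
    for _ in range(steps):
        error = target - history[0]          # act on a stale reading
        position += gain * error             # proportional correction
        history = history[1:] + [position]   # measurement pipeline
        trace.append(position)
    return trace
```

With a gain of 0.2 the loop converges on the target; with a gain of 1.5 and the same one-step delay, each over-correction arrives too late, and the system swings ever more widely about the target.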
“It has long been clear […] that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control; and that its input and output need not be in the form of numbers or diagrams, but might very well be, respectively, the readings of artificial sense-organs such as photo-electric cells or thermometers, and performance of motors or solenoids. With the aid of strain-gauges or similar agencies to read the performance of these motor organs and to report [to feed-back] to the central control system as an artificial kinaesthetic sense, we are already in a position to construct artificial machines of almost any degree of elaborateness of performance. [This development] has unbounded possibilities for good and for evil. For one thing, it makes the metaphorical dominance of the machines […] a most immediate and non-metaphorical problem, [while giving] the human race a most effective collection of slave-labourers. [The industrial revolution devalued the human arm by out-competing it with machines; in the developed world there is no rate of pay low enough for a pick-and-shovel labourer to sustainably compete with a tractor for excavation work]. The modern industrial revolution [information technology] is similarly bound to devalue the human brain at least in its simpler and more routine decisions.”(1e)
The Cyberplasm project and BTBI (brain-to-brain interface, pictured here) are examples of the current state of this nascent field of endeavor.
Why pursue artificial intelligence?
“What is intelligence?” is an age-old question(4), perhaps even the original philosophical query. Artificial intelligence (AI) is our most recent attempt to model the mind, in the hope of edging closer to self-awareness and self-realization. However, to date there is still no consensus on what intelligence is, where it is seated (if in the mind, then what and where is that?), or even whether mind exists at all.
Turkle has commented on the latter, saying “[Inherent in the prospect of artificial intelligence is a threatening challenge: If mind is a program, where is the self? AI puts into question not only whether the self is free-willed but whether there is a self at all.]”(5)
The Laws of Thought
It has been said of George Boole’s algebra that the “reasoned and self-consistent system of high-school algebra”, which Boole himself called The Laws of Thought, and which has led to the Boolean algebra of inferential logic, was attained by “incomprehensible”, “magical”, and “quasimathematical” methods(6). It is somewhat comical that Boole’s Laws of Thought(7) require a computational operator, either human or machine but necessarily external to the system. This fact alone renders the self-consistency of Boole’s system (and of all subsequent attempts at producing AI via logic) irrelevant, as an external user must be involved to do the real thinking of interpretation. That is not to say that Boole’s algebra or Boolean algebra are irrelevant, only that they are in principle insufficient to accurately model thinking, or intelligence, or mind.
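The mechanical half of the story is easy to demonstrate: Boole’s law of duality, x·x = x (satisfied only by 0 and 1), and derived laws of inferential logic such as De Morgan’s can be verified by exhaustively churning through the truth values. A small Python sketch (the function name is mine):

```python
from itertools import product

def laws_hold() -> bool:
    """Mechanically check Boole's law of duality (x*x = x) and
    De Morgan's laws over the two truth values 0 and 1, where
    negation is 1 - x, conjunction is &, and disjunction is |."""
    duality = all(x * x == x for x in (0, 1))
    demorgan = all(
        (1 - (x & y)) == ((1 - x) | (1 - y)) and
        (1 - (x | y)) == ((1 - x) & (1 - y))
        for x, y in product((0, 1), repeat=2)
    )
    return duality and demorgan
```

Note that the machine merely enumerates cases; deciding that a returned True certifies the laws is precisely the interpretive act that, per the argument above, must remain external to the system.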
In his book “The Emperor’s New Mind”, Roger Penrose suggests that strong AI is impossible in principle, because the processes of any artificial system necessarily occur within the bounds of a set (or set of sets) of logical rules. (Strong AI, also known as artificial general intelligence, is defined as the ability to perform general intelligent action, as opposed to some specific set of specialized skills, such as chess playing or medical diagnosis, both of which would be categorized as expert systems; strong AI is associated with the perception of consciousness, sentience, sapience, and self-awareness.) This position mirrors the argument above; Penrose makes clear that an operator external to the rule set must be present in order to interpret the end result.
Top-down, controlled systems do not provide much opportunity for innovation, though they are fully understandable. Bottom-up, self-organizing systems, by contrast, do in principle allow for vast innovation, but they do so at the expense of control and understandability. In my opinion the former defines an expert system, whereas the latter, via emulation of neuronal networks, may eventually give rise to strong AI.
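The contrast can be made concrete with the smallest bottom-up system there is: a single perceptron that learns a logic function from examples, never containing an explicit top-down rule for the task. A sketch under my own assumed names, learning rate, and epoch count:

```python
import random

def train_perceptron(samples, epochs=20, lr=0.1, seed=0):
    """Bottom-up learning: the weights self-organize from examples;
    no explicit rule for the task is ever written down."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - out
            w[0] += lr * error * x1     # adjust only from the error signal
            w[1] += lr * error * x2
            b += lr * error
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# logical AND, presented as examples rather than programmed as a rule
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

After training, the behavior resides in three learned numbers rather than in a legible rule, which is exactly the trade of understandability for self-organization described above, in miniature.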
Critically, a self-organized strong AI is just as unlikely to be able to understand itself as are we, and should thus be unable to provide us with any significant increase in understanding of its own, or of our own, intelligence. We will have learned nothing or precious little about consciousness and will still not be able to pinpoint or define intelligence. It is perhaps noteworthy that this last thought is not the product of logical deduction, but of intuitive induction.
1a-e) N. Wiener, “Cybernetics: or Control and Communication in the Animal and the Machine”, (1948), pages 7 to 39, The Technology Press, M.I.T.
3) S. Turkle, “Artificial Intelligence and Psychoanalysis: A New Alliance”, (1988), Daedalus, Vol. 117, No. 1, Artificial Intelligence, pages 241-268, MIT Press, http://www.jstor.org/stable/20025146
4) J. Plucker, “History of Influences in the Development of Intelligence Theory”, interactive history map, (2012), Indiana University, http://www.indiana.edu/~intell/map.shtml
5) S. Turkle, “Artificial Intelligence and Psychoanalysis: A New Alliance”, (1988), Daedalus, Vol. 117, No. 1, Artificial Intelligence, pages 241-268, MIT Press, http://www.jstor.org/stable/20025146
6) S. Burris, “The Laws of Boole’s Thought”, (2000), University of Waterloo, http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PREPRINTS/aboole.pdf
7) G. Boole, “An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities”, (1854), http://www.gutenberg.org/files/15114/15114-pdf.pdf