Kenneth Craik – A (Proto) Cybernetic Explanation
Craik: “an essential quality of neural machinery is that it expresses the capacity to model external events.” Craik: “Thus our brains and minds are part of a continuous causal chain which includes the minds and brains of other men and it is senseless to stop short in tracing the springs of our ordinary, co-operative acts.” 2
Recent assessments of the cybernetic era often stress that the key members of the cybernetic movement were engaged in military research during World War II, and that it was in that context that the relevance of feedback and servomechanisms became paramount. It is undoubtedly the case that any mobile servomechanism can be trained to scan and attack a target – just as any cybernetic tortoise or maze-running mechanical rat can be fitted with a gun or bomb. The degree to which the cybernetic moment was built on an “ontology of war” cannot be overestimated; it is equally important, however, to recognise that the builders of the cybernetic devices studied in this text were – before and after the war – neurophysiologists leading in the field of brain research. In this capacity they developed machines designed to read and affect the nervous system. The British cyberneticians Kenneth Craik, William Ross Ashby and William Grey Walter were all experimental psychologists, convinced that machines could be built which model the mind and nervous system. Such machines could express mind at its most rudimentary level: as an organism which feeds information through its system and which affords adaptation to its immediate environment. The cybernetic moment might be seen, therefore, as a meeting of a.) the contingencies of war with b.) the conception of mind as coextensive with the system it inhabits. It is only after WWII that these two discourses overlap and the principle of a weapon seeking a target is translated into the principle of an organism seeking purpose. This meeting point was articulated by Kenneth Craik in the early 1940s. The implication arising from the premise that the organism seeking purpose can be modelled by a machine is that the discourse of the organism is also the discourse of the machine. This conflation did not arrive with the moment of cybernetics, however.
Samuel Butler had been amongst those who recognised the implications of the advent of the “vapour engine” for the discourse of the machine and the discourse of evolution (see chapter two), and Alfred Russel Wallace had observed that the operations of the vapour engine and natural selection were in principle the same. Long before the advent of cybernetics, in 1930, the experimental behaviourist Clark Hull, a builder of “thinking machines”, maintained: “it should be a matter of no great difficulty to construct parallel inanimate mechanisms… which will genuinely manifest the qualities of intelligence, insight, and purpose, and which will insofar be truly psychic”. The premise of organism–machine equivalence was built into the research cultures of experimental psychology. The three British neuroscientist-cyberneticians William Ross Ashby, Kenneth Craik and W. Grey Walter all researched wartime control systems (anti-aircraft fire-control systems).3 All three were hands-on engineers and all were at the forefront of experimental brain research. As brain researchers, they sought to extend Pavlov’s conditioned reflex through experimentation with servomechanisms, or feedback machines. Grey Walter and Kenneth Craik both worked at the conditioned reflex laboratory at Cambridge, where Craik was based, and Craik was a frequent visitor to the Burden Neurological Institute, Bristol, where Grey Walter was the director. Here Craik collaborated with Grey Walter, using the EEG machine which Walter had developed to research the brain’s scanning function in relation to its production of particular brain waves (these, being the first identified, were called “alpha waves”). It was Craik who first suggested the relation between scanning and feedback to Grey Walter and encouraged the construction of a servomechanism which would combine the elements of scansion and feedback. This resulted in the Machina Speculatrix, or cybernetic tortoise (1947), which will be described in some detail in chapter seven.
Kenneth Craik died in a cycling accident in 1945 at the age of 31, and the unfinished The Mechanism of Human Action (1940)5 was only partially published years after his death.6 His philosophical work The Nature of Explanation (1943) was, however, published in his lifetime, as was his influential paper Theory of the Human Operator in Control Systems (1943). The Mechanism of Human Action and Theory of the Human Operator in Control Systems anticipate many of the key ideas concerning negative feedback and purpose which would later be discussed in Norbert Wiener’s Cybernetics and at the Macy conferences on cybernetics in New York (1946-1953).7 In 1947, following a visit to the Burden Neurological Institute by Norbert Wiener, Grey Walter noted: “We had a visit yesterday from a Professor Wiener, from Boston. I met him over there last winter and found his views somewhat difficult to absorb, but he represents quite a large group in the States… These people are thinking on very much the same lines as Kenneth Craik did, but with much less sparkle and humour”. In The Nature of Explanation Craik argues for a hylozoist conception of mind and consciousness, which is to say that there is no distinction between mind and matter.8 To underline the interconnectedness of mind and matter (hylozoism), Craik argues for a particular conception of causation and for a particular function for the model as a form of immanent analogy. Central to Craik’s argument is that model making is constitutive of mind: that an essential quality of neural machinery is that it expresses the capacity to model external events.9 Despite the differences in style identified by Grey Walter, there is a remarkable similarity between Wiener’s and Craik’s respective systems.
Because both derived their work on feedback systems from their wartime work on anti-aircraft predictors, and because both extended that war work into universal principles of organisation, Craik has been referred to as “the English Norbert Wiener”. In The Mechanism of Human Action (c.1940) Craik recognises negative feedback as the regulator of “equilibrium” within the organism or servomechanism; the “stable state” is regulated by shorter and longer feedback loops;11 the re-cycling of information provides the conditions in which an organism or servomechanism “learns” or adapts. This approach is the central foundation of fellow British cybernetician Ross Ashby’s work and is descriptive of Ashby’s law of requisite variety (as we will see in chapter fifteen). To assist the conceptualisation of a learning machine Craik draws on the American behaviourist Clark Hull’s “conditioned reflex models”. Clark Hull had compiled a series of “idea books” (1929-52) in which he devised mechanical and electro-chemical automata which worked on the Pavlovian principle of trial and error conditioning (Pavlov’s Conditioned Reflexes was published in English translation in 1927). Hull also derived his own mathematical system to describe the operations of these automata and wanted to apply the rules of the conditioned reflex as computational units within his machines. Hull aimed to build a conditioned reflex machine that would demonstrate intelligence through contact with its environment. The devices were subjected to a “combination of excitatory and inhibitory procedures and would exhibit an array of known conditioning phenomena”.12 That is, the machines would “learn” to overcome obstacles in order to achieve a particular goal. Hull methodically analysed how instances of purpose and insight could be derived from the simple interaction of elementarily conditioned habits.13 “I feel”, stated Hull, “that all forms of action, including the highest forms of intelligent and reflective action and thought, can be handled from purely materialistic and mechanistic standpoints.”14 In The Mechanism of Human Action, Craik15 seeks to extend the promise of Hull’s experiments. He does this by making a connection between negative feedback and order within an organism or servomechanism. Craik is systematic, outlining the principles of automatic regulation (homeostasis) in the organism and servomechanism and showing how flexibility within the organism or servomechanism is necessary for its maintenance and adaptation. Craik applies synthetic principles to living systems, in which sensors, transmitters and systems of amplification operate within the organism (all serving to extend the capabilities of that organism). Craik discusses brain mechanisms in terms of “levels”16 and effectors;17 and to frame the overall theory Craik adopts the term “cyclical action”.18 Craik makes a concerted effort to come to terms with the role of negentropy – although the explicit relation between information and negentropy is not made by Craik (that support would be provided by Wiener and Shannon in the late 1940s), Craik uses the terms “up hill reactions” (which use energy) and “down hill reactions” (which use little or no energy) to account for the uneven distribution of energy within a system. Craik prefigures negentropy in the following terms: “Living organisms were, with few exceptions, the first devices to use a downhill reaction – such as the combustion of carbohydrates – to provide them with a store of energy by which to drive a few uphill reactions for their own benefit. This does not mean, of course, that the living organisms live contrary to the second law of thermodynamics and can prevent the gradual degeneration of energy; it merely means that they have means of storing external energy in potential form for driving some local uphill reaction.
If we consider the whole picture, the reaction is, as far as we know, always downhill on average; but some parts of it may go uphill by virtue of the energy derived from other parts.”19 Here Craik is speaking within the discourse of “equilibrium” – which, in itself, does not necessitate homeostasis, although homeostasis in a system is an expression of equilibrium. The basic, down-to-earth principles for Craik’s mechanism-organism are: (1) the storage of energy taken from outside the organism or servomechanism – food converted to glucose and carried through the system of an organism, or the energy stored in the battery of the servomechanism;
(2) the controlled liberation of energy. The second – to approach a “common sense” definition of life – requires: (a) a sensory device for detecting and countering the disturbance, (b) a computing device for determining the right kind of response, (c) an effector or motor mechanism for making the response.21 In his philosophical work, The Nature of Explanation (1943), Craik argues for an anti-Cartesian, non-vitalistic hylozoism – which is to say that mind is a function of matter, rather than an essence opposing matter. It includes the chapter Hypothesis on the Nature of Thought,22 which has its roots in the pragmatist philosophy of John Dewey and in Clark Hull’s practice of building “thinking machines”. Craik describes the nervous system as organised stochastically: the signals ‘take the path of least resistance’. This process works like a telephone exchange, says Craik; he also likens the human brain to a computer (the model being Douglas Hartree’s differential analyser (1934)). These are analogies, but they are physical working models: each “works in the same way as the process it parallels”. The machine does not have to physically resemble the thing it parallels; “it works in the same way in certain essential respects” (in general systems theory this would be termed “homology”). Craik is clear, then, that such machines are “analogous” in a very precise sense. One could inventory the countless instances in which a model is not like the thing it parallels; what is significant is the principle which underlies the similarities, namely their tendency to organise stimulus into symbolic order. For Craik, “significance” (signification) is an essential element of human experience. Significance is the relatedness of things.
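Craik’s three requirements – sensor, computer, effector – together describe a negative-feedback loop of the kind his servomechanisms embodied. The following is a minimal sketch, my own illustration rather than anything Craik wrote; the `gain` parameter and numerical values are arbitrary assumptions chosen only to show the loop settling into a “stable state”:

```python
def regulate(state, setpoint, gain=0.5, steps=20):
    """Drive `state` toward `setpoint` by negative feedback.

    Each cycle runs Craik's triad of components:
    (a) sensor  - detect the disturbance (the error),
    (b) computer - determine the right kind of response,
    (c) effector - make the response, altering the state.
    """
    history = [state]
    for _ in range(steps):
        error = setpoint - state   # (a) sensory device detects the disturbance
        response = gain * error    # (b) computing device scales the correction
        state += response          # (c) effector applies the correction
        history.append(state)
    return history

# e.g. an organism or servomechanism disturbed away from its set point:
trace = regulate(state=10.0, setpoint=20.0)
```

With a gain below 1 the error shrinks on every cycle, so the trace converges on the set point without overshooting – the “equilibrium” Craik describes as regulated by feedback.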
The justification of causality must be its trial by experiment.23 For Craik, “all propositions carry as it were the right to apply to something objectively real”.24 Once one accepts the possibility of symbolism one must accept that symbols can represent alternatives, which experiment decides between; in this sense experiment becomes the arbiter of what is the case. Craik draws on the experimental hypothesis of Helmholtz, and supports the idea of “causal interaction in nature”. Admittedly, one cannot trace causation back to a first cause, but this is not a sufficient reason to doubt cause as such (as did Berkeley and Hume). The model works in the same way as the thing it parallels, and in so doing it goes through three phases: 1) translation of external process into symbols; 2) inference, which is the arrival at other symbols through reasoned deduction; 3) re-translation into external process (such as building and predicting). Some machines demonstrate this triadic ability, such as anti-aircraft predictors25 and calculating machines.26 Craik again cites Clark Hull’s models, which respond to altered systems; from here Craik posits that an essential quality of neural machinery is that it expresses the capacity to model external events. The principle underlying the similarity of these machines is more important than superficial analogy. The brain works like a telephone exchange, certainly, but the brain’s similarity to the telephone’s switching mechanism is only one way to understand the operations of mind. The structural principle underlying the similarities is Craik’s triad of translation, inference and re-translation.27 Craik applies this triad to brain function as: 1) the translation of external events into neural patterns through the stimulation of the sense organs; 2) the interaction and stimulation of other neural patterns (association); 3) the excitation of effectors or motor organs.
Craik notes that to signify – to make a model in your head of something which could be actualised – has a close relation to entropy, because signification involves the conservation of energy: it adds to the store of “down hill” actions. For Craik his own scheme “[…] would be a hylozoist rather than a materialistic scheme. It would attribute consciousness and conscious organisation to matter whenever it is physically organised in certain ways.” The machines Craik cites are an expression of mind, an expression of an ordering structure: they express negentropy. This notion of thinking would be adopted by fellow British cyberneticians. When Ashby demonstrated the self-regulating machine the homeostat at the Macy conferences on cybernetics, he would describe it in Craikian terms, a position that would cause some consternation amongst the delegates (see chapter *). Craik further develops the notion of the model with a very specific reading of the function of metaphor. Things, in their coming into being, mime the structures that run through reality. This is not to say that reality is an inert exterior object which is given significance (code is not written into or onto reality), but rather that reality is emergent within the process of signifying itself. One makes a distinction, and the difference establishes a relation. Thought, in the action of modelling, does not copy reality, because “our internal model of reality can predict events which have not occurred”. This predictive ability “saves time, expense and even life”;28 the ability to predict is “down hill” all the way – it is negentropic. It begins with the functions of homeostasis in the organism and extends to conscious purpose – this is the position adopted by the British cyberneticians and endorsed by Gregory Bateson and Warren McCulloch. The modelling of reality is part of a larger system, which introspective psychology and analytical philosophy have been unable to address.
The degree of organisation which goes on at a non-conscious level is, however, evidenced in neuropathological experiments, in which purposive activity is highlighted. We generally pass over such activity as natural because greater mechanical complexity often leads to greater simplicity and coordination of performance; introspective psychology and analytical philosophy pay little attention to such processes as constitutive of thought (we will see in chapter * that Gregory Bateson and Warren McCulloch’s cybernetic critique of Freudianism was established on this premise). For Craik the processes of reasoning are fundamentally no different from the mechanisms of physical nature itself. Neural mechanisms parallel the behaviour and interaction of physical objects; such processes are suited to imitating the objective reality they are part of, in order to provide information which is not directly observable (to predict and to model).29 Craik compares the performance of an aeroplane with that of a pile of stones. The aeroplane is more complicated, but its performance is more unified. If the parts of the aeroplane had been dropped into a bucket, the atomic complexity would be high but the simplicity of performance would be nil.30 Once the plane is built, however, there is an atomic and relational complexity which increases the possibilities of performance. The mind, in modelling reality, takes clues in perception – there is no obligation to decide whether such clues are conscious interpretations or automatic responses to reality which express atomic and relational complexity. For Craik “all perceptual and thinking processes are continuous with the workings of the external world and of the nervous system”, and there is no hard and fast line between involuntary actions and conscious thought.
Craik here approaches a position close to Bateson’s, stressing the “continuity of man and his environment” […] “...our brains and minds are part of a continuous causal chain which includes the minds and brains of other men, and it is senseless to stop short in tracing the springs of our ordinary, co-operative acts”; and again Craik stresses that “man is part of a causally connected universe and his actions are part of the continuous interaction taking place.”31 The process of experimentation, and the operations of servomechanisms, are important for this generation of philosopher-engineers because experimentation serves as the arbiter of the model proposed by the processes of mind, and also because servomechanisms operate at the point where significance emerges. They provide models for a future stage of development, the conscious machine. This is the future that Samuel Butler modelled from the vapour engine and the future that Grey Walter would go on to model in the cybernetic tortoise. Experimentation with servomechanisms is, for Craik, Ashby and Grey Walter, an expression of the more complex structures that underlie them. In this respect Ross Ashby’s homeostat and Grey Walter’s tortoise are a result of the discourse of Kenneth Craik and of a particularly experimental, hylozoist approach to neuroscience and behaviour. This involved rigorous mathematical and philosophical theorisation alongside the construction of devices in the metal, and alongside the analysis of what these machines express. If these creatures think, one might ask, how do they think? Perhaps the machines themselves suggest – in their unconscious, indifferent relation to us, in their blind insistence on passing through our space, making demands on our matter – that we don’t actually think in the way we think we think.32 Kenneth Craik left the path open for his colleagues Ross Ashby and Grey Walter to build actual performative models which challenged fundamentally what we think thinking is.