My last post on M. Mitchell Waldrop’s book The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal discussed Norbert Wiener’s discovery that feedback loops can give automated, mechanical systems purpose, and therefore allow the human mind to possess purpose while remaining utterly natural. Meanwhile, the groundwork for understanding how the brain could physically perform the calculations needed for purpose-driven action based on feedback was being laid by another brilliant mathematician, Claude Shannon, who was working on the Differential Analyzer, a primitive analog computer whose control circuits were built from relays:
Shannon could see that the relays were actually little switches, each of which could be automatically opened and closed by an electromagnet. But as he traced his way through the control circuits, he was struck by how closely the relays’ mechanical operation resembled the workings of symbolic logic, a quasi-mathematical system developed in the 19th century to model human reasoning. In the analyzer, each relay had a binary choice: it was either closed or open. And likewise, in logic, any given assertion had a binary choice: it was either true or false.
Furthermore, as Shannon quickly remembered from his elementary physics courses, relays combined in circuits could physically embody all the standard ways that assertions were combined in logic. If two relays were lined up in series, for example, they would embody the logical relationship and; that is, current would flow through the circuit only if relay A and relay B were closed. Opening either one would break the circuit. Similarly, if the two relays were wired up in parallel, they would embody the logical relationship or, meaning that the current would flow if either relay A or relay B was closed. And if a relay was wired up backward, so that it snapped closed when it normally would have opened, and vice versa, it would embody the logical operation of not.
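To make the analogy concrete, here is a minimal sketch (mine, not Shannon’s or Waldrop’s) that models the three relay arrangements as Boolean functions in Python, treating a closed relay as True and an open one as False:

```python
# A minimal sketch (mine, not Shannon's): each relay is a Boolean value,
# True meaning "closed" (current can pass) and False meaning "open".

def series(a: bool, b: bool) -> bool:
    """Two relays in series: current flows only if relay A AND relay B are closed."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Two relays in parallel: current flows if relay A OR relay B is closed."""
    return a or b

def inverted(a: bool) -> bool:
    """A relay wired backward: closed exactly when it would normally be open (NOT)."""
    return not a

# Truth tables for the two-relay arrangements.
for a in (False, True):
    for b in (False, True):
        print(f"A={a!s:5} B={b!s:5}  series(AND)={series(a, b)!s:5}  parallel(OR)={parallel(a, b)}")
```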
Indeed, as Shannon told Omni, the analogy worked perfectly–and no one had ever noticed it before…. “A Symbolic Analysis of Relay and Switching Circuits” is downright fun to read–and strangely compelling, given what’s happened in the six decades since it was written. In an aside toward the end, for example, Shannon points out that true and false could equally well be denoted by the digits 1 and 0, in which case the operation of the relays could be associated with what was then an arcane and unfamiliar form of arithmetic–namely, binary arithmetic. The digit 1 would mean that the relay was closed, while the digit 0 would mean it was open. Thus, Shannon wrote, “it is possible to perform complex mathematical operations by means of relay circuits. Numbers may be represented by the positions of relays and stepping circuits, and interconnections between sets of relays can be made to represent various mathematical operations.” As an illustration, he showed the design of a relay circuit that could add binary numbers.
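As a rough illustration of what Shannon meant (this is my own toy example, not the circuit from his thesis), the same and/or/not building blocks are enough to add binary numbers:

```python
# A toy illustration (mine, not the adder from Shannon's thesis): binary
# addition built from nothing but the and/or/not relationships above.

def xor(a: bool, b: bool) -> bool:
    # "A or B, but not both", expressed with only AND, OR, and NOT
    return (a or b) and not (a and b)

def full_adder(a: bool, b: bool, carry_in: bool) -> tuple[bool, bool]:
    """Add two bits plus a carry bit; return (sum_bit, carry_out)."""
    sum_bit = xor(xor(a, b), carry_in)
    carry_out = (a and b) or (carry_in and xor(a, b))
    return sum_bit, carry_out

def add_binary(x: list[bool], y: list[bool]) -> list[bool]:
    """Add two equal-length, little-endian bit lists by chaining full adders."""
    carry = False
    out = []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 (binary 011) + 5 (binary 101), least significant bit first:
print(add_binary([True, True, False], [True, False, True]))
# -> [False, False, False, True], i.e. binary 1000 = 8
```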
But more than that, Shannon noted, a relay circuit could make comparisons and thus take alternative courses of action according to specified circumstances, as in, “If the number X equals the number Y, then do operations P, Q, and R.” To illustrate the point, he showed that relays could be arranged to produce a combination lock that would open if and only if the user pressed a series of buttons in the proper order. Of course, the implications were much more general. Any desktop adding machine could do simple addition and subtraction; a relay circuit could decide. (It isn’t even very hard. The assertion “If A then B” turns out to be logically equivalent to “(Not A) or B.” So any circuit that can embody not and or can also embody if-then.)
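The equivalence is easy to verify directly. The sketch below (again mine, with a made-up button combination) checks that “if A then B” agrees with “(not A) or B” on every input, and shows the lock-style decision in miniature:

```python
# A quick check (my own sketch, not Shannon's circuit) that "if A then B"
# and "(not A) or B" agree on every input, plus a lock-style decision with
# a hypothetical button combination.

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

for a in (False, True):
    for b in (False, True):
        # Material implication is false only when A is true and B is false.
        assert implies(a, b) == (not (a and not b))

def lock_opens(presses: tuple[int, ...]) -> bool:
    """Open if and only if the buttons were pressed in exactly the right order."""
    combination = (2, 7, 1)  # hypothetical combination, chosen for illustration
    return presses == combination

print(lock_opens((2, 7, 1)))  # True
print(lock_opens((7, 2, 1)))  # False
```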
Shannon had shown that a relay circuit could physically embody that ineffable what-it-was in a human computer’s head that allowed her to know what mathematical operations to perform when, and whether to perform operation A or B depending on the result of operation C. That ability, in turn, is ultimately what makes a modern digital computer so much more than just an adding machine: it can work its way through a sequence of such decisions automatically. In a word, it can be programmed. And that’s why it is arguably the most influential master’s thesis of the twentieth century: in it, Claude Shannon laid the foundation for all modern computer design, nearly a decade before such computers even existed. In the years since, switching technology has progressed from electromechanical relays to microscopic transistors etched on silicon. But thanks to Shannon, microchip designers still talk and think in terms of a device’s internal logic.
It would be left to other researchers to apply Shannon’s insights to the brain itself.
For simplicity’s sake, they assumed that the brain as a whole could be modeled as a vast, interconnected electrical circuit, with neurons serving as both the wires and the switches. That is, each neuron would receive electrical input from dozens or hundreds of other neurons. And if the total stimulation passed a certain threshold, that neuron would then “fire” and send an output pulse to dozens or hundreds more. The result, which today would be known as a “neural network” model, was admittedly a gross oversimplification of reality. But McCulloch and Pitts argued that it did capture the abstract essence of brain physiology…
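In code, the McCulloch-Pitts idea is strikingly small. The sketch below is my own illustration, with arbitrary weights and thresholds, of a neuron that sums its inputs and fires only when the total crosses a threshold:

```python
# A minimal sketch of the threshold idea described above (the weights and
# thresholds are illustrative values I chose, not McCulloch and Pitts's).

def neuron(inputs: list[int], weights: list[int], threshold: int) -> int:
    """Fire (return 1) if the total weighted stimulation reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the neuron acts like an AND gate...
print(neuron([1, 1], [1, 1], threshold=2))  # 1
print(neuron([1, 0], [1, 1], threshold=2))  # 0

# ...and with a threshold of 1 it acts like an OR gate, tying brain circuitry
# back to the same switching logic Shannon found in the relays.
print(neuron([0, 1], [1, 1], threshold=1))  # 1
print(neuron([0, 0], [1, 1], threshold=1))  # 0
```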
B.F. Skinner and the other behaviorists had tried to make psychology “scientific” by reducing us all to the status of machines–blind, rigid, nineteenth-century-style machines. We were nothing more than walking, talking bundles of stimulus and response to be controlled and manipulated at will. But [Harvard’s] J.C.R. Licklider and George Miller had begun to sense the subversive possibilities of twentieth-century machines. Feedback systems were the stuff of everyday engineering, and yet they could embody “mental states” such as purpose and expectation. Electronic computing systems were little more than arrays of very fast switches, and yet they, too, could embody “mental states” in the form of data stored in (where else?) a memory. Computer programs were nothing more than lists of precisely defined rules that the machine followed to the letter, and yet as Turing had shown, they could generate results that were inherently surprising and unpredictable.
The number of misconceptions about the brain and human thought that the early computer scientists overturned is nearly endless. It is hardly an exaggeration to say that before computers, we lacked not only the knowledge of how the brain operated, but even the language with which to describe it.