Advances in science and technology were pivotal in the Second World War. The bombs dropped on Japan, born underneath a sports stand at the University of Chicago as part of the Manhattan Project. The cracking of the Enigma code, in the stately British country house Bletchley Park. These projects and their protagonists are rightly famous. The work they did changed the course of the conflict, and by implication everything that came afterwards as well.
But a less well-known project was taking place on the British coast at the same time, one with implications for present-day challenges. The lesson it teaches about collaborative reasoning still echoes today. For me, it underscores the strategic value of having a population both able and willing to think clearly. To process information with discipline, in the service of a larger communal goal.
A less glamorous success story
British radar surveillance outperformed German systems, despite the inferiority of the underlying radar technology. The genesis of this counterintuitive result sits at the Denge air force base in England. There, like a forlorn family of modern-day Easter Island statues, a set of concrete acoustic mirrors stare resolutely out to sea. Giant concrete dishes, like a skate park that got flipped vertical in an earthquake. These experimental mirrors were designed to focus and amplify the sound of aircraft flying across the English Channel, to give early warning of incoming air raids.
They were never used operationally. The air speed of military planes increased rapidly, and the system was never able to deliver timely warning on the basis of sound. But the research project developed clever ways of combining signals from an array of listening devices, and trained specialists who could run that system. This aspect of the project was transferred into the more capable radar technology that was brand new at that time. Even if the British radars were a bit rubbish compared to their continental counterparts, they were plugged into a system of people who could make the most of the data they received. (I wouldn’t be surprised if some of the mathematics now lives on in large telescope arrays.)
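To give a flavour of what “combining signals from an array” can mean, here’s a minimal sketch of delay-and-sum beamforming, the simplest textbook way to fuse several listening devices so that sound arriving from a chosen direction reinforces itself while uncorrelated noise tends to cancel. I’m not claiming this is the method used at Denge; the geometry, the sample rate and the NumPy implementation are illustrative assumptions of mine.

```python
# A toy delay-and-sum combiner: shift each device's recording to compensate for
# the extra distance a wavefront travels to reach it, then average. Signals from
# the chosen direction line up and reinforce; uncorrelated noise partly cancels.
# All numbers here (speed of sound, sample rate, geometry) are illustrative.
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second in air, roughly
SAMPLE_RATE = 8000      # samples per second (arbitrary choice)

def delay_and_sum(signals, positions, direction):
    """Combine an array of recordings, 'listening' towards `direction`.

    signals   : array of shape (n_devices, n_samples), one recording per device
    positions : array of shape (n_devices, 2), device coordinates in metres
    direction : 2-vector pointing from the array towards the source
    """
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    aligned = []
    for sig, pos in zip(signals, positions):
        # A device further along `direction` hears the wavefront earlier, so we
        # delay its recording to line it up with the others.
        delay = int(round(np.dot(pos, direction) / SPEED_OF_SOUND * SAMPLE_RATE))
        aligned.append(np.roll(sig, delay))  # np.roll wraps at the ends; fine for a toy
    return np.mean(aligned, axis=0)
```

Phased-array receivers, including some radio telescope arrays, still lean on variations of this align-and-average idea - which is presumably the lineage the parenthetical above is gesturing at.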
There’s an important lesson here, which we’ll return to. But first, let’s zoom out for a slightly broader view of time.
A brief history of our times
A while ago - say 500 years - most of us lived in some approximation of Arcadia, an imagined pastoral utopia. Before the industrial revolution accelerated the pace and density of everything, life on the land was the norm. There was a connection to what was local. The horizon of experience was constrained, without widespread access to trains, cars or planes. Alien ideas that might provoke the inner turmoil of trying to reconcile observation with belief were mostly out of reach.
Absent extensive data about the wider world, one source of understanding was a deity, whose edicts were interpreted by clergy and broadcast through a network of churches.
Then, along came the industrial revolution and its philosophical partner, the Enlightenment. A chicken and egg. A recursive trampoline of developments. New technology built factories that drew people off the land and into cities. In lockstep, the increasing population density of cities led to intellectual interchange that accelerated the development of technology. During this period of history, the concept of a deity began to look silly, and was rejected. Individual humans were endowed with the power to learn about the world on their own. Now that it was possible, it made sense for each individual to build knowledge for themselves. Knowledge built with your own two hands was knowledge you could trust. It was counterproductive to accept dogma from an organisation whose motives suddenly began to look suspect.
This historical moment arrived at the tail end of a period where perhaps all human knowledge could fit inside a single brain. One copy of everything that was known inside every sufficiently educated mind. That claim sounds exaggerated, but load the context into your imagination: pre-calculus, pre-Newtonian mechanics, pre-germ theory of disease. Very roughly, the start of the Enlightenment marks the planting of seeds that have grown and sprawled into a jungle full of knowledge. More knowledge than we can hope to cram back into any one head. We’re now on the cusp of having to worry about whether we can cram all knowledge even into a single society. But back then, they were just seeds, small enough to be held in a single mind.
Actually, the image of slowly growing jungle plants understates the contemporary scenario. Today, we’re struggling even to stand upright in the face of what is now a shockwave of information. From little things, big things grow, and sometimes they grow shockingly big, shockingly fast.
Explosive growth
On 30 October 1961, almost 19 years after the first self-sustaining artificial nuclear chain reaction took place under a sports stadium in Chicago, the 50-megaton Tsar Bomba was detonated over Sukhoy Nos on Novaya Zemlya, in the Soviet Union. Boy, the kids grow up fast. The most powerful nuclear weapon in history created a small artificial sun. The shockwave travelled far enough to break house windows in Norway, hundreds of kilometres away. And the released radiation still echoes as a carbon-14 spike in carbon dating measurements. A breathtaking event. A result both of bringing together a critical mass of technical thinkers, and of bringing together a supercritical mass of fissile isotopes.
There’s a satisfying symmetry to this image. The isotopes and the thinkers. A uranium-235 nucleus absorbs a neutron, becomes unstable and splits, emitting further neutrons, and the process continues. And on the other hand, our minds absorb ideas which spark new ideas, which we then emit. If we have enough fissile atoms or fissile minds in close proximity, we can end up with powerful chain reactions. The internet is bringing fissile minds into closer and closer proximity all the time.
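To put a number on the “critical mass” idea: if each fission (or each idea) triggers k further events on average, then after n generations you have roughly k to the power n events. The particular values below are arbitrary choices of mine; the point is how sharply the behaviour flips around k = 1.

```python
# Toy chain-reaction arithmetic: if each event triggers k further events on
# average, then after n generations there are roughly k**n events. The values
# of k and the generation counts below are arbitrary illustrations.
for k in (0.9, 1.0, 1.1):
    counts = [k ** n for n in (0, 20, 40, 60, 80)]
    print(f"k = {k}: " + ", ".join(f"{c:.3g}" for c in counts))
# k < 1 fizzles out, k = 1 merely sustains itself, and even k = 1.1 has grown
# by a factor of roughly two thousand after 80 generations. That sharp flip
# around k = 1 is what "critical" means.
```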
What happens when we reach some analogous critical mass of information? When we end up with a sufficient density of ideas bouncing around and setting off further reactions, what will transpire? Ray Kurzweil writes in detail about his prediction of an upcoming singularity, where artificial intelligence outstrips our capacity to comprehend it. After that, the machines are off to the races, galloping off over the horizon of our cognitive capabilities without us. In many ways, we’re already long past the point where we can individually comprehend the waves of information reaching us each day. In light of this mock intelligence, bigger than any individual, I ponder the purpose of religion. I wonder whether the development of religious practice was, rather than being insultingly naive, an extremely forward-thinking endeavour. God never existed, but may soon. We already have to accept substantial information about the world on the basis of faith. On the basis of trust in the technoclergy who relay the news to us. The communicators and popularisers who transmit and translate the truth that comes out of the labs. We have no choice, because we can no longer hold the world in our own hands. We can no longer fit all knowledge in our minds. Knowledge and understanding are once again becoming a communal responsibility.
So I wonder whether religion, as a mechanism for grappling with knowledge that we can’t hold in our own hands, was a technology ahead of its time. It’s fascinating that throughout the history of technology, designs have been developed far in advance of any possibility of them being built.
Building stuff we don’t need
One example is the Fourier transform. Today it underpins virtually all digital communication and sound recording, though it was described in 1807, well before those fields were imagined. It’s a technique for calculating with waves, but you could think of it like this: it’s a method for looking at a fully cooked dish and determining the ingredients and cooking instructions. It does this for the world of waves, though, not the world of food. Radio waves, sound waves, waves of electrical power. In fact, it was originally developed to understand how heat spreads through solid bodies - the mathematics of an industrial age of furnaces and engines. Waves are everywhere, and being able to understand their properties is immensely useful.
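If you want to see the ingredients metaphor in action, here’s a minimal sketch in Python using NumPy’s FFT (the fast algorithm we’ll get to in a moment): mix two tones together, then ask the transform which frequencies went into the mixture. The 50 Hz and 120 Hz tones, their amplitudes and the one-second duration are arbitrary choices of mine.

```python
# Bake a "dish" from two known ingredients, then recover them with the Fourier
# transform (via NumPy's fast implementation). All the numbers are arbitrary.
import numpy as np

sample_rate = 1000                                   # samples per second
t = np.arange(0, 1, 1 / sample_rate)                 # one second of timestamps
dish = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(dish)                         # the transform itself
freqs = np.fft.rfftfreq(len(dish), 1 / sample_rate)  # frequency of each bin, in Hz
amplitudes = 2 * np.abs(spectrum) / len(dish)        # rescale to the tone amplitudes

# Prints the two ingredients: a 50 Hz tone (amplitude ~1.0) and a 120 Hz tone (~0.5).
for freq, amp in zip(freqs, amplitudes):
    if amp > 0.25:
        print(f"ingredient: {freq:.0f} Hz, amplitude {amp:.2f}")
```

Run on a real recording instead of a synthetic one, the same few lines will tell you which pitches make up a chord, or where the mains hum is hiding.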
As originally published, the method demanded an overwhelming amount of pencil-and-paper calculation to compute the transforms we rely on today. So much dreary calculation that the description of the method was really just a transcript of a dream. The details were all there. The proofs were all there. The vision was there, fully formed - yet out of reach in practical terms.
But the transform burst in on the real world in 1965, when James Cooley and John Tukey rediscovered a fast method of calculation (previously described by Gauss - bloody Gauss got to everything first…). This method was not only fast, but amenable to implementation on digital computers. Just as cheap cars transformed leisure travel from a fantasy into reality, this new algorithm brought wave play within reach of the average citizen. Out of the smoky steampunk world which spawned it, and into the gentle glow cast by cratefuls of willing electrons. The Fourier transform had arrived.
Further examples of fantasy technology abound. Ada Lovelace wrote the first computer programme around 1843, for a machine that did not yet exist. The logic used to shepherd the electrons in meaningful ways was published by George Boole in 1847. Alonzo Church described lambda calculus in 1936.
And yet the first general-purpose electronic digital computer - ENIAC - did not exist until 1945. This clunky behemoth was the first inkling that any sane person had that such a computer could exist. The first demonstration that it would be viable. The first concrete proof that computing was more than a theoretical fantasy. Yet virtually all the important thinking about computation had already been done. (And it was later still, in the late 1970s, that the population at large first had any real sense of the existence of computers.)
Humanity’s enthusiasm for building systems that can’t be used is amazing. And so I wonder if religion is such a system: a system for acting according to advice from some higher intelligence. An intelligence which never existed, but may soon appear. Perhaps the predictions of the second coming are just a small accounting error: off by one, because it’s rather the first coming that’s on its way. Or perhaps it’s already here - because our collective intelligence already surpasses what any individual can hope to understand.
More to be done
We have a lot of work to do. Designing the god that we want is our responsibility. And making sure that god is delivered according to spec is also our responsibility. The systems that teach us about the world are of our own creation, and keeping tabs on whether they are trustworthy falls on us.
The components have been delivered. The internet connections are there. The people are there. The democratic process is there. We have to glue and screw all the bits together to make a machine that does something worthwhile.
Recall our excursion to the acoustic mirrors at Denge: a system works well when it’s made up of people trained in effective techniques for processing information. That training involves not only interpreting and relaying information, but also becoming sensitive to spurious information and rejecting it. Riffing on the example of the acoustic mirrors, or the later Chain Home radar system which evolved from them: could the data really be telling us that aeroplanes are flying under the ocean, or is it more likely that the sensor is misaligned in the mirror? Or, if one of the mirrors reports a signal, but the vast majority give contradictory information, what should we do? Reject it? Investigate? Act immediately in case it’s real?
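Here’s a toy sketch of that triage logic - and only a toy; it is not how the wartime filter rooms actually operated. It shows one way to code up “trust the majority, but flag the dissenter for a second look”, with invented bearing readings, an invented tolerance, and a plain median standing in for the consensus.

```python
# A toy triage of reports from several stations: find the consensus bearing and
# flag anything wildly different for investigation, rather than acting on it
# blindly or silently discarding it. The readings, the 10-degree tolerance and
# the use of a simple median are invented for illustration (and compass
# wrap-around at 360 degrees is ignored).
from statistics import median

def triage_reports(bearings_deg, tolerance_deg=10.0):
    """Split bearing reports into a consensus group and outliers to investigate."""
    consensus = median(bearings_deg)
    agree, investigate = [], []
    for bearing in bearings_deg:
        (agree if abs(bearing - consensus) <= tolerance_deg else investigate).append(bearing)
    return consensus, agree, investigate

# Five stations roughly agree; a sixth reports something very different.
consensus, agree, investigate = triage_reports([42.0, 44.5, 41.0, 43.2, 45.1, 170.0])
print(f"consensus bearing: about {consensus:.1f} degrees")
print(f"{len(agree)} reports agree, {len(investigate)} flagged for a second look")
```

The point is not the arithmetic but the posture: the dissenting report is neither obeyed nor thrown away; it is routed to a human for judgement.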
We face analogous challenges today, with multiple stories about the world vying for our attention and endorsement. We’re faced with the important task of determining what to pay attention to and what to reject, and we’re not trained for the task. We’re tasked with staffing this complex system called democracy. We have to receive the incoming signals and boil them down into a democratic incident report called an election, and then act on that report by enacting laws. The quality of those laws, and of the world that we live in, depends on our ability to distinguish signal from noise.
We need to build our capacity for communal reasoning. Perhaps we can expect more of ourselves, and of each other.
The social contract
At birth, we enter a pre-existing world. A world of conditions that we did not create or choose for ourselves. Light and air. Separation from our mother. The sudden and shocking obligation to keep ourselves fed and warm. We enter into a society that we didn’t invent. We become citizens, and we become party to the social contract. Things that we don’t understand until much later in life.
The social contract is a piece of technology. It frees us from autocratic power. It describes how society can function without an autocrat. Knowing how the system can work without the autocrat, we can get rid of the autocrat. But the way it works is a bit odd: we gain some freedoms by giving up others. We exchange the freedom to do whatever the fuck we want for the freedom from an autocrat telling us to do whatever the fuck they want. We trade rights and responsibilities in a way that moves us into a better overall position. We allow ourselves to be subsumed into a whole that’s far greater than the sum of its parts.
A similar contract exists for our beliefs. We gain the right to hold whatever beliefs we want, in exchange for the obligation to form those beliefs in valid ways.
Thought policing
Does it sound too close to thought policing for your comfort? It’s not. I’m advocating exactly the opposite. By agreeing not to hold unjustified beliefs, we gain freedom from anybody telling us what to think, just as with the social contract. As it stands, we have a panoply of deranged conspiracy theories floating around that will ultimately grant the government the social licence to crack down and start telling people what is and is not OK to think and discuss. Out of necessity. If we don’t work out an intellectual social contract and stick to it, we’ll end up with an autocracy of thought.
So how can we have a system that describes how to think, without dictating what to think? We can do it by having a system that describes the process, not the outcome. For hundreds of years, in fact, we’ve had a working example of such a system: the justice system. Put aside, please, your feelings about the justice system: focus on the intent of the system, not on any particular grievances with personal outcomes that you might have. The justice system is a system for coming to a conclusion about what is probably true. It does not specify what the decision should be, but instead the process by which a valid decision can be reached.
Where outcomes are contentious, appeals move up a chain of verifications. The appeals become appeals of process, not directly of the resulting decision. That is, there’s a system for making decisions, and a system of monitoring to ensure that the correct process was followed in reaching the valid decision. A decision made by an invalid process is not a valid decision (even if it’s coincidentally correct!).
This is a critically important point: for a belief to be valid, it must be reached through a proper process of investigation. Two people may hold an identical belief, but one might hold it validly and the other not. Two people, for example, might believe that pandemic lockdowns are wrong. One person may reach that conclusion on the basis of a historically informed understanding of power structures and the state. The other may simply assert that the pandemic is a fake. The former idea has a reasonable chance of being validly held. The latter can be rejected out of hand.
Importantly, what is distinct about the former idea, which is supported by critical reasoning, is that it is held honestly. If some of the reasoning turns out to be incorrect, or rests on falsehoods, or is not sufficiently nuanced, the holder of the first belief will change their belief. That’s categorically different to the second belief, which is not subject to change in the face of new evidence. One belief has been reached through an honest process. The second, while superficially identical, is simply a fantasy.
If we want our system to work well, we need to learn a lesson from the skilled staff of the Chain Home radar system. We need to become very, very good at forming and validating our beliefs. Without this, our world view tells us almost nothing about the world, and ultimately leaves us open to devastating attacks of misinformation.