Picture yourself as a single neuron in the ornately woven fabric of the brain. You’re held in the electrifying embrace of the neurons around you, and you have a job to do: when enough of your neighbours feel excited, you have to feel excited too. You, in turn, relay a tiny frisson to other neighbours - one part of an electrical wavefront that spreads through the brain, the way waves cross the ocean. Perhaps, if feeling grandiose, one might call this thought.
Hard to imagine yourself as a neuron? In fact, it’s a role we already play in the vast information processing hives that are our social networks. When our friends get excited about something, so do we. Brains are fabrics of fatty neurons, tickling each other in choreographies of thought. Hive minds are fabrics of fatty humans, tickling each other with gossip, outrage, conspiracy theories and other communal dances.
Something else about neurons: they don’t always fire when their neighbours are excited. There are carefully calibrated synaptic gates, and thresholds, and specific patterns of excitement that are necessary to turn any particular neuron on. In this buzzing swarm of electrical flirtation, each neuron has its own particular tastes - its own preferences for which of its neighbours it trusts, and under what circumstances. This nuance is essential for the mind to function.
Similarly, for it to work well, the hive mind demands nuance of the fatty humans that make it up. Some hive minds function better than others as a result. The endless pejoratives we hurl at each other - labelling each other in terms of perceived political, religious, gender or class affiliations - reveal our belief that not all hives are equal. These labels are our recognition that different hives process information differently, because of the different approaches of their members.
Our job as fatty humans in a hive mind is to receive information, consider it, and pass it to our neighbours if it’s worthy. How can we do that well?
Error correction
Claude Shannon, mathematician and engineer at Bell Labs, had some helpful ideas in this area. Shannon invented the field of information theory and introduced it to the world in 1948. This iridescent idea elevated information from something nebulous to something that could be measured and quantified. Stored and transmitted. Compressed and expanded.
Information can be stored in all kinds of media, from ink on parchment, to chains of nucleic acids in DNA, but the engineers at Bell Labs were particularly interested in electrons pulling each other back and forth in phone lines. They were interested in improving ways of sending messages, and Shannon’s information theory was the manifesto for a revolution: the information revolution that we are all living through right now.
Yet as information is stored and transmitted, it’s subject to corruption and damage. Solar flares send storms of charged particles towards Earth, disturbing the aether of electrons through which our messages travel. So the message doesn’t always get through.
With his theory of information at his disposal, Shannon then had this idea: what if we leaven our streams of information with additional ingredients, which let us detect if something has gone wrong with the message? Well, we can. Our computers and phones are filled with clever ideas that let them do exactly this: they’re constantly receiving corrupted data, detecting it, correcting the errors, and carrying on regardless.
I’m going to sketch the technique behind error detection in information theory, and subsequently think about how we could apply it to validating ideas that people throw at us.
Being able to detect errors is borderline miraculous, but it’s a miracle that we can explain. A first attempt at a scheme: what if we simply send everything twice? If we receive the same message twice, we can be fairly confident that it’s correct, and if we spot a difference, we can ask for a re-transmission.
Let’s say we want to send the message “1593”. Under this scheme, we’d send 1593-1593. If the recipient sees that, they’ll be somewhat confident the message is undamaged, but if they see 2593-1593, they’ll know something has gone wrong.
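Here’s a rough sketch of that scheme in Python (the function names are just illustrative, not any standard library):

```python
# A toy version of the "send it twice" scheme; the function names are illustrative.

def encode_repeat(message: str) -> str:
    """Transmit the message twice, separated by a dash."""
    return f"{message}-{message}"

def copies_agree(received: str) -> bool:
    """The message is plausible only if both copies match."""
    first, second = received.split("-")
    return first == second

print(encode_repeat("1593"))       # "1593-1593"
print(copies_agree("1593-1593"))   # True: copies agree, probably intact
print(copies_agree("2593-1593"))   # False: copies differ, ask for a re-transmission
```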
Workable, but not very efficient. Perhaps we can do better.
What if we leaven the message with another ingredient - some extra padding? We also share a recipe with our intended recipient: append a padding digit chosen so that all the digits of the message sum to an even number. On receipt, sum the digits. If the result is odd, the message was corrupted. If the result is even, it may be correct. We can strip off the extra ingredient, and more confidently extract the number we want.
So 1593-0 would be valid, because adding up all the digits gives 18, which is even. But if we get 1594-0, we’d better ask again, because we obviously didn’t hear correctly. 19, the sum of the digits, is odd.
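Here’s how that recipe might look as a Python sketch (again, the names are mine, not a standard API):

```python
# A toy parity check, matching the digit-sum recipe described above.

def add_parity(message: str) -> str:
    """Append a padding digit chosen so that all the digits sum to an even number."""
    digit_sum = sum(int(d) for d in message)
    padding = digit_sum % 2        # 0 if the sum is already even, 1 otherwise
    return f"{message}-{padding}"

def parity_ok(received: str) -> bool:
    """The message may be correct only if its digits sum to an even number."""
    return sum(int(d) for d in received if d.isdigit()) % 2 == 0

print(add_parity("1593"))     # "1593-0": 1+5+9+3 = 18, already even
print(parity_ok("1593-0"))    # True
print(parity_ok("1594-0"))    # False: the digits sum to 19, which is odd
```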
That’s simplistic, and not very useful, because a corrupted message still has a good chance of ending up with an even digit sum by luck. A more sophisticated variant is the Luhn algorithm, which lets you tell immediately if a credit card number was mistyped. That’s just a more complex recipe, with more robust sums involved. And there are many far more sophisticated, powerful and efficient recipes for checking the validity of encoded information in general.
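For the curious, the Luhn recipe is compact enough to sketch in full; this particular function is just one illustration of the standard algorithm:

```python
def luhn_ok(number: str) -> bool:
    """Luhn check: from the rightmost digit, double every second digit,
    subtract 9 from any result above 9, and require the total to end in 0."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:             # every second digit, counting from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

print(luhn_ok("79927398713"))  # True: a classic Luhn test number
print(luhn_ok("79927398710"))  # False: a single mistyped digit is caught
```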
As a more intuitively graspable example of error detection, all of us are incredibly skilled linguists. We can instantly tell that something has gone wrong with the sentence: “The driver man hit careless that himself”. We can detect an error. On the other hand, we famously know that “‘Twas brillig, and the slithy toves did gyre and gimble in the wabe” is a valid sentence, even though we may not immediately know its meaning. The message was delivered correctly (and we just have to discover the meaning - the information is clean, but the interpretation is still to come). Our language abilities are shot through with the ability to detect, and often correct, errors.
All in all, we have some pretty powerful tools for processing information. We have many ways to avoid destruction at the hands of egregious errors.
Information is not knowledge
Let’s keep our wits about us here: information is not the same as knowledge. Knowledge is our model of the world, synthesised in part from streams of information. It’s not information: it’s the distillation of information into something else. Perhaps information provides a foundation for knowledge, but we don’t yet have the machinery to treat knowledge with the same precision we can apply to information.
Maybe we can borrow some techniques though.
What if we could leaven our ideas with ingredients that let others test their validity? What if we had recipes for examining ideas that others have passed to us, to try to determine their truthiness?
Falsifiability
Making statements that can be tested is at the heart of scientific practice.
The scientific method was born in 1620, at the hands of Francis Bacon. It comes to us via a line of venerable thinkers, including Descartes, Kant, Hume and more recently Karl Popper. It’s been refined to the point that Bacon himself likely wouldn’t recognise it anymore. Popper’s thoughts on the scientific method were published in 1934, and rest on the important idea that a theory is only scientific if it can be falsified. That is, you could conduct an experiment which would show the theory to be incorrect (if it is incorrect). No theory is ever fully correct: all we can say is that we prefer theories that are less wrong.
All scientific ideas come packaged with this extra ingredient: the means to prove them false, if they are false. This functions like an error-checking code: every theory must carry with it the possibility of rejection by experiment, and a theory that offers no such possibility is rejected out of hand.
Day to day on the internet, it’s worth bearing this in mind: ideas worthy of our time must have the means to be investigated. And if pushed, the author of the idea should help us to test it. Reluctance to reveal information that would contradict an assertion is a sure sign that somebody is not searching for the truth, but has some other intent. These ideas should be rejected: not because they’re definitely wrong, but because there’s a sign something went wrong in transmission. We should always ask for clarification.
Praxis
Praxis is the enactment of knowledge. It involves taking something we think we know, acting upon it, and then seeing what happens. Following this process, we can test our knowledge, observe where it deviates from reality, and refine it accordingly.
Praxis means that instead of consuming and judging information on its own, we turn information into theories, and theories into practice. And then we perform the practice! Again, we are actively testing our knowledge, trying to find parts of it that we can falsify and subsequently rebuild.
So if we’re unsure about an idea, we should try it out. Some ideas feel too big to try out for ourselves, for sure, but with some imagination we can distill even big ideas into practices in our own lives. We just have to remember that if the practice fails to produce what we expected, that is a sign.
Praxis is modelled by all kinds of people. A stand-up comedian has theories about what is funny that are tested on stage. A product developer has theories about what might be enticing for a market, which are tested by building things and seeing what people do with them. A surgeon has theories about how to perform surgery that are embodied in every day of work.
Dialectic
Dialectic is a particle collider for ideas: accelerate a thesis and an antithesis towards each other, smash them together, and see what comes out. There are opportunities for dialectic in every discussion you see on social media, if the right participants are there.
Dialectic is a collaborative truth-seeking exercise. Two people with opposing ideas, but a shared thirst for knowledge, argue. The intent is not to win the argument, but to synthesise a new position that is closer to the truth.
Most arguments are eristic - people arguing to win (but win what?). We therefore have to be picky about who we argue with. We need to become adept at recognising others with a genuine thirst for knowledge, as opposed to those with a thirst for the thrill of overpowering other thoughts. Contests with worthy opponents are bruising, but they offer growth opportunities, and a path to greater understanding of the world.
The torrent of ideas available to us these days can be overwhelming, but it’s also an incredible source of new knowledge, if we know how to work with it. Look for well-packaged ideas, well-intentioned protagonists and opportunities to try ideas out in practice.