Explorers into unknown territory face plenty of risks. One that doesn’t always get the attention it deserves is the possibility that they know less about the country ahead than they think. Inaccurate maps, jumbled records, travelers’ tales that got garbled in transmission or were made up in the first place: all these and more have laid their share of traps at the feet of adventurers on their way to new places and accounted for an abundance of disasters. As we make our way willy-nilly into that undiscovered country called the future, a similar rule applies.
Christopher Columbus, when he set sail on the first of his voyages across the Atlantic, brought with him a copy of The Travels of Sir John Mandeville, a fraudulent medieval travelogue that claimed to recount a journey east to the Earthly Paradise across a wholly imaginary Asia, packed full of places and peoples that never existed. Columbus’ eager attention to that volume seems to have played a significant role in keeping him hopelessly confused about the difference between the Asia of his dreams and the place where he’d actually arrived. It’s a story more than usually relevant today, because most people nowadays are equipped with comparable misinformation for their journey into the future, and are going to end up just as disoriented as Columbus.
I’ve written at some length already about some of the stage properties with which the Sir John Mandevilles of science fiction and the mass media have stocked the Earthly Paradises of their technofetishistic dreams: flying cars, space colonies, nuclear reactors that really, truly will churn out electricity too cheap to meter, and the rest of it. (It occurs to me that we could rework a term coined by the late Alvin Toffler and refer to the whole gaudy mess as Future Schlock.) Yet there’s another delusion, subtler but even more misleading, that pervades current notions about the future and promises an even more awkward collision with unwelcome realities.
That delusion? The notion that we can decide what future we’re going to get, and get it.
It’s hard to think of a belief more thoroughly hardwired into the collective imagination of our time. Politicians and pundits are always confidently predicting this or that future, while think tanks earnestly churn out reports on how to get to one future or how to avoid another. It’s not just Klaus Schwab and his well-paid flunkeys at the World Economic Forum, chattering away about their Orwellian plans for a Great Reset; with embarrassingly few exceptions, from the far left to the far right, everyone’s got a plan for the future, and acts as though all we have to do is adopt the plan and work hard, and everything will fall into place.
What’s missing in this picture is any willingness to compare that rhetoric to reality and see how well it performs. Over the last century or so we’ve had plenty of grand plans that set out to define the future, you know. We’ve had a War on Poverty, a War on Drugs, a War on Cancer, and a War on Terror, just for starters—how are those working out for you? War was outlawed by the Kellogg-Briand Pact in 1928, the United States committed itself to provide a good job for every American in the Full Employment and Balanced Growth Act of 1978, and of course we all know that Obamacare was going to lower health insurance prices and guarantee that you could keep your existing plan and physician. Here again, how did those work out for you?
This isn’t simply an exercise in sarcasm, though I freely admit that political antics of the kind just surveyed have earned their share of ridicule. The managerial aristocracy that came to power in the early twentieth century across the industrial world defined its historical mission as taking charge of humanity’s future through science and reason. Rational planning carried out by experts guided by the latest research, once it replaced the do-it-yourself version of social change that had applied before that point, was expected to usher in Utopia in short order. That was the premise, and the promise, under which the managerial class took power. With a century of hindsight, it’s increasingly clear that the premise was quite simply wrong and the promise was not kept.
Could it have been kept? Very few people seem to doubt that. The driving force behind the popularity of conspiracy culture these days is the conviction that we really could have the glossy high-tech Tomorrowland promised us by the media for all these years, if only some sinister cabal hadn’t gotten in the way. Exactly which sinister cabal might be frustrating the arrival of Utopia is of course a matter of ongoing dispute in the conspiracy scene; all the familiar contenders have their partisans, and new candidates get proposed all the time. Now that socialism is back in vogue in some corners of the internet, for that matter, the capitalist class has been dusted off and restored to its time-honored place in the rogues’ gallery.
There’s a fine irony in that last point, because socialist management was no more successful at bringing on the millennium than the capitalist version. Socialism, after all, is the extreme form of rule by the managerial aristocracy. It takes power claiming to place the means of production in the hands of the people, but in practice “the people” inevitably morphs into the government, and that amounts to cadres drawn from the managerial class, with their heads full of the latest fashionable ideology and not a single clue about how things work outside the rarefied realm of Hegelian dialectic. Out come the Five-Year Plans and all the other impedimenta of central planning…and the failures begin to mount up. Fast forward a lifetime or so and the Workers’ Paradise is coming apart at the seams.
A strong case can be made, in fact, that managerial socialism is one of the few systems of political economy that is even less successful at meeting human needs than the managerial corporatism currently staggering to its end here in the United States. (That’s why it folded first.) The differences between the two systems are admittedly not great: under managerial socialism, the people who control the political system also control the means of production, while under managerial corporatism, why, it’s the other way around. Thus I suggest it’s time to go deeper, and take a hard look at the core claim of both systems—the notion that some set of experts or other, whether the experts in question are corporate flacks or Party apparatchiks, can make society work better if only they have enough power and the rest of us shut up and do what we’re told.
That claim is more subtle and more problematic than it looks at first glance. To make sense of it, we’re going to have to talk about the kinds of knowledge we can have about the world.
The English language is unusually handicapped when it comes to the point I want to make, because most languages have two distinct words for the kinds of knowledge we’ll be talking about, and English has only one word—“knowledge”—that has to do double duty for both of them. In French, for example, if you want to say that you know something, you have to ask yourself what kind of knowledge it is. Is it abstract knowledge based on an intellectual grasp of principles? Then the verb you use is savoir. Is it concrete knowledge based on experience and familiarity? Then the verb you use is connaître. Colloquial English has tried to fill the gap by coining the phrases “book learning” and “know-how,” and we’ll use these for now.
The first point that needs to be made here is that these kinds of knowledge are anything but interchangeable. If you know about cooking, say, because you’ve read lots of books on the subject and can easily rattle off facts at the drop of a hat, you have book learning. If you know about cooking because you’ve done a lot of it and can whip up a tasty meal from just about anything, you have know-how. Those are both useful kinds of knowledge, but they’re useful in different contexts, and one doesn’t convert readily into the other. You can know lots of facts about cooking and still be unable to produce an edible meal, for example, and you can be good at cooking and still be unable to say a meaningful word about how you do it.
We can sum up the two kinds of knowledge we’re discussing in a simple way: book learning is abstract knowledge, and know-how is concrete knowledge.
Let’s take a moment to make sense of this. Each of us, in earliest infancy, encounters the world as what William James called a “blooming, buzzing confusion” of disconnected sensations, and our first and most demanding intellectual challenge is the process that Owen Barfield termed “figuration”—the task of assembling those sensations into distinct, enduring objects. We take an oddly shaped spot of bright color, a smooth texture, a kinesthetic awareness of gripping and of a certain resistance to movement, a taste, and a sense of satisfaction, and assemble them into an object. It’s the object we will later call “bottle,” but we don’t have that connection between word and experience at first. That comes later, after we’ve mastered the challenge of figuration.
So the infant who can’t yet speak has already amassed a substantial body of know-how. It knows that this set of sensations corresponds to this object, which can be sucked on and will produce a stream of tasty milk; this other set corresponds to a different object, which can be shaken to produce an entertaining noise, and so on. When you see an infant looking out with that odd distracted look so many of them have, as though it’s thinking for all it’s worth, you’re not mistaken—that’s exactly what it’s doing. Only when it has mastered the art of figuration, and gotten a good basic body of know-how about its surroundings, can it get to work on the even more demanding task of learning how to handle abstractions.
That process inevitably starts from the top down, with very broad abstractions covering vast numbers of different things. That’s why, at a certain stage in a baby’s growth, all four-legged animals are “goggie” or something of the same sort; later on, the broad abstractions break up, first into big chunks and then into smaller ones, until finally you’ve got a child with a good general vocabulary of abstractions. The process of figuration continues; in fact, it goes on throughout life. Most of us are good enough at it by the time of our earliest memories that we don’t even notice how quickly we do it. Only in special cases do we catch ourselves at it—certain optical illusions, for example, can be figurated in two competing ways, and consciously flipping back and forth between them lets us see the process at work.
All this makes the relationship between figurations and abstractions far more complex than it seems. Since each abstraction is a loosely defined category marked by a word, there are always gray areas and borderline cases, like those plants that are right on the line between trees and shrubs. The situation gets much more challenging, however, because abstractions aren’t objective realities. We don’t get handed them by the universe. We invent them to make sense of the figurations we experience, and that means our habits, biases, and individual and collective self-interest inevitably flow into them. That would be problematic even if figurations and abstractions stayed safely distinct from one another, but they don’t.
Once a child learns to think in abstractions, the abstractions they learn begin to shape their figurations, so that the world they experience ends up defined by the categories they think with. That’s one of the consequences of language—and it’s also one of the reasons why book learning, which consists entirely of abstractions, is at once so powerful and so dangerous: your ideology ends up imprinting itself on your experience of the world. There’s a further mental operation that can help you past that. It’s called reflection, and it involves thinking about your thinking; but it’s hard work, very few people do much of it, and the only kind that’s popular in an abstraction-heavy society—the kind where you check your own abstractions against an approved set to make sure you don’t think any unapproved thoughts—just digs you in deeper. As a result, most people go through their lives never noticing that their worlds are defined by an arbitrary set of categories with which they’ve been taught to think.
Here are some examples. Many languages have no word for “orange.” People who grow up speaking those languages see the lighter shades of what we call “orange” as shades of yellow, and the darker shades as shades of red. They don’t see the same world we do, since the abstractions they’ve learned to think with sort out their figurations in different ways. In some Native American languages, some colors are “wet” and others are “dry,” and people who grow up speaking those languages experience colors as being more or less humid; the rest of us don’t. Then there’s Chinook Jargon, the old trade language of the Pacific Northwest, which was spoken by native peoples and immigrants alike until a century ago. That language has only four color words: tkope, which means “white”; klale, which means “dark”; pil, which covers red, orange, yellow, and anything brightly colored; and spooh, which means “faded,” like sun-bleached wood or a pair of old blue jeans. Can you see a cherry and a lemon as being shades of the same color? If you’d grown up speaking Tsinuk wawa from earliest childhood, you would.
Those examples are harmless. Many other abstractions are not, because privilege and power are among the things that guide the shaping of abstract knowledge, and when education is controlled by a ruling class or a governmental bureaucracy, the abstractions people learn veer so far from experience that not even heroic efforts at figuration can bridge the gap. In the latter days of the Soviet Union, to return to an earlier example, the abstractions flew thick and fast, painting the glories of the Workers’ Paradise in gaudy colors, and insisting that any delays in the onward march of Soviet prosperity would soon be fixed by the skilled central planning of managerial cadres. Meanwhile, for the vast majority of Soviet citizens, life became a constant struggle with hopelessly dysfunctional bureaucratic systems, and waiting in long lines for basic necessities was an everyday fact of life.
None of that was accidental. The more tightly you focus your educational system on a set of approved abstractions, and the more inflexibly you assume that your ideology is more accurate than the facts, the more certain you can be that you will slam headfirst into one self-inflicted failure after another. The Soviet managerial aristocracy never grasped that, and so the burden of dealing with the gap between rhetoric and reality fell entirely on the rest of the population. That was why, when the final crisis came, the descendants of the people who stormed the Winter Palace in 1917, and rallied around the newborn Soviet state in the bitter civil war that followed, simply shrugged and let the whole thing come crashing down.
We’re arguably not far from similar scenes here in the United States, for the same reasons: the gap between rhetoric and reality gapes just as wide in Biden’s America as it did in Chernenko’s Soviet Union. When a ruling class puts more stress on using the right abstractions than on getting the right results, those who have to put up with the failures—i.e., the rest of us—withdraw their loyalty and their labor from the system, and sooner or later, down it comes.
In the meantime, as we all listen to the cracking noises coming up from the foundations of American society, I’d like to propose that we consider the possibility that the future cannot be managed, and that all those plans and programs and grand agendas are by definition on their way to the same dumpster as the Five-Year Plans of the Soviet Union and the various Wars on Abstract Nouns proclaimed by the United States. Coming up with a plan is easy; getting people to do anything about it is hard; getting future events to cooperate—well, you can do the math as well as I can. It’s already clear to anyone who’s paying attention that we’re not going to get the Tomorrowland future bandied about for so many years by the pundits and marketing flacks of the corporate state: the flying cars, spaceships, nuclear power plants, and the rest of it have all been tried and all turned out to be white elephants, hopelessly overpriced for the limited value they provide. Maybe it’s time to consider the possibility that no other grand plan will work any better.
Does that mean that we shouldn’t prepare for the future? Au contraire, it means that we can do a much better job of preparing for the futures. There’s not just one of them, you see. There never is. The same habit of bad science fiction writers that editors used to lampoon with the phrase “It was raining on Mongo that Monday”—really? The same weather, all over an entire planet?—pervades current notions of “the” future. Choose any year in the past and look at what happened over the next decade or two to different cities and countries and continents, and you’ll find that their futures were unevenly distributed: some got war and some got peace, growth in one place was matched by contraction in another, and the experience of any decade you care to name was radically different depending on where you experienced it. That’s one of the things that the managerial aristocracy, with its fixation on abstract knowledge, reliably misses.
We know some things about the range of possible futures ahead of us. We know that fossil fuels and other nonrenewable resources are going to be increasingly expensive and hard to get by historic standards; we know that the impact of decades of unrestricted atmospheric pollution will continue to destabilize the climate and drive unpredictable changes in rainfall and growing seasons; we know that the infrastructure of the industrial nations, which was built under the mistaken belief that there would always be plenty of cheap energy and resources, will keep on decaying into uselessness; we know that habits and lifestyles dependent on the extravagant energy and resource usage that was briefly possible in the late twentieth century are already past their pull date. These things are certain—but they don’t tell us that much. What technologies and social forms will replace the clanking anachronism of industrial society over the decades immediately ahead? We don’t know that, and indeed we can’t know it.
We can’t know it because the future is not an abstraction. It’s not something neat and manageable that can be plotted in advance by corporate functionaries and ordered for just-in-time delivery. It’s an unknown region, and our preconceptions about it are the most important obstacle in the way of seeing it for what it is. That is to say, if you’re setting out to explore unfamiliar territory, deciding in advance what you’re going to find there and marching off in a predetermined direction to find it is a great way to end up neck-deep in a swamp as the crocodiles close in.
If you want a less awkward end to your great adventure, head into the unknown with eyes and ears wide open, pay attention to what’s happening around you even (or especially) when it contradicts your beliefs and presuppositions, and choose your path based on what you find rather than what you think has to be there. Choose your tools and traveling gear so that they can cope with as many situations as possible, and when you pick your companions, remember that know-how is much more useful than book learning. That way you can travel light, meet the unexpected with some degree of grace, and have a better chance of finding a destination that’s worth reaching.