Recently I’ve been reading the writings of the American philosopher William James. You won’t see much discussion of his work among philosophers nowadays, and that’s not just because he happened to be white and male. He had the bad luck to reach maturity as Western philosophy was in its death throes, and he added to that misfortune by having a mind clear and honest enough that he drew certain necessary conclusions from the intellectual struggles of his day.
He hasn’t yet been forgiven for those conclusions. There are reasons for that—understandable reasons, though not good ones. The conclusions, and the reasons they’ve been ignored, have lost none of their relevance since his time. Quite the contrary, the harsh conditions tightening their grip on our industrial civilization just now can’t really be understood without listening to what James and others like him were trying to say, and what those who denounced him were trying even harder not to hear. Thus we’re going to have to talk a little about the history of philosophy.
Yes, I know perfectly well that most people think of that subject, on the rare occasions that they think of it at all, as the dullest sort of useless academic trivia. They’re wrong, but there’s a lesson in the mistake. The next time Neil deGrasse Tyson throws one of his public hissy fits insisting that philosophy is just plain wrongety-wrong-wrong-wrong, I hope none of my readers are so slow on the uptake as to think this shows that philosophy doesn’t matter. Quite the contrary, he’s so petulant about philosophy precisely because it does matter, and he’s got his eyes scrunched shut and his hands over his ears, shouting “La, la, la, I can’t hear you” at the top of his lungs in a vain attempt to ignore the message that philosophy is gently trying to murmur to him.
The same is true of the general public, if in a rather milder way. Seventy years ago, the publication of a new book by Jean-Paul Sartre or any of the other well-known philosophers of the time was a media event, the kind of thing that spawned articles in the arts and culture section of daily papers in dozens of cities and sparked cocktail-party chatter for months. That happened because the things Sartre and his fellow philosophers were talking about mattered to most people. That was then, of course. Since that time, philosophers and the general public have worked out a tacit agreement: the philosophers make sure to say nothing of interest to anyone outside their own little academic coteries, and the general public responds by ignoring them completely. All this makes it easy for both sides to pretend that an earlier generation of philosophers didn’t cut the ground right out from under their feet.
We can begin the story with René Descartes. He launched modern philosophy by trying to figure out what can be known for certain by the human mind. That’s where his famous dictum “I think, therefore I am” came from: having set out to doubt everything, he concluded that there had to be somebody doing the doubting. Now of course his version of what the mind can know included most of the standard beliefs of his time and culture, partly because doubting everything is a good deal harder than he realized, and partly because doubting the wrong things in 17th-century France could have landed him in prison or worse. What mattered was that he made a start.
That beginning was taken up and carried much further by a series of brilliant philosophers, of whom John Locke, George Berkeley, David Hume, and Immanuel Kant were the most important. All of these pursued the same question—what can the human mind really know through its own powers?—with increasing clarity and rigor. The endpoint of that trajectory was reached by Kant, whose Critique of Pure Reason showed that the human mind can only know what it creates.
Let’s walk through a bit of his reasoning. When you see an object—say, a cup of tea—what actually happens? You experience a series of disconnected sensations of color and shape; one part of your mind assembles those into an image, and another part of your mind assigns a label to that image: “teacup.” Without those processes of assemblage and labeling the world would be, in James’ useful phrase, nothing but “a blooming, buzzing confusion” of unconnected sensations. Try to follow the individual sensations back toward the object and you run into even more obstacles. How much does the image in your mind have in common with the game of electrochemical hopscotch in your optic nerve, how much information do the dancing electrons of the retina pass on from the antics of photons that spray through the eye, and how much does a splash pattern among photons really tell you about the quantum probability cloud of electrons that deflected those photons and set the process in motion?
Kant lived long before quantum theory, of course, but he got to many of the same conclusions well in advance by sheer ruthless logic. He even showed that space and time as we experience them are products of human consciousness, not “out there” in the world. There are doubtless things analogous to space and time in what he called the Ding an sich, the “thing in itself,” but we don’t know anything about them, and as the quantum physicists showed later on, they routinely behave in ways that make hash of our notions of the way space and time work. Thus we cannot know the world directly. All we can know is a model of that world assembled by our minds and nervous systems. That model is good enough for the purposes of everyday life and it can be leveraged in clever ways by scientists, but it can never tell us the truth about the world.
That was the discovery that rattled the foundations of eighteenth-century Europe. If Kant and his contemporaries had known enough about the history of philosophy elsewhere, they would have realized that this is something that happens to every philosophical tradition. All philosophy starts out with the naïve conviction that it’s possible for the human mind to know the truth about the world, and then runs face first into the same realization that staggered Kant’s readers. After a period of flailing, mature philosophical traditions reorient themselves by recognizing that if the mind can’t know the world directly, it can at least do a better job of getting to know its own creations. That’s where you get the great syntheses of Classical, Hindu, and Chinese philosophy, which present a set of more or less useful models about nature but go on to place the healthy unfoldment of individual or collective humanity at the center of the philosophical enterprise.
That was the direction that William James chose: to recognize that the human mind can give us only a rough approximation of the realities around us, and to focus instead on what we can actually know. That was the basis of his philosophy, which he called Pragmatism. There are other options along the same lines. Sartre, whose name I invoked a moment ago, did the same thing in his own way; so did Schopenhauer, and so did some others. Most philosophers in the Western world after Kant, however, rejected that path and set out instead to find some way to insist that Kant was wrong and the human mind really can know the truth about the world.
The quest for an answer to Kant fell into two broad, overlapping phases. The first, which had its peak in the 19th century, took its keynote from Hegel, who simply insisted that the mind had something called “intellectual intuition” which enabled it to do an end run around Kant’s challenge. That didn’t work very well, not least because no two philosophers seemed to be able to get the same results with their “intellectual intuition.” That difficulty led most later thinkers to interpret Hegel’s phrase as meaning something much closer to “brain fart.”
Despite the joke, this wasn’t a light matter. European thought inherited from its Christian roots the idea that knowing the truth about the world was a matter far more serious than mere life and death. That was why Friedrich Nietzsche—another thinker who took Kant’s insight seriously—wrote mordantly about the chaos set in motion by the collapse of the idea that the world known by the mind is the real world. That was also why Rudolf Steiner, whose ideas we’ll be discussing in future posts, launched his career with a volume, The Philosophy of Freedom, in which he tried to prove that thinking really could grasp the objective truth about the world. It was a gallant attempt, and he carried it out about as well as anyone could have done, but it didn’t work. He had the good sense to turn in other directions thereafter.
The failure of this first phase made the second phase inevitable, and set it going along its own foredoomed course. This peaked in the 20th century, and was based on the loud insistence that Kant’s insights don’t matter, so shut up, philosophers! That’s the intellectual current to which Neil deGrasse Tyson belongs, for example, in his angry claim that philosophy must be bad because it doesn’t justify his blind faith in the predestined march of science toward universal omniscience. More generally, it’s the current that gave rise to the modern managerial state.
The problem faced by this latter phase, in turn, is quite simply that the issues Kant described don’t go away just because you refuse to think about them. If you recognize the hard limits on our minds’ capacity to know reality, understand that our ideas can only be a rough model of the world as it is, and act accordingly, you can come up with workarounds for the bad habits of human thought and avoid many pitfalls. If instead you insist that the world is whatever the human mind says it is, and flee from philosophic insight into an increasingly shrill insistence that the mind’s truth is more true than the events it claims to describe, you end up in a world of hurt.
That, in turn, is where we are right now.
Look around you, dear reader, and notice how many of the crises in today’s industrial societies unfold from somebody’s insistence that a concept they’ve fastened onto something is the absolute objective truth about that thing. I could doubtless provoke screaming tantrums in this post’s comment section by citing examples from either side of the political spectrum just now, but I don’t find that particularly entertaining, or for that matter particularly useful. All you have to do is look for what Korzybski called the “is of identity”—“this is that”—and watch the fur fly.
There’s a potent historical reason for this. During the first half of the twentieth century, most of the world’s industrial nations ended up being run by a managerial elite that claimed the right to rule on the basis of their allegedly superior understanding of the way the world works—and the “superior understanding” in question was based on a knowledge of abstractions. That process began in 1917 with the Russian Revolution and ended in 1945 with the imposition of technocratic governments all across conquered Europe and Japan; the beginning of Franklin Roosevelt’s first term in 1933 is a good start date for the process here in the United States.
That transfer of power was justified, or at least excused, by the claim that handing society over to cadres of university-trained experts would be ever so much more efficient than leaving it in the hands of the former ruling classes. Did that work out? In the short term, yes: some obvious abuses got taken care of, some programs that benefited ordinary people were enacted, and the problems caused by allowing too much wealth to be hoarded by the kleptocrats of the former elite were fixed by forced expropriation and redistribution of excess wealth, using expedients that ranged from high tax rates in the United States to mass murder in the Soviet Union.
That’s pretty much what every newly arrived ruling class does, and it always works tolerably well. It’s what happens afterward that matters, however. The new managerial elite made quite a range of confident claims about the wonderful world that would surely arrive as soon as it finished clearing away the detritus of the past and put its program of expert governance into effect. How well did it work? Well, try taking a look out the window. If you don’t see sparkling domed cities from which poverty and disease have long since been eliminated, flying cars zooming overhead while a daily flight to the Moon takes off from the local spaceport, and a nuclear power plant somewhere nearby turning out electricity too cheap to meter, let’s just say that the promises sounded great but the followthrough left a lot to be desired.
The difficulty in the way of that shining mirage was the same one that Kant analyzed in theory and James among others explored in practice. It’s one thing to manipulate abstract concepts and make a nice pretty picture out of them, and quite another to make realities in the grubby world of fact behave the way that the concepts do. Flying cars, space travel, and nuclear power all looked great on paper, but all three of them shared a common flaw: the value that each of them provided turned out in practice to be too small to justify their enormous costs. Those prosaic realities never found their way into all those glossy portrayals of the world of the future that saturated the corporate media back in the day, but that didn’t make those or many other similarly unwelcome facts go away. We live in a world shaped by the tremendous failures that resulted.
Charles Fort pointed out many years ago that the prestige of science depends on a slick public-relations scheme whereby every success is trumpeted to the skies while every failure is swept under the nearest available rug. The same is true of the prestige of the managerial classes in today’s world. These days, their predictions and projects fail far more often than they succeed, but the corporate media can be counted on to yell all day and night about their successes and pretend that the failures never happened. There are plenty of reasons why so few people these days believe anything that comes from official channels, but that’s one of the big ones.
The logic behind this self-defeating habit is that our managerial aristocrats can’t simply step away from the claim that their mastery of abstractions gives them superior insight into the world of everyday affairs. That claim is what justifies their present condition of privilege, but it’s also the foundation of their collective identity. Like so many people cornered by the consequences of their own errors, accordingly, the managerial class has reacted to its failures by doubling down. That’s why so much of the rhetoric that comes out of official sources these days consists of angry demands that everyone else has to agree that up is down, sideways is straight ahead, and if it walks like a duck and quacks like a duck, don’t you dare suggest that it might be a duck!
That’s the usual way that managerial castes deal with the mismatch between their preferred abstractions and the annoyingly contrary behavior of the world we experience. It’s also the usual way that managerial castes crash and burn. To begin with, insisting that abstractions are correct even when the facts don’t bear them out is a great way to drive the society you run straight into the ground. But there’s another difficulty. If there’s a conflict between the abstractions you prefer and the facts you encounter, and all your training (to say nothing of your class privilege) predisposes you to believe abstractions instead of facts, it becomes very tempting to treat belief in the abstractions and denial of the facts as a loyalty test for your subordinates.
This temptation becomes especially strong if you’re haunted by a suspicion that the abstractions are wrong and the facts are right, but can’t admit that to yourself. That’s when the psychology of previous investment comes into play, and you start demanding that other people believe in order to shore up your own flagging faith. Nor does a single act of faith ever do the job. The worse things get and the more obviously failure stares you in the face, the more likely you are to demand even more extravagant proofs of loyalty from those around you, and the result is that you expect people to believe a series of ever more preposterous claims in defiance of everything that’s happening around them. In due time you end up living in a dreamworld defined wholly by your own absurd demands for blind faith in abstract impossibilities—and it’s at that point, by and large, that the facts break down the door.
We’re probably not far from that stage just now. Turn on the media, if you can stomach it, and you can count on getting an earful of abstractions serenely detached from the grubby realities they claim to represent. When “safe” means “it kills people,” “effective” means “it doesn’t work,” and “the situation is under control” means “all our predictions turned out to be wrong and we have no idea what to do,” you’re looking at a ruling class that’s got a great big concrete wall across the road ahead and the accelerator slammed flat against the floor.
It’s a little late in the day to suggest that they might want to slow down a bit, read some William James, and grapple with the fact that the blooming, buzzing confusion of the universe is under no compulsion to behave the way they think it should. The rest of us, however, might want to put a little effort into learning from their mistakes. When the rubble stops bouncing and the smoke clears away, a great deal of rebuilding will be in order, and much of that will have to be done by individuals, families, and local communities, unhelped (and also unhindered) by the, ahem, intellectual intuitions of experts. A willingness to attend to the grubby world of fact, even when it conflicts with one’s preferred notions, will be a useful tool for the hard work ahead; a willingness to learn a little more about ourselves, even when that knowledge isn’t flattering, will be another.