All things considered, this may seem like an odd time to start talking again about the nature, history, and future of enchantment. That was one of the core themes I explored in posts during the first half of the year, granted, and I still had much more to say about it when the pressures of a world system coming unglued made it necessary to turn to current events instead.
Those pressures haven’t abated at all—quite the contrary. The United States is currently trying to pay the costs and provide the munitions for two wars abroad, at a time when our government is close to US$34 trillion in debt and our defense industry has focused so intently on carrying out devastating raids on the national budget that it can no longer accomplish such subsidiary tasks as manufacturing bullets and bombs in adequate volume. Joe Biden’s approval ratings are dropping so steadily that he may just finish his term as the least popular president in US history, and his party is lurching toward open conflict between pro- and anti-Biden factions, not to mention pro-Israel and pro-Palestinian factions. Meanwhile Trump rises steadily in the polls and his party is drawing up plans for the most sweeping reforms of the US bureaucracy in most of a century.
At the same time, the US economy is stumbling into stagflation, that awkward and theoretically impossible condition where economic activity slows but prices keep rising. (Hint: it happens whenever the price of oil rises due to supply constraints.) Commercial real estate is in freefall and several other economic sectors are in deep trouble, but Biden’s flacks are insisting at the top of their lungs that everything is fine and the economy has never been better. It’ll be intriguing to see what effects that exercise in over-the-top gaslighting turns out to have; if history is anything to go by, the results will not be what Biden’s handlers want.
There’s more than this going on, of course. I could fill an entire post quite easily with signs of crisis from the nations of the modern industrial West, and another post with the evidence that much of the rest of the world is prospering as our decline picks up speed. For the moment, though, I want to set that aside and talk about enchantment. That’s not as pointless as it may seem; just as politics is downstream from culture, culture is downstream from imagination, and imagination is downstream from the states of consciousness that give imagination its context. Those states of consciousness change over time, and the change isn’t necessarily one-way.
This was the theme of an interesting recent piece by British journalist Mary Harrington in UnHerd. Harrington notes the difference between the modern experience (not merely “concept”) of the cosmos as lumps of matter tumbling pointlessly in the void, and the medieval experience (again, it was never just a concept) of the cosmos as a living whole in which not even the tiniest corner was without life, intelligence, and spirit. She then goes on to point out that the medieval way of seeing the world is much more accessible to us than many people like to think, and ends by suggesting that the flight from purely pragmatic social engineering to the moral crusades of left and right and the increasing influence of religious ideas in public life may herald the reenchantment of everyday life.
To regular readers of this blog, this will not be any kind of surprise. Since the beginning of this year, starting with a review of the implications of Jason Josephson-Storm’s insightful book The Myth of Disenchantment, a series of posts here has talked about what the word “enchantment” means, why so many fashionable thinkers have insisted that it belongs solely to the discarded and devalued past, and why I think it will be among the most essential concepts for making sense of the future immediately ahead of us. The news stories mentioned above, and the broader unraveling of industrial society in which they each play a role, might best be seen as stages in the dissolution of one state of consciousness and the birth pangs of another.
It’s been fashionable since the days of Max Weber to define the modern state of consciousness as “disenchanted.” Central not only to Weber’s core thesis but also to most modern conceptions of history, including the works of Ken Wilber, Owen Barfield, and Jean Gebser we discussed earlier this year, is the belief that modern thinking is uniquely free of mythology and magic: that we alone see the world naked, stripped of the fancy conceptual clothing that hid its allurements from past ages. It’s a convenient way of justifying the most absurd product of the collective egotism of modern times, the blustering insistence that the people of every past age and every other culture were too stupid to notice that the only reasonable way to think about the world is ours.
However cozy and convenient that belief may be, it’s hopelessly mistaken. It’s long past time to talk about that.
We can begin with one of the standard modern historical notions about the evolution of thought in the Western world. The standard narrative holds that the scientific revolution of the seventeenth century led to the collapse of the enchanted world of the Renaissance and its replacement by the soulless world of modernity. It’s quite a lively little morality play: there’s the whole population of Europe, believing in elves and magic and the rest of it, and then the scientists show up and prove that they’re just plain wrong. The elves exit stage left, hauling the Earth away from the center of the universe as they go, and once a bunch of elderly conservatives read their lines bewailing the loss of beauty and meaning, and the scientists sing a little ditty in praise of Truth and Reason, the curtain falls on a thoroughly modern world.
This isn’t just a bit of pop-culture blather, although you can certainly find it throughout current pop culture. It also has a central role in classic works on the history of ideas such as Alexandre Koyré’s From the Closed World to the Infinite Universe, Sigfried Giedion’s Space, Time and Architecture, and Rudolf Wittkower’s Architectural Principles in the Age of Humanism, and in far more recent works as well. To a very real extent this narrative is the cornerstone of modern Western industrial culture, the story we use to explain to ourselves where we came from, where we’re going, and why that matters. It’s also a drastic falsification of what actually happened.
What actually happened is clear if you look at the timeline: the abandonment of Renaissance ideas of universal order, meaning, and value happened before the first stirrings of modern science, not after them. By the mid-sixteenth century, the same currents of thought that drove the Protestant Reformation were already shredding the Renaissance synthesis and rejecting the old enchanted world. Much of what led Protestant reformers to break with Rome, in fact, was precisely Catholicism’s reliance on religious enchantment: the precisely scripted rituals and sacred objects that, to the enchanted mind, linked the believer to the spiritual realm were exactly what the disenchanted minds of the Reformation could not accept.
You can see the same process at work in the shift from Renaissance to Baroque art. Consider two statues of David, one by Michelangelo, the other by Bernini. Michelangelo’s sculpture is one of the supreme works of Renaissance art; it is at rest, like the stationary Earth of the old cosmology, occupying a central place in the viewer’s cosmos, oriented toward nothing outside itself. It is also proportioned according to the sacred geometries of Renaissance tradition. Bernini’s statue is one of the great works of Baroque statuary, and it differs in exactly the way the world of the early modern West differed from that of the Renaissance: it is in motion, oriented toward the far distance, and its proportions aren’t based on any sort of sacred canon; they were chosen by Bernini purely on the basis of what pleased him.
The arts are remarkably useful here as a way of checking the traditional narrative. Histories of science love to talk, for example, about how Johannes Kepler discarded millennia of tradition by postulating that the planets move around the sun in ellipses rather than circles. They don’t generally mention that ellipses had become fashionable in architecture more than half a century before Kepler picked them up, and churches and plazas with elliptical patterns were springing up all over Europe by the time his writings were in vogue. It’s not that Kepler followed the evidence and stumbled across the ellipse; it’s that ellipses were fashionable, and Kepler figured out how to apply them to astronomy.
Now of course the facts that Kepler got his basic model from the pop culture of his time, and the sciences more generally followed the lead of early modern art and popular culture rather than blazing the trail the currently fashionable narrative assigns them, aren’t enough by themselves to disprove those narratives. Doubtless true believers in modern science could claim that the vagaries of intellectual and artistic fashion that made ellipses, infinite space, bodies in motion, and the rest of it popular in the cultural sphere just happened to spawn a set of concepts uniquely suited to make sense of the natural world.
Yet there’s another difficulty here, one that philosophers have been quietly discussing for some decades now. You may be aware, dear reader, that scientific popularizers like Neil deGrasse Tyson hate philosophy. The issue we’re about to discuss is a core reason why. It goes by the name of underdetermination. It’s worth taking some time to understand what it means, because our world is already being shaped by the consequences of the underdetermination of scientific ideas, and that process is still in its early stages.
Like most of the serious questions that keep philosophers busy, this one can be described quite easily. Let’s say we have a set of observations about nature that we’re trying to understand, and we come up with a hypothesis that attempts to explain them. We draw some logical conclusions from that hypothesis, and come up with an experimental test that can go one of two ways: one way that supports the hypothesis, the other way that doesn’t. We run the test, and the result is the one that supports the hypothesis. We repeat this process with several other experimental tests, and each of the results supports the hypothesis. Does this prove that the hypothesis is true?
No, it does not. It simply shows that the hypothesis successfully predicts the outcome of the specific tests we ran. That’s important, and it’s well worth knowing, but it doesn’t prove that the hypothesis is true—only that it’s useful.
It gets worse. Given any set of observations, it is possible to come up with an infinite number of hypotheses that will account for them. It’s impossible to think of all those hypotheses and figure out ways to test each of them against the data, for the same reason that if you try to count from one to infinity in any finite period of time, you’ll fail. Thus you can be sure, even if you spend the rest of your life running tests, that there are still an infinite number of hypotheses out there that fit all the data you’ve gathered just as well as the hypothesis you want to prove.
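The point can be made concrete with a toy example of my own devising (the function names and numbers here are illustrative, not anything from the philosophical literature): take four observations generated by a simple linear law, and notice that a second, quite different hypothesis passes every one of the same tests while predicting a different result for the next experiment.

```python
# Underdetermination in miniature: two rival "laws of nature" that agree
# on every observation collected so far, yet disagree about the future.

def hypothesis_a(x):
    """A simple linear law."""
    return 2 * x

def hypothesis_b(x):
    """A rival law: the extra term vanishes at every observed point,
    so it matches the data exactly, and diverges everywhere else."""
    return 2 * x + x * (x - 1) * (x - 2) * (x - 3)

observations = [0, 1, 2, 3]  # every experiment we have actually run

# Both hypotheses pass every test performed so far...
for x in observations:
    assert hypothesis_a(x) == hypothesis_b(x)

# ...yet they predict different outcomes for the next experiment.
print(hypothesis_a(4), hypothesis_b(4))  # 8 vs. 8 + 4*3*2*1 = 32
```

Nothing stops us from adding a fifth observation and ruling out hypothesis B; the trouble is that another polynomial term vanishing on all five points can always be bolted on, and so on without end, which is exactly the infinity of rival hypotheses at issue.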
Now of course the obvious response of scientists to this argument is to demand that philosophers come up with one—just one!—hypothesis that explains the evidence gathered by some modern science as well as the accepted theories do. The philosophers’ proper answer is, “Sure—just give us a few hundred well-trained grad students and a couple of decades.” Current scientific theories look as impressive as they do, and succeed as well as they do in predicting the behavior of things in the world, because armies of scientists have beavered away for a couple of centuries tinkering with their theories to make them fit the observed behavior of nature as closely as possible. That’s what scientists do, and they’re good at it. The models they’ve created do an excellent job of predicting the behavior of many things in nature—but again, that doesn’t make those models true. It just makes them useful.
The history of science is among other things a potent antidote to the claim that today’s accepted theories have some permanent claim on truth. No theory of nature has ever looked as imposing and convincing as late nineteenth century physics. The physics of that time provided fantastically accurate predictions of nearly all of the phenomena in nature. Sure, nobody had figured out how the sun could keep producing light and heat long enough to fit the geological evidence for life on earth; the planet Mercury had a wobble nobody could explain, and attempts to explain it by postulating another planet named Vulcan hadn’t succeeded very well; and heat and light radiating from a black body behaved in ways that really didn’t seem to make any sense at all—but all those were tiny little details that would surely be solved with a little more work.
By 1910, due to those tiny little details, the entire structure of late nineteenth century physics was in ruins. All those carefully developed theories had to be scrapped because the tiny details turned into vast gaping chasms that ran straight through the middle of physics. Worse, the two theories that more or less accounted for them—quantum theory and the theory of relativity—contradict each other in important ways. That’s why physicists ever since have been trying to come up with so-called Grand Unified Theories to bridge the gap. They’ve failed so far, and there’s no particular reason to believe that a second century of effort will bring them any more success.
The collapse of certainty that flattened the soaring edifice of late nineteenth century physics isn’t unique in the history of science. It isn’t even unusual. As Thomas Kuhn showed more than half a century ago in his book The Structure of Scientific Revolutions, this is the normal rhythm of scientific discovery and explanation. It happens for a reason I discussed in an earlier post: scientific facts are social constructs.
Every step along the way from the first inkling of a hypothesis to the published textbooks that enshrine (or entomb) the work of past research is shaped, often to an overwhelming degree, by social interactions among scientists, between scientists and the institutions that pay them and finance their investigations, and between the scientific community and the larger society. Nature gets a vote in there by way of experiment—that’s what makes science more practically useful than many of the other ways our reality is socially constructed—but scientific reality is still a social product. Why does one hypothesis get turned into the centerpiece of a theory while others get discarded? By and large, it’s a matter of social pressures within the scientific community.
Glance back over the history of science, too, and you’ll notice something else that doesn’t get much comment: the influence of social factors over scientific inquiry has increased over time. Consider the phenomenon of peer review. Back in the nineteenth century, that didn’t exist. A scientist who completed a series of experiments published a paper about them, other scientists read the paper and did their own experiments to support or disprove the claim made in that first paper, and the whole thing would be hashed out in the letters columns of journals, sometimes for years, until it was clear who was right. That sort of free-for-all made for excellent science, and may explain why the rate of technological advancement was higher by most measures in the late nineteenth century than it is today.
Today? If you submit a paper to a scientific journal, it goes out for peer review, meaning that the journal sends it around to three or four acknowledged experts in the field and won’t publish it unless they give it a thumbs up. The politics around who gets to be peer reviewers in each subset of each field of science are intense and often bitter, and quite often a paper that offers evidence disproving the conventional wisdom in some field of science will be denied publication no matter how good the research is, because the peer reviewers are committed to the defense of the status quo. They may have good financial reasons for that: in an era when most funding for scientific research comes from corporate sources or from government bureaucracies “influenced” (we can use the polite word) by corporate money, who gets funding and who doesn’t has much more to do with quarterly profits than it does with good science.
Over time, in other words, the models of the cosmos upheld by scientific institutions, by the media, and by schools and universities as the truth about nature have become more and more influenced by social factors. Meanwhile, while scientists are encouraged to criticize the work of other scientists, people outside the scientific community who raise awkward questions about the validity and accuracy of this or that model are shouted down by the propagandists of science, and by all those figures in authority who benefit from claiming that their ideas shouldn’t be questioned. In effect, scientists have become the priesthood of the modern industrial world, handing down claims about the cosmos that no one else is supposed to question, and (as priesthoods always are) being manipulated by existing power structures to defend the status quo.
This same insight can be summed up neatly in another way. The claim that the coming of science meant the end of enchantment is completely mistaken, because science is itself an enchantment. The world inhabited by true believers in science is an enchanted world, a world where certain people in white lab coats have a unique ability to know the truth about nature, and the rest of us are supposed to accept whatever they say on blind faith, no matter how often the approved dogma changes and no matter how much harm it causes. There was no disenchantment of the world; instead, we exchanged an old enchantment for a newer one. Granted, the new enchantment had its advantages, but it also has had tremendous costs, and the bills are still coming due.
Yet there’s another factor in play, because the enchantment of science is breaking down around us right now. We’ll talk about how that’s happening, and what that means, in future posts.