
The Unaccountability Machine

Under No Management

Dan Davies lays out why the systems we’ve built for ourselves are driving us to disaster and despair.

Review by Charlie Clark

The place to get started with The Unaccountability Machine is in the airport. Firstly, it’s an airport kind of book, slim and digestible, one that will fit right in at Hudson News. The question is whether to shelve it with Malcolm Gladwell and Michael Lewis and HBR’s latest Must Read—the pop economics and business books—or with the thrillers. Because while it’s definitely pop social science, it’s also a kind of noir: starting from a series of apparently unrelated corporate misdemeanors, Dan Davies follows a trail of clues that leads all the way to the top of our economic system.

Secondly, The Unaccountability Machine is a book about the experience of the airport, a book about interacting with huge, complex systems and those systems’ inflexible rules. In fact, the airport comes up several times toward the beginning of the book as Davies describes the problem of accountability: Why is the person at the service counter so eager to help and yet so incapable of solving any problem whatsoever? Why are we still taking off our shoes? Why did employees of KLM Royal Dutch Airlines hand-feed 440 squirrels into an industrial shredder? Ultimately, Davies thinks he can link the irrationalities of the airport to everything from the 2008 financial crisis to the rise in “deaths of despair” to MAGA and Brexit.

According to Davies, the root of the crisis of accountability is that the complexity of the modern world renders old modes of decision-making obsolete. When people are overwhelmed by information, they make systems, which “restore the possibility of decision-making, by narrowing down the space of things that need to be thought about…. In order to manage an industrial society, you need decision-making systems that work on an industrial scale, and we don’t seem psychologically equipped to handle this.” Davies calls these industrial-scale decision-making systems “accountability sinks.” An accountability sink functions by taking the decision out of any identifiable person’s hands and preventing “the feedback of the person affected by the decision from affecting the operation of the system.”

In short order, Davies admits that another word for “accountability sink” might be “policy.” A system where decisions are made not by potentially capricious individuals but by reference to fixed criteria could be described as “the rule of law.” If we can all agree that “having rules” is, in theory, preferable to being subject to the personal whims of front-line service workers, then why, in practice, are our rule systems so dysfunctional that we’re tempted to burn the whole thing down? This mystery is what Davies spends the rest of the book investigating.


The heart of The Unaccountability Machine is a primer on the forgotten discipline of cybernetics. In Davies’s telling, cybernetics was a kind of sickly triplet, birthed alongside mathematical economics and computing by the Second World War, one that failed to thrive and had become largely irrelevant by the 1970s. The word itself derives from the Greek kybernḗtēs, meaning “the person steering a ship.” As a discipline, cybernetics studies the construction and management of complex systems, and many of its foundational insights derive from the wartime experience of its early practitioners, who found themselves reflecting on the mind-boggling complexity of military operations, logistics, and strategy, and the interactions between them.

As a sort of précis of Davies’s primer, I would suggest that the essence of cybernetics can be captured by three distinct “moves”: 1) Don’t attempt to model or understand complex systems analytically. It’s not possible and will only get you in trouble. Treat them as “black boxes.” 2) Focus on what you can understand, which are the inputs and outputs of those black boxes. For example, when I push the tiller to the right, the ship steers to the left; I don’t need to know anything about shipbuilding or fluid dynamics. 3) Pay special attention to the presence of feedback in a system, where the output of one black box can serve as the input of another black box. Feedback is powerful and potentially dangerous. From these three moves, much of the rest of cybernetic theory follows.
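To see all three moves in miniature, consider a toy thermostat, a sketch of my own devising and nothing drawn from the book: it never models the room’s physics, it only watches the temperature that comes out of the black box and feeds it back in as the heater’s input.

```python
import random

def room(temperature, heater_on):
    # Move 1: the room is a black box; we never model its physics.
    heat_loss = random.uniform(-0.5, 0.1)            # unmodeled environment
    return temperature + (0.8 if heater_on else 0.0) + heat_loss

temperature = 15.0
for step in range(20):
    # Move 3: feedback, with the black box's last output driving its next input.
    heater_on = temperature < 20.0
    # Move 2: all we ever handle are inputs and outputs.
    temperature = room(temperature, heater_on)
    print(f"step {step:2d}: {temperature:5.1f} degrees, heater {'on' if heater_on else 'off'}")
```

The controller knows nothing about thermodynamics, yet the room settles near the setpoint; that ignorance-plus-feedback is the whole cybernetic posture.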

Davies is chiefly interested in one practical application of cybernetics, the viable system model, which specifies five subsystems that any decision-making system needs to have in order to function and adapt to changes in the environment. A final bit of jargon is worth introducing: cyberneticians talk about systems as managing “variety” and what they do as “variety engineering.” Variety is “a quantity like information, but in contexts of large systems and where you’re talking about control rather than transmission. A rough definition would be that the variety of a system is the number of states that it can be in.” Returning to our example of the ship, the variety of its steering system is the 180° span between port and starboard. It follows that, in order to manage the direction of the ship, the steersman needs a rudder with 180° of movement: variety in the controller must at least match variety in the system.
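This matching principle is what cyberneticians call the law of requisite variety, and a toy count makes it concrete (the numbers here are mine, chosen only for illustration):

```python
import math

disturbance_variety = 6   # states the sea can push the ship into
control_variety = 3       # distinct corrections the steersman can make

# Requisite variety: even a perfectly played controller cannot shrink
# the number of possible outcomes below disturbances / controls.
residual_outcomes = math.ceil(disturbance_variety / control_variety)
print(residual_outcomes)  # -> 2: some variety always escapes this regulator
```

A steersman with fewer moves than the sea has moods will, on some days, simply be steered.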

The problem of building viable systems is, therefore, the problem of generating enough control variety to manage the variety in the system and its environment. The cyberneticians expressed this in terms of five subsystems: 1) Operations, the subsystem directed to “making change in the real world,” 2) Regulatory, the subsystem directed to managing variety generated by Operations, that is, “enforcing rules for sharing and scheduling,” 3) Optimization or Integration, the subsystem directed to “the management of each individual operation in order to coordinate their activities towards a particular purpose,” 4) Intelligence, the subsystem directed to collecting information about variety in the environment that does not currently impact the System but may in the future, 5) Philosophy or Identity, the subsystem that remembers what the System is for and manages how much the first three functions (Operations, Regulatory, Integration) change based on input from the Intelligence function. On a ship, these subsystems might correspond to the rowers (Operations), the coxswain calling the strokes (Regulatory), the steersman moving the rudder (Optimization/Integration), the barrelman in the crow’s nest keeping an eye out for rocks (Intelligence), and the captain keeping the whole voyage on course (Philosophy/Identity). A System with all five of these subsystems in working order will be “viable”: it will have enough control variety to manage itself, it will not break down because of unanticipated environmental shocks, it will be able to adapt while maintaining its purpose.
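For readers who want to watch the architecture run, here is a toy simulation of my own; the roles follow the ship analogy above, and none of the names or numbers come from the book.

```python
# The five subsystems as a toy control loop, one function per role.
def operations(ship):     # 1. rowers: make change in the real world
    ship["position"] += ship["speed"]

def regulatory(ship):     # 2. coxswain: cap the stroke rate so rowing stays orderly
    ship["speed"] = min(ship["speed"], ship["max_speed"])

def optimization(ship):   # 3. steersman: coordinate effort toward the current goal
    ship["speed"] += 1 if ship["position"] < ship["goal"] else -1

def intelligence(ship):   # 4. barrelman: scan the environment for new variety
    ship["hazard"] = ship["position"] > 20        # rocks sighted past mile 20

def identity(ship):       # 5. captain: keep the purpose, revise the plan
    if ship["hazard"]:
        ship["goal"] = 20                         # re-chart rather than wreck

ship = {"position": 0, "speed": 1, "max_speed": 3, "goal": 30, "hazard": False}
for _ in range(15):
    intelligence(ship)
    identity(ship)
    optimization(ship)
    regulatory(ship)
    operations(ship)
print(ship["position"])   # the voyage adapts to the rocks without losing its purpose
```

Delete any one of the five functions and the loop either runs onto the rocks or never gets anywhere, which is roughly Davies’s diagnosis of the modern corporation.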

Davies’s argument is that something has happened to our institutions, from corporations to governments, to make them systematically unviable. Who is responsible for this vast conspiracy, this spree of crimes against good cybernetic practice? Davies’s chief suspect is economics, which he positions, at least in its mainstream, mathematical form, as a kind of anti-cybernetics. Cybernetics handles complexity by drawing black boxes around what it cannot understand. Economics just assumes complexity away by building highly abstract models of commodities, spot markets, homo economicus, and representative firms, linked together by an all-knowing efficient market. While economics may yield many important insights, its reductionism also creates “blind spots,” and the widespread adoption of “thinking like an economist” has spread those blind spots into every nook and cranny of management. In cybernetic terms, all the many kinds of variety that don’t fit into economic models are running totally unchecked.


One way the hegemony of economics over management expresses itself is in the dominance of financial reporting. Davies describes many ways that financial reporting methods can distort the actual functioning of a business and create perverse incentives. Yet businesses systematically underinvest in alternative modes of accounting, which would be more useful for management. This underinvestment is connected to what Davies describes as Friedmanism, the reduction of the management function to maximizing shareholder value. For Davies, this represents a faulty Philosophy or Identity subsystem. A corporation that only understands its identity through this narrow lens will adopt a reductionist approach to its own functions: “Over the course of [the 1980s and 90s], the industrial world’s productive system—the corporations—set about the equivalent of amateur brain surgery, hacking away bits of their regulatory and information-handling system, to see if they could do without them.”

Almost five decades of reduction in management capacity has coincided with an unprecedented increase in the complexity of doing business in a globalized, computerized economy. The result is what Davies calls the polycrisis. The masses find their own lives unmanageable: they are overloaded with information, ignored by the decision-makers, driven to despair as they lose control over their lives. Meanwhile, the managerial class is caught flat-footed and ill-prepared by shocks like the 2008 financial crisis or the COVID-19 pandemic. Enter the populists with their promise of a return to a simpler time: Make America Great Again, Get Britain Out, and we can all go back to supporting 2.5 children on one factory job.

Davies thinks the populist revolt is based on a false promise, that there is no going back, even if the current System is unviable. In his final chapter, “What Is to Be Done?”, he proposes three reforms. First, the corporate Philosophy/Identity subsystem should allow for goals beyond maximizing shareholder value: Friedmanism is a false consciousness born of “an almost nihilistic lack of trust in human judgment,” and if they were allowed to think any other way, the managers of corporations might identify a variety of purposes for which their systems exist. Corporations would no longer “be unable to change course even when faced with the imminent extinction of human life.” Second, and in furtherance of his first proposed reform, Davies thinks something must be done about the way that debt and the threat of a leveraged buyout are used to discipline corporations into short-term profit-seeking. He proposes requiring a corporation buying out another to guarantee the target corporation’s debts. Finally, Davies suggests that AI can be harnessed to replace the layers of middle managers that once allowed the feedback of the “decided-upon” to reach the attention of the decision-makers and, if not to impact the current operation, at least to be incorporated in the design and planning of future operations. The problem, in Davies’s view, is that the masses are not psychologically prepared to accept their robot overlords.

Of Davies’s three proposals, I am most skeptical, by far, of his expectation that computing, which strangled cybernetics in its infancy, will now come to its rescue. Twenty-first-century capitalism has already decided that human variety will be managed through surveillance and addiction; we should not expect the master’s AI tools to take a different tack. But more basically, perhaps Davies is wrong that economic involution is a one-way street. Maybe Davies is right and populism is a dead end. And yet what is impossible about a simpler world? There is not—not in evidence at any rate—any natural law that the complexity of human life must infinitely increase until we can manage it only by resorting to more-than-human intelligence. It may yet be possible to (re)form the elite, the System’s residual decision-makers, in a Philosophy that prizes the human scale.

Charlie Clark is a writer and retractor. He lives in New Hampshire.

The Unaccountability Machine: Why Big Systems Make Terrible Decisions—and How the World Lost Its Mind was published by the University of Chicago Press on April 4, 2025. Fare Forward appreciates their provision of a copy to our reviewer. You can purchase a copy directly from the publisher.
