Democracy, or democratic aspirations, as we have known them, are intimately tied to conceptions of equity or equality, whereby when it comes to decision making, no one is counted, a priori, “more” than anyone else (hence the crucial contrast of democracy is not so much with monarchy as it is with aristocracy). Giambattista Vico must be credited with the argument that philosophy, taken in an extremely generic sense as commitment to principled debate, can be construed as a commitment to the authority of rational argumentation rather than the authority of precedent, custom, prestige, personality, or brute force. The emergence of democracy, Vico argues in The New Science, is best understood, in materialist terms, as the rhetoric of democracy. On Vico’s view, the masses “invent” philosophy (as Nietzsche also realized when he emphasized Socrates’ status as a “pleb”) as an appeal for the right to participate in governance on the basis not of any actual equality but only of the formal possibility of equality. This of course sets up an excruciating dialectic, because culture, education, experience, and above all wealth are missing in the people, and so they are never (yet) equal. Even in the minimal case of being consulted or invited to deliberate, it is obvious that there are often serious limitations on people’s ability to do so, grounded in health, education, experience, and so on. Given that these are necessary for governance, the formal equality insisted upon by the people is always short of actual, and thus “the people are always missing” (Kafka/Deleuze).
Thus communists and anarchists often point out that the formal notion of equality is a ruse (and Rousseau already realized this), that in fact this liberal conception of an individual as “equal” to another depends on conceptions of autonomy and agency (and ultimately on property) that are either impossible or undesirable. At the heart of the weakness of contemporary democracy is the role that economic inequality plays in undermining our conception of what it means to be formally equal to one another, not only in the eyes of the law but also in terms of our ability to govern together (or on one another’s behalf, in republics). There is a link, also, between impasses in modern epistemology and impasses in modern governance. This link, or rather gap, is filled in by an economic and finally a religious set of imperatives that cannot be rationalized but must always be obscured (and glorified as obscure, mysterious, as Agamben has demonstrated) in order for the status quo to be maintained.
Like Henri Atlan, Ian Hacking also draws the line connecting the impasse at the core of our conceptions of chance and probability to the contemporary crises in both knowledge and government. In the 2006 re-introduction to his 1975 classic study, The Emergence of Probability, Hacking compares the dominance of “evidence-based” medicine over clinical medicine directly to democratic aspirations. The crucial link, here, is finance. It is cheaper to decide which cures to pursue based on randomized trials, to the obvious profit of pharmaceutical approaches over all other possibilities (or rather to generic, one-size-fits-all approaches over time-consuming responsiveness to the needs of particular individuals). Likewise, it is simpler and cheaper (in the short run) to govern demographics rather than communities, averages rather than individuals. As Foucault already understood in the early 1970s, just as neoliberalism was emerging, it is economics that is the crucial suture between the apparent power of modern modes of probabilistic and statistical inference, and new possibilities of governance of “the masses” by governing as little as possible, by determining how to adjust or intervene only in order to optimize what is taken to be a stochastic (chaotic yet probabilistic) aggregate of desires, powers, interests, energies, and masses (sic).
What is crucial to see here is how the rise of probabilistic formalisms as a way of resolving political disputes makes the use and abuse of probabilities a defining issue of politics in the modern, post-17th century era. Hacking observes, following Porter, that “trust in numbers is a consequence not of mathematics but of the drive towards democratic government” (Hacking 2006, Introduction). And yet this is not possible unless people (and their desires and potencies) can themselves be considered as atomic and isolated counters, specific and bounded units. But given that people are irreducibly complex and unpredictable, how is this limitation and bounding possible? This is perhaps another way of asking Nietzsche’s question about how we have made ourselves into “trustworthy” animals.
There are many stories here to tell. But one important story, as David Graeber and others have shown, is that it is the long history of human experiences with money that facilitates the reduction of human complexity to an aggregate susceptible of induction over probabilities. Money makes the variable and incalculable nature of value appear limited, finite, and subject to both calculation and indefinite storage. But, as Graeber points out, it is only on condition that human beings can be (have been and continue to be) ripped from social contexts and enslaved that they can be treated as roughly interchangeable, and that thus a quantitative “value” can be placed on a human life, or more specifically, on human attributes (i.e. human capital). And the ability to quantify ourselves in general, politically, depends on a prior experience of economic quantification. This experience can only be an experience of enslavement, either in fact or in principle, since it is only a slave that, qua slave, is interchangeable with any other human being, and only as such capable of having her value fully monetized. It is no accident that the predominance of American finance capital is coterminous with the imprisonment and forced impoverishment of most of its population, as well as with the subjugation to U.S. military power of the rest of the world.
What is extraordinary to realize is the way in which, behind the contemporary biopolitical holocaust, in which vast tracts of human life can be justifiably destroyed in the name of profit, lies an unresolved, and perhaps irresolvable debate between two diametrically opposed modes of inference based on probabilities. On the one hand, we have the “Bayesian” view that what legitimates inferences from probabilities is their progressive confirmation or disconfirmation of beliefs. On the other hand, there is the “Fisherian” view that predictive inferences can be made on the basis of experimental conditions under which probabilities are made perfectly random: “we alter aspects of the world that concern us so that they resemble, as much as possible, artificial randomizers like dice” (Hacking, Introduction).
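The contrast between these two modes of inference can be made concrete with a toy calculation. In the following sketch (the numbers and the coin-flip setting are my own illustrative choices, not drawn from Hacking), the same data are handled both ways: the “Bayesian” route updates a prior degree of belief via Bayes’ rule, while the “Fisherian” route treats the setup as an artificial randomizer and asks how surprising the data would be under pure chance.

```python
import math

# The same observed data, approached through the two rival evasions.
flips, heads = 100, 62  # illustrative outcomes of a repeated chance event

# "Bayesian" evasion: revise a prior degree of belief by Bayes' rule.
# A uniform Beta(1, 1) prior over the chance of heads yields a
# Beta(1 + heads, 1 + tails) posterior after the evidence comes in.
alpha, beta = 1 + heads, 1 + (flips - heads)
posterior_mean = alpha / (alpha + beta)  # the revised degree of belief

# "Fisherian" evasion: treat the world as a perfect randomizer (a fair coin)
# and ask how improbable a result at least this extreme would be.
p_value = sum(math.comb(flips, k) for k in range(heads, flips + 1)) * 0.5**flips

print(f"posterior mean degree of belief: {posterior_mean:.3f}")
print(f"one-sided p-value under pure randomness: {p_value:.4f}")
```

The Bayesian answer is a revised belief (here about 0.62); the Fisherian answer is a verdict about the randomizer itself, with no belief revised at all. Note how the second route only works if the phenomenon has first been arranged to resemble a die.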
Hacking points out that these two options are both evasions.
“. . . it is instructive that each kind of probability has evolved its own way, not to solve the problem of induction, but to evade it. The degree of belief evasion uses the idea of learning from experience by exploiting Bayes’ rule. The frequency-type evasion deploys the idea of inductive behavior. From a purely logical point of view both evasions are defective. They are grounded less in logic than in a moral sensibility. The degree of belief evasion demands that one should be true to one’s former self. The frequency-type evasion relies, as C.S. Peirce understood, on the cardinal virtues of Faith, Hope, and Charity.” (Hacking, The Emergence of Probability, 2006 Introduction)
But these two options, these two evasions, are not symmetrical, and are in fact the components of a wicked dialectic. Today, the Fisherians have won (a situation which is very bad for statistical science, as Ziliak and McCloskey point out in The Cult of Statistical Significance). What this means is that our social engineers (i.e. economists and their medical, social scientific, governmental and military lackeys) will do anything necessary to turn human life itself into a game of perfect randomness (including the ecological destruction necessary for this to occur, since ecological and geographical variation interfere with “frequentism”). What is absolutely bizarre and horrifying is the way that this project of wanton human destruction continues to be carried out under the aegis of “progress,” i.e. the “frequentist’s” perverse “faith, hope, and love,” the virtues by means of which we carry on at any cost and endure any suffering. Ironically, the stoic, Calvinistic faith embodied in the “Protestant ethic,” by means of which one works out one’s salvation with fear and trembling as the market fluctuates day to day, is no longer the ethos of the capitalist classes, as Max Weber thought it was. As pointed out by Lazzarato in The Making of Indebted Man, this ethos of faith, hope, and love has become outsourced onto the laboring and impoverished classes, as they willingly and at unimaginable psychic and somatic cost take on the precariousness and systemic risk germane to the financial system as a whole. (As Adam Kotsko pointed out recently, this is Marx’s point that people will do anything, even work for free, in order to express themselves through unalienated labor).
Obviously it is time for revolt, and as ever, the revolution is just beginning. But a key strategy at this point is to disarm the theologico-political time bomb that is the modern mis-relationship to chance and probability, and to the putatively “natural” or “absolute” character of the uncertainty, randomness, and chaos to which it is supposed to “clearly” attest. Modern governance (since the 17th but especially since the 19th century) has attempted to be a “positive” science of the “masses,” as if political decisions were quite literally a matter of correct thermodynamic equations.
The ridiculous “conservative vs. revolutionary” alternative is a double evasion.
In Atlan’s terms we can describe these two positions (Bayesian and Frequentist) as the nostalgically secular and the not-yet-fully-desacralized approaches to chance (probability). The Bayesians are attempting to soberly and conservatively infer only what can be shown to be consistent with previously confirmed patterns of belief, thus justifying action in the present in terms of its likelihood to conform to past patterns. They are the nostalgically secular (corresponding to contemporary “leftism”). To put the matter in religious terms, this nostalgic view of chance as secular pretends that chance is nothing divinely creative, but simply the “next” opportunity to renew life as we have known it. On the other hand, the Fisherians or “frequentists” are the radicals who would have us re-tool reality in order to make it predictable, turn life as we know it into a genuine crapshoot. What is obvious is that contemporary “conservatives” (i.e. Republicans) are not interested at all in conserving the past but are in fact “radical” and bloodthirsty Fisherians in search of homogeneity—as we all know, the right wing wants to destroy and not to conserve life, while “liberals” (i.e. Democrats) are the pathetic Bayesians who want to conserve as much of the present as possible (in keeping with the past). So the argument for full communism, to escape the liberal pathos, cannot be simply an argument for conservation of life via the state unless it is also an argument for an entirely different relationship to chance and contingency, one that is willing to radically alter our conception of what counts as a “viable institution,” as such. That this would be totally unrecognizable to us from our particular historical vantage point is not an argument against its truth.
It is of course a matter of real risk, throwing human life open to contingency in a very different way. But this politics is also the “faith” and “hope” (terms used under advisement, or sous rature) that in so entering into a more attentive awareness of singularity, the very need we apparently have for the kinds of security apparatuses, actuarials, and contingency plans we currently rely on (and which in any case do not work) will fade like the nightmare it currently is.
A common misperception about full communism is that, under communism, the state would or should be able to control the economy in a way that it is either unable or unwilling to do, under conditions of capitalism. On this view, full communism means a command or planned economy, with the implication that economic activity can be fully rationalized—if not perfectly or completely predicted, then at the very least subjected to constant scrutiny, re-evaluation, and re-assessment by all those concerned. In the best case, this would look something like what Bruno Latour calls the power of “following up” in the sciences, whereby the various mediators or transitions between theories, experimental practices, the dissemination of information, the effects of technologies, etc., can always be (at least in principle) re-examined, subject to further investigation, more or less “democratically” contested, and so on. Indeed, if only policy debates and implementation were, in this sense, more scientific, there would be incredible progress.
Be that as it may, I think it is crucial for all critiques of capitalism and capitalist ideology, and for all proposed alternatives, to come to grips with certain fundamental limits to human knowledge, no matter how pragmatically construed. In all human activities, including not only market exchanges, but also up to and including human language itself, there are fundamental ambiguities, ambivalences, and unforeseeabilities that are intractable. This element of uncertainty in human activity is not only the occasion for deception, manipulation, and fraud, but also for creativity, innovation, surprise, and enjoyment. What is at stake here is the fundamentally ludic or game-like character of human activity, generally, and it seems to me that any vision of state communism has to account for not only this ludic dimension of human behavior, but more importantly has to better support the necessity of games and game-like structures than capitalism does.
This is true even if the ludic is often agonistic and even painful. One of the dirtiest (open) secrets about our addiction to capitalism, at nearly any human and ecological cost imaginable, is that it presents itself as a tremendously powerful, attractive, and intricate game, making even the pain involved (no pain, no gain) attractive to us. It is a game played most purely, as Bataille was perhaps the first to see clearly, by the extremely poor and the extremely rich—that is, by those who give everything they have, placing everything on the line, every day, whether for a lottery ticket or for high-risk debt swaps. Bataille shows very efficiently how important it is to insist upon the identity of the desperately poor and extravagantly wealthy, precisely from this point of view, rather than imagine that what human subjectivity needs or wants is some kind of middling or average viability or “sustainable lifestyle.” Bataille saw that even in less extreme forms than lotteries and foreign currency arbitrage, any exchange, any transaction, is a game in which one is playing not only to gain wealth and/or power, but to see, at any given moment, what it is possible to get away with, how far one can negotiate, and to test the limits of possibility, as such. To play for ultimate stakes.
In his recently translated Fraud: The World of Ona’ah, Henri Atlan puts this problem in an extremely interesting way. In his Sparks of Randomness, Atlan had argued that a secular age is grounded, in part, on the foreclosure of any “sacred” or “prophetic” meaning being ascribed to random events, including the more or less probable outcomes of various sorts of stochastic activities such as marketplace behavior. In Fraud, Atlan continues this line of thought by arguing that secular cultures must encourage a similar tolerance of a certain amount of fraud in all human interactions. The Rabbinic wisdom tradition, following Scriptures, recognized that on some level it is impossible not to defraud one another, in some minimal way, whether in commerce or in speech, unless we assert that we are complete masters not only of the means and ends of our own intentions, but that we would know in advance how all intentional activities would affect, support, or undermine the intentions of others. The secular tolerance of fraud parallels a tolerance of the unknown that was formerly religiously mediated by sacrifice rituals. In the “secularized” era of chance, we no longer ascribe chance to the influence or presence of a divinity, let alone to signs of divine revelation. The theological-political problem, of course, is that this secularization is necessarily an incomplete tendency, even in the present day, and what tends to happen is that religious meanings are brought back in, at the last minute, to suture the unbearable anxiety of shared responsibility for the unknown and for the ungrounded character of human decisions.
But it is extremely difficult to know how much fraud to tolerate. The Talmud gives the rule of one-sixth: it is acceptable to over-charge someone for up to one-sixth of the fair market price, but not more. This quaint conception seems impossibly naïve, especially in a market context as complex and high-velocity as contemporary global capitalism, where money flows as fast as information, and where the form of money itself (and by extension prices) is increasingly nothing but language: a series of promises backed by other promises, with no fixed or finite limit (i.e. a gold standard) backing or controlling the flow of promises (credit and debt). But the opportunity here, Atlan recognizes, is that we should be able to bring the same ancient and long-developed sense of appropriate and inappropriate speech into our conception of what is economically permissible and impermissible.
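The one-sixth rule is simple enough to state as a calculation. The following sketch encodes only what the Talmudic rule above says (the function name and the sample prices are my own hypothetical additions):

```python
from fractions import Fraction

# The Talmudic ona'ah rule as stated above: an overcharge of up to
# one-sixth of the fair market price is tolerated; beyond that, not.
ONAAH_LIMIT = Fraction(1, 6)

def within_tolerance(fair_price, charged_price):
    """Return True if the overcharge does not exceed one-sixth of the fair price."""
    overcharge = Fraction(charged_price) - Fraction(fair_price)
    return overcharge <= ONAAH_LIMIT * Fraction(fair_price)

print(within_tolerance(60, 70))  # overcharge of 10 on 60 is exactly one-sixth: tolerated
print(within_tolerance(60, 71))  # overcharge of 11 on 60 exceeds one-sixth: not tolerated
```

The quaintness the paragraph above notes is visible at once: the rule presupposes a knowable “fair price” against which the overcharge can be measured, which is precisely what high-velocity, promise-backed money makes elusive.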
In this sense the struggle against contemporary neoliberal biopower is not to insist that economic problems can only be addressed in the “political” economy, stupid, but only in the “theologico-political” economy, dumbass. For in the end, there is no way to regulate economic behavior other than by religious and moral (i.e. theological) canons, and surprisingly, this can be seen at the most abstract level of what counts as economically “reasonable”—that is, precisely in the vision of the market as a place of “random” activity (as Adam Smith already well understood, but with a conclusion opposite to my own). Whether or not we think we believe in providence, we always think we know something about providence. The games we tolerate in the market are still sacred games, still reflections of what we actually believe about providence (or fate or destiny), about the sacred and oracular character of chance outcomes, and about the meaning or lack of meaning that human lives are deemed worthy of in relation to and in view of this game (or games).
So the objection to capitalism cannot be that it plays games with our lives. The argument cannot be that life is not a game, but that there must be better games to play than the ones we are addicted to.
But for the moment, we are on the horns of a dilemma.
Either the economic endgame is sacred or it is not.
If it is sacred, then the results of marketplace activity are providential and oracular, and the game itself is beyond question (even if it can and must be optimized).
If the game is not sacred, then the results are neither providential nor oracular, and the game can always be abandoned or fundamentally changed, if it is seen to produce undesirable or oppressive or unjust results.
But the dilemma can be evaded, through the following ideological ruse:
The game is explicitly secular and implicitly sacred: it is incompletely desacralized.
The dominant classes exploit the ambiguity by presenting results that are in their favor as providential and oracular, since those results can be construed as preserving the game as a whole. Simultaneously, it is necessary to reassure the subordinate classes that results not in their favor are the effects of a secular, non-oracular stochastic reality whose chances, when non-optimal for the subordinate, can be excused as insufficiently anticipated by current knowledge practices. Thus the masses can be appeased by an appeal to their self-sacrificial participation in an ongoing project of the refinement of knowledge. This suggestion placates the need of the subordinate for the appearance of democracy, since in principle knowledge is an open affair, not restricted in principle to the wealthy or powerful elites. But in order to ensure that this democratic aspect of knowledge is not effective, access to education is increasingly cut off. Hence (one consequence) the ongoing attacks on both higher education and public education, as well as public health (crucial for thought).
For Atlan, the key to overcoming the evasion, to overcoming the tacit and perverse sacralization of chance, is to completely desacralize chance. It is scientific knowledge—the tradition, practice, and community of scientific knowledge—that for him has and will continue to desacralize chance, even if political and economic interests will continue to distort and pervert the open-endedness of scientific claims (particularly when those claims are based on statistical inferences).
But scientific practice will be impotent politically if its results are foreclosed by the control of dominant interests. The revolt of the people and the refusal of the seductive passivity of comfortable spectatorship at our own live evisceration are also crucial. But this will not take the form of some more sober or even particularly “reasonable” insistence upon regaining control of our lives. Rather, it will be, at least in part, the collective recognition of the fact that the game played for money is the game played to destroy all other games. Success proves nothing in a game rigged in advance, but the solution is not to refuse to play. It is rather to play the ultimate game: to playfully detach ourselves, slowly and deliberately, without fear of death, from the game set to destroy all other games. This game, unlike the game of biopower, takes chance not as an arbiter or judge, directly transcribed into differential equations, but as the differential spirit of each unique occasion.
I need time to digest this, but it strikes me that as you continue to research these questions, Joseph Vogl’s Das Gespenst des Kapitals would be very useful. (Supposedly someone is working on a translation.) He claims that economics is both a prescriptive and an apologetic discipline — instead of theodicy, we now have econodicy (Oikodizee in the original). There’s a lot in there about chance and finance as well.
This is a really impressive and interesting argument. I haven’t read any Atlan yet; it looks like I’ll have to move him up a few spots on the ‘to read’ list.
On the subject of the “ludic or game-like character of human activity”, I’m currently reading Meaningful Games: Exploring Language with Game Theory, by Robin Clark. There’s a good discussion in there of using game theory to represent social or common knowledge (I know that you know that I know that…) and of bounded rationality/satisficing that might be related to your claims about limits to human knowledge. Game theory has of course generally been used as a way of turning political decisions into calculations, and led to grotesque policies like mutually assured destruction, but Clark makes a good case that it doesn’t have to be used that way. It can also be used to study the complicated forms of co-operation that are involved in linguistic communication, in which being a rational agent doesn’t necessarily mean committing collective suicide.
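A minimal sketch of the kind of model Clark works with might help here (the payoffs, strategy labels, and setup are my own illustrative assumptions, not taken from his book): a two-player coordination game in which communication pays off only when speaker and hearer converge on the same interpretation, so the rational move is cooperative rather than adversarial.

```python
from itertools import product

# A toy signaling game: both players gain only by matching interpretations,
# so "rational agency" here means converging on a shared reading.
meanings = ["literal", "ironic"]

def payoff(speaker_choice, hearer_choice):
    # Communication succeeds (payoff 1 to each) only on coordination.
    return (1, 1) if speaker_choice == hearer_choice else (0, 0)

# Find pure-strategy Nash equilibria: profiles where neither player can
# improve their own payoff by unilaterally deviating.
equilibria = []
for s, h in product(meanings, meanings):
    best_s = all(payoff(s, h)[0] >= payoff(alt, h)[0] for alt in meanings)
    best_h = all(payoff(s, h)[1] >= payoff(s, alt)[1] for alt in meanings)
    if best_s and best_h:
        equilibria.append((s, h))

print(equilibria)  # only the coordinated profiles survive
```

Unlike the zero-sum games behind mutually assured destruction, every equilibrium here is a cooperative one, which is the point: game theory can model convergence as easily as conflict.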
There are two points in your post I’m not clear on (though it may just be due to my ignorance of some of your sources):
1. What is the link between Fisherian/frequentist statistical theory and neoliberalism? How exactly do neoliberals want to “re-tool reality in order to make it predictable”?
2. What is the nature of the new “game” of your conclusion? Or is the point that we can’t know in advance what the new game will look like? And why should we call it “(full) communism”, apart from the historical significance of the term as an alternative to capitalism?
Adam–thanks for the reference. It will be a good excuse to read some German after too long. Also, I’m eager for your critical comment if/when you get around to it (I’m teaching political theology next term so all of this is very much on my mind).
nonmanifestation–thanks for the Meaningful Games suggestion, that’s something I will have to check out.
As for your questions, several things occur to me, none of which are definitive at this early stage. 1.) Foucault’s work, both on the societies of control and of surveillance, makes a powerful argument that for the human sciences to “apply” to individuals, those individuals first have to be systematically destroyed and then re-created, in a slow and uneven process, so as to be amenable to tracking by the so-called human sciences. And of course neoliberal biopower is the zenith (or nadir) of this process, with its combination of principles drawn from social Darwinist eugenics, information science, cybernetics, and catastrophic, winner-takes-all “investment” schemes for the extraction of capital surpluses from vulnerable populations. Deleuze notes in “Postscript on the Societies of Control” that the outcome of this process is that subjects pass from being individuals to “dividuals.” So the link between the “Fisherian” approach to probability and neoliberalism is that in order for inferences from statistical findings to be valid, the data set has to reflect a perfectly stochastic order. Mutatis mutandis, in order for policies to appear “fair” that obviously single out and privilege certain populations or class interests, the ideologues must be able to insist that such inequities and asymmetries are the results of random fluctuations (hence the “Fisherian” tag) in a perfectly stochastic universe of forces that no one can mitigate, let alone control, and thus that it is only the government’s role to render the entire “process” (i.e. capitalism) as efficient as possible. (I know I have more work to do to make this argument really stick, which is why I’m immersed in mainstream critiques of the use and misuse of statistical reasoning such as Ziliak and McCloskey’s _Cult of Statistical Significance_, a book written by two die-hard libertarian capitalists yet still an excoriating take-down of what passes as statistical “reason”).
In his recent The Rebirth of History (pp. 74-75 in the Verso English translation), Badiou describes the current political usage of this kind of reproduction of an abstract and arbitrary set of referents for sociological, psychological, and economic concepts of “average,” “normal,” “within the bell curve,” etc. The neoliberal order is heavily invested in “governing less” by treating populations in terms of individuals who either approximate or fail to approximate some abstract individual (the average French person, average American, etc.) who does not actually exist but who is constructed by social scientists from a set of traits that have been aggregated (incoherently) and then averaged, and policy is written not relative to actually existing people but to the relative importance or perceived value of those who do or do not approximate this “average” person (who does not exist). Again, I think this is only possible on the basis of a misuse of statistical reason.
But what I’m really interested in, and this brings me to your second question (2.) is the way in which the situation in statistical inference and probability theory is itself a kind of symptom of a deeply contradictory relationship that “modern” (post-17th century) culture has to chance. On the one hand, chance must be seen as arbitrary, meaningless, and yet through probability theory chance is supposed to make possible knowledge itself–especially where, in sciences like physics we have stopped talking about “causes” altogether and take laws to refer only to sets of possible events (that’s too coarse, but you see the crux of the issue).
As Hacking (probably the world’s expert on the history of probability theory) sees so clearly, we cannot disentangle our interpretation of the status of aleatory phenomena from moral and theological presuppositions (or we can just call them metaphysical presuppositions, so as to include moral anti-realists and atheists in the picture, too). This is why Atlan’s work is so important, because he actually tries to resolve or at least fully confront this problem in terms of the necessity of religious or mythical wisdom even in the context of a scientific enterprise that -must- be allowed to treat chance as “desacralized” in order to advance its research. It’s a strange position, and I’m still developing my own appreciation and critique of it, but the crux of his view is that from the point of view of Spinoza’s third kind of knowledge, mythical wisdom and scientific insight are identical, but that due to the ambiguous (chaotic, evil) situation of human existence, we must treat these two modalities as if they were absolutely separate and had absolutely separate domains, languages, and practices (even though, in the light of “eternity,” they do not).
At any rate, in terms of what games come after the meta-game of capital, there are several ways to think of this, but lately I’m impressed by Ken Surin’s argument in Freedom Not Yet: Liberation and the Next World Order, that we in a very real (if also virtual) sense are –already playing– the games we will play, but until we have fully (painfully, carefully, over time) detached or “delinked” from the state/capital complex we will not be able to fully “deterritorialize” those games. One might claim that the games of art, eroticism, science, etc., are already utopian or “heterotopian” (Foucault’s terms), and that essentially all state/capital amounts to is a massive form of parasitic resistance to the desire and power and reality of the life/deaths we already play. The problem then is not how to create another world but simply how to dismantle capital’s vampiric hold on and resistance to the lives, relationships, games, pleasures, agonies, ecstasies we already know and love (even if in the highly controlled and surveilled and near-extinct bodies we currently inhabit). I have also not begun to fully clarify the senses of play and game that I want to work with (after Huizinga and Caillois) but I’m working on that, too.