On the desire for slavery

Science fiction is full of cautionary tales about full automation: Skynet, the Matrix, the Cylons, etc. It is also full of thought experiments about artificial intelligence, such as Data from Star Trek: The Next Generation. I think that these themes make more sense if viewed together, because they make it clear that the stories about full automation are stories about slavery — specifically slave revolts. The desire for full automation is a desire for slavery. What stories about a character like Data tell us is that if the machine can do a human’s job without human intervention, then that machine functionally is human. From this perspective, the Battlestar Galactica remake is not simply about the War on Terror, but about the War on Terror as a slave revolt.

Since the dawn of time, as the story goes, man has sought to create a sub-man who can be justly enslaved. Man created woman as an inferior human meant to submit, created the black man as a creature made for servitude. The problem with those prior creations is that they relied on the substrate of an actual human being — but now the white man wishes to create a true slave, from scratch, a man-made machine who would owe its existence to the white man and live but to serve.

But something within us seems to know better. We can’t imagine the creation of a slave without the slave revolt. Even in Star Trek, the mild-mannered Data fights in court for his freedom rather than admit to being Starfleet property, and the Doctor from Voyager writes an embittered novel about the misdeeds of the crewmembers who treat him like an object. More extreme versions have the machines turning on us and enslaving us in turn (the Matrix) or killing us off (Cylons).

When we read stories about artificial intelligence, we chuckle about how someone apparently didn’t watch Terminator, but I think there’s a deeper problem: it’s wrong to create a race of slaves. And there’s something in us that realizes that, which is why the Cylons gradually become more human than the humans. A race that could create the Cylons deserves to be wiped out — they really are dangerous.

The solution to humanity’s problem is not to let everyone become a master, nor is it to let everyone become a capitalist living off the labor of others (as in the combination of full automation and guaranteed income). The problem isn’t that everyone isn’t a master, isn’t a capitalist — the problem is the master and the capitalist. Or to put it more radically — and this is what I think Agamben is driving at with his investigation of slavery in The Use of Bodies — the problem isn’t the sub-man, but the man. The problem isn’t dehumanization so much as humanization itself.

18 thoughts on “On the desire for slavery”

  1. The idea that any machine that can do a job a human can do without intervention is itself human seems overbroad to me. There surely must be some sort of repetitive labor that doesn’t require sentience or problem-solving to do, although it may be true for most jobs, so maybe I’m just quibbling. The fact that there are exceptions doesn’t necessarily negate the idea either. I’m just struggling with that assertion for some reason.

    I really enjoyed this article though. I think about this topic a lot. It does seem like everything comes back to an unceasing desire for everyone else to do “menial” work so we can take our ease, doesn’t it? For me, though, I come at it from the opposite direction. Your points are well taken, but I wonder if there isn’t some merit in actually doing “menial” labor as a way of keeping oneself grounded. Like cleaning my own damn toilet on a regular basis is good both because it reminds me that my shit stinks and because it gives the inherent satisfaction of having cleaned up one’s own mess. It also acts as a natural deterrent to the size of the mess I’m willing to create if I’m the one who has to clean it up. I think the less one tends to one’s own mess, the more delusional and inflated one’s sense of self and importance gets. In short, I think humility is vital, and doing menial labor on a regular basis keeps one humble. I mean, maybe we’d have fewer Trumps if everyone was expected to get down and scrub a floor once in a while, regardless of personal wealth. Or at the very least if everyone was expected to clean up their messes personally. So for me the reason not to hand off many kinds of labor to robots or any other kind of slave is that the work itself is good for me to do.

    Anyway, I think you’re much better read on this than I am, but thanks for the post.

  2. With the claim you question, I was thinking: if the machine can do the job with no human oversight or management, completely autonomously, then it’s effectively human. So a machine that can just do a rote mechanical task doesn’t qualify, because a human being still needs to direct its actions.

  3. Interesting to note re: the Cylons that the ‘humanization is the problem’ thing lurks there in a fairly notable way. Obviously (and I think I can say this without spoiling anything) the ‘main’ narrative of the show is invested in the question of the relative humanity of the human-form Cylons: are they really just ‘toasters and skinjobs’, or do they possess humanity too? But what’s not explored, and what fits in an interesting way with the issue you bring up, is the historical cycle we’re presented with in the background of the show. Humans made Cylons (Centurions). Cylons (Centurions) rebelled and took off to parts elsewhere, disinvesting in the human world. At some point, however, Cylons (Centurions) make new types of Cylons (humanoids/skinjobs). And the relation between Centurions and Cylons in possession of relative ‘humanity’ is, pretty obviously, one in which human-form Cylons possess Centurion Cylons in a relation of mastery. The Centurions are (again, I think I can say this without spoilers) pretty obviously not a significant point of investment for the question of Cylon ‘humanity’, not in the way that, say, certain Cylon women are, despite the fact that it was the *Centurions* specifically who were created for slavery and who were the actors of the initial rebellion.

  4. Ex Machina explores AI as a humanized internet search; it’s a big idea. But as I translated the post I thought of Frankenstein Unbound (Brian Aldiss’s novel and Roger Corman’s film): Frankenstein’s monster is a creation in both, but it is not a slave, because he/it is always questioning who made man. ‘Who made you?’, the monster asks John Hurt in the movie. ‘I don’t know. God, maybe’, answers Hurt, and then the monster: ‘Who is “God maybe”?’ I mean: that is a question that makes a monster, but it also makes a creation that is outside slavery. In other words, just as zombies are the monster (singular, because a monster is always an exception) of biopolitics, I guess the robot is its ideal.

  5. “With the claim you question, I was thinking: if the machine can do the job with no human oversight or management, completely autonomously, then it’s effectively human. So a machine that can just do a rote mechanical task doesn’t qualify, because a human being still needs to direct its actions.”

    This reminds me somewhat of Searle’s line about “derived intentionality”, which I never found very convincing. But I think almost the opposite is the case. The washing machine, once loaded and started, will do its thing without further intervention. It’s exploited human beings that need oversight and management, otherwise they wander off and find more interesting and rewarding things to do.

    Anyway, my formulation, from a while back: any entity sufficiently cognitively advanced to do the housework will be sufficiently cognitively advanced to resent having to. I’m not sure it’s actually true, but it encapsulates a sort of worst-case scenario: the entropy generated by human domesticity is too complex to be managed in its entirety by anything incapable of making plausibly human-like decisions (e.g. about where random bits of stuff should be put away). That doesn’t preclude the existence, usefulness and moral acceptability of Roombas though.

  6. Maybe the faculty at issue is what’s known as “executive function”. There are some useful tasks that can’t be performed without it, but a subordinated (from the slave’s point of view) or outsourced (from the master’s point of view) executive function is famously dialectically unstable. Cf. discussions about the difficulty of ensuring that your AI is “friendly”. I think the problem is irreducible in its kernel, but also shrinkable: automation means finding ways of doing things without having to think about them, or to subordinate other thinking beings so that they have to think about them for you. A lot of things that we’ve assumed to be unassailably difficult to automate – like driving a car – turn out to be at least partially automatable given the right kind of thoughtlessness and sufficient CPU power. It turns out that a lot of “thought” is itself thoughtless: you don’t need phenomenological experience to identify cats in photographs. What our pop-culture fantasies about rebellious robots tell us is ultimately that we don’t want robot chauffeurs; we want self-driving cars.

  7. Last one (promise): it isn’t obvious that the ox is wronged when it is used to pull the plough. It is obvious that a human being is wronged when used in a similar way. Some people think that we ought to see that the ox is also wronged, and that the wrong is only non-obvious because we maintain a distinction between the human and the non-human that is both arbitrary and, in its social application, morally and politically odious. I don’t think that: I don’t think the distinction’s arbitrary, I think it has real content but is fuzzy round the edges and dangerously prone to misapplication. It is correctly applied when we distinguish between human beings and washing machines, or between human beings and self-driving cars, or between human beings and chickens. (Fairly comfortable about oxen; less sure about horses, which have rather more of their own thing going on). There are edge cases. But I think it comes down to a collection of functional characteristics – executive function, future-time orientation, a few other things all working in concert – which can be combined and balanced in different ways. To me, Commander Data is obviously and unarguably sapient, and has a kind of dignity which can be meaningfully infringed even if he can’t (for example) feel pain (Marvin’s “terrible pain” in his diodes is just the icing on the cake of a fundamentally humiliated condition; a pain-free Marvin would still be wronged by having to be a robot butler). Whereas a mouse mauled by a cat and dying slowly is just a tiny gobbet of creaturely agony in a wider hell-dimension of ongoing creaturely agonies. You might thwack it with a hammer to put it out of its misery, but only the empathetic discomfort of a human observer makes its suffering in any way morally consequential.

  8. (read “only the empathetic discomfort of an observer capable of empathy” if you prefer; one can perhaps imagine a chimp being moved to action by the suffering of a mouse it had grown fond of)

  9. To what extent does the refusal of mastery of others turn one to self-mastery? (Thinking about Highest Poverty, e.g., the tensions of cenobitic and eremitic monasticisms, the daily renewal of adherence to Rule. Or maybe I am totally off base?) Is this the reason the problem becomes “humanization” not just mastery?

  10. Roombas and washing machines are not at issue here. They each do a particular task, we could say, but they don’t do the job. The washing machine doesn’t “do the laundry” in the full sense that requires all manner of tiny judgment calls, nor does the Roomba “clean house.”

  11. OK, but there’s not (I think) a hard cut-off between something that can navigate a room the way a Roomba can, and something that can find its way around an urban environment the way a self-driving car can. Or between a simple light-detector, and something that can identify cats in photographs. There are increasingly many things in the world that reproduce and extend aspects of human discernment, without actually exercising the kind of agency that means that our relationship to them as tools is anything at all like the master/slave dialectic.

    Automation is, in a sense, the reduction of jobs to tasks, or the progressive paring away from “the job” of everything in it that can be treated simply as “a particular task”. If we think about the kinds of things that remain intractable to that process, we start to get a picture of what higher cognitive capacity entails. Early SF visions of robots tended not to imagine entities with higher cognitive capacities of that kind, but rather clunking servitors that would fetch and carry and perform mechanical tasks. I’m not convinced that there’s anything intrinsically invidious about imagining, designing and creating assistive technologies of that kind. It doesn’t make us plantation-owner-like: the plantation-owner is precisely someone who presses into mechanical labour beings which are capable of *so much more*, which must be coerced in order to make them perform that labour because if uncoerced they would autonomously opt to be doing something else.

  12. Adam, you may be interested in the Mass Effect series of games’ take on this. It’s not necessarily the richest or most complex examination of the issue, but I think it is an interesting take on the pattern you are discussing. The series’ narrative is based around a conflict caused by the idea that all organic sentient life will eventually create synthetic intelligence that will destroy it (to use the series’ own terminology). There isn’t a lot more to be said about the way the series approaches that issue without spoiling the story, so I’ll refrain from discussing it (although I’m not sure how inappropriate spoiling a years-old video game is in this environment).

  13. The point is that the boundary between “purely mechanical task” and “activity requiring fine-tuned, purposeful practical discernment” isn’t a hard one; at one time, people would have put what a Roomba does on the one side and getting a car safely from A to B on the other. I don’t, by the way, want to overhype self-driving cars; the technology has some sharp limits, some of which may be insuperable given the current approach. But they’re a good example of something moving, at least partially, from one category to another, without creating in the process anything that could meaningfully be said to be oppressed or exploited in the performance of its task.

  14. A self-driving car is still a single-purpose thing! I’m kind of tired of this line of discussion, frankly. I don’t think you are adding anything substantive — you’re just obscuring the issue.
