For many people, the Cambridge Analytica revelations are the last straw, leading them to delete their Facebook accounts or at least radically scale back their participation. And I am tempted. Facebook is often annoying, and it does tend to be a timesink — in fact, I sometimes find myself just scrolling and scrolling and scrolling without really reading anything. As a late adopter, who only joined after being forced from Twitter by right-wing harassment, I missed much of what made Facebook trying for others (constant contact with relatives and long-forgotten high school friends) and have found it mostly beneficial. It has given me a chance to connect with other academics, who are much better represented than on Twitter, and it has resulted directly in speaking invitations and other opportunities. And while we would all love to Make Blogs a Thing Again, the spell is broken: blogs and blog comments simply no longer function as the free-wheeling conversation they once were, and we can’t just will that back into existence.
For me, I’m not sure my outrage about Cambridge Analytica is enough to make up for all I would potentially lose. Some form of social media presence feels like a career necessity, especially given my somewhat tenuous situation. More substantively, I don’t see any other venue that allows for the kind of open-ended discussion that happens in the best Facebook threads. I can post about Haydn or obscure points of Hebrew grammar, and a lengthy thread will spring up that rivals the very best threads that I ever saw in the golden age of blogging. What am I gaining by quitting Facebook that would make up for that?
More broadly — and realizing that this can sound like a cop-out — I’m always skeptical of demands for me to change my personal behavior to solve systemic problems. People have come to expect the forms of connection social media makes possible, and simply demanding that they give it up without offering anything to replace it (or, even worse, making moralistic appeals to “get off your phone and participate in real life” or whatever) doesn’t seem like much of a solution.
The core problem is the ad-driven, click-counting model of the internet. Realistically, someone probably needs to create a range of competing alternatives that are not “free” and hence not ad-driven, which will then realign the incentives and give users a more direct way to influence corporate behavior. One reason Apple is marginally better on privacy than most tech companies is that they are primarily selling hardware, so you are not “the product,” as they say. If there was a moment when we all collectively sold the store, it wasn’t when we clicked on the wrong news story or took a quiz on Facebook; it was when we let ourselves be seduced by “free.” This whole fiasco is the price of the “free” internet. Even if Facebook as an individual company dies — and I would not mind it by any means! — the “free” internet will lead inexorably to another Facebook.
Presumably we have all seen The Social Network, or at least heard of the primal scene of Facebook that it stages. One night, a bored Mark Zuckerberg uses his ability to type really fast to set up a website to judge the hotness of the women of Harvard. It proves so popular that it threatens to bring down Harvard’s entire computer network. Here was the kernel of Facebook, with a foretaste of its worldwide success.
While it has evolved into something far more complex than its “hot or not” roots, Facebook is still a technology for passing judgment. The zero-level gesture of engagement with Facebook is to click “like,” a positive judgment that was recently diversified to allow one to express a range of judgments corresponding to the range of emotions we learn to name in kindergarten. People have found many other uses for it as well — it is, after all, a flexible discursive medium — but the core functionality remains that of passing judgment. It is the easiest thing to do on Facebook, almost effortless.
Yesterday, I received a Facebook direct message telling me that I was a Jew who should get into the gas chamber. Normally I just block and delete such things, but this one was so flagrant that I felt I had to report it. This morning, I got a message telling me that Facebook had taken action — they sent that user a note reminding him of the Community Standards.
This is more response than I have gotten from the dozens of reports I have sent to Twitter over the years. To be fair, I have seldom been in the mood to write the dissertation they expect me to write, so maybe it’s my own fault. Or maybe the form is a placebo and they set it up to be intimidating on purpose, so that they can blame the reporters for not providing adequate information.
From their perspective, this tepid response makes sense. They get more money if they can show higher user engagement. Right-wing hordes are among the most engaged users of Twitter especially. The same goes for fake news on Facebook — the combination of outrage and in-group formation that fake news stories generate is an engagement gold mine.
We need to admit that right-wing harassment and conspiracy theories are baked into the business model of social media at this point. And with right-wing political hegemony for the foreseeable future, it will only get worse, because the range of “acceptable opinion” will shift even further to the right. Asking nicely and filling out all the proper paperwork will not change this underlying material reality.
If social media is worth having, then the answer is to build a non-profit alternative to the for-profit sites. Wikipedia could provide a model here. It is not-for-profit, it includes strong self-policing mechanisms, and it is arguably the most trusted and useful site on the entire internet. Wikipedia shows us that a non-profit internet not only can work, but can thrive.
I am an avid internet user. It is difficult to imagine my life without it. I owe many of my closest friends to the internet, along with many of my professional successes. As a force in my individual life, I’m willing to call it a net good.
On the systemic level, though, I wonder. It is hard to deny that the internet has been instrumental in the neoliberal regime, consolidating economic power in ever fewer hands. Amazon is archetypal here. Where the distribution of books used to be the province of a range of independent and chain bookstores, now Amazon is most people’s first stop — and they’ve actually moved to expand their reach by creating the kind of bricks-and-mortar bookstore they’ve been putting out of business for decades.
Amazon is also indicative of a broader trend: while we’re supposed to be shocked and awed by the endless innovation of the tech sector, their underlying business models are not new at all. Amazon is the bigger, better Sears catalogue. That’s it.
Yesterday afternoon, the harassment campaign against me seemed to have reached a low ebb, and I felt confident that this particular storm had passed. Yesterday evening, however, it kicked back into high gear and I started receiving so many hateful Twitter messages that I literally could not keep up with blocking all of them. Since then, it has continued to ebb and flow — a few hours of quiet will be followed by a burst of activity. The Daily Caller and Washington Times have both picked up on the breaking news that I tweeted, though thankfully they have focused their ire on my claim of white complicity with slavery rather than on the ludicrous smokescreen of their outrage at my obviously sarcastic call for “mass suicide.”
After last night’s outbreak, I woke up this morning ready to take my Twitter account private. I talked myself down after there were only a handful of people to block, but since then I’ve learned of further harassment directed toward Shimer College and the people who work there. This particular case is probably out of my control at this point, but now I’m clearly on some people’s radar. It seems to me that there is no way to be sure that this won’t happen again unless I take my Twitter private and carefully choose who can follow me — or else just quit altogether. This incident and the Charlie Hebdo blow-up are probably going to be with me forever at this point, but why provide more fodder? Shimer has been very supportive, but what if I need to find another job?
In short, I’m seeing a lot of downside to continued Twitter participation. Much of the upside could be replicated if my regular dialogue partners followed my private account, but my ability to make new connections would be severely limited in that case. Plus it would completely destroy Twitter’s potential as a promotional forum for my work. I’d still have the blog, which would probably benefit if I were deprived of Twitter — and it seems like blogposts aren’t as vulnerable to this kind of thing.
I know the high-minded thing would be to say that I’m not going to let these bastards silence my voice — but screw that. Is my voice really making this huge contribution? Am I doing anything other than making an ass of myself at best, or exposing myself and my school to systematic harassment at worst? The dog has pretty much healed up, which resolves the outstanding loose ends of my Twitter saga.
What do you think, dear readers? I know a certain number of you are going to say I should lead by example and commit suicide, and your comments will of course be deleted — I’m more asking the actual worthwhile human beings who know and care about me. What’s the upside of not letting myself be silenced?
A few questions for those of you more familiar with the landscape in online education:
- In your view, is the current state of video conferencing technology adequate to simulate a lively, seminar-style discussion session?
- Do you know if any schools have tried to offer such a thing, as either a substitute for or supplement to “traditional” online pedagogical methods?
- Do you believe that there would be a significant market for such an approach? (I ask this particularly in light of the fact that the necessarily synchronous nature of such class sessions might cut down on one of the primary appeals of online ed, namely its flexibility.)
Rebecca Solnit has a diary in a recent LRB in which she reflects on the shift in the texture of time that has taken place since the mid-90s. It would be easy to read this piece and start debating some of the generalizations she makes, though I think she steers clear of Luddite cliches. Yet for me the most salient point is that no one actively decided that our new technological era was desirable or beneficial. Or to be more precise: we all just rushed to adopt new technologies basically because they were new technologies, because they sounded cool.
I have long shared Solnit’s view on the destruction wrought by cell phones.
How big a stretch would it be to say that we here at AUFS are “digital humanists”?
My method for keeping track of appointments is appallingly primitive — I have a desk calendar in my office at home, and I just write everything on that. In terms of getting all the appropriate information onto the calendar, this actually works better than one would think, given that most appointments are made via e-mail. (In the last resort, I’ll e-mail myself to remind me to write down said appointment when I get home.) Yet the system has one glaring hole: I can only refer to it when I’m at home. It also seems likely that I’m only going to get busier over time, and so it makes sense to form new habits now when things are relatively calm.
How do you, my readers, keep track of such things? I’ve considered using Google Calendar, which would be convenient given that Shimer uses Google for their e-mail, etc. I’m also due for a phone upgrade this summer, at which point I could get an Android phone so that it would all integrate, preferably without seams. (I’ve also hypothesized that I could put my files in which I take notes on students on Google Docs and have instant access to that, because I’ve noticed that I have a hard time remembering paper topics, etc., to write down once I get back to my office.)
One often hears that today’s young people find computers and the internet to be totally natural and easy to use — i.e., that they are “digital natives.” I’m going to say that that’s false, at least of college students.
Some pieces of anecdotal evidence:
- Getting college students to check their e-mail regularly is often a challenge.
- College students frequently display a lack of understanding of how basic computer programs work (formatting headings by hand rather than using a word processor’s styles, creating footnotes manually rather than with the footnote function, etc.).
- College students only rarely take advantage of Google or Wikipedia to answer small factual questions for themselves. (If they do look something up, they make a point of mentioning that they have done so, indicating that it’s not taken for granted.)
- Supplemental online discussions (e.g., within course-management software) are generally no better than in-class discussions.
- If students give presentations requiring the use of computers, snafus generally abound, even for simple tasks such as opening files.
In short, I’m inclined to suspect that young people do not have a special aptitude or bond with computers and, therefore, that any educational strategy that relies heavily on computer technology is not making education automatically richer or more “relevant” — indeed, for many students, it’s adding an obstacle or impediment to learning.