The current “best practice” for course/curriculum design is to start from the learning objectives and then fill in gradually more detail, only supplying the actual course content at a relatively late stage. When Shimer was going through some curriculum debates a few years ago, I opportunistically seized upon this principle as a way to open up a little more space for thinking about new and different readings, but ultimately that way of thinking just didn’t work. We had one meeting where everyone seemed to be on board, and then we got back to the traditional debates over particular readings and how we can’t remove this one thing that really “works,” etc. And I don’t think this was because my Shimer colleagues are especially hidebound — the way they do curriculum design is just the way everyone does it.
The so-called “best practices,” as usual, have virtually never been done, and that’s because they presuppose a very simplistic, unidirectional version of curriculum development. We go from goals, to means, to delivery, without attending to the dialectical relationship among them. We think primarily in terms of readings, but starting there doesn’t mean that we haven’t thought about our goals. In fact, our desire to include a certain reading may well clarify goals that were previously unarticulated; or, coming from the other direction, if a learning outcome seems incompatible with using a certain reading, that may show the learning outcome itself to be artificial. More seriously, the “best practices” neglect the fact that any given instance of a course will develop its own goals along the way, depending on the students involved. There is no room for this interplay if we’re deciding on the ultimate learning outcomes from on high.
What gives away the game here, I think, is the horrible pedagogy of the “training” sessions for these “best practices.” All the worst amateur mistakes are present — the “sage on stage” information delivery style, asking questions with a single predetermined answer (in the style of a TV teacher), contrived and inorganic group exercises (always asking people to artificially bracket a huge number of concerns), etc., etc. If our classrooms looked like an assessment training, we would be fired.
Now if I’m being honest, my opportunistic embrace of “best practices” was not purely open-ended — I did have certain new readings and topics in mind that our learning outcomes would lead inexorably toward. And in that, I think I had correctly discerned the spirit of the whole enterprise, as revealed in its pedagogy. What is presented as an open-ended process of discerning what it is we really want to do is in fact aimed at a particular vision of learning. They want us to create classes that are more easily measured and assessed, with a focus on skills over content. That works fine in some disciplines, but in the humanities it is very artificial — hence the constant need to fall back on generic notions of “critical thinking” or similar vague gestures. If content doesn’t matter, then the humanities don’t matter. This is not to say that the whole notion of outcome assessment is purely an attack on the humanities, but it does cohere with a vision of the university where the humanities have less and less of a role.
More seriously, the reliance on formula and top-down design fits with a vision of the university where professors are interchangeable cogs, to be hired on a just-in-time basis. It cuts out intuition and professional judgment in favor of a sheer means-end schema, completely in line with the deprofessionalization of undergraduate teaching through adjunctification. And the conceptual hierarchy of the method fits with a hierarchical version of the university that is more compatible with administrative unilateralism than with traditions of shared faculty governance.
What should we do in the face of this? The traditional strategy of slow-walking any assessment initiative is probably the most straightforward response. But I think we also need to take literally the accreditors’ claims that they want each individual institution to find the assessment method that works best for it. If assessment is going to be required for accreditation anyway, trying to develop a system that is less of a neoliberal dystopia may be a good use of at least some faculty members’ time. And studies have shown that humanities and liberal arts disciplines really do deliver more intensive learning by most standard assessment measures, so gathering that data could be to our advantage.
(In short, this is the more cynical version of the article I wrote when Shimer was under the gun to develop an assessment regime.)