This year I am going to be serving as Associate Dean at Shimer, and one of my primary duties will be to coordinate the academic assessment program. Though in principle I am an assessment skeptic, in practice I have been heavily involved in the ongoing development of Shimer’s program and have tried to make the case that liberal arts and humanities programs can make a virtue of necessity when it comes to the assessment regime.
While we’re expanding our efforts through a fairly aggressive schedule of developing new pilot programs, our baseline measure has been an assessment of writing and discussion skills at set checkpoints in the curriculum. It’s taken a lot of work to get the rubrics straightened out and make sure that our assessments are calibrated across the faculty, and it can sometimes be a hassle — one more thing to do, usually scheduled for already busy times of the school year.
But as a result of carrying out this program, we now have pretty clear evidence that what Shimer does is working. In aggregate, students make notable progress in their discussion and writing skills. This is unsurprising, given that we’re following best practices according to the vast majority of pedagogy research — small, discussion-based classes (every day, every course) with significant writing in every course, and some courses designated as writing-intensive over and above that. In our discussion of the data, we raised the possibility that some of this progress results from attrition, as weaker students may simply be weeded out over time. But we were able to control for that and found essentially the same results.
This is pretty remarkable, given that differential outcomes between schools generally result from the quality of students the institution can attract/buy. With Shimer — which does not have the financial resources necessary to “buy” students and has a commitment to giving a chance to applicants who have not been as successful in less engaging environments — there’s clear evidence that our program is actually producing a value-add.
Not all the data has been so confirmatory, of course, but a weird thing happened: we noticed a problem, changed our pedagogical approach, and things got better. So we not only have evidence that our basic approach works, we have evidence that we as a faculty are able to work together to improve on what we’re doing in a systematic way.
Basically, even though assessment is annoying and there are good reasons to be suspicious of the agenda behind it, there are worse ways we could be spending our time.
Writing this post was a great way to procrastinate on my assessment pilot proposal! Another benefit! Or something.