The Problem with Assessment
I've been thinking a lot about assessment lately. This is partly because it is on my institution's radar screen in a big way, and partly because it seems to be one of the real stumbling blocks for faculty relations all over. That is, it seems to me that there are often faculty who are very much on board with the idea of having clearly articulated assessment programs and others who aren't. It doesn't seem to me to be purely a generational thing, although there is certainly something to that -- at SLACs like mine, older faculty are often used to doing things their own way, whereas younger and often junior faculty are a bit more open to working on such programs. You all probably know that I'm sort of an assessment fan. I don't want anybody in lock-step with me, and I don't want to be in lock-step with anyone else, but I see the value in all the people in a department or an institution having the same sort of standards.
This seems to me to be a particularly American thing in some ways, too. My colleagues in the UK are used to a system of double marking and outside evaluators. I think that's a good thing. I know people who see it as a threat. In fact, I think that, in general, the people who want to stay as far away as possible from any coherent assessment program are those who are the most frightened of being found out. It's impostor syndrome, but in a way I've never thought of before. I worry all the time about being found out, about my colleagues finding out I'm not really one of them. This is entirely centered on my worth as a scholar. It never occurs to me to feel like a fraud in the classroom, but then it always occurs to me that there are better ways to teach something, and I talk to people about teaching all the time. There are plenty of ways my teaching is flawed, but I do also know that I'm not a bad teacher. Weirdly, it never really occurred to me that there might be people whose impostor syndrome worked in reverse.
Assessment, good assessment, means looking carefully at oneself and the way that one teaches. When we talk about assessment as "quantifying the unquantifiable," as one of my colleagues puts it (which is total bullshit, as far as I'm concerned), it looks like we're tracking our students. To some extent we are, but more importantly, we are assessing ourselves. If our students aren't doing well, then we have to ask why, and whether we are doing as good a job as we ought to be doing. We might have to change and re-think things. To me, this is a given. But I can see that, to others, this might also be an indicator that we've been wrong, that we haven't been doing our jobs well. What if our students aren't lazy or stupid? What if it's us?
I think the truth is that we do have some lazy students and some students who are kind of boneheaded. But we also have students who are smart but unprepared, or just not ready. And we do need to learn to teach them, perhaps to change the way we teach in order to serve them, and yes, to teach them in ways they can learn. Because if we don't assess, and self-assess, then the problem *is* us.
7 comments:
One of our internal situations (not a problem, just the way things ARE) is that some studio artists* insist that there is no reliable outside assessment, but that they are able to evaluate others (especially junior colleagues) on the grounds of aesthetic judgment or taste. And that this taste can never be explained, only learned after years of doing. So, no, they can't explain their standards to the Committee on Tenure and Promotion, but there are standards.
*remember, I'm in a mixed department
Somewhere about the place--hang on... oh yeah, in fact it was the Chronicle, and I blogged about it here -- there was a study published about how different disciplines rate their own work. The short version would be that historians mainly judged work not on whether it appeared correct or well-founded, on the basis of what the assessor knew about the subject, but on whether it appeared careful, so that careful-looking stuff was a priori likely to be OK; and that literature scholars were less than convinced there was any such thing as quality at all, and certainly no way to measure it. I don't know if they covered artists, but that description would put them somewhere in between. Of course, this was all about research, not teaching, but the evidence for disciplinary difference has implications for teaching too, I guess. Some of Jeffrey Cohen's classroom stunts sound awesome, for example; but when I look at them I think, 'but wow, where in that class did you hit your learning outcomes?' And Jeffrey would doubtless say some subset of, 'making them question everything and play with it is the biggest learning outcome of all', and I would say, 'yes, but in history we're kind of weighed down by those damn facts, you know, they have to remember some to pass'. (I hope he doesn't mind my entirely inventing his half of the dialogue here.)
I guess where this is going is, quos ipsos custodes licet custodare, or, if I've screwed up the Latin there, who is best placed to guard the guards themselves?
Ye gods, Cranky, that sounds difficult. SLAC has a lot of performing arts people, and the faculty actually use juried shows and peer-review from outside the institution for their own work.
Tenthmedieval, I think you're right. In fact, our departmental standards really are quite heavy on skills, and focus less on content than even I might like. But if I had to choose between the two, I'd go with skills every time. Is the student thinking and writing like a historian? At the same time, skills are the hardest thing to teach. Anybody can learn content, but it's how we read and use information, and how we construct the narrative, that sets us apart from people in many other fields. So if the students aren't learning how to do those things, and we say that those things are important, then why? Is it us or is it them?
We're still in a half-molded state of assessment here, and it's touchy and problematic too. From what Clio Bluestocking has blogged, putting too much content into the assessment rubric tends to make examiners and reviewers want everyone to teach the same readings and expect the same arguments from students.
Bah! That kind of content-focused assessment would be a shame. But since my colleagues and I teach very different content for the same level of classes, we can instead agree on standards of skill for assessment.
FWIW, the greatest resistance to assessment in my program comes from the youngest faculty member :)
It seems to me that the *program* objectives are appropriately focused on skills, while they are applied to particular content within courses. If we ever redesign our Program Objectives, I might add one on scope.
I find the resistance to assessment comes with the way it is formalized. So, a few years ago, after we'd read our senior theses, we identified some problems, and then started to think about how we provided the skills students needed earlier on. This was not formal assessment -- we hadn't sat down with our rubric and a sample of theses and figured out whatever it is you're supposed to figure out.
As one colleague noted when we did try to construct a rubric and use it, it worked against the holistic evaluation we usually do.
What's really clear is that assessment is not going away. And, given Clio Bluestocking's experience, it is really important that we do it ourselves so we don't all get stuck doing the CLA.
Susan, that's really interesting. One of the things that I've been working on is trying to revise my rubrics for exactly that reason. The difficulty is in articulating what we think of as intangibles, I think -- and perhaps in the fact that using a rubric can mean giving good essays poor marks. Where I'm finding the alterations are in things like, "what makes an A better than a B?" And it can be style and/or use of evidence or something else. So I tend to have several qualitative scales, some of which are more important than others. The most important qualities are argument and use of evidence, but things like mechanics and grammar can knock down a paper, and sophisticated writing can bring it up. The A-level marking standards I've found online have been really useful, in that they pair a qualitative description with a point range for that description. I may end up doing something like that in the future.
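Just to make that weighting concrete, here's a rough sketch of how several qualitative scales of unequal importance might combine into one mark. This is purely illustrative -- the criteria names, weights, and point bands are all made up for the example, not my actual rubric:

```python
# Purely illustrative: a weighted rubric with several qualitative
# scales. Argument and evidence count most; mechanics can knock a
# paper down; sophisticated writing can bring it up. All names,
# weights, and bands here are invented for the sake of the example.

RUBRIC = {
    # criterion: (weight, points awarded per qualitative band)
    "argument":  (0.40, {"A": 4, "B": 3, "C": 2, "D": 1}),
    "evidence":  (0.35, {"A": 4, "B": 3, "C": 2, "D": 1}),
    "style":     (0.15, {"A": 4, "B": 3, "C": 2, "D": 1}),
    "mechanics": (0.10, {"A": 4, "B": 3, "C": 2, "D": 1}),
}

def score(marks):
    """Weighted average on a 4-point scale, one band per criterion."""
    return sum(weight * bands[marks[criterion]]
               for criterion, (weight, bands) in RUBRIC.items())

# A strong argument carries a paper with shaky mechanics...
print(score({"argument": "A", "evidence": "A",
             "style": "B", "mechanics": "C"}))   # 3.65

# ...while clean mechanics can't rescue a weak argument.
print(score({"argument": "C", "evidence": "C",
             "style": "B", "mechanics": "A"}))   # 2.35
```

The arithmetic is the easy part, of course; the qualitative descriptions attached to each band are where the real work is.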
Oh -- and Susan, I don't think it's an age thing as much as it's an insecurity thing. But where I am, and I suspect at many institutions, there are a lot of senior faculty who haven't undergone any sort of meaningful evaluation since getting tenure 20 or more years ago. So they seem to be the most resistant in our case. I have plenty of senior colleagues with Oxbridge training or similar who are just fine with this sort of thing :-)