Tuesday, January 24, 2012

testing dead students

Cris Tovani, writing in So What Do They Really Know? (Portland, ME: Stenhouse Publishers, 2011), makes a striking analogy between autopsies of the deceased and the typical multiple-choice, objective-heavy standardized testing that is increasingly common in the assessment-crazed, NCLB world of education.
Sure, an autopsy might inform the medical profession, comfort a family member, or provide useful information to a crime investigator, but it doesn't do anything for the person who has died. Like the autopsies, summative assessments can rank and categorize learners, give colleges a way to standardize how they admit prospective students, and allow parents to brag about their genius child. Unfortunately, they don't help students get smarter in the tested area (12).
This comparison struck me as both accurate in fact and powerful in imagery, and I definitely experienced it in my own learning career.  As is common, Tennessee had a statewide writing assessment that everyone took in 7th grade.  The test was scored on a scale from 0 to 6, with the rubric listing lots of bulleted descriptors of superior essays next to the 6 column, and 0 described as either blank or "predominantly in another language."  I'll never forget that phrase - "predominantly in another language" - it sounded funny to me at the time, in the context of an essay test for 7th grade language arts.  It's illustrative of the stolid prose produced by a committee of exam drafters in league with state bureaucrats, like the common ethnicity choice on the same tests of "Black, not Hispanic."  That option always implied all sorts of weird cultural assumptions to me.  It was as if blacks, unlike all other racial and ethnic groups, faced some extra suspicion they had to dispel to prove they weren't Hispanic.  Or maybe blacks wanted to make damn sure they weren't cross-categorized as also Hispanic, so they received the extra opt-out.  I always wanted to write in "White, not black, potentially Hispanic, and ancestrally European."

Anyhow, the chosen task was the persuasive essay, and I recall spending a decent amount of class time going over how to write a good one.  That's not to say that the ability to write a persuasive essay is an undesirable skill to teach 7th graders.  However, the assessment of that exercise was poorly executed, just as Tovani describes:
With summative assessments, students are left out of the conversation. Rarely do they get to use the results to improve their performance. Summative assessments are too final. They tell the learner, "Time's up. Put your pencils down. If you don't know the information by now, it's too late." There isn't a lot teachers can do for the learners if summative data is available only after students have moved on to another grade level or class (12).
The state writing assessment I took came toward the end of 7th grade.  Naturally, these tests had to be sent off to...somewhere...so that grading was efficient and, above all, normalized.  An eternity of middle school summer later, I began 8th grade.  Just like the year before, I had a Language Arts class.  One day, with the past assessment far from my mind, my 8th grade teacher decided to spend class talking about the writing assessment we had done the previous year.  She had the grades, and she wanted to talk about how we all did and what strategies we could use to write better persuasive essays.  She then told the class that, out of our city school system's two middle schools, I was the only student to get a score of 6 on the writing assessment.  The teacher presented this information as praise, and the class responded with oohs and ahhs, and a few students said "way to go, Parker" in various tones of smarm.  I felt proud, I'll admit it.

The pride ended when the inevitable questions came: "What did you do so well, Parker?"  "What did you write?"  My response: "Uhhhhhh...."

I didn't remember in the slightest.  I couldn't even be sure which writing prompt I had answered.  I had no idea what I had done, why I had done well, or what it meant.  All I knew was that I had done exceedingly well, congratulations.  Not even the teacher could refresh my memory, as she had no copy of the essay I had written or the grader's scoring marks.  The only artifact was a number.


Congratulations.  Now, try to do it again.  And we did.  Our 8th grade teacher wanted us all to write essays for class, and she was going to grade them on the same 6-point scale.  A noble idea, certainly, as she was very likely reacting to the flawed assessment system that the statewide writing test represented.  This way, she could give more meaningful feedback, and give it to students more quickly.  I got a 5.5 on the new essay, and I'm sure it wasn't perfect.  However, since the entire project had been introduced in response to the previous state writing test, and the same rubric was used, the comparison was implied.  I knew my performance had decreased, but I still had no knowledge of how or why, and, at the risk of belaboring the point, that was because of the poor execution of the summative assessment.

More generally, it was because of the summative approach to assessment itself.  Just as Tovani states, with lesson-final assessment, students rarely get the opportunity to compare their learning activities to the assessed product.  Even when they do, they are given no chance to remedy the tested skill or improve the tested knowledge.  With this approach to assessment, we deprive students of one of the fundamental goals of learning: improvement.  When tests are handed back and the only response is "Good job" or "Did you not study?", we are not serving our students adequately.  Even the student who earns an A is not served, at least no better than if the teacher had never assessed at all.  Learning is thus transformed from an ongoing process of synthesis and improvement into a series of isolated moments of listening followed by disconnected moments of regurgitation.
