Monday, January 30, 2012

As part of a class I'm currently taking from Yong Zhao on globalization and education, my colleague Jordan and I are conducting a series of interviews with local residents for analysis.  The goal of this interview process is to get a snapshot - an admittedly small sample size - of contemporary opinions on issues related to education and globalization.

Our plan is to ask each interviewee a battery of questions, which I have included below, and post their responses in a blog, along with reactions and further questions inspired by those responses.  Finally, we will look at the collected responses and analyze them as a whole, taking a critical look at the commonalities among them and what they might say about our society's understanding of the issues confronted at the intersection of education and globalization.

Below are some questions I quickly jotted down during the first day of class that I envision as tools to query individual understanding of globalization and education, which are big terms that mean lots of things to lots of people.  I appreciate any feedback on questions that might be added, taken away, or tweaked.
  1. What does globalization mean to you?
  2. How do you think globalization impacts education?
  3. What is your perception of American education?
  4. Should teachers and/or parents concern themselves with globalization?
  5. Is globalization a good thing?
  6. Has globalization impacted you?  Your family?  Your community?  How and in what ways?
  7. What does globalization mean to you?
Obviously, anytime conclusions are to be drawn from data that consist largely of interviewee responses, one has to consider how the questions themselves may cloud or color those responses.  Just as I typed these up, I was struck by how the order of the questions could significantly impact a person's response.  That's something that I think can be addressed in the interviewer's review and assessment of the interview subject.  Rather than trying to be overly scientific about definitions and categories, we intend the questions to be a starting point for a discussion about the issues of globalization and education.  If questions are given in varying order and questions are added or subtracted, that's okay.  The real goods will be in the interviewee responses that are prompted by these questions.

Also, I intentionally repeated the first question, just as a kind of processing device to see how the prior discussion elicited new thoughts or reactions to the idea of globalization that were not evinced at the beginning of the interview.  I also hope that, rhetorically, the recognized repetition of the question will prompt the interviewee to take the opportunity to give a sort of final statement or conclusion concerning whatever they deem important about the topics discussed.

Tuesday, January 24, 2012

testing dead students

Cris Tovani, writing in So What Do They Really Know? (Portland, ME: Stenhouse Publishers, 2011), makes a striking analogy between autopsies of the deceased and the typical multiple-choice, objective-heavy standardized testing that is increasingly common in the assessment-crazed, NCLB-era world of education.
Sure, an autopsy might inform the medical profession, comfort a family member, or provide useful information to a crime investigator, but it doesn't do anything for the person who has died. Like the autopsies, summative assessments can rank and categorize learners, give colleges a way to standardize how they admit prospective students, and allow parents to brag about their genius child. Unfortunately, they don't help students get smarter in the tested area (12).
This comparison struck me as both accurate in fact and powerful in imagery.  I definitely experienced this in my learning career.  As is common, Tennessee had a statewide writing assessment that everyone took in 7th grade.  The test was scored on a scale from 0-6, with the rubric having lots of bulleted descriptors of superior essays next to the 6 column, and 0 being described as either blank or "predominantly in another language."  I'll never forget that - "predominantly in another language" - it really sounded funny to me at the time, in the context of an essay test for 7th grade language arts.  It's illustrative of the type of stolid prose produced by a committee of exam drafters in league with state bureaucrats, like the common ethnicity choice on the same tests of "Black, not Hispanic."  This always implied all sorts of weird cultural assumptions to me.  It was as if blacks, unlike all other racial/ethnic groups, had some extra suspicion they must overcome to prove they aren't Hispanic.  Or maybe blacks wanted to make damn sure they weren't cross-categorized as also Hispanic, so they received the extra opt-out.  I always wanted to write in "White, not Black, potentially Hispanic, and ancestrally European."

Anyhow, the chosen task was the persuasive essay, and I recall spending a decent amount of time in class going over how to write a good persuasive essay.  Now, that's not to say that the ability to write a persuasive essay is not a desirable skill to teach to 7th graders.  However, the execution of the assessment of that exercise was poorly thought out, just as Tovani describes:
With summative assessments, students are left out of the conversation. Rarely do they get to use the results to improve their performance. Summative assessments are too final. They tell the learner, "Time's up. Put your pencils down. If you don't know the information by now, it's too late." There isn't a lot teachers can do for the learners if summative data is available only after students have moved on to another grade level or class (12).
The state writing assessment I took was toward the end of 7th grade.  Naturally, these tests had to be sent off to...somewhere...so that grading was efficient and, above all, normalized.  An eternity of middle school summer later, I began 8th grade.  Just like the year before, I had a Language Arts class.  I wasn't really remembering or even thinking of that past assessment when my 8th grade teacher one day decided to spend class talking about the writing assessment we did the year before.  She had the grades, and she wanted to talk about how we all did, and what strategies we could use to write better persuasive essays.  She then told the class that, out of our city school system's two middle schools, I was the only student to get a score of 6 on the writing assessment.  The teacher presented this information as praise, and the class responded with Oooh's and Ahh's, and a few students said "way to go, Parker" in various tones of smarminess.  I felt proud, I'll admit it.

The pride ended when the inevitable questions came: "What did you do so well Parker?"  "What did you write?"  My response: "Uhhhhhh...."

I didn't remember in the slightest.  I couldn't even be sure of what writing prompt I answered.  I had no idea what I had done, why I had done well, and what that meant.  All I knew was that I had done exceedingly well, congratulations.  Not even the teacher could refresh my memory, as she had no copy of the essay I had written or the grader's scoring marks.  The only artifact was a number.

6.

Congratulations.  Now, try to do it again.  And we did.  Our 8th grade teacher apparently wanted us all to write essays for class, and she was going to grade them on the same 6-point scale.  A noble idea, certainly, as she was very likely reacting to the flawed system of assessment that the state-wide writing test represented.  This way, she would be able to give more meaningful feedback, and that feedback would reach the students more quickly.  I got a 5.5 on the new essay, and I'm sure it wasn't perfect.  However, since the entire project had been introduced in response to the previous state writing test, and the same rubric was used, the comparison was implied.  Knowing my performance had decreased, I still had no knowledge of how or why, and, at the risk of belaboring the point, it all was because of the poor execution of the summative assessment.

More generally, it was because of the summative approach to assessment.  Just as Tovani stated, with lesson-final assessment, students often do not get the opportunity to compare their learning activities to the assessed product.  Even when they do, they are not given the chance to remedy the tested skill or improve the tested knowledge.  With this approach to assessment, we deprive students of one of the fundamental goals of learning, which is improvement.  When tests are handed back and the only response is "Good job" or "Did you not study?", we are not serving our students adequately.  Even the student who gets an A is not served, at least not any better than they would be without the teacher's assessment.  This way, learning is transformed from an ongoing process of synthesis and improvement into a series of isolated moments of listening followed by disconnected moments of regurgitation.