Thursday, November 17, 2011

Wrong answers....

My daughter was laughing her way through Richard Benson's "F in Exams: The Very Best Totally Wrong Test Answers", which is a pretty funny read, and asking me what I thought. Well, mostly I thought the answers were amusing, and the questions that elicited them poorly written. If you ask students, "Where was George Washington born?" and they answer, "in a bed", you cannot mark it wrong. It correctly answers the question asked. If you want the name of the city, you have to ask, "In what city was Washington born?" And so on. That said, half the answers in Benson's book are in fact wrong answers, but funny nevertheless, either because they are cheeky, or because they simply reveal an understandable misunderstanding. As it happens, I came across just such an example as I was marking my own exams at that moment. A student who meant to refer to the educational limitations of "rote memorization" wrote instead "rogue memorization" -- an unfortunate slip of the pen, or perhaps a genuine mishearing of what the student thought his/her profs had been saying. But I kind of like the phrase! Yeah, you really don't want your students dwelling on rogue memorization. Given the randomness of some of the answers to the questions posed on the exam, I greatly fear some of my class have in fact experienced rogue memorization when studying for the exam....

Wednesday, August 31, 2011

Questions Answered.

Q: How many alternatives should a multiple-choice question have?

Unless you have a specific subject that demands more than five possible alternatives (say, in a unit on the solar system, having all 8 planets as alternatives might make sense -- though a matching question would probably work better!) don't do it! The more alternatives, the worse your question is likely to be. Four or five alternatives are standard on professionally designed tests (since statistically, this reduces the score students could get by pure chance to 25 or 20% respectively). Some people want to pile on alternatives to make the questions 'tougher' and to 'eliminate chance'. But here's the thing -- having 7 alternatives does reduce the chance of students getting the question right by blind guessing, but why bother? Once they've gotten less than 20%, how much more failed do they need to be? And while theoretically reducing the impact of chance on getting the right answer, the more answers the student has to read through, the more it becomes a reading test rather than a test of the subject matter. So do we really want to eliminate chance factors by penalizing poor readers, ESL students, and so on? Most test designers agree the trade-off isn't worth it!
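The chance arithmetic here is simple enough to check for yourself. A quick sketch (the particular alternative counts in the loop are just my own illustration):

```python
def blind_guess_score(num_alternatives):
    """Expected percentage score from answering every question
    by pure chance on a test with this many alternatives."""
    return 100 / num_alternatives

for n in (3, 4, 5, 7):
    print(f"{n} alternatives: {blind_guess_score(n):.1f}% by blind guessing")
```

Three alternatives give 33.3%, four give 25%, five give 20% -- and going all the way to seven only shaves that to about 14%, which is the "how much more failed do they need to be?" point in numbers.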

And in the real world, coming up with 7 or 8 credible alternatives becomes REALLY hard. Again, unless there are eight obvious choices, like the eight planets of the solar system, you will drive yourself crazy trying to come up with credible but clearly wrong answers six, seven and eight. Why do this to yourself?

Or, people will do terrible things like having alternative 5 as "A and B, but not C". No, no, no! Never do this! It becomes a test of reading and logic rather than subject knowledge. Students will hate you, with justification, since the test will not be an accurate reflection of what they know -- indeed, some research suggests that this INCREASES the importance of luck...

But here's the killer -- research in the early 1990s suggested that the overall quality of tests DECLINED with the increase in number of alternatives per question. Everyone who has ever designed an mc test knows that coming up with the answer is easy, the first two wrong answers are pretty easy, it's alternative 4 that's tough, and #5 is almost impossible -- the higher you go, the more desperate one becomes to fill the last spot. So, a test with seven alternatives will reduce the writer to grasping at straws, and they will end up accepting ridiculous alternatives that even those completely ignorant of the subject will have no trouble eliminating -- a complete waste of space and student reading. And then -- this is where human nature gets interesting -- since I've given up and accepted a stupid alternative for this question in desperation, my standards for writing the next question go down, because even though I know this is a terrible alternative, it is not as bad as the last one. Or, having accepted three weak ones, what's one more? Pretty soon, the test is garbage.

In contrast, tests with 3 alternatives (the right answer and two wrong alternatives) turn out to be easier to write, and therefore are written to a much higher standard. Students perceive them to be much tougher tests! And, research says, they really are more valid and reliable! So, I tell my students to write questions with three (good!) alternatives rather than going for four or five. Professional test designers can go for four or five because they have the time to come up with high quality 'd's and 'e's, but the reality for classroom instructors is that that is not going to happen.

It's true that with only three alternatives, students can get 33% just by blind luck, but um, so what? I don't know any course where 33% is a pass. Failed is failed. And the results of this test will more accurately reflect what students actually know than one with 7 or 8 alternatives.

Questions Answered.

From time to time strangers email to ask a question on test construction, which I do my best to answer. If they are the sort of questions that I get a lot, I add them to the "Frequently Asked Questions" file on the test construction site; but I think I'll highlight some of them here in the blog as well.

Q: Where is the best place to put the correct answer? For example, if I provide seven choices, does it make a difference if the correct answer is choice 'b' instead of choice 'e'?

A: Professional test designers place the answers randomly -- I mean that in the literal statistical sense of the word, not 'wherever'. They use tables of random numbers, or complicated computer programs that assign the answer randomly, to decide which spot will hold the answer to each question.

What they do NOT do is place it themselves. Research shows that left to our own devices, most people will attempt to 'hide' the correct answer somewhere in the middle of the list. (Nobody wants to put the right answer in 'a', because then the students won't even read the other alternatives you worked so hard on; and putting it in 'e' makes it seem to hang out there over the edge. Sticking it in the middle feels right! Even though that's wrong.) Even experienced test construction professionals will unconsciously choose 'c' (or for some individuals, it turns out to be 'b') 3/4 of the time. That's why the rule for taking an mc test is "when in doubt, choose 'c'" -- because unless one takes care to distribute correct answers to get an equal distribution of a, b, c, d, etc., there will be way more 'b's and especially 'c's than other answers, so testwise students can do quite well for themselves simply by answering 'c' to every question. That's why professionals force themselves to do it randomly by using computers or tables of random numbers. And then they'll double-check at the end of the test to make sure they have roughly equal numbers of a, b, c, d, etc.
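For anyone who builds tests with a script rather than a table of random numbers, the random placement plus the end-of-test distribution check might look something like this (a minimal sketch; the forty-question, four-alternative format is just an assumption for illustration):

```python
import random

def place_answers(num_questions, slots="abcd", seed=None):
    """Assign each question's correct answer to a random slot,
    standing in for the professionals' tables of random numbers."""
    rng = random.Random(seed)
    return [rng.choice(slots) for _ in range(num_questions)]

def distribution(placements):
    """Count how often each letter holds the correct answer, for the
    final check that the spread across a, b, c, d is roughly even."""
    return {letter: placements.count(letter) for letter in sorted(set(placements))}

placements = place_answers(40, seed=2011)
print(distribution(placements))
```

The point of the `distribution` check is exactly the professionals' double-check: if one letter has far more than its share, reshuffle a few questions rather than leave a pattern for testwise students to exploit.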

For classroom instructors, I wouldn't bother with tables of random numbers (which are kind of a pain to work with); instead, let the answers fall where they may by pyramiding questions. To stop students from trying to figure out which answer will come next ("there have been three 'd's in a row, so the next one must be something else"), you let the internal logic of the question dictate placement. Numerical answers are listed in ascending or descending order; dates in chronological order; single-word answers in alphabetical order; sentences either on the basis of some internal logic or, more usually, shortest to longest or longest to shortest. (Incidentally, this also makes the test look really pretty! People who don't pyramid their tests have really ragged-looking questions.) So if the correct answer turns out to be the longest, it places itself in the 'e' slot, not letting the designer 'hide' it in the middle 'c' spot. After the initial draft of the test is done, one quickly looks through to ensure one has equal numbers of 'a's, 'b's, 'c's, etc. Where there are too many of one, say 'a's, you go through and change some of the ascending questions to descending to move the 'a' to a 'd' or whatever. It works pretty well!
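For instructors who assemble tests with a bit of scripting, the pyramiding rule for sentence-length alternatives can be sketched in a few lines (a rough illustration only; the example alternatives are invented assessment terms, not drawn from any real question):

```python
def pyramid(alternatives, correct, descending=False):
    """Order answer alternatives shortest-to-longest (or the reverse)
    and report which letter slot the correct answer lands in."""
    ordered = sorted(alternatives, key=len, reverse=descending)
    letter = "abcde"[ordered.index(correct)]
    return ordered, letter

alternatives = ["ipsative", "norm-referenced", "criterion-referenced"]

# Ascending order puts the longest (here, the correct) answer in the 'c' slot...
_, slot = pyramid(alternatives, "criterion-referenced")
print(slot)  # c

# ...and if the draft test already has too many 'c's, flipping this question
# to descending order moves the correct answer to 'a'.
_, slot = pyramid(alternatives, "criterion-referenced", descending=True)
print(slot)  # a
```

Which is exactly the end-of-draft balancing move described above: the question's internal logic, not the designer's urge to hide the answer, decides the slot, and flipping ascending to descending rebalances the letters.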

Wednesday, April 6, 2011

Peer Evaluation vs "Passing Back"

Caught a student teacher having her students pass their assignments back for the student behind to mark (the last person in each row brings theirs forward to the person in the front desk). This is a fairly common practice among classroom teachers, but it is a very, very bad idea.

It is bad evaluation practice because students routinely 'fix' mistakes for their friends, misinterpret directions or answers, and are too narrowly literal or too generously accepting of variations in acceptable answers. The assessment data thus collected is therefore unreliable -- and if the data one is collecting is unreliable, why bother?

It is unprofessional because grading is the teacher's responsibility, and should not be delegated to student slave labour -- time spent by students marking other students' work for the teacher may well save the teacher's time and energy, but it diverts class time from learning. The teacher is supposed to be there for the students' needs, not the other way around.

But mostly my objection is that it is an unethical practice because it violates the students' right to confidentiality. Teachers may regard the "passing back" of spelling or math tests as a trivial matter, and the waste of instructional time or the danger of unreliable data of little importance. But the student (and their parents) may take a different view.

My daughter, for example, has dysgraphia (a relative of dyslexia) that makes it difficult for her to spell. For years, teachers using 'passing back' marking caused my daughter considerable embarrassment as the students marking her work invariably told others in the class laughable examples of my daughter's poor spelling. As a consequence, all her peers believed her to be a weak student (they of course phrased it in less complimentary ways) and their taunting caused her no end of difficulties. All because (a few of) her teachers shirked their responsibilities for marking. And she is just one example -- by definition, every classroom has students who perform badly for one reason or another (including lack of ability) and they all have the unconditional right not to be humiliated. If students choose to share their scores with each other, that is one thing; but it is unethical and unprofessional for a teacher to disclose a student's grade to the class or to allow students to view other students' grades -- the practice is simply unacceptable.

So it is a topic that I hammer on pretty heavily in my evaluation classes. So when I found a student teacher doing "passing back" marking, I of course questioned her. To my utter astonishment, instead of saying she had to because her Teacher Associate insisted, or admitting she'd slept through that part of the evaluation course, she said that she thought we had told her to include peer assessment in her evaluation strategy! As if 'passing back' marking constituted peer assessment! After I picked my jaw up off the floor, I tried to explain the difference between having kids score a quiz for the teacher, and students providing constructive feedback to each other through portfolio conferencing, workshops, rubric design, etc., etc., etc. Not sure how successful I was. Generations of teachers have modeled bad evaluation practice; it is an uphill battle to move the profession towards more professional practice. If all this student teacher has ever seen is 'pass it back', how could she know to organize her students for peer evaluation, or how to do it?

Saturday, February 5, 2011

Textbook Proposal

Pitched an assessment textbook to a publisher this week; it was tentatively accepted. The title, The Cheap, Simple, Straight-Forward Guide to Student Evaluation, says it all.

But honestly, it drives me crazy when text salesmen come around and want to sell me a text that sells for $140+ (and god only knows how much markup the u bookstore will add to that!), of which I can only use three chapters in my course, and which the students will never open again once the course is over. Most evaluation texts devote hundreds of pages to theory which no classroom teacher needs, but which ed psych profs insist on teaching anyway. My attitude is that most of that stuff can wait till grad school. What pre-service teachers need is just enough practical hands-on skills (how to write a multiple choice question, how to set an essay assignment, how to handle oral questioning) to survive practicum and the first two years of teaching. The sort of info I offer on the website.

I figure we can sell the text for under $25 and it will be something students would actually keep around and consult for at least their first couple of years of teaching. I want to see my text on teachers' bookshelves, preferably dog-eared from their constantly consulting it, not in students' trash at end of term. I'm guessing the e-version of the text could be even cheaper -- maybe as little as $5. I would have no hesitation asking students to spend $5 on a text; but I cannot bring myself to ask them to spend hundreds on the uniformly pompous, tedious, bloated texts that are currently out there. Up to now I have simply refused to assign a text and just used a custom learning resource (i.e., a course reader printed by the bookstore), but the price on that has kept creeping up, and as our university is one of those refusing to sign off on ACCESS's new copyright agreement, books of readings have become almost impossible to produce. So it's time for me to produce my own cost-effective text.

I know it's what I want for my class, and my publisher is betting that once it's out there, other profs will use it too. Or if not, their students will certainly buy it, whatever the official text for the course.

Because in addition to a focus on what preservice teachers really need to know, I intend for the book to be readable -- plain, readable English with touches of humor. I can't stand how pompously serious all the other textbooks are. The last time I took a text to a major academic press, they made me edit out nearly all the humour. This makes no sense to me, but they said profs on other campuses reviewing the manuscript all complained that they felt they were teaching a serious course and needed the text to be serious. Because I had a coauthor, I had to go along with deleting huge chunks of what I considered to be the really interesting bits. That's so wrong! Students want entertaining writing. "Serious" so often translates as "pompous" and "tedious" that it kills students' love of the subject. They want to see the same passion in the text as they get from good profs. (I sometimes wonder if those profs who rejected the funny bits in that manuscript felt threatened by the text being more entertaining than they were....) The one article I went to the wall for was a provocative essay -- not funny as such, but a totally in-your-face, outrageous attack on the standard interpretation of things. They let me keep that chapter in. So the text itself went by the boards as soon as my colleague and I stopped teaching that particular course (I changed campuses), but my provocative essay, I am pleased to say, is still reprinted in course readers across Canada, and there isn't a conference I go to that someone doesn't come up and tell me how much they love that article and how refreshing it was to read something not dull.

So, all those factors coming together -- realizing I've made far and away more money from the reprint of that one provocative article over time than from all my other textbooks and chapters put together; annoyance over the weak selection of outrageously overpriced options available to me from mainstream publishers; and the collapse of ACCESS agreements -- have made me see it's time for me to write my own evaluation text.

If it goes over well, the Cheap, Straight-forward, Provocative Introduction to the Sociology of Education could be next.

Sunday, January 9, 2011

Exam Anxiety

The Tuesday, November 9th, 2010 edition of the CBC's news show, The Current, had an interesting discussion on exam anxiety, described on their site as follows:

Exam Anxiety - Gabor Lukacs

Passionate debate is a core part of the university experience, and at the University of Manitoba in Winnipeg, the hot topic right now deals with academic policies at the university itself. Specifically, how far should the administration go to accommodate students with academic anxiety?

In this case, the university waived certain degree requirements for a doctoral graduate math student diagnosed with extreme exam anxiety. The student, whose name has been withheld, was ultimately granted a PhD.

University of Manitoba math professor Gabor Lukacs considered that an unacceptable breach of academic standards and has taken the matter to court. The University has since suspended the professor for three months without pay, accusing him of harassment, insubordination, and violating the student's privacy.

Gabor Lukacs joined us from Winnipeg this morning.

Officials of the University of Manitoba will not comment directly on this case, citing privacy rules. But university spokesperson John Danakas responded to questions about the university's policies regarding accommodating students with disabilities. We aired a clip.

In the 2008-2009 academic year, 136 University of Manitoba students were registered as having exam anxiety. That means they were able to provide medical documentation of their condition. Dr. John Walker is the director of the Anxiety Disorder Program at St. Boniface General Hospital in Winnipeg. He explained what exam anxiety is, and how he comes to a diagnosis.

Manitoba Math Fight - Carolyn Mamchur

We spoke to officials of several universities across the country, and they also noted more students reporting forms of academic anxiety. That doesn't mean faculty think it's a good idea to waive degree requirements for students debilitated by academic stresses.

Carolyn Mamchur is a professor of education at Simon Fraser University who has written extensively about how to address different learning styles and student anxiety. Professor Mamchur was in Vancouver.

You can give the podcast a listen at

John Mighton on the Bell Curve

Provocative talk by John Mighton of the Fields Institute for Research in Mathematical Sciences on The Ubiquitous Bell Curve: What It Does and Does Not Tell Us. The talk focuses on the JUMP Math program and Mighton's work helping teachers learn how to excite students about mathematics. Mighton raises a number of key issues about teacher expectations, assessment, and students' ability to learn.

The talk is from TVO's Big Ideas series (highly recommended, whatever your interests).

Like all of TVO's Big Ideas series, the talks are made available as either audio or video versions. I find downloading the audio to my iPod, and then listening while cutting the grass, washing dishes, taking the bus, etc., allows me to turn wasted time into productive time; but you may prefer to watch the video so you can see the presenter's slides, etc.