Teacher’s Assessment. Or the Inevitable Arrogance.

In Uncategorized on February 20, 2011 at 5:50 pm

I admit: I am not a good teacher blogger. I prefer writing poems and watching beautiful things. But I do have my “professional” wonderings, and one of them occurred today.


What is it? How can we assess humans in the first place?


I have read many of Alfie Kohn’s articles and agree with many of the points he makes. Empirically (or by experience and reflection), I have arrived at the same ideas, and I have been as fervent an opponent of homework and grades as he is ever since I became a teacher.


Going to the heart of the matter now, I think that, regardless of the assessment type, it is subjective to quite an extent. Even if you vary and combine the assessment formats (portfolios, complex projects, tests, etc.), you will fail to know exactly how much a student has LEARNT. Let alone how DEEPLY he or she has learnt. Or HOW LONG that learning will stay with him/her. You can only find out what he or she can DO on that task, in that specific moment of the learning journey.


There are two points I want to make:

1. Teacher to teacher variation

Given the same “product” (which can be a story, a complex project, a report, etc.), no two teachers will grade/assess it alike.

2. Personal bias/experience

 Moreover, even YOU might grade it differently:

at different points in your teaching career (I read about an experiment on this). That makes sense, because we evolve as teachers (or regress in some cases – a burnout signal). Our approach to teaching and education shifts over time.

depending on your students (the halo effect). This cognitive bias appears almost involuntarily, and you need to take a good step back to become aware of it. It can appear with regard to students in the same class or students you had in different generations. Even kids can hardly manage it; I noticed this when they present final projects and have a peer-assessment session: if a kid with a not-so-good project presents his work right after a very good, impressive one, the tendency is to underrate the weaker project even further by contrast.


On the other hand, if we did not assess students, we would not know where they are on the learning (?) continuum. We could not plan future teaching, nor give them feedback so that they can set their own learning goals.


So much for accuracy. Or rather, for our arrogance in using this word: “assessment”. I think we should add “our” (assessment).

What do you think?


*Photo credit: Morgue File, Anita Patterson 


  1. Hi, interesting thoughts. I personally have a lot of difficulty trying to assess maths projects. What I’m really interested in is the thought process going on in the students’ heads, but all I can go on is what’s on the paper. Sometimes it’s easy to guess at what they were thinking, but often it’s not easy and (possibly) impossible. This leaves me marking what has been done on a very descriptive basis, and forms a ‘jump through this hoop to get level x’ assessment system. I have definite experience of your points 1 and 2 above. When we used to have coursework in maths, we would get together as staff and moderate the work so we agreed on assessments (or at least could argue our point). My feeling is that more collaboration and a wider appreciation of the PLTS (personal learning and thinking skills) would make for better assessment. Dave

  2. Thank you for the comment, Dave. I think the collaboration idea is something that can be done, indeed, and SHOULD be done more often. I feel uneasy when I grade work for more reasons than those mentioned above: geniuses, for instance, usually had very low grades in school (that tells quite a lot!); some students display what is called "scholastic intelligence" – that is, the ability to learn fast, take tests, and generally do very well in school – but fall short in demonstrating more creative approaches to their learning, and later in life are rather mediocre professionals; and with the rise of technology in the classroom it gets harder and harder to assess student work, because students create products we never learned how to assess (complex multimedia digital projects, for instance) – I admit this, as I grew up in a completely different environment. When I look for experts’ (?) opinions, these vary greatly across the spectrum. I wrote the blog post from a more philosophical perspective, and I know there are no neat solutions. It was a "wondering out loud" sort of reflection. Thanks again!

  3. Hi Cristina, I hate ‘grading’ too. So I don’t give grades. The exception is when it’s report-card time and I have to label the learners’ mastery of outcomes with M – D – E – I (more work needed, developing, evident, independent), which I hate for all the reasons you have stated above. It seems totally out of character with the rest of the programme. The PYP outlook on assessment is pretty good though, don’t you think? I mean, for the most part it’s assessment for learning, so I take it as an opportunity to see whether the students are on the path, on the sidewalk (and needing a bit of guidance to get back on), or totally off the path (in which case we probably need to think of a new path). Often this comes with the realisation that the inquiry has become more MY inquiry than theirs – i.e. a welcome wake-up call! Lots of food for thought – thanks. I hadn’t heard of Alfie Kohn, so I guess I’d better get reading…

  4. Hi Tom, The PYP framework is indeed closer to authentic assessment, but what bothers me is the very core of it all… Eventually, we DO grade students, and that is what is taken into account in their academic future – not the anecdotal records, the formative assessment, the ongoing conversations and feedback we give along the way… The grade determines the student’s future. As for Alfie Kohn… you definitely need to read his articles: incisive, but backed up with excellent arguments. You may wish to check his website. Thank you for the comment! 🙂

  5. I think this post and the other one you wrote both actually make a very strong case for objective assessments. Subjective judgement doesn’t come into play when asking questions like: 7×7=? I feel that while objective measures are critical, in some instances they are incomplete. Subjective assessments are unavoidable at times if we want to examine how deeply a student understands larger concepts. In my class (technology), students produce products like propaganda videos designed to test their knowledge of how people manipulate and package information. Some objective measures would include students having "title frames", "closing credits" and "special effects" to demonstrate that they learned how to use the software. These objective measures don’t give a complete picture, though, because a student may come up with another way to demonstrate the larger concepts of, say, a political attack ad. To exclude innovative demonstrations would be wrong in my opinion, so I leave room to subjectively evaluate them on the abstract concepts. The best we can do as teachers is to explain our criteria for judging student work as effectively as possible and use the assessment as a valuable learning experience. Everyone has different opinions of quality, so it is certainly likely that what I considered an "excellent" demonstration of the abstract concept might only seem "good" or "satisfactory" to another set of eyes.

This is related to larger issues surrounding the educational testing discussion, though, so I’d like to address them here, even if they are tangential to the points raised in this specific post. In our Twitter exchange, you quoted someone as saying, "no one measure is a reliable reflection of real learning." I think there is a kernel of truth in the statement, but as with most rhetoric in education, it is a big overstatement. If the quote had been, "Don’t rely too heavily on written tests, because sometimes they might not correctly assess learning," I would have just let it slide past me on the stream. That quote, however, made a definite, sweeping statement. My problem is not with the sentiment that written tests are often incomplete. My problem is with the rhetoric. I see hyperbole as a disease in progressive education, and the only way to prevent runaway groupthink is to use critical thinking. When people who believe what we believe make exaggerated claims, it undermines our credibility. What frequently happens is that people infected with groupthink are so conditioned to go unchallenged that they begin to use exaggeration and hyperbole out of habit. You can listen to any keynote address at an edtech conference, or just see which celeb edubloggers’ statements get retweeted, to find examples. It is our duty as educators to model critical thinking and challenge obvious cases of hyperbole like the quote above.

The burden of proof here should be on the person making an absolute statement like "no one measure" etc., but I will offer some examples of why the statement is overblown. If I want to find out whether a child learned how to perform a task I taught in class, I can ask them to demonstrate it. Some of them can complete the task with no help, some can barely get started, and most fall somewhere in between. It is one measure, and it is a very reliable reflection of real learning. Oral exams, written exams and essays all can be unreliable, and all can be reliable. I don’t teach math in the classroom, but I do tutor children in math, so I have experience with written tests, including standardized tests. Most of the questions are excellent reflections of what a student learned. Contrary to the trendy belief that written tests are mindless acts of "regurgitation", the ones I have experienced are full of questions requiring thinking in the form of evaluation, analysis and comprehension.

One student I was working with had some difficulty with basic operations, but she wasn’t too far away from knowing the answers. The problem was that she also had trouble analyzing the questions to determine exactly what operation she needed to perform. The end result was below-average test scores, and that is why she came for tutoring. If the test had been just a list of basic operations, she would have appeared better, because in that element she was okay. If the test had been just about analyzing problems to determine what solution was needed, again she would have been okay. When the two were combined in a more authentic assessment on the test, the errors overlapped at times. When she was quizzed with the goal of assessing basic operations, those quizzes proved reliably accurate. When she was quizzed with the goal of assessing analysis, again, reliably accurate. When she was tested by the state with the goal of assessing application and transfer, the state tests proved reliably accurate.

I could probably provide dozens of other examples of why the statement "no one measure is a reliable reflection of real learning" is inaccurate and an example of unchallenged hyperbole, but when someone makes an absolute statement, all that is needed to disprove it is one example. I provided two; hopefully that will suffice.

  6. Thank you for the comprehensive reply (which by itself could have been a blog entry – I appreciate the time taken). After having read it, I can only conclude the following:

1. We misunderstood each other. My view on testing (as I clarified in two different tweets) was exclusively focused on STANDARDIZED testing – that was the only form I repeatedly emphasized I disapprove of (the examples being the Cambridge YLE exams and the international math tests).

2. Secondly, I couldn’t AGREE more on the hyperbole and slogan-like words used in education today. I am constantly irritated by them, and I always question people who use them – few, if any, bother to think it through and reply.

3. The quote mentioned above was fairly correct. As a matter of fact, it came from an educator who relies heavily on DATA and ACCURACY in assessment (@DataDiva). You overreacted (in my opinion), since her statement only implies the necessity of using more assessment formats – it does not deny the importance of testing at all. (Remember, it all started from the "blood pressure" analogy which, inasmuch as it conveys correct information about the health of the patient, does not show the whole picture.) Your reaction did not come from the quality of the quote itself, but from your own mindset – you viewed it as an attack on testing (which it was not).

4. As you may have noticed, I question OBJECTIVITY quite frequently. The problem is that you cannot measure learning accurately in areas such as the humanities. Your arithmetic example (7×7) restricts the assessment to a very exact science.

5. All in all… I think we do not disagree that much, and most of our interactions were heading in this direction – that tests are necessary in the right FORMAT and FREQUENCY. They are just another tool among many (projects, portfolios, oral presentations, etc.) that help us create a more or less accurate picture of student learning. The fact that I, personally, test my students only very rarely (once a month) was a personal choice, and I always pointed that out. I am not an absolutist, neither in what regards assessment nor in teaching methods. I think a good teacher combines what is best at the right time. Thank you again for the reply.
