Thursday, January 30, 2014

MAP scores, RIT bands, Lexiles, and other nonsense

I have finally evaluated all the planning reports from my endorsement students. This report, the first submission of the semester, focuses on their implementation of the library research they conducted last semester, and will be followed by two rounds of preliminary reports before the final submission. The topics students have chosen are interesting and grounded in their own teaching contexts. All in all, I am happy with their submissions . . . but surprised by the number of students who want to rely on MAP, RIT, and Lexile scores as their only source of data, as the only way to gauge whether their implementation plan is effective. I shouldn’t be, though. One of the things that depresses me is the loss of autonomy and respect that classroom teachers are experiencing. They don’t seem to trust their own professional judgment anymore, and they should. What does a particular MAP score mean, really? Teachers can tell you so much more about what students can do and what they need more help with, based on their own formative assessments. And don’t get me started on Lexile scores, particularly as they are used to bludgeon young readers (as in: students can’t read outside their Lexile range).

Readability (measured in Lexiles) takes two major factors into account: word length (on the reasoning that long words are harder to read) and sentence length (on the reasoning that longer sentences are harder to read). But in reality, if you take a long, complex sentence and break it into two shorter sentences, what frequently happens is that you turn one sentence with an explicit connection into two sentences with an implicit connection, one the reader must infer in order to comprehend the passage. The sketch below makes this concrete. I am not aware of any readability formula that takes the considerateness of the text or the reader’s prior knowledge into account, and those two elements are crucial in determining how difficult a text will be for a particular student.

I remember years ago, when I was teaching the clinic course at Clemson, we had a young 5th grader who was failing reading. We administered an Informal Reading Inventory to her and she did fine; she read on level. We were puzzled and asked her to bring in some of the school texts she was reading. They were awful, very inconsiderate, and no wonder she was having difficulty with them! In addition, the assignment was simply to read and then answer the questions at the end of the story. No introduction, no discussion, nothing. Her reading ability was being measured at school using inconsiderate text on unfamiliar topics.

Reading ability is a measure of how well a person reads. Does anyone know what “fourth grade reading” looks or sounds like? Reading ability is rarely measured on the same metric as the readability of text. So readability formulas may have a role to play in teachers’ instructional planning, but using the results of reading tests to dictate the particular Lexile range a student is allowed to read makes me crazy.
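The exact Lexile formula is proprietary, so as a stand-in, here is a minimal sketch of the public Flesch-Kincaid Grade Level formula, which relies on the same two inputs: sentence length and word length (approximated by syllable counts). The example sentences and the crude syllable heuristic are mine, for illustration only.

```python
import re

def count_syllables(word):
    # Rough heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Crude sentence and word splitting; good enough for a demonstration.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Published Flesch-Kincaid Grade Level coefficients:
    # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

explicit = "The river flooded because the rain fell for days."
implicit = "The rain fell for days. The river flooded."

print(round(flesch_kincaid_grade(explicit), 1))  # ~5.0: one long sentence scores "harder"
print(round(flesch_kincaid_grade(implicit), 1))  # ~0.7: two short sentences score "easier"
```

Notice that the split version scores several grade levels “easier” even though the reader now has to infer the causal connection that “because” made explicit. That is exactly the blind spot: the number goes down while the comprehension demand goes up.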

OK, I’ve gotten off on a tangent again. Seriously, though, if this country is to improve student learning (and sometimes I wonder whether that is really the goal; politicians and the general public seem to equate test scores with learning, a big mistake), we have to change what we are doing. Currently, we are preparing students perfectly for the world of 1950, but we live in the 21st century. Raising scores on multiple-choice standardized tests with one and only one right answer for each question will not prepare students for the world of the 21st century.
