Saturday, December 14, 2013

When growth is not growth

Unless you're living in a cave, you know that teachers in many states are being evaluated and rated based on the scores their students achieve on standardized tests.  I have written about why this is a bad idea here, here, here, and here. Today, I'm going to explain yet another part of the system that is patently unfair to teachers.


Once upon a time, there was an excellent teacher in an excellent school district who taught excellent students.  In fact, the students were so excellent that they achieved far higher than most other students in the state in every way.  They competed in tests of scholastic aptitude, they excelled in debate and music, and of course, they always scored at the very top on statewide standardized tests.  

This year, the excellent teacher in the excellent school had seven of these excellent students in her excellent classroom.  When the standardized tests were given, she was quite confident that her students would do well, even though three of them had been out late the night before at a concert.  When the results were released, however, it was found that four of the excellent students had excellent scores that were the same as or a little better than the scores they had the previous year.  Unfortunately, their scores had been so near the top in the past that their total gain was very small.  One can't score better than 100%, after all, and all of the students had begun with scores of 93-99.  

What's really unfortunate, however, is that the three excellent students who always did excellent work and had excellent academic accomplishments didn't have such excellent scores on the state tests administered the day after the concert.  In fact, all three of them had scores that dropped anywhere from 12 to 28 points on the bell curve.  When the State averaged the net gain for this group of excellent students, it found that the average change in growth was negative 8.  
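To see how a few bad mornings can swamp an entire group, here is a minimal sketch in Python.  The individual score changes below are invented for illustration (the story only says that four students held steady or gained slightly and that three dropped 12 to 28 points), but they show how an average of negative 8 can come out of a classroom where most students did exactly what was expected:

```python
# Hypothetical year-over-year score changes for the seven students,
# consistent with the story: four small gains, three large one-day drops.
# These exact numbers are invented for illustration only.
changes = [+1, 0, +2, +1, -12, -20, -28]

average_change = sum(changes) / len(changes)
print(average_change)  # -8.0 -- the whole group gets labeled as negative growth
```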

The excellent teacher in the excellent school with the excellent students was devastated because, based on this data, the district was deemed a FAILURE when it comes to student growth with its gifted and talented students.  

This story is based on a real situation in a real school district in Ohio.  And herein lies yet another problem with using Value Added Measures to determine teacher effectiveness.  The average is not always the best representation of a set of data.  Let me give you another example.  Let's say 100 teachers are in a room, and we want to calculate the average income of the people in the room.  Looking at the wages of each teacher, we determine that the average annual income is $45,000.  Now, let's say Bill Gates walks into the room.  His annual income is $3,710,000,000.  We recalculate the average income of the 101 people now in the room and find that the mean annual income is roughly $36,777,228.  Do you believe that calculating the average income gives us a truly representative, accurate look at this data?  
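A quick sketch makes the distortion concrete.  This is just a minimal Python illustration of the same arithmetic already in the example, with the median added as one possible contrast because a single outlier barely moves it:

```python
import statistics

# 100 teachers, each earning the average of $45,000 (simplified for illustration)
incomes = [45_000] * 100
print(statistics.mean(incomes))    # 45000

# Bill Gates walks into the room with an annual income of $3,710,000,000
incomes.append(3_710_000_000)
print(statistics.mean(incomes))    # about 36,777,228 -- one outlier drags the mean up over 800-fold
print(statistics.median(incomes))  # 45000 -- still a fair description of the room
```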

Of course not.  That's why using an average is NOT a fair and accurate practice when there are students who are near the ceiling of the test score and/or there are outliers that are not representative of the overall student growth.  There is only so far that students can go up when they start near the top, but how far they can drop is out of all proportion to what they can gain.  We simply can't use this as an accurate measure of student growth, and most certainly, we can't use it as a measure of a district's or a teacher's accountability.  
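The asymmetry is easy to put in numbers.  Assuming a test capped at 100 points (as in the story, where the students started at 93-99), a student near the ceiling has only a few points of possible gain but dozens of points of possible loss.  A minimal sketch:

```python
# Assuming a test capped at 100 points, compare the headroom above a score
# with the distance it can fall, for students starting near the ceiling.
for start in (93, 95, 97, 99):
    max_gain = 100 - start   # the most the score can possibly rise
    max_drop = start         # the most the score can possibly fall
    print(f"start={start}: can gain at most {max_gain}, can drop up to {max_drop}")
```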
