Analogies are important. While often imperfect, they help us make connections and better understand our world.
That’s why I think it’s a shame the education field has yet to come up with good analogies for value-add data and models, especially in the teacher evaluation context. For example:
- Is value-add like a high-school GPA, providing a threshold for certain decisions (you can’t get into College ABC without a certain GPA) but best used in conjunction with other factors (Were you a varsity athlete? Do you do community service? How was your admissions essay?)
- Is value-add like a batting average, telling us how good teachers are at some skills (hitting), but not others (fielding)?
- Is value-add like a credit score, highly dependent on input qualities (did the bank get everything right?) with the potential to change over time (decreasing when you took on student loans, increasing as your credit history grows)?
The inability of education reformers to clearly explain value-add has confused the conversation. For example, yesterday I went to a terrific conference sponsored by the National Center for the Analysis of Longitudinal Data in Education Research (a mouthful, so they go by CALDER). The brightest minds in education research presented papers on value-add, but it was clear throughout the day that the non-researchers in the audience struggled to grasp the policy implications. In a somewhat tense moment (as these things go), an audience member suggested that value-add doesn’t account for situation X and so can’t be used by itself to assess teachers, to which an exasperated panel member replied, “I just heard [my colleague] say that!” Something was clearly lost in translation; an analogy would have been useful. (JF)
What analogies would you offer for value-add?
Great point! So much of the conversation around value-added models is focused on trying to explain and understand the complex data models that the critical exercise of framing the number within a meaningful context for educators has been completely overshadowed. I think this has been a fatal misstep for some of the states and districts that have attempted to implement evaluation systems incorporating value-added ratings for educators thus far. That being said, my personal favorite of these analogies is the “batting average” comparison, with multiple measures to describe different skills (i.e., slugging percentage, on-base percentage, etc.). The point of educator evaluation systems shouldn’t simply be to get rid of teachers who don’t hit a certain number; it should be to drive improvements in practice through targeted improvements in teacher preparation and ongoing professional support. That’s what would actually make the numbers meaningful.
I like the quarterback rating from football as a metaphor. A quarterback rating is basically a formula that distills the results we want to see from a quarterback (complete passes, yards, touchdowns, and no interceptions) and weights them into an index. As with value-added, the numerical value of the index itself is not of much use on its own; rather, it gives a sense of where the quarterback stands in comparison to other quarterbacks trying to meet the same objectives. The specific value of the rating is not what is useful in evaluating the quarterback; it’s how quarterbacks sort compared to one another in the broad sense.
Over time, quarterbacks with high QB ratings are generally considered very good quarterbacks, and those with low ratings are considered poor, even if there are fluctuations from year to year. As with value-added, you could watch a quarterback who seems not to be very effective, then look at his rating and be surprised. You might also see a QB who looks great on the field but does not produce.
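For anyone curious how such an index actually gets assembled, here is a minimal sketch of the NFL passer rating in Python (the function name and the sample stat line are mine; the weights and clamping follow the league’s published formula). The takeaway isn’t the arithmetic but the shape of the exercise: a few weighted per-attempt inputs get compressed into a single number that only means something in comparison to other quarterbacks, which is exactly how a value-added estimate should be read.

```python
def passer_rating(completions, attempts, yards, touchdowns, interceptions):
    """NFL passer rating: four per-attempt components, each clamped to
    [0, 2.375], averaged, and scaled to an index that tops out at 158.3."""
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp((completions / attempts - 0.3) * 5)        # completion rate
    b = clamp((yards / attempts - 3) * 0.25)             # yards per attempt
    c = clamp((touchdowns / attempts) * 20)              # touchdown rate
    d = clamp(2.375 - (interceptions / attempts) * 25)   # interceptions hurt
    return (a + b + c + d) / 6 * 100

# A hypothetical stat line: 20-of-30 for 250 yards, 2 TDs, 1 INT.
print(round(passer_rating(20, 30, 250, 2, 1), 1))  # 100.7
```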