In other words, test-based value-added doesn’t make sense

Education Week posts a defense of value-added metrics, attempting to address concerns about their use and reliability. By its own admission, the argument hinges on a central assumption:

If student test achievement is the desired outcome, value-added is superior to other existing methods of classifying teachers.

What if student test achievement isn’t the desired outcome?


About Spherical Cow
I'm a trained cognitive scientist and education researcher currently working for an education non-profit. In my job, I translate findings from education research into classroom practice and observe and evaluate the results. I also help non-scientists understand what we can and cannot conclude from different data sets. I hope that increased awareness of quality research will improve the discourse and policymaking in education.

