Phi Beta Cons

Evaluating Student Learning

An interesting problem with higher education is that there’s no way to measure it on a value-added basis. One can look at how bright students are coming into a school (the College Board publishes average SAT scores, for instance), but that just measures how selective the school can be. One can also look at how much money students make after graduating, but again, if they make more, that might just be because they were brighter to begin with (or studied more lucrative subjects). There’s no standardized test students take before graduating that proves they’ve learned something.
The Educational Testing Service has a few ideas in its new report (PDF). Its suggestions for educational institutions are rather commonsensical: Decide what you want students to learn, devise a test that determines whether they’ve learned it, and if they haven’t, find ways to teach better. Always try to improve your tests.
That’s great in theory, but the report (necessarily, at just 32 pages) is short on specifics. In many fields, students have (and should have) a whole buffet of courses to take, and no single standardized test can really determine whether students in those fields have “succeeded.” Which regions, periods, and writers should an English major concentrate on? Should he specialize, taking many courses on similar topics, or learn a little bit about a lot of literature? Higher education could certainly stand to move in the direction of accountability, but sometimes it’s just hard to give everyone the same test and expect it to mean something.