I’ve done enough posts on this subject for one week, so I’ll just tie up some loose ends.
First of all, it seems I misunderstood “shopping week” — most schools have a certain period in which students can drop classes without penalty, but Harvard apparently goes an entire week without assigning work. In other words, they waste a week just so students can be sure they’re in classes they’ll like. So you’re right: There’s no particular reason they really, really need other students’ evaluations to help them decide.
Second, I disagree that faculty advisers’ recommendations are somehow immune from administrative oversight. If a school employee, specifically charged with helping professors teach better, outright encourages instructors to give grades higher than those students earned, the recipients of that advice ought to report it. The individual had the title of “faculty mentor” and made the suggestion in that capacity, so the communication was not “private.” How often such reporting would actually happen is an open question, but any institution rests on the assumption that its officers don’t commit professional misconduct. If a few professors take evaluations as license to do just that, it reflects on the professors, not the evaluations. Arguing otherwise is like saying colleges shouldn’t give tests because some students will cheat.
Finally, just to summarize my overall point, student evaluations (A) give professors direct feedback from their customers and (B) give students information they can use to (hopefully) choose their classes wisely. This comes with the cost of some grade inflation, some of it intentional but most of it likely unconscious. An additional cost is that some schools give the evaluations too much weight, even using them as a major source of information for tenure decisions, but this is a problem of execution, not an inherent problem of evaluations. (By the way, I’m flabbergasted that schools would do such a thing without statistically controlling for the average grade a professor gives when interpreting that professor’s average student-evaluation rating. Given the data, even I could put that together in a rudimentary way, and I didn’t major in statistics.)
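To sketch what that rudimentary control might look like (with made-up numbers, since I don’t have any school’s actual data): regress each professor’s average evaluation rating on the average grade they give, then compare professors by the residual, i.e., the part of their rating that their grading leniency doesn’t explain.

```python
def grade_adjusted_ratings(avg_grades, avg_ratings):
    """Fit rating = a + b * grade by simple least squares, return residuals.

    A positive residual means a professor is rated better than their
    average grade alone would predict; a negative one, worse.
    """
    n = len(avg_grades)
    mean_g = sum(avg_grades) / n
    mean_r = sum(avg_ratings) / n
    cov = sum((g - mean_g) * (r - mean_r)
              for g, r in zip(avg_grades, avg_ratings))
    var = sum((g - mean_g) ** 2 for g in avg_grades)
    slope = cov / var
    intercept = mean_r - slope * mean_g
    return [r - (intercept + slope * g)
            for g, r in zip(avg_grades, avg_ratings)]

# Hypothetical numbers: average GPA awarded and average rating (1-5 scale)
# for four professors.
grades = [3.2, 3.8, 3.5, 2.9]
ratings = [4.0, 4.6, 4.1, 3.9]
print(grade_adjusted_ratings(grades, ratings))
```

A real analysis would want more than four data points and probably a proper regression package, but the idea is no more exotic than this.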
There are a number of workarounds, but I don’t think any offers as good a cost-benefit balance. Having professors rate their peers’ performances doesn’t give a true customer perspective and eats up professors’ time. Following Harvard’s path, wasting a week’s worth of student and faculty time, seems like the nuclear option.