One of the crucial questions in education reform is how to improve teacher quality — it has a large influence on student learning outcomes and lifetime earnings. But it turns out we know remarkably little about how to improve teaching — how to turn bad teachers into average ones, or average ones into excellent ones.
That shouldn’t necessarily limit our ability to succeed in those goals, though. In fact, Google’s innovative approach to improving the quality of its managers suggests there really is a way forward for rigorous, analytical development of a complex skill like teaching or managing. Far be it from us to endorse Jeff Jarvis’s “What Would Google Do?,” but such is the sad state of teacher professional development.
Just how depressing is the state of teacher professional development (PD)? Tom Loveless of Brookings usefully summarized the gory details: A report prepared for the Institute for Education Sciences reviewed 643 studies on teacher PD effectiveness for K–12 math, and concluded that “there is very limited causal evidence to guide districts and schools in selecting a math professional development approach or to support developers’ claims about their approaches.” In lieu of evidence, they basically recommended that schools and districts make their best guess.
All but 32 of the studies identified as relevant were conducted in ways that could not meaningfully contribute to scientific knowledge of PD effectiveness. Of those 32, only five met evidence standards as outlined by the IES’s What Works Clearinghouse (whose work is important but still underfunded and underappreciated). Of those five, just two found positive effects of PD on student math performance. Here’s an illustration from the IES report on the depressing state of our knowledge:
To close followers of education research, these findings are likely unsurprising. But they point to a serious problem with how knowledge is created and used in education.
Which isn’t to say there’s nothing we can do. One can at least imagine what it might look like if schools, districts, or charter networks made regular use of existing empirical (and especially experimental) research, or even institutionalized their own analytical practices — for teacher PD as well as for other aspects of education.
Is that possible? Well, look at what Google does to train and improve its managers. In the December 2013 issue of the Harvard Business Review (free, registration required), David A. Garvin, a professor of business administration at Harvard Business School, explored how Google has tackled a problem similar to that of improving teacher quality: identifying and improving manager quality.
At first glance, managing — like teaching — seems more art than science. Nonetheless, Google initiated something called Project Oxygen to identify and measure qualities of managers that led to important outcomes: employee retention, performance, career development, and happiness, among others.
After months of data collection and analysis, they settled on eight behaviors that were particularly well represented among the highest scoring managers. They then implemented a program assessing those behaviors and a training program to help managers voluntarily improve where their scores were low. Over a two-year period, manager scores improved across the board, and — notably — low-scoring managers showed the greatest improvement.
Two key aspects of the program can offer lessons for education: First, as one analyst in the program noted, the assessments were used “as a development tool, not a performance metric.” Upon receiving assessments that indicated areas where they could potentially improve, managers could voluntarily seek out the management classes the project had made available. In fact, Google initially considered tying the scores to performance reviews, but decided that it might impede the program because employees might perceive it as “a top-down imposition of standards.” Sound like a problem ed reformers sometimes worry about?
Second, the assessments and classes for improvement were received positively almost across the board, in part because Google has such a data-oriented culture and the project was carried out in the most rigorous ways possible. Both factors helped get managers to trust the validity and utility of the assessments. This poses a challenge for education reform: Most schools don’t yet have a data-oriented culture that would be as amenable to such an initiative, though there are exceptions, especially in the charter world — Doug Lemov and his charter network Uncommon Schools come to mind. But if teachers trust that the project is being carried out rigorously — as they rightly suspect many PD programs today are not — and they are given high-quality opportunities to improve, Google’s experience suggests that it’s possible to get employee buy-in.
More generally, it’s unsurprising that talent analytics are being developed most extensively in the context of competitive markets. Companies like Google have the capacity to develop them, and every reason to try to get ahead of their competitors in an extraordinarily tight tech labor market. In contrast, the education environment is routinely hostile to experimentation of the scientific or trial-and-error varieties. If it were less so, particularly — though not exclusively — through greater decentralization and reliance on competitive markets for educational goods and services, we would be much more likely to see the development and proliferation of practical knowledge about how to identify and improve teacher quality.
We should be careful, as AEI’s Rick Hess has noted, not to oversell the promise of research — it’s limited in ways not always acknowledged by its most overzealous advocates. But as Google has shown, it’s possible to identify, measure, and improve important so-called “soft skills” that might otherwise seem difficult to develop. That is all the more reason to adopt a persistently and rigorously analytical approach in education, and to attempt to institutionalize it where possible.