The Problem With the Way We Use Evidence in Education


An excerpt from the new book: Common-Sense Evidence: The Education Leader’s Guide to Using Data and Research.

Advocates of evidence-based policy have lamented for decades that evidence isn’t used enough to guide practice in education. To a degree, they have a point. A recent nationally representative survey of school and district leaders from the National Center on Research in Policy and Practice found that only about half of survey respondents agreed or strongly agreed with the statement “I find it valuable to consult education research” or “I look for research studies that might be relevant” when confronted with a new problem or decision. And only 54 percent reported that they “conduct studies on programs we select and implement to see how they work” “often” or “all the time.” But bemoaning the problem in light of these statistics misses two key points: the available evidence is less helpful than these advocates imagine, and leaders already use more evidence than the advocates think—just in different ways.

Much of the evidence often viewed as most rigorous isn’t as helpful as it could be, because it is divorced from the actual needs of the field. This disconnect arises from the academic community’s focus on a narrow, methodologically based definition of quality, which overvalues the technical aspects of research and undervalues relevance.

In contrast, most of what we suggest in this book would not meet a top peer-reviewed academic journal’s requirement for rigor. Don’t get us wrong. All else being equal, we’ll take greater methodological rigor over less. But rigor has its downsides. It can limit what you can evaluate, where you can evaluate it, and how applicable what you learn will be in a real-life setting.

One issue with rigor is that programs, often developed by nonprofits or other external vendors, are easier to evaluate rigorously than practices are. Programs are based on clearly defined protocols that all participants implement similarly, whereas practices are ideas about education that educators can customize to their settings. If we only build evidence on programs, then a push for evidence-based practice morphs into a push for vendor-based practice.

That push also leaves out core practices employed in every district—for example, what time to start the school day, how to configure grade spans across schools, and how to assign teachers to classrooms. Changes to these core practices could substantially improve outcomes, often at little additional cost. Further, most research designs evaluate a given program or strategy as if it operates in isolation, when in reality many factors interact in the complex, social process of educating humans.


Rigorous evaluations require resources that many districts lack, such as access to a research professional with substantial expertise. Rigorous evaluations also require a large number of participants, and often a large number of schools. This means lots of research gets conducted in atypical contexts, so the findings might not easily translate to other settings. And when outside researchers, rather than education leaders, set the agenda, the outsiders are less attuned to real-world political and fiscal constraints, as well as how things are actually implemented.

All of these downsides suggest that the most rigorous research—the kind most people mean when they talk about evidence-based practice—is designed by and for researchers, rather than for practitioners. The supply of evidence is not meeting the actual demands of the field. So when practitioners say they see little value in research, who can blame them?

Furthermore, education leaders are already incorporating research into the body of information they are using to guide their work. They are just doing it in ways that are hard to observe—even for the leaders themselves—and they could be doing it more intentionally and efficiently. When advocates of evidence-based policy claim that educators don’t use research, they have in mind what Carol Weiss, a scholar of research use, would call an instrumental use of research: actively consulting research to inform a decision. But Weiss’s work and many related studies have demonstrated that research also influences policy and practice conceptually—by shaping ideas and beliefs, providing frameworks for understanding, shifting the options under consideration, and altering the terms of debate.

Practitioners who use research in this way may not be able to provide specific citations to the individual pieces of research that have influenced their thinking. Indeed, they may not even realize that research is the source of some of their ideas. But its influence is there just the same. And while much of the existing academic research may miss the mark for education leaders’ needs, the popularity of sources like Edutopia and EdReports suggests education leaders are hungry for information on topics related to real problems of practice.

But there’s a big challenge. Education leaders are not typically trained in how to evaluate research evidence, let alone how to generate their own. Without building strong research skills, they may unintentionally be swayed by weak evidence, particularly when it confirms what they already believe to be true. Education leaders, like all other humans, are subject to cognitive biases that can limit the range and quality of information they use.

Rather than searching rationally for information to answer our questions, we humans are hardwired to be more likely to notice information that aligns with what we already believe and to overlook or dismiss information that is inconsistent with our beliefs. Recognizing this tendency—and knowing how to avoid it—helps leaders avoid being persuaded by faulty data and gain access to the full body of information on an issue, not just what they already believe to be true.

Excerpted with permission from Common-Sense Evidence: The Education Leader’s Guide to Using Data and Research by Nora Gordon and Carrie Conaway, 2020, published by Harvard Education Press. For more information, please visit  

Photo courtesy of Allison Shelley/The Verbatim Agency for American Education: Images of Teachers and Students in Action