From the Field

Q&A: Jackson on Why We Should Be Measuring Social-Emotional Well-Being

Amid schools’ increased focus on students’ social-emotional development, researchers and educators are asking two questions: Can helping students on this front improve their chances of success? And are student surveys an appropriate way of measuring social-emotional development? A new working paper sheds light on both questions. Northwestern University professor Kirabo Jackson and his research partners—Shanette C. Porter, John Q. Easton, Alyssa Blanchard and Sebastian Kiguel—found that Chicago high schools that excelled at improving certain social-emotional measures also saw improvements in outcomes such as grades, attendance and graduation rates. FutureEd’s Phyllis W. Jordan talked with Jackson, a FutureEd research advisor, about the findings.

Can you tell us how you conducted this research and how you rated different schools?

Jackson: We classified schools in three different ways. We looked at schools’ effects on test scores, and we used that as our traditional measure of school quality. Then we looked at school effects on an index made up of survey questions related to the extent to which students worked hard. It was a combination of questions about self-reported grit, about whether students work hard and persevere toward academic goals. We also examined effects on other questions related to the extent to which students had the sort of study habits that are conducive to working hard, things like “I do my homework on time” and “I always finish what I start.”

Then we looked at a measure of what we describe as social well-being. This measure relates to questions that reveal, essentially, what I think psychologists would call belonging, the extent to which students feel like they belong in the school. It also included questions related to interpersonal skills: “I’m able to resolve conflicts with my classmates,” and these sorts of things. And we combined those questions to form a social well-being index.

This is information you gleaned from school climate or student engagement surveys?

That’s correct. The Chicago public school system, through the University of Chicago Consortium on School Research, has been administering the My Voice, My School survey, which is given to students and teachers. At the student level, the survey asks a bunch of questions related not only to the school environment students are in, but also to students themselves and their attitudes. So we used the questions that related to students’ own actions and attitudes to form our measures of socio-emotional development.

And you found that these social-emotional learning measures were predicting positive outcomes for students.

In fact, these socio-emotional measures had larger estimated effects on most outcomes than test scores did. You could explain much more of a school’s impact on things like disciplinary incidents, attendance, course grades, high school completion, college-going and college persistence using the school’s effects on socio-emotional development than you could using its effects on test scores.

Did schools that excelled at the different components of social-emotional development—the hard work indicators versus the social well-being measures—have different outcomes?

Schools that had larger effects on grit also tended to have larger effects on study habits, which in turn tended to go along with larger effects on academic engagement. Those are exactly the three components we combined to form the work-hard index. And by similar logic, the schools that were better at moving the needle on self-reported belonging also tended to do better on self-reported interpersonal skills.

And what sort of outcomes did that translate into?

We found that schools that have larger impacts on social well-being seem to reduce absences and disciplinary incidents, whereas the schools that promote hard work tend to have bigger effects on things like course grades and high school completion.

Were some schools really strong in some areas and weak in others?

The school effects were actually pretty highly correlated. If schools were good in one, they were likely to be good in the other. Of course, it was not a one-for-one correspondence. But in general, they were pretty highly correlated. If a school was good at raising test scores, it was probably going to be above average at improving the socio-emotional measures, and vice versa.

Was there anything that surprised you about what you found?

We read a lot of literature arguing that surveys are subject to all kinds of reporting biases. And it’s been documented that students who win a lottery to go to a high-performing charter school have their test scores go up but their self-reported grit and growth mindset go down. So they are actually moving in opposite directions in those cases. The potential explanation is this “reference-group bias” story, that when you go to a tough environment, the group of students you compare yourself to changes, and it changes the way you answer the questions—even if your own actions and disposition haven’t changed.

So when we came to the project, we actually did not expect to learn much from surveys. We were surprised both that we found meaningful results and at how consistent the effects were over time. We found that if a school moved the needle on some socio-emotional measure in 2010, it also moved it in 2011 and in 2012.

Right now eight states, including your home state of Illinois, use school climate or engagement surveys for judging school quality under ESSA. There’s some talk about whether that’s a good idea. Is your study proof that surveys are valid ways to measure school effectiveness? Or is the jury still out?

I think yes and no. First, we’ve documented that the measures we identified are capturing something. Second, moving the needle on socio-emotional measures seems to make real differences in outcomes. You would think that if a school is getting students to be more engaged, to work harder, students would do better academically. And that’s exactly what we find. We’re seeing it in their course grades. We’re seeing it in their high school completion.

[Read More: School Climate Surveys Under ESSA]

Similarly, if a school is actually getting students to feel better adjusted—and not just say they’re feeling better adjusted—then you should see that they’re more likely to attend and less likely to have a disciplinary infraction. And that’s precisely what we see in attendance and disciplinary data. It’s not just self-reporting.

So, to some extent, we’re providing strong evidence that we can evaluate the impact that schools have on social-emotional development, and that what the surveys purport to measure does seem to correlate pretty well with attendance, grades and the other outcomes we see moving.

Does this suggest that social-emotional development should be used in assessing schools?

We’re not saying we have a perfect measure of social well-being or a perfect measure of hard work. What we’re documenting is that the survey instruments being used, that have been designed to measure these things, are certainly capturing these skills. And going to one school versus another does seem to move the underlying trait in a way that is reflected in objective measures like graduating from high school, going to college, and persisting in college.

So what advice would you give to schools, given your findings? What are the policy implications?

I would say that the research probably raises as many questions as it answers. We view this as a proof of concept, that it’s possible to measure school impacts on social-emotional development. It also demonstrates that school effects on some of the important skills, dispositions and traits we’re measuring with surveys are even more predictive of important long-run outcomes than school effects on test scores. That is an important takeaway: policymakers shouldn’t just focus on test scores. We should be attending to school effects on these harder-to-measure socio-emotional factors.

[Read More: Core Lessons: Measuring the Social and Emotional Dimensions of Learning]

I would recommend that policymakers enter into partnerships with researchers to come up with ways—in collaboration with teachers, with students, with people on the ground—to measure these things that will have buy-in from principals, buy-in from teachers. Then we can start to figure out exactly how we might use those things.

Does that mean attaching high stakes to these results?

I don’t think that’s necessarily where we need to go. It’s someplace we can go, but not necessarily where one needs to go. We could simply start saying: Well, what are the practices that we could observe that are associated with improvement in these socio-emotional measures?

If we can identify those practices, maybe we can encourage teachers to engage in them, either with some kind of performance-based pay for doing these things or by bringing these practices explicitly into professional development programs. I think we’re opening the door to that conversation about what we should do, but we’re definitely not at the point where we should take the measures found in this paper, run with them and start attaching stakes.