From the Field

A Researcher’s Take on Reducing Chronic Absenteeism

Professor Todd Rogers of the Harvard Kennedy School of Government, a behavioral scientist, has focused in recent years on improving school attendance. He and his research team have developed and tested several interventions that motivate students to show up for school, and others that don’t. He spoke with FutureEd Editorial Director Phyllis Jordan about his recent work.

Your latest working paper looks at truancy and how rewriting truancy notices to parents—in simpler, more supportive language—can have an impact on student attendance. Can you tell me about that? 

Throughout the country, different states have different attendance policies, but almost all of them have some sort of policy where, after a student is absent a certain number of days without a valid excuse, the school is required to send out a notice of truancy. Often states will give recommended language for these notices. In California, the recommended language is written at a college reading level. It’s a lot of words, and the language is very threatening. Working with Senator Kamala Harris when she was the California attorney general, Hedy Chang of Attendance Works, and other researchers, we rewrote the letter so that it was at a fifth-grade reading level and had a much more supportive tone. And it was written in a very large font.

Then, working with a massive district in California, we randomly assigned families of truant students to receive either the standard letter, which was the one written by legislators in Sacramento, or the improved letter. There were 130,000 or so families randomly assigned to receive different versions of the letter in the experiment. And what we looked at was what happened to absenteeism for 30 days after the letter was sent.

What did you find?

We found that absenteeism was lower among students whose families received the improved letter rather than the standard letter. To put the effect size in perspective, the improved letter was about 40 percent more effective than we estimated the standard letter to have been.

What does that translate into in terms of days absent?

Not a ton, on a per-family basis. The improved truancy notices increased attendance by about 0.1 days per student who received the improved notice rather than the standard one. But you could think of it this way: if it were implemented statewide in California, it would increase attendance by around 140,000 days.

To put this in context, though, no light-touch interventions actually have big effects in education. The idea is to continuously make everything we’re doing a little more effective. We estimate that something like 15 million truancy letters are sent per year across the country. If we make each of them 40 percent more effective, it would make a difference. Applying just this modest improvement to all truancy notifications around the country would reduce absences by more than one million days per year.
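As a rough back-of-the-envelope check on that national figure, and assuming the roughly 0.1-day-per-notice gain from the California experiment carries over to truancy notices nationwide, the arithmetic works out to:

\[
15{,}000{,}000 \ \text{notices per year} \times 0.1 \ \text{days per notice} \approx 1{,}500{,}000 \ \text{additional days of attendance per year},
\]

which is consistent with the “more than one million days per year” estimate above.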

What made the difference: simplifying the language, clarifying the message of why attendance is important? Or was it changing the tone from punitive to supportive? 

We couldn’t tease apart which of those elements was most effective. What we take away from the research is that simplification and tone matter. The language was much easier to understand. The tone was much more supportive, conveying that we’re all on the same team in wanting to help the student succeed. And we also focused on some key messaging about how every day adds up.

This builds on the work you’ve done before with attendance nudge letters, which target parents’ beliefs about how many school days their children have missed. Those letters have also shown improvements in attendance, haven’t they?

Right. When I started doing research on absenteeism, the first project I worked on was modeled after the home energy reports that get sent around the country comparing your energy use to your neighbors’ energy use. The effects of those mail-based, social-comparison interventions rose over time and persisted for years. I learned early on that parents underestimate their own student’s absences by 50 percent. So if my child has missed 20 days of school, I think my child has missed 10 days.

And the majority of parents whose students miss more school than their classmates think that their child has missed the same number of days as their classmates, or fewer. Those two key false beliefs—one about my child’s total absences and the other about how my child compares to her classmates—were nearly ubiquitous, and they suggested that the same kind of intervention used to reduce energy use could be useful for improving attendance.

I have since studied this kind of intervention in large randomized experiments in 14 school districts, including Philadelphia, Los Angeles, Chicago, and many medium-sized and smaller districts. We have published academic papers on what we’ve discovered. In short, sending these absence reports in the mail throughout the year reduces chronic absenteeism by 10 percent to 15 percent, consistently across districts.

You sent these through the mail. Why do you think that works better than texting?

In surveys, we find that the majority of parents put the reports on their fridge or their kitchen counter, and they talk to their child about them. To keep attention focused on absenteeism and these beliefs, we send multiple rounds of the reports. And so the notes become what we think of as “social artifacts” in the home: conversational objects that stick around and get talked about. There have been a handful of randomized experiments looking at similar text-based interventions for reducing absenteeism, and they generally find trivial or no effects. My interpretation is that texting is great for immediate calls to action—“Todd did not turn in his homework today. Please get him to complete his homework tonight.”

But when the behavior unfolds over time, and we are not communicating at exactly the moment the decision is being made, bridging the time between the intervention and the decision is a challenge. Hard copy that sticks around in the home helps. Another challenge with digital communications is that most districts have very low coverage for accurately contacting families digitally, especially families of more disadvantaged students. Coverage for mail is surprisingly high, especially for families of more disadvantaged students.

This seems like such a simple intervention. Are there ways that it can go wrong? 

It looks simple, but it’s actually surprisingly complicated. Delivering the right message to the right student and family at the right time is critical to the intervention’s effectiveness. There are now thousands of versions of the absence reports delivered to families, with different versions depending on attributes of the student, classmates, time of year, language spoken at home, and other features. These variations matter: we find that the same attendance message will motivate one group of students but have minimal motivational effect on another, so it’s important to deliver the right message to each individual student. Similarly, we’ve learned about timing. Sending absence reports during the wrong window of time—such as just prior to a school break—is ineffective.

[Read More: Nudging Students and Families to Better Attendance]

The intervention also becomes more effective the more consistently it is delivered over the year. The reduction in absences is greatest immediately after the reports arrive home and then gradually fades over the following weeks. This means that a month or two after the reports arrive, students in the treatment group will be absent about as much as students in the control group (those randomly assigned to not receive the reports). Think of this program as treating absenteeism as a chronic condition that requires repeated, regular intervention. It is not an absence inoculation. Given this timing, the optimal intervention program now involves delivering as many as seven reports per family—whereas originally it was just four. Interestingly, the effects seem to get larger with each subsequent round of reports.

Are there privacy concerns in sharing this information with families?

The biggest concern I have when working with school districts is the security of their student data. Beyond simply protecting the data, this means ensuring we deliver the correct (useful) data to the correct families. The deployed intervention program now involves over 160 distinct data checks to prevent accidentally sending erroneous, identified data to the wrong families, and it includes spot-checking in more than a dozen languages. FERPA and other privacy violations like this are easy to make and could be catastrophic. It’s a dimension that I worry about all the time, and I suspect the broader education-intervention world probably doesn’t worry enough about it.

Which students respond best to the reports?  Students who have higher absenteeism rates? I would assume you want to target them.

Not necessarily. When you do these kinds of randomized experiments, you end up with data that allows you to identify which families will benefit most from receiving different kinds of reports. There are statistical approaches that allow for teasing out these kinds of unexpected patterns. For example, there are certain household structures, like families with multiple students in the home, where the letters can have slightly larger effects on each student. I don’t have a good theory for why that would be, but it’s a consistent pattern. There is also different content for students with different levels of absence, at different points in the year, and across different grade levels. Rather than thinking of this as a single uniform intervention like an “absence report,” it makes more sense to think of it as an intervention program that gets more effective over time with continuous learning.

For example, there’s a sweet spot for a social comparison: you don’t want to compare students to classmates who are unattainably better than them, because that can be discouraging. For every level of student absence, there’s a sweet spot for the most motivating comparison, if any comparison at all.

Over the last few years, districts have asked my lab to help them implement this intervention program. Harvard would not allow us to implement it through my lab, since it’s no longer academic research but rather an actual absence-reduction program. So we started a company, EveryDay Labs (formerly InClassToday), to work with districts. This year it will deliver several million of these reports across dozens of districts. The program is approaching one million days of absence prevented among the most at-risk students, recovering over 200 million minutes of lost learning time.

One of the other research efforts your team undertook, with somewhat surprising results, was a project that involved sending perfect attendance certificates to students.

We did a project in 15 school districts with 10,000 students, and the research question was: Does giving an attendance award in the winter to students who had a perfect month of attendance in the fall further increase their attendance? Giving awards is a common practice; in a survey, we found that more than 90 percent of schools give some form of award for attendance. So we identified high school students who had a perfect month of attendance at some point in the fall.

We found 10,000 students who qualified and randomly assigned them to one of several groups. We had an untreated control group, which received nothing: no award. To the other group we sent a fancy, embossed award that said, “Congratulations on a perfect month of attendance.” The awards were mailed privately to students’ homes. We predicted that the award would increase attendance and reduce absenteeism. But we found that those who were sent the award subsequently attended less school than the students who didn’t get the award.

Why do you think that is?

In follow-up studies we showed the award to people and asked, “What would this award say to you if you were to receive it?” We came to understand that students interpreted the award as signaling that they attend school more than their peers. So by sending the award, we were saying, “Good job, you do something that you may not want to be doing in the first place, and you do it more than your peers.”

And in addition to signaling that you attend school more than your peers, giving you an award almost implies that you’re attending school more than we expect you to. We’ve come to realize that by sending the award, we were also sending these other unintended messages.

Does this negate the idea of using incentives and contests to increase attendance, which seem, at least anecdotally, to make a difference?

It definitely does not negate all of that. It’s worth highlighting a handful of features of these awards that make them different from some common practices. First, the award was unexpected. Second, it was intrinsically valueless; it was not a lottery or raffle for an iPad or a pizza party. Third, it was delivered privately: it was not posted on the wall or shown to classmates. One could imagine that awards would work if they were public, celebrated, sought-after, and perhaps even tied to a desirable reward. As we implemented the award program, though, what we found was that we were just sending a purely symbolic message that communicated: good job, you’ve overshot the mark.

It’s also worth noting one really interesting finding in the study: the students who showed the biggest blowback in absenteeism, the biggest backlash, were the ones with the lowest GPAs. It seems likely that these are the students who were least motivated to be in school in the first place, and giving them an award resulted in the largest decreases in attendance.

Let me add, the effects for these awards are real and they matter, but the effects—which are negative—are small. It’s not as if giving students this award completely undermined their academic motivation and later life success. Narrowly, the research highlights that the version of awards we studied is counterproductive. More broadly, it highlights that when we engage in common practices that are not well understood, we may inadvertently send counterproductive messages.

What should be our takeaway from this body of research?

It’s important to remember that very few education interventions have robust, large effects, and certainly no light-touch intervention has a massive effect. We need to temper our expectations about the impact of light-touch interventions in education; that would help us avoid common hype cycles. They can be very cost-effective elements of broader programs, but they are not silver bullets. Still, just because a single intervention doesn’t change the world doesn’t mean it’s useless. This is especially true when an intervention is proven to move an outcome we care about, is easy to implement at scale, and can be implemented cost-effectively and with fidelity.

[Read More: Attendance Playbook: Smart Solutions for Reducing Chronic Absenteeism]