In the field of SEL, educators are devoting a lot of effort to answering a seemingly simple question: Is my investment in SEL programs and practices paying off?

This can take many forms. Scenario 1: A district adopted an evidence-based SEL program and leadership wants to measure student growth in response to the program. Scenario 2: An SEL program developer wants to study the impact of the program on student outcomes. Scenario 3: A state-level educational policy maker wants to understand student social and emotional competence statewide and regionally, and particularly the impact of a statewide initiative on student growth.

The Yardstick Problem

All these scenarios require some way of measuring student social and emotional competencies. What is the best way to go about this?

One possibility is to use the often rudimentary outcome assessments that are occasionally bundled with interventions. Makes sense, doesn’t it? After all, the developer created the assessment to measure the competencies as they are taught in the program.

One of the problems with this approach is that developer-created outcome assessments may overstate program impact. Because they are designed specifically for the program, showing impact on a developer-created assessment doesn’t tell us whether the program’s effects generalize to other important outcomes. Bob Slavin wrote about this recently on his provocative and always educational blog.

An alternative is to use assessments that were not created by the program developer and that are designed to measure consequential outcomes the SEL program or intervention should influence. If a program is truly effective, it should register improvement on such measures.

The Case for Common SEL Measures

Mark Schneider, director of the Institute of Education Sciences, the country’s premier educational research agency (and a funder that supports the development of our assessments), recently argued for a common measures approach in education research.

A main point of his post is that if we want to know whether our programs are working, it won’t do to use a bunch of different yardsticks. Or maybe the metaphor should be that it won’t do to use a yardstick in one setting, a measuring cup in another, and a shoe sizer in yet another. The field will benefit greatly if educators who want to know whether their SEL investment is yielding dividends use a common set of measures to find out.

Without apples-to-apples comparisons at our disposal, it will be easy to fool ourselves into thinking that things are working, or to come to the mistaken conclusion that they aren’t, simply because we haven’t measured what matters most in the best possible way.

The Benefit of a Common SEL Measures Approach

I can hear you saying, “Okay, Clark, I see the benefit of common measures, but what measures should we use?”

Admittedly, I’m biased, but in most cases, I believe xSEL Labs’ assessments are the best choice. They are rigorously validated direct assessments of the social and emotional competencies in CASEL’s model, which are also the targets of instruction in most evidence-based SEL programs. That makes them strong candidates for a common assessment in program evaluations.

Among top-flight researchers and SEL program developers, our assessments are already making a strong case for themselves as the common SEL assessment of choice. A large number of investigators are using SELweb as an outcome measure in their field trials of SEL programs, including researchers from the University of Virginia, Pennsylvania State University, the RAND Corporation, Harvard, WestEd, RTI, and the American Institutes for Research. Several SEL program developers have also chosen SELweb as the outcome measure in independent evaluations of their programs’ impact. That means many programs and many researchers are all using the same outcome assessment.

My advice to those who want to know if their program is working: Pick a strong assessment that measures what matters most in the best way possible, and use it across settings and programs. Our assessments are a great choice. And our team is available to help design and execute external evaluations. Regardless of the assessment you choose, the field will be better off with the adoption of common measures.