Using Research Synthesis to Develop Informed Interventions

Watch and listen as Mark Lipsey, Director of the Peabody Research Institute at Vanderbilt University, explains how to inform the selection of interventions by identifying factors that are predictive of the outcomes of interest and by drawing on a broader research base, including generic program approaches rather than only specific name-brand programs.


Transcript

Mark Lipsey: Well gosh, we're going to miss Dennis. [Audience member] No, he's coming. [Mark Lipsey] Is he coming? OK, good. 'Cause Dennis and I planned these complementary presentations. He's going to talk about kernels and components and using them, and I'm going to talk about research and so on, and without Dennis this is going to be kind of like the sound of one hand clapping. [audience laughter] So let me do one hand, and I'm glad to hear Dennis is on his way.

Here's sort of the premise and approach I want to take here. Evidence-based programs are typically defined as a kind of specific brand-name program with credible supporting research. MST was just mentioned as kind of a classic example here, but there's much more research evidence available about the effectiveness of intervention programs in different areas than just what falls under this brand-name evidence-based program framework, and there are two forms of research evidence that may be especially informative in the context of evidence-informed approaches and programs, as opposed to evidence-based. And that's identification of factors that are predictive of the outcomes of interest, and targeting of those factors, and then a broader look at generic program approaches as opposed to kind of specific name-brand programs. And there are systematic ways to draw on the research base in both of these areas. I do meta-analysis, so I'm kind of inclined to think that's the best way to synthesize evidence, but it's not the only way.

Here's sort of a graphic version of that. It's kind of a risk-oriented intervention strategy that is implicit in a lot of our programs. We have risk factors; they may be proximal outcomes, or the immediate targets of intervention, that are expected to lead to the later outcomes or are predictive of those later outcomes. So having some idea of what those proximal outcomes or risk factors are is helpful in shaping evidence-informed programs, because they give us targets. And we also have broader frameworks on intervention and broader research on intervention and its relationship to those outcomes, so knowing something about the broader research perspective beyond the name-brand evidence-based programs may also be helpful in this regard. So let me talk about each of those in turn.

So, identifying the intervention targets likely to be effective. Here I think longitudinal research, and there's a great deal of it in many of these areas, is particularly useful, because you can identify factors that may be relatively malleable by straightforward interventions and which are predictive of the more distal, policy-relevant outcomes. Not all of these factors represent the causal vectors, of course. We all know correlation isn't causality. What we have to remember is that correlation is a necessary, even though not sufficient, condition for causality. So if there's no correlation, there's not going to be any causality, and we might as well not waste our time focusing on those factors.

Let me give you an example from our meta-analysis work on predictors of adolescent anti-social behavior. I'm going to have to stick with anti-social behavior and delinquency because that's where I have the data, but I think the implications extend to the other areas. So I wave my hands at this slide: we have a big meta-analysis, lots of studies, lots of data about risk factors for anti-social behavior and delinquency. Nearly twenty-five hundred risk correlations come out of this, from time one to time two.
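To give a sense of the arithmetic behind combining those time-one-to-time-two correlations, here is a minimal sketch in Python of pooling one risk factor across studies. The correlations and sample sizes are hypothetical, and the Fisher z, inverse-variance-weighted approach shown is the standard textbook way of averaging correlations, not necessarily the exact procedure used in this database.

```python
import math

# Hypothetical (correlation r, sample size n) pairs for one risk factor,
# e.g. early substance use at time one predicting delinquency at time two.
studies = [(0.32, 410), (0.25, 150), (0.41, 875), (0.18, 220)]

def fisher_z(r):
    """Fisher's z transformation stabilizes the variance of correlations."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    """Back-transform a pooled z to the correlation metric."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

# Weight each study by the inverse variance of z, which is 1 / (n - 3).
weights = [n - 3 for _, n in studies]
pooled_z = sum(w * fisher_z(r) for (r, _), w in zip(studies, weights)) / sum(weights)

print(f"Pooled correlation: {inv_fisher_z(pooled_z):.3f}")
```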
This slide you're not permitted to read, and that's good because you probably can't read it anyway. The point here is that we can't just compare correlations of different risk factors with different outcomes across studies without taking into account that the samples are different, the methodologies are different, the studies are different. So you have to kind of put this in a multivariate modeling framework, level the playing field, and control for these variations, so that we can isolate in a comparative fashion what the contribution of these different risk factors is. If you take the categories in which we can extract data from longitudinal studies predicting anti-social and delinquent behavior in adolescents, you get average covariate-adjusted correlations that look something like this in the different categories, and you'll see that there's quite a range here in the correlational relationships between a time-one factor and the time-two outcomes that we're interested in and would like to change with any kind of intervention.

Not surprisingly, the biggest category here has to do with prior anti-social behavior. It's true in many areas that prior behavior is the best predictor of later behavior. There are a few interesting twists here, though. Early problem behavior, and particularly substance use behavior, early substance use, is a relatively strong predictor of later general delinquency and anti-social behavior. So there's an interesting target for early intervention right there. The next largest category is a particularly interesting one. This has to do with self-regulation, hyperactivity, and attention deficit. By contrast, things like self-esteem, internalizing symptoms, and a certain range of parenting practices, parental warmth, have very low predictive correlations here and are not attractive targets for intervention if your intention is to reduce delinquent and anti-social behavior.

OK, so the implications here from this simple example are, as I've just said, that the early forms of anti-social behavior are the strongest predictors. There are moderate relationships in other categories. And there are some things like self-esteem: as some of you will know, much of the early literature in delinquency really put a big emphasis on self-esteem, and it's pretty clear from these correlations that you're not going to get very far pushing self-esteem buttons. The overall point here, though, is that we can use this evidence to make some judgments about what are likely to be effective programs in terms of the immediate targets of change that they are focusing on: the mediating variables that are part of the theory of action connecting them to those distal outcomes. Some theories of action based on some of these mediating variables are not very promising given this longitudinal research; others are much more promising, even though we don't know for sure that these correlations are going to represent causal pathways.
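To make the "level the playing field" idea concrete, here is a rough sketch of a covariate-adjusted meta-regression: risk correlations (Fisher-z transformed and inverse-variance weighted) regressed on risk-factor category while controlling for study-level characteristics. Everything in it, the column names, covariates, and simulated values, is a hypothetical stand-in, not the actual coding scheme or model used in this meta-analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical study-level data: each row is one time-one -> time-two correlation.
rng = np.random.default_rng(0)
n_effects = 200
df = pd.DataFrame({
    "r": rng.uniform(0.0, 0.5, n_effects),       # observed correlation
    "n": rng.integers(80, 1200, n_effects),      # study sample size
    "risk_factor": rng.choice(
        ["prior_antisocial", "substance_use", "self_regulation", "self_esteem"],
        n_effects),
    "mean_age": rng.uniform(10, 16, n_effects),  # study-level covariates
    "followup_years": rng.uniform(1, 6, n_effects),
    "method_quality": rng.integers(1, 5, n_effects),
})

# Fisher z transform and inverse-variance weights, as in standard meta-analysis.
df["z"] = np.arctanh(df["r"])
weights = df["n"] - 3

# Dummy-code the risk-factor categories and add the study-level controls,
# so each category's coefficient is a covariate-adjusted estimate.
X = pd.get_dummies(df["risk_factor"], drop_first=True).astype(float)
X = pd.concat([X, df[["mean_age", "followup_years", "method_quality"]]], axis=1)
X = sm.add_constant(X)

model = sm.WLS(df["z"], X, weights=weights).fit()
print(model.summary())
```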

OK, the second approach here is to look a little more generically at the nature of our evidence on interventions and go beyond just the name-brand programs. And again I've got examples from meta-analysis on anti-social behavior. What I mean by generic interventions here is the broader intervention-type categories within which the familiar evidence-based interventions fall. So for example, Functional Family Therapy is an evidence-based intervention that appears on almost all of the registries and lists; the generic category here is family therapy. Reasoning and Rehabilitation is an evidence-based cognitive behavioral program; the broader category is cognitive behavioral therapy programs for, say, juvenile offenders.

Here's an example of the research base we pulled together just in the family therapy or family intervention area. This is a little histogram for twenty-nine studies that are available in our meta-analysis of family therapy interventions with recidivism outcomes for juvenile delinquents. The name-brand, so-called evidence-based programs here, Functional Family Therapy and MST, are only eight of these twenty-nine studies. So the interesting question is what we can learn from the twenty-nine studies as opposed to just the eight that are on a name-brand program. And notice that Functional Family Therapy and MST do quite well on this distribution, so they deserve their reputation, but notice that they don't fall at the top end of that distribution, outperforming every other family therapy program that's been examined in credible intervention studies in the literature.

Here's a similar chart for fifty-eight cognitive behavioral therapy programs with offenders. The colored ones here, the blue, green, and red, are recognizable name-brand programs: Moral Reconation Therapy, Aggression Replacement Training, and Reasoning and Rehabilitation. They're supposed to be gold and yellow, they're kind of sick green looking right now, but those others are all sort of homegrown, no-name, one-shot kinds of programs on which credible research has been done, and you can see again that these all pretty much fall on the same distribution. The general point here being that there's more evidence at the level of the generic interventions than there is about any specific name brand. On the other hand, in both cases we have a broad distribution, ranging from almost knock-your-socks-off kinds of effects at the top end to zero, and even skewing over into negative effects. So we clearly need to know more about what pushes you up to the top end, and that information can guide us toward more effective interventions even if we're not using a name-brand program; at least that's the concept here.

We've done extensive analysis on the database here, and much coding. There are more than five hundred studies in this database. Let me give you a quick overview, then, of where we go trying to sort out what we can learn in these generic categories. We do, again, multiple-regression-type analysis in which we're looking at the effects on outcomes as a function of the study methods. I kind of grayed that out 'cause that's a statistical control variable; it contributes a great deal, incidentally, to the outcomes here. Then characteristics of the samples, juvenile justice settings and level of control, type of program of course, and implementation and dosage characteristics. If we first sort this evidence according to the type of program, we find, not surprisingly, quite different average effects for different approaches.
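As a small illustration of the kind of comparison being described, here is a sketch that asks whether the name-brand studies in a generic category sit apart from, or inside, the distribution of effects for the category as a whole. The effect sizes, sample sizes, and brand flags below are made up for the illustration; only the logic of the comparison comes from the talk.

```python
import numpy as np
import pandas as pd

# Hypothetical effect sizes (positive = less recidivism) for studies in one
# generic category, e.g. family interventions; "brand" marks name-brand programs.
studies = pd.DataFrame({
    "effect": [0.42, 0.31, 0.05, -0.10, 0.27, 0.18, 0.55, 0.12,
               0.38, 0.22, 0.00, 0.47, 0.09, 0.33, -0.05, 0.29],
    "n":      [120, 300, 85, 60, 210, 150, 95, 400,
               180, 75, 130, 160, 220, 90, 110, 250],
    "brand":  [True, True, False, False, True, False, False, False,
               True, False, False, False, False, True, False, False],
})

# Sample-size-weighted mean effect for name-brand vs. other studies in the
# same generic category: the question is whether the brands stand apart.
for label, group in studies.groupby("brand"):
    mean = np.average(group["effect"], weights=group["n"])
    which = "name brand" if label else "generic"
    print(f"{which:>10}: k={len(group):2d}  weighted mean effect = {mean:+.2f}")

# Range of the full distribution, from negative or near-zero to large effects.
print(f"     range: {studies['effect'].min():+.2f} to {studies['effect'].max():+.2f}")
```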
We've struggled over the years with how to categorize these programs, because they all have multiple elements. Generally now we're differentiating broad philosophies, what we call the control philosophies and the therapeutic philosophies, and then drilling down within those into different program types. It looks something like this. The broad therapeutic interventions all show positive effects on average: restorative justice programs; skill building, whether academic, interpersonal skills, or employment skills; the ubiquitous counseling category; and then various kinds of curricula, multiple combinations, case management, and so on. At the other end, disciplinary-oriented, deterrence-oriented, and to a large extent surveillance-oriented programs are much less effective. So already we get some guidance here. If you're going to do something for anti-social behavior in juvenile offenders, try a therapeutic intervention and not a control-type intervention.

Within those categories there's also variation. Just to take one example, here are the skill-building programs. And you can see that the behavioral and cognitive behavioral programs on average have quite positive effects. Social skills; challenge programs, ropes courses, Outward Bound, things on that order; academic; job-related: those are smaller, but all have average positive effects. Now of course the catch is that there's great variability around those averages, just like I showed you in the previous histograms. So again, just like with a name-brand program, where we have to think about implementing with fidelity, for the generic programs we need some other kind of information about how you implement them well, or how you get up into the top end of those distributions.

To make a long story short, there are some very broad characteristics that account for a good bit of that variability. One has to do with the risk level of the juvenile offenders: high-risk cases show bigger effects than low-risk cases. Implication: target your intervention on the high-risk offenders. Service dosage matters, surprise: as the amount of service drops off, the effects drop off. Quality of implementation matters, and the way we code this is largely an organizational issue, much like we were talking about yesterday. It has to do with having explicit protocols about what's to be delivered, regular monitoring of the quality of that delivery, and corrective action when the implementation is not adequate. And then there are some program-specific characteristics that show up in certain areas. For instance, in cognitive behavioral programs, having large interpersonal problem-solving components and anger management components is associated with more positive effects. So you see we're getting some generic advice about generic program approaches here that we can build on to guide interventions even if we're not working with a specific name-brand evidence-based program that's on one of these registry lists. So I just said most of that, so I'm done.
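The three organizational markers of implementation quality named above (explicit protocols, regular monitoring, corrective action) lend themselves to a simple coding sketch. What follows is a hypothetical illustration of how such a rating might be structured, not the actual coding instrument used in the meta-analysis.

```python
from dataclasses import dataclass

@dataclass
class ImplementationRecord:
    """Organizational markers of implementation quality mentioned in the talk.

    This is a guessed-at structure for illustration, not the actual coding
    instrument used in the meta-analysis.
    """
    explicit_protocol: bool     # written specification of what is to be delivered
    regular_monitoring: bool    # routine checks on the quality of delivery
    corrective_action: bool     # delivery problems are actually addressed

def implementation_quality(rec: ImplementationRecord) -> str:
    """Collapse the three markers into a rough high / medium / low rating."""
    score = sum([rec.explicit_protocol, rec.regular_monitoring, rec.corrective_action])
    return {3: "high", 2: "medium"}.get(score, "low")

# Example: a program with a protocol and monitoring but no corrective follow-up.
print(implementation_quality(
    ImplementationRecord(explicit_protocol=True,
                         regular_monitoring=True,
                         corrective_action=False)))  # -> "medium"
```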