Principal Considerations in Selecting Evidence-Based Programs

Watch and listen as Philip Uninsky, Executive Director of the Youth Policy Institute, Inc., explains the critical policy considerations and questions to keep in mind when selecting an evidence-based program.

Transcript

Philip Uninsky: Hello everybody. I, yeah, I’ve got this timer. Oops, I’ve started it already. Oh, that’s good. [laughter] So this is going to be a brief talk about the critical policy considerations that we should keep in mind when we select evidence-based programs. I'm just going to call them EBPs and speak really fast, so please buckle your seatbelts. I decided not to commit an act of omission, so I'm going to say everything I wanted to say in 15 minutes. Now, what is an evidence-based program? Well, it’s a program that has undergone rigorous testing and evaluation. It's likely to be effective under the right conditions, with an appropriate target population, and with appropriate implementation; we all know that. Of course, those words mean nothing really. Knowing which program will work best in a community is still very difficult for all of us; that is the problem of selection. And, to the best of my knowledge, there is some emerging literature in the field, but there really isn't a very specific body of literature that treats selection criteria as a predictive variable, that is, the effect of the selection process on ultimate efficacy.

My strategy today is to try and remedy a little bit of that by doing something that we call in the tech business back-engineering. So I'm going to use what we know from failed and successful implementations of evidence-based programs—we've done a lot over the past years in my various guises, and with different people. We’re going to decide what criteria are useful in terms of predicting ultimate success using three sources of information: validation research, replication studies, and the experiences of this implementer, me. Now you may say personal experience is not a good source for a discussion about how to bring research to service. I don't know if you know this expression; I always think we should start with some wisdom: it’s most likely that good judgment comes from experience; it's also true that experience generally comes from bad judgment. But I hope you'll forgive me, because my team and I have been examining and refining selection criteria as predictors of ultimate success, that is, of generating the results that were predicted by the validating research. We've been doing this for a long time. Now, I don't want anyone to misunderstand what we're doing here today. I'm not saying that proper selection criteria mean that you can therefore maximize success. Obviously, the path from selection to ultimate results is strewn with a lot of confounding and undermining variables. Certainly many people, Karen and, well, all of us, are going to be talking about this over the next few days.

So what am I going to do? Talk about a list. I wanted to do twenty, ‘cause my statistician said twenty questions is what his kids love to play when driving from one pointless place to another, but I was told twenty questions was too flip. So actually there are over thirty considerations here, which I calculated to be less than twenty seconds a consideration. So really, I'm going to spend more time on some than others. There are really four overlapping categories: program quality, of course; the match between the evidence-based program and the community; organizational resources; and sustainability. We'll go much more into these; I am not going to define them now.
The general theme here, and I think this is critical, is that evidence-based program selection involves a balancing of priorities. I'd like to say that we’ve created an algorithm that indicates the relative importance of each one of the considerations we are going to talk about today, but that doesn't exist. So what we're going to do here is talk about principal considerations that really are linked to success, and the question of how they get balanced will, to some extent, be discussed by Karen and others, and I think in more detail over the next day and a half.

The process of identifying programs that work is, of course, very well supported by federal and state agencies and by national organizations, and if you haven't looked at LINKS from Child Trends, you should. But there are several important points that I'd like for us to think about when looking at these registries, because they have become universal: when you look for an evidence-based program, you are going to find it on a registry. First, these registries don't use identical selection criteria for assigning the highest evidentiary standards for efficacy; they’re different. Second, it's worthwhile to keep in mind that evidence-based programs meeting registry standards typically are designed as interventions, interventions that resolve problematic behaviors and dysfunctions. They are not likely to be etiological; that is, they are not likely to address the root causes of problems. That's a very significant issue we'll come back to. Third, and I think this is very important, a lot of the literature and the research on evidence-based programs can lack transparency. There's a tendency, a very strong tendency, which is regrettable, to focus on the sources of programs’ success without making explicit the predictors of failure, which, of course, I think we all know is a Type-1 error for those of us who are evaluators. Also, program effects during participation are often conflated with outcomes. What happens during the intervention is not the same thing as what happens after the intervention. We care more, hopefully, as implementers and researchers, about what happens after. And finally, I have to say this, because it's just something that I think everyone needs to know: there's no substitute for reading evaluation studies, especially those published in peer-reviewed, well-respected journals.

What is the evidence of effectiveness? Well, I think I'll skip over the type of research design; that's probably something we can talk about later in a discussion. But I think it's really important to know whether the results were reproduced in a subsequent implementation. That is, was the program replicated? Evidence of successful replications protects implementers against, I think, the three great predictors of failure to replicate. One, early statistical flukes, which happen a lot. Two, publication bias. Those of you who are academics, I think, know what I mean: it's the well-established tendency to prefer positive data over null results. And three, selective reporting; that is, there are lots of subtle omissions and unconscious misperceptions that creep into the work of researchers who are trying to make sense of their data. And I think we have to ask a very important question, which is: do you suspect that there are signs of a charisma effect?
I've seen this a lot. What it means is that the passion, the knowledge, the commitment of the developer is a factor in the successful implementation of the program. To what extent has the program been implemented without the developer? So, [pause] a match with the community. [Pause, as he turns to a slide of a Far Side cartoon with two cows listening to a phone ring] Come on, this is the best cartoon ever written. Two cows watching a television, the phone is ringing: “Well, there it goes again… and here we sit without opposable thumbs.”

So it's important to ask whether the program explicitly defines the specific context within which it succeeded: urbanicity, types of implementation agencies, range of community risk factors, the extent to which the program works well in the context of other service initiatives. The latter one is very important; we tend to think that the world is a tabula rasa when it's really not. And one of the things that I think is really important to know is, can we precisely define the target population with which this program is most likely to work? And are there subgroups with whom it is not likely to work? The capacity of implementers to consistently provide program services to the target population has been shown time and again to be a critical predictor of effectiveness. It's also essential to know if the program is socio-culturally relevant. I think I'll skip over that because it's been discussed fairly richly. I just want to say one thing, which is that we really need to have sensitivity to the cultural factors that influence receptiveness in a community to an intervention. That's too often overlooked... ah, see, I ended up talking about it. And of course, and I want to come back to this again and again, in a service-rich community we need to know whether this program has ever been implemented in a coordinated manner with other programs.

To the fullest extent, an evidence-based program should be aligned—here we're talking about perception and legitimacy—with the community's conception of the problem that needs to be resolved. In other words, the stakeholders who are implementing should be committed to resolving the problems addressed by the program. There are five corollary questions here; I hope you like playing questions. Are all agencies and community-based organizations with an interest in resolving the problem involved in selecting the evidence-based program? If they're not, their capacity to collaborate later on, in terms of sustainability, coordination, monitoring, et cetera, is going to be undermined. Is the definition of the problem to be addressed by the program similar to the one held by the prospective implementers? This is a very serious problem that I don't think much has been written about. Bullying, for example, can be viewed as an aggressive exercise of power by some people against others that can be addressed by a coherent school-wide preventive initiative; you probably know what I'm talking about. Alternatively, it can be seen as a composite of very distinct, discrete behaviors, physical, sexual, exclusionary, cyber, which have different origins and require completely different remedies. Failure to recognize a mismatch between the program's definition of the problem and the community’s definition can lead to a very serious lack of implementer commitment.
Is the program robust enough to address the level and complexity of the risk factors among the participants? I think that speaks for itself. Programs also should have clearly defined requirements of intensity, frequency, and duration, and these need to be considered in light not only of the capacity of implementers, but also of the capacity of those who are receiving the services to comply with those intensity, frequency, and duration requirements. And finally, I think it's really important to be very wary of what I call the explanatory dissonance problem. For example, you can say that truancy can best be addressed by focusing on negative peer associations. That's a theory, right? But if the community has already decided that the problem is household dynamics and parental norm setting, implementing a program whose explanatory framework doesn't resonate with the explanatory decisions that have already been made in the community can be very dangerous. Let me just say one other thing about programs, which is that you need to look at these evidence-based programs to see if they have systemic consequences: community, individual, organizational, political, physical. I don't know how many of you know, for example, crime prevention through environmental design, but that requires that you change the environment. And until you understand how that environment's going to be changed by the requirements of that program, and whether that community is willing to even accept cutting all the bushes down in front of their schools, it becomes very, very hard to think that you should have selected this program to begin with. So knowing that in advance, of course, is very important.

Manualization. [Pause, speaker puts up slide titled “Organizational Resources” with a cartoon of a conductor whose music score shows pictures of how to point the baton. Scattered laughter in room.] Thank you, I love this cartoon. Sufficient manualization promotes fidelity, and it promotes consistency among implementers of varying skills and experiences. One of the problems we have with insufficient manualization is that it's very hard to raise the level of achievement of, say, a mediocre implementer to what would normally be required. Implementing agencies obviously, and I think Karen's going to talk more about this, need the experience and capacity to provide ongoing instruction and consultation to implementers. But I have one other rule that I think is really important, as I see that I am down to nearly four and a half minutes, and that's the proximity rule. This is a rule that never gets written about, and I think it's really important. Programs that are developed within a state are more likely to perform well within the state's regulatory structure, and when they are school-based, they are more likely to conform to its learning standards. That's because each state has very complex, different sets of rules. Proximity also means, more often than not, more timely provision of vital technical assistance, which, among other matters, helps ensure fidelity to an evidence-based program. So I'm not saying these are rules; none of these considerations are rules, they are just considerations. But proximity should be one that you weigh very seriously; in my experience it has been very important over the years. But what I wanted to dwell a little bit more on was sustainability. Now there are ...
I have two final general points I want to make: one is sustainability, and two is the general notion of whether any program in particular can solve a very complex problem. There are many elements to sustaining a program. Two are really essential, and [adjusts what he wants to say] neither of them is usually considered during the selection phase. First, ongoing funding and capacity building have to be anticipated and can't be an afterthought once success is achieved. We'll talk more about that. And second, an evidence-based program must have the capacity to be readily monitored. This is a very complex problem that's also underappreciated. Not only for fidelity, which is closely linked to efficacy, but also for outcomes. Monitoring is essential for program acceptance and legitimacy in a community: if we know what the results are, if we know it's been implemented right, if we know whether it's helping our kids and families or it isn't. And acceptance and legitimacy play a core role in the willingness of a community to engage in sustaining a program.

C, funding: I think it's pretty obvious that we need to identify how much the program costs, including hidden costs. We also need to identify whether there's access to recurring funding sources and whether there's the institutional capacity within a community to blend and braid funds across systems in order to support a program. That's something that needs to be discussed before you pick a program, not after. And I bring this up only because program vitality is undermined when the implementers in a community see the prospects of continued operations as unlikely. And this is something that we see in all of our demonstration programs: the first two years things go well, the third year everyone's looking over their shoulder.

Another point is inherent fragility. I think this is a very important program consideration. To what extent does a program have very specific skills, technical skills, that need to be learned and that are required to implement it correctly? And then you need to consider whether the labor market in your area is structured such that, when you train the implementers in those skills, they don't become more portable. Portability has turned out to be one very serious problem. You train social workers or nurses in a certain set of skills, and then they discover that their value to other communities becomes much greater, especially in high-risk communities where a lot of people don't necessarily want to work. Disruptiveness: to what extent is there an existing program that looks like the program you want to implement? This was discussed before; there's a way of introducing a program that actually convinces people that they've been doing something wrong from the very beginning, and that needs to be addressed beforehand. I won't talk much about immutability to adaptation, but the extent to which the evidence-based program does address immutability to adaptation, in the cultural sense and in broader and other senses, excuse me, is really critical. Monitoring for fidelity and outcomes. I wish I had more time to talk about this, but there should be program-specific variables for gauging fidelity and measuring outcomes, and they should be clearly defined. And even before we get into what those variables are, can they be collected in a rights-protective manner?
If you're concerned about recidivism, can you just walk right into the juvenile justice agency's records to see whether the group of people you're concerned about has recidivated? The answer is generally no. Now, an evidence-based program should clearly identify those features (staffing, particular implementer strategies, duration, and so on) that are non-negotiable. I think that when we talk about fidelity, and I know Gary Bond is in the audience and I learned a lot from him, we need to talk about what is not negotiable in the implementation. Not, do we have … My time is up, can I have another minute? Ten seconds? Can I just say some final considerations then?

I think the careful selection of evidence-based programs, one that balances this wide range of priorities, is an important first step in ensuring that programs work as intended. When you select, however, it's important to acknowledge that significant social problems generally have complex origins that resist simple solutions, and that solutions are best when they are put in the context of the range of risk factors that are leading to the problem. It's our experience that a spectrum of services is needed to successfully address many of the pressing social problems we are faced with as a society: teen pregnancy, drug use, violence, school disengagement, abuse and neglect. And we need to focus, as I said before, both on the etiologies as well as on the behaviors themselves. It is invariably the case, as we see in North Carolina, that there needs to be a forum for multi-agency efforts; you can't solve truancy by simply working with school districts. And we need to be effective not only in approaching prevention and intervention from the outset as a series of problems, but also with an awareness of cause and effect, of the importance of addressing multiple risk and protective factors, and of the developmental processes of children and of families. We need to think more about engaging across disciplines. And, finally, and very importantly, we need to think about whether we can work with vulnerable children and families in more than just one type of setting. OK, thank you very much.