Replicating Evidence-Based Programs: Fidelity and Adaptation
Watch and listen as Felipe Castro, a Professor in the Department of Psychology at Arizona State University, explains the difference between fidelity and adaptation and discusses the factors that need to be considered when identifying appropriate adaptations of evidence-based programs.
Felipe Castro: Well, good afternoon. It's a pleasure to be here, and my time with you is very brief, at least in terms of being up here, so I'd like to launch right into this and offer some food for thought. Two points. First, my slides tend to be very busy, so I won't cover every point, but it's food for thought as it might relate to the discussion. The other is that I wanted to acknowledge in my presentation the contributions of Dr. Manuel Barrera Jr. to much of our work; I've added him because much of what I've done has really benefited from the synergy in our working relationship, so I wanted to mention his work as well. There are two central questions that I'd like to cover, and these were presented to me, and I'm very happy to address them. One, are there clear boundaries between fidelity and adaptation? And two, what factors need to be considered when identifying appropriate adaptations of evidence-based programs? So let's start with the fidelity-adaptation issues and challenges, and as a prelude I'd like to say that I am not against fidelity; in fact, I think it's absolutely essential, so I want you to know that. The issue is how we do it, how we put these two pieces together. First of all, on the clear-boundaries question, I'd say that mostly yes, there are boundaries, but I think it's better to reframe the issue as you see there: fidelity and adaptation are two sides of the same coin. The synergy is there, and I was struck by the fact that if you do fidelity well, as Sharon had mentioned, you may actually be provoking or creating the need for adaptation, because something isn't working right. But isn't the challenge for us to make sure that our interventions work to the highest degree possible? So part of this two-sides-of-the-same-coin idea is: let's do what's needed in order to get to the best outcome. These are not pitted against each other; these are complementary approaches. The issue is, when do we do which?
So, two: both are important, then, to maximize efficacy, and yet both are different in form and approach, so generally you can't do them at the same time. You need to switch from one to the other, but as I said, both sequentially, in whatever order, for the purpose of maximizing efficacy or effectiveness out in the community. Are there clear boundaries? Again, the answer I think is to some degree, but there are many issues. Fidelity was mentioned, so I won't go into that. Let me focus on adaptation here: adaptation is necessary when the original prevention protocol exhibits significant mismatches with consumer needs or worldviews, and as such, continued implementation with fidelity is likely to significantly erode efficacy. So fidelity implemented in the wrong context, when things are not working, is going to work against the final bang for the buck. You're not going to get what you're looking for, so we need to back up a step. Again, fidelity is critical; we do not want to shortchange it, but there may be a time when we can't implement a program exactly as prescribed, when maybe the program doesn't fit or there's something wrong in the field, and we need to listen to the people in the trenches. They're the ones implementing it, and they're saying, "It's not working, time out, let's go back," and we need to listen. There are different issues there, but with that said, we need to consider what's going on. The program is not operating as intended; we need to back off a bit. Let me just mention briefly that the evidence-based movement has been specifically critiqued by people of color and special populations, because they felt that the evidence that has accrued has not been relevant. It's not that evidence is bad, but if the evidence is not admissible in court, so to speak, you've got a problem. So let's fix it. Again, evidence is a good thing, and as a scientist myself, evidence is critical to getting to the heart of the matter, but is the evidence a little bit out of tune with what's needed?
Also from the field, bullet two: providers often felt that interventions were overly prescribed and culturally insensitive, and so they wanted to make adaptations. In fact, from prior work by Ringwalt and others, it's evident that adaptation seems to be the rule rather than the exception. So something is happening in the handoff between the scientist and the provider. What's going on? Can we work with it rather than against it? By contrast, as you know, program developers see changes as compromising effectiveness, and that's actually true, because you put a lot of theory and thought and validation into your program. You don't want people to change it unless there's a really, really good reason, and often there isn't, unless the program doesn't fit, and then you have to revisit. So there again, as you can see, a spirited dialogue has occurred, but I think the dialogue should not be adversarial, although that can be good too. It really should be conciliatory, in terms of how do we fix this, because aren't we both after the same thing, which is the biggest bang for the buck? OK, fidelity of implementation: that was a wonderful presentation, so I don't want to reiterate those points in the interest of time. Let me go back to adaptation, which is more the focus of my presentation, and I use EBI, evidence-based interventions, which is really equivalent to evidence-based programs. Dr. Barrera and I used "interventions" to cover the two theatres of action in the area that we look at, which is treatment-related evidence and prevention-related evidence, so we felt that the umbrella term is "intervention." So in principle, of course: high fidelity, with adaptation aimed at addressing sources of non-fit arising from mismatch between intervention content and participant needs.
Now a key point here, which I've suggested and just want to nail down a bit more: an intervention, there in blue, not originally designed with the needs of a particular group in mind is often unresponsive to their needs. That's been a complaint from different populations of color that basically say, "This is neat, except we can't relate to it, or it doesn't quite fit our needs," but you don't have to be a person of color or a special population to say that. Other mainstream groups have said much the same thing to different degrees. So with that as a point of concern, we need to do some problem solving, and as indicated, there are many interventions that have been insensitive. I would say the core of that is that the theory that was used was not central to their concerns, and I say to you that I think theory is also absolutely essential, but maybe the approach was not centered. It's not so much that a theory like social learning or social cognitive theory is bad, but some of the pieces were not essential to the concerns of a particular population. So I'm an advocate of taking base theory and expanding it so that we can do more to make it more relevant. Let me just mention these briefly, since time is short, of course; oh, by the way, at the bottom I have the different references, so you can follow along or look at articles that might relate to what I'm talking about here. Universal: I think in principle, universal is really what would be ideal, because we would have an intervention that can be implemented broadly, but it's the ideal, not the reality, and so in some ways the term is unfortunate, because one size does not fit all.
That doesn't mean we don't want to make our intervention as penetrating and as expansive as we can, but we have to make sure that it works for the different sectors of the population. I only mention selective and indicated here, the other types of programs the Institute of Medicine describes, and these then require different levels of adaptation depending upon what the program tries to do. Let me just make passing reference to efficacy and effectiveness. This is the bottom line. Efficacy, of course, is under more controlled conditions, effectiveness under community conditions, and typically, of course, it's harder to have effectiveness equal efficacy, because it's harder to do things in the outside world. So efficacy then, in principle, is what we want; that's the indicator that a program works, and it means that it produces therapeutic change on targeted health outcomes and in a prescribed manner. So we want to be very explicit. In other words, we can make a program that tries to reduce drug abuse and drug use in children and adolescents, but then maybe it makes them feel happy to go to school, which would be a great outcome, but it's not what the program was designed to do, so that is not a good program in terms of the true definition of efficacy. What I'm talking about can be encapsulated in the issue of relevance. In principle, a targeted change must be relevant, and it must fit the need. For change to be therapeutic, it must be relevant in addressing an important identified need or needs. With that, the diversity need I mentioned: one size does not fit all. Ideally yes, in practice no. So we need to make some adjustments accordingly, and to the point I made before about universal programs, I wish that it were true, but it's not, so we need to revisit that issue.
Moving along quickly, approaches then to adaptation: what factors need to be considered? From an article that Dr. Barrera and I worked on years ago, just briefly, the most obvious need for adaptation is linguistic mismatches, and it was mentioned earlier. If I design a program in English and I implement it with Spanish speakers, by definition you must adapt, because they won't understand. That's a very simplistic case, but it's an obvious one that can be used as the basis for further thinking. Number two, comprehension mismatches: folks can understand to some degree what is being asked of them, but not enough so they can actually do it; those are comprehension mismatches. And the most difficult one is cultural mismatches, because it involves a much richer but more complex set of issues around cultural values, beliefs, attitudes, expectations, and other deep-structure issues that cannot be ignored if you're going to make a program truly relevant to a target population. What factors need to be considered in identifying appropriate adaptations? As mentioned before, there are reports from field staff, the ones in the trenches, and if they come to you, and they have come to me, saying, "This is not working as we planned," then there are several issues that need to be addressed. It may be that they're not doing it right, but it also may be that the program is not functioning as planned. One key point here: beware of misadaptation, and that's happened to me too, where staff just did what they wanted to, they didn't follow the rules, and it screwed up our program, okay, not good. So of course I love fidelity, because you must implement as designed, unless of course there are compelling reasons not to, because what you planned is not what's out there. So: staff arbitrarily changes content with no justification, that's misadaptation. Or staff eliminates or fails to present certain content with no clear reason, that's misadaptation.
We do not want misadaptation, okay. I won't go into these; they are from the Castro, Barrera, and Martinez article, 2004, but they begin to enumerate the different areas of mismatch. Under the big categories you see there are several under group characteristics; others, down here, under program delivery staff, where things are not matching as needed; and the last section here, administrative and community factors. So that was our first effort to try to say, "These are the areas where we must understand why things are not matched, and we need to do something about it." Let me mention phases, or I should have said stages, in formal intervention adaptation, from the work that Dr. Barrera and I have done. From looking at other programs and what we've worked on, we've now mapped out a formal process, so the idea that there is no clear way to do adaptations is no longer a valid argument. Here you just see the four steps. One: information gathering, review of literature, focus groups, etc. Two: preliminary adaptation design, where you recast something in a way that's going to fit, but only tentatively. Three: preliminary adaptation test, where you pilot test and evaluate the changes that need to be made. And four: adaptation refinement, where you actually make the adjustment. This is a busy slide, but from the Barrera and Castro article, 2006, basically the idea is that there are two major theatres of action that we need to be concerned with. Engagement is a big area, and basically the argument that I've made is that you can have the best and most wonderful program, but as mentioned before, implementation, which is part of engagement, depends on attendance: if folks don't come, then you can't have high efficacy or effectiveness.
They didn't show up, and in many minority communities or communities of color we face that; it's one of the biggest single factors: how do you get folks to show up and then acquire the great content that we are trying to convey to them? Then of course there are outcomes, and I only call attention to the branch where, when you see the evidence-based treatment, there is the conventional branch, which is a common mediator of effect and a common outcome, but there can also be a unique adaptive element. This is the branch below, with a unique mediator, which leads to a unique outcome. So this is the adaptation, which gets at cultural issues. Finally, here, just sources of variation. Some evidence, and I have to be brief here: types of adaptation that have been discussed include cultural values and concepts, matching by native language, and treatment in culturally responsive clinics. These are different approaches in very broad strokes. The other evidence basically says that there is some evidence that culturally adapted interventions can be effective. The question is whether they're more effective than the original, and the evidence indicates that they can be equally effective, but the evidence that they're better is typically weak, although there are challenges to that as well. Here are other types of adaptation. Simply, the point to make is that special populations tend to require more adaptation. The Huey and Polo study is less positive about whether adaptation is really that useful and effective, and there are other issues that I won't go into in terms of what other points are made regarding adaptation that are important to look at. There are issues of motivation, as you see, change facilitation, and responsiveness to diversity, all of which are important elements.
Future ideas, just briefly. Hybrid interventions: the ideal would really be that we incorporate adaptation and fidelity into a single intervention game plan, so that we first ground the program in the local community based upon its unique needs, and then, once we know that the fit is really, really good, we adhere with fidelity to the revised, grounded program. Culturally relevant theory, I believe, is absolutely essential, and its absence is one of the big weaknesses. Many of the theories, especially as related to programs applied to special populations, are simply not explicit about those populations' unique needs, and we need to work in that area, because that should guide the design of future programs that would fit a lot better from the very beginning, and, as you see, consumer and developer partnership. A couple of very brief thoughts, since my time is almost up. Randomized controlled trials: they are our gold standard, but my pitch is that the simple ones that just tell us whether it worked or not do not give us enough information. What we need are trials that include mechanism-of-effect and mediation analyses, so that we can truly understand the mechanism. And finally, a pitch for rigorous qualitative methods. As a measurement person myself, I love measurement, but I also believe that qualitative work is a complement to it, as it captures nuances that are missed by existing scales and measures, and by combining them both in an integrated mixed-methods methodology, we ideally get the best of both. Thank you.