Starting with the End in Mind

Watch and listen as Karen Blase, Senior Scientist with the UNC National Implementation Research Network, explains the importance of making sure evidence-based practices are “service ready,” and that communities, states, and partners are prepared for the work involved, before program implementation begins. Dr. Blase also discusses processes and tools for assessing fit and feasibility of initiatives. 


Transcript

Karen Blase: It's always bad to get reinforced for a lack of very specific preparation, but knowing that I was going to come last, I thought, well, certainly half of what I was going to say would have been said by others, and indeed it has. So I am going to focus our lens on a slightly different place, and the title of the presentation is “Starting with the end in mind: How service ready are evidence-based programs?” How street ready are they, and how ready are our communities, states, and partners for the work that's going to come? It really is two sides of the same coin. A theme that you are hearing emerge, or that I am hearing emerge, is this need to deal with the entire system. This is not a linear set of events that marches forward and takes care of a population; all of this work is occurring in a system. So, is the evidence-based practice ready to live in that system, and are our communities, states, partners, and federal government ready to play in that system environment? One consideration in starting with the end in mind is: are people ready, are communities ready, are states ready, are partners ready for the long haul? Data indicate that it takes two to four years to get to full implementation. That has implications for evaluators and for what they are evaluating, which is sometimes incomplete implementation.

This isn't a recommendation, and some people say, “We don't like you recommending this.” I'm not recommending it; this is what the data indicate needs to occur. Much of the selection work goes on during the often-neglected exploration and installation stages. Installation just means that nobody is being served any differently yet; you are still figuring out how to get this thing up and running. This takes much longer than we often think it does, and if we don't pay attention to it, we're in serious trouble. It is very exciting to see the changes at the federal level in terms of allowing for planning years and requiring implementation plans; I'm privileged to be a part of one of those efforts. So it's exciting to see the opportunity to engage in this thoughtful selection process, and it is going to have big implications for implementation. Philip and Stephanie, I am going to go beyond this and not spend any time on this slide, because Philip really covered the waterfront on how we decide and what we should be looking at when we're looking at an evidence-based practice.

Since there are some data people here, I am going to take you into a non-evidence-based mathematical formula; it's a conceptual view. We want interventions that work. Why go to all this trouble if we aren't going to get some good outcomes? That also requires effective implementation. We need a good what and a good how, so that in fact we get impact for kids and families. We need both halves of this equation in order to get to the outcome, and I'm going to focus a little bit on the selection factors that go into figuring out whether we can actually get this done once we've made the choice. And I want us to remember that it's a multiplication problem, so any number times zero is zero. Meaning: we can have a razzle-dazzle implementation process, but if the what is not effective, we are not going to get impact. And vice versa: if we've got a razzle-dazzle what but we have not implemented well, we're not going to get a great impact. How important is implementation? Our colleague Mark Lipsey tells us, from some of the meta-analytic work, that even less efficacious programs have outperformed more efficacious programs when they were implemented well. So implementation really does matter.

What do we need to implement? What goes into that how box, and what do communities need to be assessing? They need to be assessing the infrastructure. What's the infrastructure required by that evidence-based program? We think that infrastructure has three buckets. I speak in buckets rather than lists because I can't remember lists; with buckets I've got a chance. First, competency drivers: how are staff and supervisors going to be able to do this well, and how are we going to improve the confidence and competence of practitioners at multiple levels? Second, how are we going to create a hospitable environment for new ways of work? The current system is designed to produce the current results; whatever results we're getting, that's what our system is designed to produce. So how are we going to create more hospitable environments for new ways of work, in organizations, in states, and in communities? And third, we do need to pay attention to leadership, which I'm not going to talk about today for lack of time.
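Dr. Blase's conceptual formula can be written out as a display. This is a sketch only: the labels are added here for illustration, and the only things taken from the talk are the multiplicative structure and the “anything times zero is zero” point.

```latex
\documentclass{article}
\begin{document}
% Conceptual only: the labels are illustrative; the multiplicative
% structure and the times-zero observation come from the talk.
\[
  \underbrace{\mbox{Effective intervention}}_{\mbox{the ``what''}}
  \times
  \underbrace{\mbox{Effective implementation}}_{\mbox{the ``how''}}
  = \mbox{Impact}
\]
% If either factor is zero, the product -- the impact -- is zero:
\[
  1 \times 0 = 0, \qquad 0 \times 1 = 0.
\]
\end{document}
```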
And I am not going to go through these buckets in detail, but we think these are the leverage points that define the infrastructure, the things that need to be done differently for the evidence-based program to come to life, and that need to be done through an implementation lens. Selection is done differently through an implementation lens; training is done differently through an implementation lens. Again, I'm not going to go through these, except to say that as you're looking at an evidence-based program and making the decision to choose it, you've matched it with your population of concern and you think it's got a good fit, but is it service ready? That means: is there expertise available to you, and what is the required infrastructure? You have to assess both at the beginning to increase the likelihood that you're going to be successful.

As far as expertise being available, people have already talked about the fact that this isn't going to jump off the journal pages, and the Nike model of “just do it” is not going to make it happen. We're not going to take a manual, read it, and go do it. What is the role of a purveyor? There are TA centers and people attached at the hip to researchers who are helping communities implement. Not all are equally experienced, and not all of them view their roles in the same way. Who are these people? And many of them are my very good friends. What do we know about them, what are they bringing to the table, and what are the infrastructure requirements? The infrastructure requirements differ from evidence-based practice to evidence-based practice. My hypothesis, for those of you researchers who would like to study this, is that much tighter attention to the infrastructure is needed when we're in unbounded systems with high-risk populations than when the system is more bounded and the intervention is more universal, in the public health sense. Not that infrastructure is unimportant there, but when the population is high risk, the system is unbounded, and the skills are very new, you need a lot of attention to this infrastructure.

Then you need to know the purveyor's plan for working with you. Is it a progenitor model, meaning (I had to look that up) they are going to leave behind a quality infrastructure for you to run, with some modest monitoring from them: training trainers, training coaches, training evaluators? Or are you tied at the hip to them for ongoing support? There are advantages and disadvantages to both models, but what are you buying when you're purchasing services? A lot of this is to help communities and states be wise consumers of evidence-based programs. What are the infrastructure supports that are needed? So we ask people to interview purveyors. We give them an interview guide with a series of questions to ask about each of those pieces of the infrastructure: what do they know, do they know what their core components are, how did they get into this business? A whole range of questions: which drivers will they initiate, which will they maintain over time, and which ones will be left for you to develop? Here is just one example. The evidence-based program will remain unnamed, but it's one where the infrastructure analysis found that they had no advice to give about how to choose the frontline practitioners.
They hadn't thought about the systems intervention, meaning how do we get the three buckets of funding we need: funding for start-up, funding for the face-to-face service, and funding for the infrastructure. They didn't have advice about regulatory issues; it was just sort of “God bless and good luck, you know your state, go for it.” They needed the Stephanies of the world and the intermediary organizations. Coaching they had, kind of, but not really. So going in, people knew they were going to have to build these pieces themselves.

The other thing, to counter the charisma factor, is that you always want to interview other implementing sites. You'll get a good, general story from the purveyor or implementing group, and you'll want to see how it is on the ground for other people who have worked with them and worked through the problems themselves. What is their experience? At the same time, to stand on the side of my purveyor friends: these people know what they're doing. Many of them, not all of them, have done this for a long time; some are still learning, as we all are. They know the conditions under which it will work and the conditions under which it won't, and they're not going to BS you about it, because they have limited resources and capacity and want to spend their time where they think it's going to work. They spent too much of their own time learning what works, so they often have very clear preparation and installation processes. Then there is this whole issue of readiness. Some purveyors will help you create readiness in your community, and that's a whole process in itself. If not, how are you going to create readiness in the ways that Philip was talking about during his presentation? A final question is: are we willing and able to partner with this purveyor and have a good working relationship? So: flirt, date, get engaged, eventually get married, but don't leap into bed. The reproductive health people will like that.

At the end of the day, communities and states have to go through a process of examining their needs, evaluating evidence-based practices, interviewing the resources that will get them there, and interviewing implementing sites, and then we're back to our theme: it's a messy world. It's a tradeoff. Not everything is going to be at a hundred percent, but you're going to need to assess how well it will meet the needs. What's the fit in the community? Do we have the resources to do it? What is the evidence? How ready is the program for replication, and how well operationalized is it? And what's our capacity to implement? We give people this as a discussion tool: after they have collected all their data about their interventions, we ask them to sit down and think about these six areas. Again, I'm a bucket lady: how well does it meet the needs, what's the fit, do we have the resources, how strong is the evidence, how street ready is the intervention, and what's the capacity to implement? This is not decision-making; the numbers do not make the decisions. It is a discussion tool. It gives communities the opportunity to rate independently, then see where they disagree and why, and have a conversation about the degree to which this fits and whether there's alignment. It's all a tradeoff. You might have amazing evidence and terrific purveyors, but you can't afford them.
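A minimal sketch of how such a discussion tool might be tallied, purely for illustration: the six area names follow the talk, while the 1-to-5 scale, the disagreement threshold, and all the numbers below are assumptions, not part of the tool she describes.

```python
# Illustrative sketch of the six-area discussion tool described above.
# The area names follow the talk; the 1-5 scale, the spread threshold,
# and the sample ratings are assumptions. The numbers do not make the
# decision; they only surface where the team needs to talk.
from statistics import mean

AREAS = [
    "Meets identified needs",
    "Fit with the community",
    "Resources to implement",
    "Strength of the evidence",
    "Readiness for replication",
    "Capacity to implement",
]

def summarize(ratings: dict[str, list[int]], spread_flag: int = 2) -> None:
    """Print each area's average rating, and flag areas where raters
    differ by spread_flag points or more, marking them for discussion."""
    for area in AREAS:
        scores = ratings[area]
        spread = max(scores) - min(scores)
        note = "  <- discuss: raters disagree" if spread >= spread_flag else ""
        print(f"{area:26s} avg={mean(scores):.1f} range={spread}{note}")

# Example: three team members independently rate one candidate program.
summarize({
    "Meets identified needs":    [5, 4, 5],
    "Fit with the community":    [2, 5, 3],   # disagreement -> conversation
    "Resources to implement":    [3, 3, 2],
    "Strength of the evidence":  [4, 4, 4],
    "Readiness for replication": [3, 2, 3],
    "Capacity to implement":     [4, 1, 3],   # disagreement -> conversation
})
```

The design choice of flagging spread rather than averaging it away mirrors her point: the tool exists to start a conversation about where and why raters disagree, not to produce a score that makes the decision.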
You might have a wonderful program, but the purveyor is not willing to work with you on the cultural fit; or the purveyor is willing to work with you on the cultural fit, but something else doesn't line up. So it really is this complex set of issues that needs to be addressed: how does this fit in your state? What are the pushes and pulls on it? What's the sustainability? And a pitch, because I get to make one final pitch: we need to begin to think about the funding for this infrastructure, for the how. Where are our implementation teams at the agency level, at the state level? What's the role of intermediaries, so that sustainability for the program can be there over time? New programs have, if you will, a pathway, a way to be scaled up. We're not good at thinking about how to fund the rest; there are big policy issues there about how to fund the infrastructure required. We've looked at Medicaid funding, and they've waxed and waned: they're going to do bundled rates, they're not going to do bundled rates; they are going to include some of these costs, they're not going to include them. States are addicted to Medicaid funding, so they're matching every bloody dollar they've got, leaving no money to actually fund some of these infrastructure costs that are required. So it's a complex set of variables. We know we need a good what. That should be a multiplication sign there [points to slide]: a good what times a good how gets you a good impact. Communities need to assess purveyors, and purveyors are going to be assessing communities; it's a mutual selection process. Readiness matters; not preparing sites, communities, and practitioners wastes time and money.

I'll tell you a little bit about some data. I would encourage you to look at a study that Darren Hicks did with the Nurse-Family Partnership scale-up in Colorado, a naturally occurring kind of experiment, not a randomized clinical trial; I don't think implementation science is going to do randomized clinical trials. It looked at communities in which the local partnership was very thoughtfully and carefully formed and communities where it was a bit messier, a bit looser, and the correlations between the way those local partnerships were run and fidelity and outcomes are very clear. Another little study, by Stephanie Romney in San Francisco, shows that when agencies were forced to come to the table to get money, the cost per graduate in that program was astronomical; they didn't graduate very many people, compared to the cost per graduate of the evidence-based programs serving families and communities where readiness work did, in fact, occur. So there are some nice cost data. Again, this is not rigorous research, but there are indications that this readiness work, this exploration work, is important, and that we need to focus on both the what and the how in order to get impact for kids and families. Thank you.