
Discussant Remarks, "Strategies for Supporting the Selection, Implementation, and Alignment of EBPs in Community Settings"

Watch as Brian Bumbarger, Director of the EPISCenter at the Prevention Research Center for the Promotion of Human Development at Penn State University, reflects on the second panel of the Forum. He highlights the importance of matching implementation support to the stage the organization or system is in, developing organizational and community capacity to use data for continuous improvement, and including intermediaries as a prevention support infrastructure.


Transcript

Let me just start by telling you a little bit about who I am.  I think – raise your hand if you know who I am and what the EPISCenter is.

[Laughter]

Okay, well, so that’s about half; that’s about half.  I don’t think my bio’s in the packet, so let me just give you a real brief introduction, because half of the room doesn’t know me, and I think this is relevant to how I’m filtering what I’ve been hearing the last two days.  I direct what’s called the Evidence-based Prevention and Intervention Support Center, which is a state-level intermediary organization in Pennsylvania.

We’ve been working in Pennsylvania for 15 years to scale up a really diverse menu of evidence-based prevention and intervention programs, what started out as the Blueprints for Violence Prevention.

So, we’ve been doing that work intentionally, scaling up these very diverse EBPs, for 15 years.  And for the last five years, I’ve directed this state-level intermediary organization that evolved out of that work, an organization created specifically to facilitate the process.  It reflected a recognition on the part of diverse state policymakers that this really needed some infrastructure, some specific help.

So, again, I’m working at the state level.  We’re working with about 200 replications of these very diverse evidence-based prevention and intervention programs.  And that’s important, because if you ask me any question related to evidence-based programs, the answer is going to be, “It depends.”

It depends whether you’re talking about Multisystemic Therapy, where there is a system of clinicians and therapists and a culture of clinical supervision, or about Big Brothers Big Sisters, or the Strengthening Families Program, or the PATHS program.

So, this is the context I’ve had to work in for the last 15 years, and it’s telling that there were about ten years before the state created this state-level infrastructure, because what we did before there was an EPISCenter was just run around the state putting out fires.  Some state agency would give a grant to a community to implement one of these EBPs, and then a year or two later they’d find out that nothing happened, and they didn’t know why, and they’d send us out to figure it out and fix it.

That’s not very efficient, and probably not a good use of taxpayer dollars, but it helped us accumulate a body of knowledge about what the issues were, and we realized it wasn’t an endless list of issues.  We saw the same kinds of things coming up over and over again.  And so, we built a knowledge base that led to the creation of this state-level infrastructure.

So, given that context of what I do and that experience, let me try to capture a few of the things I’ve heard over the last few days.  One point has been mentioned a few times, but I don’t think there’s been enough focus on it, so I want to reemphasize it: implementation support has to match the stage at which the organization or the system is.  And that stage varies across time.

That makes this a much more challenging endeavor.  It’s a form of readiness assessment: we have to figure out where the organization or the system is in this developmental process and apply our implementation support and technical assistance specifically to meet that stage of development.

There’s been a lot of discussion about readiness assessment.  Most of the things we’ve developed at the EPISCenter have been reactive: we’ve encountered barriers and identified problems, and then tried to come up with resources, tools, and technical assistance to address them.  So, one of the things we ended up developing, as Caryn mentioned, was the logic models.

We were scaling up the Blueprints for Violence Prevention, and we realized from the get-go that most of them didn’t even have a logic model.  These were the Blueprints; these weren’t something somebody dreamt up in their local community.  These were the world’s Blueprints for Violence Prevention, and they didn’t have logic models.

Another thing we developed were readiness assessments to sort of identify – you know, when you buy a new piece of software, they always identify the minimum system requirements.  You know, you have to have so much RAM and whatever.  So, we try to identify, what are the minimum system requirements necessary for each of these specific – again, very diverse interventions?

There’s been a lot of discussion about capacity building, and I think it’s really important to think about capacity building specifically around Continuous Quality Improvement.  That’s one of the most important capacities we’ve recognized has to be built, not only among practitioners and provider organizations but also among policymakers and funders.

And the place where that seems most relevant, in my experience, has been around the use of data.  There are thousands of these boutique MIS systems that were developed to collect data and answer questions, but almost always from an accountability perspective, not a Continuous Quality Improvement perspective.  We count things to say we accomplished something.  We don’t measure things to figure out how to do better.  Right?  So, that’s an important distinction, I think.

And in that regard, when we’re building capacity, what we really want to do is shift the whole field toward intrinsic motivation.  We need to move the field from a culture of compliance to a culture of excellence.  So again, think about that vis-à-vis data collection for accountability versus data collection for Continuous Quality Improvement.

Arthur Evans talked about drivers of behavior for policymakers and funders. Abe spoke about motivation for innovation.  So again, is it about accountability, or is it about Continuous Quality Improvement?  Are we seeking compliance?  Are we seeking excellence and better outcomes?

Let me see what I want to focus on.  We also need a lot of work on capacity building at the very fundamental needs-assessment level.  Right?  We need much more sophisticated and rapid systems for epidemiology and community-level diagnosis.  I know this is pushing way upstream, before even the selection of a program, but a lot of our problems with implementation stem from a poor fit between an EBP or an intervention and the needs of a specific community.  So, we really need to push upstream and get more sophisticated and rapid systems for community-level diagnosis.  We need to improve on our traditional surveillance systems and move toward systems that inform problem solving and outcomes improvement.

We talked yesterday a lot about multiple EBPs within a community and how complex that is.  I think we definitely need to recognize the mix of EBPs and non-EBPs within a single organization.

In the 15 years I’ve been working in Pennsylvania, I’ve never encountered an organization that is implementing only evidence-based programs.  That’s a really important reality to acknowledge, because sometimes we set up these unintentional dichotomies that are sort of iatrogenic.  We actually make it more difficult and more expensive to do the things that work than to do the things we don’t know whether they work or not.  They’re not evidence-based; that doesn’t necessarily mean they’re ineffective, but we don’t know.

Another reality is that the problems that we’re dealing with, a lot of what we’re talking about here, are specific artifacts of how the field of prevention and intervention science developed.  And the field isn’t that old.

And the EBPs that we’re struggling so hard to figure out how to fit in communities were developed at the sort of age of enlightenment of prevention science.  Most of these EBPs we work with in Pennsylvania didn’t go through the nice, tidy sequence of efficacy testing, then effectiveness testing, then packaging for scale-up.  They went from efficacy testing to scale-up.  They went from efficacy testing to being on the list, and everybody wanted them.

And so now, a lot of what we’re doing is trying to pound square pegs into round holes.  The important thing, though, is that there’s likely to be some natural evolution as new interventions are developed and proven efficacious by a new cohort of intervention developers.  So, some of these things are going to evolve naturally.

Similarly, I think we need to consider progressive reform of our conventional financing structures for these programs.  The financing structures – and this has been a huge problem in Pennsylvania – were set up before there was an evidence base.  So, they weren’t set up with these kinds of interventions in mind, with fidelity in mind.  What we’ve encountered, again, is systems that are a better fit for things that aren’t evidence-based.

So, the very last thing I want to say, going back to what we do in Pennsylvania, is the importance of intermediaries as a prevention support infrastructure.  Abe talked about the Interactive Systems Framework.  That’s really the model we look at in Pennsylvania, and when we read about the Interactive Systems Framework, we said, “Oh, yeah, we get that there’s this prevention delivery system.  We get that there’s this knowledge translation system.  And we get that, in the original Interactive Systems Framework, the policymakers and funders were sort of this macro level on the outside.”

But we read about this prevention support system in the middle, and we’re like, “What is that?  We’ve never seen that.”  So, we became that.  We became the prevention support infrastructure, and it’s really about facilitated interaction and a problem-solving approach to technical assistance.  It goes back to – I can’t remember who used this term, but “making it happen rather than just helping it happen.”

[Applause]

[End of Audio]