
Implementing and Adapting

Once you have selected an evidence-based program, how do you make it a reality and incorporate it into the everyday life of your organizational setting? The process of putting the evidence-based program into practice is implementation. In other words, you can think of the evidence-based program as what you are trying to put into practice or introduce and implementation as how you are putting it into practice or introducing it. The Where Are You? section discussed assessing your capacity and readiness. Ensuring that you have the capacity to implement a program before implementation occurs can help you limit challenges and frustrations during implementation. Developing an implementation plan and clear processes and procedures can also help you ensure a smoother implementation that is sustainable over time.

It is important to note that implementing new programs is not easy, even if you have ensured that your organization has the capacity and have developed a clear implementation plan. You may face unforeseen barriers and resistance to change, and it takes time to fully implement an evidence-based program and to see the outcomes you are hoping for. Therefore, throughout implementation it is important to identify and address potential barriers, as well as organizational changes that may affect implementation. It is equally important to secure buy-in and support from staff and stakeholders by keeping them informed, eliciting their feedback, and using evaluation data to illustrate positive outcomes or to guide adaptations.

Fidelity and Adaptation

One key consideration when implementing your evidence-based program is fidelity—whether you are implementing the evidence-based program as it was intended to be implemented when it was developed or validated. This may include looking at the following components of fidelity: adherence, exposure/duration, quality of delivery, program specificity, and engagement.1

  • Adherence refers to how closely we stick to the program: do we implement it as its developers intended and as it was implemented in the supporting research?
  • Exposure/duration refers to how often the program is delivered and how long it lasts. When thinking about fidelity, we are considering whether the exposure/duration matches what the program's author or publisher recommends.
  • Quality of delivery refers to how well the program is delivered, including whether it is delivered by trained and knowledgeable staff.
  • Program specificity refers to how well the program is defined and distinguished from other programs. A clearly defined program allows staff to more easily adhere to it as designed.
  • Engagement refers to how engaged and involved the participants are in the intervention or activity.

Implementing the evidence-based program faithfully is a worthy goal, but differences in context, staff, and populations may require adaptations. While adaptations or modifications may be necessary, it is important that you do not change the core components of the program. The key question is how the changes you make when adapting the program will affect your desired outcomes. If you stray too far from the evidence-based program's essence, your results will no longer be representative of the program. Therefore, when adapting the program to fit your context, consider carefully what adaptations you have made, particularly to staffing and dosage/duration, and ensure that the core components, or essence, of the program are maintained. Some program developers provide information on these core components; use it whenever it is available.

Effectiveness Factors for Interventions that Address Externalizing Behavior Problems

The report, Developing Evidence-Based Practice Guidelines for Youth Programs: Technical Report on Effectiveness Factors for Interventions that Address Externalizing Behavior Problems (PDF, 98 pages), describes a core components approach to using evidence to improve the effectiveness of youth programs. Across the many program environments that offer youth programs (e.g., community, mental health, public health, child welfare settings, schools), there is a great deal of well-controlled research available. Meta-analysis was used to uncover the characteristics of programs effective in reducing externalizing behavior problems, which will be translated into practice guidelines for those who design, support, and implement such programs.


Sustainability

It is important to ensure that the changes you make when implementing your evidence-based program are sustainable, that is, that they can endure over time. The question of how to keep a good thing going has long perplexed those who implement programs for youth and families at risk. The easy way out is to ignore the issue of sustainability, but denial, although comfortable, is not a solution. Instead, sustainability should be considered during the initial selection, planning, and implementation of an evidence-based program. In addition, evaluating the program and continually adapting it on the basis of evaluation results and changes in your organization's context (e.g., changes in staff, the community, or the population served) can help ensure that the program remains relevant and addresses challenges that arise over time.

1 Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45; Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26, 257–263; O’Donnell, C. L. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Review of Educational Research, 78, 33–84.