Selecting Evidence-Based Programs

The Where Are You? section focuses on identifying what your organization wants to address, based on your needs, assets, priorities, capacity, and goals. These factors bear directly on program selection: you need to determine the fit between an evidence-based program and your population (e.g., does it address cultural differences?) and your organization (e.g., does it align with other evidence-based programs you are implementing? are there underlying contextual differences that could affect results?). In addition, it is important to consider the level of evidence that supports the program, including

  • how evidence is defined,
  • the depth of the evidence,
  • the criteria or standards for classification,
  • the focus or target of the program, and
  • the population studied.

Finding Evidence-Based Programs

Registries, directories, or lists of evidence-based programs are tools that help you match the problem or gap you have identified with the pool of existing evidence-based programs. A number of government and nongovernment organizations have compiled registries of evidence-based programs as a way to disseminate information about programs and their levels of effectiveness.

Registries list and categorize programs, much like a consumer guide, allowing users to identify programs that best meet their needs and have a body of research behind them. Registries vary in the programs they include, how they define evidence, the depth of evidence they require, the criteria they use for classifying evidence-based programs, and their area of focus. For example, registries may focus on specific content areas, such as teen pregnancy prevention, violence prevention, or educational interventions. Some registries include only programs that meet a certain standard of evidence, while others report both programs with evidence of positive effects and programs with limited, mixed, or negative effects.

Registries also vary in how they categorize programs. In all registries, programs with extensive research and replication are placed in a higher category than those with less, but what constitutes sufficient research and replication differs among registries and by the outcome of interest (e.g., teen pregnancy prevention, juvenile justice involvement). Because of this variability in category definitions, it is important to carefully review the criteria or standards a registry uses to classify a program. Overall, registries are moving toward stronger standards.

Although each registry is different, most contain

  • descriptive information for each program listing;
  • ratings of research quality;
  • a description of the research rating methodology;
  • a list of the studies, implementation materials, and technical supports reviewed; and
  • contact information so that potential users can obtain more information on studies and implementation.
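
To make these common elements concrete, here is a minimal sketch of a registry listing modeled as a simple data record. The field names and rating values are hypothetical illustrations, not the schema of any particular registry.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical sketch of a registry listing; field names and the
# rating scale are illustrative, not any real registry's schema.
@dataclass
class RegistryListing:
    program_name: str
    description: str                      # descriptive information
    research_rating: str                  # e.g., "promising", "effective"
    rating_methodology: str               # how the rating was assigned
    studies_reviewed: list[str] = field(default_factory=list)
    implementation_materials: list[str] = field(default_factory=list)
    contact: str = ""                     # where to get more information

listing = RegistryListing(
    program_name="Example Program",
    description="A school-based prevention curriculum.",
    research_rating="promising",
    rating_methodology="Two independent reviewers rated study quality.",
    studies_reviewed=["Doe et al. (2015)", "Roe et al. (2018)"],
    contact="developer@example.org",
)
```

Modeling listings consistently like this can make it easier to compare programs across registries that use different category labels.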

Looking at the Evidence

While registries and evidence-based program vendors and developers often provide extensive information and may classify programs into certain categories or claim effectiveness, it is important to look closely at what the evidence actually says and the context in which it was studied. Consider these questions:

  • Where is the evidence from (e.g., peer reviewed journal, vendor website)?
  • What type of evidence is it (e.g., randomized controlled trials, single-case design, quasi-experimental design, quantitative research synthesis)?
  • What population was studied?
    • What was the sample size for the study/studies?
    • How representative was the population studied (e.g., who was included in the study/studies?), and is it similar to your target population (age, gender, race, culture, disability status, language spoken)?
    • Were the data disaggregated by subgroup in order to understand variation of outcomes by subgroup?
  • What is the quality of the evidence (e.g., was the program implemented by a researcher or a practitioner? was fidelity measured?) and how feasible will it be for you to replicate the program under the same conditions used in the original study?
  • What outcomes were measured (e.g., are they relevant to your outcome of interest?), and what effects did the study show (e.g., are effect sizes reported, and if so, how large are they?)?
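
When a study reports means and standard deviations for treatment and comparison groups, you can gauge the size of an effect yourself. Below is a minimal sketch computing Cohen's d, one common standardized effect size; all of the numbers are made up for illustration.

```python
import math

def cohens_d(mean_t: float, sd_t: float, n_t: int,
             mean_c: float, sd_c: float, n_c: int) -> float:
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study results: treatment group scored 52 (SD 10, n = 120),
# comparison group scored 48 (SD 11, n = 115).
d = cohens_d(52, 10, 120, 48, 11, 115)
print(f"Cohen's d = {d:.2f}")  # roughly 0.38
```

Conventional benchmarks (about 0.2 small, 0.5 medium, 0.8 large) are only rough guides; what counts as a meaningful effect depends on the outcome and context.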

The goal is not simply to select what works best, but to select what works best for you, with your resources, and in your context, because evidence-based programs will not work the same for everyone. Therefore, as previously mentioned, it is important to consider your population, your capacity (e.g., funds, resources, staff knowledge, competencies, and training), and your structure to ensure that you can implement and sustain the program.

Population
Because evidence-based programs may work differently with different populations, it is important to compare the populations studied with your target population and to look at whether the program provides guidance on variations in impact across different individuals. Specifically, you should look at

  • how the program addresses variation, such as cultural or linguistic differences, and
  • whether the data showed differences in impact for different individuals or subgroups.
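
If study data (or your own pilot data) are available at the individual level, disaggregating outcomes by subgroup is straightforward. Below is a minimal sketch using pandas; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical individual-level outcome data; columns are illustrative.
data = pd.DataFrame({
    "subgroup": ["A", "A", "B", "B", "B", "C", "C"],
    "outcome":  [0.50, 0.62, 0.20, 0.15, 0.25, 0.55, 0.40],
})

# Mean outcome and sample size per subgroup: large differences here
# suggest the program's impact may vary across populations.
by_group = data.groupby("subgroup")["outcome"].agg(["mean", "count"])
print(by_group)
```

Large gaps between subgroup means, or very small subgroup counts, signal that a study's average effect may not generalize to your population.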

Cost
As you think about the cost of the program, it is not enough to consider initial startup costs. You must also consider the costs of implementing and sustaining the program, which may include materials, initial and ongoing training, and tools and products from program developers. To ensure that you will be able to sustain the evidence-based program, it is important to determine whether you have the capacity to support the program over time, or to identify what resources (human, financial, physical) you will need to seek out or reallocate to support sustainability.
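
One simple way to look beyond startup costs is to project the total cost over the period you intend to sustain the program. The sketch below is a back-of-the-envelope illustration with made-up figures.

```python
# Hypothetical multi-year cost projection; all figures are illustrative.
startup = 15_000          # materials, initial training, licensing
annual_ongoing = 6_000    # refresher training, developer tools, support
years = 5

total = startup + annual_ongoing * years
print(f"Projected {years}-year cost: ${total:,}")  # $45,000
```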

Structure, Culture, and Alignment
To help ensure that the evidence-based program you select is sustainable, check that it fits the structure of your organization, is consistent with your organizational culture, and aligns with other interventions and programs you have in place.

The more you know about the evidence-based program you are selecting, the better you will be able to understand how to make it work for you. If you need additional information to make an informed decision, consider reaching out to the program developer or to others who have implemented the program.

Challenges of Blending Programs

Blending an evidence-based program with another program can be problematic because the evidence-based program will likely not be implemented with fidelity. Because evidence-based programs have key characteristics that make them unique, blending them with other programs and not implementing them as the developers intended may result in poor or even harmful outcomes. See the implementation section for more about the importance of fidelity.

Although blending programs may create issues, implementing multiple evidence-based programs in an aligned manner within communities and organizations can be effective. For example, Cayuga County, New York, was able to implement a range of evidence-based programs to support youth at risk through a collaborative framework. Learn more about how the county was able to do this.

A Complementary Approach

Core components are the parts, features, attributes, or characteristics of a program that research shows influence its success when implemented effectively. These core components can serve as the unit of analysis researchers use to determine “what works,” and they become the elements practitioners and policymakers seek to replicate within and across a range of related programs and systems in order to improve outcomes. Research techniques such as meta-analysis can shed light on which components make programs successful across a range of programs and contexts, helping researchers identify with greater precision what works, in which contexts, and for which populations. Learn more about core components approaches to building evidence of program effectiveness.
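
To illustrate the kind of technique referred to here, below is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis, which pools effect sizes from several studies of programs sharing a component. The effect sizes and standard errors are made up for illustration.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and standard errors below are hypothetical.
studies = [
    # (effect size d, standard error) for studies sharing a core component
    (0.35, 0.10),
    (0.20, 0.08),
    (0.50, 0.15),
]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
```

In practice, real meta-analyses also test for heterogeneity across studies and often use random-effects models, but the core idea of weighting more precise studies more heavily is the same.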