During COVID-19, everyone went into baking mode. Last year, it was all about banana bread. This year, for my family, it is about the best homemade ice cream for our upcoming Memorial Day Bash. We have a family recipe, but I wanted to kick it up a notch. I reimagined the model of what I thought the recipe for ice cream was and came up with something even better. Every year we encourage our clients to reimagine their programs and services. They each have their own recipes, but sometimes you need to rethink everything to create an award winner or, because circumstances change (such as COVID-19), you need to reimagine your programs altogether. One of the best ways to do this is through a logic model.
 
Logic models have been in use since the 1970s, but gained popularity in the 1990s when United Way started using them with its agency partners. Logic models graphically express the step-by-step process of organizing appropriate resources and activities that then produce the intended outputs and outcomes of a program or organization – just like a recipe using ingredients to create a dish. Each logic model can be considered your “secret recipe” for how you are successful in your programs and services. Internally, they can be used to monitor and evaluate work. Externally, the best logic models can summarize the purpose of a program in a way that a written statement cannot. As you reimagine your logic models, here are the top questions we receive, as well as a template we developed, to get you started on your new recipe:
 
How often should a logic model be created or updated?
 
Logic models should be created or updated for new programs or existing programs that are undergoing change. Programs with long track records of success, those with a strong evidence base or those that follow quality improvement programs, like early childhood centers following NAEYC standards, are less likely to require updating on a frequent basis. However, logic models for programs that are experimental or for which the evidence base is still developing should be reviewed regularly to ensure fidelity to the model or to make necessary course corrections. Given the extraordinary nature of events in 2020, we recommend reviewing all of your logic models to see if they are still relevant. If you’re trying to determine whether to update your logic model, here are some questions to ask:
  • What was our original hypothesis about our program?
  • Based on our experience now, what have we learned? 
  • Based on recent events, have conditions changed and how do we best respond to them?
  • What inputs were really used to produce our desired outcome?
  • Did we experience any positive or negative unintended outcomes? How do we mitigate or accelerate them?
  • Does our model take community goals, including social justice goals, into account?
  • Have we created or experienced any systemic shifts?
  • Does any new research exist that addresses what works and should be added?
  • Given the above, what should we modify for next year?
 
It is important to update logic models because they inform the evaluation plan that staff use to determine which data to collect, and at what intervals, to demonstrate impact.
 
Whom should the creation process include?
 
Creating or updating logic models is a team sport. It cannot be delegated to a single staff member to complete because multiple people are involved with the organization’s programs, including program staff who execute the services, management who set expectations and fundraising staff who report on program outcomes to donors and supporters. Creating or changing a logic model is akin to creating or changing a program. Staff from all areas of the organization need to be involved to ensure the program is feasible, the projected outcomes and budget are realistic, and the organization has a plan to capture data it needs to tell compelling stories.
 
Are logic models only for programs?
 
No – logic models are like recipes; you can create logic models for every area of your organization, including your board, your fundraising ideas and even your finance team.
 
We serve multiple populations. How do you make logic models easier to read?
 
Although the original logic model from the 1990s is classic, we gave it a facelift to better account for system change and two-generation programs. For our clients who serve multiple populations (e.g., women and children) and/or have community-driven outcomes (with multiple drivers), we created a “layered logic model.” It creates clarity about whom you are impacting and what outcomes you expect for each population. If your program serves multiple populations, consider upgrading your logic model with this approach – you can find a template HERE.
 
What is the difference between all of the logic model components?
 
Logic model components – inputs, activities, outputs, outcomes and impact – can be confusing. We are often asked to clarify the meaning of each term. Here is a quick rundown on the differences between them.
 
  • What is the difference between inputs, activities and outputs?
To understand the difference between inputs, activities and outputs, we tell nonprofits to think of inputs as the raw ingredients of a recipe, the activities as the actions you take to create a meal and the outputs as the number of meals produced. In nonprofit language, inputs can be things like staffing, funding or curricula; activities include things like administering assessments, delivering classroom instruction or hosting counseling sessions; and outputs include the number of children served and the number of counseling sessions hosted. Remember: more activities do not equate to a better logic model; list only the activities that create the impact.
  • What is the difference between an output and an outcome? 

This is the most common question we get, but the easiest one to answer. An output is a unit of measurement that counts numbers served or activities conducted. It answers the question, “What happened?” An outcome is a unit of measurement that determines what has been accomplished. Any time you multiply or divide (e.g., percentage change in the number of meals produced), it is always an outcome. It answers the question, “What resulted?”

  • What is the difference between outcomes and impact? 

This is the hardest one to measure but is the most important to differentiate. Impact is a unit of measurement that illustrates whether the service made a difference. It can be calculated by starting with the participant group outcomes (what resulted?) and subtracting control group outcomes (what would have resulted anyway?). It answers the question, “What difference was made?”

 

For example, an afterschool program has 15 seniors (output) in its program with the goal of increasing the graduation rate of its participants. It has a 95 percent graduation rate (outcome). If the target high school has a graduation rate of 80 percent, the program’s impact is a 15-percentage-point increase in the graduation rate (assuming that the demographics of the student body and the program participants are comparable). For impact, you cannot count what would have resulted without your intervention (the 80 percent who would have graduated anyway).
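If it helps to see the arithmetic laid out, here is a minimal sketch in Python of the afterschool example above. It treats the school-wide rate as the comparison group, just as the example does; the variable and function names are ours, purely for illustration.

```python
# Minimal sketch of the afterschool example above.
# All numbers come from the example in the text; names are illustrative only.

seniors_served = 15        # OUTPUT: a count of who was served ("What happened?")
program_grad_rate = 0.95   # OUTCOME: what resulted for participants ("What resulted?")
school_grad_rate = 0.80    # comparison: what would likely have resulted anyway

def impact_percentage_points(participant_rate: float, comparison_rate: float) -> float:
    # IMPACT: participant outcome minus the comparison outcome ("What difference was made?")
    return (participant_rate - comparison_rate) * 100

print(f"Output:  {seniors_served} seniors served")
print(f"Outcome: {program_grad_rate:.0%} graduation rate")
print(f"Impact:  {impact_percentage_points(program_grad_rate, school_grad_rate):.0f} percentage points")
```

The heuristic from above holds: the count is the output, the rate (a division) is the outcome, and the difference against the comparison group is the impact.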

 
What differentiates a great logic model from an average one?
 
We have worked with many organizations on creating or improving their logic models. The difference between a great one and an average one is that the reader can tell what the program does and which methods are needed to achieve the stated outcomes. You should be able to look at the logic model – without any other information – and know what the program does and the organization’s secret recipe. The ideal logic model doesn’t need to show everything done (we call this the “kitchen sink” version); instead, it should focus on the most important elements that drive impact (we call this the “right-sized” version). A great logic model is an award-winning recipe that can help other organizations replicate the results of your program. It narrows the activities down to only the essential ingredients and helps streamline the data agencies need to collect to demonstrate impact. It also includes a theory of change as a headline for the logic model. To pressure test your logic model, show it to someone unfamiliar with your program and ask what they observe. If they cannot accurately articulate what your program does, it may be time for a refresh. To go one step further, have them rate it critically on clarity, comprehensiveness, coherence and common sense.
 
As we begin to reopen programs and services, we hope you will be inspired to look at your organization’s logic model and assess whether it is time for a new or reimagined recipe. Use this as a time to determine whether your programs are still relevant and as impactful as you’d like, and whether you have the right data to draw and communicate that conclusion. And as always, we’d love to hear what you cook up and whether you have any additional questions.
 

Sign up to receive the Social TrendSpotter e-newsletter