Understanding Program Evaluation and Making It Work for You
Written by Sarafina Ndzi & Amreen Thompson

In the world of afterschool, out-of-school time (OST), and school-based programs, we often talk about impact, but how do we show it? Staff and supporters of these programs are increasingly interested in understanding whether the programs achieve their intended goals—and to what extent they improve outcomes for youth.
That’s where program evaluation comes in.
Program evaluation is the systematic process of collecting and analyzing data to understand the implementation and outcomes of a program. It often seeks answers to questions such as, “Is the program being implemented as planned?” and “Is the program effective?” While the word “evaluation” can feel intimidating, at its core it’s simply about asking good questions, gathering evidence, and using that evidence to grow and strengthen programs.
Demystifying Evaluation
Program evaluation uses a few common terms that can be helpful to know:
- Logic Model: This is a simple roadmap showing how your program’s activities are expected to lead to results. Think of it as a visual story of your program.
  Logic models often illustrate the key components of a program, including inputs, activities, outputs, and outcomes. They are useful tools for conceptualizing how a program is intended to work by visually mapping the connections between resources, actions, and expected results. For example, a tutoring program’s logic model might connect trained volunteers (inputs) to weekly tutoring sessions (activities) to 30 sessions delivered (outputs) to improved reading confidence (outcomes).
  A logic model can also serve as a foundation for program evaluation: evaluation efforts should focus on what the program is designed to achieve, with data collection planned around components that are both present and feasible to measure.
- Inputs, Outputs, and Outcomes: Inputs are the resources you put in (like staff and materials), outputs are what you deliver (number of sessions, students served), and outcomes are the changes you hope to see (improved skills, engagement, or well-being).
  Understanding these elements is like knowing the ingredients needed to bake a cake. Without the right ingredients, the end result won’t turn out as expected. Similarly, if a program lacks the necessary components, or if they aren’t aligned, it may not function effectively, or it may end up looking very different from what was intended.
- Qualitative, Quantitative, and Mixed-Methods: Quantitative data refers to numeric information collected through methods such as surveys, attendance records, and counts of sessions or participants. Analyzing these data helps determine whether changes over time, or differences within or between groups, are statistically significant rather than due to chance (see the brief sketch after this list).
  Qualitative data captures subjective information, often expressed in words or descriptions. It is commonly gathered through focus groups, interviews, or program observations. Qualitative data can offer deeper insights into participant experiences and program implementation that quantitative data alone cannot fully capture.
  Mixed-methods approaches combine quantitative and qualitative techniques. By integrating the two, evaluations can offer a more comprehensive understanding of program outcomes and processes and strengthen overall findings.
- Formative vs. Summative Evaluation: Formative evaluation happens while a program is running. It’s about learning and improving in real time. Summative evaluation looks at the bigger picture: did the program achieve its goals?
  Before assessing whether a program was effective for participants (summative evaluation), it’s often valuable for staff to first understand whether the program was implemented as intended, and to identify any adjustments that were made along the way (formative evaluation). This allows for real-time learning and course correction: essentially, adjusting the sails before evaluating the journey’s end.
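To make “statistically significant” a bit more concrete, here is a minimal sketch in Python of the kind of pre/post comparison described above, using a paired t-test from the scipy library. The scores, the sample of ten youth, and the 0.05 cutoff are all illustrative assumptions, not data or methods from any real program.

```python
# A minimal, illustrative sketch: is a pre/post change statistically
# significant? The scores below are hypothetical, not real program data.
from scipy import stats

# Self-reported engagement scores (1-5 scale) for the same ten youth,
# measured before and after the program (hypothetical numbers).
pre_scores = [2.0, 3.0, 2.5, 3.5, 2.0, 4.0, 3.0, 2.5, 3.0, 2.0]
post_scores = [3.0, 3.5, 3.0, 4.0, 2.5, 4.5, 3.5, 3.0, 4.0, 2.5]

# A paired t-test asks: is the average pre-to-post change bigger than
# what random noise alone would plausibly produce?
result = stats.ttest_rel(post_scores, pre_scores)

avg_change = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"Average change: {avg_change:+.2f} points")
print(f"p-value: {result.pvalue:.3f}")

# A common (if arbitrary) convention: p < 0.05 is treated as evidence
# that the change is unlikely to be due to chance alone.
if result.pvalue < 0.05:
    print("The improvement is statistically significant.")
else:
    print("The change could plausibly be due to chance.")
```

In practice, many programs run comparisons like this in a spreadsheet or statistics package rather than in code; what matters is the question being asked, not the tool.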
These terms aren’t just for program evaluators and researchers; they’re tools that program directors, managers, and frontline staff can use to better understand and communicate the “what,” “why,” and “how” of their work.
Why Data Matters
Strong stories make programs relatable, but stories backed by data make them powerful. When you collect and share data about your participants’ growth, staff practices, or program outcomes, you:
- Build credibility with funders, partners, and schools.
- Make a stronger case for support, expansion, or sustainability.
- Equip your team with insights to adjust practices and maximize impact.
From Insight to Action
At PEAR, we believe evaluation is not just about numbers; it’s about continuous learning and improvement. By combining stories with data, programs can better advocate for their work, demonstrate their value to stakeholders, and, most importantly, support the young people they serve.
Bottom line: Engaging in program evaluation and using data to support your program doesn’t have to be complicated. Start by reviewing this list of common terms to set the foundation for your evaluation, then build from there, letting data amplify your program’s story.
