Evaluating a program’s design can help improve its effectiveness
While evaluation is not without challenges, the information obtained can help streamline and target program resources most efficiently.
Program evaluation is a valuable tool for program managers seeking to strengthen the quality of their programs and improve outcomes. As program administrators, we design and implement programs to support grad students and postdocs as they navigate the complexities of academia. Evaluation is an integral part of continuous improvement and of accountability to leadership, program participants and funders. When evaluating programs, it is therefore essential to examine them critically by systematically collecting data, analyzing the information and making informed decisions and recommendations.
In his book Program Evaluation: Core Concepts and Examples for Discussion and Analysis, Dean Spaulding defines a program as a set of specific activities designed for an intended purpose, with quantifiable goals and objectives. Each specific activity of a program requires tailored evaluation methodologies and frameworks. However, many of us fall short of developing robust evaluation frameworks due to a lack of time and resources. There is also a sentiment among many frontline administrators that program evaluation is complex, needs specific expertise and could divert the limited resources spent on running the program. This sentiment is not unfounded – the process can genuinely be complex and resource-intensive – but not evaluating a program means missing opportunities to improve the quality of its design, delivery and outcomes. Forgoing evaluation can even be counterproductive, as evaluation can uncover cost and resource savings that can, in turn, support the growth and development of the program in other areas.
But accountability is the most crucial reason practitioners should make evaluation a regular habit. Because we work with limited resources and funds, and because we are accountable not only for stewarding those resources but also for developing a new generation of highly trained professionals, program evaluation gives us the tools to stay accountable to our institutions and to the graduate students and postdocs working at them.
According to the National Association of Student Personnel Administrators, program administrators can gauge whether and how to evaluate by asking the following questions:
- Can the results of the evaluation influence decisions about the program?
- Can the evaluation be done in time to be useful?
- Is the program significant enough to merit evaluation?
- Is program performance viewed as problematic?
- Where is the program in its development?
The answers to these questions can set the stage for choosing which programs to evaluate and selecting the proper methodology. Program administrators must carefully consider what data they need to collect, which analysis methods they will use and how the results will be used to improve the program.
Administrators also need to choose the direction of their assessment: inward or outward. When using evaluation to improve program design and implementation – i.e., looking inward at how the program works – it is valuable to assess and adapt your activities periodically so they remain as effective as possible. Identifying areas for improvement will ultimately help you attain your goals more efficiently. When using evaluation to demonstrate program impact – i.e., looking outward at what the program does – the information you collect allows you to better communicate your program’s impact to others, increase staff morale, and attract and retain support from current and potential funders.
Evaluations fall into two broad categories: formative and summative. Formative evaluations are conducted during program development and early implementation and are helpful if you want direction on achieving your goals or improving your program. Summative evaluations are completed once the program is well established and tell you to what extent it is achieving its goals. Logic models and evaluation matrices are standard evaluation design tools. A logic model identifies the resources going into the program, its activities, its outputs and its short-, medium- and long-term outcomes. An evaluation matrix lays out the evaluation design itself and is completed after the program components have been mapped through a logic model.
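For administrators who find it easier to see these tools in concrete form, below is a minimal sketch, in Python, of a logic model and a single evaluation-matrix row. The program (a grant-writing workshop series) and every field and entry are hypothetical illustrations of the structure described above, not a prescribed template.

```python
# A minimal sketch of a logic model and one evaluation-matrix row for a
# hypothetical grant-writing workshop series. All field names and entries
# are illustrative assumptions, not a prescribed template.

logic_model = {
    "inputs": ["staff time", "workshop budget", "room bookings"],
    "activities": ["monthly grant-writing workshops", "one-on-one feedback"],
    "outputs": ["workshops delivered", "participants served", "drafts reviewed"],
    "outcomes": {
        "short_term": "participants report greater confidence in grant writing",
        "medium_term": "more participants submit external grant applications",
        "long_term": "improved grant success rates among participants",
    },
}

# The evaluation matrix is completed after the logic model: each evaluation
# question is paired with indicators, data sources, a collection method and
# a timeline.
evaluation_matrix = [
    {
        "question": "Did participants' grant-writing skills improve?",
        "indicators": ["self-rated confidence", "quality of submitted drafts"],
        "data_sources": ["participants", "workshop facilitators"],
        "method": "pre- and post-workshop surveys",
        "timeline": "at the first and final sessions",
    },
]

for row in evaluation_matrix:
    print(row["question"], "->", row["method"])
```

Laying the components out this explicitly – whether in a spreadsheet, a table or a structure like the one above – makes it easier to spot evaluation questions that have no data source, or activities that connect to no outcome.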
So what makes a good evaluation?
A carefully executed evaluation will benefit all stakeholders more than an evaluation that is thrown together hurriedly and retrospectively. Good evaluation is tailored to the program and builds on existing knowledge and resources. Below are indicators of sound evaluation.
Inclusivity
An inclusive evaluation considers a diversity of viewpoints and makes sure the results are as comprehensive and unbiased as possible. Input needs to come from all those involved in and affected by the evaluation, which could include students, administrators, academics or community members.
Objectivity and honesty
Evaluation data can reveal both program strengths and limitations. An evaluation should therefore not be a simple declaration of program success or failure; it should also show where to direct limited resources to improve things.
Replicability and rigour
A good evaluation is likely to be replicable. A higher-quality design and methodology support more accurate conclusions and give others greater confidence in the findings.
Program evaluation is an iterative process, so things can change as it unfolds: the purpose and goals identified at the beginning of the evaluation may shift through the design and data collection phases. While evaluation is not without challenges, the information obtained can help streamline and target program resources most efficiently by focusing time and money on delivering services that benefit program participants. Data on program outcomes can also help secure future funding. Being open to adjustments is extremely helpful, especially for first-time program evaluators.