As part of the curriculum design and renewal process, two types of evaluation might occur. The first is an assessment of the impact of a curricular change that has been implemented. The second type, which is explored in the Program Review and Accreditation section, relates to the external assessment of an entire curriculum (i.e., through formal program review and accreditation).
Assessing curricular change
A key goal of curricular change is to align the students’ educational experiences with the program outcomes. At Waterloo, formal program reviews occur once every seven years. Between these formal review periods, other measures can be taken to evaluate the success of curricular changes and provide formative feedback to the department. In this section we share two Waterloo examples to highlight different strategies for continuous assessment of the program.
Example: Electrical and computer engineering undergraduate program
With the introduction of graduate attributes (PDF) as a component of accreditation by the Canadian Engineering Accreditation Board, engineering departments at Waterloo began making a shift toward outcomes-based assessment. To assess students’ progress toward achieving these outcomes, Dan Davison, associate chair for undergraduate studies of the electrical and computer engineering department, developed the following feedback process.
Figure 1: Block diagram of feedback process
As shown in Figure 1, the general process identifies the system input (i.e., the desired outcomes), with a feedback loop that modifies elements based on a number of variables (e.g., assignments and workload). For more information, view the detailed diagram (PDF).
The focus of this procedure is to continually monitor students’ progress toward meeting the program outcomes. This continuous feedback provides increased opportunities to modify the curriculum or make other relevant changes rather than adjusting the curriculum only in response to formal program reviews or accreditation.
This process is in its very early stages. Having created their program outcomes, the department then refined them, adding a deeper level of detail to articulate measurable requirements and to distinguish expectations for the two degrees (i.e., electrical engineering and computer engineering). Their next step was to identify both primary and secondary courses/activities that led to the development of these outcomes. They are now in the process of gathering data to measure progress toward the outcomes. We would like to thank Professor Davison for allowing us to share this evaluation plan.
Example: Waterloo professional development program
Created in 2006, the Waterloo professional development (WatPD) program enhances the professional skills of Waterloo's undergraduate students participating in co-operative education. Ongoing formative assessment has been a priority since the program’s inception. Several evaluation models were considered, and the department worked closely with the Centre for the Advancement of Co-operative Education (WatCACE) to develop its evaluation plan. Because of the nature of the program and its courses, the evaluation team selected the Kirkpatrick model, which assesses training at four levels. The four levels evaluated are:
- reaction (participants’ initial reaction to the program, e.g., whether they liked it, how they perceived its difficulty, etc.);
- learning (what participants actually learned);
- behaviour (whether participants’ behaviour changed as a result of the program); and
- results (what impact followed as a result of the courses/program) (Kirkpatrick, 1998).
To gather ongoing feedback on the effectiveness of their curriculum, WatPD uses the following data sources.
Table 1: Data sources for program evaluation (Pretti, 2009)
The program completed a self-study (PDF), which was presented to Senate in 2009. It is a good example of the range of data that can be used to provide both formative and summative assessment of a program. While the WatPD report for Senate focused on the whole program, any of the above elements could be used to assess a smaller curricular change. For example, if a suite of courses in a program changes, focus groups could be used to collect data from students in the current curriculum; the same focus group questions could then be used with students after the change is made.
We would like to thank Judene Pretti and Anne-Marie Fannon of WatPD for sharing this example.
Faculty members and curriculum specialists cannot and should not be solely responsible for evaluating curricular health. Students themselves need to be full partners in the assessment of their progress toward meeting goals and outcomes. After all, the students are the ones doing the learning. They don’t always follow the same paths to the finish line. Along the way, they take certain electives and they are often the ones who can best document how they have achieved milestones. If they are treated as partners in these curriculum evaluation efforts, they may in fact become more intentional about their learning as well.
Consider the possibilities of an ongoing portfolio for each student (e-portfolio solutions are probably most useful in this regard): as the students progress, they gather evidence from their various courses of how their work contributes to program outcomes (or pieces thereof). By the end of the degree, they can point directly to this evidence in a framework that makes sense to them and to anyone who needs to see it (accreditation teams or potential employers).
How we can help
We provide support in designing evaluation instruments and processes as well as facilitation of data collection activities, such as focus groups. We also support activities related to ethics approval and grant applications.
For more information on these services, please contact Veronica Brown, Senior Instructional Developer, Curriculum and Quality Enhancement.
- Grants – Grants are available to investigate innovative approaches to teaching at both the course and program levels. Information about both internal and external grants is provided on this page.
- Office of Research Ethics – Research involving human participants requires approval through Waterloo’s Office of Research Ethics.
Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels (2nd ed.). San Francisco: Berrett-Koehler Publishers Inc.
Pretti, J. (2009). The uPDate: The first report on the evaluation of the Professional Development Program (PDF) (WatPD).