Enhancing Assessment Practices

The Enhancing Assessment Practices project is a strategic initiative of the Faculty of Mathematics to explore how to use assessments effectively to measure and improve student learning. This website collects the project's findings and best practices for assessment.

The information on this page was last updated in Winter of 2024.

Introduction

[Image: Flowchart from diagnostic to formative to summative assessment]

Assessments can generally be grouped into three categories: diagnostic, formative, and summative.

  • Diagnostic assessments give the instructor an idea of what students know going into a course. They are generally ungraded or graded for completion.
  • Formative assessments are administered throughout a term to assess students' understanding of course material and give feedback. They can be graded or ungraded.
  • Summative assessments are employed at the end of a term to evaluate students' learning and provide a grade that quantifies this learning.

The importance of context and alignment

Different types of assessments can be used for each of the categories mentioned above. Some types of assessments may work for more than one category. It all depends upon the context in which the assessment is being used - what works for one course may not work as well for another. For example, a good summative assessment in a pure math course might not work for a first-year computer science course.

The examples given on this website are taken from Faculty of Math courses, but the only way to know what works is to try it.

The Faculty of Mathematics leadership supports the exploration and use of innovative assessment techniques, so you are encouraged to try things out and see how they work for your students.

Irrespective of the context, one of the most important factors to consider when creating an assessment is its alignment with course learning goals. For students to learn effectively there must be alignment of the three items below:

  1. What the instructor wants to teach
  2. What the instructor actually teaches
  3. What the assessments assess

Even if individual components are executed perfectly, a misalignment can be very detrimental to student learning and learning satisfaction.


Assessments for different learning goals

Use the expandable tables below to find a type of assessment that would work well for your learning goals. The examples range from those that require relatively few instructor resources to implement to those that require more; class size will also affect resource requirements.

You are encouraged to contact the instructors below for more detail on their assessments.

Improve Understanding of Basic Concepts

Tests/Exams
  • Optional test retakes (Dan Wolczuk - MATH 127, PMATH 340)
In-class activities
  • In-tutorial group practice questions on recently covered concepts (Dina Dawoud - STAT 230)
  • Two-stage (individual then group) concept quizzes (Peter Balka - STAT 373)
  • Flipped classroom (Stacey Watson - CS 136, Diana Skrzydlo - STAT 334)
  • In-class polling (Diana Skrzydlo - STAT 230, many others)
Assignments
  • Proofs of upcoming material on assignments (Faisal Al-Faisal - MATH 136, 235, 237)
 

Apply Content to the Real World

Assignments/Projects
  • Probability models inventory - students provide memorable examples of course concepts (Diana Skrzydlo - STAT 334) [Sample Probability Models Inventory Question PDF]
  • Explain finance topic to non-math student in recorded video and have classmates anonymously provide a score and written feedback (Mirabelle Huynh - ACTSC 231)
  • Create and document spreadsheet, write memo (Diana Skrzydlo - ACTSC 455)
  • Group project to synthesize 3 medical research papers and create a poster (Cecilia Cotton - STAT 337)
  • Project to model a topic with Markov chain, report and presentation/video (Diana Skrzydlo - STAT 334) [Case Study Outline PDF]
In-class activities
  • Image compression demo (Faisal Al-Faisal - MATH 235)

Improve Written Communication Skills

Assignments/Projects
  • Create and document spreadsheet, write memo (Diana Skrzydlo - ACTSC 455)
  • Project to model a topic with Markov chain, report and presentation/video (Diana Skrzydlo - STAT 334) [Case Study Outline PDF]
Tests/Exams

Improve Oral Communication Skills

Oral exams
  • Students have a chance to be selected to explain solutions and related course material (Joe West - AMATH 250, 332)
  • Follow-up to midterm (Dan Wolczuk - MATH 128)
Assignments/Projects
  • Explain finance topic to non-math student in recorded video, and have classmates anonymously provide a score and written feedback (Mirabelle Huynh - ACTSC 231)
  • Project to model a topic with Markov chain, report and presentation/video (Diana Skrzydlo - STAT 334) [Case Study Outline PDF]
  • Group project to synthesize 3 medical research papers and create a poster (Cecilia Cotton - STAT 337)

Improve Student Metacognition

Self-learning
  • Multiple options for student engagement grade (aka participation points): reflections, journals, forum posts, wrap-up sessions, and self-assessment quizzes (Burcu Karabina - MATH 237, 103, 106)
Assignments
  • Specifications grading: grade based on satisfactory completion of a selection of tasks each week (Ian McKillop - CS 634)
Tests/Exams
  • Grade prediction on tests
  • Optional test retakes (Dan Wolczuk - MATH 127, PMATH 340)
In-class activities
  • Two-stage (individual then group) concept quizzes (Peter Balka - STAT 373)
  • In-class polling (Diana Skrzydlo - STAT 230, many others)

Types of assessments

The expandable sections below contain suggestions for some of the most common assessment types.

Tests/Exams

Quizzes, tests, midterms, and exams all have similar forms, though they tend to vary in length and weight. Quizzes in particular are an excellent form of formative assessment - better than assignments when it comes to information retention (Karpicke, 2012). Quizzes may also be a better predictor of future performance than assignments.

Jordan Hamilton and Dan Wolczuk in MATH 235 found that short proctored quizzes are more predictive of final exam performance than assignments – a correlation of 0.78 vs. 0.55. Joe West found similar results in MATH 237, with respective correlations of 0.76 and 0.61.

This is not to say that assignments don't have their uses - they can still be effective learning tools. Go to the assignments section for more information on increasing the effectiveness of assignments.

Listed below are some important factors to consider when creating assessments such as tests and exams.

Balancing difficulty

A commonly used rule of thumb for balancing difficulty is to follow a 60:30:10 ratio.

  • 60% of marks should come from basic computations that should be easy for anyone who followed the course material
  • 30% of marks should come from slightly more challenging problems - problems that take some effort even for those who are familiar with course material
  • 10% of marks should come from challenging problems that require the student to extend their thinking beyond the confines of the course material

Another way of looking at this is through the lens of Bloom's Taxonomy of Learning. Approximately 60% of questions should come from remembering, 30% should come from understanding and applying, and the final 10% should come from analyzing, evaluating, and creating.
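As a quick illustration of the ratio in practice, this hypothetical helper splits a test's total marks across the three tiers:

```python
# A minimal sketch of allocating a test's marks by the 60:30:10 ratio.
# The rounding strategy is a design choice; the remainder goes to the
# challenging tier so the three parts always sum to the total.
def split_marks(total):
    """Return (basic, moderate, challenging) mark allocations."""
    basic = round(total * 0.60)             # routine computations
    moderate = round(total * 0.30)          # problems needing some effort
    challenging = total - basic - moderate  # stretch questions
    return basic, moderate, challenging

print(split_marks(50))  # → (30, 15, 5)
```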

[Image: Bloom's Taxonomy - top level split in three]

Source: Critical Assessment of English Examination Paper of B.Com Degree Program with Respect to Bloom’s Taxonomy

Open vs. closed book

A middle ground between open and closed book tests can be reached if students are allowed to bring limited reference materials (ex. handwritten reference sheet) to an otherwise closed book test. This reduces student anxiety, but still ensures that students prepare adequately (Block, 2012). The reference sheet can be limited to a certain size (ex. a notecard) or certain content (ex. formulas only, no explanations or descriptions).

Closed book

+ Benefits from the testing effect

+ Students spend an adequate amount of time studying

+ Students don't waste time searching for information

Limited reference material

+ Reduces student stress

+ Removes unnecessary memorization

+ Students spend an adequate amount of time studying

Open book

+ Reduces student stress

+ Removes unnecessary memorization

It is important to re-emphasize the importance of context when it comes to reference materials on tests. For some courses, memorizing formulas is an unnecessary cognitive load, but for others, memorization may be important for student learning.

Student-written questions

There are multiple ways in which this approach can be implemented. It can be as simple as replacing a question on an existing exam with one that asks students to write and solve a problem of their own, or as complicated as a whole assignment in which students create a mock exam with full solutions.

Asking students to create their own question(s) requires higher-order thinking that might not be tapped into otherwise.

Creating a mock exam gives students excellent preparation for a real exam and increases their sense of agency (Kalajdzievska, 2014).

Test Question Templates (TQTs)

Creating a good question without any guidance can be very challenging for students. Test question templates give students the basic outline of a question that could appear on a test without giving away the actual question. They can be given to students as a study aid, allowing students to create their own practice questions.

A good test question template has many possible question variations, enough different answers that they cannot all be memorized, and questions that require appropriate levels of reasoning to answer. Students self-reported a deeper understanding of course material after TQTs were implemented in a class (Crowther et al., 2020).
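To make the idea concrete, here is a toy template in Python - the question itself is invented, not taken from Crowther et al. (2020). Randomizing the parameters yields many variants with different answers, so memorization doesn't help:

```python
# A toy Test Question Template: the outline is fixed, but the numbers
# vary per variant, so answers can't simply be memorized.
import random

def expected_value_question(seed=None):
    rng = random.Random(seed)
    faces = rng.choice([4, 6, 8, 12, 20])  # die size varies per variant
    payout = rng.randint(2, 10)            # payout multiplier varies too
    question = (f"You roll a fair {faces}-sided die and win ${payout} "
                f"times the number shown. What is your expected winning?")
    answer = payout * (faces + 1) / 2      # E[roll] = (n + 1) / 2 for a fair n-sided die
    return question, answer

q, a = expected_value_question(seed=1)
print(q)
```

Seeding the generator makes a variant reproducible, so an instructor can hand out specific practice versions while students generate unlimited fresh ones.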

Grade prediction questions

Similar to reflective questions, asking students what grade they expect to receive on a written assignment makes them think more deeply about their performance. It's important for students to be able to judge their own abilities accurately and understand whether or not they have adequate command of the material.

Assignments

Assignments can be valuable learning tools when used correctly. They work particularly well as formative assessments that cover more challenging material than could be assessed on quizzes. Listed below are a few ways in which assignments can be enhanced to increase their value.

Teaching on assignments

Assignments can be used to introduce new material to students. Instructors can include 'teaser' questions that offer students a preview of the material to come. For example, one could create a true/false question where the correct answer is justified by information found in future material.

A more complex alternative is to create investigation-esque assignments that walk a student through a process with an appropriate (not excessive) amount of help. This is more situational - not all courses would benefit from this kind of assignment.

Using technology

Assignments are generally intended to be completed at home. Students will want to use all the resources they have available, so involving technology in some capacity makes the most of the situation. It is important for instructors to create assignments that factor in technologies such as chat bots, online calculators, and solution repositories. For more information on how to make assignments less vulnerable to these technologies, see this section on embracing technology.

Pair/group assignments

Collaboration on written assignments is inevitable, so there's no reason not to take advantage of it. Encouraging collaboration on assignments can improve students' oral communication skills and ability to collaborate. Specifically requiring that students work in pairs or even groups also reduces the amount of marking that needs to be done.

Reflective questions

Reflective questions can be included in written assessments of any kind to bring about student metacognition. They are well-suited for assignments where students have ample time to consider their answers. Asking students about what they learned, how they learned it, and why it's important that they learned it gives students the opportunity to self-monitor and think deeply about their learning processes.

Real-world applications

Using scenarios that can occur or have occurred in real life can help students connect to the material more deeply and increase the personal significance of the question. Real-world applications are generally best administered through assignments or projects where students have the time to explore the topic in depth.

Projects

Projects can provide an excellent opportunity for students to learn. However, for a project to work well, it should generally revolve around a central problem that satisfies the following criteria:

  • Allows students to be creative
  • Asks questions that students want to answer
  • Uses real-world scenarios/techniques
  • Produces a solution that students can verify independently

(Lewis and Powell, 2016)

In addition to these four characteristics, there are some practical considerations that need to be made when designing the assessment itself.

Open-endedness

Open-ended problems give students lots of flexibility, but they can make grading more challenging. Students can also become overwhelmed by a lack of guidelines. Including a few exemplars can help give students an inkling of what their project might look like.

Scaffolding

Scaffolding breaks up the project into discrete components - often components that involve utilizing different skills. Having "checkpoints" makes completing the project more convenient for students because they only need to focus on a small portion at a time. It also gives the instructor an opportunity to ensure that students haven't fallen behind.

Group work

Group work is very common for projects. It allows students to reduce the individual workload required to produce a complete product and also reduces the amount of marking for instructors. It also gives students practice working in teams, teaching collaborative and communication skills that aren't always emphasized through other types of assessments. However, group work also comes with its own set of challenges. Determining individual contributions within a group, for example, can be difficult.

[Image: People sitting around a table brainstorming]

Making Group Contracts. Centre for Teaching Excellence, University of Waterloo.

Peer evaluations

One of the issues that arises with group work is assigning grades to individuals within a group. It is rare that the final grade achieved by a group is representative of its constituent individuals' contributions.

The best way to judge an individual's performance within a group, short of embedding an instructor within the group, is to use peer evaluations. Having a part of a student's grade on a project come from their peers adds some accountability, ensuring that group members are rewarded as fairly as possible for their effort.
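There are many ways to turn peer ratings into individual grades; the sketch below shows one simple, hypothetical scheme that scales the group grade by each member's average peer rating relative to the group mean, capped at 100:

```python
# One hypothetical peer-adjustment scheme (an illustration, not a
# prescribed method): scale the group grade by each member's average
# peer rating relative to the group's mean rating, capped at 100.
def adjusted_grades(group_grade, peer_ratings):
    """peer_ratings maps each member to the list of ratings they received."""
    averages = {m: sum(r) / len(r) for m, r in peer_ratings.items()}
    group_mean = sum(averages.values()) / len(averages)
    return {m: round(min(100, group_grade * avg / group_mean), 1)
            for m, avg in averages.items()}

ratings = {"A": [5, 5, 4], "B": [3, 4, 3], "C": [5, 4, 5]}
grades = adjusted_grades(80, ratings)
print(grades)  # members rated above the mean land above 80, and vice versa
```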

One technology that can be used for peer review of projects is PEAR, which is particularly valuable for streamlining the group assessment process. For more information, see the EdTech Hub page on PEAR.

Group contracts

Group contracts can be used to replace peer evaluations by ensuring that all students contribute equally and earn the final group grade. Group contracts can contain information including (but not limited to):

  • Expectations for group meetings
  • Assignment of tasks/responsibilities and due dates
  • Methods for dealing with conflicts/unmet deadlines

It is important to have all group members sign the contract before the project commences to agree to its terms. At the conclusion of the project, all students sign again to confirm that every other member fulfilled their agreed-upon responsibilities.

For more information about group contracts as well as some templates, visit the Centre for Teaching Excellence webpage on group contracts.

It may also be useful to point students toward the Teamwork Clinic modules.

Group formation

Allowing students to choose their own groups can leave large portions of a section working in less-than-ideal situations, so it is often worth forming groups deliberately. Ideally, all groups should have roughly the same grade average to prevent large skill disparities. Also, no group should have a lone international or female student, to prevent such students from feeling isolated.
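One simple way to keep group averages close is a "snake draft": rank students by grade and deal them out in alternating order. The sketch below illustrates the idea; the isolation constraint would need a further check or swap step, omitted here:

```python
# A sketch of "snake draft" group formation: rank students by grade and
# deal them out in alternating order so group averages stay close. The
# isolation constraint (no lone international or female student) would
# need a separate check or swap step, omitted here.
def snake_groups(students, n_groups):
    """students: list of (name, grade) pairs. Returns n_groups lists."""
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    groups = [[] for _ in range(n_groups)]
    for i, student in enumerate(ranked):
        rnd, pos = divmod(i, n_groups)
        idx = pos if rnd % 2 == 0 else n_groups - 1 - pos  # reverse every other pass
        groups[idx].append(student)
    return groups

roster = [("S1", 90), ("S2", 85), ("S3", 80), ("S4", 75),
          ("S5", 70), ("S6", 65), ("S7", 60), ("S8", 55)]
groups = snake_groups(roster, 2)  # both groups average 72.5 here
```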

Presenting results

Written reports

Written reports can improve students' written communication skills considerably, and they require less time to grade than other project deliverables. However, students may need some guidance on what to include in the report.

Rubrics and exemplars can give students an idea of the level at which they need to write and what needs to be included in their reports. For the more technical aspects of writing, pointing students toward the Writing and Communication Centre can be helpful.

Scaffolding can be implemented in the form of rough drafts. Offering an early-bird deadline where students submit a draft and receive feedback is not too much more work than just marking a final draft. Final submissions should contain fewer mistakes and the content should be familiar to the instructional staff doing the grading. Alternatively, peer feedback can be organized in a scalable manner by using group discussion boards. Students can post their drafts and their peers can review them for a small completion grade.

Oral presentations

Oral presentations require students to speak fluently about the subject matter, which requires intimate knowledge of it. Just like any other project deliverable, rubrics can help students understand what they are being assessed on.

One of the flaws with oral presentations is the amount of class time that they take up. To avoid this, students can be given the choice to submit pre-recorded video presentations. While the learning experience might not be exactly the same, very similar skills are emphasized.

A step further is peer evaluations on a platform such as Bongo, where students can upload presentations and review their peers' presentations. For more information, see the EdTech Hub page. Alternatively, creating a "symposium" where some students present and others walk around achieves a similar effect in-person.

Another issue is that live oral presentations can be difficult to grade: not everything will necessarily be captured by the person marking. One way to work around this is to invite "guest judges" in the form of other instructors. Not only does this add additional sets of eyes, it might also motivate students to put in more effort than they would presenting to an instructor they are more familiar with.

Other options

There are many different ways in which students can express their learning. Listed below are a few examples, though the list is far from comprehensive:

  • Posters
  • Infographics
  • Webpages

In-Class Activities

Not everyone would classify every in-class activity as a true assessment. However, they can still aid in student learning. For example, think-pair-share is an easy-to-implement framework that can increase student engagement. It can easily be worked into lectures and tutorials alike. Asking students to think on their own then share their answer with a peer is more likely to have a lasting impact than simply asking students to think about an answer. The process of sharing one's answer adds some small stakes, which can make the event more memorable.

Included below are several other types of in-class activities and how they can be implemented.

Clicker questions/in-class polls

Clickers give students anonymity in their responses, making it less intimidating for students to participate than if they had to raise their hand. From an instructor perspective, any kind of in-class polling offers immediate feedback regarding student understanding. For some interesting data from a UW math course see Keeping the Learner Interested in Class: Engaging Students with Clickers (PDF).

The University of Waterloo supports iClicker; for more information, see the EdTech Hub page.

An alternative technology is MathMatize, a polling platform designed specifically for math. It can also be used for self-learning.

A no-technology alternative is to have students hold up their fingers close to their chest to display a multiple choice answer to just the teacher.

Best practices

  • Clicker questions should be worth a small portion of a student's final grade (approximately 5% is a good place to start).
  • The questions themselves should have a focus on concepts that students have been known to struggle with.
  • Instructors should spend additional time on any question that a significant number of students (more than one third of the class) get wrong, potentially asking students to try the question again.
  • The use of clickers should be consistent across sections in terms of the number of questions asked per lecture.
  • Note that including a visual with clicker questions does not improve student performance compared to the same question without a visual (Gray et al., 2012).

Tutorial activities/labs/practicums

Labs can take many forms, but they generally involve in-class activities that explore real-world applications of course content. Labs generally focus on higher-order thinking skills such as understanding, applying, and analyzing, which makes them an excellent fit for a flipped classroom or for tutorial activities.

Flipped classroom

The theory behind a flipped classroom is that students attempt to understand the basic content outside of class using materials provided by the instructor (recorded lectures, course notes, interactive modules, etc.), freeing in-class time for real-time formative assessment and more active learning strategies.

There is potential for students in flipped classrooms to outperform students in traditional classrooms (Nielsen et al., 2018). Furthermore, it has been found that extensively implementing student-centred teaching practices such as active learning can help students gain more content knowledge, even in large class sizes (Connell et al., 2016).

[Image: A flow chart with arrows. In order, "Introduce task", "Out of class task", "Assess learning", and "In-class activity"]

Course design: planning a flipped class. Centre for Teaching Excellence, University of Waterloo.

There are a few challenges that come with flipped classrooms. There is a large initial effort required to produce the materials for students to use outside of class. Also, it is highly likely that some students will not use these materials and arrive to class unprepared. The Centre for Teaching Excellence goes into great depth on this page about course design for a flipped classroom. Additionally, Diana Skrzydlo has delivered a seminar on the topic on YouTube.

Self-Learning

Diagnostic questions

Diagnostic questions can benefit both students and instructors by revealing how well students in a class understand the material. They can be administered in-class through polling or asynchronously through an online platform. Piazza and MathMatize are two good options for online diagnostic questions.

Retesting

Correcting mistakes

Giving students a chance to correct and explain their mistakes and resubmit their work improves student performance. This also has the effect of reducing test anxiety without decreasing the level at which students prepare (Velegol and Jackson, 2015).

This can also be implemented as a sort of two-stage testing, where students correct their work in small groups with aid from instructional personnel as needed. This can also be graded on a completion basis, reducing the amount of marking required. The assumption can be made that any students who cannot figure out where they made mistakes can learn from their peers or instructors.

Optional test retakes

Optional test retakes give students a chance to improve upon their mistakes. One way to implement this technique is to return just a grade without feedback to students. Students then have to take it upon themselves to determine where they went wrong. It can help engage students in the learning process, but it requires very fast turnaround times on tests, which isn't always feasible for larger class sizes.

Dan Wolczuk has used optional test retakes in multiple courses:

  • In MATH 127, 578 of 702 students (82%) elected to retake the test, and 501 of them (87%) improved. 90% of students said their test anxiety was reduced knowing that an optional rewrite would be offered.
  • PMATH 340 had two tests: 58 of 66 students rewrote the first and 59 of 65 rewrote the second. On the first test, 53 students improved, by an average of 38 percentage points; on the second, every student improved, by an average of 32 percentage points. 98% of students said their test anxiety was reduced knowing that an optional rewrite would be offered.

Remedial retesting

Retesting can also be reserved specifically for students who perform poorly. Students who score below a certain threshold on a quiz/test are required to attend an additional tutorial and then retake it. This was one of several factors that resulted in improved student performance in a study by Nelson et al. (2021). This approach requires a course to be restructured to some degree, with specific tutorial sections designated for students to attend after quizzes/tests.

Mastery grading

Mastery grading involves mastering specific skills instead of achieving partial understanding of many topics. Numerical grades are generally not assigned; levels of proficiency define a student's success in understanding the material. Students are given multiple chances to demonstrate their mastery and are not penalized for any failed attempts.

While mastery grading requires a considerable amount of restructuring and effort, student attitudes in courses that use mastery grading are generally more positive than in traditional counterparts (Posner, 2011).

For more information on mastery grading, see this webinar from the Mathematical Association of America on YouTube.

Oral Exams

Oral exams give a very accurate picture of a student's understanding. They also require students to communicate at a high level - one that is difficult to replicate through written assessments. However, oral exams are difficult to implement on a medium-to-large scale due to their time- and resource-consuming nature.

Reducing workload for instructional personnel

As mentioned previously, one of the biggest flaws with oral exams is the amount of strain they put on instructors and teaching assistants. Oral exams as summative assessments take a prohibitive amount of time.

Teaching assistants

Having teaching assistants perform oral exams can greatly reduce the workload for the instructor. However, it is important to ensure that the TAs are on the same page as the instructor so grading is consistent and fair. This can be done by implementing a short training session with any instructional personnel involved in the oral exam process, but even this will not necessarily be enough to achieve perfectly fair grading.

Optional oral exams

Giving students the option to complete an oral exam reduces the overall number of oral exams that need to take place and be marked. They can be used as a way for struggling students to improve their grades, similar to follow-up oral exams. Alternatively, oral exams can be offered as the only route to a very high grade in a course - for example, requiring that students complete an oral exam if they wish to achieve a final grade of 80 or higher.

Group oral exams

Another option is to conduct lower-stakes oral exams in groups. This can give the instructor a feel for how well students understand the material. Student opinions of such exams are mixed - some students find that it decreases testing anxiety, while others resent the performative nature of the interactions that take place (Goodman, 2020).

Online oral exams

This option reduces the time between oral exams and also allows for easy recording if any exams need to be revisited.

Follow-up to written assessments

One of the challenges with oral exams is covering a sufficient portion of course material in a limited amount of time. This can be worked around by using oral exams as a follow-up to written assessments. Instructors can use these exams as a way to ensure that students understand the content by requiring students to explain their correct answers to keep those marks. These exams can also be an opportunity for students to improve their grade by explaining and correcting their mistakes.


Other factors to consider

Creating Robust Assessments

It is important to consider how an assessment holds up in a variety of situations and adverse conditions. Emerging technologies, health crises, and student needs can all impact the effectiveness of an assessment.

Working with student absences

All students, even those most engaged with a course, will likely have trouble completing an assessment on time at some point during an academic term. This could be due to illness (viral or otherwise) or simply not having enough time for a full course load.

The University has implemented several policies that allow students to take short-term, 48-hour absences or declare a pandemic-related absence if they have an influenza-like illness. These policies are very helpful for students, but can pose something of a challenge to instructors. It is important to have a plan in place for student absences. Listed below are two ways to prepare for them.

Flexible grading

In addition to the University's policies, flexible grading policies can be used to avoid complications surrounding student absences. Enabling students to drop an assessment (or more than one low-weighted assessment) can help prevent undue stress without sacrificing student learning. Alternatively, slip/grace days can be offered. Multiple grading schemes that shift weights of certain assessments can also be offered.
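A multiple-scheme policy is easy to automate. The weights below are invented for illustration; the idea is simply to evaluate every scheme and keep whichever favours the student:

```python
# A sketch of a "best of several schemes" grading policy. The weights
# are invented for illustration; the idea is to evaluate every scheme
# and keep whichever favours the student.
SCHEMES = {
    "standard":    {"assignments": 0.20, "midterm": 0.30, "final": 0.50},
    "final-heavy": {"assignments": 0.20, "midterm": 0.10, "final": 0.70},
}

def best_grade(marks):
    """marks maps each component name to a percentage score."""
    totals = {name: sum(marks[c] * w for c, w in weights.items())
              for name, weights in SCHEMES.items()}
    best = max(totals, key=totals.get)
    return totals[best], best

grade, scheme = best_grade({"assignments": 85, "midterm": 50, "final": 75})
print(f"{grade:.1f} under the {scheme} scheme")
```

Here a student with a weak midterm but a stronger final automatically benefits from the final-heavy scheme.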

Lower weights

Creating more assessments with lower weights can reduce the penalty that students might suffer from missing one assessment throughout the course of a term.

Embracing technology

Online calculators and AI bots can pose challenges in maintaining academic integrity and ensuring students are actually learning material. However, these new technologies also provide an opportunity for creating new assessments.

Chat Bots

The emergence of AI chat bots such as ChatGPT might be concerning for some, but there are ways to leverage this powerful technology on assignments. It might be worth inputting an existing assignment question into the bot to see what it produces. If the answer is incorrect, asking students to explain why could be a question on its own. If it's correct, asking students to critique the bot's solution could also be included on an assignment.

More information is available in this FAQ page from the Associate Vice-President, Academic.

Online calculators

Programs such as Wolfram Alpha and Symbolab that can complete complex calculations can be leveraged by instructors to produce interesting problems. For example, if students are going to use an online calculator to solve an integral, avoid making the calculation itself the focal point of the question; have students focus on setting up the integral instead. For some applications, encouraging use of an online calculator to complete a numerical computation could save time and keep students focused on the main learning goal.

Online solution repositories

It can be safely assumed that any solutions posted by an instructor could end up in a repository on a site like Chegg. This situation is unavoidable, so posting old versions of tests and assignments on an LMS can help level the playing field by allowing all students free access to old materials. Students can also be encouraged to use old tests/exams to prepare by simulating an upcoming assessment.

Improving assessment accessibility

There will be some students in just about every class who need some form of accommodation. These accommodations can be time-consuming to implement on an individual level, so it may be worth simply implementing them course-wide. For example, tests administered via Crowdmark might benefit from additional time for students to upload solutions.

The theory of universal design for learning (UDL) deals with accommodations such as these. Listed below are 9 easy-to-implement UDL strategies:

  1. Group work/collaboration options

  2. Flexible grading

  3. Flexible deadlines

  4. Extra resources on tests

  5. Extra time on tests

  6. Self-assessment/diagnostic questions

  7. Clarity of expectations

  8. Choice of topic for projects

  9. Choice of deliverable medium for projects

For a general introduction to UDL in post-secondary education, consider the literature reviews conducted by Schreffler et al. (2019) or Seok et al. (2018).

Assessment Feedback

Students want timely, detailed feedback. Unfortunately, it is virtually impossible to provide both, especially in a large class. A compromise is to offer detailed feedback on a subset of questions, though this might not be well received by students.

It is important that students are accountable for their learning - feedback is useless if students don't use it to improve their understanding. The onus is partially on the instructor: giving students the opportunity to act on feedback is necessary to justify providing it in the first place. This can take the form of retesting (Boyle et al., 2020), although retesting can also replace feedback entirely.

Ultimately, learning takes effort, and providing excessive feedback can reduce the effort students put into a course; the process of finding one's own mistakes can be very valuable for learning. That said, listed below are some ways that feedback can be implemented.

Scalability of feedback

Providing quality feedback becomes harder as class sizes grow. Several approaches can help it scale.

Rubrics

Being able to "drag-and-drop" common pieces of feedback can reduce the time it takes to grade a large number of written assessments. Rubrics also ensure grading remains consistent across multiple graders.

On the student side of things, rubrics offer a framework around which an assignment or report can be written. A rubric can be the catalyst for improved learning outcomes (Kruse and Drews, 2013).

Best practices

  • Choose criteria that always appear in high-quality work
  • For each criterion, describe in detail what satisfies each level of achievement (e.g., what makes work satisfactory, and what distinguishes satisfactory work from outstanding work?)
  • Relate each level of achievement to a grading scheme of some kind (e.g., satisfactory might correspond to 70%)
  • Always include additional space for comments that don't fit within the framework of the rubric

For more information on rubrics, visit the Centre for Teaching Excellence webpage.

Crowdmark

Electronic submissions not only reduce paper waste, but also allow for faster grading of written assessments. Crowdmark can be a very helpful platform for the grading of written assignments and take-home tests/exams. For more information, see the EdTech Hub page on Crowdmark.

Peer grading

For lower-stakes assessments, peer grading can provide some relief for instructional staff while also offering an additional learning opportunity for students. Grading another student's work requires students to think critically and analyze questions that they had previously completed from a different perspective.

An option for peer grading of assessments like assignments is Kritik. Instructors can create two versions of each question on an assignment, then have each student in the class mark several other students' responses to a different version of each question.
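The distribution step can be illustrated generically. The sketch below is not Kritik's actual algorithm (the platform handles assignment internally); it simply shows one way to give each student several peers' submissions, with no self-review and an even reviewing load, using cyclic shifts of a shuffled roster.

```python
import random

def assign_peer_reviews(students, k=3, seed=0):
    """Assign each student k other students' submissions to review.

    Generic illustration only (not any platform's real algorithm).
    Cyclic shifts of a shuffled roster guarantee no self-review and
    that every submission is reviewed exactly k times.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    roster = students[:]
    rng.shuffle(roster)
    n = len(roster)
    assert 0 < k < n, "need more students than reviews per student"
    # Student at position i reviews the students at offsets 1..k.
    return {roster[i]: [roster[(i + s) % n] for s in range(1, k + 1)]
            for i in range(n)}
```

Version balancing (each student marking a different version of each question than the one they answered) could be layered on top by partitioning the roster by version before shuffling.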

Instant feedback

This is most commonly found in computer-based testing. Students can enter an answer and receive immediate feedback as to whether or not they are correct. This is particularly useful for homework, where students have time to experiment. Some more advanced online homework systems can even offer customized feedback on incorrect answers (e.g. if a student's answer is off by a numerical factor).

There are, however, several downsides to this type of feedback. The first is that it can cause students to pursue correct answers instead of understanding. The second, and more detrimental, effect is that students can accidentally stumble across the correct answer despite making mistakes, which reinforces false beliefs. This can happen very easily with multiple choice questions, but it's also possible for questions where students write in an answer. Instructors should be very careful when designing such questions.
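The customized-feedback idea can be sketched as follows. This is a hypothetical checker, not any particular homework system's API: it compares the ratio of the submitted answer to the correct one against a few common error factors so that a targeted hint can be returned instead of a bare "incorrect".

```python
def check_answer(submitted, correct, rel_tol=1e-3):
    """Sketch of instant numeric feedback (hypothetical helper).

    Returns "correct", a targeted hint for common slips (wrong sign,
    off by a factor of 2), or a plain "incorrect".
    """
    if correct == 0:
        return "correct" if abs(submitted) <= rel_tol else "incorrect"
    ratio = submitted / correct
    if abs(ratio - 1) <= rel_tol:
        return "correct"
    # Common error factors and the hints they trigger (illustrative).
    for factor, hint in [(2, "off by a factor of 2"),
                         (0.5, "off by a factor of 1/2"),
                         (-1, "check your sign")]:
        if abs(ratio - factor) <= rel_tol:
            return f"incorrect: {hint}"
    return "incorrect"
```

Note that a checker like this illustrates the downside above as well: a loose tolerance or an unlucky guess can mark a flawed solution "correct", so tolerances and hint factors need care.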

Improving Student Buy-In for Novel Assessment Techniques

Why student buy-in matters

Students enter most courses expecting to be assessed in the standard ways - tests, exams, and assignments. Less common assessment techniques such as oral exams and projects might scare some students off. Students will generally complete any assessments that are worth grades, but having student buy-in can motivate students to take responsibility for their learning (Lewis and Powell, 2016).

One of the most important factors that influences students' perception of their learning is their perception of assessment design. Students tend to associate perceived assessment efficacy with how much they learned. Student learning satisfaction is also correlated with assessment design: students tend to feel more motivated and satisfied by assessments they perceive as effective (Chen et al., 2018).

How to improve student buy-in

Defining expectations

UW Faculty of Math students reported in a 2023 survey that worked examples/exemplars and detailed explanations of expectations made them more comfortable with novel assessments. They also highly valued low-stakes practice as a way of increasing familiarity with a new assessment type.

Pre-course survey

Sending out a quick survey before a course starts can allow an instructor to gain an understanding of what students are looking for in a course. The information collected can help an instructor understand which elements to emphasize so that students view the assessments in a positive light.

Data from past offerings/literature

For other students, data might be a more effective tool. Showing a graph indicating a correlation between performance/participation in a new assessment technique and overall performance in the course might be all it takes to get students to buy in. If possible, collecting and using data from past offerings of the course would be ideal. However, citing literature can also be valuable.

If using data, it may be worth reminding students throughout the term of the benefits of an assessment type.

Testimonials from former students

Testimonials from past students can appeal to some students that might be apprehensive about a new assessment technique. Knowing that the assessments worked for peers might be comforting to some.

First-day class discussion

Another way to improve buy-in is to have students come to the conclusion that an assessment is helpful to their learning on their own. For example, Gary A. Smith (2008) had trouble convincing students that a learner-centred classroom would be beneficial - despite increased student achievement. He solved this problem by introducing the course structure with a few questions that naturally led students to understand the value of a learner-centred classroom.

Class culture

Ultimately, every class will be different. Some classes may be more outwardly enthusiastic about new ideas, in which case it might not be necessary to do much convincing. Other classes might be quieter - avoid trying to force enthusiasm onto these classes.


Resources

Additional Resources

Centre for Teaching Excellence

The Centre for Teaching Excellence contains a large amount of useful information. The two subpages described below have a particular focus on assessment.

Teaching Tips - Assessing Students

This subset of the teaching tips page contains a wide variety of information on assessments, including pages on rubrics, peer review, feedback, and more.

Waterloo Assessment Institute

A two-day retreat intended to help instructors rework their assessments. This retreat gives instructors a chance to focus on redesigning their assessments with the help of peers and experts and without any distractions.

Educational Technology Hub

The Educational Technology Hub contains information about a variety of educational technologies that may aid in improving assessments. A few such tools are listed below:

Useful Papers

  • Hagerty and Rockaway (2012), Adapting Entry-Level Engineering Courses to Emphasize Critical Thinking: demonstrates that making small changes to the design of a course can have a positive impact on student performance without disrupting the natural flow of the course.
  • Reynders et al. (2020), Rubrics to assess critical thinking and information processing in undergraduate STEM courses: focuses on the utility of rubrics as tools to aid instructors in articulating intended learning outcomes, giving feedback, and improving consistency across multiple graders.
  • Salazar (2015), Razalas' Grouping Method and Mathematics Achievement: examines the effectiveness of Razalas' Method of Grouping in an integral calculus class. In this method, students start in groups of three for a problem-solving activity; the first group to solve the problem defends its solution, and the student who explained the solution then works individually on subsequent problems.
  • Tanner (2012), Promoting Student Metacognition: offers practical ways in which faculty can introduce metacognition to students and teach metacognitive strategies.
  • Towns (2014), Guide to developing high-quality, reliable, and valid multiple-choice assessments: offers useful advice for developing multiple-choice questions; note that it focuses on chemistry.

References

Block, R. M. (2012). A Discussion of the Effect of Open-book and Closed-book Exams on Student Achievement in an Introductory Statistics Course. PRIMUS, 22(3), 228–238. https://doi.org/10.1080/10511970.2011.565402

Boyle, B., Mitchell, R., McDonnell, A., Sharma, N., Biswas, K., & Nicholas, S. (2020). Overcoming the challenge of “fuzzy” assessment and feedback. Education and Training, 62(5), 505–519. https://doi.org/10.1108/ET-08-2019-0183

Chen, B., Bastedo, K., & Howard, W. (2018). Exploring design elements for online STEM courses: Active learning, engagement & assessment design. Online Learning Journal, 22(2), 59–76. https://doi.org/10.24059/olj.v22i2.1369

Connell, G. L., Donovan, D. A., & Chambers, T. G. (2016). Increasing the use of student-centered pedagogies from moderate to high improves student learning and attitudes about biology. CBE Life Sciences Education, 15(1). https://doi.org/10.1187/cbe.15-03-0062

Crowther, G., Wiggins, B., & Jenkins, L. (2020). Testing in the Age of Active Learning: Test Question Templates Help to Align Activities and Assessments. HAPS Educator, 24(1), 592–599. https://doi.org/10.21692/haps.2020.006

Goodman, A. L. (2020). Can Group Oral Exams and Team Assignments Help Create a Supportive Student Community in a Biochemistry Course for Nonmajors? Journal of Chemical Education, 97, 3441–3445. https://doi.org/10.1021/acs.jchemed.0c00815

Gray, K., Owens, K., Liang, X., & Steer, D. (2012). Assessing Multimedia Influences on Student Responses Using a Personal Response System. Journal of Science Education and Technology, 21(3), 392–402. https://doi.org/10.1007/s10956-011-9332-1

Kalajdzievska, D. (2014). Taking Math Students From “Blah” to “Aha”: What can we do? PRIMUS, 24(5), 375–391. https://doi.org/10.1080/10511970.2014.893937

Karpicke, J. D. (2012). Retrieval-based learning: Active retrieval promotes meaningful learning. Current Directions in Psychological Science, 21(3), 157–163. https://journals.sagepub.com/doi/10.1177/0963721412443552

Kruse, G., & Drews, D. (2013). Using Performance Tasks to Improve Quantitative Reasoning in an Introductory Mathematics Course. International Journal for the Scholarship of Teaching and Learning, 7(2). https://doi.org/10.20429/ijsotl.2013.070219

Lewis, M., & Powell, J. A. (2016). Modeling Zombie Outbreaks: A Problem-Based Approach to Improving Mathematics One Brain at a Time. PRIMUS, 26(7). https://doi.org/10.1080/10511970.2016.1162236.

Nelson, R., Marone, V., Garcia, S. A., Yuen, T. T., Bonner, E. P., & Browning, J. A. (2021). Transformative Practices in Engineering Education: The Embedded Expert Model. IEEE Transactions on Education, 64(2), 187–194. https://doi.org/10.1109/TE.2020.3026906

Nielsen, L. P., Bean, W. N., & Larsen, A. A. R. (2018). The Impact of a Flipped Classroom Model of Learning on a Large Undergraduate Statistics Class. Statistics Education Research Journal, 7(1), 121–140. http://www.stat.auckland.ac.nz/serj

Posner, M. (2011). The impact of a proficiency-based assessment and reassessment of learning outcomes system on student achievement and attitudes. Statistics Education Research Journal, 10(1), 3–14. http://www.stat.auckland.ac.nz/serj

Schreffler, J., Vasquez III, E., Chini, J., & James, W. (2019). Universal Design for Learning in postsecondary STEM education for students with disabilities: a systematic literature review. International Journal of STEM Education, 6(1), 8. https://stemeducationjournal.springeropen.com/articles/10.1186/s40594-019-0161-8

Seok, S., DaCosta, B., & Hodges, R. (2018). A Systematic Review of Empirically Based Universal Design for Learning: Implementation and Effectiveness of Universal Design in Education for Students with and without Disabilities at the Postsecondary Level. Open Journal of Social Sciences, 6, 171–189. https://www.scirp.org/journal/paperinformation.aspx?paperid=84751

Smith, G. A. (2008). First-day questions for the learner-centred classroom. The National Teaching & Learning Forum, 17(5), 1-4. https://d32ogoqmya1dw8.cloudfront.net/files/introgeo/firstday/first_day_questions_learner-ce.pdf

Velegol, S. B. (2015, June 14-17). Quiz re-takes: Which students take advantage and how does it affect their performance? [Paper presentation]. 122nd American Society for Engineering Education Annual Conference and Exposition, Seattle, WA, United States. https://peer.asee.org/quiz-re-takes-which-students-take-advantage-and-how-does-it-affect-their-performance