The Survey Research Centre draws upon a wealth of expertise and best practices to achieve the highest possible response rates for all survey projects.

Definition of response rate

A generally accepted definition of a response rate is:

Response rate = (number of eligible sample units with completed interviews) / (number of eligible sample units) × 100%

To define the numerator and denominator we need to define what is meant by a sample unit, what "eligibility" is, and what constitutes a completed interview. The numerator is then easy to calculate, being an observed count. 

For many surveys the denominator is not directly observable, because eligibility is never determined for some portion of the sample units. The denominator can then be estimated as the count of known eligible sample units plus the full count of units with unknown eligibility; a less conservative approach replaces that latter count with an estimate of the proportion of the unknown units that are eligible. For example, in a random survey of smokers (one per household), if it is known that 20% of Canadian households contain smokers, 20% of the units of unknown eligibility could be included in the denominator.
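As a sketch of the two denominator conventions (the counts below are hypothetical, not from any actual survey):

```python
def response_rate(completes, known_eligible, unknown, eligible_fraction=1.0):
    """Response rate as a percentage.

    eligible_fraction scales the units of unknown eligibility that are
    counted in the denominator (1.0 is the conservative choice).
    """
    denominator = known_eligible + eligible_fraction * unknown
    return 100.0 * completes / denominator

# Hypothetical dispositions: 600 completes, 900 known eligible,
# 500 units whose eligibility was never determined.
conservative = response_rate(600, 900, 500)        # 600/1400, about 42.9%
adjusted = response_rate(600, 900, 500, 0.20)      # 600/1000, exactly 60.0%
```

The adjusted calculation applies the 20% smoking-household figure to the units of unknown eligibility.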

It is important when reporting survey results to define clearly the calculations used to arrive at a response rate; anyone should be able to reproduce the calculations given the call dispositions. If response rates are to be tracked over time, it is also important that the same method of calculation be used throughout.

In the case of a recruited survey where all valid entries on the database are eligible, we could define the denominator to be the number of sample units altogether, or perhaps the number of sample units with valid contact information (traceable).  


Recruited Sample

The two most common reasons for not completing an interview are failure to contact the sample unit, and the sample unit declining to be interviewed. It is useful to distinguish between the two situations for analytical purposes. The response rate for a recruited sample can be decomposed to:

Response rate = contact rate x cooperation rate

For example, if there are 1000 eligible sample units, and we are able to contact 850 of them, and to interview 600 of them, then the response rate is 60%, while the contact rate is 850/1000 or 85%, and the cooperation rate is 600/850 or 70.6%.
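The decomposition can be checked with the counts from this example:

```python
eligible, contacted, completed = 1000, 850, 600

contact_rate = contacted / eligible          # 850/1000 = 0.85
cooperation_rate = completed / contacted     # 600/850, about 0.706
response_rate = completed / eligible         # 600/1000 = 0.60

# Response rate = contact rate x cooperation rate
# (the contacted count cancels in the product).
assert abs(response_rate - contact_rate * cooperation_rate) < 1e-12
```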

Contact Rate

Contact rate measures the proportion of all cases in which contact was made with a possible respondent.

(Completes + Partials + Refusals + Other) / (Completes + Partials + Refusals + Other + Non-contact)

Cooperation Rate

Cooperation rate measures the proportion of all completed cases out of all contacts.

Completes / (Completes + Partials + Refusals + Other)

The cooperation rate may or may not include partial completes in the numerator. The denominator of the cooperation rate should be the same as the numerator in the contact rate.
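A minimal sketch of both rates from disposition counts (hypothetical values, with partials excluded from the cooperation-rate numerator):

```python
# Hypothetical call dispositions
completes, partials, refusals, other, non_contact = 600, 40, 160, 50, 150

contacts = completes + partials + refusals + other    # 850
contact_rate = contacts / (contacts + non_contact)    # 850/1000 = 0.85
cooperation_rate = completes / contacts               # 600/850, about 0.706
```

The `contacts` total serves as both the contact-rate numerator and the cooperation-rate denominator.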

For recruited surveys, it may be useful to remove those respondents deemed untraceable before calculating contact and cooperation rates.

Untraceable Rate

The untraceable rate removes from a recruited or list-based sample all possible participants with confirmed moved, not-in-service, or wrong numbers. For example, if 19 participants in a sample of 530 were designated untraceable, the traceable rate is .96 (511/530). For calculating the contact rate, 511 (not 530) would then be the denominator.
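A sketch of this calculation:

```python
sample_size, untraceable = 530, 19

traceable = sample_size - untraceable       # 530 - 19 = 511
traceable_rate = traceable / sample_size    # about 0.96

# The contact-rate denominator then counts only the traceable units.
```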

Uses of Response Rates

  1. as measures of quality and expenditure of effort to achieve completions.
  2. as measures of the propensity of the target population or a subgroup of interest to respond. 
  3. as indicators of potential bias in the estimates. 
  4. as a device for monitoring field operations.

Notes: For 3, the data may be able to tell us something about the non-response bias. We can check whether there are characteristics on the database that predict non-response, and whether a response measuring, for example, success in smoking cessation appears to be associated with the amount of effort required to make contact.


It can be useful to compute the rates for subgroups. For example, it might be helpful to note (if it is true!) that people under 25 tend to have lower contact rates but higher cooperation rates. Similarly, the rates will tend to vary by region and by the degree of urbanization of the respondent's location.
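A sketch of subgroup rates, with hypothetical tallies chosen to illustrate the kind of under-25 pattern described above:

```python
# Hypothetical per-subgroup tallies: (contacts, eligible, completes)
groups = {
    "under 25": (300, 400, 240),
    "25 and over": (550, 600, 360),
}

rates = {}
for name, (contacts, eligible, completes) in groups.items():
    # (contact rate, cooperation rate) for each subgroup
    rates[name] = (contacts / eligible, completes / contacts)

for name, (contact_rate, cooperation_rate) in rates.items():
    print(f"{name}: contact {contact_rate:.0%}, cooperation {cooperation_rate:.0%}")
```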

Maintaining High Cooperation Rates

The decision to respond (cooperate) is influenced by many factors:

  • The importance of the topic to the respondent
  • The extent to which the respondent feels obligated to respond (having been recruited or received an incentive)
  • Health, mood, circumstances
  • Intensity of other unsolicited phone calls
  • The length of interview
  • The opening script
  • Interaction with the interviewer
  • The prestige of the surveying organization
  • The respondent’s level of comfort with the surveying organization

High cooperation rates can be maintained:

  • Listen to and record reasons for declining to respond; consider modifications of script or changes of timing that might be indicated
  • Note whether the reason is often dissatisfaction with the topic itself
  • Use pre-contact letters and incentives
  • Have interviewers maintain a positive attitude
  • Attempt soft conversion with reluctant respondents
  • If an appointment must be made at the time of the first conversation, try to leave the respondent with a good feeling
  • Keep appointments

Maintaining High Contact Rates

Contact rates are impacted by:

  • Number of accurate in‐service household numbers in sample
  • Number of call attempts made to each number in sample  
  • Length of field period
  • Quality of sample management

Contact rates can be maintained:

  • Have a quality sample with few wrong, not-in-service, or business numbers
  • Make at least 8 call attempts to each number, more if a particular sub-group of the population is being sampled
  • Leave a reasonable contact interval between call attempts to allow for circumstances to change and to prevent people who screen through answering machines or have call display from feeling harassed.
  • Select a realistic field period for conduct of survey
  • Maintain tight control over sample to ensure that all numbers in play receive the same number of call attempts


Partnership with Institutional Analysis & Planning

The Survey Research Centre has partnered with Institutional Analysis & Planning (IAP) to develop and administer several campus-wide surveys to students, alumni, faculty, and staff.

The Student Experience Survey, the first survey administered through this partnership, aims to understand students' perceptions of their academic and non-academic learning environments and their experience at the University of Waterloo. These topics are explored through questions engaging students in subject matter about their general well-being, perceived efficacy of various instructional modalities and learning supports, and opinions on various strategic initiatives.

Read more about the Student Experience Survey on the IAP website.