Use of Crowdsourcing Services and Other Online Panels (Updated December 2021)

Amazon Mechanical Turk | Other Platforms | Ethics | Restrictions | Privacy and Data Storage | Remuneration | Data Quality | Incomplete Tasks | Eligibility Criteria | Risk Level | Deception | Follow-up Studies | Draws | References


Online labour markets, known as crowdsourcing services, have become popular mechanisms for recruiting potential research participants. “In the context of research, crowdsourcing is a use of online services to host opportunities for a large pool of individuals to participate in research” (TCPS2 Interpretations, 2021). Crowdsourcing services and online survey panels are now widely used by social scientists to recruit members of the public for studies.

Researchers studying the use of crowdsourcing have found that individuals who sign up to complete tasks tend to treat the monetary remuneration they receive as a supplementary source of income. A study of why people use crowdsourcing (in this case Amazon’s Mechanical Turk) found that most individuals use the site either as a form of paid leisure or as a part-time job (Moss and Litman, 2019).

These guidelines have been created to assist University of Waterloo researchers when planning online studies using a crowdsourcing service or other online survey panel. They address the most common research ethics issues researchers may face, such as:

  • contractual obligations required by the crowdsourcing service,
  • details to include in recruitment and information-consent letters,
  • inclusion/exclusion criteria,
  • risks,
  • use of deception or partial disclosure,
  • details concerning how to withdraw from the study,
  • remuneration,
  • privacy,
  • confidentiality, and
  • contacting participants for follow-up.

If you have learned new information about a crowdsourcing service or online survey panel that would be helpful to other researchers, please contact Research Ethics so that we can include this information in these guidelines.

Amazon Mechanical Turk

There are various crowdsourcing services and online panels; however, the one most commonly used by University of Waterloo researchers is a U.S.-based service called Amazon Mechanical Turk.

This service started in 2005 mainly as a way to ‘crowdsource’ tasks that require human intelligence, such as video or audio transcription. Businesses, researchers, or individuals who use the crowdsourcing service Amazon Mechanical Turk (MTurk) are known as "requesters", and they post tasks to be completed as HITs (Human Intelligence Tasks). "Workers" can browse the posted HITs and complete them for monetary remuneration set by the "requester". "Workers" can be located anywhere in the world, but over 80% of "workers" using MTurk reside in the USA or India (Ross, Zaldivar, Irani et al., 2010), with a fairly even gender split (females make up 57%; Moss & Litman, 2020). While MTurk does not specifically target the research community, many researchers have had success using the service, noting that they can obtain cost-effective, rapid data from a sample more diverse than typical college student samples (Buhrmester, Kwang & Gosling, 2011). Back to top

Other Platforms (Prolific, Qualtrics, SurveyMonkey, Leger, Nielsen, CloudResearch)

Other online crowdsourcing platforms offer services similar to MTurk. Many researchers have started using Prolific, a U.K.-based company designed for researchers and start-up companies (Peer, Brandimarte, Samat & Acquisti, 2017). The Prolific website indicates its participants are more diverse and more naïve than MTurk’s, and its services include advanced features such as video/audio calls (“Why Prolific?”, 2021).

There are a variety of other online panels used by researchers at the University of Waterloo. Qualtrics and SurveyMonkey now provide samples to accompany their survey platforms. Other online panels used by researchers include Leger (Canadian owned, originally designed for market research), Nielsen (a global measurement and data analytics company), and CloudResearch (formerly TurkPrime, an advanced platform developed by researchers in response to the limitations of MTurk). These panels offer access to large groups of diverse panelists and are typically fast and easy to manage. However, larger online panels tend to provide less flexibility in communicating with participants, less flexible remuneration, and may come at a greater financial cost to researchers (Chandler, Rosenzweig, Moss, et al., 2019). Back to top

Ethics

The use of crowdsourcing and other online panels as a participant recruitment tool in research should be guided by the core principles of the TCPS 2: Justice, Respect for Persons, and Concern for Welfare (TCPS2 Interpretations, 2021). Various publications have discussed the ethical issues associated with the use of crowdsourcing or online panels as a method of recruitment and data collection (Standing and Standing, 2017; Moss, Rosenzweig, Robinson, et al., 2020). Several issues are identified below:

  • participant non-naivety or the so-called "super-worker" problem (Moss and Litman, 2020)
  • appropriate inclusion or inappropriate exclusion of participants (i.e., mere convenience is not sufficient justification for inclusion/exclusion) (TCPS2 Interpretations, 2021)
  • low pay (Moss, Rosenzweig, Robinson, et al., 2020)
  • low attentiveness and potentially fraudulent responses (Bruhlmann, Petralito, Aeschbach, et al., 2020)

In addition to these concerns, an article by Gleibs (2017) noted that “…we should not only focus on data quality and the validity of the obtained results, but also on how workers are treated as participants and the relationship between researchers (requesters) and participants (workers).” Researchers are asked to consider fair pay when using these services, as this relates to the ethical principle of Respect for Persons outlined in the TCPS2.

Participant knowledge and contribution should also be recognized and/or rewarded (Standing and Standing, 2017). Gleibs states that the relationship between researchers and participants has shifted to an employer-contractor relationship and that “requesters hold more power than the workers in setting wages and withdrawing [rejecting] work.” While a worker can choose not to participate, the worker has little control over the scope of the tasks offered or the pay, and various studies have pointed out that these practices may generate negative reactions on the part of participants, related to feelings of being exploited and/or misled (Djelassi and Decoopman, 2016).

As with all studies, informed consent is key to ensuring that people understand the purpose of the research as well as its risks and potential benefits as fully as possible. Once individuals are aware of what is involved in a study, they can decide to agree to participate or decline. On MTurk, studies are displayed so workers can see the name of the requester, the title of the HIT (i.e., the title of the study), the reward amount (i.e., remuneration), and the time remaining to access the HIT. Once a potential participant clicks on the HIT, they are immediately directed to the study recruitment materials and the study’s information-consent letter. The letter needs to identify, as part of the investigator information block at the top of the page, the names of the investigators, department, institution, country, and contact information (phone and email). This will help potential participants identify the study as a University of Waterloo and Canadian research study.

Each platform should provide information on how studies are set up and managed. Studies must be programmed so that potential participants are directed to an information-consent letter. After reading the information-consent letter, potential participants are then prompted to agree or decline to participate before either being moved on to the study or re-directed back to the platform. When preparing information-consent letters, researchers are encouraged to review the samples on the Research Ethics website for conducting questionnaire studies with online consent (i.e., web survey).

Researchers are responsible for satisfying the Research Ethics Board that the use of a specific panel is appropriate to answer their research question(s) as well as reach the intended participant group (TCPS2 Interpretations, 2021).  Back to top  

Restrictions and Prohibited Uses

It is a researcher’s responsibility to become familiar with and adhere to the policies of the crowdsourcing service or online panel they will be using. For instance, MTurk’s Acceptable Use Policy describes the permitted and prohibited uses of its services; prohibited activities include collecting personally identifiable information and unsolicited contacting of users. Likewise, Prolific’s Researcher Terms of Service prohibits participants from claiming rewards outside of the service.

If a researcher is unsure as to whether their research plan aligns with the platform’s services and policies, they must contact the service directly to verify whether their research plan would breach any service policy or Terms of Use. Back to top

Privacy and Data Storage

Researchers should be aware of any privacy concerns related to data storage when using crowdsourcing platforms or other online panels (e.g., where stored, for how long, etc.). Researchers should be familiar with the privacy information available on the service’s website, and this information should be reflected in the participant’s information-consent letter. In general, participants should be anonymous to the researchers and only a participant ID should be recorded. Potential research participants should be informed if their IP address or system ID (e.g., worker ID/Prolific ID) is being collected, even if it is only collected temporarily. In the case of IP address, participants should be informed when it is being collected, how it is being used (e.g., to prevent duplicate submissions, verify location), and when it will be removed from the dataset and deleted.

If IP address and/or worker/participant ID is being collected, please consider adding sample wording to the information letter: "This survey website temporarily collects your worker ID and computer IP address to avoid duplicate responses but will not collect information that could identify you. Researchers will de-identify the dataset as soon as possible by replacing the worker/participant ID with an identification code (ID number)."
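Where a data export does contain a worker/participant ID or IP address, the de-identification described above can be done with a short script. The sketch below is a minimal, hypothetical example in Python using pandas; the column names ("worker_id", "ip_address") and file names are assumptions and will differ depending on the platform's export format.

```python
# Minimal de-identification sketch. Column and file names are hypothetical;
# adjust them to match your platform's data export.
import pandas as pd

df = pd.read_csv("raw_responses.csv")

# Build a one-way lookup from worker ID to a study-specific participant code.
code_map = {wid: f"P{idx:04d}" for idx, wid in enumerate(df["worker_id"].unique(), start=1)}
df["participant_code"] = df["worker_id"].map(code_map)

# Drop the direct identifiers once remuneration has been processed and
# duplicate submissions have been checked.
deidentified = df.drop(columns=["worker_id", "ip_address"])
deidentified.to_csv("deidentified_responses.csv", index=False)

# If the code map must be retained (e.g., for follow-up studies), store it
# separately and securely as described in your approved ethics application.
pd.Series(code_map, name="participant_code").to_csv("code_map.csv", index_label="worker_id")
```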

Survey platforms may also provide researchers with the option not to collect IP address.  For instance, Qualtrics allows researchers to turn on a feature called ‘Anonymize responses’ which will remove identifiable information such as IP address from the data. 

As with all online studies, researchers should include a statement concerning limitations to privacy when collecting information online in their information-consent letter. For researchers using Prolific, an additional statement should be included in the online privacy section to address the General Data Protection Regulation (GDPR), a law on data protection and privacy in the European Union (EU). See suggested language below:

"Prolific is situated in the United Kingdom and as such, your data will be temporarily stored on UK servers and subject to General Data Protection Regulation (GDPR) which serves to safeguard your privacy. If you prefer not to submit your online survey respones through the online platform, please do not participate in this study."

Researchers linking to a different survey site such as SurveyMonkey™ or Qualtrics™ are advised to ensure the survey page opens in a new window (or tab). If participants click on a hyperlink to the survey and it opens in the same window, they may be unable to navigate back to the crowdsourcing page to submit their HIT. One suggestion is to add an instruction that reads: “Please open this link in a new window.” Back to top

Recommended Remuneration

Historically, remuneration on MTurk has either been transferred to a worker’s virtual U.S. bank account or redeemed via an Amazon.com gift certificate. However, Amazon has started implementing plans for workers to withdraw earnings in their local currency (“What Canadians Should Know About Amazon Mechanical Turk”, 2020). The cost to researchers using crowdsourcing services is typically made up of two components: the amount paid to the worker/participant and the service fee.

Researchers can decide what to pay workers for each study when using MTurk but should ensure that the rate of pay aligns with other studies offered on MTurk of similar length and difficulty, and set the remuneration accordingly. If there are justified reasons for lowering the remuneration, this needs to be outlined in the research ethics application. Researchers are asked to verify that their estimated participation time is accurate and includes the time necessary to thoroughly read the information-consent materials and all other study documents (e.g., questionnaires, vignettes). Researchers should keep in mind that MTurk workers, or other non-expert participants, may take longer to complete the study than testers on their research team. Researchers are responsible for monitoring over time the amount of remuneration provided for participation in other studies (or HITs) of similar length and difficulty and for adjusting the remuneration accordingly when planning future studies.
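A simple way to sanity-check a proposed reward is to convert it to an effective hourly rate using a realistic completion time (for example, the median time from a small pilot). The figures in the sketch below are hypothetical and are not a recommended rate; they only illustrate the arithmetic.

```python
# Back-of-the-envelope hourly-rate check. All numbers are hypothetical examples.
def hourly_rate(reward_usd: float, minutes: float) -> float:
    """Effective hourly rate implied by a per-task reward and completion time."""
    return reward_usd / (minutes / 60.0)

reward = 2.50          # proposed per-task reward in USD
pilot_minutes = 18.0   # median completion time from a pilot, including consent materials

rate = hourly_rate(reward, pilot_minutes)
print(f"Effective rate: ${rate:.2f}/hour")  # ~$8.33/hour with these example numbers

target = 9.60  # an example target hourly rate (e.g., comparable to Prolific's recommendation below)
if rate < target:
    suggested = target * pilot_minutes / 60.0
    print(f"Consider raising the reward to at least ${suggested:.2f} per task")
```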

Prolific has established a fair pay policy: it recommends that participants be paid £7.50/$9.60 USD per hour, and the minimum pay allowed is £5.00/$6.50 USD per hour. In addition to typical remuneration (per task), other online panels such as Leger or Qualtrics offer a variety of incentives to their panelists, including points-based programs and draws.

Because study participants are remunerated in cash (or near-cash by way of Amazon gift certificates), this is considered income. Like any other income, remuneration may be taxable if it exceeds guidelines set by the Internal Revenue Service (IRS) in the USA or by the tax authority of the participant’s country of citizenship. Researchers do not need to include information about the remuneration being taxable in their information letter if the service deducts taxes or enters into an agreement with participants/workers that includes the worker’s obligations concerning taxes. Back to top

Data Quality

More recently, researchers using crowdsourcing services have expressed concern about the use of bots. Bots are software programs designed to imitate human behaviour and can fraudulently complete tasks that are meant for human participants (“What is a bot?”, 2021). This problem is not unique to crowdsourcing and has become a common concern for researchers recruiting participants through various forms of social media (Pozzar, Hammer, Underhill-Blazey, et al., 2020). As a result, some researchers have started to introduce attention-check questions and captchas to detect bots and to identify inattentive or fraudulent responses. While data from participants who provide low-quality responses can be removed from the dataset (e.g., multiple responses from the same IP address, inconsistent responses, shorter-than-average response times), researchers must still provide participants with their promised remuneration.
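As an illustration only, the sketch below flags responses for later review using the kinds of indicators mentioned above (duplicate IP addresses, unusually fast completion, failed attention checks). The column names and thresholds are assumptions, not recommendations, and flagged participants are still remunerated; flags affect only whether a response is retained for analysis.

```python
# Hypothetical data-quality screen: flag, never reject. Column names are assumed.
import pandas as pd

df = pd.read_csv("responses.csv")

median_seconds = df["duration_seconds"].median()
df["flag_too_fast"] = df["duration_seconds"] < 0.5 * median_seconds       # example threshold
df["flag_duplicate_ip"] = df.duplicated(subset="ip_address", keep=False)  # same IP appears more than once
df["flag_attention"] = df["attention_check"] != "expected_answer"         # failed attention check

# Responses are excluded from analysis only; all participants keep their remuneration.
df["exclude_from_analysis"] = df[["flag_too_fast", "flag_duplicate_ip", "flag_attention"]].any(axis=1)
print(df["exclude_from_analysis"].value_counts())
```

Back to top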

Incomplete Task(s) or Withdrawal Without Loss of Remuneration

Research participants must receive the stated remuneration even if they choose not to complete a specific task or question(s) (i.e., leave it blank) or decide to withdraw from the study before finishing all tasks or questions. Although some crowdsourcing systems allow researchers to accept or reject completed tasks/submissions, research ethics guidelines preclude withholding remuneration because a participant did not complete the task to the researcher’s satisfaction. The TCPS2 states: “The participant should not suffer any disadvantage or reprisal for withdrawing, nor should any payment due prior to the point of withdrawal be withheld.” Thus, a research participant’s work should never be rejected.

Instructions are needed in the information-consent letter to inform participants what to do to receive the remuneration if they decide to withdraw from the study and stop participating. The following instructional statement is suggested: "You may decline to answer any questions that you do not wish to answer, and you can withdraw your participation at any time by ceasing to answer questions, without penalty or loss of remuneration. To receive remuneration please proceed to the end of the study and click submit." Back to top

Use of Eligibility Criteria

Crowdsourcing services generally collect some information from their participants upfront to allow clients to target a specific sample or group of people. For instance, MTurk allows researchers to restrict the audience eligible to view their study through a feature called qualification requirements. This feature can be used by researchers to select participants who have demonstrated their ability to provide high-quality responses. In addition, using a ‘Number of HITs Approved’ qualification will ensure that only those with a certain number of approved HITs will see a particular study. Researchers should justify the use of any approval ratings in their ethics application if they intend to restrict their study based on approvals.

MTurk also allows researchers to select participants based on their location. If a researcher does not place a ‘location’ restriction on their HIT, it may be completed by people living outside of the USA. There is, however, no feature on MTurk that allows a researcher to select participants based on various demographic characteristics such as age or gender (AWS, Selecting Eligible Workers, 2021). If researchers want to target a specific population (e.g., females, age 18 to 24), this should be stated in the recruitment and information-consent letter so that only those people who fit these criteria participate in the study. The limitation of this approach is that people who do not fit the eligibility criteria may ignore these instructions. Alternatively, researchers could develop a pre-screen questionnaire as the first component of the study. Those who are determined to be eligible for the study based on their responses to a series of questions are sent directly to the next phase of questionnaires to complete for the study. Those who are not eligible are notified of this along with an explanation as to why they were ineligible, thanked for their time, and provided remuneration, if applicable.
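For researchers posting HITs programmatically, the qualification and location restrictions described above can be specified when the HIT is created. The sketch below is a minimal, hypothetical example using the boto3 MTurk client against the requester sandbox; the survey URL, reward, threshold, and other values are assumptions, and the system qualification IDs should be verified against the current MTurk documentation.

```python
# Hypothetical sketch: create a HIT restricted to US-based workers with at least
# 500 approved HITs, pointing to an external survey. Values are examples only.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Remove the endpoint_url to post to the live marketplace instead of the sandbox.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/my-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="University of Waterloo research study (example)",
    Description="A 15-minute questionnaire about decision making.",
    Keywords="survey, research, questionnaire",
    Reward="2.50",                      # per-assignment payment in USD (example)
    MaxAssignments=100,
    AssignmentDurationInSeconds=3600,
    LifetimeInSeconds=7 * 24 * 3600,
    Question=external_question,
    QualificationRequirements=[
        {   # System qualification: number of approved HITs (verify ID in MTurk docs)
            "QualificationTypeId": "00000000000000000040",
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [500],
        },
        {   # System qualification: worker locale restricted to the USA
            "QualificationTypeId": "00000000000000000071",
            "Comparator": "EqualTo",
            "LocaleValues": [{"Country": "US"}],
        },
    ],
)
print("HIT created:", response["HIT"]["HITId"])
```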

The Prolific platform is different in that it asks researchers not to screen participants as part of the study. Instead, the platform allows researchers to use its prescreening system, which includes hundreds of demographic criteria. A few examples include sex, gender identity, country of birth, marital status, and age. Back to top

Risk Level

Because crowdsourcing or the use of a panel involves collecting data online, only studies identified as having no known or anticipated risks or studies identified as minimal risk may use these as a recruitment mechanism. Studies involving sensitive topics, false feedback, vulnerable populations, or requiring participants to share personal or health information would be reviewed on a case-by-case basis to see if the use of a crowdsourcing or online panel as a method to recruit potential participants is appropriate. Back to top

Studies Involving Deception or Partial Disclosure

Deception studies, including those involving partial disclosure of the study purpose, may be posted on crowdsourcing sites. However, these studies should involve only mild deception as identified by the Deception and Partial Disclosure Guideline. Studies involving fictitious information about the researchers, false feedback, and use of confederates would be reviewed on a case-by-case basis to see if conducting the study online is appropriate and if adequate safeguards are in place to mitigate risks. 

Researchers must ensure participants are fully debriefed about the purpose of the study and provided with the contact information for the researchers should they have questions or concerns about the use of deception or partial disclosure. The debriefing letter should be presented after the participant has completed the study questionnaire or tasks but before they submit their responses. This will ensure the participant sees the debriefing information before receiving their remuneration.

When using crowdsourcing or online panel services, researchers may not ask a participant in the post-debriefing consent form to provide their name and contact information if they have questions about the use of deception or partial disclosure. Doing this would be in violation of most crowdsourcing policies. Instead, researchers are to inform participants that if they have questions, they should contact the researcher(s). The researcher’s contact information (i.e., telephone and email) is to be restated in the post-debriefing consent form. Samples are available to assist researchers in preparing debriefing materials when conducting online studies that involve deception or partial disclosure. Please note the same policies may not apply to all crowdsourcing services. Be sure to review the service’s terms of use when planning your study. Back to top

Contacting "Workers" for Follow-up and Longitudinal Studies

Many crowdsourcing platforms allow researchers to follow up with the same set of participants over multiple time points (e.g., longitudinal studies). MTurk has a feature called "Include Workers" which allows researchers to limit the HIT to only the workers they specify by using their Worker ID. Prolific has a similar ‘allowlist’ feature which enables researchers to contact specific participants based on their Prolific ID. Researchers are responsible for collecting and securely storing participant IDs for these types of studies. As mentioned above, researchers cannot solicit emails or contact participants through other means (i.e., outside of the crowdsourcing or online survey service).
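One common way to implement this pattern through the MTurk API is to create a custom qualification, assign it to the Worker IDs retained from the first wave, and then require that qualification on the follow-up HIT. The sketch below is a hypothetical example using boto3; the qualification name and Worker IDs are placeholders, and the requester-site "Include Workers" feature can achieve the same result without code.

```python
# Hypothetical sketch: restrict a follow-up HIT to previously recruited workers
# by assigning them a custom qualification. Worker IDs and names are placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# 1. Create a custom qualification for the study's returning participants.
qual = mturk.create_qualification_type(
    Name="Example follow-up study - wave 1 participants",
    Description="Assigned to workers who completed wave 1 of this study.",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# 2. Assign the qualification to the securely stored Worker IDs from wave 1.
wave1_worker_ids = ["A1EXAMPLEWORKER", "A2EXAMPLEWORKER"]  # placeholders
for worker_id in wave1_worker_ids:
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,
    )

# 3. When creating the follow-up HIT, require this qualification so that only
#    wave 1 participants can see and accept it.
followup_requirement = {
    "QualificationTypeId": qual_id,
    "Comparator": "Exists",
}
```

Back to top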

Use of Draws for Remuneration

MTurk allows the use of draws if the terms and requirements of the draw comply with its policies. However, a "requester" cannot ask for a "worker's" email address to provide the prize. The "requester" would need to use the "worker's" ID assigned by MTurk and award a bonus to their account through the system. To see if other crowdsourcing services or online panels permit the use of draws, check their policies and terms of use.
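Through the MTurk API, awarding a prize to a draw winner is typically done as a bonus tied to the winner's Worker ID and one of their completed assignments. The sketch below is a hypothetical boto3 example; the IDs and amount are placeholders, and researchers should confirm that the draw complies with the platform's policies before running it.

```python
# Hypothetical sketch: pay a draw prize as a bonus to the winning worker.
# Worker ID, assignment ID, and amount are placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

mturk.send_bonus(
    WorkerId="A1EXAMPLEWORKER",           # the winner's MTurk Worker ID
    AssignmentId="3EXAMPLEASSIGNMENTID",  # an assignment the worker completed for this study
    BonusAmount="25.00",                  # prize amount in USD, as a string
    Reason="Prize draw winner for the study you completed. Thank you for participating!",
)
```

Back to top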

References

AWS, Selecting eligible workers (2021). Retrieved from: Selecting eligible workers - Amazon Mechanical Turk.

Berinsky, A.J., Huber, G.A. and Lenz, G.S. (2010). Using Mechanical Turk as a Subject Recruitment Tool for Experimental Research. Retrieved from http://huber.research.yale.edu/materials/26_paper.pdf.

Bruhlmann, F., Petralito, S., Aeschbach, L., et al. (2020). The quality of data collected online: An investigation of careless responding in a crowdsourced sample. Methods in Psychology, 2, 100022.

Buhrmester, M., Kwang, T., & Gosling, S.D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3-5.

Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada. (2018). Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans--TCPS2, Ottawa, Ontario.

Chandler, J., Rosenzweig, C., Moss, A.J., et al. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51, 2022–2038. https://doi.org/10.3758/s13428-019-01273-7.

Djelassi, S. and Decoopman, I. (2016). Innovation through interactive crowdsourcing: The role of boundary objects. Recherche et Applications en Marketing (English Edition) Article Number: 650160.

Gleibs, I. (2017). Are all "research fields" equal? Rethinking practice for the use of data from crowdsourcing market places. Behavior Research Methods 49:1333–1342.

Mason, W. and Suri, S. (2011). Conducting behavioral research on Amazon’s Mechanical Turk. Behavior Research Methods, 44, 1-23.

Moss, A. and Litman, L. (2019). Understanding Turkers: How Do Gig Economy Workers Use Amazon’s Mechanical Turk?  Retrieved from: https://www.cloudresearch.com/resources/blog/trends-of-mturk-workers/.

Moss, A. and Litman, L. (2020). Conducting Online Research on Amazon Mechanical Turk and Beyond.  SAGE Publications. 

Moss, A., Rosenzweig, C., Robinson, J. et al. (2020). Is it Ethical to Use Mechanical Turk for Behavioral Research? Relevant Data from a Representative Survey of MTurk Participants and Wages. Psyarxiv.com.

Paolacci, G., Chandler, J. and Ipeirotis, P.G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411-419.

Panel on Research Ethics, TCPS2 Interpretations, Fairness and Equity, (2021).  Retrieved from https://ethics.gc.ca/eng/policy-politique_interpretations_fairness-justice.html.

Peer, E., Brandimarte, L., Samat, S. and Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, Volume 70, Pages 153-163.

Pozzar, R., Hammer, M.J., Underhill-Blazey, M., Wright, A.A., Tulsky, J.A., Hong, F., Gundersen, D.A., and Berry, D.L. (2020). Threats of bots and other bad actors to data quality following research participant recruitment through social media: Cross-sectional questionnaire. Journal of Medical Internet Research, 22(10), e23021.

Ross, J., Zaldivar, A., Irani, L., et al. (2010). Who are the crowdworkers? Shifting demographics in Mechanical Turk. CHI 2010 Extended Abstracts on Human Factors in Computing Systems.

Schnoebelen, T. and Kuperman, V. (2010). Using Amazon Mechanical Turk for linguistic research. Psihologija, 43(4), 441-464.

Standing, S. and Standing, C. (2017). The ethical use of crowdsourcing. Business Ethics: A European Review, 1–9.

What Canadians Should Know About Amazon Mechanical Turk. Clickwork Canada. (2020). Retrieved from https://clickworkcanada.com/mturk-for-canadians/.

What is a bot? (2021) Retrieved from: https://www.cloudflare.com/en-ca/learning/bots/what-is-a-bot/.

Why Prolific? (2021) Retrieved from: https://prolific.co/prolific-vs-mturk/.


Back to top