Online labour markets, often referred to as crowdsourcing services, are becoming popular mechanisms for recruiting potential research participants. Crowdsourcing is the act of outsourcing tasks to a large group of people (a "crowd") through an open request via the internet in exchange for monetary remuneration. It has become popular among social scientists as a way to recruit research participants from the general public.

Researchers studying the use of crowdsourcing have found that individuals who sign up to complete tasks tend to treat the monetary remuneration they receive as a supplementary source of income. Some websites report that the vast majority of individuals say they use the site casually, as a way to spend time and be “at least semi-productive” (i.e., as a replacement for surfing Facebook™, etc.).

These guidelines have been created to assist University of Waterloo researchers in planning online studies that use a crowdsourcing service. They detail the most common ethical issues researchers may face, such as: contractual obligations required by the crowdsourcing service, recruitment and information-consent, inclusion/exclusion criteria, risks, use of deception or partial disclosure, appropriately estimating the time duration of the study, how participants may withdraw from the study, remuneration, privacy and confidentiality, and contacting participants. If you have learned new information about a crowdsourcing service that would be helpful to other researchers, please contact the Office of Research Ethics so that we can include it in these guidelines.

Crowdsourcing

The most common crowdsourcing service used by University of Waterloo researchers is Amazon Mechanical Turk, a service based in the USA. Researchers who are not citizens of the USA cannot post a study directly to Amazon Mechanical Turk, so many work through an intermediary service called Crowdflower. These two services are described in detail next.

Amazon Mechanical Turk

Amazon Mechanical Turk started in 2005 mainly as a way to “crowdsource” tasks that require human intelligence to complete (e.g., video or audio transcription). Groups, businesses, or individuals who use the service are known as “requesters”, and they post tasks to be completed as HITs (Human Intelligence Tasks). “Workers” can browse the posted HITs and complete them for monetary remuneration set by the “requester”. “Workers” can be located anywhere in the world, but about 80% of “workers” using Amazon Mechanical Turk reside in the USA or India. Remuneration for completed tasks can be redeemed by “workers” as an Amazon.com gift certificate or transferred to a bank account. University of Waterloo researchers recruit mainly USA participants.

Some University of Waterloo researchers have reported that the vast majority of Amazon Mechanical Turk participants, typically about 90%, complete surveys and tasks conscientiously and completely. However, researchers should keep in mind that online participants often have less time and attention to give to a study than in-lab participants and can be less tolerant of open-ended responses or complicated tasks (Buhrmester, Kwang, & Gosling, 2011).

Restrictions and Prohibited Uses

It is a researcher’s responsibility to become familiar with and adhere to the policies of the crowdsourcing service they will be using. Prohibited activities include collecting personally identifiable information and contacting users unsolicited. HITs that violate Amazon Mechanical Turk policies include:

  • HITs requiring disclosure of the Worker's identity or e-mail address, either directly or indirectly
  • HITs requiring registration at another website or group
  • HITs that directly or indirectly promote a site, service, or opinion
  • HITs that violate the terms and conditions of an activity or website (for instance asking Workers to vote for something)
  • HITs that have explicit or offensive content, for instance, nudity, but do not have the Adult Content Qualification
  • HITs asking Workers to solicit third parties
  • HITs that generate "referred" site visits or click-through traffic
  • HITs that ask Workers to take action to manipulate a website's behavior or results
  • HITs that violate intellectual property rights of any party
  • HITs that require Workers to download software

If at any time you are unsure whether a proposed process or procedure breaches the policies of the crowdsourcing service, you are encouraged to contact the service and verify that what you are proposing does not breach their policies or Terms of Use.

Crowdflower

Crowdflower, formerly known as Dolores Labs and located in San Francisco, California, was founded to create tools for managing internet crowdsourcing. Crowdflower works with “requesters” to take large, data-heavy projects and break them into small tasks that are then distributed to various on-demand workforces around the world. One of these workforces is Amazon Mechanical Turk. If needed, the Crowdflower system will aggregate the results and control the quality of the work for the “requester”.

Amazon Mechanical Turk, a USA-based company, requires “requesters” to provide a USA billing address and a USA credit card, Amazon Payments account, or USA bank account to post a HIT. Therefore, it is not possible for “requesters” (i.e., researchers) from outside of the USA to post a HIT directly on Amazon Mechanical Turk. Non-USA researchers, including University of Waterloo researchers, often work with Crowdflower to coordinate their requests and list HITs on Amazon Mechanical Turk. However, University of Waterloo researchers who are citizens of the USA (or have dual citizenship) can usually post studies directly to Amazon Mechanical Turk.

Data Collection Systems

Although the Crowdflower platform allows researchers to create their own surveys hosted by Crowdflower, typically University of Waterloo researchers use Crowdflower only as an intermediary to recruit participants through Amazon Mechanical Turk. Researchers tend to create the surveys themselves using SurveyMonkey™, Qualtrics™ or a University of Waterloo departmental system. Crowdflower is used mainly as a third-party platform for accessing the Amazon Mechanical Turk labour market.

Researchers may use programs whose computer servers are housed in the USA, such as Survey Monkey™ or Qualtrics™, to collect a participant’s responses to the study tasks or questions. However, statements concerning the potential access of information under the USA Patriot Act need to be included in the information-consent letter. When preparing your information-consent letter, be sure to review the samples on the ORE website for conducting questionnaire studies with implied consent (i.e., web surveys).

When linking to a different site such as Survey Monkey™ or Qualtrics™, researchers are advised to ensure the survey page opens in a new window (or tab). If participants click a hyperlink to the survey and it opens in the same window, they may be unable to navigate back to the crowdsourcing page to submit their HIT. One suggestion is to add an instruction that reads “Please open this link in a new window.”

Tracking IP addresses is an option in Survey Monkey™ and Qualtrics™. Many researchers wish to use this function to detect whether the survey or questionnaire is being completed multiple times from the same computer (essentially, by the same participant). Potential research participants should be informed if their IP address and Mechanical Turk Worker ID will be collected, even if only temporarily. Sample wording to add to the information letter is “The survey website temporarily collects your Mechanical Turk Worker ID and computer IP address to avoid duplicate responses but will not collect information that could identify you.”
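
If Worker IDs and IP addresses are collected, duplicate submissions can be flagged before analysis. Below is a minimal Python sketch, assuming responses were exported to a CSV file; the file layout and the worker_id and ip_address column names are illustrative, not part of any service’s export format.

    import csv
    from collections import Counter

    def flag_duplicates(csv_path):
        # Count how often each Worker ID and IP address appears in the export.
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))
        worker_counts = Counter(row["worker_id"] for row in rows)
        ip_counts = Counter(row["ip_address"] for row in rows)
        # Mark any row whose Worker ID or IP address appears more than once.
        for row in rows:
            row["possible_duplicate"] = (
                worker_counts[row["worker_id"]] > 1
                or ip_counts[row["ip_address"]] > 1
            )
        return rows

    # Example: keep only responses not flagged as possible duplicates.
    # clean = [r for r in flag_duplicates("responses.csv") if not r["possible_duplicate"]]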

Listing Studies

If you are using Crowdflower to post a HIT on Amazon Mechanical Turk, you first need to create an account with Crowdflower. There is no cost associated with opening an account. After creating a "job" (along with indicating the number of participants needed and the associated remuneration) you are provided a total cost for that job by Crowdflower.

Crowdflower collects a 33% fee and Amazon Mechanical Turk collects a 10% fee on top of what researchers pay to participants. For example, if a HIT pays $0.50, Amazon Mechanical Turk collects $0.05 and Crowdflower collects 33% of $0.55 (or $0.18), so the total cost per participant is $0.73. Researchers pay the total cost to Crowdflower, which manages all remuneration both to Amazon Mechanical Turk and to participants.
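
For budgeting purposes, the arithmetic above can be expressed as a small helper. This is a sketch only; the fee rates are the ones quoted above and should be verified against the services’ current pricing.

    def total_cost_per_participant(reward, amt_rate=0.10, crowdflower_rate=0.33):
        # Total cost = reward + Amazon Mechanical Turk fee + Crowdflower fee.
        amt_fee = reward * amt_rate                     # e.g. 0.50 * 0.10 = 0.05
        cf_fee = (reward + amt_fee) * crowdflower_rate  # 0.33 * 0.55  = 0.1815
        return round(reward + amt_fee + cf_fee, 2)

    print(total_cost_per_participant(0.50))  # 0.73, matching the example above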

If you accept the cost of the job, it is subsequently listed as a HIT on Amazon Mechanical Turk (the distribution channel whose checkbox is selected by default). The job listing uses a few simple words to identify the study along with the stated remuneration. Crowdflower automatically inserts the first line of the “job” description as the listing name for the HIT, so be sure this line adequately and appropriately describes your study.

The “time allotted” section refers to the amount of time a “worker” has, from the moment they click “accept HIT”, to complete and submit the HIT. Sixty minutes is automatically chosen by Crowdflower when the HIT is listed. The HIT expiration date is also automatically set by Crowdflower, to one week from the date the HIT is listed on Amazon Mechanical Turk. The HITs available is the number of participants that you indicate when you list the job. See the example below:

Research study about psychology practices in America

View a HIT in this group

Requester: Dolores Labs

HIT Expiration Date:  April 30, 2011 (4 days 10 hours)

Reward: $2.50

Time Allotted: 60 minutes

HITS Available: 1000
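
For researchers who are able to post directly to Amazon Mechanical Turk, the listing fields in this example correspond to parameters of the Amazon Mechanical Turk Requester API. Below is a minimal sketch using the boto3 Python library; the title, survey URL, and credentials are placeholders, and Crowdflower users would set the equivalent fields through the Crowdflower interface instead.

    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # ExternalQuestion XML: the HIT displays an external survey page in a frame.
    # Use an https URL so responses are encrypted in transit.
    external_question = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://survey.example.edu/study</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>"""

    response = mturk.create_hit(
        Title="Research study about psychology practices in America",
        Description="A 60-minute University of Waterloo research study.",
        Keywords="survey, research, psychology",
        Reward="2.50",                        # Reward: $2.50 (a string, in USD)
        MaxAssignments=1000,                  # HITs available: 1000
        AssignmentDurationInSeconds=60 * 60,  # Time allotted: 60 minutes
        LifetimeInSeconds=7 * 24 * 60 * 60,   # HIT expires one week after listing
        Question=external_question,
    )
    print(response["HIT"]["HITId"])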

Crowdflower acts as the “requester”; therefore, the “requester” listed will be Dolores Labs. However, when a potential research participant clicks “view a HIT” they should immediately be shown the study’s information-consent letter. The investigator information block at the top of this letter needs to identify the investigators by name along with their department, institution, country, and contact information (phone and email). This helps potential participants identify the HIT as a University of Waterloo, Canadian research study. If you post directly to Amazon Mechanical Turk, your name will appear as the “requester”.

Recruitment and Information-Consent Materials

The SONA system is regularly used by University of Waterloo psychology researchers to recruit undergraduate psychology students. This system requires researchers to post basic, yet descriptive, information about the proposed research for potential participants (i.e., SONA Description).

Crowdflower provides a mechanism for posting a brief description of the research on Amazon Mechanical Turk, similar to a SONA description.

  • The “Title" (top box) is the study title.
  • The "Description" (second box) is where a brief description of the study is inserted. It is important to remember that the first line in the description box is automatically inserted into the Amazon Mechanical Turk listing as the name of the HIT.
  • The third box is where you insert something similar to: "If you are interested in learning more about this study, click the link below to be taken to the study website." You would also insert the web link here.
  • The fourth box, labeled "Build form", is where you can insert text similar to: "Important! Enter the code you are given at the end of the study here to submit this HIT and receive your remuneration." Then, you would include a text box where the code can be entered (a sketch of generating and verifying such codes appears after this list).
  • Be sure to include a statement advising people that they need to keep their Mechanical Turk window open while they complete the study tasks.
  • Be sure to advise people that if they have any problems submitting the code for remuneration to contact you, the researcher, and reiterate your email address here.
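
Below is a minimal Python sketch of one way to generate and verify completion codes, assuming your survey tool can display a pre-generated code on its final page and that submitted codes are exported for checking; the function names and code format are illustrative.

    import secrets

    def generate_codes(n):
        # Generate n unique completion codes, one per expected participant.
        codes = set()
        while len(codes) < n:
            codes.add(secrets.token_hex(4).upper())
        return codes

    def redeem(submitted, issued, redeemed):
        # A code is valid if it was issued and has not already been used.
        code = submitted.strip().upper()
        if code in issued and code not in redeemed:
            redeemed.add(code)
            return True
        return False

    issued = generate_codes(1000)
    redeemed = set()
    # redeem(code_entered_in_the_HIT_text_box, issued, redeemed)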

It is preferable that University of Waterloo researchers use this mechanism to aid with recruitment and to provide the information-consent materials as the first page of the questionnaire or survey. However, in some cases it may be justified and appropriate to use the information-consent letter also as the recruitment letter/document. This means the information-consent letter would be inserted in the “Description” box. If a researcher wishes to use the information-consent letter also as the recruitment letter, this needs to be outlined in the research ethics application (Form 101) submitted to the ORE.

Use of Eligibility Criteria

Amazon Mechanical Turk has a function that allows “requesters” to make HITs available only to “workers” with certain “qualifications” (i.e., eligibility criteria). Physical location is one of those criteria. If a researcher did not place a “location” restriction on their HIT, it could be completed by people living outside of the USA. There is no feature that allows a researcher to select participants based on demographic characteristics such as age or gender.
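
For researchers posting directly to Amazon Mechanical Turk, a location restriction is expressed as a qualification requirement attached to the HIT. Below is a minimal sketch using the boto3 Python library and Amazon’s system-defined worker locale qualification; as noted, this is not available through Crowdflower.

    # System qualification "Worker_Locale": only workers whose registered
    # location is the USA can discover, preview, and accept the HIT.
    us_only = [{
        "QualificationTypeId": "00000000000000000071",  # Worker_Locale
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }]

    # Passed to the HIT at creation time:
    # mturk.create_hit(..., QualificationRequirements=us_only)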

Amazon Mechanical Turk also allows “requesters” to use the “qualifications” feature to ensure their HITs are completed by “workers” who have demonstrated their ability to give high-quality responses. This is called the “approval rating”. As a Canadian researcher you do not have access to this feature through Crowdflower. In any case, researchers may not preclude a person from taking part in a study because of an approval rating assigned to them by a third party.

What do you do if you want to include only males aged 18 to 24 in your study? Although Amazon Mechanical Turk has a feature that can notify participants if they qualify for a study, you do not have access to it if using Crowdflower. Therefore, one approach is to state in the information-consent letter that only people who fit these criteria should participate. The limitation of this approach is that people who do not fit the eligibility criteria may ignore the instruction. Alternatively, you may want to consider building a pre-screen questionnaire into the study as its first component. Those who are determined to be eligible based on their responses to a series of questions are sent directly to the study questionnaires. Those who are not eligible are notified of this along with an explanation as to why they were ineligible, thanked for their time, and provided remuneration, if applicable.
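
The pre-screen approach amounts to a simple branching rule applied to the first few answers. The following Python sketch shows the routing logic only; the criteria and field names are illustrative, and in practice this branching is usually configured inside the survey tool itself.

    def route_participant(age, gender):
        # Route a respondent based on the study's eligibility criteria.
        if gender == "male" and 18 <= age <= 24:
            return "main_questionnaire"  # eligible: continue to the study
        # Ineligible: explain why, thank them for their time, and provide
        # the completion code so remuneration is not withheld, if applicable.
        return "ineligibility_page"

    print(route_participant(21, "male"))  # main_questionnaire
    print(route_participant(30, "male"))  # ineligibility_page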

Risk Level

Because crowdsourcing is done through the internet, only studies identified as having no known or anticipated risks, or studies identified as minimal risk, may use crowdsourcing as a recruitment mechanism. Studies involving sensitive topics, false feedback, vulnerable populations, or requiring participants to share personal or health information may not use crowdsourcing to recruit potential participants.

Studies Involving Deception or Partial Disclosure

Deception studies, including those involving partial disclosure of the study purpose, may be posted on crowdsourcing sites. However, these studies must involve only mild deception as identified by the ORE Partial Disclosure and Deception Guidelines. For example, studies involving fictitious information about the researchers, false feedback, or the use of confederates may not be conducted online.

Researchers must ensure participants (i.e., “workers”) are fully debriefed about the purpose of the study and how to contact the researchers if they have questions or concerns about the deception or partial disclosure. The debriefing letter should be presented after the participant (i.e., worker) has completed the study questionnaire/tasks but before they submit their responses. This will ensure the participant (i.e., worker) sees the debriefing information before receiving their remuneration.

If you are using Amazon Mechanical Turk as your crowdsourcing service, you may not ask a participant (i.e., “worker”) in the post-debriefing consent form to provide their name and contact information so that you can follow up with them about the use of deception or partial disclosure. Doing so would violate Amazon Mechanical Turk policies. Therefore, researchers are to inform participants to contact the researcher(s) if they have questions. The researchers’ contact information (i.e., telephone and email) is to be restated in the post-debriefing consent form. Samples are available on the ORE website to assist researchers in preparing debriefing materials for online studies that involve deception or incomplete disclosure. Please note that the same policies may not apply to other crowdsourcing services; be sure to review the service’s terms of use when planning your study.

Time Duration

The duration of time stated in an information-consent letter needs to reflect the total amount of time it takes a person to complete the proposed study. When estimating how long it may take a participant to complete a study posted on a crowdsourcing service researchers need to be sure to take into account the amount of time it takes a person to read and complete: a) the information-consent document, b) the study tasks and/or questionnaires, and c) the feedback/debriefing letter and post-debriefing consent form (if the study involves deception).

Incomplete task(s) or Withdrawal Without Loss of Remuneration

Research participants must receive the stated remuneration even if they choose not to complete a specific task or question (i.e., leave it blank) or decide to withdraw from the study before finishing all tasks or questions. Although some crowdsourcing systems allow a “requester” to accept or reject completed tasks submitted by “workers”, research ethics guidelines (i.e., the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans) do not permit withholding remuneration because a participant did not complete the task to the researcher’s satisfaction. Thus, a research participant’s “work” should never be rejected.

Instructions are needed in the information-consent letter to inform participants as to what to do to receive the remuneration if they decide to withdraw from the study and stop participating. The following instructional statement is needed:

“You may decline to answer any questions that you do not wish to answer and you can withdraw your participation at any time by ceasing to answer questions, without penalty or loss of remuneration. To receive remuneration please proceed to the end of the questionnaire, obtain the unique code for this HIT, and submit it.”
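
For researchers posting directly to Amazon Mechanical Turk, the principle that a participant’s “work” should never be rejected translates into approving every submitted assignment rather than reviewing and rejecting submissions. A minimal boto3 sketch follows; the HIT ID is a placeholder.

    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # Approve every submitted assignment: participants must receive the stated
    # remuneration even if they skipped questions or withdrew early.
    submitted = mturk.list_assignments_for_hit(
        HITId="EXAMPLE_HIT_ID", AssignmentStatuses=["Submitted"]
    )
    for assignment in submitted["Assignments"]:
        mturk.approve_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="Thank you for participating.",
        )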

Recommended Remuneration

Although the remuneration listed in a HIT can be as low as $0.01, Amazon Mechanical Turk recommends “requesters” look at HITs similar to their own to determine appropriate remuneration. Some researchers state that “the rule of thumb for payment is around half-a-penny per question” (Schnoebelen & Kuperman, in press). In the spirit of fairness, University of Waterloo researchers should provide remuneration that is comparable to other studies (or HITs) of similar length and difficulty listed on Amazon Mechanical Turk.

Researchers should not provide study participants a remuneration amount that is below what other similar HITs on Amazon Mechanical Turk are providing as remuneration. If there are justified reasons for lowering the remuneration this needs to be outlined in the research ethics application form (Form 101). Researchers are responsible for monitoring over time the amount of remuneration provided for participation in other studies (or HITs) of similar length and difficulty and to adjust the remuneration accordingly when planning future studies.

It is important to keep in mind that an amount too low may attract too few participants, while an amount too high (in comparison to similar HITs) may attract participants who wish to complete HITs strictly for the remuneration. Researchers studying the use of Amazon Mechanical Turk for research participant recruitment have shown that increases in remuneration appear to speed the rate of data collection, and although the quality of “work” is not affected by the remuneration, output declines when remuneration is lowered (Berinsky, Huber, & Lenz, 2010; Mason & Watts, 2009).

Because study participants are remunerated in cash (or near cash, by way of Amazon gift certificates), this is seen as income and, like any other income, may be taxable if it exceeds guidelines set by the Internal Revenue Service (IRS) in the USA or by the country of which the participant is a citizen. Amazon Mechanical Turk and Crowdflower are responsible for meeting any of these requirements.

Use of Draws for Remuneration

Amazon Mechanical Turk allows the use of draws as long as the terms and requirements of the draw comply with Amazon Mechanical Turk’s policies. However, a “requester” cannot ask for a “worker’s” email address to provide the prize. The “requester” would need to use the “worker’s” ID assigned by Amazon Mechanical Turk and the email function to provide the prize. Thus, the prize would need to be a gift certificate that can be sent via email. As noted above, contacting a “worker” using their ID and the email function is not a feature available through Crowdflower.
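
For researchers with direct Amazon Mechanical Turk access, the email function corresponds to the NotifyWorkers operation, which delivers a message to a “worker” by ID without revealing their email address to the “requester”. Below is a minimal boto3 sketch for sending a draw winner a gift certificate code; the worker ID and code are placeholders.

    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # NotifyWorkers emails the worker through Amazon; the requester never
    # sees the worker's email address.
    mturk.notify_workers(
        Subject="Prize draw result",
        MessageText=(
            "Thank you for participating in our study. You have won the "
            "prize draw. Your gift certificate code is: EXAMPLE-CODE-1234"
        ),
        WorkerIds=["EXAMPLE_WORKER_ID"],
    )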

Pre-Testing

Amazon Mechanical Turk has a “sandbox” feature that allows “requesters” (i.e., researchers) to create a HIT but not post it until they are satisfied everything works as planned. This is a good way to “debug” your study questionnaire/tasks before making the HIT available to participants. However, please note that this feature is only available to those who can post studies directly to Amazon Mechanical Turk. It is not available through Crowdflower, although Crowdflower does have a preview feature that is somewhat similar to the sandbox.
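
For those with direct access, switching between the sandbox and the live marketplace is a matter of pointing the client at a different endpoint. A minimal boto3 sketch, using the endpoints Amazon documents for the Requester API:

    import boto3

    SANDBOX = "https://mturk-requester-sandbox.us-east-1.amazonaws.com"
    PRODUCTION = "https://mturk-requester.us-east-1.amazonaws.com"

    # Develop and "debug" the HIT against the sandbox; switch the endpoint
    # to PRODUCTION only once everything works as planned.
    mturk = boto3.client("mturk", region_name="us-east-1", endpoint_url=SANDBOX)
    print(mturk.get_account_balance()["AvailableBalance"])  # sandbox: 10000.00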

Privacy and Confidentiality

There can be privacy concerns regarding where the data is stored when using Amazon Mechanical Turk. Amazon has access to the data collected through a template “HIT”, and although Amazon states that it will not look at the data, this may still be a concern depending on the type of data being collected. Using an external “HIT”, where the data goes straight to an external server, means the data is never available to Amazon. Researchers should also consider using the https protocol to ensure the data transferred between a “worker’s” browser and the “requester’s” server running an external “HIT” is encrypted.

“Workers” using a crowdsourcing service are anonymous to “requesters”. A “worker’s” ID contains no personally identifiable information. However, if a “requester” sent a message to a “worker” using one of Amazon Mechanical Turk’s application programming interfaces (APIs) and the “worker” replied, the “worker’s” email address would be revealed. Please note that this feature is not available through the Crowdflower interface.

Contacting “Workers” for Follow-up, Involvement in Other Research, or Future Studies

Although “requesters” are allowed to use a “worker’s” ID to contact the worker directly for follow-up or about future studies to be posted on Amazon Mechanical Turk, this feature is not available through Crowdflower. It is a violation of Amazon Mechanical Turk policies for a “requester” to ask a “worker” for their email address or any other information that could be used to contact them by other means. A “requester” may not contact a “worker” to ask them to participate in research outside of the Amazon Mechanical Turk system. Moreover, researchers cannot solicit emails of third parties (i.e., other individuals known to the “worker”) about participating in research, whether directly through Amazon Mechanical Turk or indirectly on another website.

References

Berinsky, A.J., Huber, G.A., & Lenz, G.S. (2010). Using Mechanical Turk as a subject recruitment tool for experimental research. Retrieved April 1, 2011 from http://huber.research.yale.edu/materials/26_paper.pdf

Buhrmester, M., Kwang, T., & Gosling, S.D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3-5.

Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada. (2010). Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans, Ottawa, Ontario.

Mason, W., & Suri, S. (2011). Conducting behavioral research on Amazon’s Mechanical Turk. Behavior Research Methods, published online June 30, 2011.

Paolacci, G., Chandler, J., & Ipeirotis, P.G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411-419.

Schnoebelen, T., & Kuperman, V. (in press). Using Amazon Mechanical Turk for linguistic research: Fast, cheap, easy, and reliable. Retrieved April 1, 2011 from http://www.stanford.edu/~tylers/notes/empirical/Schnoebelen-Kuperman-Mechanical_Turk.pdf
