Crowdsourcing: A Review and Suggestions for Future Research

Abstract: As academic and practitioner studies on crowdsourcing have been building up since 2006, the subject itself has progressively gained in importance within the broad field of management. No systematic review on the topic has so far appeared in management journals, however; moreover, the field suffers from ambiguity in the topic's definition, which in turn has led to its largely unstructured evolution. The authors therefore investigate the existing body of knowledge on crowdsourcing systematically through a penetr…

Citizen science: An information quality research frontier

The rapid proliferation of online content producing and sharing technologies has resulted in an explosion of user-generated content (UGC), which now extends to scientific data. Citizen science, in which ordinary people contribute information for scientific research, epitomizes UGC. Citizen science projects are typically open to everyone, engage diverse audiences, and challenge ordinary people to produce data of the highest quality so that it is usable in science. This also makes citizen science a very exciting area to study both traditional and innovative approaches to information quality management. With this paper we position citizen science as a leading information quality research frontier. We also show how citizen science opens a unique opportunity for the information systems community to contribute to a broad range of disciplines in natural and social sciences and humanities.

Untapped potential of collective intelligence in conservation and environmental decision making

Environmental decisions are often deferred to groups of experts, committees, or panels to develop climate policy, plan protected areas, or negotiate trade‐offs for biodiversity conservation. There is, however, surprisingly little empirical research on the performance of group decision making related to the environment. We examined examples from a range of different disciplines, demonstrating the emergence of collective intelligence (CI) in the elicitation of quantitative estimates, crowdsourcing applications, and small‐group problem solving. We explored the extent to which similar tools are used in environmental decision making. This revealed important gaps (e.g., a lack of integration of fundamental research in decision‐making practice, absence of systematic evaluation frameworks) that obstruct mainstreaming of CI. By making judicious use of interdisciplinary learning opportunities, CI can be harnessed effectively to improve decision making in conservation and environmental management. To elicit reliable quantitative estimates, an understanding of cognitive psychology may need to be incorporated, and artificial intelligence tools may be needed to optimize crowdsourcing. The business literature offers insights into the importance of soft skills and diversity in team effectiveness. Environmental problems set a challenging and rich testing ground for collective‐intelligence tools and frameworks. We argue this creates an opportunity for significant advancement in decision‐making research and practice.

Crowdsourced idea generation: The effect of exposure to an original idea

Crowdsourcing is an increasingly important approach to pursuing innovation. In crowdsourcing ideation websites, crowd members are often exposed to some stimulus ideas, such as examples provided by companies or peers' ideas. Understanding the effect of being exposed to original stimulus ideas in this context may inform the design of the crowdsourcing process. To test this effect, an experiment was conducted where crowd workers were asked to design a public service advertisement. Depending on the experimental condition, the participants were exposed to an original idea, or a common idea, or no idea. As compared to the absence of exposure, exposure to an original idea decreased fluency, defined as the number of ideas generated by each person, and increased the average originality of ideas generated by each person. By contrast, exposure to a common idea had no effect on either idea originality or fluency. The semantic similarity between the stimulus idea and the first idea generated was higher when the stimulus was common versus original as measured by latent semantic analysis. The implications of these results for research and practice are discussed.



J Glob Health. 2017 Dec; 7(2).

“Crowdsourcing” ten years in: A review

Since the term was first coined by Howe in 2006, the field of crowdsourcing has grown exponentially. Despite its growth and its transcendence across many fields, the definition of crowdsourcing has still not been agreed upon, and examples are poorly indexed in the peer–reviewed literature. Many examples of crowdsourcing have not been scaled up past the pilot phase. In spite of this, crowdsourcing has great potential, especially in global health, where resources are lacking. This narrative review seeks to review both indexed and grey crowdsourcing literature broadly in order to explore the current state of the field.

This is a review of reviews of crowdsourcing. Semantic searches were conducted using Google Scholar rather than indexed databases due to poor indexing of the topic. A total of 996 articles were retrieved, of which 69 were initially identified as being reviews or theoretically based. Of these, 21 were found to be irrelevant, leaving 48 articles for review.

This narrative review focuses on defining crowdsourcing, taxonomies of crowdsourcing, who constitutes the crowd, research that is amenable to crowdsourcing, regulatory and ethical aspects of crowdsourcing and some notable examples of crowdsourcing.

Conclusions

Crowdsourcing holds great promise, especially in global health, owing to its ability to collect information rapidly, inexpensively and accurately. Rigorous ethical and regulatory controls are needed to ensure data are collected and analysed appropriately, and crowdsourcing should be considered complementary to traditional research methods.

“No one knows everything, everyone knows something [and] all knowledge resides in humanity; digitalisation and communication technologies must become central in this coordination of far flung genius” [ 1 ]. Although examples of crowdsourcing and “wisdom of the crowds” have been reported for hundreds of years [ 2 , 3 ], the term “crowdsourcing” was coined in 2006 by Howe in his Wired magazine article [ 4 ]. In the article, Howe defines crowdsourcing as “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call” [ 4 ], and in a later article he adds that “crowdsourcing is the mechanism by which talent and knowledge is matched to those in need of it” [ 5 ]. Since Howe’s article, and partially due to the availability of modern technology [ 6 , 7 ], use of crowdsourcing has skyrocketed [ 8 ]. Although research in this area has grown exponentially in the last decade, many authors feel that the potential of crowdsourcing is still underutilised and underexploited [ 5 , 9 – 11 ].

As crowdsourcing requires, depending on the definition, ‘outsourcing’ a task or tasks to a large crowd [ 12 ], advances in technology have improved the efficiency of this method [ 2 , 6 , 13 – 15 ]. Indeed, research that was previously inconceivable due to its scale is now achievable through crowdsourcing [ 6 ]. Kamajian states that 35% of smart phone users check their phones prior to getting out of bed and, as of 2013, over 5 billion people worldwide had access to mobile phones [ 14 , 16 ]. Prior to Howe’s Wired article, Luis von Ahn introduced the idea of human computing, where humans are used to solve complex problems that computers are not capable of solving [ 17 ]. While machine learning has made great strides, computers are poor at perception; humans can conceptualise, discriminate and filter, learn, adapt using their background knowledge, and apply common sense and experience in ways that machines cannot [ 18 ]. In addition to humans actively crowdsourcing data, ubiquitous computing (in which computers exist throughout the physical environment, are virtually invisible to the user, and act as passive sensors) has great potential for generating large amounts of data [ 16 , 18 ]. Cell phones, for example, can collect photo, video, acoustic, gyroscopic, accelerometric and proximal information and can also be paired with additional devices, such as pollution sensors, to collect further information [ 16 ]. Crowdsourced spatial analysis from GIS data can be very useful, especially for providing resources in emergency situations, for delivering logistics and for efficient targeting of interventions [ 19 ].

Because individuals tend to be biased towards the correct answer, Buecheler et al. estimate that if a million individuals were to contribute towards answering a problem via crowdsourcing, there would be a 97.7% likelihood that the crowd would arrive at the correct answer [ 20 ]. While pilot studies have not reached sample sizes close to that scale, many have achieved extremely promising results. For example, crowdsourcing has been demonstrated to produce accurate results across a range of medical diagnostic studies, including diagnosing malaria, grading images for glaucoma and diabetic retinopathy, skin self–examination for skin cancers, and classifying images of polyps [ 21 – 26 ].
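For intuition, a figure of roughly 97.7% falls out of a simple majority–vote (Condorcet–style) model if each contributor is assumed to be only marginally biased towards the correct answer. The sketch below assumes a per–person accuracy of 0.501 on a binary question; this value is illustrative and is not reported by Buecheler et al.

```python
# Minimal sketch: probability that a simple majority of n independent
# contributors answers a binary question correctly, assuming each person
# is right with probability p. The value p = 0.501 is an illustrative
# assumption, not a figure taken from Buecheler et al.
import math

def prob_majority_correct(n: int, p: float) -> float:
    """Normal approximation to P(Binomial(n, p) > n/2)."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    z = (n / 2 - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z)

print(prob_majority_correct(1_000_000, 0.501))  # ~0.977
```

With any per–person accuracy meaningfully above 0.5, the probability approaches one at this crowd size, which is why the independence and diversity conditions discussed later matter more in practice than raw numbers.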

Despite the interest in the area of crowdsourcing exploding in the past decade, many authors do not agree on its definition or on what counts as crowdsourcing, with some academics considering Wikipedia a “classic” example of crowdsourcing, for example, while others insist it is not crowdsourcing [ 12 ]. Text and data mining is another example that is on the fringe of crowdsourcing’s definition.

In addition to there being many definitions of crowdsourcing, many authors have offered different taxonomies of crowdsourcing, some focusing on types of crowdsourcing while others focus on its production model. Furthermore, there are debates on who participates in crowdsourcing – whether it is laypersons, amateurs, professionals, experts, or a combination.

Although crowdsourcing has existed for decades, it is agreed that technology has facilitated its growth. Platforms such as Amazon Mechanical Turk and CrowdFlower enable companies to hire workers to perform crowdsourced exercises for extremely low prices. Other crowdsourcing platforms, such as InnoCentive or CrowdMed, offer a competitive winner–takes–all model. Sensors in wearable technology have also facilitated the ability to collect mass amounts of information.

Crowdsourcing can increase the accuracy of computer–automated tasks, lower costs, increase the scale of research, transcend boundaries and borders, produce novel discoveries and increase the speed of research progression, among other benefits. However, there are concerns about the generalisability of samples (as the crowd is self–selected), about security and data protection for sensitive data, and about the possibility of malicious workers. Some studies have added quality–protection measures to weed out malicious workers, such as cut–offs for scores on previous tasks and screening questions. Additional regulation is needed for ethical issues, such as obtaining informed consent and data use policies.

Crowdsourcing has considerable benefits in research, as it has the potential to substantially lower costs while massively increasing the sample size, and researchers can receive the data in real time [ 7 , 16 , 19 , 27 – 29 ]. Because of these qualities, crowdsourcing has the potential to improve global health research. Indeed, crowdsourcing is used frequently to set research priorities in global health, most often in maternal, newborn and child health, due to the popularity of the Child Health and Nutrition Research Initiative’s (CHNRI) method of research priority setting, which uses collective opinion to identify and score research priorities against a set list of criteria [ 30 ]. The CHNRI method is becoming the most frequently used research priority setting method due to its transparent, systematic nature; it was designed to capitalise on the principles of Surowiecki’s “Wisdom of the Crowd”, which will be described further in the paper [ 31 ]. Furthermore, research in global health faces an even larger burden than research in high–income countries with regard to funding, logistics, weak existing health care systems, availability of health care workers to collect data, equipment, and patient access to health care, especially in rural or conflict areas [ 21 , 32 – 37 ]. As access to mobile phones in low– and middle–income countries is still increasing, crowdsourcing may provide a complementary route of data collection to traditional sources, capitalising on structures and knowledge already in place in the countries [ 38 ].

As previous authors had reported few search results in indexed journals [ 10 , 27 , 39 ] and crowdsourcing is a new method, semantic searches in Google Scholar were used to retrieve both peer–reviewed and grey literature published on crowdsourcing. “Crowdsourcing” on its own, as well as “crowdsourcing” combined with health terms such as genetics, diagnosis, epidemiology, surveillance, public health and disease, was searched in August 2015. “Crowdsourcing and global health” was also searched initially, but the results overlapped entirely with those for “crowdsourcing and health” and “crowdsourcing and public health”. The titles of results were scanned until it was clear that the results appearing were no longer relevant. Full details of the searches, as well as the number of pages of Google Scholar results scanned, can be found in Box 1 . In total, 995 results were identified through the Google Scholar search, which is substantially more than any other review has identified. Once abstracts were read, 375 results were discarded as duplicates or irrelevant.

Box 1. Crowdsourcing semantic searches conducted in Google Scholar, showing the number of result pages scanned for each search term (between 5 and 25 pages per search).

Results were organised in EndNote into categories including reviews, theory of crowdsourcing, health, public planning, GPS–related, translation, robotics, visual perception, and logistics of crowdsourcing (the last broken down into motivations, quality, reliability, stability, and others). This review covers the review and theory papers as well as a portion of the health–related papers, as there were 285 health papers and many of their interventions overlapped. Further reviews could be conducted using the results of the search and the organised EndNote library, but they are outside the scope of the current review.

The reviews and theoretical papers generally covered the varying definitions of crowdsourcing, taxonomies of crowdsourcing, participants, modes of participation, when research is suitable for crowdsourcing, benefits of and concerns with crowdsourcing, recommendations for regulation and quality control (including ethical regulations), and examples of crowdsourcing.

Defining crowdsourcing

The definition of crowdsourcing, as well as some ‘traditional’ examples of crowdsourcing such as Wikipedia, is highly debated; this is likely due to both the relative newness of the term and the flexibility and adaptability of the method [ 1 , 5 , 7 , 8 , 10 – 12 , 20 , 40 – 43 ]. To further complicate authors’ attempts to define crowdsourcing, a variety of related concepts have been used synonymously, including: citizen science, health 2.0, wisdom of the crowds, peer production, open sourcing, expert sourcing, collective intelligence, human computation, community–based participatory research, participatory epidemiology and outsourcing [ 1 , 3 , 7 , 12 , 43 ]. While some, like expert–sourcing, are easy to understand as crowdsourcing with experts, the differences between crowdsourcing and the others are more nuanced.

Three terms, specifically, are used abundantly in literature and often interchangeably with crowdsourcing: health 2.0, wisdom of the crowds, and citizen science. While applications of crowdsourcing are often a combination of these, especially in the field of health, there are important distinctions between them [ 5 , 8 , 11 ].

Swan defines citizen science as non–professionals conducting science–related activities [ 8 ]. Non–professionals can include scientists or professionals who are conducting activities outside their own fields (so that they are amateurs in that field). All of the examples given by Swan involve citizen science at a mass scale, and thus are all citizen–science activities that also use crowds [ 8 ]. However, it is possible to imagine citizen–science activities that are not at a mass scale, such as citizens collecting data, participating in an experiment, or providing feedback on the design of a small–scale study. Therefore, not all citizen science is crowdsourcing, but much of it will be.

Health 2.0 is defined, also by Swan, as active participation in one’s health care using web 2.0 technologies [ 8 ]. This could include using m–Health applications to track diet and exercise, for example. Using these applications itself would not be considered crowdsourcing, as data are not necessarily collected and there is no unified output. However, if data were collected, the act of collecting data from this could be considered crowdsourcing. Thus, health 2.0 technology can contribute towards crowdsourcing but is not necessarily crowdsourcing.

“Wisdom of the crowds” is another related term. This refers to the use of the knowledge of a large crowd of people and also requires an intelligent crowd. It also differs slightly from crowdsourcing, as not all crowdsourcing tasks require knowledge or intelligence. Unlike citizen science and health 2.0, all ‘wisdom of the crowds’ tasks are forms of crowdsourcing, but not all crowdsourcing is necessarily an application of the ‘wisdom of the crowds’. An example of a task requiring intelligence would be using a crowd to diagnose malaria in blood smears. Here, each participant needs to use their knowledge or intelligence to consider which blood smears do or do not contain malaria parasites. Some, perhaps arguable, examples of crowdsourcing that would not be considered to require knowledge are reCAPTCHA, passive surveillance such as environmental surveillance using ubiquitous computing and mobile phones, reporting systems, or text mining. In his book, Surowiecki lists four requirements for an intelligent crowd that are particularly important for crowdsourcing tasks that require knowledge (ie, ‘wisdom of the crowds’ tasks). They are: (i) diversity, which adds perspectives that would otherwise be absent; (ii) independence, limiting the influence of one person’s opinions on others’; (iii) decentralisation, to develop tacit, specialised knowledge; and (iv) aggregation, to combine the diverse, independent, knowledgeable opinions of the crowd [ 31 ].

In addition to these three terms, crowdsourcing is often contrasted with open sourcing or outsourcing. Although some authors believe that crowdsourcing is a special form of outsourcing [ 3 ], many authors conclude that the major difference between crowdsourcing and outsourcing is the presence of a contract [ 10 ]. In addition, in a crowdsourcing exercise, the organisation or crowdsourcing initiator has the rights to whatever is produced and the crowd is aware of this [ 10 ]. Intellectual property rights are also one of the major differences between crowdsourcing and open sourcing or peer production, along with the hierarchical structure of crowdsourcing [ 1 , 10 ]. In open sourcing or peer production, the product that is being worked on is free and will remain free, and the crowd working on it volunteers its labour to make the free product better. In crowdsourcing, the crowd is volunteering but, if they are contributing to a product, it is unlikely to be available for free [ 1 ]. Furthermore, with open–source and peer–production models, which usually involve software, the software and its code are released and coders work on and submit bug fixes as issues come up, with no hierarchy. With crowdsourcing, there is a clear call for work.

Crowdsourcing has other key features including a clear, open call for participants and a large crowd. Since there are many different definitions, Estelles–Arolas et al. reviewed definitions of crowdsourcing and developed an integrative definition using Tatarkiewicz’s approach, which is based on developing a global definition of the concept of art. In their review, the authors found 8 key qualities of a crowdsourcing definition, namely: a) who forms the crowd; b) what the crowd has to do; c) how the crowd is reimbursed; d) who initiates the crowdsourcing process; e) what the product of crowdsourcing is; f) what type of process is used; g) what type of call is used; and h) by what medium the call is made [ 12 ].

The integrative definition that the authors devise from their review is [ 12 ]:

“Crowdsourcing is a type of participative online activity in which an individual, an institution, a non–profit organisation, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task. The undertaking of the task, of variable complexity and modularity, and in which the crowd should participate bringing their work, money, knowledge and/or experience, always entails mutual benefit. The user will receive the satisfaction of a given type of need, be it economic, social recognition, self–esteem, or the development of individual skills, while the crowdsourcer will obtain and utilize to their advantage what the user has brought to the venture, whose form will depend on the type of activity undertaken.”

Although they describe it as a taxonomy rather than a definition, the features of Geiger et al.’s description of crowdsourcing are similar to Estelles–Arolas et al.’s integrative definition. The key features Geiger et al. describe are: (i) pre–selection of contributors (how ‘open’ the call is, though the authors state that there are usually no limits); (ii) accessibility of peer contributors (whether they can access each other’s contributions); (iii) aggregation (to what extent the input is used); and (iv) remuneration (fixed, success–based or none) [ 12 , 44 ].
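To make the two frameworks concrete, the hypothetical sketch below records a single crowdsourcing initiative along Estelles–Arolas et al.’s eight definitional elements and Geiger et al.’s four process dimensions. The field names and the example values are illustrative assumptions, not taken from either paper.

```python
# Hypothetical sketch: one crowdsourcing initiative described along
# Estelles-Arolas et al.'s eight definitional elements and Geiger et al.'s
# four process dimensions. Field names and example values are illustrative.
from dataclasses import dataclass

@dataclass
class CrowdsourcingInitiative:
    # Estelles-Arolas et al.'s eight elements
    crowd: str            # a) who forms the crowd
    task: str             # b) what the crowd has to do
    reimbursement: str    # c) how the crowd is reimbursed
    initiator: str        # d) who initiates the crowdsourcing process
    product: str          # e) what the product of crowdsourcing is
    process_type: str     # f) what type of process is used
    call_type: str        # g) what type of call is used
    medium: str           # h) by what medium the call is made
    # Geiger et al.'s four process dimensions
    preselection: str     # pre-selection of contributors
    peer_access: str      # none / view / assess / modify
    aggregation: str      # integrative (combine) or selective (choose one)
    remuneration: str     # fixed / success-based / none

# Illustrative example: a generic paid image-labelling microtask
image_labelling = CrowdsourcingInitiative(
    crowd="large, self-selected online workers",
    task="label images as containing malaria parasites or not",
    reimbursement="small fixed payment per task",
    initiator="research organisation",
    product="labelled image dataset",
    process_type="outsourcing-like microtasking",
    call_type="flexible open call",
    medium="Internet platform",
    preselection="qualification-based (minimum approval score)",
    peer_access="none",
    aggregation="integrative (labels combined by majority vote)",
    remuneration="fixed",
)
print(image_labelling.aggregation)
```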

Each feature of Estelles–Arolas et al.’s integrative definition and Geiger et al.’s taxonomy is discussed below.

Features of Estelles–Arolas et al.’s integrative definition and Geiger et al.’s taxonomy

Who forms the crowd (corresponds to Geiger et al.’s pre–selection of contributors)

The majority of authors reviewed by Estelles–Arolas et al. did not provide a distinct definition for their crowds, instead describing a crowd as a large group of people or individuals, consumers, or volunteers [ 12 ]. The authors found that the crowd’s size could vary from a few thousand to several hundred thousand, and that skill levels could also vary, from very unskilled in the case of Amazon Mechanical Turk (AMT) workers to extremely skilled InnoCentive submitters, who often hold PhDs [ 12 ]. However, in a study in business management, ideas generated by professionals and by laypeople through crowdsourcing were compared: those by laypeople were more novel and offered the customer more benefits, but were less feasible [ 14 ].

In contrast, Brabham specifically examined how authors refer to crowds and found that the majority of articles describe crowds as being composed of amateurs [ 7 ]. However, he argues that ‘amateurism’ in crowdsourcing is a myth, and blames this partially on Howe’s original definition of crowdsourcing. In his review, Brabham found that most crowds comprised self–selected professionals: InnoCentive’s submitters are extremely well educated, those who submitted advertisements for Doritos’ Super Bowl advertisement contest were mostly film school students, and the majority of iStock Photos’ submitters are professional photographers [ 7 ]. Is amateurism a matter of not being paid, or of lacking access to tools? Brabham cites Stebbins’ definition of amateurs: “amateurs are guided by standards of excellence set by professionals and not necessarily inferior, feel an obligation to their pursuit, restrain professions from over–emphasising technique and from stressing superficialities instead of meaningful or profound work or products” [ 7 ]. He contrasts this definition with a definition of amateurs as “one lacking experience” and further argues that professionalism is a matter of class and status linked to capitalism. Crowdsourcing, then, represents a ‘race to the bottom’ to allow greater profit margins by falsely positioning those who should be described as professionals as amateurs and underpaying them for their work [ 7 ].

With regard to the demographics of the crowd, Ranard et al.’s review found that few articles reported on demographics and, for those that did, the level of demographic detail reported varied [ 27 ]. However, Khare et al. state that the crowd should be poorly defined and diverse [ 3 ]. Brabham believes that three types of diversity are necessary: (i) identity; (ii) skills; and (iii) political investment. His vision of identity includes nationality, sex, gender, race, economic class, disability and religion, among other things [ 7 ]. As Surowiecki stated, diversity is important to having a wise crowd [ 14 , 31 ]. Kamajian found that technical and ‘social marginality’ were beneficial for success in InnoCentive submissions; social marginality was defined as being female [ 14 ].

Geiger et al. aim to classify different types of crowdsourcing processes, and in doing so describe the ‘openness’ of their calls. The authors found that most crowdsourcing processes have a completely open call but some restrict contributions from participants by using either qualification–based (ie, the contributors need to have demonstrated a certain level of qualification or skills prior to participating) or context–based (ie, the participants need to be in a certain demographic) limitations [ 44 ].

What the crowd has to do

Estelles–Arolas et al.’s review came across a dichotomy regarding the purpose of the crowd; one group of authors believed that the purpose of the crowd was to complete tasks and the other, to solve problems [ 12 ]. Some authors believe that tasks must be divisible into lower–level tasks in order to be suitable for crowdsourcing [ 5 , 12 , 28 ]. Estelles–Arolas et al. conclude that “any non–trivial problem can benefit from crowdsourcing” [ 12 ].

In this review, various authors attempted to classify what crowdsourcing should aim to do. These classifications are found in Table 1 . As can be seen, authors disagree over whether open innovation and peer production fall within the realm of crowdsourcing. The authors also differ with regard to the level of detail of their classifications, ranging from a dichotomous classification of microtasks and megatasks [ 3 ] to Geiger et al.’s and Saxton et al.’s more detailed classifications of types of crowdsourcing processes [ 11 , 44 ]. At heart, however, many of the classifications can be conflated into a combination of Geiger et al.’s second classification and Aitamurto et al.’s classification: crowd creation; crowd voting (including prediction markets); crowd processing; crowd rating; crowd solving; and crowd funding. Crowd funding, however, is the mobilisation of monetary funds for a common goal and thus is not covered by this review [ 44 , 45 ].

Classifications of what crowdsourcing should aim to do

How the crowd is reimbursed (corresponds to Geiger et al.’s remuneration)

Many of the authors in Estelles–Arolas et al.’s review identified reimbursement as monetary reimbursement. The range of monetary reimbursement is large, varying from US$ 0.01 for each human intelligence task (HIT) performed on the AMT platform to millions of dollars for the successful solution chosen from InnoCentive’s competitions [ 12 ]. Geiger et al. look at whether reimbursement is fixed, varied or voluntary as a means to classify crowdsourcing projects. AMT projects have fixed reimbursements, where all members of the crowd are remunerated the same amount for their participation, whereas InnoCentive employs a success–based remuneration plan [ 44 ]. However, both Estelles–Arolas et al. and Geiger et al. acknowledge that not all crowdsourcing projects pay monetarily, and that monetary remuneration is not necessarily the primary motivation for the participants. Estelles–Arolas et al.’s review suggests that participant motivations mirror Maslow’s hierarchy of individual needs: economic reward, social recognition, self–esteem and development of individual skills. In addition to or in lieu of financial rewards, individuals participating in crowdsourcing are able to develop their skills through freelancing, contribute to their community, have fun, share knowledge and be recognised for their contributions. Parvanta et al. describe the motivations as the ‘four Fs’: fun, fulfilment, fame, and fortune [ 46 ]. In addition, crowdsourcing activities such as reCAPTCHA have capitalised on the crowdsourced task being integral to another task the user is trying to complete, and have been wildly successful [ 6 ]. An additional, similar, motivation that Swan identifies is biocitizenry, in which the crowd participates in order to gain access to studies [ 8 ]. Doan and colleagues suggest that, in addition to those listed above, making users pay for a service, providing ownership situations, requiring contribution to crowdsourcing through employment, offering instant gratification or providing an enjoyable experience of a necessary service will motivate a crowd [ 43 ]. In their review, Zhao and Zhu found that only 2 of 55 studies used motivational theories in designing their interventions [ 10 ]. Zhao and Zhu, Kostkova, and Kittur call for further research into crowd motivation, specifically the use of serious gaming, auction bidding and understanding crowd behaviour in task selection [ 10 , 38 , 48 ].

Some authors reviewed mentioned inequities regarding crowd contributions. Parvanta et al. describe a 90%/9%/1% rule for participation, in which 90% of the crowd observes, 9% participates from time–to–time and 1% participates regularly [ 46 ]. This breakdown would be more amenable to a service such as YouTube or Wikipedia, where observing or viewing a product is an option. Zhao and Zhu describe super contributors, contributors and outliers but do not give a percentage of contributions between the three categories [ 10 ]. Holley states that the majority of work is completed by 10% of the crowd and these super contributors are often retirees or young, dynamic professionals [ 49 ].

Who initiates the crowdsourcing process

Generally, an institution or organisation initiates the crowdsourcing process with an open call [ 12 ]. However, there have also been instances where the crowdsourcer has been a governmental department, such as in Iceland [ 42 ].

What the product of crowdsourcing is

Many authors reviewed by Estelles–Arolas et al. felt that the initiator receives the result sought for the task advertised, which was usually the solution to a given problem. Others believed the product was knowledge, ideas, or some type of added value [ 12 ]. The exact product of crowdsourcing can be very diverse and has not been agreed upon, but it is generally some type of result that is requested by and has value to the initiator.

What type of process is used

Estelles–Arolas et al.’s review found many authors who identified crowdsourcing as an outsourcing process, specifically referring to AMT, while others referred to it as a problem–solving process or a production model [ 12 ]. As described previously, crowdsourcing differs from open sourcing, outsourcing and peer production. Many articles in this review specifically mentioned the use of online, outsourcing–like platforms, such as AMT [ 3 , 6 , 48 ] and CrowdFlower [ 6 ]. In AMT and CrowdFlower, the initiator (or crowdsourcer) posts a task and the ‘crowd’ responds and is paid small amounts for completing small HITs. Other online platforms use distributed online processes to compete for the best solution, such as InnoCentive or CrowdMed [ 27 , 32 ]. Advances in mHealth, such as wearable technologies and sensors, could enable real–time data collection and monitoring from mass numbers of people [ 38 ]. Kostkova estimates that 75 million wearable technological devices will have been shipped by 2018 and calls for behavioural research using these devices [ 38 ]. The data from these devices could be considered crowdsourced if there is a specific call for data. Gamification has also been used to enhance the crowd’s experience while crowdsourcing and to encourage participation [ 21 , 50 , 51 ]. Finally, another debatable form of crowdsourcing could be data mining, using Twitter posts or Google Flu Trends [ 32 , 52 , 53 ]. However, according to the definitions of crowdsourcing by both Estelles–Arolas et al. and Geiger et al., data mining would not fall within the realm of crowdsourcing.

What type of call is used

The majority of authors reviewed by Estelles–Arolas et al. refer to an open call as the form of call that must be made in order to satisfy a crowdsourcing criterion. However, Estelles–Arolas et al. disagree and use the term ‘flexible open call’, meaning that participation is non–discriminatory but the call is tailored to the specific initiative and thus can be limited to a community with specific knowledge or expertise (though anyone in this community can answer) [ 12 ].

By what medium the call is made

Estelles–Arolas et al. state that the authors they reviewed unanimously agreed that the medium through which the call is made is the Internet, and they concur [ 12 ]. However, as stated previously, crowdsourcing existed prior to the Internet, as did the wisdom of the crowds. Thus, while the Internet has made crowdsourcing far more effective and efficient, crowdsourcing is not necessarily reliant on the Internet and could be conducted through a different medium, though this would be less efficient.

Geiger et al.’s taxonomy/features of crowdsourcing

Accessibility of peer contributors

Geiger et al. discuss the degree to which the crowd is able to access each other’s contributions to the product as a feature of the crowdsourcing process, with four categories: none, view, assess or modify [ 44 ]. In some crowdsourcing activities, members of the crowd cannot view each other’s contributions at all, while others use the crowd not only for submissions but also to judge which submissions are best (eg, Threadless). In other crowdsourcing exercises, participants can modify each other’s submissions. For example, Kittur posted a Spanish poem for translation through crowdsourcing; the crowd was able to interact, discuss possible translations and, together, submit a final translated poem. The authors found this translation to be better than the commonly accepted English translation [ 48 ]. Finally, Geiger et al. found that some crowdsourcing projects allow the crowd to view other submissions prior to submitting their own [ 44 ].

Aggregation

Aggregation refers to how the responses of the crowd are used by the crowdsourcer. The two major ways the responses can be used are to be combined or to be selected [ 44 ]. InnoCentive, Threadless, and CrowdMed, for example, are selective crowdsourcing companies, which choose the best solution or design for a particular problem. Crowdsourcing projects run on AMT often aggregate or combine solutions from the crowd as a whole.
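As a rough illustration of these two aggregation modes, the hypothetical sketch below combines microtask labels by majority vote (integrative aggregation, as in AMT–style projects) and picks a single winning entry by score (selective aggregation, as in InnoCentive–style contests). The function names and data are illustrative and are not drawn from any of the reviewed platforms.

```python
# Hypothetical sketch of the two aggregation modes described by Geiger et al.:
# integrative (combine all contributions) vs selective (choose a single one).
from collections import Counter

def combine_by_majority(labels):
    """Integrative aggregation: return the most common label."""
    return Counter(labels).most_common(1)[0][0]

def select_best(submissions, score_fn):
    """Selective aggregation: return the single highest-scoring submission."""
    return max(submissions, key=score_fn)

# Integrative example: five workers label one blood-smear image
votes = ["positive", "negative", "positive", "positive", "negative"]
print(combine_by_majority(votes))  # -> "positive"

# Selective example: choose the design with the highest (illustrative) rating
designs = [{"id": "A", "rating": 3.2}, {"id": "B", "rating": 4.7}]
print(select_best(designs, score_fn=lambda d: d["rating"])["id"])  # -> "B"
```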

The definition from Estelles–Arolas et al. excludes Wikipedia, YouTube and Flickr. Wikipedia is excluded on the grounds that there is no initiator (crowdsourcing organisation), that the authors do not feel the initiator receives a benefit, and that there is no open call. YouTube is excluded on the grounds that there is no clear goal and no clear initiator, that the initiator’s benefit is not clearly defined (which is arguable, as YouTube ‘stars’ receive compensation for views), that the crowdsourcing process is not participative, and that there is no open call. Finally, Flickr, a photo sharing website, also fails the definition due to the lack of a clear goal, the lack of clear benefit to the crowd or to the initiator, not being participative and not using an open call [ 12 ].

Despite Estelles–Arolas et al.’s integrative definition, some authors strongly believe that websites such as Wikipedia are not only examples of crowdsourcing, but the classic examples of crowdsourcing [ 10 ]. Indeed, Howe, who coined the term ‘crowdsourcing’, considers Wikipedia a classic crowdsourcing example, as do others [ 4 , 10 , 20 ]. Osella’s review found that some authors’ definitions of crowdsourcing are so expansive that they consider the entire Internet a form of ‘crowdsourcing’, citing O’Reilly and Battelle: “the Web as a whole is a marvel of crowdsourcing, as are marketplaces such as those on eBay and Craigslist, mixed media collections such as YouTube and Flickr, and the vast personal lifestream collections on Twitter, MySpace, and Facebook” [ 5 ].

When to use crowdsourcing

Many of the authors reviewed discussed situations that are amenable to crowdsourcing (see Box 2 ). First, crowdsourcing should be used for tasks that require humans, ie, where technology either cannot complete the task or where people can do it better [ 3 , 6 ], and where crowds are better than individuals or experts [ 6 ]. But what specific features would a task need to have to satisfy these broad conditions? Authors have suggested a wide range of conditions, which are laid out in Box 2 . These features are a combination of theoretical and application–based conditions and are, at times, conflicting. Kamajian reviewed crowdsourcing in medicine, and his suggestions mirror Surowiecki’s wisdom of the crowds conditions: he believes that the crowd must have tacit knowledge and be diverse, but that the problem itself must not be tacit, and that the firm must not have the knowledge (otherwise why is it seeking the crowd?); he focuses on the likelihood of the crowd’s expertise and its diversity [ 14 ]. In comparison, Kittur describes applications of crowdsourcing typically conducted through AMT, arguing that tasks that are verifiable, have an objectively ‘right’ answer, impose a low cognitive load and require little expertise are most conducive to crowdsourcing [ 48 ]. Kamajian’s and Kittur’s images of ideal crowdsourcing are in direct opposition to one another. One feature of ‘when to use crowdsourcing’ on which there is some agreement is that the task should be divisible into lower–level tasks, though this is not a necessary condition [ 5 , 48 ].

Box 2. Conditions for when to use crowdsourcing, found in the review

  • Tasks that require humans (i.e. where technology either cannot complete the task or where people can do it better) [ 3 , 6 ]
  • Crowds are better than individuals or experts [ 6 ]
  • Firm expertise is low [ 14 ]
  • Likelihood of crowd expertise is high [ 14 ]
  • Firm expertise is distant from solution [ 14 ]
  • Problem is not tacit, immobile, unique or complex [ 14 ]
  • Relevant experience is diverse [ 14 ]
  • Problem is modular [ 14 ]
  • Expertise is tacit, immobile, unique or diverse [ 14 ]
  • IP for problem is protected, problem is not legally protected [ 14 , 45 ]
  • No problems with ownership or usage of solution [ 48 ]
  • Problem does not contain sensitive information [ 5 ]
  • Problem divisible into lower–level tasks [ 5 , 48 ]
  • Low cognitive load [ 48 ]
  • Problem is fast to complete [ 48 ]
  • Task requires little expertise [ 48 ]
  • Solution is objective and verifiable [ 48 ]
  • Low barriers to entry [ 48 , 49 ]
  • Low interaction required [ 5 ]

As opposed to focusing on characteristics of projects amenable to crowdsourcing, Buecheler and colleagues describe the characteristics of a principal investigator who would be amenable to taking on a crowdsourcing project. They state that career age, job satisfaction, cosmopolitan scale, tenure, funding, apparatus and time must be considered; however, the authors do not indicate which values of these characteristics are ideal for crowdsourcing [ 20 ].

Other authors gave specific tasks that they felt crowdsourcing was most suitable for, such as solving problems, completing tasks, being creative, and developing products or ideas [ 5 ]. Castillo believed that crowdsourcing was particularly well suited to medical imaging research, while Thawrani and colleagues suggested that researchers should use crowdsourcing to capitalise on medical data to find more specific causes of illnesses and to bring processes up to date, such as handwritten medical records in India [ 13 , 32 ].

Finally, some of the authors reviewed gave tips for using crowdsourcing in research. Most importantly, selecting a clear and appropriate research question was emphasised [ 2 , 45 , 49 ]. Having a big challenge and clear, measurable goals that are communicated to participants was seen as important, as this helps motivate the participants, along with providing options regarding levels and modes of participation [ 49 ]. Finally, the importance of acknowledging participation was highlighted [ 49 ].

Benefits of using crowdsourcing

Benefits identified in the literature review are divided into process–based benefits and results–based benefits, and are displayed in Table 2 . Several of these benefits could fit into both categories. Benefits include the speed of research progression, low cost, increased accuracy of results, the ability to coordinate with machine learning and improve algorithms, acting as a public advocacy tool, working in emergency situations, and transcending boundaries and borders. Crowdsourcing is a powerful, flexible tool that can be used in many situations as a supplement to traditional research. Its mobility and low cost make it ideal for global health, where lack of human resources, funding and baseline epidemiological data, as well as conflict areas, can create barriers to targeting interventions.

Benefits of crowdsourcing listed by articles reviewed, divided into process–based benefits and results–based benefits

Concerns with crowdsourcing

In spite of its benefits, crowdsourcing is still subject to numerous challenges and to regulatory and ethical issues that need to be addressed, considered and anticipated prior to designing a crowdsourcing study or intervention.

Quality assurance issues were the most commonly identified concerns in the articles reviewed. In instances where a crowd is asked to answer questions for which there is no ‘right’ answer, it becomes difficult to verify whether responses are true and not malicious [ 32 , 48 ]. Additionally, there is a debate regarding having untrained laypersons complete scientific activities that are normally reserved for experts; experts may protest against these activities [ 8 , 32 ]. Finally, concerns were voiced regarding a potential so–called “Hawthorne observer–expectancy effect”, in which members of the ‘crowd’ act in the way they feel the researcher wants them to [ 56 ]. Possible solutions to these issues were proposed, including multi–level reviews, in which there are multiple stages to each crowdsourcing task and each task is reviewed multiple times and aggregated [ 6 ]; using objectifiable tasks to ‘weed out’ malicious workers; or setting standards that workers must fulfil before being considered for the task [ 6 , 27 ]. For example, in AMT, workers may be required to have obtained certain scores in previous tasks.
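A minimal sketch of these quality controls is given below. It assumes a platform that exposes each worker’s historical approval rate and allows embedded gold–standard (objectifiable) questions; the threshold, field names and data are illustrative and are not taken from any specific platform.

```python
# Minimal sketch of two quality-control measures discussed above:
# (1) screen out workers below an approval-rate cut-off, and
# (2) drop responses that fail embedded gold-standard (known-answer) checks,
# then aggregate the surviving answers by majority vote.
# The cut-off, field names and data are illustrative assumptions.
from collections import Counter

MIN_APPROVAL_RATE = 0.95  # illustrative cut-off, not a platform default

def passes_screening(worker):
    """Keep only workers whose past approval rate meets the cut-off."""
    return worker["approval_rate"] >= MIN_APPROVAL_RATE

def passes_gold_checks(response, gold_answers):
    """Reject responses that miss any embedded known-answer question."""
    return all(response["answers"][q] == a for q, a in gold_answers.items())

def aggregate(responses, question):
    """Majority vote over the responses that survived both filters."""
    votes = [r["answers"][question] for r in responses]
    return Counter(votes).most_common(1)[0][0]

gold = {"q_gold": "B"}
responses = [
    {"worker": {"approval_rate": 0.99}, "answers": {"q_gold": "B", "q1": "yes"}},
    {"worker": {"approval_rate": 0.85}, "answers": {"q_gold": "B", "q1": "no"}},  # screened out
    {"worker": {"approval_rate": 0.97}, "answers": {"q_gold": "A", "q1": "no"}},  # failed gold check
    {"worker": {"approval_rate": 0.98}, "answers": {"q_gold": "B", "q1": "yes"}},
]
kept = [r for r in responses
        if passes_screening(r["worker"]) and passes_gold_checks(r, gold)]
print(aggregate(kept, "q1"))  # -> "yes"
```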

Regarding sampling, the denominator is rarely known in crowdsourcing tasks, and this can pose problems for analysis [ 56 ]. Sampling bias can occur due to inverted sampling [ 6 , 8 , 56 ] and due to self–reported data [ 8 ]. Luan and Law reported cultural and geographical biases in the GIS data reviewed [ 19 ]. Additionally, samples are likely to be biased in comparison to the general population with regard to income, literacy, age, access to technology and values [ 19 , 56 ].

Other authors raised security concerns, citing potential loss of data due to a rise in cyber–attacks [ 38 ] or mishandling of sensitive information [ 32 ]. Logistical issues were specific to platforms or types of crowdsourcing and included trouble with languages and file formats when data mining; battery life usage, competition for prioritisation with other applications on mobile devices, and privacy for ubiquitous computing (sensors in mobile devices) [ 19 ]; and, for AMT, lack of proof of payment for work completed and institutional issues in gaining approval [ 3 ]. In addition, the non–traditional nature of funding was identified as a barrier for all crowdsourcing research [ 8 ].

Regulatory and ethical issues

Despite Thawrani et al.’s and others’ concerns that crowdsourcing could compromise anonymity, other authors were concerned that the anonymity of crowdsourcing itself could raise ethical concerns [ 6 ]. Williams identified instances in which crowdsourcing may have resulted in the deaths of bloggers and could be used to falsely identify (or fail to identify) weapons of mass destruction (WMD) in Iran [ 6 ]. As crowdsourcing is a nascent field, there is, to the author’s knowledge, no Research Ethics Board (REB) or Institutional Review Board (IRB) process specific to it, despite it being quite different from other methodologies. Exploitation both of the crowdsourcing worker and of the industries in which the crowdsourcing takes place is possible; thus, REB/IRB review is very important [ 7 , 9 , 29 , 56 ]. Informed consent procedures will differ from general research, as researchers will not have in–person interaction with the participants and will not necessarily be aware of their levels of reading comprehension. Data use policies could represent a unique challenge to informed consent if products are used commercially.

Brabham reports that, while it is currently difficult for crowds to organise themselves against unfair labour practices, “crowdslapping” does happen [ 7 ]. This is when a crowd ‘rebels’ against the competition and is, essentially, a crowd of malicious workers rallying against the project. A recent example of “crowdslapping” is a United Kingdom contest to name a royal research ship, which the Natural Environment Research Council intended to be named after an inspiring figure. The winning name was “Boaty McBoatface”, which was ultimately rejected in favour of “Sir David Attenborough”. However, a remote undersea vehicle was named “Boaty” in memory of the competition [ 57 ].

While not considered crowdsourcing under the working definition in this article, text/data mining has unique ethical issues, especially regarding consent and anonymity, and researchers planning to use this method must address these, through community engagement or other means.

Notable (non–medical) examples of crowdsourcing

A second paper [ 58 ] will review health–related examples of crowdsourcing. Aside from health–related examples, over 50 examples of crowdsourcing were named in the reviews, with purposes ranging from public policy [ 42 ] to mapping isolationist states [ 6 ], assisting with or reporting on human rights issues [ 6 , 18 ], mapping or reporting on the environment [ 6 , 27 ], designing t–shirts [ 1 ] and linking families [ 49 ]. Some notable, interesting and successful examples of crowdsourcing outside the scientific and medical world are described below:

Guardian’s MP expenses

The UK newspaper the Guardian utilised crowdsourcing and a freedom of information request to have the crowd comb through Members of Parliament’s (MPs’) expense claims to look for fraudulent claims. Over 500 000 expense claims were uploaded and over 170 000 documents were analysed within 80 hours alone [ 6 ]. As a result of this activity, British MPs were convicted of fraud, forced to resign or had to issue apologies.

Ushahidi

Ushahidi is an SMS– and web–based platform that was created after the Kenyan election in 2007 to report on election violence [ 6 ]. It is an open–source platform that combines GIS information with time, allowing the crowdsourcing initiator to filter by place and time, which makes it ideal in disaster situations [ 18 ]. It has been used for elections, violence, corruption and disasters, including reporting cholera after the Haitian earthquake, and has been deployed in Kenya, Uganda, Nigeria, Haiti, Libya and Egypt [ 6 , 53 ].

GalaxyZoo

GalaxyZoo is a crowdsourcing project that uses volunteers from around the globe to classify galaxies visually. As of 2013, it had successfully classified nearly 900 000 galaxies using hundreds of thousands of volunteers [ 27 ].

Transcribe Bentham

Transcribe Bentham is a project, based at University College London (UCL), which aims to transcribe the works of Jeremy Bentham, the famous utilitarian philosopher, in order to make them available to all. There were over 12 000 untranscribed manuscripts [ 59 ].

reCAPTCHA

CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”. Luis von Ahn, the father of human computing, extended CAPTCHA into reCAPTCHA by adding an additional word, so that people would need to transcribe two words; the first was a known ‘anti–bot’ word, but the second was from an archive that needed to be digitalised [ 60 ]. In 2009, reCAPTCHA was able to digitalise 20 years of the New York Times’ archives, and 110 years of archives were projected to be completed by the end of 2010 [ 60 ].

Crowdsourcing is a relatively nascent, yet blossoming, field. Because of its infancy, researchers have not yet agreed on its definition or on what does or does not constitute its practice. Despite this, several key qualities have emerged. To be considered crowdsourcing, a task must be distributed by an organisation via a flexible open call for the purpose of obtaining some knowledge, idea or added value, through a process that resembles, but is not, outsourcing. Usually, crowdsourcing employs the Internet, though this is not necessary. A crowd can be formed by both experts and amateurs, and the crowd can be rewarded monetarily or through recognition or skill development. Sometimes the results are aggregated; in other exercises, the best solution is chosen. Applications of crowdsourcing are thus themselves very diverse, and it is not surprising that authors have struggled to provide an all–encompassing definition.

Despite the difficulties in defining it, crowdsourcing is beneficial both in its process and in its results. It is often low cost and rapid, and it can transcend fields and borders, coordinate with machine learning, raise public awareness and produce novel discoveries. Crowdsourcing could be hugely promising in global health, where resources are low and data are scarce, if a concerted effort is made to bring it to scale, especially by marrying the global health community with crowdsourcing and computer science researchers.

Acknowledgments

I would like to thank Igor Rudan for his support in writing this paper.

Funding: None.

Authorship contribution: KW conceived of the paper, conducted the searches, analysed the information and drafted the manuscript.

Competing interests: The author has completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author), and declares no conflicts of interest.


Crowdsourcing Controls: A Review and Research Agenda for Crowdsourcing Controls Used for Macro-tasks


  • Lionel P. Robert Jr.

Part of the book series: Human–Computer Interaction Series (HCIS)


Crowdsourcing—the employment of ad hoc online labor to perform various tasks—has become a popular outsourcing vehicle. Our current approach to crowdsourcing—focusing on micro-tasks—fails to leverage the potential of crowds to tackle more complex problems. To leverage crowds to tackle more complex macro-tasks requires a better comprehension of crowdsourcing controls. Crowdsourcing controls are mechanisms used to align crowd workers’ actions with predefined standards to achieve a set of goals and objectives. Unfortunately, we know very little about the topic of crowdsourcing controls directed at accomplishing complex macro-tasks. To address issues associated with crowdsourcing controls for macro-tasks, this chapter has several objectives. First, it presents and discusses the literature on control theory. Second, this chapter presents a scoping literature review of crowdsourcing controls. Finally, the chapter identifies gaps and puts forth a research agenda to address these shortcomings. The research agenda focuses on understanding how to employ the controls needed to perform macro-tasking in crowds and the implications for crowdsourcing system designers.

Acknowledgements

This book chapter was supported in part by the National Science Foundation [grant CHS-1617820].

Author information

Authors and Affiliations

University of Michigan School of Information, Ann Arbor, USA

Lionel P. Robert Jr.

Corresponding author

Correspondence to Lionel P. Robert Jr.

Editor information

Editors and Affiliations

Eindhoven University of Technology, Eindhoven, The Netherlands

Vassillis-Javed Khan

Xi’an Jiaotong-Liverpool University, Suzhou, China

Konstantinos Papangelis

Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands

Ioanna Lykourentzou

Department of Industrial Design, Eindhoven University of Technology, Eindhoven, The Netherlands

Panos Markopoulos

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Robert, L.P. (2019). Crowdsourcing Controls: A Review and Research Agenda for Crowdsourcing Controls Used for Macro-tasks. In: Khan, VJ., Papangelis, K., Lykourentzou, I., Markopoulos, P. (eds) Macrotask Crowdsourcing. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-030-12334-5_3

DOI: https://doi.org/10.1007/978-3-030-12334-5_3

Published: 07 August 2019

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-12333-8

Online ISBN: 978-3-030-12334-5

eBook Packages: Computer Science, Computer Science (R0)

MINI REVIEW article

This article is part of the Research Topic Horizons in Smart Grids.

Cyber Resilience Methods for Smart Grids against False Data Injection Attacks: Categorization, Review and Future Directions (Provisionally Accepted)

  • National Technical University of Athens, Greece

The final, formatted version of the article will be published soon.

For more efficient monitoring and control of electrical energy, the physical components of conventional power systems are continuously being integrated with information and communication technologies, converting them into smart grids. However, energy digitalization exposes power systems to a wide range of digital risks. The term cyber resilience for electrical grids expands the conventional resilience of power systems, which mainly refers to extreme weather phenomena. Since this is a relatively new term, there is a need to establish a solid conceptual framework. This paper analyzes and classifies the state-of-the-art research methodologies proposed for strengthening the cyber resilience of smart grids. To this end, it categorizes the cyberattacks against smart grids, identifies the vulnerable spots of power system automation, and establishes a common ground about cyber resilience. The paper concludes with a discussion of the limitations of the proposed methods in order to extract useful suggestions for future directions.
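
As a concrete point of reference for the detection methods such a review surveys, the sketch below implements the classical residual-based bad-data test for a linearized (DC) state estimator: a weighted least-squares state estimate is computed and the chi-square statistic of the measurement residuals is compared against a threshold. This is a generic textbook baseline, not the authors' method; the measurement matrix, noise level, and injected attack vector are toy assumptions.

```python
import numpy as np
from scipy import stats

# Toy residual-based bad-data check for a linearized (DC) state estimator,
# a classical baseline against which false data injection (FDI) detectors
# are commonly compared. H, sigma, and the injected attack are assumptions
# made purely for illustration.

H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])              # assumed measurement model z = H x + e
sigma = 0.01                             # assumed measurement noise std dev
R_inv = np.eye(H.shape[0]) / sigma**2    # inverse measurement covariance


def estimate_state(z: np.ndarray) -> np.ndarray:
    """Weighted least-squares estimate of x in z = H x + e."""
    gain = H.T @ R_inv @ H
    return np.linalg.solve(gain, H.T @ R_inv @ z)


def is_suspicious(z: np.ndarray, alpha: float = 0.01) -> bool:
    """Chi-square test on the weighted residual norm J(x_hat)."""
    x_hat = estimate_state(z)
    r = z - H @ x_hat
    J = float(r @ R_inv @ r)
    dof = H.shape[0] - H.shape[1]        # m - n degrees of freedom
    return J > stats.chi2.ppf(1 - alpha, dof)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = np.array([1.0, 0.5])
    z_clean = H @ x_true + rng.normal(0.0, sigma, size=H.shape[0])
    z_attacked = z_clean + np.array([0.0, 0.0, 0.2])   # crude, detectable injection
    print(is_suspicious(z_clean))      # expected: False (no alarm)
    print(is_suspicious(z_attacked))   # expected: True (alarm raised)
```

A stealthy attacker who can inject a vector lying in the column space of H evades exactly this kind of residual test, which is one reason the surveyed literature moves beyond it to observer-based and learning-based detection schemes.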

Keywords: smart grids, cyber-physical security, cyber resilience, false data injection attacks, categorization, observers, artificial intelligence

Received: 07 Mar 2024; Accepted: 15 Apr 2024.

Copyright: © 2024 Syrmakesis and Hatziargyriou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mr. Andrew D. Syrmakesis, National Technical University of Athens, Athens, Greece

COMMENTS

  1. Crowdsourcing: A Review and Suggestions for Future Research

    The authors therefore investigate the existing body of knowledge on crowdsourcing systematically through a penetrating review in which the strengths and weaknesses of this literature stream are presented clearly and then future avenues of research are set out. The review is based on 121 scientific articles published between January 2006 and ...

  2. Crowdsourcing: A Review and Suggestions for Future Research

    The review recognizes that crowdsourcing is ingrained in two mainstream disciplines within the broader subject matter of innovation and management: (1) open innovation; and (2) co-creation. The ...

  3. Crowdsourcing: A Review and Suggestions for Future Research

    This paper thoroughly reviews the literature to identify both areas of saturation and gaps, with a focus on the strategic organizational context, and offers a road map for future research that brings together fine-grained insights from existing crowdsourcing studies towards developing a high-level, macro-perspective of the crowdsourcing phenomenon and its strategic impact.

  4. PDF Crowdsourcing: A Review and Suggestions for Future Research

    crowdsourcing from an Input-Process-Output (I-P-O) perspective. We, therefore, set out the following objectives: (1) to provide a systematic review of the process of crowdsourcing; and (2) to make suggestions for future research. The review is structured as follows. The second section contains an original framework for organizing the literature ...

  5. PDF Crowdsourcing: a review and suggestions for future research

    the following objectives: (i) to provide a systematic review of the process of crowdsourcing; and (ii) to make suggestions for future research. The review is structured as follows. Section 2 contains an original framework for organizing the literature, and the method employed to conduct the review is described in Section 3.

  6. Crowdsourcing: A Review and Suggestions for Future Research

    Abstract As academic and practitioner studies on crowdsourcing have been building up since 2006, the subject itself has progressively gained in importance within the broad field of management. No s...

  7. Crowdsourcing as a Tool for Research: Methodological, Fair, and

    It is too early to tell whether or not more researchers turned to crowdsourcing during the pandemic as it became more difficult if not impossible to use face-to-face methods, or what long-term effects—if any—such a shift could have on academic research and publishing. Moreover, future research could assess whether there was an uptick in the ...

  8. PDF Crowdsourcing: A Systematic Literature Review and Future Research Agenda

    outside the organisations. This paper reviews the literature on crowdsourcing, analysing the main trends in the area. To do so, the paper uses different bibliometric techniques, as well as an in-depth literature review. The results indicate that the study of crowdsourcing has moved from a conceptual review to increasingly practical applications.

  9. Crowdsourcing as a strategic IS sourcing phenomenon: Critical review

    Crowdsourcing: A Review and Suggestions for Future Research (Ghezzi et al., 2018); International Journal of Management Reviews; general review; 121 articles. Purpose: conduct a review of existing knowledge relating to crowdsourcing and investigate crowdsourcing from an input-process-output (I-P-O) perspective. Framework: input-process-output.

  10. Crowdsourcing: A Review and Suggestions for Future Research

    Abstract: As academic and practitioner studies on crowdsourcing have been building up since 2006, the subject itself has progressively gained in importance within the broad field of management. No systematic ...

  11. Crowdsourcing: A Review and Suggestions for Future Research

    The authors therefore investigate the existing body of knowledge on crowdsourcing systematically through a penetrating review in which the strengths and weaknesses of this literature stream are presented clearly and then future avenues of research are set out. The review is based on 121 scientific articles published between January 2006 and ...

  12. Crowdsourcing: A Review and Suggestions for Future Research

    As academic and practitioner studies on crowdsourcing have been building up since 2006, the subject itself has progressively gained in importance within the broad field of management. No systematic review on the topic has so far appeared in management journals, however; moreover, the field suffers from ambiguity in the topic's definition, which ...

  13. Crowdsourcing as a Strategic IS Sourcing Phenomenon: Critical Review

    The proposed crowdsourcing in science typology matrix may be a starting point for future research and decision-making by practitioners regarding the choice of a specific type of crowdsourcing in ...

  14. "Crowdsourcing" ten years in: A review

    First coined by Howe in 2006, the field of crowdsourcing has grown exponentially. Despite its growth and its transcendence across many fields, the definition of crowdsourcing has still not been agreed upon, and examples are poorly indexed in peer-reviewed literature. Many examples of crowdsourcing have not been scaled-up past the pilot phase.

  15. Crowdsourcing: A Review and Suggestions for Future Research

    As academic and practitioner studies on crowdsourcing have been building up since 2006, the subject itself has progressively gained in importance within the broad field of management. Published: 01 April 2018.

  16. Evaluation on crowdsourcing research: Current status and future

    This paper thoroughly reviews the literature to identify both areas of saturation and gaps, with a focus on the strategic organizational context, and offers a road map for future research that brings together fine-grained insights from existing crowdsourcing studies towards developing a high-level, macro-perspective of the crowdsourcing phenomenon and its strategic impact.

  17. Crowdsourcing Controls: A Review and Research Agenda for Crowdsourcing

    This section outlines a research agenda as a roadmap for future research by giving specific suggestions on how to shift toward the study of crowdsourcing controls for macro-tasking. Our research agenda is based on three assumptions: 1. Macro-tasks are not decomposed when assigned to a crowd; therefore, they require the crowd to decompose the task.

  18. Understanding crowdsourcing projects: A review on the key design

    First, the elaboration of design-related decisions along the four crowdsourcing dimensions contributes to a unitary understanding of the process of designing a crowdsourcing contest. Future research can thus benefit from this refined understanding as further insights on crowdsourcing can be clearly positioned within this framework, resulting in ...

  19. Crowdsourcing as a strategic IS sourcing phenomenon: Critical review

    Guided by our analysis, we offer a road map for future research that brings together fine-grained insights from existing crowdsourcing studies towards developing a high-level, macro-perspective of ...

  20. Crowdsourcing as a Strategic IS Sourcing Phenomenon: Critical Review

    This is followed by an in-depth critical analysis of selected studies published in top IS and general management journals to date. Through this review, we identify key themes that emerge out of the crowdsourcing literature and synthesize the literature to chart a more focused research path moving forward.

  21. Crowdsourcing and open innovation: a systematic literature review, an

    A comprehensive, systematic, and objective review of academic research is carried out to help shed light on the relationship between OI and crowdsourcing and to provide a qualitative analysis of the emerging and trending themes. In recent years, Open Innovation (OI) and crowdsourcing have been very popular topics in the innovation management literature, attracting significant interest and ...
