Paul Hemsen

Paderborn University

Crowdsourcing platforms for paid work

A literature review from a personnel economics and psychology perspective

Paul Hemsen, Paderborn University
Julian Schulte, Bielefeld University
Katharina Schlicher, Bielefeld University

Crowdsourcing describes a type of participative online activity in which an organization proposes a task to a group of individuals via a flexible open call (Estellés-Arolas & González-Ladrón-de-Guevara, 2012).

This study focuses on a literature review of crowdsourcing platforms especially for paid work on so-called “crowdworking platforms”. The main contributions are the overview of empirical studies on the subject from a personnel economics and psychological perspective and the development of a comprehensive Input-Process-Output Model (IPO-Model) from the workers’ perspective. In the remainder of this abstract, we will discuss the focus, theoretical background and design of our systematic review.

Research on crowd work is heterogeneous in nature and driven by multiple disciplines. Not surprisingly, researchers have already conducted literature reviews (e.g. Chittilappilly, Chen, & Amer-Yahia, 2016; Ghezzi, Gabelloni, Martini, & Natalicchio, 2017; Kittur et al., 2013; Zhao & Zhu, 2014). However, these focus broadly on the crowdsourcing concept and do not differentiate whether it constitutes digital gainful work or unpaid voluntary participation. Such reviews usually study what crowdsourcing is, how it differs from similar or related concepts, and how crowdsourcing works (conceptualization focus) (Zhao & Zhu, 2014). They also discuss how crowdsourcing is applied in different situations and for different purposes.

This review contributes to the previous literature by evaluating existing research systematically and describing empirical connections between constructs by grouping similar research into clusters. In contrast to past research, which approaches crowdsourcing holistically, we focus specifically on crowd work, i.e. crowdsourcing in which contributors are paid and enter a particular employment relationship with the platform. We are particularly interested in the outcomes of this relationship for the individual crowd worker.

Since crowdworking has important similarities to other types of work, both typical (e.g. permanent and temporary employment) and atypical (e.g. teleworking, freelancing, self-employment), extant research in personnel economics and personnel psychology can shed light on the factors that might influence crowdworking initiatives.

We reviewed 91 empirical articles, systematically codified these studies and developed an IPO-Model from a personnel economics and psychological perspective. IPO-Models are widely used in the sciences to describe processes in systems analysis and mechanisms of action in research.

Studies were identified by applying a number of search terms: crowd work*, crowdwork*, crowd sourc*, crowdsource*, platform economy, gig economy or crowd employment. The most important databases for both psychology and economics were searched, namely PsycINFO, EconLit and Business Source Complete. Due to a high number of hits, the search was narrowed down to empirical studies. Additionally, we applied a backward and forward search strategy to the references of key articles. This search resulted in 1173 primary studies overall. We selected relevant primary studies by applying three selection criteria. The studies had to (1) report research on the construct of crowdworking, (2) show an emphasis on personnel economic and psychological research questions, and (3) collect empirical data.

As a result, 91 studies remained and were systematically codified by publication data and by information about the sample, crowdworking platform, research design, methodology and findings. An iterative bottom-up approach then aggregated these codified variables into clusters based on similarity and content-related proximity. The clusters are divided into the three stages of the IPO-Model, namely input, process and output.

The input variables were grouped into seven clusters: monetary incentives; nonmonetary incentives; task design; market-related variables; workers’ qualification/profile; workers’ traits/characteristics; and individual working history on the platform. The input variables were modeled in primary studies to explain variation in specific process or output variables.

The output variables were grouped into six clusters: job satisfaction, worker commitment towards the platform, participation in crowd work, qualitative performance, quantitative performance and employability of the crowd worker.

Process variables, which potentially moderate or mediate the relationship between input and output variables, were grouped into six clusters: workers’ intrinsic and extrinsic motivation; workers’ affect; workers' perceived competence; invested effort for task completion; workers’ trust towards the platform; and workers’ perceived fairness of the processes on crowdworking platforms.

Further analyses of the literature expand the IPO-Model with information about statistically significant and non-significant relations. Hence, our review shows how often a research question has been addressed and which statistical effects emerged across the studies.

Overall, our review provides a roadmap for future research on the topic of crowd work as digital gainful work. We identify and quantify the state of the art in current research in personnel economics and psychology on the topic of crowd work. Our review has important implications for how to enhance factors that are critical to workers and platforms alike, such as the attraction, motivation and commitment of self-employed workers on crowdworking platforms.


  • Chittilappilly, A. I., Chen, L., & Amer-Yahia, S. 2016. A Survey of General-Purpose Crowdsourcing Techniques. IEEE Transactions on Knowledge and Data Engineering, 28(9): 2246–2266.
  • Estellés-Arolas, E., & González-Ladrón-de-Guevara, F. 2012. Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2): 189–200.
  • Ghezzi, A., et al. 2017. Crowdsourcing: A review and suggestions for future research. International Journal of Management Reviews.
  • Kittur, A., et al. 2013. The future of crowd work. Proceedings of the 2013 Conference on Computer Supported Cooperative Work: 1301–1318.
  • Zhao, Y., & Zhu, Q. 2014. Evaluation on crowdsourcing research: Current status and future direction. Information Systems Frontiers, 16(3): 417–434.


Monetary incentivized ratings on crowdsourcing platforms for paid work

Paul Hemsen, Paderborn University

This study focuses on monetary incentivized ratings (e.g. five-star ratings) for self-employed workers on so-called “crowdworking platforms”, i.e. crowdsourcing platforms for paid work. The main question is whether particular monetary incentivized ratings can support platforms in motivating and committing workers while at the same time allowing workers to earn a sufficient and fair regular wage. Empirical evidence comes from new data gained from a questionnaire survey conducted in 2018 among some 600 workers on three German platforms. I would like to present the findings of the study at ILERA. The data are currently being processed. In the remainder of this abstract, I will discuss the theoretical background and present first descriptive findings from the survey.

Like other organizations, crowdworking platforms need to attract, motivate and commit workers. Platforms that coordinate highly skilled tasks (e.g. designing, testing or texting platforms) are particularly dependent on skilled workers. These workers, in turn, are not bound to one platform only and are free to leave or not contribute to a particular platform.

Findings from the survey support these assumptions. A total of 597 workers from three platforms (two microtask platforms and one texting platform) were asked about various aspects of their work and their personal background. The workers are highly educated (46.4% hold an academic degree, 41.54% completed vocational training) and are therefore potentially qualified for different tasks. They are active on 2.6 platforms on average, with an average membership of 3 years on at least one of the three surveyed platforms.

First results of the survey seem to indicate that workers’ expectations of their crowdwork activities are not met by the present conditions. Workers report the following reasons for doing crowdwork: better coordination of work and personal life; a source of income; improvement of their financial situation; and performing interesting tasks. When we compare expectations and actual conditions, many workers seem to miss an appropriate balance between their effort and the income they receive, as well as a fair appraisal of the results they deliver.

Platforms with a need for specific skills are well advised to take these unmet expectations into consideration, as they may harm worker motivation and commitment. In particular, short-term, narrow, task-oriented compensation systems on these platforms are a potential cause of unmet expectations.

More long-term oriented, monetary incentivized ratings may be favorable to both platforms and workers. Such ratings assign a particular predefined rating (e.g. stars or levels) or status level to each registered worker, based on experience, measured performance or subjective appraisals by the platform, peers or clients. This approach is supported in particular by the concept of standards in rank-order tournaments (Lazear & Gibbs, 2009) and by goal-setting theory (Locke & Latham, 2002). Such ratings may also combine different rewards, such as pay or access to particular tasks.

Today, only a few crowdworking platforms have implemented such ratings. Examples include the testing platform Applause; texting platforms such as Textbroker; and designing platforms such as 99Designs, DesignenLassen, Fotolia, AdobeStock and iStockphoto. On these platforms, pay is still attached to the execution of a particular task of a particular quality, but it also depends on the worker’s prior achievement as measured by the rating or status level. Since other rewards, such as access to an extended task pool, may also be differentiated by level, a monetary incentivized rating is potentially able to address extrinsic motivation (for instance through additional monetary compensation and reputation concerns) (Bayus, 2010; Brabham, 2008; Leimeister, Huber, Bretschneider, & Krcmar, 2009) as well as intrinsic motivation (for instance through self-satisfaction and personal development) (Chittilappilly, Chen, & Amer-Yahia, 2016). Yet the existing empirical literature remains largely unclear as to whether such monetary compensated ratings are indeed effective in increasing work participation, performance and workers’ commitment towards the platform (Goes, Guo, & Lin, 2016).

The survey also includes, for the first time, information on workers’ perception of the incentive system: appropriateness of the evaluation and the resulting rewards; transparency; and perceived influence on income, task pool, success on the platform, motivation and recognition by different parties (platform, client, peers). Interestingly, these dimensions were rated more favorably by workers on the one texting platform we surveyed than by workers on the two microtask platforms; the latter do not have a monetary incentivized rating, whereas the former does.

The survey also includes findings on workers’ platform commitment. For example, we find that perceived continuance commitment towards the platform is slightly stronger than affective commitment, which is not surprising given the predominance of monetary motivation for participation. Workers on the texting platform report the highest levels of affective and continuance commitment towards the platform. In line with this, 74% of workers on the texting platform intend to continue this relationship for at least another year, compared to shares of 56% and 62% on the two microtask platforms. Further analysis will show to what extent the rating system explains these findings. In addition, I will analyze how participation and performance differ within and between the performance ratings on different platforms.

This research should help to shed light on the important role of incentive systems, especially monetary incentivized ratings, in this highly flexible working environment. More long-term oriented incentive systems may be favorable to both workers and platforms: platforms may be able to find, motivate and commit more qualified workers, while workers may achieve more appropriate pay levels and access more interesting tasks, which would be an important step towards creating a more desirable digital working environment.


  • Bayus, B. 2010. Crowdsourcing and individual creativity over time: The detrimental effects of past success.
  • Brabham, D. C. 2008. Crowdsourcing as a Model for Problem Solving. Convergence: The International Journal of Research into New Media Technologies, 14(1): 75–90.
  • Chittilappilly, A. I., Chen, L., & Amer-Yahia, S. 2016. A Survey of General-Purpose Crowdsourcing Techniques. IEEE Transactions on Knowledge and Data Engineering, 28(9): 2246–2266.
  • Goes, P. B., Guo, C., & Lin, M. 2016. Do Incentive Hierarchies Induce User Effort?: Evidence from an Online Knowledge Exchange. Information Systems Research, 27(3): 497–516.
  • Lazear, E. P., & Gibbs, M. 2009. Personnel economics in practice (2nd ed.). Hoboken, NJ: Wiley.
  • Leimeister, J. M., et al. 2009. Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition. Journal of Management Information Systems, 26(1): 197–224.
  • Locke, E. A., & Latham, G. P. 2002. Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57(9): 705–717.

