Special session – Crowdsourcing

Focus and objectives of the special session:

Crowdsourcing has become a valuable tool for studying and evaluating the perceptual quality of systems. It provides an easy, fast, and cost-effective way to access a large number of diverse users. However, due to the highly uncontrolled environment in which crowd-based experiments are conducted, well-established lab test setups and experimental methodologies cannot be deployed without adaptation. These adaptations might involve technical changes to the test setup to support the participants' remote devices, or methodological changes such as the introduction of reliability checks to verify that the experimental task has been understood and executed properly.

In this session, we aim to foster the discussion between researchers in the fields of crowdsourcing and perceptual studies to further close the gap between subjective testing and crowdsourcing. We solicit contributions addressing novel techniques that ease the use of crowdsourcing for studying the perceptual quality of systems, successful and unsuccessful examples of crowdsourced subjective studies that can help derive general best practices or reveal common pitfalls, and methodological approaches that, for example, enable better reproducibility and comparability of crowdsourcing studies.


Special Session Organizers:

– Judith Redi, TU Delft, The Netherlands (j.a.redi /at/ tudelft.nl)

– Matthias Hirth, University of Würzburg, Germany (matthias.hirth /at/ informatik.uni-wuerzburg.de)

– Tim Polzehl, TU Berlin, Germany (tim.polzehl /at/ telekom.de)


Submissions are solicited on all aspects of crowdsourcing including (but not limited to):

– Crowdsourcing as methodology for studying and predicting the perceptual quality of systems

– Test design, including adaptation of classic psychometric and user testing methodologies to a crowdsourcing environment

– Robust data analysis methodologies, quality assurance and unreliability detection, reproducibility of results

– Incentive design and deployment

– Tools and platforms providing enhanced support for crowd-based perceptual quality testing

– Inherent biases, limitations and trade-offs of crowd-centered approaches to the evaluation of perceptual quality of systems

– Pitfalls leading to the failure of previous studies and lessons learned from successful experiments

– Crowdsourcing usage in user behavior analysis and modelling

– Collaborative crowdsourcing and multi-party setup

– Crowd workers, their motivation and representativeness

– Ethical, privacy, or security-related issues with crowdsourcing in user studies


Note: Papers submitted to the special session will undergo the same review process as regular papers, by anonymous and independent reviewers.