Best-Worst Scaling aka Maximum Difference Scaling (MaxDiff)


Contact:
Saif M. Mohammad (saif.mohammad@nrc-cnrc.gc.ca)


Obtaining real-valued annotations poses several challenges. Respondents face a higher cognitive load when asked for real-valued scores than when simply classifying terms into pre-chosen discrete classes. It is also difficult for an annotator to remain consistent across their own annotations, and the same score may correspond to different degrees of the property in the minds of different annotators. One can overcome these problems by giving annotators pairs of terms and asking which is stronger in terms of association with the property of interest (a comparative approach); however, that requires a much larger number of annotations (on the order of N², where N is the number of instances to be annotated).

Best-Worst Scaling (BWS), also sometimes referred to as Maximum Difference Scaling (MaxDiff), is an annotation scheme that exploits this comparative approach (Louviere and Woodworth, 1990; Cohen, 2003; Louviere et al., 2015). Annotators are given four items (a 4-tuple) and asked which item is the Best (highest in terms of the property of interest) and which is the Worst (lowest in terms of the property of interest). These annotations can then be easily converted into real-valued scores of association between the items and the property, which in turn allows the items to be ranked by their association with the property of interest.
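
For illustration, here is a minimal Python sketch of the simple counting procedure for converting BWS judgments into scores: an item's score is the fraction of times it was chosen as the Best minus the fraction of times it was chosen as the Worst, a value in [-1, 1]. The annotation format and function name here are illustrative assumptions, not the interface of the released scripts (see below).

    from collections import defaultdict

    def bws_scores(annotations):
        """Convert BWS judgments into real-valued scores in [-1, 1].
        `annotations` is a list of (items, best, worst) triples, where
        `items` is the 4-tuple shown to the annotator and `best`/`worst`
        are the items the annotator selected from it."""
        best = defaultdict(int)    # times chosen as Best
        worst = defaultdict(int)   # times chosen as Worst
        shown = defaultdict(int)   # times shown in any 4-tuple
        for items, b, w in annotations:
            for item in items:
                shown[item] += 1
            best[b] += 1
            worst[w] += 1
        # Simple counting procedure: %Best minus %Worst.
        return {item: (best[item] - worst[item]) / shown[item]
                for item in shown}

    # Example: two judgments over two 4-tuples.
    anns = [(("good", "okay", "bad", "terrible"), "good", "terrible"),
            (("great", "good", "okay", "bad"), "great", "bad")]
    print(bws_scores(anns))
    # {'good': 0.5, 'okay': 0.0, 'bad': -0.5, 'terrible': -1.0, 'great': 1.0}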

We show that the ranking of terms remains remarkably consistent even when the annotation process is repeated with a different set of annotators (Kiritchenko and Mohammad, 2016, 2017). See the papers below for details on the reliability of the annotations and a comparison of BWS with rating scales.
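
One way such reliability can be quantified is split-half reliability: randomly split the annotations for each 4-tuple into two halves, compute scores from each half independently, and correlate the two resulting rankings; averaging over many random splits gives a stable estimate. The following is a rough sketch under the same assumed annotation format as above (the released scripts below are the reference implementation); it uses SciPy's Spearman rank correlation.

    import random
    from collections import defaultdict
    from scipy.stats import spearmanr

    def split_half_reliability(annotations, trials=100, seed=0):
        """Average Spearman correlation between the term rankings
        produced from two random halves of the annotations."""
        rng = random.Random(seed)
        rhos = []
        for _ in range(trials):
            # Group judgments by 4-tuple, then split each group in two
            # so both halves cover (roughly) the same tuples.
            by_tuple = defaultdict(list)
            for ann in annotations:
                by_tuple[ann[0]].append(ann)
            half_a, half_b = [], []
            for group in by_tuple.values():
                rng.shuffle(group)
                mid = len(group) // 2
                half_a.extend(group[:mid])
                half_b.extend(group[mid:])
            scores_a = bws_scores(half_a)  # from the sketch above
            scores_b = bws_scores(half_b)
            common = sorted(set(scores_a) & set(scores_b))
            rho, _ = spearmanr([scores_a[t] for t in common],
                               [scores_b[t] for t in common])
            rhos.append(rho)
        return sum(rhos) / len(rhos)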


BWS Datasets:

Papers:

Capturing Reliable Fine-Grained Sentiment Associations by Crowdsourcing and Best-Worst Scaling. Svetlana Kiritchenko and Saif M. Mohammad. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. June 2016. San Diego, CA.
Paper (pdf)   BibTeX   Presentation

Best-Worst Scaling More Reliable than Rating Scales: A Case Study on Sentiment Intensity Annotation. Svetlana Kiritchenko and Saif M. Mohammad. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2017). 2017. Vancouver, Canada.
Paper (pdf)   BibTeX   Poster   All Data   Rating Scale Questionnaire   BWS Questionnaire
Rating Scale Annotations   Scores Obtained by Rating Scale   BWS Annotations   Scores Obtained by BWS

Scripts (last updated May 2017): Code to assist with Best-Worst Scaling annotations can be downloaded here. It includes a script to produce 4-tuples with the desired term distributions, a script to produce real-valued scores from best-worst annotations, and a script to calculate the split-half reliability of the annotations.
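
For a sense of what the tuple-generation step involves, here is a hedged sketch of one simple strategy (the downloadable script above is the reference implementation and supports the desired term distributions mentioned there): repeatedly shuffle the term list and chop it into consecutive chunks of four, so that every term appears in exactly a fixed number of tuples and never twice within a tuple.

    import random

    def generate_tuples(terms, appearances=8, seed=0):
        """Generate 4-tuples in which every term appears in exactly
        `appearances` tuples and no term occurs twice within a tuple.
        Assumes len(terms) is divisible by 4."""
        rng = random.Random(seed)
        tuples = []
        for _ in range(appearances):
            shuffled = list(terms)
            rng.shuffle(shuffled)
            # Chunks of a shuffled permutation contain distinct terms.
            tuples.extend(tuple(shuffled[i:i + 4])
                          for i in range(0, len(shuffled), 4))
        return tuples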


Last updated: July 2017.