When to stop making relevance judgments? A study of stopping methods for building information retrieval test collections
Please use this identifier to cite or link to this item:
http://hdl.handle.net/10347/24650
Item metadata
Title: When to stop making relevance judgments? A study of stopping methods for building information retrieval test collections
Author(s): Losada Carril, David Enrique; Parapar, Javier; Barreiro, Álvaro
Center/Department: Universidade de Santiago de Compostela. Centro de Investigación en Tecnoloxías da Información; Universidade de Santiago de Compostela. Departamento de Electrónica e Computación
Date: 2019
Publisher: Wiley
Bibliographic citation: Losada, D.E., Parapar, J. and Barreiro, A. (2019), When to stop making relevance judgments? A study of stopping methods for building information retrieval test collections. Journal of the Association for Information Science and Technology, 70: 49-60. https://doi.org/10.1002/asi.24077
Abstract: In information retrieval evaluation, pooling is a well-known technique to extract a sample of documents to be assessed for relevance. Given the pooled documents, a number of studies have proposed different prioritization methods to adjudicate documents for judgment. These methods follow different strategies to reduce the assessment effort. However, there is no clear guidance on how many relevance judgments are required to create a reliable test collection. In this article we investigate and further develop methods to determine when to stop making relevance judgments. We propose a highly diversified set of stopping methods and provide a comprehensive analysis of the usefulness of the resulting test collections. Some of the stopping methods introduced here combine innovative estimates of recall with time series models used in financial trading. Experimental results on several representative collections show that some stopping methods can reduce the assessment effort by up to 95% and still produce a robust test collection. We demonstrate that the reduced set of judgments can be reliably employed to compare search systems using disparate effectiveness metrics such as Average Precision, NDCG, P@100, and Rank Biased Precision. With all these measures, the correlations found between full-pool rankings and reduced-pool rankings are very high.
Description: This is the peer reviewed version of the following article: David E. Losada, Javier Parapar and Alvaro Barreiro (2019) When to Stop Making Relevance Judgments? A Study of Stopping Methods for Building Information Retrieval Test Collections. Journal of the Association for Information Science and Technology, 70 (1), 49-60, which has been published in final form at https://doi.org/10.1002/asi.24077. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.
Publisher's version: https://doi.org/10.1002/asi.24077
URI: http://hdl.handle.net/10347/24650
DOI: 10.1002/asi.24077
E-ISSN: 2330-1643
Rights: © 2018 ASIS&T. Published by Wiley. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.
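The abstract reports very high correlations between the system rankings obtained with the full pool of judgments and with the reduced pool. Agreement between two such rankings is commonly quantified with a rank correlation coefficient such as Kendall's tau. The sketch below (not the authors' code; the systems and scores are invented for illustration) computes Kendall's tau-a between two lists of effectiveness scores for the same set of systems:

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall's tau-a between two score lists over the same set of systems.

    +1.0 means the two pools rank the systems identically;
    -1.0 means they rank them in exactly opposite order.
    """
    n = len(scores_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # A pair is concordant when both score lists order systems i and j the same way.
        sign = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical MAP scores for five systems, evaluated with the full pool
# and with a reduced (early-stopped) pool of relevance judgments.
full_pool = [0.31, 0.28, 0.25, 0.22, 0.19]
reduced_pool = [0.30, 0.27, 0.26, 0.21, 0.20]
print(kendall_tau(full_pool, reduced_pool))  # 1.0: identical system ordering
```

A tau close to 1 on real data, as reported in the abstract across Average Precision, NDCG, P@100, and Rank Biased Precision, indicates that the reduced judgment set preserves the relative ordering of the search systems.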
Collections
- CiTIUS-Artigos [192]
- EC-Artigos [176]