On Thursday, November 28 (9:00-11:00, Room D1.115) there will be presentations by Krisztian Balog (Knowledge Base Acceleration: A First Round of TEA) and Diane Kelly (Development and Evaluation of Search Tasks for Sharing).
Diane Kelly (University of North Carolina at Chapel Hill)
Development and Evaluation of Search Tasks for Sharing
Search tasks are one of the most important components of interactive information retrieval user studies. In most experimental studies, researchers assign search tasks to people in order to study search behavior and evaluate systems. In some cases, search tasks are ancillary to the study purposes but are needed for study participants to exercise systems, while in other cases search tasks act as independent variables. The development of search tasks can be difficult and time-consuming, and often requires specialized knowledge and skills. Search task development is further complicated by the abundance of research demonstrating how variations in search tasks and search task properties can impact searcher behavior. While there have been long-standing calls for the development of standardized task sets, reference tasks and shareable tasks that can be used in information search studies, little effort has been made to address these calls. In this talk, I will present a set of search tasks that were developed for use in interactive information retrieval studies using the cognitive complexity dimension of a well-known learning taxonomy from the field of education. I will describe the general framework used to create the tasks, how research participants went about addressing these tasks, and how they evaluated the tasks and their experiences. Although participants' behaviors varied across task types along a range of measures, including number of queries issued, query diversity, time taken and number of queries without clicks, participants did not report any significant differences in task difficulty or satisfaction. These results provide evidence of the potential usefulness of these tasks, and of the general framework used to develop them, in other IIR studies. The results also call into question the commonly assumed relationship between task difficulty and effort.
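The behavioral measures mentioned above can be computed directly from a session log. The sketch below is a minimal illustration, assuming a hypothetical log format of (query, click count, seconds) tuples; it is not the instrumentation or the diversity definition used in the study (here diversity is simply the fraction of distinct query strings).

```python
def session_measures(session):
    """Compute simple behavioral measures for one search session.

    `session` is a list of (query_string, n_clicks, seconds) tuples --
    a hypothetical log format chosen for illustration only.
    """
    queries = [q for q, _, _ in session]
    return {
        "n_queries": len(queries),
        # Query diversity here = fraction of distinct query strings
        # (one simple proxy; the study's measure may differ).
        "query_diversity": len(set(queries)) / len(queries),
        "n_queries_without_clicks": sum(1 for _, c, _ in session if c == 0),
        "time_taken": sum(t for _, _, t in session),
    }

# Toy session: three queries, one repeated and abandoned without a click.
log = [("fish oil benefits", 2, 40),
       ("fish oil benefits", 0, 15),
       ("omega 3 evidence", 1, 60)]
measures = session_measures(log)
```

With the toy log above, the session yields 3 queries, 1 query without clicks, and 115 seconds of total time.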
Krisztian Balog (University of Stavanger)
Knowledge Base Acceleration: A First Round of TEA
Knowledge bases such as Wikipedia are increasingly being utilized in various information access contexts. It is therefore critical that they rely on the latest information available and are updated as new facts surface. Knowledge base acceleration (KBA) systems seek to help humans maintain and expand knowledge bases by automatically recommending edits based on incoming content streams (news, blogs, and tweets). Following the Theory-Experiments-Application (TEA) methodology, we start by considering a simple abstraction of this complex problem: filtering a time-ordered corpus for documents that are highly relevant to a predefined set of entities. This task can naturally be approached either as a classification or as a ranking problem. We discuss and empirically compare both types of methods in a supervised learning setting, using the benchmarking platform devised by the TREC KBA track. Further insights are obtained through a purpose-built tool developed for the assessment and comparison of KBA systems. Our findings prompt a revision of the evaluation methodology. We conclude by introducing a time-aware evaluation paradigm that is driven by a more realistic usage scenario. The proposed framework provides a reasonable basis for guiding the second TEA round.
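The classification-versus-ranking framing above can be illustrated with a toy sketch: the same relevance scorer is either thresholded (classification) or used to order the stream (ranking). The term-overlap scorer, the entity profile, and the threshold below are illustrative assumptions; actual KBA systems learn richer supervised models over the TREC KBA stream corpus.

```python
def score(doc_text, entity_profile):
    """Toy relevance score: fraction of entity profile terms that appear
    in the document. A stand-in for a learned scoring function."""
    doc_terms = set(doc_text.lower().split())
    profile_terms = set(entity_profile.lower().split())
    return len(doc_terms & profile_terms) / len(profile_terms)

# A tiny time-ordered stream of (timestamp, document text) pairs.
stream = [
    (1, "Balog presents knowledge base acceleration results"),
    (2, "local weather report for the weekend"),
    (3, "new TREC KBA track benchmark for knowledge base acceleration"),
]
profile = "knowledge base acceleration TREC KBA"  # hypothetical entity profile

# Classification view: keep documents whose score clears a threshold.
relevant = [(t, d) for t, d in stream if score(d, profile) >= 0.4]

# Ranking view: order the same documents by score instead of thresholding.
ranked = sorted(stream, key=lambda td: score(td[1], profile), reverse=True)
```

The two views share one scoring function; they differ only in whether a cutoff produces a binary filtering decision or the scores induce an ordering, which is exactly the design choice the supervised comparison examines.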