Wednesday, April 27, 2016, 12:30 pm EDT (GMT -04:00)
Speaker: Gaurav Baruah
Abstract: Nugget-based evaluation requires assessors to judge whether a given nugget is found in a given piece of text. In TREC tracks such as Temporal Summarization and Question Answering, assessors may need to keep track of over 100 nuggets per search topic, and matching these sets of nuggets to run submissions is time-consuming and tedious. In this talk, we present our work on estimating the potential for assistive user interfaces to reduce assessors' nugget-matching effort. We iteratively build upon different matching strategies, using continuous active learning, to help assessors match nuggets with sentences. The proposed matching strategies may simplify assessment for secondary assessors by alleviating the memory and information overload caused by a large number of nuggets. Across four nugget-based test collections, we found that our proposed matching strategies have the potential to reduce assessor effort without hurting the quality of the collected judgements.
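The sketch below is an illustrative assumption, not the speaker's actual system: it shows how a continuous active learning loop could prioritize candidate nugget-sentence pairs for an assessor, retraining a simple classifier after each judgement. The toy data, the TF-IDF pair representation, and the `assessor_judges` stand-in are all hypothetical.

```python
# A minimal sketch (not the speaker's method) of active-learning-assisted
# nugget matching: train on judged pairs, score unjudged pairs, and route
# the most likely match to the assessor next.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

nuggets = [
    "power outage reported in the city",
    "rescue teams deployed to the coast",
]
sentences = [
    "Officials confirmed a power outage across downtown areas.",
    "Volunteers distributed food at local shelters.",
    "Rescue crews were sent to coastal towns after the storm.",
    "The mayor held a press conference on Friday.",
]

# Candidate pairs: every (nugget, sentence) combination, represented by
# concatenating the two texts so one classifier can score each pair.
pairs = [(n, s) for n in nuggets for s in sentences]
texts = [n + " " + s for n, s in pairs]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

labels = {}           # pair index -> assessor judgement (1 = match, 0 = not)
labels[0], labels[3] = 1, 0   # assume two pairs were judged up front as seeds

def assessor_judges(pair_index):
    """Stand-in for a human judgement; a real loop would ask the assessor."""
    nugget, sentence = pairs[pair_index]
    return int(any(word in sentence.lower() for word in nugget.split()[:2]))

# Continuous active learning loop: retrain on all judged pairs, then send
# the highest-scoring unjudged pair to the assessor.
while len(labels) < len(pairs):
    judged = sorted(labels)
    clf = LogisticRegression()
    clf.fit(X[judged], [labels[i] for i in judged])

    unjudged = [i for i in range(len(pairs)) if i not in labels]
    scores = clf.predict_proba(X[unjudged])[:, 1]
    next_pair = unjudged[max(range(len(unjudged)), key=lambda k: scores[k])]
    labels[next_pair] = assessor_judges(next_pair)

print(sum(labels.values()), "matches found out of", len(pairs), "pairs")
```

In such a loop, likely matches surface early, so an assessor can confirm them while the relevant nugget is still fresh in memory rather than rescanning the full nugget list for every sentence.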