Temporal Summarization Track

News events such as protests, accidents, or natural disasters present a unique information access problem for which traditional approaches fall short. Immediately after an event, for example, the corpus may contain little relevant content; even when relevant content becomes available a few hours later, it is often inaccurate or highly redundant. At the same time, crisis events are exactly the scenario in which users urgently need information, especially if they are directly affected by the event.

The goal of the TREC Temporal Summarization Track is to develop systems for efficiently monitoring the information associated with an event over time. Specifically, we are interested in developing systems that can broadcast short, relevant, and reliable sentence-length updates about a developing event. The track has the following four main aims:
  • To develop algorithms which detect sub-events with low latency,
  • To model information reliability in the presence of a dynamic corpus,
  • To understand and address the sensitivity of text summarization algorithms in an online, sequential setting, and
  • To understand and address the sensitivity of information extraction algorithms in dynamic settings.
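The kind of system described above can be sketched as a simple online filter: for each incoming sentence, test relevance to the event query and novelty against updates already broadcast, and emit only sentences that pass both tests. The thresholds and the Jaccard similarity measure below are illustrative assumptions, not the track's evaluation method.

```python
def tokens(text):
    """Lowercased bag-of-words token set for a piece of text."""
    return set(text.lower().split())


class UpdateSelector:
    """Minimal online update selector (illustrative sketch).

    Emits a sentence only if it is sufficiently similar to the event
    query (relevance) and sufficiently dissimilar to everything already
    emitted (novelty). Thresholds are hypothetical defaults.
    """

    def __init__(self, query, relevance_threshold=0.1, novelty_threshold=0.5):
        self.query = tokens(query)
        self.rel_t = relevance_threshold
        self.nov_t = novelty_threshold
        self.emitted = []  # token sets of updates already broadcast

    @staticmethod
    def _jaccard(a, b):
        # Jaccard similarity of two token sets (0.0 for two empty sets).
        return len(a & b) / len(a | b) if a | b else 0.0

    def consider(self, sentence):
        """Return True (and record the sentence) if it should be broadcast."""
        toks = tokens(sentence)
        # Relevance: require overlap with the event query.
        if self._jaccard(toks, self.query) < self.rel_t:
            return False
        # Novelty: reject near-duplicates of earlier updates.
        if any(self._jaccard(toks, prev) >= self.nov_t for prev in self.emitted):
            return False
        self.emitted.append(toks)
        return True
```

A real track submission would of course replace the word-overlap tests with stronger retrieval and redundancy models, but the sequential emit-or-suppress decision structure is the part the track's aims are probing.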


Nov 20, 2015: Thank you for participating in this year's track! And thanks to participants over the last three years. The track has been succeeded by the Real-Time Summarization Track. Check it out (and its server tools) and sign up to participate!


June 10, 2015: Test events, guidelines, and metrics available on downloads page.

April 23, 2015: The organizers are actively developing new topics for this year's evaluation. Please join the Google Group for updates!

November 21, 2014: Thanks for attending TREC! Stay tuned for more concrete information about the track changes. Also, the Google Group has changed its name in order to be year-agnostic.

July 7, 2014: TS-specific corpus subset released. This uses the same gpg key (and access agreement) as the full corpus. You are welcome to submit runs using either the full corpus or this sample.

June 12, 2014: Draft guidelines, metrics, and test topics released. All topics from 2013 may be used as training.

April 15, 2014: We are currently in topic development and will release guidelines for the 2014 track soon.

October 22, 2013: Evaluation results have been released. Evaluation script available here. Also see the Match View Interface.

August 28, 2013: Participants are no longer required to submit "internal only" runs. Guidelines updated. More information here.

July 12, 2013: Training event data published. More information here.

July 5, 2013: Draft metrics published. More information here.

July 1, 2013: Guidelines updated. More information here.

June 22, 2013: Test events and updated guidelines released. More information here.

April 22, 2013: Draft guidelines updated with corpus information.

April 21, 2013: Draft guidelines updated with output format.

March 25, 2013: Draft guidelines posted.

November 2012: For now, enjoy some slides from our planning session. [pdf]