Humans continuously make new discoveries, and understanding the temporal sequence of
events leading to these breakthroughs is essential for advancing science and society.
This ability to reason over time allows us to anticipate future developments
and to understand the effects of financial and political decisions on our lives.
However, large language models (LLMs) are typically trained on static datasets,
limiting their ability to perform effective temporal reasoning.
To assess the temporal reasoning capabilities of LLMs, we present the TransientTables dataset,
which comprises 3,971 questions derived from over 14,000 tables, spanning 1,238 entities across multiple time periods.
We introduce a template-based question-generation pipeline that harnesses LLMs to refine both templates and questions.
Additionally, we benchmark state-of-the-art LLMs on TransientTables to establish baseline results.
Finally, we propose novel modeling strategies centered on task decomposition that enhance LLM performance.