Leveraging LLM For Synchronizing Information Across Multilingual Tables
Published in Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025)
Humans continuously make new discoveries, and understanding the temporal sequence of events leading to these breakthroughs is essential for advancing science and society. This ability to reason over time allows us to identify future steps and understand the effects of financial and political decisions on our lives. However, large language models (LLMs) are typically trained on static datasets, limiting their ability to perform effective temporal reasoning. To assess the temporal reasoning capabilities of LLMs, we present the TRANSIENTTABLES dataset, which comprises 3,971 questions derived from over 14,000 tables, spanning 1,238 entities across multiple time periods. We introduce a template-based question-generation pipeline that harnesses LLMs to refine both the templates and the generated questions. Additionally, we establish baseline results with state-of-the-art LLMs to create a benchmark, and we introduce novel modeling strategies centered on task decomposition that enhance LLM performance.
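To make the template-based question-generation step concrete, here is a minimal sketch, not the authors' released code: templates with slots are instantiated from table cells, then passed through an LLM refinement step. The `EntitySnapshot` schema, the `refine_with_llm` helper, and the template wording are all illustrative assumptions, not details taken from the paper.

```python
# A hypothetical sketch of template-based question generation over
# temporal tables; names and schema are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class EntitySnapshot:
    entity: str       # e.g. a company or country the table describes
    year: int         # time period of this table snapshot
    attributes: dict  # column name -> cell value

# Hand-written templates whose slots are filled from table cells.
# Multi-period templates (like the second) would pair snapshots of
# the same entity across different years.
TEMPLATES = [
    "What was the {attribute} of {entity} in {year}?",
    "How did the {attribute} of {entity} change between {year_a} and {year_b}?",
]


def instantiate(snapshots: list[EntitySnapshot]) -> list[str]:
    """Fill the single-period template with (entity, attribute, year) tuples."""
    questions = []
    for snap in snapshots:
        for attr in snap.attributes:
            questions.append(
                TEMPLATES[0].format(attribute=attr, entity=snap.entity, year=snap.year)
            )
    return questions


def refine_with_llm(question: str) -> str:
    """Placeholder for the LLM refinement step the abstract describes:
    a model would rephrase the templated question for fluency and
    naturalness. Here it simply returns the input unchanged."""
    return question


if __name__ == "__main__":
    rows = [EntitySnapshot("ACME Corp", 2019, {"revenue": "$1.2B", "CEO": "J. Doe"})]
    for q in instantiate(rows):
        print(refine_with_llm(q))
```

The design point the sketch illustrates is the two-stage split: deterministic template filling guarantees coverage of entities and time periods, while the LLM pass handles fluency, so question quality does not depend on the model inventing facts.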
Recommended citation: Khincha, Siddharth, Tushar Kataria, Ankita Anand, Dan Roth, and Vivek Gupta. "Leveraging LLM For Synchronizing Information Across Multilingual Tables." In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025). arXiv:2504.02559. https://aclanthology.org/2025.naacl-long.329/
