Understanding time series data is fundamental to many real-world applications. Recent work explores multimodal large language models (MLLMs) to enhance time series understanding with contextual information beyond numerical signals. This area has grown from 7 papers in 2023 to over 580 in 2025, yet existing methods struggle in real-world settings. We analyze 20 influential works from 2025 across model design, task formulation, and evaluation, and identify critical gaps: methods adapt NLP techniques with limited attention to core time series properties; tasks remain restricted to traditional prediction and classification; and evaluations emphasize benchmarks over robustness, interpretability, or decision relevance. We argue that achieving time series reasoning requires rethinking model design, task formulation, and evaluation together. We define time series reasoning, outline challenges and future directions, and call on researchers to develop unified frameworks for robust, interpretable, and decision-relevant reasoning in real-world applications. The material is available at this https URL.