Abstract
A novel agentic framework addresses complex tabular reasoning tasks by combining hierarchical meta-graph construction, expectation-aware path selection, and siamese structured memory for iterative refinement.
Large language models often struggle with complex, long-horizon analytical tasks over unstructured tables, which typically feature hierarchical, bidirectional headers and non-canonical layouts. We formalize this challenge as Deep Tabular Research (DTR), which requires multi-step reasoning over interdependent table regions. To address DTR, we propose a novel agentic framework that treats tabular reasoning as a closed-loop decision-making process, coupling query and table comprehension for both path decision-making and operational execution. Specifically, (i) the framework first constructs a hierarchical meta-graph that captures bidirectional header semantics and maps natural-language queries into an operation-level search space; (ii) to navigate this space, we introduce an expectation-aware selection policy that prioritizes high-utility execution paths; and (iii) crucially, historical execution outcomes are synthesized into a siamese structured memory, i.e., parameterized updates paired with abstracted text summaries, enabling continual refinement. Extensive experiments on challenging unstructured tabular benchmarks verify the framework's effectiveness and highlight the necessity of separating strategic planning from low-level execution in long-horizon tabular reasoning.
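The three components above can be illustrated with a minimal sketch. Note that the paper does not release an implementation here, so every name below (`MetaGraph`, `SiameseMemory`, `expectation_score`, `select_path`) and every design choice (fixed-depth path enumeration, exponential averaging of utilities, a length penalty on paths) is an illustrative assumption, not the authors' actual method:

```python
# Hypothetical sketch of a closed-loop DTR-style loop: build a meta-graph
# over table regions, score candidate operation paths by expected utility,
# and fold execution outcomes into a dual (parametric + textual) memory.
# All names and numeric choices are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class MetaGraph:
    """Hierarchical meta-graph: nodes are table regions (header cells,
    data blocks); edges link regions along row-wise and column-wise
    (bidirectional) header semantics."""
    nodes: dict  # region_id -> human-readable description
    edges: dict  # region_id -> list of neighboring region_ids

    def candidate_paths(self, start, depth=2):
        """Enumerate simple operation-level paths up to a fixed depth."""
        paths, frontier = [], [[start]]
        for _ in range(depth):
            next_frontier = []
            for path in frontier:
                for nb in self.edges.get(path[-1], []):
                    if nb not in path:  # keep paths simple (no revisits)
                        new = path + [nb]
                        paths.append(new)
                        next_frontier.append(new)
            frontier = next_frontier
        return paths


@dataclass
class SiameseMemory:
    """Dual memory: parameterized utility estimates for paths, plus
    abstracted text notes summarizing past executions."""
    utilities: dict = field(default_factory=dict)  # path tuple -> score
    notes: list = field(default_factory=list)      # abstracted summaries

    def update(self, path, reward, summary):
        key = tuple(path)
        old = self.utilities.get(key, 0.0)
        self.utilities[key] = 0.5 * old + 0.5 * reward  # running average
        self.notes.append(summary)


def expectation_score(path, memory, prior=0.1):
    """Expectation-aware selection: prefer paths with high remembered
    utility; unseen paths fall back to a uniform prior, and longer
    paths pay a small cost."""
    return memory.utilities.get(tuple(path), prior) - 0.01 * len(path)


def select_path(graph, start, memory):
    """One planning step: pick the highest-expected-utility path."""
    paths = graph.candidate_paths(start)
    return max(paths, key=lambda p: expectation_score(p, memory))
```

In this toy setup, an empty memory makes the policy prefer short exploratory paths; after a successful execution is written back via `update`, the remembered utility redirects selection toward the path that worked, which is the closed-loop refinement the abstract describes.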
Community
The following papers were recommended by the Semantic Scholar API:
- Orthogonal Hierarchical Decomposition for Structure-Aware Table Understanding with Large Language Models (2026)
- QUIETT: Query-Independent Table Transformation for Robust Reasoning (2026)
- Enhancing TableQA through Verifiable Reasoning Trace Reward (2026)
- ReThinker: Scientific Reasoning by Rethinking with Guided Reflection and Confidence Control (2026)
- FastCode: Fast and Cost-Efficient Code Understanding and Reasoning (2026)
- SciAgentGym: Benchmarking Multi-Step Scientific Tool-use in LLM Agents (2026)
- ROMA: Recursive Open Meta-Agent Framework for Long-Horizon Multi-Agent Systems (2026)