Chunked TabPFN: Exact Training-Free In-Context Learning for Long-Context Tabular Data

Main: 6 pages · Bibliography: 1 page · Appendix: 7 pages · 8 figures · 4 tables
Abstract

TabPFN v2 outperforms tree-based models on several tabular benchmarks, which is notable because tree-based models are typically the strongest choice for tabular data. However, it cannot handle contexts beyond roughly 10K tokens, since transformer self-attention incurs computation and memory costs that grow quadratically with context length.
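To make the quadratic bottleneck concrete, here is a minimal illustration (not code from the paper) of how the memory footprint of the attention score matrix scales with context length; the function name and default parameters are hypothetical:

```python
# Illustration only: memory for the n x n attention score matrix,
# which grows quadratically in the number of context tokens.
def attention_matrix_bytes(n_tokens: int, n_heads: int = 4, bytes_per_float: int = 4) -> int:
    """Bytes needed to store one layer's attention scores (n_heads x n x n floats)."""
    return n_heads * n_tokens * n_tokens * bytes_per_float

# Doubling the context quadruples the attention memory.
assert attention_matrix_bytes(20_000) == 4 * attention_matrix_bytes(10_000)

# At 10K tokens, a single layer's score matrix already needs ~1.6 GB here.
print(attention_matrix_bytes(10_000))  # → 1600000000
```

This is why naive in-context learning over long tabular contexts becomes infeasible, motivating chunked processing of the context.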
