Continual Learning Using Only Large Language Model Prompting

International Conference on Computational Linguistics (COLING), 2024
Main: 5 pages · Bibliography: 3 pages · Appendix: 2 pages · 1 figure · 3 tables
Abstract

We introduce CLOB, a novel continual learning (CL) paradigm wherein a large language model (LLM) is regarded as a black box. Learning is done incrementally via verbal prompting alone: CLOB does not fine-tune any part of the LLM or add any trainable parameters to it. It is particularly suitable for LLMs that are accessible via APIs. We also propose a new CL technique, called CIS, based on incremental summarization, which also overcomes the LLM's input length limit. Experiments show that CIS outperforms the baselines by a very large margin.
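The abstract gives only the high-level idea, but a minimal sketch can illustrate what prompt-only continual learning with an incrementally updated summary might look like. The `call_llm` stub, the prompt wording, the summary format, and the example `task_stream` below are all illustrative assumptions, not the paper's actual prompts or protocol; the only state carried across tasks is a plain-text summary, never model weights.

```python
# Illustrative sketch of prompt-only continual learning with incremental
# summarization, in the spirit of CLOB/CIS as described in the abstract.
# Everything here (function names, prompts, data) is assumed for
# illustration and is not taken from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder for a black-box LLM API call (e.g., a chat-completions
    endpoint). No model parameters are ever updated."""
    raise NotImplementedError("wire up an LLM provider here")

def update_summary(summary: str, class_name: str, examples: list[str]) -> str:
    """Ask the LLM to fold new labeled examples into a compact running
    summary, keeping it short so it stays within the input length limit."""
    prompt = (
        f"Current knowledge summary of known classes:\n{summary or '(empty)'}\n\n"
        f"New examples for class '{class_name}':\n"
        + "\n".join(f"- {e}" for e in examples)
        + "\n\nRewrite the summary to incorporate this class. "
          "Keep it under 200 words."
    )
    return call_llm(prompt)

def classify(summary: str, text: str) -> str:
    """Classify a test instance by prompting with the accumulated summary."""
    prompt = (
        f"Knowledge summary of all classes learned so far:\n{summary}\n\n"
        f"Text: {text}\nAnswer with the single most likely class name."
    )
    return call_llm(prompt).strip()

# Hypothetical task stream: each task introduces one new class.
task_stream = [
    ("sports", ["The team won the championship game."]),
    ("finance", ["Shares fell sharply after the earnings report."]),
]

# Incremental learning loop: only the text summary is carried forward
# between tasks. Requires call_llm to be implemented before running.
summary = ""
for class_name, examples in task_stream:
    summary = update_summary(summary, class_name, examples)
```

Under this reading, the summary acts as the model's entire memory of past tasks, which is why keeping it compact is what lets the approach sidestep the input length limit.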
