FiSTECH: Financial Style Transfer to Enhance Creativity without
Hallucinations in LLMs
Financial report generation using general-purpose large language models (LLMs) poses two major challenges: a lack of compound sentences and hallucinations. Advanced prompt engineering and retrieval-augmented generation (RAG) techniques are limited in their ability to cure these writing-style discrepancies. In this work we propose a novel two-stage fine-tuning (FT) process in which public-domain financial reports are processed into prompt-completion pairs and augmented using simple LLM prompts, enabling sectional financial report generation from minimal instructions and tabular data inputs. The proposed fine-tuning process exploits the self-learning capability of LLMs by allowing hallucinations in the first stage and showing the corrections in the second stage. Our proposed fine-tuning framework doubles the number of correctly answered questions and reduces hallucinations by over 50%. Additionally, the two-stage FT model achieves lower perplexity; improved ROUGE, TER, and BLEU scores; higher creativity and knowledge density; and lower uncertainty and cross-entropy. Thus, the proposed framework can be generalized to domain-specific fine-tuning tasks at minimized tuning costs.
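The two-stage data construction described above could be sketched as follows. This is a minimal illustration, not the authors' code: the record field names (`prompt`, `completion`), the tabular serialization, and the stage-2 "draft plus correction" layout are all assumptions about how such prompt-completion pairs might be built.

```python
# Hedged sketch of two-stage fine-tuning data construction (assumed format,
# not the paper's actual pipeline).

def make_stage1_record(instruction, table_rows, section_text):
    """Stage 1: minimal instruction + tabular data -> report section.
    Hallucinations in the model's output are tolerated at this stage."""
    table = "\n".join(", ".join(str(cell) for cell in row) for row in table_rows)
    return {
        "prompt": f"{instruction}\nData:\n{table}",
        "completion": section_text,
    }

def make_stage2_record(instruction, hallucinated_draft, corrected_text):
    """Stage 2: pair the model's own (possibly hallucinated) draft with the
    ground-truth correction, so the model learns to self-correct."""
    return {
        "prompt": f"{instruction}\nDraft (may contain errors):\n{hallucinated_draft}",
        "completion": corrected_text,
    }

rows = [("Revenue", "Q1", 120.4), ("Revenue", "Q2", 131.9)]
rec1 = make_stage1_record(
    "Write the revenue section.", rows,
    "Revenue grew from $120.4M in Q1 to $131.9M in Q2.",
)
rec2 = make_stage2_record(
    "Correct the draft using the source data.",
    "Revenue declined in Q2.",
    "Revenue grew from $120.4M in Q1 to $131.9M in Q2.",
)
```

Each record would then be serialized (e.g., as JSONL) and used for supervised fine-tuning in its respective stage.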