We present a novel reasoning approach called Flow-of-Options (FoO), designed to address intrinsic biases in Large Language Models (LLMs). FoO enables LLMs to systematically explore a diverse range of possibilities in their reasoning, as demonstrated by an FoO-based agentic system for autonomously solving Machine Learning tasks (AutoML). Our framework outperforms state-of-the-art baselines, achieving improvements of 38.2% - 69.2% on standard data science tasks, and 37.4% - 47.9% on therapeutic chemistry tasks. With an overall operation cost under $1 per task, our framework is well-suited for cost-sensitive applications. Beyond classification and regression, we illustrate the broader applicability of our FoO-based agentic system to tasks such as reinforcement learning and image generation. Our framework presents significant advancements compared to current state-of-the-art agentic systems for AutoML, due to the benefits of FoO in enforcing diversity in LLM solutions through compressed, explainable representations that also support long-term memory when combined with case-based reasoning.
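The abstract describes FoO only at a high level: options for each step of a task are laid out in a compressed structure, and paths through that structure are candidate solutions. Below is a minimal, hypothetical Python sketch of one way such a structure could be represented; it is not the authors' implementation. The class names (`Step`, `FlowOfOptions`), the task steps, and the hand-written option strings are all illustrative assumptions, and in the described system the options would be proposed by an LLM rather than hard-coded.

```python
from dataclasses import dataclass, field
from itertools import product
from typing import Dict, Iterator, List

# Hypothetical sketch: each task step holds several alternative options,
# and any path picking one option per step is a candidate solution.


@dataclass
class Step:
    name: str
    options: List[str] = field(default_factory=list)


@dataclass
class FlowOfOptions:
    steps: List[Step] = field(default_factory=list)

    def add_step(self, name: str, options: List[str]) -> None:
        self.steps.append(Step(name, options))

    def candidate_paths(self) -> Iterator[Dict[str, str]]:
        """Enumerate every path through the flow (one option per step)."""
        for combo in product(*(step.options for step in self.steps)):
            yield dict(zip((step.name for step in self.steps), combo))


# Toy example for a tabular classification task; these options are
# placeholders, not taken from the paper.
flow = FlowOfOptions()
flow.add_step("preprocessing", ["standard scaling", "min-max scaling"])
flow.add_step("model", ["random forest", "gradient boosting", "logistic regression"])

for i, path in enumerate(flow.candidate_paths(), start=1):
    print(f"candidate {i}: {path}")
```

Under these assumptions, enumerating and evaluating paths is one way diversity could be enforced, and the evaluated paths form a compact record of past cases that could be reused for case-based reasoning.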
@article{nair2025_2502.12929,
  title={Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options},
  author={Lakshmi Nair and Ian Trase and Mark Kim},
  journal={arXiv preprint arXiv:2502.12929},
  year={2025}
}