
HADES: Hardware Accelerated Decoding for Efficient Speculation in Large Language Models

International Conference on Civil Engineering and Architecture (ICCEA), 2024
Main: 5 Pages
3 Figures
1 Table
Abstract

Large Language Models (LLMs) have revolutionized natural language processing by understanding and generating human-like text. However, the increasing demand for more sophisticated LLMs presents significant computational challenges due to their scale and complexity. This paper introduces Hardware Accelerated Decoding (HADES), a novel approach to enhancing the performance and energy efficiency of LLMs. We present the design of an LLM accelerator with hardware-level speculative decoding support, a combination not explored in prior work. Our work demonstrates how speculative decoding can significantly improve the efficiency of LLM operations, paving the way for more advanced and practical applications of these models.
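To make the core idea concrete, the sketch below shows greedy speculative decoding in plain Python. It is not the paper's hardware design: `draft_next` and `target_next` are hypothetical stand-ins for a small draft model and the large target model, and the "parallel verification" step that an accelerator would perform in one pass is written sequentially for clarity.

```python
def speculative_decode(target_next, draft_next, prompt, k, max_len):
    """Greedy speculative decoding sketch.

    target_next / draft_next: functions mapping a token sequence to the
    next token (toy stand-ins for the large target LM and a small draft LM).
    k: number of tokens the draft model proposes per iteration.
    """
    seq = list(prompt)
    while len(seq) < max_len:
        # 1. Draft model proposes k tokens autoregressively (cheap).
        proposal, ctx = [], list(seq)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target model verifies the k proposed positions. On an
        #    accelerator this is a single parallel pass; here it is a
        #    loop that accepts the longest matching prefix.
        ctx = list(seq)
        for t in proposal:
            if target_next(ctx) != t:
                break
            ctx.append(t)
        seq = ctx
        # 3. The target model emits one token itself (the correction on a
        #    mismatch, or a bonus token on full acceptance), so every
        #    iteration makes progress.
        seq.append(target_next(seq))
    return seq[:max_len]


# Toy deterministic "models": the draft either agrees with the target
# (every proposal accepted) or disagrees (one target token per round).
# Either way the output matches plain greedy decoding with the target.
target = lambda s: (s[-1] + 1) % 10
good_draft = target
bad_draft = lambda s: (s[-1] + 2) % 10
print(speculative_decode(target, good_draft, [0], 4, 8))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The key property illustrated above is that the accepted output is identical to what the target model alone would produce; the draft model only changes how many target invocations are needed, which is what a hardware verification unit accelerates.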
