HAT: Hardware-Aware Transformers for Efficient Natural Language Processing

28 May 2020
Hanrui Wang
Zhanghao Wu
Zhijian Liu
Han Cai
Ligeng Zhu
Chuang Gan
Song Han
Abstract

Transformers are ubiquitous in Natural Language Processing (NLP) tasks, but they are difficult to deploy on hardware due to their intensive computation. To enable low-latency inference on resource-constrained hardware platforms, we propose to design Hardware-Aware Transformers (HAT) with neural architecture search. We first construct a large design space with arbitrary encoder-decoder attention and heterogeneous layers. Then we train a SuperTransformer that covers all candidates in the design space and efficiently produces many SubTransformers with weight sharing. Finally, we perform an evolutionary search with a hardware latency constraint to find a specialized SubTransformer that runs fast on the target hardware. Extensive experiments on four machine translation tasks demonstrate that HAT can discover efficient models for different hardware (CPU, GPU, IoT device). When running the WMT'14 translation task on a Raspberry Pi 4, HAT achieves a 3× speedup and 3.7× smaller size over the baseline Transformer, and a 2.7× speedup and 3.6× smaller size over the Evolved Transformer, with 12,041× less search cost and no performance loss. The HAT code is available at https://github.com/mit-han-lab/hardware-aware-transformers.git
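
The abstract outlines a three-step pipeline: build a design space, train a weight-sharing SuperTransformer, then run an evolutionary search under a target-hardware latency constraint. The sketch below illustrates only the last step in generic Python. The design-space values, the `measure_latency` and `estimate_loss` callbacks, and all hyperparameters are illustrative assumptions, not the paper's actual configuration; the official implementation in the linked repository differs (for example, it trains a latency predictor rather than measuring every candidate on-device).

import random

# Hypothetical design-space choices (placeholder values, not HAT's exact space).
DESIGN_SPACE = {
    "decoder_layers": [1, 2, 3, 4, 5, 6],
    "embed_dim": [512, 640],
    "ffn_dim": [1024, 2048, 3072],
    "attn_heads": [4, 8],
    # "Arbitrary encoder-decoder attention": how many of the last encoder
    # layers each decoder layer may attend to.
    "enc_dec_attn_span": [1, 2, 3],
}

def sample_subtransformer():
    """Sample one SubTransformer configuration from the design space."""
    return {k: random.choice(v) for k, v in DESIGN_SPACE.items()}

def mutate(config, prob=0.3):
    """Re-sample each dimension of a parent configuration with some probability."""
    return {k: (random.choice(v) if random.random() < prob else config[k])
            for k, v in DESIGN_SPACE.items()}

def crossover(a, b):
    """Mix two parent configurations dimension by dimension."""
    return {k: random.choice([a[k], b[k]]) for k in DESIGN_SPACE}

def evolutionary_search(measure_latency, estimate_loss, latency_limit_ms,
                        population=50, generations=10, parents=10):
    """Keep only candidates that satisfy the target-hardware latency limit;
    among those, evolve toward the lowest validation loss, which is estimated
    with weights inherited from the SuperTransformer."""
    pop = [sample_subtransformer() for _ in range(population)]
    for _ in range(generations):
        # Hardware-aware filtering: discard candidates that are too slow.
        feasible = [c for c in pop if measure_latency(c) <= latency_limit_ms]
        feasible.sort(key=estimate_loss)
        top = feasible[:parents] or [sample_subtransformer()]
        # Refill the population with mutated and crossed-over children.
        pop = list(top)
        while len(pop) < population:
            if len(top) > 1 and random.random() < 0.5:
                pop.append(crossover(*random.sample(top, 2)))
            else:
                pop.append(mutate(random.choice(top)))
    return min(pop, key=estimate_loss)

Because candidate SubTransformers share weights with the SuperTransformer, `estimate_loss` can evaluate them without retraining, which is what makes this search loop cheap compared with training each architecture from scratch.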
