LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models

28 May 2024
Anthony Sarah, S. N. Sridhar, Maciej Szankin, Sairam Sundaresan

Papers citing "LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models"

4 / 4 papers shown
Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques
Sanjay Surendranath Girija, Shashank Kapoor, Lakshit Arora, Dipen Pradhan, Aman Raj, Ankit Shetgaonkar
05 May 2025
ZSMerge: Zero-Shot KV Cache Compression for Memory-Efficient Long-Context LLMs
Xin Liu, Pei Liu, Guoming Tang
13 Mar 2025
LLMs as Debate Partners: Utilizing Genetic Algorithms and Adversarial Search for Adaptive Arguments
Prakash Aryan
09 Dec 2024
A Hardware-Aware Framework for Accelerating Neural Architecture Search Across Modalities
Daniel Cummings, Anthony Sarah, S. N. Sridhar, Maciej Szankin, J. P. Muñoz, Sairam Sundaresan
19 May 2022