FLAT-LLM: Fine-grained Low-rank Activation Space Transformation for Large Language Model Compression
