
Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models

Annual Meeting of the Association for Computational Linguistics (ACL), 2023
12 July 2023
James O'Neill
Sourav Dutta
VLM, MQ
ArXiv (abs) · PDF · HTML

Papers citing "Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models"

No citing papers found (0 shown).