AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees

12 April 2024
William Fleshman, Aleem Khan, Marc Marone, Benjamin Van Durme
Topics: CLL, KELM

Papers citing "AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees"

4 papers shown:
Training Plug-n-Play Knowledge Modules with Deep Context Distillation
Lucas Page-Caccia, Alan Ansell, E. Ponti, Ivan Vulić, Alessandro Sordoni
Topics: SyDa. 11 Mar 2025

RE-Adapt: Reverse Engineered Adaptation of Large Language Models
William Fleshman, Benjamin Van Durme
Topics: VLM. 23 May 2024

StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models
Adam Liška, Tomáš Kočiský, E. Gribovskaya, Tayfun Terzi, Eren Sezener, ..., Susannah Young, Ellen Gilsenan-McMahon, Sophia Austin, Phil Blunsom, Angeliki Lazaridou
Topics: KELM. 23 May 2022

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Topics: VPVLM. 18 Apr 2021