Editing Models with Task Arithmetic

8 December 2022
Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi
Tags: KELM, MoMe, MU
arXiv: 2212.04089
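For context on what the papers below are building on: the paper's central idea is that a "task vector" is the element-wise difference between fine-tuned and pre-trained weights, and a model can be edited by adding or subtracting scaled task vectors. A minimal sketch follows, assuming PyTorch state dicts; the function names and the `coeff` scaling parameter are illustrative placeholders, not the authors' released implementation:

```python
import torch

def task_vector(pretrained, finetuned):
    """tau = theta_ft - theta_pre, computed per parameter tensor."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def edit_model(pretrained, task_vectors, coeff=1.0):
    """theta_edited = theta_pre + coeff * sum(task_vectors).
    Negate a vector to forget a task; add several to merge tasks."""
    edited = {k: w.clone() for k, w in pretrained.items()}
    for tv in task_vectors:
        for k in edited:
            edited[k] = edited[k] + coeff * tv[k]
    return edited

# Toy usage on random "weights"; real use operates on model.state_dict()s.
pre = {"w": torch.randn(4, 4)}
ft = {"w": pre["w"] + 0.1 * torch.randn(4, 4)}
tv = task_vector(pre, ft)
edited = edit_model(pre, [tv], coeff=-1.0)  # negation steers away from the task
```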

Papers citing "Editing Models with Task Arithmetic"

Showing 50 of 350 citing papers.
• Training Neural Networks from Scratch with Parallel Low-Rank Adapters
  Minyoung Huh, Brian Cheung, Jeremy Bernstein, Phillip Isola, Pulkit Agrawal
  26 Feb 2024
• InstructEdit: Instruction-based Knowledge Editing for Large Language Models
  Ningyu Zhang, Bo Tian, Siyuan Cheng, Xiaozhuan Liang, Yi Hu, Kouying Xue, Yanjie Gou, Xi Chen, Huajun Chen
  25 Feb 2024 · KELM
• Knowledge Fusion of Chat LLMs: A Preliminary Technical Report
  Fanqi Wan, Ziyi Yang, Longguang Zhong, Xiaojun Quan, Xinting Huang, Wei Bi
  25 Feb 2024 · MoMe
• Does Combining Parameter-efficient Modules Improve Few-shot Transfer Accuracy?
  Nader Asadi, Mahdi Beitollahi, Yasser H. Khalil, Yinchuan Li, Guojun Zhang, Xi Chen
  23 Feb 2024 · MoMe
• Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking
  Nikhil Prakash, Tamar Rott Shaham, Tal Haklay, Yonatan Belinkov, David Bau
  22 Feb 2024
• Q-Probe: A Lightweight Approach to Reward Maximization for Language Models
  Kenneth Li, Samy Jelassi, Hugh Zhang, Sham Kakade, Martin Wattenberg, David Brandfonbrener
  22 Feb 2024
• Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization
  James Oldfield, Markos Georgopoulos, Grigorios G. Chrysos, Christos Tzelepis, Yannis Panagakis, M. Nicolaou, Jiankang Deng, Ioannis Patras
  19 Feb 2024 · MoE
• Rethinking Machine Unlearning for Large Language Models
  Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, ..., Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, Yang Liu
  13 Feb 2024 · AILaw, MU
• Learning to Route Among Specialized Experts for Zero-Shot Generalization
  Mohammed Muqeeth, Haokun Liu, Yufan Liu, Colin Raffel
  08 Feb 2024 · MoMe
• On the Emergence of Cross-Task Linearity in the Pretraining-Finetuning Paradigm
  Zhanpeng Zhou, Zijun Chen, Yilan Chen, Bo-Wen Zhang, Junchi Yan
  06 Feb 2024 · MoMe
• Representation Surgery for Multi-Task Model Merging
  Enneng Yang, Li Shen, Zhenyi Wang, Guibing Guo, Xiaojun Chen, Xingwei Wang, Dacheng Tao
  05 Feb 2024 · MoMe
• MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers
  Yatong Bai, Mo Zhou, Vishal M. Patel, Somayeh Sojoudi
  03 Feb 2024 · AAML
• Merging Multi-Task Models via Weight-Ensembling Mixture of Experts
  A. Tang, Li Shen, Yong Luo, Nan Yin, Lefei Zhang, Dacheng Tao
  01 Feb 2024 · MoMe
• How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation?
  Rheeya Uppaal, Yixuan Li, Junjie Hu
  31 Jan 2024
• WARM: On the Benefits of Weight Averaged Reward Models
  Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret
  22 Jan 2024
• LLM Augmented LLMs: Expanding Capabilities through Composition
  Rachit Bansal, Bidisha Samanta, Siddharth Dalmia, Nitish Gupta, Shikhar Vashishth, Sriram Ganapathy, Abhishek Bapna, Prateek Jain, Partha P. Talukdar
  04 Jan 2024 · CLL
• PILoRA: Prototype Guided Incremental LoRA for Federated Class-Incremental Learning
  Haiyang Guo, Fei Zhu, Wenzhuo Liu, Xu-Yao Zhang, Cheng-Lin Liu
  04 Jan 2024 · CLL
• A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity
  Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, Rada Mihalcea
  03 Jan 2024
• A Comprehensive Study of Knowledge Editing for Large Language Models
  Ningyu Zhang, Yunzhi Yao, Bo Tian, Peng Wang, Shumin Deng, ..., Lei Liang, Zhiqiang Zhang, Xiao-Jun Zhu, Jun Zhou, Huajun Chen
  02 Jan 2024 · KELM
• Partial Fine-Tuning: A Successor to Full Fine-Tuning for Vision Transformers
  Peng Ye, Yongqi Huang, Chongjun Tu, Minglei Li, Tao Chen, Tong He, Wanli Ouyang
  25 Dec 2023
• Merging Vision Transformers from Different Tasks and Domains
  Peng Ye, Chenyu Huang, Mingzhu Shen, Tao Chen, Yongqi Huang, Yuning Zhang, Wanli Ouyang
  25 Dec 2023 · MoMe
• Multimodal Attention Merging for Improved Speech Recognition and Audio Event Classification
  Anirudh S. Sundar, Chao-Han Huck Yang, David M. Chan, Shalini Ghosh, Venkatesh Ravichandran, P. S. Nidadavolu
  22 Dec 2023 · MoMe
• Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment
  Lingling Xu, Haoran Xie, S. J. Qin, Xiaohui Tao, F. Wang
  19 Dec 2023
• Model Breadcrumbs: Scaling Multi-Task Model Merging with Sparse Masks
  Mohammad-Javad Davari, Eugene Belilovsky
  11 Dec 2023 · MoMe
• Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion
  A. Tang, Li Shen, Yong Luo, Liang Ding, Han Hu, Bo Du, Dacheng Tao
  11 Dec 2023 · MoMe
• Merging by Matching Models in Task Parameter Subspaces
  Derek Tam, Mohit Bansal, Colin Raffel
  07 Dec 2023 · MoMe
• Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges
  Nianwen Si, Hao Zhang, Heyu Chang, Wenlin Zhang, Dan Qu, Weiqiang Zhang
  27 Nov 2023 · KELM, MU
• ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization
  Prateek Yadav, Leshem Choshen, Colin Raffel, Mohit Bansal
  22 Nov 2023
• In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering
  Sheng Liu, Haotian Ye, Lei Xing, James Y. Zou
  11 Nov 2023
• LCM-LoRA: A Universal Stable-Diffusion Acceleration Module
  Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu, Patrick von Platen, Apolinário Passos, Longbo Huang, Jian Li, Hang Zhao
  09 Nov 2023 · MoMe
• Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
  Le Yu, Yu Bowen, Haiyang Yu, Fei Huang, Yongbin Li
  06 Nov 2023 · MoMe
• A Survey on Knowledge Editing of Neural Networks
  Vittorio Mazzia, Alessandro Pedrani, Andrea Caciolai, Kay Rottmann, Davide Bernardi
  30 Oct 2023 · KELM
• SoK: Memorization in General-Purpose Large Language Models
  Valentin Hartmann, Anshuman Suri, Vincent Bindschaedler, David E. Evans, Shruti Tople, Robert West
  24 Oct 2023 · KELM, LLMAG
• SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding
  Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, Hadi Pouransari
  23 Oct 2023 · VLM
• Function Vectors in Large Language Models
  Eric Todd, Millicent Li, Arnab Sen Sharma, Aaron Mueller, Byron C. Wallace, David Bau
  23 Oct 2023
• Equivariant Deep Weight Space Alignment
  Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, Haggai Maron
  20 Oct 2023
• Model Merging by Uncertainty-Based Gradient Matching
  Nico Daheim, Thomas Möllenhoff, E. Ponti, Iryna Gurevych, Mohammad Emtiyaz Khan
  19 Oct 2023 · MoMe, FedML
• Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging
  Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu
  17 Oct 2023 · MoMe
• Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective
  Ming Zhong, Chenxin An, Weizhu Chen, Jiawei Han, Pengcheng He
  17 Oct 2023
• Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting
  Melanie Sclar, Yejin Choi, Yulia Tsvetkov, Alane Suhr
  17 Oct 2023
• Can We Edit Multimodal Large Language Models?
  Siyuan Cheng, Bo Tian, Qingbin Liu, Xi Chen, Yongheng Wang, Huajun Chen, Ningyu Zhang
  12 Oct 2023 · MLLM
• Measuring Feature Sparsity in Language Models
  Mingyang Deng, Lucas Tao, Joe Benton
  11 Oct 2023
• A Meta-Learning Perspective on Transformers for Causal Language Modeling
  Xinbo Wu, L. Varshney
  09 Oct 2023
• Establishing Trustworthiness: Rethinking Tasks and Model Evaluation
  Robert Litschko, Max Müller-Eberstein, Rob van der Goot, Leon Weber, Barbara Plank
  09 Oct 2023 · LRM
• Uncovering hidden geometry in Transformers via disentangling position and context
  Jiajun Song, Yiqiao Zhong
  07 Oct 2023
• Parameter Efficient Multi-task Model Fusion with Partial Linearization
  A. Tang, Li Shen, Yong Luo, Yibing Zhan, Han Hu, Bo Du, Yixin Chen, Dacheng Tao
  07 Oct 2023 · MoMe
• AdaMerging: Adaptive Model Merging for Multi-Task Learning
  Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, Dacheng Tao
  04 Oct 2023 · MoMe
• BYOM: Building Your Own Multi-Task Model For Free
  Weisen Jiang, Baijiong Lin, Han Shi, Yu Zhang, Zhenguo Li, James T. Kwok
  03 Oct 2023 · MoMe
• Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy
  Pingzhi Li, Zhenyu (Allen) Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen
  02 Oct 2023 · MoMe
• ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale
  Markus Frohmann, Carolin Holtermann, Shahed Masoudian, Anne Lauscher, Navid Rekabsaz
  02 Oct 2023