Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models

22 May 2023
Guillermo Ortiz-Jiménez
Alessandro Favero
P. Frossard
MoMe
arXiv: 2305.12827

Papers citing "Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models"

50 / 88 papers shown
CAT Merging: A Training-Free Approach for Resolving Conflicts in Model Merging
Wenju Sun
Qingyong Li
Yangli-ao Geng
Boyang Li
MoMe
19
0
0
11 May 2025
Mitigating Parameter Interference in Model Merging via Sharpness-Aware Fine-Tuning
Yeoreum Lee
Jinwook Jung
Sungyong Baik
MoMe
40
0
0
20 Apr 2025
When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers
Hongkang Li
Yihua Zhang
Shuai Zhang
M. Wang
Sijia Liu
Pin-Yu Chen
MoMe
53
2
0
15 Apr 2025
Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs
Rui Dai
Sile Hu
Xu Shen
Yonggang Zhang
Xinmei Tian
Jieping Ye
MoMe
42
2
0
15 Apr 2025
Exact Unlearning of Finetuning Data via Model Merging at Scale
Kevin Kuo
Amrith Rajagopal Setlur
Kartik Srinivas
Aditi Raghunathan
Virginia Smith
MoMe
CLL
MU
45
0
0
06 Apr 2025
MASS: MoErging through Adaptive Subspace Selection
Donato Crisostomi
Alessandro Zirilli
Antonio Andrea Gargiulo
Maria Sofia Bucarelli
Simone Scardapane
Fabrizio Silvestri
Iacopo Masi
Emanuele Rodolà
MoMe
40
0
0
06 Apr 2025
Efficient Model Editing with Task-Localized Sparse Fine-tuning
Leonardo Iurada
Marco Ciccone
Tatiana Tommasi
KELM
MoMe
40
0
0
03 Apr 2025
When Domain Generalization meets Generalized Category Discovery: An Adaptive Task-Arithmetic Driven Approach
Vaibhav Rathore
S. Bagchi
Saikat Dutta
Sarthak Mehrotra
Zsolt Kira
Biplab Banerjee
OOD
74
1
0
19 Mar 2025
DitHub: A Modular Framework for Incremental Open-Vocabulary Object Detection
Chiara Cappellino
G. Mancusi
Matteo Mosconi
Angelo Porrello
Simone Calderara
Rita Cucchiara
ObjD
VLM
81
0
0
12 Mar 2025
Disrupting Model Merging: A Parameter-Level Defense Without Sacrificing Accuracy
Wei Junhao
Yu Zhe
Sakuma Jun
AAML
MoMe
49
0
0
08 Mar 2025
Multi-Level Collaboration in Model Merging
Qi Li
Runpeng Yu
Xinchao Wang
MoMe
FedML
86
0
0
03 Mar 2025
Extrapolating and Decoupling Image-to-Video Generation Models: Motion Modeling is Easier Than You Think
Jie Tian
Xiaoye Qu
Zhenyi Lu
Wei Wei
Sichen Liu
Yu-Xi Cheng
DiffM
VGen
44
0
0
02 Mar 2025
Scalable Model Merging with Progressive Layer-wise Distillation
Jing Xu
Jiazheng Li
J. Zhang
MoMe
FedML
83
0
0
18 Feb 2025
Portable Reward Tuning: Towards Reusable Fine-Tuning across Different Pretrained Models
Daiki Chijiwa
Taku Hasegawa
Kyosuke Nishida
Kuniko Saito
Susumu Takeuchi
39
0
0
18 Feb 2025
Linear Mode Connectivity in Differentiable Tree Ensembles
Ryuichi Kanoh
M. Sugiyama
60
1
0
17 Feb 2025
Propagation of Chaos for Mean-Field Langevin Dynamics and its Application to Model Ensemble
Atsushi Nitanda
Anzelle Lee
Damian Tan Xing Kai
Mizuki Sakaguchi
Taiji Suzuki
AI4CE
53
1
0
09 Feb 2025
Soup-of-Experts: Pretraining Specialist Models via Parameters Averaging
Pierre Ablin
Angelos Katharopoulos
Skyler Seto
David Grangier
MoMe
45
0
0
03 Feb 2025
Task Arithmetic in Trust Region: A Training-Free Model Merging Approach to Navigate Knowledge Conflicts
Wenju Sun
Qingyong Li
Wen Wang
Yangli-ao Geng
Boyang Li
36
1
0
28 Jan 2025
Physics of Skill Learning
Ziming Liu
Yizhou Liu
Eric J. Michaud
Jeff Gore
Max Tegmark
41
0
0
21 Jan 2025
Visual RAG: Expanding MLLM visual knowledge without fine-tuning
Mirco Bonomo
Simone Bianco
VLM
58
5
0
18 Jan 2025
Direct Unlearning Optimization for Robust and Safe Text-to-Image Models
Yong-Hyun Park
Sangdoo Yun
Jin-Hwa Kim
Junho Kim
Geonhui Jang
Yonghyun Jeong
Junghyo Jo
Gayoung Lee
73
12
0
17 Jan 2025
Multi-Task Model Merging via Adaptive Weight Disentanglement
Feng Xiong
Runxi Cheng
Wang Chen
Zhanqiu Zhang
Yiwen Guo
Chun Yuan
Ruifeng Xu
MoMe
86
4
0
10 Jan 2025
Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic
Yifei He
Yuzheng Hu
Yong Lin
Tong Zhang
Han Zhao
FedML
MoMe
54
17
0
08 Jan 2025
Parameter-Efficient Interventions for Enhanced Model Merging
Marcin Osial
Daniel Marczak
Bartosz Zieliński
MoMe
82
1
0
22 Dec 2024
SafetyDPO: Scalable Safety Alignment for Text-to-Image Generation
Runtao Liu
Chen I Chieh
Jindong Gu
Jipeng Zhang
Renjie Pi
Qifeng Chen
Philip H. S. Torr
Ashkan Khakzar
Fabio Pizzati
EGVM
99
0
0
13 Dec 2024
Task Arithmetic Through The Lens Of One-Shot Federated Learning
Zhixu Tao
I. Mason
Sanjeev R. Kulkarni
Xavier Boix
MoMe
FedML
77
3
0
27 Nov 2024
Is Multiple Object Tracking a Matter of Specialization?
G. Mancusi
Mattia Bernardi
Aniello Panariello
Angelo Porrello
Rita Cucchiara
Simone Calderara
MoMe
29
1
0
01 Nov 2024
Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging
Li Shen
A. Tang
Enneng Yang
G. Guo
Yong Luo
Lefei Zhang
Xiaochun Cao
Bo Du
Dacheng Tao
MoMe
27
5
0
29 Oct 2024
Model merging with SVD to tie the Knots
George Stoica
Pratik Ramesh
B. Ecsedi
Leshem Choshen
Judy Hoffman
MoMe
21
8
0
25 Oct 2024
Closed-form merging of parameter-efficient modules for Federated Continual Learning
Riccardo Salami
Pietro Buzzega
Matteo Mosconi
Jacopo Bonato
Luigi Sabetta
Simone Calderara
FedML
MoMe
CLL
29
2
0
23 Oct 2024
SurgeryV2: Bridging the Gap Between Model Merging and Multi-Task Learning with Deep Representation Surgery
Enneng Yang
Li Shen
Zhenyi Wang
G. Guo
Xingwei Wang
Xiaochun Cao
Jie Zhang
Dacheng Tao
MoMe
24
4
0
18 Oct 2024
Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace
Jinluan Yang
A. Tang
Didi Zhu
Zhengyu Chen
Li Shen
Fei Wu
MoMe
AAML
50
2
0
17 Oct 2024
CollabEdit: Towards Non-destructive Collaborative Knowledge Editing
Jiamu Zheng
Jinghuai Zhang
Tianyu Du
Xuhong Zhang
Jianwei Yin
Tao Lin
KELM
17
0
0
12 Oct 2024
Diversity-Rewarded CFG Distillation
Geoffrey Cideron
A. Agostinelli
Johan Ferret
Sertan Girgin
Romuald Elie
Olivier Bachem
Sarah Perrin
Alexandre Ramé
34
2
0
08 Oct 2024
NegMerge: Consensual Weight Negation for Strong Machine Unlearning
Hyoseo Kim
Dongyoon Han
Junsuk Choe
MoMe
MU
18
1
0
08 Oct 2024
What Matters for Model Merging at Scale?
Prateek Yadav
Tu Vu
Jonathan Lai
Alexandra Chronopoulou
Manaal Faruqui
Mohit Bansal
Tsendsuren Munkhdalai
MoMe
44
12
0
04 Oct 2024
Understanding Reasoning in Chain-of-Thought from the Hopfieldian View
Lijie Hu
Liang Liu
Shu Yang
Xin Chen
Zhen Tan
Muhammad Asif Ali
Mengdi Li
Di Wang
LRM
39
1
0
04 Oct 2024
Parameter Competition Balancing for Model Merging
Guodong Du
Junlin Lee
Jing Li
Runhua Jiang
Yifei Guo
...
Hanting Liu
S. Goh
Ho-Kin Tang
Daojing He
Min Zhang
MoMe
17
10
0
03 Oct 2024
DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation
Changdae Oh
Yixuan Li
Kyungwoo Song
Sangdoo Yun
Dongyoon Han
OOD
MoMe
36
4
0
03 Oct 2024
Disentangling Latent Shifts of In-Context Learning Through Self-Training
Josip Jukić
Jan Šnajder
21
0
0
02 Oct 2024
Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks
Edan Kinderman
Itay Hubara
Haggai Maron
Daniel Soudry
MoMe
45
0
0
02 Oct 2024
Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models
Lucas Bandarkar
Benjamin Muller
Pritish Yuvraj
Rui Hou
Nayan Singhal
Hongjiang Lv
Bing-Quan Liu
KELM
LRM
MoMe
28
2
0
02 Oct 2024
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration
Mahdi Morafah
Vyacheslav Kungurtsev
Hojin Chang
C. L. P. Chen
Bill Lin
FedML
22
0
0
27 Sep 2024
Realistic Evaluation of Model Merging for Compositional Generalization
Derek Tam
Yash Kant
Brian Lester
Igor Gilitschenski
Colin Raffel
MoMe
16
5
0
26 Sep 2024
Efficient Pareto Manifold Learning with Low-Rank Structure
Weiyu Chen
James T. Kwok
23
6
0
30 Jul 2024
Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences
Nikolaos Dimitriadis
Pascal Frossard
F. Fleuret
MoE
54
5
0
10 Jul 2024
MagMax: Leveraging Model Merging for Seamless Continual Learning
Daniel Marczak
Bartłomiej Twardowski
Tomasz Trzciński
Sebastian Cygert
MoMe
CLL
26
17
0
08 Jul 2024
Learning Scalable Model Soup on a Single GPU: An Efficient Subspace Training Strategy
Tao Li
Weisen Jiang
Fanghui Liu
X. Huang
James T. Kwok
MoMe
51
1
0
04 Jul 2024
Knowledge Composition using Task Vectors with Learned Anisotropic Scaling
Frederic Z. Zhang
Paul Albert
Cristian Rodriguez-Opazo
Anton van den Hengel
Ehsan Abbasnejad
MoMe
34
7
0
03 Jul 2024
WARP: On the Benefits of Weight Averaged Rewarded Policies
Alexandre Ramé
Johan Ferret
Nino Vieillard
Robert Dadashi
Léonard Hussenot
Pierre-Louis Cedoz
Pier Giuseppe Sessa
Sertan Girgin
Arthur Douillard
Olivier Bachem
47
13
0
24 Jun 2024