Unlearning Climate Misinformation in Large Language Models

29 May 2024
Michael Fore, Simranjit Singh, Chaehong Lee, Amritanshu Pandey, Antonios Anastasopoulos, Dimitrios Stamoulis
MU

Papers citing "Unlearning Climate Misinformation in Large Language Models"

5 of 5 papers shown

  • LLM-dCache: Improving Tool-Augmented LLMs with GPT-Driven Localized Data Caching
    Simranjit Singh, Michael Fore, Andreas Karatzas, Chaehong Lee, Yanan Jian, Longfei Shangguan, Fuxun Yu, Iraklis Anagnostopoulos, Dimitrios Stamoulis
    RALM · 10 Jun 2024

  • Poisoning Language Models During Instruction Tuning
    Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein
    SILM · 01 May 2023

  • Training language models to follow instructions with human feedback
    Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
    OSLM · ALM · 04 Mar 2022

  • P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
    Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
    VLM · 14 Oct 2021

  • The Power of Scale for Parameter-Efficient Prompt Tuning
    Brian Lester, Rami Al-Rfou, Noah Constant
    VPVLM · 18 Apr 2021