Can Programming Languages Boost Each Other via Instruction Tuning?
arXiv:2308.16824

31 August 2023
Daoguang Zan, Ailun Yu, Bo Shen, Jiaxin Zhang, Taihong Chen, Bing Geng, B. Chen, Jichuan Ji, Yafen Yao, Yongji Wang, Qianxiang Wang
Topics: ALM

Papers citing "Can Programming Languages Boost Each Other via Instruction Tuning?"

8 papers shown
Raw Text is All you Need: Knowledge-intensive Multi-turn Instruction Tuning for Large Language Model
Xia Hou, Qifeng Li, Jian Yang, Tongliang Li, Linzheng Chai, ..., Hangyuan Ji, Zhoujun Li, Jixuan Nie, Jingbo Dun, Wenfeng Song
03 Jul 2024

UniCoder: Scaling Code Large Language Model via Universal Code
Tao Sun, Linzheng Chai, Jian Yang, Yuwei Yin, Hongcheng Guo, Jiaheng Liu, Bing Wang, Liqun Yang, Zhoujun Li
Topics: OffRL, LRM
24 Jun 2024

SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking
Zhuang Li, Yuncheng Hua, Thuy-Trang Vu, Haolan Zhan, Lizhen Qu, Gholamreza Haffari
16 Jun 2024

The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models
Yan Liu, Yu Liu, Xiaokang Chen, Pin-Yu Chen, Daoguang Zan, Min-Yen Kan, Tsung-Yi Ho
14 Jun 2024

Improving Long Text Understanding with Knowledge Distilled from Summarization Model
Yan Liu, Yazheng Yang, Xiaokang Chen
Topics: VLM, RALM
08 May 2024

Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective
Yijie Chen, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou
11 Apr 2024

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM
04 Mar 2022

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
02 Sep 2021