RankNAS: Efficient Neural Architecture Search by Pairwise Ranking

15 September 2021
Chi Hu, Chenglong Wang, Xiangnan Ma, Xia Meng, Yinqiao Li, Tong Xiao, Jingbo Zhu, Changliang Li

Papers citing "RankNAS: Efficient Neural Architecture Search by Pairwise Ranking"

10 of 10 citing papers shown.

Learning Evaluation Models from Large Language Models for Sequence Generation
Chenglong Wang, Hang Zhou, Kai-Chun Chang, Tongran Liu, Chunliang Zhang, Quan Du, Tong Xiao, Yue Zhang, Jingbo Zhu
08 Aug 2023 · ELM

ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation
Chenglong Wang, Hang Zhou, Yimin Hu, Yi Huo, Bei Li, Tongran Liu, Tong Xiao, Jingbo Zhu
04 Aug 2023

Improving Autoregressive NLP Tasks via Modular Linearized Attention
Victor Agostinelli, Lizhong Chen
17 Apr 2023

RD-NAS: Enhancing One-shot Supernet Ranking Ability via Ranking Distillation from Zero-cost Proxies
Peijie Dong, Xin-Yi Niu, Lujun Li, Zhiliang Tian, Xiaodong Wang, Zimian Wei, H. Pan, Dongsheng Li
24 Jan 2023

Efficient Automatic Machine Learning via Design Graphs
Shirley Wu, Jiaxuan You, J. Leskovec, Rex Ying
21 Oct 2022 · GNN

DARTFormer: Finding The Best Type Of Attention
Jason Brown, Yiren Zhao, Ilia Shumailov, Robert D. Mullins
02 Oct 2022

Wide Attention Is The Way Forward For Transformers?
Jason Brown, Yiren Zhao, Ilia Shumailov, Robert D. Mullins
02 Oct 2022

Neural Architecture Search on Efficient Transformers and Beyond
Zexiang Liu, Dong Li, Kaiyue Lu, Zhen Qin, Weixuan Sun, Jiacheng Xu, Yiran Zhong
28 Jul 2022

The NiuTrans System for the WMT21 Efficiency Task
Chenglong Wang, Chi Hu, Yongyu Mu, Zhongxiang Yan, Siming Wu, ..., Hang Cao, Bei Li, Ye Lin, Tong Xiao, Jingbo Zhu
16 Sep 2021

AutoDropout: Learning Dropout Patterns to Regularize Deep Networks
Hieu H. Pham, Quoc V. Le
05 Jan 2021