ResearchTrend.AI

Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering

8 September 2018
Todor Mihaylov, Peter Clark, Tushar Khot, Ashish Sabharwal

Papers citing "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering"

50 / 1,409 papers shown
BTC-LLM: Efficient Sub-1-Bit LLM Quantization via Learnable Transformation and Binary Codebook
Hao Gu, Lujun Li, Zheyu Wang, B. Liu, Qiyuan Zhu, Sirui Han, Wenhan Luo, Yike Guo
10 Apr 2026
PATCH: Learnable Tile-level Hybrid Sparsity for LLMs
Younes Hourri, Mohammad Mozaffari, M. Dehnavi
24 Dec 2025
SQ-format: A Unified Sparse-Quantized Hardware-friendly Data Format for LLMs
Ruixuan Huang, Hao Zeng, Hantao Huang, Jinyuan Shi, Minghui Yu, Ian En-Hsu Yen, Shuai Wang
05 Dec 2025
SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs
Wenhua Cheng, Weiwei Zhang, Heng Guo, Haihao Shen
04 Dec 2025
Dripper: Token-Efficient Main HTML Extraction with a Lightweight LM
Mengjie Liu, Jiahui Peng, Pei Chu, Jiantao Qiu, Ren Ma, ..., Zhenxiang Li, Chao Xu, Zhongying Tu, Wentao Zhang, Conghui He
28 Nov 2025
A Rosetta Stone for AI Benchmarks
A. Ho, Jean-Stanislas Denain, David Atanasov, Samuel Albanie, Rohin Shah
28 Nov 2025
Ghosting Your LLM: Without The Knowledge of Your Gradient and Data
Abeer Matar A. Almalky, Ziyan Wang, Mohaiminul Al Nahian, Li Yang, Adnan Siraj Rakin
27 Nov 2025
CacheTrap: Injecting Trojans in LLMs without Leaving any Traces in Inputs or Weights
Mohaiminul Al Nahian, Abeer Matar A. Almalky, Gamana Aragonda, Ranyang Zhou, Sabbir Ahmed, Dmitry Ponomarev, Li Yang, Shaahin Angizi, Adnan Siraj Rakin
27 Nov 2025
Mosaic Pruning: A Hierarchical Framework for Generalizable Pruning of Mixture-of-Experts Models
Wentao Hu, Mingkuan Zhao, Shuangyong Song, Xiaoyan Zhu, Xin Lai, Jiayin Wang
25 Nov 2025
ROOT: Robust Orthogonalized Optimizer for Neural Network Training
Wei He, Kai Han, Hang Zhou, Hanting Chen, Zhicheng Liu, Xinghao Chen, Yunhe Wang
25 Nov 2025
Mirror, Mirror on the Wall -- Which is the Best Model of Them All?
Dina Sayed, Heiko Schuldt
25 Nov 2025
FastForward Pruning: Efficient LLM Pruning via Single-Step Reinforcement Learning
Xin Yuan, S. Li, Jiateng Wei, Chengrui Zhu, Yanming Wu, Qingpeng Li, Jiajun Lv, Xiaoke Lan, Jun Chen, Yong-Jin Liu
24 Nov 2025
How Learning Rate Decay Wastes Your Best Data in Curriculum-Based LLM Pretraining
Kairong Luo, Zhenbo Sun, Haodong Wen, Xinyu Shi, Jiarui Cui, Chenyi Dang, Kaifeng Lyu, Wenguang Chen
24 Nov 2025
Blu-WERP (Web Extraction and Refinement Pipeline): A Scalable Pipeline for Preprocessing Large Language Model Datasets
Gowtham, Sai Rupesh, Sanjay Kumar, Saravanan, Venkata Chaithanya
22 Nov 2025
Fantastic Bugs and Where to Find Them in AI Benchmarks
Sang Truong, Yuheng Tu, Michael Hardy, Anka Reuel, Zeyu Tang, ..., Jonathan Perera, Chibuike Uwakwe, Ben Domingue, Nick Haber, Sanmi Koyejo
20 Nov 2025
AICC: Parse HTML Finer, Make Models Better -- A 7.3T AI-Ready Corpus Built by a Model-Based HTML Parser
Ren Ma, Jiantao Qiu, Chao Xu, Pei Chu, Kaiwen Liu, ..., Wentao Zhang, Zhongying Tu, Dahua Lin, Conghui He
20 Nov 2025
Breaking Expert Knowledge Limits: Self-Pruning for Large Language Models
Haidong Kang, Lihong Lin, Enneng Yang, Hongning Dai, Hao Wang
19 Nov 2025
DiffuMamba: High-Throughput Diffusion LMs with Mamba Backbone
Vaibhav Singh, Oleksiy Ostapenko, Pierre-Andre Noel, Torsten Scholak
19 Nov 2025
GPS: General Per-Sample Prompter
Pawel Batorski, Paul Swoboda
18 Nov 2025
OTARo: Once Tuning for All Precisions toward Robust On-Device LLMs
Shaoyuan Chen, Zhixuan Chen, Dawei Yang, Zhihang Yuan, Qiang Wu
17 Nov 2025
OAD-Promoter: Enhancing Zero-shot VQA using Large Language Models with Object Attribute Description
Quanxing Xu, Ling Zhou, Feifei Zhang, Jinyu Tian, Rubing Huang
15 Nov 2025
GateRA: Token-Aware Modulation for Parameter-Efficient Fine-Tuning
Jie Ou, Shuaihong Jiang, Yingjun Du, Cees G. M. Snoek
15 Nov 2025
AlignTree: Efficient Defense Against LLM Jailbreak Attacks
Gil Goren, Shahar Katz, Lior Wolf
15 Nov 2025
SpecQuant: Spectral Decomposition and Adaptive Truncation for Ultra-Low-Bit LLMs Quantization
Zhixiong Zhao, Fangxin Liu, Junjie Wang, Chenyang Guan, Z. Wang, Li Jiang, Haibing Guan
11 Nov 2025
Range Asymmetric Numeral Systems-Based Lightweight Intermediate Feature Compression for Split Computing of Deep Neural Networks
Mingyu Sung, Suhwan Im, Vikas Palakonda, Jae-Mo Kang
11 Nov 2025
MobileLLM-Pro Technical Report
Patrick Huber, Ernie Chang, Wei Wen, Igor Fedorov, Tarek Elgamal, ..., Vikas Chandra, Ahmed Aly, Anuj Kumar, Raghuraman Krishnamoorthi, Adithya Sagar
10 Nov 2025
Routing Manifold Alignment Improves Generalization of Mixture-of-Experts LLMs
Zhongyang Li, Ziyue Li, Tianyi Zhou
10 Nov 2025
Teaching Pretrained Language Models to Think Deeper with Retrofitted Recurrence
Sean McLeish, Ang Li, John Kirchenbauer, Dayal Singh Kalra, Brian Bartoldson, B. Kailkhura, Avi Schwarzschild, Jonas Geiping, Tom Goldstein, Micah Goldblum
10 Nov 2025
Better Datasets Start From RefineLab: Automatic Optimization for High-Quality Dataset Refinement
Xiaonan Luo, Yue Huang, Ping He, Xiangliang Zhang
09 Nov 2025
MuonAll: Muon Variant for Efficient Finetuning of Large Language Models
Saurabh Page, Advait Joshi, S. Sonawane
08 Nov 2025
KGFR: A Foundation Retriever for Generalized Knowledge Graph Question Answering
Yuanning Cui, Zequn Sun, Wei Hu, Zhangjie Fu
06 Nov 2025
DartQuant: Efficient Rotational Distribution Calibration for LLM Quantization
Yuantian Shao, Yuanteng Chen, Peisong Wang, Jianlin Yu, Jing Lin, Yiwu Yao, Zhihui Wei, Jian Cheng
06 Nov 2025
Block Rotation is All You Need for MXFP4 Quantization
Yuantian Shao, Peisong Wang, Yuanteng Chen, Chang Xu, Zhihui Wei, Jian Cheng
06 Nov 2025
IG-Pruning: Input-Guided Block Pruning for Large Language Models
Kangyu Qiao, Shaolei Zhang, Yang Feng
04 Nov 2025
Multi-Step Knowledge Interaction Analysis via Rank-2 Subspace Disentanglement
Sekh Mainul Islam, Pepa Atanasova, Isabelle Augenstein
03 Nov 2025
CryptoMoE: Privacy-Preserving and Scalable Mixture of Experts Inference via Balanced Expert Routing
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2025
Yifan Zhou, Tianshi Xu, Jue Hong, Ye Wu, Meng Li
03 Nov 2025
Improving Romanian LLM Pretraining Data using Diversity and Quality Filtering
Vlad Negoita, Mihai Masala, Traian Rebedea
02 Nov 2025
Consistency Training Helps Stop Sycophancy and Jailbreaks
Alex Irpan, Alexander Matt Turner, Mark Kurzeja, David Elson, Rohin Shah
31 Oct 2025
TetraJet-v2: Accurate NVFP4 Training for Large Language Models with Oscillation Suppression and Outlier Control
Yuxiang Chen, Xiaoming Xu, Pengle Zhang, Michael Beyer, Martin Rapp, Jun Zhu, Jianfei Chen
31 Oct 2025
MossNet: Mixture of State-Space Experts is a Multi-Head Attention
Shikhar Tuli, James Smith, Haris Jeelani, Chi-Heng Lin, Abhishek Patel, Vasili Ramanishka, Yen-Chang Hsu, Hongxia Jin
30 Oct 2025
INT v.s. FP: A Comprehensive Study of Fine-Grained Low-bit Quantization Formats
Mengzhao Chen, Meng Wu, Hui Jin, Zhihang Yuan, Jing Liu, ..., Jin Ma, Zeyue Xue, Zhiheng Liu, Xingyan Bin, Ping Luo
29 Oct 2025
A Survey on Unlearning in Large Language Models
Ruichen Qiu, Jiajun Tan, Jiayue Pu, Honglin Wang, Xiao-Shan Gao, Fei Sun
29 Oct 2025
NeuronMM: High-Performance Matrix Multiplication for LLM Inference on AWS Trainium
Dinghong Song, Jierui Xu, Weichu Yang, Pengfei Su, Dong Li
29 Oct 2025
LoRA-DA: Data-Aware Initialization for Low-Rank Adaptation via Asymptotic Analysis
Qingyue Zhang, Chang Chu, Tianren Peng, Qi Li, Xiangyang Luo, Zhihao Jiang, Shao-Lun Huang
28 Oct 2025
Calibrating and Rotating: A Unified Framework for Weight Conditioning in PEFT
Da Chang, Peng Xue, Yu Li, Yongxiang Liu, P. Xu, Shixun Zhang
28 Oct 2025
FALQON: Accelerating LoRA Fine-tuning with Low-Bit Floating-Point Arithmetic
Kanghyun Choi, Hyeyoon Lee, S. Park, Dain Kwon, Jinho Lee
28 Oct 2025
Beyond Line-Level Filtering for the Pretraining Corpora of LLMs
Chanwoo Park, Suyoung Park, Yelim Ahn, Jongmin Kim, Jongyeon Park, Jaejin Lee
28 Oct 2025
ChessQA: Evaluating Large Language Models for Chess Understanding
Qianfeng Wen, Zhenwei Tang, Ashton Anderson
28 Oct 2025
MISA: Memory-Efficient LLMs Optimization with Module-wise Importance Sampling
Yuxi Liu, Renjia Deng, Yutong He, Xue Wang, Tao Yao, Kun Yuan
28 Oct 2025
Multi-Agent Evolve: LLM Self-Improve through Co-evolution
Yixing Chen, Yiding Wang, Siqi Zhu, Haofei Yu, Tao Feng, Muhan Zhang, M. Patwary, Jiaxuan You
27 Oct 2025
Page 1 of 29