Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering

17 November 2019
Vikas Yadav, Steven Bethard, Mihai Surdeanu
arXiv:1911.07176

Papers citing "Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering"

17 / 17 papers shown
BitNet v2: Native 4-bit Activations with Hadamard Transformation for 1-bit LLMs
Hongyu Wang, Shuming Ma, Furu Wei
25 Apr 2025

Single-pass Detection of Jailbreaking Input in Large Language Models
Leyla Naz Candogan, Yongtao Wu, Elias Abad Rocamora, Grigorios G. Chrysos, V. Cevher
24 Feb 2025

Tensor Product Attention Is All You Need
Yifan Zhang, Yifeng Liu, Huizhuo Yuan, Zhen Qin, Yang Yuan, Q. Gu, Andrew Chi-Chih Yao
11 Jan 2025

Unleashing Multi-Hop Reasoning Potential in Large Language Models through Repetition of Misordered Context
Sangwon Yu, Ik-hwan Kim, Jongyoon Song, Saehyung Lee, Junsung Park, Sungroh Yoon
09 Oct 2024

Better & Faster Large Language Models via Multi-token Prediction
Better & Faster Large Language Models via Multi-token Prediction
Fabian Gloeckle
Badr Youbi Idrissi
Baptiste Rozière
David Lopez-Paz
Gabriele Synnaeve
24
93
0
30 Apr 2024
A Differentiable Integer Linear Programming Solver for Explanation-Based Natural Language Inference
Mokanarangan Thayaparan, Marco Valentino, André Freitas
03 Apr 2024

Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers
Z. Yao, Xiaoxia Wu, Conglong Li, Connor Holmes, Minjia Zhang, Cheng-rong Li, Yuxiong He
17 Nov 2022

Towards Better Few-Shot and Finetuning Performance with Forgetful Causal Language Models
Hao Liu, Xinyang Geng, Lisa Lee, Igor Mordatch, Sergey Levine, Sharan Narang, Pieter Abbeel
24 Oct 2022

ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
Z. Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He
04 Jun 2022

METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation
Ruixin Hong, Hongming Zhang, Xintong Yu, Changshui Zhang
05 May 2022

ActKnow: Active External Knowledge Infusion Learning for Question Answering in Low Data Regime
K. Annervaz, Pritam Kumar Nath, Ambedkar Dukkipati
17 Dec 2021

Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks
Akari Asai, Matt Gardner, Hannaneh Hajishirzi
16 Dec 2021

Dynamic Semantic Graph Construction and Reasoning for Explainable Multi-hop Science Question Answering
Weiwen Xu, Huihui Zhang, Deng Cai, Wai Lam
25 May 2021

Encoding Explanatory Knowledge for Zero-shot Science Question Answering
Zili Zhou, Marco Valentino, Dónal Landers, André Freitas
12 May 2021

Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
Max Glockner, Ivan Habernal, Iryna Gurevych
07 Oct 2020

A Survey on Explainability in Machine Reading Comprehension
Mokanarangan Thayaparan, Marco Valentino, André Freitas
01 Oct 2020

A causal framework for explaining the predictions of black-box sequence-to-sequence models
David Alvarez-Melis, Tommi Jaakkola
06 Jul 2017