ResearchTrend.AI

How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?

14 April 2017
G. Heigold
G. Neumann
Josef van Genabith

Papers citing "How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?"

12 / 12 papers shown
Evaluating Large Language Models along Dimensions of Language Variation: A Systematik Invesdigatiom uv Cross-lingual Generalization
Niyati Bafna
Kenton Murray
David Yarowsky
63
2
0
19 Jun 2024
Assessing Hidden Risks of LLMs: An Empirical Study on Robustness, Consistency, and Credibility
Wen-song Ye
Mingfeng Ou
Tianyi Li
Yipeng Chen
Xuetao Ma
...
Sai Wu
Jie Fu
Gang Chen
Haobo Wang
Jun Zhao
46
36
0
15 May 2023
Make Every Example Count: On the Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets
Irina Bejan
Artem Sokolov
Katja Filippova
TDI
32
9
0
27 Feb 2023
Systematicity, Compositionality and Transitivity of Deep NLP Models: a Metamorphic Testing Perspective
Edoardo Manino
Julia Rozanova
Danilo S. Carvalho
André Freitas
Lucas C. Cordeiro
30
7
0
26 Apr 2022
Understanding Model Robustness to User-generated Noisy Texts
Jakub Náplava
Martin Popel
Milan Straka
Jana Straková
36
16
0
14 Oct 2021
Evaluating the Robustness of Neural Language Models to Input Perturbations
M. Moradi
Matthias Samwald
AAML
50
96
0
27 Aug 2021
Will it Unblend?
Yuval Pinter
Cassandra L. Jacobs
Jacob Eisenstein
21
14
0
18 Sep 2020
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Yonatan Belinkov
Adam Poliak
Stuart M. Shieber
Benjamin Van Durme
Alexander M. Rush
27
94
0
09 Jul 2019
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu
M. Schuster
Z. Chen
Quoc V. Le
Mohammad Norouzi
...
Alex Rudnick
Oriol Vinyals
G. Corrado
Macduff Hughes
J. Dean
AIMat
718
6,748
0
26 Sep 2016
Learning Robust Representations of Text
Yitong Li
Trevor Cohn
Timothy Baldwin
OOD
154
15
0
20 Sep 2016
Neural versus Phrase-Based Machine Translation Quality: a Case Study
L. Bentivogli
Arianna Bisazza
Mauro Cettolo
Marcello Federico
191
328
0
16 Aug 2016
Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton
Nitish Srivastava
A. Krizhevsky
Ilya Sutskever
Ruslan Salakhutdinov
VLM
266
7,638
0
03 Jul 2012