Learning to Look Inside: Augmenting Token-Based Encoders with Character-Level Information (arXiv:2108.00391)

1 August 2021
Yuval Pinter
Amanda Stent
Mark Dredze
Jacob Eisenstein

Papers citing "Learning to Look Inside: Augmenting Token-Based Encoders with Character-Level Information"

4 / 4 papers shown
Learn Your Tokens: Word-Pooled Tokenization for Language Modeling
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Avijit Thawani, Saurabh Ghanekar, Xiaoyuan Zhu, Jay Pujara
17 Oct 2023

Language Modelling with Pixels
International Conference on Learning Representations (ICLR), 2022
Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, Desmond Elliott
14 Jul 2022

Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding
Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, ..., Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais
21 May 2022

Integrating Approaches to Word Representation
Yuval Pinter
10 Sep 2021