ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

FlauBERT: Unsupervised Language Model Pre-training for French
11 December 2019
Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, A. Allauzen, Benoît Crabbé, Laurent Besacier, D. Schwab
AI4CE

Papers citing "FlauBERT: Unsupervised Language Model Pre-training for French"

50 / 159 papers shown
FQuAD2.0: French Question Answering and knowing that you know nothing
Quentin Heinrich, Gautier Viaud, Wacim Belblidia
27 Sep 2021

Is the Number of Trainable Parameters All That Actually Matters?
A. Chatelain, Amine Djeghri, Daniel Hesslow, Julien Launay, Iacopo Poli
24 Sep 2021

BERTweetFR : Domain Adaptation of Pre-Trained Language Models for French Tweets
Yanzhu Guo, Virgile Rennard, Christos Xypolopoulos, Michalis Vazirgiannis
VLM, AI4CE
21 Sep 2021

Are Transformers a Modern Version of ELIZA? Observations on French Object Verb Agreement
Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé
21 Sep 2021

Language Models are Few-shot Multilingual Learners
Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, J. Yosinski, Pascale Fung
ELM, LRM
16 Sep 2021

MDAPT: Multilingual Domain Adaptive Pretraining in a Single Model
Rasmus Jorgensen, Mareike Hartmann, Xiang Dai, Desmond Elliott
AI4CE
14 Sep 2021

Transfer Learning for Multi-lingual Tasks -- a Survey
A. Jafari, Behnam Heidary, R. Farahbakhsh, Mostafa Salehi, Mahdi Jalili
LRM
28 Aug 2021

Code-switched inspired losses for generic spoken dialog representations
E. Chapuis, Pierre Colombo, Matthieu Labeau, Chloé Clavel
27 Aug 2021

A Statutory Article Retrieval Dataset in French
Antoine Louis, Gerasimos Spanakis
RALM, AILaw
26 Aug 2021

Are the Multilingual Models Better? Improving Czech Sentiment with Transformers
Pavel Přibáň, J. Steinberger
24 Aug 2021

AMMUS : A Survey of Transformer-based Pretrained Models in Natural Language Processing
Katikapalli Subramanyam Kalyan, A. Rajasekharan, S. Sangeetha
VLM, LM&MA
12 Aug 2021

Deriving Disinformation Insights from Geolocalized Twitter Callouts
David Tuxworth, Dimosthenis Antypas, Luis Espinosa-Anke, Jose Camacho-Collados, Alun D. Preece, David Rogers
06 Aug 2021

Context-aware Adversarial Training for Name Regularity Bias in Named Entity Recognition
Abbas Ghaddar, Philippe Langlais, Ahmad Rashid, Mehdi Rezagholizadeh
24 Jul 2021

MarIA: Spanish Language Models
Asier Gutiérrez-Fandiño, Jordi Armengol-Estapé, Marc Pàmies, Joan Llop-Palao, Joaquín Silveira-Ocampo, C. Carrino, Aitor Gonzalez-Agirre, Carme Armentano-Oller, Carlos Rodríguez-Penagos, Marta Villegas
VLM
15 Jul 2021

FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks
Sheng-Chun Kao, Suvinay Subramanian, Gaurav Agrawal, Amir Yazdanbakhsh, T. Krishna
13 Jul 2021

A Primer on Pretrained Multilingual Language Models
Sumanth Doddapaneni, Gowtham Ramesh, Mitesh M. Khapra, Anoop Kunchukuttan, Pratyush Kumar
LRM
01 Jul 2021

Sentiment analysis in tweets: an assessment study from classical to modern text representation models
Sérgio Barreto, Ricardo Moura, Jonnathan Carvalho, A. Paes, A. Plastino
29 May 2021

Automatic Construction of Sememe Knowledge Bases via Dictionaries
Fanchao Qi, Yangyi Chen, Fengyu Wang, Zhiyuan Liu, Xiao Chen, Maosong Sun
26 May 2021

A systematic review of Hate Speech automatic detection using Natural Language Processing
Md Saroar Jahan, Mourad Oussalah
22 May 2021

KLUE: Korean Language Understanding Evaluation
Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, ..., Seonghyun Kim, Lucy Park, Alice H. Oh, Jung-Woo Ha, Kyunghyun Cho
ELM, VLM
20 May 2021

Evaluation Of Word Embeddings From Large-Scale French Web Content
Hadi Abdine, Christos Xypolopoulos, Moussa Kamal Eddine, Michalis Vazirgiannis
05 May 2021

HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish
Robert Mroczkowski, Piotr Rybak, Alina Wróblewska, Ireneusz Gawlik
04 May 2021

Scalar Adjective Identification and Multilingual Ranking
Aina Garí Soler, Marianna Apidianaki
03 May 2021

Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses
Aina Garí Soler, Marianna Apidianaki
MILM
29 Apr 2021

MAGMA: An Optimization Framework for Mapping Multiple DNNs on Multiple Accelerator Cores
Sheng-Chun Kao, T. Krishna
28 Apr 2021

Problems and Countermeasures in Natural Language Processing Evaluation
Qingxiu Dong, Zhifang Sui, W. Zhan, Baobao Chang
ELM
20 Apr 2021

Czert -- Czech BERT-like Model for Language Representation
Jakub Sido, O. Pražák, P. Pribán, Jan Pasek, Michal Seják, Miloslav Konopík
24 Mar 2021

Bilingual Language Modeling, A transfer learning technique for Roman Urdu
Usama Khalid, M. O. Beg, Muhammad Umair Arshad
22 Feb 2021

Civil Rephrases Of Toxic Texts With Self-Supervised Transformers
Leo Laugier, John Pavlopoulos, Jeffrey Scott Sorensen, Lucas Dixon
01 Feb 2021

Fine-tuning BERT-based models for Plant Health Bulletin Classification
Shufan Jiang, Rafael Angarita, Stéphane Cormier, Francis Rousseaux
29 Jan 2021

KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding
HyunJae Lee, Jaewoong Yoon, Bonggyu Hwang, Seongho Joe, Seungjai Min, Youngjune Gwon
SSeg
27 Jan 2021

BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla
Abhik Bhattacharjee, Tahmid Hasan, Wasi Uddin Ahmad, Kazi Samin Mubasshir, Md. Saiful Islam, Anindya Iqbal, M. Rahman, Rifat Shahriyar
SSL, VLM
01 Jan 2021

ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic
Muhammad Abdul-Mageed, AbdelRahim Elmadany, El Moatez Billah Nagoudi
VLM
27 Dec 2020

GottBERT: a pure German Language Model
Raphael Scheible, Fabian Thomczyk, P. Tippmann, V. Jaravine, M. Boeker
VLM
03 Dec 2020

GLGE: A New General Language Generation Evaluation Benchmark
Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, ..., Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, Nan Duan
ELM
24 Nov 2020

Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention
Varnith Chordia, B. Vijaykumar
23 Nov 2020

On the use of Self-supervised Pre-trained Acoustic and Linguistic Features for Continuous Speech Emotion Recognition
Manon Macary, Marie Tahon, Yannick Esteve, Anthony Rousseau
SSL
18 Nov 2020

EstBERT: A Pretrained Language-Specific BERT for Estonian
Hasan Tanvir, Claudia Kittask, Sandra Eiche, Kairit Sirts
09 Nov 2020

RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark
Tatiana Shavrina, Alena Fenogenova, Anton A. Emelyanov, Denis Shevelev, Ekaterina Artemova, Valentin Malykh, Vladislav Mikhailov, Maria Tikhonova, Andrey Chertok, Andrey Evlampiev
VLM, ELM
29 Oct 2020

BARThez: a Skilled Pretrained French Sequence-to-Sequence Model
Moussa Kamal Eddine, A. Tixier, Michalis Vazirgiannis
BDL
23 Oct 2020

mT5: A massively multilingual pre-trained text-to-text transformer
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel
22 Oct 2020

German's Next Language Model
Branden Chan, Stefan Schweter, Timo Möller
21 Oct 2020

EFSG: Evolutionary Fooling Sentences Generator
Marco Di Giovanni, Marco Brambilla
AAML
12 Oct 2020

AnchiBERT: A Pre-Trained Model for Ancient Chinese Language Understanding and Generation
Huishuang Tian, Kexin Yang, Dayiheng Liu, Jiancheng Lv
24 Sep 2020

Hierarchical Pre-training for Sequence Labelling in Spoken Dialog
E. Chapuis, Pierre Colombo, Matteo Manica, Matthieu Labeau, Chloé Clavel
23 Sep 2020

Latin BERT: A Contextual Language Model for Classical Philology
David Bamman, P. Burns
21 Sep 2020

The birth of Romanian BERT
Stefan Daniel Dumitrescu, Andrei-Marius Avram, S. Pyysalo
VLM
18 Sep 2020

Adding Recurrence to Pretrained Transformers for Improved Efficiency and Context Size
Davis Yoshida, Allyson Ettinger, Kevin Gimpel
AI4CE
16 Aug 2020

Pre-training Polish Transformer-based Language Models at Scale
Slawomir Dadas, Michal Perelkiewicz, Rafal Poswiata
07 Jun 2020

Exploring Cross-sentence Contexts for Named Entity Recognition with BERT
Jouni Luoma, S. Pyysalo
02 Jun 2020