ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

RobeCzech: Czech RoBERTa, a monolingual contextualized language representation model
arXiv:2105.11314 · 24 May 2021
Milan Straka, Jakub Náplava, Jana Straková, David Samuel

Papers citing "RobeCzech: Czech RoBERTa, a monolingual contextualized language representation model"

25 papers shown

 1. Unpacking Robustness in Inflectional Languages: Adversarial Evaluation and Mechanistic Insights
    Paweł Walkowiak, Marek Klonowski, Marcin Oleksy, Arkadiusz Janz · AAML · 08 May 2025
 2. A Survey of Large Language Models for European Languages
    Wazir Ali, S. Pyysalo · 27 Aug 2024
 3. Open-Source Web Service with Morphological Dictionary-Supplemented Deep Learning for Morphosyntactic Analysis of Czech
    Milan Straka, Jana Straková · 18 Jun 2024
 4. GPT Czech Poet: Generation of Czech Poetic Strophes with Language Models
    Michal Chudoba, Rudolf Rosa · 18 Jun 2024
 5. How Gender Interacts with Political Values: A Case Study on Czech BERT Models
    Adnan Al Ali, Jindřich Libovický · 20 Mar 2024
 6. Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language
    Jan Drchal, Herbert Ullrich, Tomáš Mlynář, Václav Moravec · HILM · 15 Dec 2023
 7. Some Like It Small: Czech Semantic Embedding Models for Industry Applications
    Jiří Bednář, Jakub Náplava, Petra Barančíková, Ondřej Lisický · 23 Nov 2023
 8. AlbNER: A Corpus for Named Entity Recognition in Albanian
    Erion Çano · 15 Sep 2023
 9. A Dataset and Strong Baselines for Classification of Czech News Texts
    Hynek Kydlíček, Jindřich Libovický · 20 Jul 2023
10. Quality and Efficiency of Manual Annotation: Pre-annotation Bias
    Marie Mikulová, Milan Straka, J. Štěpánek, B. Štěpánková, Jan Hajič · 15 Jun 2023
11. L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT
    Samruddhi Deode, Janhavi Gadre, Aditi Kajale, Ananya Joshi, Raviraj Joshi · 22 Apr 2023
12. Unsupervised extraction, labelling and clustering of segments from clinical notes
    Petr Zelina, J. Halámková, V. Nováček · 21 Nov 2022
13. L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi
    Ananya Joshi, Aditi Kajale, Janhavi Gadre, Samruddhi Deode, Raviraj Joshi · 21 Nov 2022
14. Speaking Multiple Languages Affects the Moral Bias of Language Models
    Katharina Hämmerl, Björn Deiseroth, P. Schramowski, Jindřich Libovický, Constantin Rothkopf, Alexander M. Fraser, Kristian Kersting · 14 Nov 2022
15. ÚFAL CorPipe at CRAC 2022: Effectivity of Multilingual Models for Coreference Resolution
    Milan Straka, Jana Straková · LRM · 15 Sep 2022
16. Czech Dataset for Cross-lingual Subjectivity Classification
    Pavel Přibáň, J. Steinberger · 29 Apr 2022
17. Mono vs Multilingual BERT for Hate Speech Detection and Text Classification: A Case Study in Marathi
    Abhishek Velankar, H. Patil, Raviraj Joshi · 19 Apr 2022
18. BERTuit: Understanding Spanish language in Twitter through a native transformer
    Javier Huertas-Tato, Alejandro Martín, David Camacho · 07 Apr 2022
19. Do Multilingual Language Models Capture Differing Moral Norms?
    Katharina Hämmerl, Björn Deiseroth, P. Schramowski, Jindřich Libovický, Alexander M. Fraser, Kristian Kersting · 18 Mar 2022
20. L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources
    Raviraj Joshi · 02 Feb 2022
21. CsFEVER and CTKFacts: Acquiring Czech data for fact verification
    Herbert Ullrich, Jan Drchal, Martin Rýpar, Hana Vincourová, Václav Moravec · HILM · 26 Jan 2022
22. Training dataset and dictionary sizes matter in BERT models: the case of Baltic languages
    Matej Ulčar, Marko Robnik-Šikonja · 20 Dec 2021
23. Siamese BERT-based Model for Web Search Relevance Ranking Evaluated on a New Czech Dataset
    M. Kocián, Jakub Náplava, Daniel Stancl, V. Kadlec · 03 Dec 2021
24. AMMUS: A Survey of Transformer-based Pretrained Models in Natural Language Processing
    Katikapalli Subramanyam Kalyan, A. Rajasekharan, S. Sangeetha · VLM, LM&MA · 12 Aug 2021
25. Comparison of Czech Transformers on Text Classification Tasks
    Jan Lehečka, Jan Švec · VLM · 21 Jul 2021