Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios

5 May 2023
Hazal Türkmen, Oğuz Dikenelli, C. Eraslan, Mehmet Cem Çalli, S. Özbek

Papers citing "Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios"

2 / 2 papers shown
Investigating Large Language Models and Control Mechanisms to Improve Text Readability of Biomedical Abstracts
Z. Li, Samuel Belkadi, Nicolo Micheletti, Lifeng Han, Matthew Shardlow, Goran Nenadic
22 Sep 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022