To pretrain or not to pretrain? A case study of domain-specific pretraining for semantic segmentation in histopathology

6 July 2023
Tushar Kataria
Beatrice S. Knudsen
Shireen Elhabian
    VLM

Papers citing "To pretrain or not to pretrain? A case study of domain-specific pretraining for semantic segmentation in histopathology"

2 papers shown
BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs
Sheng Zhang
Yanbo Xu
Naoto Usuyama
Hanwen Xu
J. Bagga
...
Carlo Bifulco
M. Lungren
Tristan Naumann
Sheng Wang
Hoifung Poon
LM&MA
MedIm
10 Jan 2025
Assistive Image Annotation Systems with Deep Learning and Natural Language Capabilities: A Review
Moseli Motsóehli
VLM
3DV
28 Jun 2024