UniMiSS: Universal Medical Self-Supervised Learning via Breaking Dimensionality Barrier

17 December 2021
Yutong Xie, Jianpeng Zhang, Yong Xia, Qi Wu
MedIm

Papers citing "UniMiSS: Universal Medical Self-Supervised Learning via Breaking Dimensionality Barrier"

10 / 10 papers shown
Towards Universal Text-driven CT Image Segmentation
Yuheng Li, Yuxiang Lai, Maria Thor, Deborah Marshall, Zachary Buchwald, D. Yu, Xiaofeng Yang
MedIm, VLM
40 · 2 · 0 · 08 Mar 2025

How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks?
Wenxuan Li, Alan L. Yuille, Zongwei Zhou
MedIm
33 · 7 · 0 · 20 Jan 2025

MiM: Mask in Mask Self-Supervised Pre-Training for 3D Medical Image Analysis
Jiaxin Zhuang, Linshan Wu, Qiong Wang, V. Vardhanabhuti, Lin Luo, Hao Chen
35 · 4 · 0 · 24 Apr 2024

How to build the best medical image segmentation algorithm using foundation models: a comprehensive empirical study with Segment Anything Model
Han Gu, Haoyu Dong, Jichen Yang, Maciej Mazurowski
MedIm, VLM
54 · 10 · 0 · 15 Apr 2024

Towards Foundation Models and Few-Shot Parameter-Efficient Fine-Tuning for Volumetric Organ Segmentation
Julio Silva-Rodríguez, Jose Dolz, Ismail Ben Ayed
32 · 12 · 0 · 29 Mar 2023

Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts
Hong-Yu Zhou, Chi-Ken Lu, Sibei Yang, Xiaoguang Han, Yizhou Yu
SSL, CLL
47 · 84 · 0 · 09 Sep 2021

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
283 · 4,299 · 0 · 29 Apr 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT
260 · 3,538 · 0 · 24 Feb 2021

Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with Reliable Transfer for Cardiac Segmentation
Kang Li, Shujun Wang, Lequan Yu, Pheng-Ann Heng
43 · 22 · 0 · 07 Jan 2021

Improved Baselines with Momentum Contrastive Learning
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
SSL
227 · 3,029 · 0 · 09 Mar 2020