Anything-3D: Towards Single-view Anything Reconstruction in the Wild
arXiv:2304.10261 · 19 April 2023
Qiuhong Shen, Xingyi Yang, Xinchao Wang
Tags: DiffM
Papers citing "Anything-3D: Towards Single-view Anything Reconstruction in the Wild" (8 of 8 shown):

| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| Mix-QSAM: Mixed-Precision Quantization of the Segment Anything Model | Navin Ranjan, Andreas E. Savakis | MQ, VLM | 0 | 08 May 2025 |
| Laser: Efficient Language-Guided Segmentation in Neural Radiance Fields | Xingyu Miao, Haoran Duan, Yang Bai, Tejal Shah, Jun Song, Yang Long, R. Ranjan, Ling Shao | — | 4 | 31 Jan 2025 |
| Zero-Shot Pupil Segmentation with SAM 2: A Case Study of Over 14 Million Images | Virmarie Maquiling, Sean Anthony Byrne, D. Niehorster, Marco Carminati, Enkelejda Kasneci | VLM | 0 | 11 Oct 2024 |
| TetSphere Splatting: Representing High-Quality Geometry with Lagrangian Volumetric Meshes | Minghao Guo, Bohan Wang, Kaiming He, Wojciech Matusik | 3DGS | 6 | 30 May 2024 |
| SPAD: Spatially Aware Multiview Diffusers | Yash Kant, Ziyi Wu, Michael Vasilkovsky, Guocheng Qian, Jian Ren, R. A. Guler, Bernard Ghanem, Sergey Tulyakov, Igor Gilitschenski, Aliaksandr Siarohin | DiffM | 34 | 07 Feb 2024 |
| RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation | Yonglin Li, Jing Zhang, Xiao Teng, Long Lan | VOS, VLM | 16 | 03 Jul 2023 |
| A Comprehensive Survey on Segment Anything Model for Vision and Beyond | Chunhui Zhang, Li Liu, Yawen Cui, Guanjie Huang, Weilin Lin, Yiqian Yang, Yuehong Hu | VLM | 89 | 14 May 2023 |
| BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi | MLLM, BDL, VLM, CLIP | 4,110 | 28 Jan 2022 |