arXiv: 2508.14811
Tinker: Diffusion's Gift to 3D--Multi-View Consistent Editing From Sparse Inputs without Per-Scene Optimization

20 August 2025
Canyu Zhao
Xiaoman Li
Tianjian Feng
Zhiyue Zhao
Hao Chen
Chunhua Shen
Topics: DiffM, VGen
Links: arXiv (abs) · PDF · HTML · HuggingFace (39 upvotes) · GitHub (24,237★)

Papers citing "Tinker: Diffusion's Gift to 3D--Multi-View Consistent Editing From Sparse Inputs without Per-Scene Optimization"

2 / 2 papers shown
Dynamic-eDiTor: Training-Free Text-Driven 4D Scene Editing with Multimodal Diffusion Transformer
Dong In Lee, Hyungjun Doh, Seunggeun Chi, Runlin Duan, Sangpil Kim, K. Ramani
Topics: DiffM, 3DGS, VGen
108 · 0 · 0 · 30 Nov 2025
InstructMix2Mix: Consistent Sparse-View Editing Through Multi-View Model Personalization
Daniel Gilo, Or Litany
145 · 0 · 0 · 18 Nov 2025