Abstract
LaS-Comp presents a zero-shot 3D shape completion method using 3D foundation models with a two-stage approach for faithful reconstruction and seamless boundary refinement.
This paper introduces LaS-Comp, a zero-shot, category-agnostic approach that leverages the rich geometric priors of 3D foundation models to complete 3D shapes across diverse types of partial observations. Our contributions are threefold. First, our framework harnesses these powerful generative priors for completion through a complementary two-stage design: (i) an explicit replacement stage that preserves the geometry of the partial observation to ensure faithful completion; and (ii) an implicit refinement stage that ensures seamless boundaries between the observed and synthesized regions. Second, our framework is training-free and compatible with different 3D foundation models. Third, we introduce Omni-Comp, a comprehensive benchmark combining real-world and synthetic data with diverse and challenging partial patterns, enabling a more thorough and realistic evaluation. Both quantitative and qualitative experiments demonstrate that our approach outperforms previous state-of-the-art methods. Our code and data will be available at https://github.com/DavidYan2001/LaS-Comp.
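The two-stage design described above could be caricatured on a dense occupancy grid as follows. This is only a minimal sketch: every function and variable name here is a hypothetical stand-in, and the simple neighbour-averaging "refinement" is a toy substitute for the paper's implicit, latent-space refinement with a 3D foundation model.

```python
import numpy as np

def explicit_replacement(generated, observed, observed_mask):
    """Stage 1 (sketch): overwrite the generated shape with the partial
    observation wherever it was actually observed, so the observed
    geometry is preserved exactly in the completed result."""
    completed = generated.copy()
    completed[observed_mask] = observed[observed_mask]
    return completed

def implicit_refinement(volume, boundary_mask, iterations=5):
    """Stage 2 (toy stand-in): smooth values near the observed/synthesized
    boundary so the seam blends, leaving everything else untouched."""
    refined = volume.copy()
    for _ in range(iterations):
        # simple 6-neighbour average on the dense grid
        avg = (np.roll(refined, 1, 0) + np.roll(refined, -1, 0) +
               np.roll(refined, 1, 1) + np.roll(refined, -1, 1) +
               np.roll(refined, 1, 2) + np.roll(refined, -1, 2)) / 6.0
        refined[boundary_mask] = avg[boundary_mask]
    return refined

# Toy grids standing in for a foundation model's completed output and
# the partial observation (here, the "front half" was observed).
rng = np.random.default_rng(0)
generated = rng.random((8, 8, 8))
observed = np.ones((8, 8, 8))
observed_mask = np.zeros((8, 8, 8), dtype=bool)
observed_mask[:4] = True

completed = explicit_replacement(generated, observed, observed_mask)

boundary = np.zeros_like(observed_mask)
boundary[3:5] = True  # cells around the observed/synthesized seam
refined = implicit_refinement(completed, boundary)
```

The key property the sketch illustrates is the division of labour: stage 1 guarantees the observed region is reproduced verbatim, while stage 2 only touches cells near the seam, so faithfulness and boundary smoothness do not trade off against each other.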
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- AnchoredDream: Zero-Shot 360° Indoor Scene Generation from a Single View via Geometric Grounding (2026)
- Joint Geometry-Appearance Human Reconstruction in a Unified Latent Space via Bridge Diffusion (2026)
- MGPC: Multimodal Network for Generalizable Point Cloud Completion With Modality Dropout and Progressive Decoding (2026)
- InpaintHuman: Reconstructing Occluded Humans with Multi-Scale UV Mapping and Identity-Preserving Diffusion Inpainting (2026)
- Gen3R: 3D Scene Generation Meets Feed-Forward Reconstruction (2026)
- GaMO: Geometry-aware Multi-view Diffusion Outpainting for Sparse-View 3D Reconstruction (2025)
- FlowSSC: Universal Generative Monocular Semantic Scene Completion via One-Step Latent Diffusion (2026)