A Unified Framework for 3D Scene Understanding

Huazhong University of Science & Technology
* Equal contribution. Corresponding author.

Abstract

We propose UniSeg3D, a unified 3D segmentation framework that performs panoptic, semantic, instance, interactive, referring, and open-vocabulary semantic segmentation within a single model. Most previous 3D segmentation approaches are specialized for a single task, limiting their understanding of 3D scenes to a task-specific perspective. In contrast, the proposed method unifies these six tasks into shared representations processed by the same Transformer, facilitating inter-task knowledge sharing and thereby promoting comprehensive 3D scene understanding. To take full advantage of multi-task unification, we further enhance performance by leveraging connections between tasks. Specifically, we design a knowledge distillation method and a contrastive learning method to transfer task-specific knowledge across different tasks. Benefiting from this extensive inter-task knowledge sharing, UniSeg3D becomes more powerful. Experiments on three benchmarks, ScanNet20, ScanRefer, and ScanNet200, demonstrate that UniSeg3D consistently outperforms current SOTA methods, even those specialized for individual tasks. We hope UniSeg3D can serve as a solid unified baseline and inspire future work.
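The contrastive learning idea mentioned above can be illustrated with a minimal, framework-agnostic sketch: an InfoNCE-style loss that pulls paired query embeddings from two tasks (e.g., an interactive-segmentation query and the referring query describing the same instance) together, while pushing mismatched pairs apart. The function names and the pure-Python setup below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import math

def dot(a, b):
    # Inner product of two equal-length vectors.
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Cosine similarity; guards against zero-norm vectors.
    na = math.sqrt(dot(a, a)) or 1.0
    nb = math.sqrt(dot(b, b)) or 1.0
    return dot(a, b) / (na * nb)

def infonce_loss(anchors, positives, temperature=0.07):
    """InfoNCE over paired embeddings: for each anchor, the
    same-index entry of `positives` is its positive and every
    other entry serves as a negative."""
    total = 0.0
    for i, a in enumerate(anchors):
        sims = [cosine(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(s) for s in sims))
        total += log_denom - sims[i]  # -log softmax of the positive
    return total / len(anchors)
```

With aligned pairs the loss approaches zero, while shuffled (mismatched) pairs yield a large loss, which is the signal that draws the two tasks' embeddings of the same instance together during training.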

Overview


Unified Tasks

Panoptic Segmentation
Semantic Segmentation
Instance Segmentation
Interactive Segmentation
Referring Segmentation
Open-Vocabulary Semantic Segmentation

Pipeline


Experimental Results

Comparison


Ablation


Qualitative Results


BibTeX


    @article{xu2024unified,
      title={A Unified Framework for 3D Scene Understanding},
      author={Xu, Wei and Shi, Chunsheng and Tu, Sifan and Zhou, Xin and Liang, Dingkang and Bai, Xiang},
      journal={arXiv preprint arXiv:2407.03263},
      year={2024}
    }
    