NeRF-HuGS

Improved Neural Radiance Fields in Non-static Scenes Using Heuristics-Guided Segmentation

Jiahao Chen, Yipeng Qin, Lingjie Liu, Jiangbo Lu, Guanbin Li
Sun Yat-sen University    Cardiff University    University of Pennsylvania    SmartMore Corporation
CVPR 2024 (Oral)

Abstract

Neural Radiance Fields (NeRF) have been widely recognized for their excellence in novel view synthesis and 3D scene reconstruction. However, their effectiveness is inherently tied to the assumption of static scenes, rendering them susceptible to undesirable artifacts when confronted with transient distractors such as moving objects or shadows.

In this work, we propose a novel paradigm, namely "Heuristics-Guided Segmentation" (HuGS), which significantly enhances the separation of static scenes from transient distractors by harmoniously combining the strengths of hand-crafted heuristics and state-of-the-art segmentation models, thus transcending the limitations of previous solutions. Furthermore, we delve into the meticulous design of heuristics, introducing a seamless fusion of Structure-from-Motion (SfM)-based heuristics and color-residual heuristics that caters to a diverse range of texture profiles.

Extensive experiments demonstrate the superiority and robustness of our method in mitigating transient distractors for NeRFs trained in non-static scenes.


Heuristics-Guided Segmentation (HuGS)

HuGS Pipeline

Pipeline of HuGS: (a) Given unordered images of a static scene disturbed by transient distractors as input, we first obtain two types of heuristics. (b) SfM-based heuristics use SfM to distinguish between static (green) and transient (red) features. The static features are then employed as point prompts to generate dense masks using SAM. (c) Residual-based heuristics rely on a partially trained NeRF (i.e., trained for a few thousand iterations) that can provide reasonable color residuals. (d) Their combination finally guides SAM again to generate (e) the static map for each input image.
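
As a minimal sketch of step (b), the SfM-based heuristic can be prototyped with pycolmap and the official segment_anything package. The exact static/transient criterion, the MIN_TRACK_LEN threshold, and the file paths below are illustrative assumptions, not the paper's precise settings; the idea is that features matched across many images are likely static, and their 2D keypoints become SAM point prompts.

import numpy as np
import pycolmap
from segment_anything import SamPredictor, sam_model_registry

MIN_TRACK_LEN = 10  # assumed threshold: features seen in >= 10 images count as static

reconstruction = pycolmap.Reconstruction("sparse/0")  # hypothetical COLMAP output path
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # hypothetical checkpoint
predictor = SamPredictor(sam)

def static_point_prompts(image: pycolmap.Image) -> np.ndarray:
    """Collect 2D keypoints whose 3D points have long tracks (likely static)."""
    prompts = []
    for p2d in image.points2D:
        if p2d.has_point3D():
            track_len = reconstruction.points3D[p2d.point3D_id].track.length()
            if track_len >= MIN_TRACK_LEN:
                prompts.append(p2d.xy)
    return np.asarray(prompts)

def sfm_heuristic_mask(rgb: np.ndarray, image: pycolmap.Image) -> np.ndarray:
    """Densify static keypoints into a mask (H_SfM) by prompting SAM with them."""
    points = static_point_prompts(image)  # assumed non-empty for this sketch
    predictor.set_image(rgb)  # rgb: HxWx3 uint8 image
    masks, _, _ = predictor.predict(
        point_coords=points,
        point_labels=np.ones(len(points), dtype=int),  # 1 = foreground prompt
        multimask_output=False,
    )
    return masks[0]  # boolean HxW mask of the static region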


Visualization: Below are examples of HuGS on different scenes (datasets). More results can be found in the paper and the released data.

Each row shows, from left to right: (1) input image, (2) segmentation with SfM features, (3) SfM-based heuristic H_SfM, (4) color residual, (5) residual-based heuristic H_CR, and (6) the final static map.

[Image grid: rows for the BabyYoda, Crab, Statue, and Android scenes (Distractor dataset); the Pillow and Cars scenes (Kubric dataset); and the Brandenburg Gate, Sacre Coeur, Taj Mahal, and Trevi Fountain scenes (Phototourism dataset).]
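
The residual-based heuristic in columns (4) and (5) above, and the fusion of the two cues, can be sketched as follows. The fixed-percentile threshold and the simple union are illustrative assumptions; the paper's actual rule for combining the heuristics (which adapts to the scene's texture profile) may differ.

import numpy as np

def residual_heuristic_mask(render: np.ndarray, gt: np.ndarray,
                            percentile: float = 70.0) -> np.ndarray:
    """H_CR: pixels whose color residual under a partially trained NeRF is
    low are likely static. `render` and `gt` are HxWx3 float arrays in
    [0, 1]; the fixed-percentile threshold is an illustrative choice."""
    residual = np.abs(render - gt).mean(axis=-1)  # per-pixel color error
    return residual <= np.percentile(residual, percentile)

def combine_heuristics(h_sfm: np.ndarray, h_cr: np.ndarray) -> np.ndarray:
    """Fuse both boolean cues before the second SAM pass; a plain union is
    assumed here for simplicity."""
    return h_sfm | h_cr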

Rendering Results

Comparisons on the Distractor Dataset: Our method can better preserve static details while ignoring transient distractors.


[Image comparisons on the BabyYoda, Crab, Statue, and Android scenes; columns: Mip-NeRF 360, w/ RobustNeRF, w/ HuGS (ours).]
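
In these comparisons, the static maps gate which pixels supervise NeRF training. A common way to do this (assumed here; the paper's exact training objective may differ) is a masked photometric loss that lets only rays landing on static pixels contribute:

import torch

def masked_photometric_loss(pred_rgb: torch.Tensor,
                            gt_rgb: torch.Tensor,
                            static_mask: torch.Tensor) -> torch.Tensor:
    """Mean squared color error over static rays only.
    pred_rgb/gt_rgb: (N, 3) per-ray colors; static_mask: (N,) bool sampled
    from the static map at each ray's pixel."""
    per_ray = ((pred_rgb - gt_rgb) ** 2).sum(dim=-1)
    denom = static_mask.sum().clamp(min=1)  # avoid division by zero
    return (per_ray * static_mask).sum() / denom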

Comparisons on the Kubric Dataset:


[Image comparisons on the Pillow, Chairs, and Cars scenes; columns: Mip-NeRF 360, w/ RobustNeRF, w/ HuGS (ours).]

Comparisons on the Phototourism Dataset:


[Image comparisons on the Brandenburg Gate, Sacre Coeur, Taj Mahal, and Trevi Fountain scenes; columns: Mip-NeRF 360, w/ RobustNeRF, w/ HuGS (ours).]

BibTeX

@inproceedings{chen2024nerfhugs,
  author    = {Chen, Jiahao and Qin, Yipeng and Liu, Lingjie and Lu, Jiangbo and Li, Guanbin},
  title     = {NeRF-HuGS: Improved Neural Radiance Fields in Non-static Scenes Using Heuristics-Guided Segmentation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024},
}