SceneNet: Understanding Real World Indoor Scenes With Synthetic Data
Scene understanding is a prerequisite to many high-level tasks for any automated intelligent machine operating in real-world environments. Recent attempts with supervised learning have shown promise in this direction but have also highlighted the need for enormous quantities of supervised data --- performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth-based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models, we show performance comparable to state-of-the-art RGB-D systems on the NYUv2 dataset despite using only depth data as input, and we set a benchmark for depth-based segmentation on the SUN RGB-D dataset. Additionally, we offer a route to generating synthesized frame or video data, and an understanding of the different factors influencing performance gains.
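The abstract's key idea of "synthesizing training data with appropriate noise models" can be illustrated with a minimal sketch. The snippet below corrupts a clean synthetic depth map with depth-dependent Gaussian axial noise, using the empirical Kinect noise form sigma(z) = sigma0 + k*(z - z0)^2; the specific function name and parameter values are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def simulate_depth_noise(depth_m, sigma0=0.0012, k=0.0019, z0=0.4, seed=None):
    """Corrupt a clean synthetic depth map (in metres) with a simple
    Kinect-style axial noise model.

    The per-pixel standard deviation grows quadratically with depth:
        sigma(z) = sigma0 + k * (z - z0)**2
    This is a hypothetical stand-in for the paper's noise model, with
    parameter values taken from a commonly used empirical fit.
    """
    rng = np.random.default_rng(seed)
    sigma = sigma0 + k * (depth_m - z0) ** 2
    noisy = depth_m + rng.normal(0.0, 1.0, depth_m.shape) * sigma
    # Depth readings cannot be negative.
    return np.clip(noisy, 0.0, None)

# Example: a flat wall 2 m from the camera at VGA resolution.
clean = np.full((480, 640), 2.0)
noisy = simulate_depth_noise(clean, seed=0)
```

In a pipeline like the one the abstract describes, such a corruption step would be applied to every rendered depth frame before training, so the network sees depth statistics closer to those of a real sensor.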
Forward citations
Cited by 1 Pith paper
- The Replica Dataset: A Digital Replica of Indoor Spaces
  Replica is a new dataset of 18 highly detailed 3D reconstructions of indoor spaces, with meshes, high-resolution HDR textures, per-primitive semantics, and mirror/glass reflectors for realistic ML training.