Indoor environment modeling has become a relevant topic in several application fields, including augmented, virtual, and extended reality. With the digital transformation, many industries have investigated two possibilities: generating detailed models of indoor environments that allow viewers to navigate through them, and mapping surfaces so that virtual elements can be inserted into real scenes. The scope of this paper is twofold. We first review the existing state-of-the-art (SoA) learning-based methods for 3D scene reconstruction based on structure from motion (SfM) that predict depth maps and camera poses from video streams.
We then present an extensive evaluation using a recent SoA network, with particular attention to its capability to generalize to new, unseen data from indoor environments. The evaluation was conducted using the absolute relative error (AbsRel) of the depth map predictions as the baseline metric.
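For reference, AbsRel is commonly defined as follows; the notation below (predicted depth $d_i$, ground-truth depth $d_i^{*}$, and $N$ valid pixels) is our own shorthand for the standard formulation and is not specific to the evaluated network:
\[
\mathrm{AbsRel} = \frac{1}{N} \sum_{i=1}^{N} \frac{\lvert d_i - d_i^{*} \rvert}{d_i^{*}}.
\]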