Look Over Here! Comparing Interaction Methods for User-Assisted Remote Scene Reconstruction

Liebers, Carina and Pfützenreuter, Niklas and Prochazka, Marvin and Megarajan, Pranav and Furuno, Eike and Löber, Jan and Stratmann, Tim C. and Auda, Jonas and Degraen, Donald and Gruenefeld, Uwe and Schneegass, Stefan
Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems
Detailed digital representations of physical scenes are key in many applications, such as historical site preservation or hazardous area inspection. To automate the capturing process, robots or drones equipped with sensors can algorithmically record the environment from different viewpoints. However, environmental complexities often lead to incomplete captures. We believe humans can support scene capture, as their contextual understanding enables easy identification of missing areas and recording errors. To do so, they need to perceive the recordings and suggest new sensor poses. In this work, we compare two human-centric approaches in Virtual Reality for scene reconstruction through the teleoperation of a remote robot arm: directly providing sensor poses (direct method) or specifying missing areas in the scans (indirect method). Our results show that directly providing sensor poses leads to higher efficiency and user experience. In future work, we aim to compare the quality of human assistance to that of automatic approaches.
Association for Computing Machinery
CHI EA '24
Human-Assisted Synthesis of Simulation Data for Robotics