THÖR-MAGNI: A Large-scale Indoor Motion Capture Recording of Human Movement and Interaction
Description
See also: the THÖR-MAGNI Dataset Tutorials
THÖR-MAGNI is a novel dataset of accurate human and robot navigation and interaction in diverse indoor contexts, building on the protocol of the previous THÖR dataset. We provide motion-capture data of position and head orientation, 3D LiDAR scans, and gaze tracking. In total, THÖR-MAGNI captures 3.5 hours of motion from 40 participants over 5 recording days.
This data collection is designed around systematic variation of environmental factors, to allow building cue-conditioned models of human motion and verifying hypotheses on the impact of those factors. To that end, THÖR-MAGNI encompasses 5 scenarios, some of which come in different conditions (i.e., with one factor varied):
Scenario 1 (conditions A and B): Participants move in groups and individually; Robot as static obstacle; Environment with 3 obstacles, plus lane markings on the floor in condition B;
Scenario 2: Participants move in groups, individually, and transport objects of variable difficulty (i.e., a bucket, boxes, and a poster stand); Robot as static obstacle; Environment with 3 obstacles;
Scenario 3 (conditions A and B): Participants move in groups, individually, and transport objects of variable difficulty (i.e., a bucket, boxes, and a poster stand). We denote the roles as: Visitors-Alone, Visitors-Group 2, Visitors-Group 3, Carrier-Bucket, Carrier-Box, Carrier-Large Object; Teleoperated robot as moving agent: in condition A, the robot moves with a differential drive; in condition B, with an omnidirectional drive; Environment with 2 obstacles;
Scenario 4 (conditions A and B): All participants, denoted Visitors-Alone HRI, interacted with the teleoperated mobile robot. The robot interacted in two ways: in condition A (Verbal-Only), the Anthropomorphic Robot Mock Driver (ARMoD), a small humanoid NAO robot mounted on the mobile platform, used only speech to communicate the next goal point to the participant; in condition B, the ARMoD used speech, gestures, and robotic gaze to convey the same message; Free-space environment;
Scenario 5: Participants move alone (Visitors-Alone), and one participant, denoted Visitors-Alone HRI, transports objects and interacts with the robot; The ARMoD is remotely controlled by an experimenter and proactively offers help; Free-space environment.
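As an illustration of how trajectory recordings like these are typically consumed, the sketch below parses a small trajectory table and computes each agent's average speed by finite differences. The column names (`time`, `agent_id`, `x`, `y`) and the sample values are assumptions for illustration only, not the dataset's actual schema; consult the THÖR-MAGNI tutorials for the real file layout.

```python
import csv
import io
import math
from collections import defaultdict

# Hypothetical CSV layout: one row per agent per timestamp.
# Column names and values are assumptions, NOT the official THÖR-MAGNI schema.
SAMPLE = """time,agent_id,x,y
0.0,Visitors-Alone_1,0.0,0.0
0.1,Visitors-Alone_1,0.05,0.0
0.2,Visitors-Alone_1,0.10,0.0
0.0,Carrier-Bucket_1,2.0,1.0
0.1,Carrier-Bucket_1,2.0,1.03
0.2,Carrier-Bucket_1,2.0,1.06
"""

def average_speeds(csv_text):
    """Group rows by agent, then average the finite-difference speeds (m/s)."""
    tracks = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        tracks[row["agent_id"]].append(
            (float(row["time"]), float(row["x"]), float(row["y"]))
        )
    speeds = {}
    for agent, samples in tracks.items():
        samples.sort()  # order samples by time
        steps = [
            math.hypot(x2 - x1, y2 - y1) / (t2 - t1)
            for (t1, x1, y1), (t2, x2, y2) in zip(samples, samples[1:])
        ]
        speeds[agent] = sum(steps) / len(steps)
    return speeds

print(average_speeds(SAMPLE))
```

The same grouping-by-agent pattern extends naturally to richer signals such as head orientation or gaze, with additional columns per row.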
Publication year
2023
Type of data
Creators
Department of Electrical Engineering and Automation
Achim J. Lilienthal - Contributor
Kai O. Arras - Contributor
Luigi Palmieri - Contributor
Martin Magnusson - Contributor
Andrey Rudenko - Creator
Eduardo Gutierrez Maestro - Creator
Tiago Rodrigues de Almeida - Creator
Tim Schreiter - Creator
Yufei Zhu - Creator
Robert Bosch GmbH - Contributor
Technical University of Munich - Contributor
University of Stuttgart - Contributor
Zenodo - Publisher
Örebro University - Contributor
Project
Other information
Fields of science
Electrical, automation and telecommunications engineering, electronics
Language
Open access
Open