That is, we devise machine vision methods to perceive structures and objects so that robots can act in and learn from everyday situations. This paves the way to automated manufacturing and to robots performing household tasks. Our solutions follow a situated approach that integrates task, robot, and perception knowledge. Our core expertise covers safe navigation, 2D and 3D attention, object modelling, object class detection, affordance-based grasping, and the manipulation of objects in relation to their functions.
Within this research area of the V4R research group, we investigate different Human-Robot Interaction (HRI) scenarios. We are especially interested in enabling long-term HRI and conduct research from a user-centered perspective as well as from a robot/cognition-centered perspective.
We study joint action scenarios, usability and acceptance of service robots in the domestic and the industrial context, adaptive behaviour coordination, and educational robotics.
Within this research area of the V4R research group, we focus on reasoning about scenes and environments from different perspectives, with the goal of better perceiving, representing, and understanding a robot's surroundings to enable more advanced behaviour. Research topics include scene reconstruction, semantic scene parsing, and the efficient representation of semantic knowledge and its exploitation in robotics, especially for intelligent robot navigation and robotic interaction with the environment.
Within this research area of the V4R research group, we investigate different research aspects related to objects. Our research focuses on object modelling (multi-view reconstruction from RGB-D as well as stereo data), object recognition, object detection, object classification, object affordances, and object manipulation, such as grasping known and unknown objects. The goal of our research is to empower autonomous robots in their perception and manipulation tasks.
Visit our YouTube channel, V4RatTUVienna, for more information and demonstrations of our research.