14.07.2008
Austrian Kangaroos

Vienna has a new soccer team

Since the beginning of this year Vienna has its first humanoid robot soccer team, the Austrian-Kangaroos. The team is composed of researchers and is supported by the Automation and Control Institute and the Compilers and Languages Group (COMPLANG) of the Institute of Computer Languages, both at TU Wien, as well as by the Institute of Computer Science at the University of Applied Sciences Technikum Wien. The team's goal is to compete in the Standard Platform League at the RoboCup World Championship, which will be held this year in Graz, Austria (29.06.–05.07).

  14.07.2008
CogX

Cognitive Systems that Self-Understand and Self-Extend

A specific, if very simple, example of the kind of task that we will tackle is a domestic robot assistant, or gopher, that is asked by a human: “Please bring me the box of cornflakes.” There are many kinds of knowledge gaps that could be present (we will not tackle all of these):

  • What this particular box looks like.
  • Which room this particular item is currently in.
  • What cereal boxes look like in general.
  • Where cereal boxes are typically to be found within a house.
  • How to grasp this particular packet.
  • How to grasp cereal packets in general.
  • What the cornflakes box is to be used for by the human.

The robot will have to fill the knowledge gaps necessary to complete the task, but these gaps also offer opportunities for learning. To self-extend, the robot must identify and exploit these opportunities. We will allow this learning to be curiosity driven. This provides us, within the confines of our scenario, with the ability to study mechanisms able to generate a spectrum of behaviours, from purely task driven information gathering to purely curiosity driven learning. To be flexible the robot must be able to do both. It must also know how to trade off efficient execution of the current task – find out where the box is and get it – against curiosity driven learning of what might be useful in future – find out where cereal boxes can usually be found, or spend time, once the box is found, performing grasps and pushes on it to see how it behaves. One extreme of the spectrum we can characterise as a task focused robot assistant, the other as a kind of curious robotic child scientist that tarries while performing its assigned task in order to make discoveries and experiments. One of our objectives is to show how to embed both these characteristics in the same system, and how architectural mechanisms can allow an operator – or perhaps a higher order system in the robot – to alter their relative priority, and thus the behaviour of the robot.
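
How a single architecture might expose such a priority is easiest to see in code. The following is a minimal sketch, not the project's actual mechanism: the action names, the scoring functions and the linear blend with a weight w are all illustrative assumptions.

```python
# Minimal sketch of action selection that blends task-driven and
# curiosity-driven value. All names and the linear blend are
# illustrative assumptions, not the CogX architecture itself.

def select_action(actions, task_value, expected_information_gain, w):
    """Pick the action maximising a blend of task utility and curiosity.

    w = 0.0 gives a purely task-focused robot, w = 1.0 a purely
    curious one; an operator (or a higher-order process) can move
    the robot along this spectrum by changing w.
    """
    def blended_utility(a):
        return (1.0 - w) * task_value(a) + w * expected_information_gain(a)
    return max(actions, key=blended_utility)

# Example: a 'fetch' action scores high on the task, a 'probe grasps'
# action scores high on expected information gain.
actions = ["fetch_box", "probe_grasps", "survey_kitchen"]
task_value = {"fetch_box": 1.0, "probe_grasps": 0.1, "survey_kitchen": 0.3}.get
info_gain = {"fetch_box": 0.1, "probe_grasps": 0.8, "survey_kitchen": 0.6}.get

print(select_action(actions, task_value, info_gain, w=0.2))  # fetch_box
print(select_action(actions, task_value, info_gain, w=0.9))  # probe_grasps
```

Moving w towards 0 yields the task-focused assistant; moving it towards 1 yields the curious child scientist.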

Goals

The ability to manipulate novel objects detected in the environment, and to predict their behaviour after a certain action is applied to them, is important for a robot that can extend its own abilities. The goal is to provide the necessary sensory input for the above by exploiting the interplay between perception and manipulation. We will develop robust, generalisable and extensible manipulation strategies based on visual and haptic input. We envisage two forms of object manipulation: pushing, using a “finger” containing a force-torque sensor, and grasping, using a parallel jaw gripper and a three-finger hand. Through this coupling of perception and action we will be able to extract additional information about objects, e.g. their weight, and reason about object properties such as whether a container is empty or full.
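
As a concrete illustration of what this coupling can yield, the sketch below estimates an object's weight from a wrist force-torque sensor and uses it for a crude empty/full decision. The sensor readings, reference masses and decision rule are illustrative assumptions, not the project's method.

```python
# Minimal sketch: estimating an object's mass from the change in the
# vertical force measured by a wrist force-torque sensor, then using
# it to reason about a property such as "empty or full". The sensor
# values and the reference masses are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def estimate_mass(fz_before_grasp, fz_during_hold):
    """Mass from the extra downward force the held object adds (newtons in, kg out)."""
    return (fz_during_hold - fz_before_grasp) / G

def classify_fill_level(measured_mass, full_mass, empty_mass):
    """Crude empty/full decision by comparing to reference masses."""
    midpoint = 0.5 * (full_mass + empty_mass)
    return "full" if measured_mass >= midpoint else "empty"

mass = estimate_mass(fz_before_grasp=2.0, fz_during_hold=5.4)
print(f"{mass:.2f} kg ->", classify_fill_level(mass, full_mass=0.5, empty_mass=0.1))
```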

[Task1] – Contour based shape representations. Investigate methods to robustly extract object contours using edge-based perceptual grouping. Develop representations of 3D shape based on contours of different views of the object, as seen from different camera positions or obtained by the robot holding and turning the object actively. Investigate how to incorporate learned perceptual primitives and spatial relations. (M1 – M12)

[Task2] – Early grasping strategies. Based on the visual sensory input extracted in Task 1, define motor representations of grasping actions for two- and three-fingered hands. The initial grasping strategies will be defined by a suitable approach vector (relative pose with respect to the object/grasping part) and preshape strategy (grasp type). (M7 – M18)
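
A minimal sketch of such a motor representation might look as follows; the field names and the preshape vocabulary are assumptions for illustration, not the project's actual representation.

```python
# Minimal sketch of a grasp represented by an approach vector
# (pose relative to the object or grasping part) and a preshape.
# Field names and the preshape vocabulary are illustrative only.

from dataclasses import dataclass
from enum import Enum

class Preshape(Enum):
    PARALLEL_JAW = "parallel_jaw"             # two-fingered gripper
    THREE_FINGER_WRAP = "three_finger_wrap"
    THREE_FINGER_PRECISION = "three_finger_precision"

@dataclass
class Grasp:
    approach_position: tuple[float, float, float]   # in the object frame, metres
    approach_direction: tuple[float, float, float]  # unit vector towards the object
    preshape: Preshape

# Approach a box-like object from above with a parallel-jaw preshape.
grasp = Grasp(
    approach_position=(0.0, 0.0, 0.15),
    approach_direction=(0.0, 0.0, -1.0),
    preshape=Preshape.PARALLEL_JAW,
)
```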

[Task3] – Active segmentation. Use haptic information and pushing and grasping actions i) for interactive scene segmentation into meaningful objects, and ii) for extracting a more detailed (visual and haptic) model of each object. Furthermore, use information inside regions (surface markings, texture, shading) to complement contour information and build denser and more accurate models. (M13 – M36)

[Task4] – Active visual search. Survey the literature and evaluate different methods for visual object search in realistic environments with a mobile robot. Based on this survey, develop a system that can detect and recognise objects in a natural (possibly simplified) environment. (M1 – M24)

[Task5] – Object-based spatial modeling. Investigate how to include objects into the spatial representation such that properties of the available vision systems are captured and taken into account. The purpose of this task is to develop a framework that allows for a hybrid representation in which objects and traditional metric spatial models can coexist. (M7 – M24)

[Task6] – Functional understanding of space. Investigate the functional use of space by analyzing spatial models over time. (M24 – M48)

[Task7] – Grasping novel objects. Based on the object models acquired, we will investigate the scalability of the system with respect to grasping novel, previously unseen objects. We will demonstrate how the system can execute tasks that involve grasping based on the extracted sensory input (both about scene and individual objects) and taking into account its embodiment. (M25 – M48)

[Task8] – Theory revision. Given a qualitative, causal physics model, the robot should be able to revise that model according to its match or mismatch with observed qualitative object behaviour. When qualitative predictions are incorrect, the system will identify where the gap in the model is and generate hypotheses for actions that will fill these gaps. (M37 – M48)

[Task9] – Representations of gaps in object knowledge and manipulation skills. Enabling all models of objects and grasps to also represent missing knowledge is a necessary prerequisite for reasoning about information-gathering actions and for representing beliefs about beliefs; it is therefore an ongoing task throughout the project. (M1 – M48)

Partners

  • University of Birmingham BHAM United Kingdom
  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH DFKI Germany
  • Kungliga Tekniska Högskolan KTH Sweden
  • Univerza v Ljubljani UL Slovenia
  • Albert-Ludwigs-Universität ALU-FR Germany

  26.07.2007
Robots @ Home

An Open Platform for Home Robotics

The objective of robots@home is to provide an open mobile platform for the massive introduction of robots into the homes of everyone. The innovations will be: (1) a scalable, affordable platform in response to the different application scenarios of the four industrial partners: domotics, security, food delivery, and elderly care; (2) an embedded perception system providing multi-modal sensor data for learning and mapping the rooms and classifying the main items of furniture; (3) a safe and robust navigation method that finally makes the case for using the platform in homes everywhere. The system is tested in four homes and at a large furniture store (e.g., IKEA). Developers as well as lay persons will show the robot around, indicate rooms and furniture, and then test its capabilities by commanding it to go to the refrigerator or dining table.

The scenario-driven approach is inspired by recent work in cognitive science, neuroscience and animal navigation: a hierarchical cognitive map incorporates topological, metric and semantic information. It builds on structural features observed by a newly developed dependable embedded stereo vision system complemented by time-of-flight and sonar/infrared sensors. This solution will be developed along three progressively more challenging milestones, leading up to a mobile platform that learns four homes, navigates safely, and heads for at least ten annotated pieces of furniture.
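
Such a hierarchical map could be organised roughly as below; this is a schematic sketch under assumed names, not the project's actual data model.

```python
# Schematic sketch of a hierarchical cognitive map combining
# topological, metric and semantic information. Names and structure
# are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class Furniture:
    label: str                       # semantic class, e.g. "dining table"
    position: tuple[float, float]    # metric pose within the room, metres

@dataclass
class Room:
    name: str                                             # semantic label
    furniture: list[Furniture] = field(default_factory=list)
    neighbours: list[str] = field(default_factory=list)   # topological links

# "Showing the robot around" incrementally fills such a map:
kitchen = Room("kitchen", neighbours=["hallway"])
kitchen.furniture.append(Furniture("refrigerator", (1.2, 0.4)))
kitchen.furniture.append(Furniture("dining table", (2.5, 1.8)))
```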

Goals

The milestones propose an approach with three increasingly demanding scenarios:

  • M12: Prototype of the scalable platform coping with basic room and floor types, and first table classification.
  • M24: Room layout learned by “showing the robot around”, classification of the main types of furniture, and safe navigation in the home.
  • M36: Platforms learn four homes and safely navigate to each room and to ten annotated pieces of furniture.

Partners

  • ETH Zürich, CH, Prof. Roland Siegwart
  • Bluebotics, CH, Dr. Nicola Tomatis
  • ARC Seibersdorf, A, Dr. Wilfried Kubinger
  • Legrand, F
  • Nestlé Nespresso, CH
  • Otto Bock, D

  26.07.2006
XPERO

Experimentation with the physical world is a key to gaining insights about the real world and to developing the cognitive skills necessary to exist and act in it. The more comprehensive and instructive the experiment, the more comprehensive and meaningful the insights that can be drawn from it. Learning by experimentation has one crucial advantage over other facets of learning: the resources for learning, i.e. the learning material or training data, are essentially unlimited.

In the XPERO consortium, some of the top European research institutions in robotics, cognitive systems, cognitive robotics, cognitive vision, and artificial intelligence have gathered. The overall objective of the XPERO project is to develop an embodied cognitive system which is able to conduct experiments in the real world in order to gain new insights about the world and the objects in it, and to develop and improve its own cognitive skills and overall performance.

  26.07.2003
NFN

Cognitive Vision – Key Technology for Personal Assistance

Each of us has probably encountered the situation of desperately searching for a personal item, or for a location in an unknown environment. At present there is no technical solution for such an assistive system. The newly granted Joint Research Project “Cognitive Vision” attempts to find first solutions in this direction. A human shall be supported by a system that can not only find things, but that can understand the relationship between human activities and the objects involved. This understanding of new information and new knowledge is the key aspect of the cognitive approach to computer vision.

The solution proposed is based on a trans-disciplinary approach. It integrates partners from theoretical computer science (TU Graz), neuroscience (Max-Planck-Institut Tübingen), machine learning (MU Leoben), and the main computer vision groups in Austria (ACIN & PRIP at TU Wien, EMT & ICG at TU Graz and Joanneum Research Graz).

One aspect of the project is to investigate the relations between the different brain regions in the visual cortex. While the individual functions of these regions are relatively well studied, new methods of screening brain functions enable deeper insights that contradict present hypotheses. It has been shown that human vision profits enormously from expectations in a given situation: for example, objects in their typical environment are spotted much more quickly than in an unexpected one.

Using this analysis of the only “working” vision system, we will develop computer models to describe objects under different conditions, for example different illumination, shape, scale, clutter and occlusion, and to describe the relationships between objects and the environment. A particular emphasis is on learning these models and relationships. In the same way one shows a new object to a child, we want to relieve the user of today's exhaustive training phases.

Another aspect of the research work is the analysis of the interrelations of the different seeing functions, namely mechanisms to guide attention, the detection and identification of objects, the prediction of motions and intentions of the user, the integration of knowledge about the present situation, and the creation of an appropriate system reaction. The coordination of these functions is resolved using an agent-based optimisation of their utility to the system's functioning.
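
In outline, such coordination amounts to each function reporting the utility it currently offers and a coordinator allocating processing resources accordingly. The following is a minimal sketch under assumed names and values, not the project's mechanism.

```python
# Minimal sketch of agent-based coordination: each vision function
# ("agent") reports the utility it expects to contribute in the
# current situation, and the coordinator runs the most useful ones
# within a processing budget. All names and values are illustrative.

def coordinate(agents, budget):
    """Greedily select agents by utility per unit cost until the budget is spent."""
    chosen = []
    for name, utility, cost in sorted(agents, key=lambda a: a[1] / a[2], reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

agents = [
    # (function, current utility, processing cost)
    ("attention",      0.9, 1.0),
    ("object_detect",  0.7, 2.0),
    ("motion_predict", 0.4, 1.5),
    ("reaction_plan",  0.6, 1.0),
]
print(coordinate(agents, budget=4.0))  # ['attention', 'reaction_plan', 'object_detect']
```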

The techniques devised will be implemented in prototype systems. The objective of the next three years is to track and predict where objects are moved to, where they are hidden, and where they could be found again. A user could then ask the system where her mug is, or where a specific shop is when entering unknown parts of a city. In both cases the user would be assisted and guided to the location.

  26.07.2001
ActIPret

Interpreting and Understanding Activities of Expert Operators for Teaching and Education

The objective of ActIPret is to develop a cognitive vision methodology that interprets and records the activities of people handling tools. The focus is on active observation and interpretation of activities, on parsing the sequences into constituent behaviour elements, and on extracting the essential activities and their functional dependence. By providing this functionality, ActIPret will enable observation of experts executing intricate tasks such as repairing machines and maintaining plants. The expert activities are interpreted and stored using natural language expressions in an activity plan. The activity plan is an indexed manual in the form of 3D reconstructed scenes, which can be replayed at any time and location to many users using Augmented Reality equipment.
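
An activity plan of this kind might be indexed roughly as follows; the field names are assumptions for illustration, not the project's format.

```python
# Schematic sketch of an indexed activity plan: each entry pairs a
# natural-language description of a behaviour element with the time
# span and the reconstructed 3D scene needed to replay it in
# Augmented Reality. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ActivityEntry:
    start_s: float      # start time within the recording, seconds
    end_s: float        # end time, seconds
    description: str    # natural-language expression
    scene_file: str     # reconstructed 3D scene for AR replay

plan = [
    ActivityEntry(0.0, 4.2, "pick up the screwdriver", "scene_0001.ply"),
    ActivityEntry(4.2, 9.8, "loosen the cover screw", "scene_0002.ply"),
]
```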

Please visit the German-language site for more information.

  26.07.2000
RobVision

Manoeuvre a walking robot into ship sections using vision

Industries using a CAD-system to design parts or working areas need a means of feedback to enable a comparison of designed and manufactured structures. Vision based on the CAD information is an effective tool to establish this link. For example, the autonomy of a robotic vehicle is needed in several applications in the building and inspection of large structures such as ship bodies. The navigation of a walking robot will be demonstrated using this vision tool. Furthermore, the vision tool can be used for the task of dimensional measurement of parts. The project costs over a period of two years are 1125 kECU, including 750 kECU funding from the CEC.

Objectives

This project develops a vision system that finds and measures the location of 3D structures with respect to a CAD-model. The integration of the CAD-model with visual measurement and the direct feedback of measurement results are key aspects. The objective is to render visual processing robust to deviations in parts and environmental conditions. To achieve this goal, a technique is developed that integrates different image cues to obtain confidence in the measurement result.

Approach

Reliability is tackled by developing a theory of robust visual recognition that integrates redundant low-level image cues and sparse high-level object knowledge. Image cues and object knowledge are exploited and integrated both at a local and a global level. For the extraction of basic visual cues, independent and complementary modules are envisaged. The modularity of the toolbox is the basis for integrating the acquisition of visual information with tools of the control and engineering process.
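
To make the integration idea concrete, the sketch below combines independent cue scores into a single confidence by a weighted vote. The cue names, weights and the linear combination are illustrative assumptions; the project's actual theory of integration may differ.

```python
# Minimal sketch of integrating redundant low-level image cues into a
# single confidence for a hypothesised model feature. The linear
# weighting and the cue names are illustrative assumptions.

def integrate_cues(cue_scores, weights):
    """Confidence-weighted combination of independent cue scores in [0, 1]."""
    total_weight = sum(weights[name] for name in cue_scores)
    return sum(weights[name] * score for name, score in cue_scores.items()) / total_weight

# Scores from independent, complementary cue modules for one edge
# predicted by the CAD-model:
cue_scores = {"edge_gradient": 0.8, "texture": 0.4, "region_colour": 0.6}
weights    = {"edge_gradient": 2.0, "texture": 1.0, "region_colour": 1.0}

confidence = integrate_cues(cue_scores, weights)
print(f"edge confidence: {confidence:.2f}")  # 0.65
```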

Demonstration

The project focuses on using the vision system for guiding a robotic vehicle to enable it to navigate and position itself in order to deliver work packages for inspection, welding and other tasks on the structure/body of a large vessel during production. The final demonstration will see the walking robot enter and climb the vessel structure.

Exploitation/Results

The ROBVISION project will achieve the following results:

  • a tool to measure 3D object position and orientation with the aid of a CAD-model,
  • a toolbox of modules for cue extraction from images and models and a theory to integrate these cues to obtain robustness and reliability,
  • a theory of integrating object knowledge from CAD-models for cue extraction to increase the reliability of cue detection, and therefore of object detection, and
  • the integrated vision system capable of providing adequate information to guide an advanced robotic vehicle through a complex structure.

The potential uses for such a tool are quite diverse. The principal capability is to use a CAD-model to find features in images and to return the position and orientation measured back into the CAD-model.

Consortium

  26.07.2000
FlexPaint

The objective of the FlexPaint project is to provide a system for automatic spray painting. The goal is to paint all arriving parts and to reach a batch size of one. The final solution will make it possible to paint any arriving part in the paint cell without the need for models or other data. The final product will provide a fully self-contained solution to the spray painting problem. The project is funded by the European Commission with 1.1 MEuro and will last until July 2002.

The technical problems are solved by the academic partners of the project, who have proposed, and already tested in prototypes, the following approach:

  • Sensing the geometry of the parts with one or more range sensors
  • Extracting the geometry for painting from the sensor data
  • Using the geometry to determine a painting trajectory
  • Generating the robot program for the trajectories with a planner that also avoids collisions

All these steps will be executed automatically, so that operator intervention is not needed and all arriving parts can be painted. The steps will be executed in real time. The cycle time is selected such that the sensing cell can be placed directly in front of the painting cell.
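
The four steps compose into a straight-through pipeline. The following is a minimal sketch under assumed function names; the real system's interfaces are not described in this text.

```python
# Minimal sketch of the FlexPaint processing chain as a straight
# pipeline from range data to an executable robot program. Every
# function name here is an assumption for illustration.

def acquire_range_scans(part):            # one or more range sensors
    ...

def extract_paint_geometry(scans):        # surfaces to be painted
    ...

def plan_paint_trajectory(geometry):      # tool path over the surfaces
    ...

def generate_robot_program(trajectory):   # collision-free robot code
    ...

def paint_part(part):
    """Run the full sensing-to-program chain for one arriving part."""
    scans = acquire_range_scans(part)
    geometry = extract_paint_geometry(scans)
    trajectory = plan_paint_trajectory(geometry)
    return generate_robot_program(trajectory)
```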

  12.06.2000
Tracking Evaluation

A Methodology for Performance Evaluation of Model-based Tracking*

Model-based object tracking has become an important means of performing robot navigation and visual servoing tasks. It is still difficult, however, to define robustness parameters that allow the direct comparison of tracking approaches and provide objective measures of the progress achieved with respect to robustness. In particular, extensive algorithm testing is an obstacle because of the difficulty of extracting ground truth. In this paper, we propose a methodology based on the evaluation of a video database which contains real-world image sequences with well-defined movements of modeled objects. It is suggested to set up and extend this database as a benchmark. Moreover, tests of the performance evaluation of the tracking system V4R (Vision for Robotics) are presented.
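
Given ground-truth poses for the modeled objects in each sequence, per-frame tracking error can be quantified in a standard way. The sketch below computes translation and rotation error between estimated and ground-truth poses; these are common measures, not necessarily the exact robustness parameters used with V4R.

```python
# Minimal sketch of per-frame pose error against ground truth:
# Euclidean translation error plus rotation error as the angle of the
# relative rotation. Standard measures, illustrative of the idea only.

import numpy as np

def translation_error(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth positions."""
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))

def rotation_error_deg(R_est, R_gt):
    """Angle (degrees) of the rotation taking the estimate onto the truth."""
    R_rel = np.asarray(R_est).T @ np.asarray(R_gt)
    cos_angle = (np.trace(R_rel) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Example frame: 2 cm position error, 5 degree rotation about z.
t_err = translation_error([0.10, 0.00, 0.50], [0.10, 0.02, 0.50])
angle = np.radians(5.0)
R_est = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0,            0.0,           1.0]])
print(f"{t_err:.3f} m, {rotation_error_deg(R_est, np.eye(3)):.1f} deg")
```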

Video database of real-world image sequences

Sequence description | Sequence zipped

 Gray cube moving backwards left | gray_cube1.zip

 Gray cube moving backwards left | gray_cube2.zip

 Color cube moving backwards left | color_cube3.zip

 Color cube moving backwards left | color_cube4.zip

 Color cube moving towards right | color_cube5.zip

 Color cube moving towards right | color_cube6.zip

 Magazine box moving backwards left | magazine_box7.zip

 Magazine box moving backwards left | magazine_box8.zip

 Magazine box moving towards right | magazine_box9.zip

 Magazine box moving towards right | magazine_box10.zip

 Toy copter moving backwards left | toy_copter11.zip

 Toy copter moving backwards right | toy_copter12.zip

 Toy copter moving backwards right | toy_copter13.zip

* This work has been supported by the EU-Project ActIPret under grant IST-2001-32184.

  12.06.2000
TOS

Trainings-Optimierungs-System (Training Optimization System)

  • Ball tracking
  • Stereo image processing
  • Statistical evaluation

Description

The Trainings-Optimierungs-System is suitable for:

  • Automatic capture of the ball's flight path with a PC-controlled two-camera system
  • Determination of the ball position to within 5 cm
  • Determination of the shot speed to within ±1% of the ball velocity
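
To illustrate how a calibrated two-camera system yields a 3D ball position, the sketch below triangulates a point by linear least squares (DLT). The projection matrices are illustrative assumptions, not the TOS calibration.

```python
# Minimal sketch of two-camera (stereo) triangulation of a ball
# position via linear least squares (DLT). The camera matrices
# below are illustrative assumptions.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover the 3D point whose projections are x1 (camera 1) and x2 (camera 2)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)         # null space of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]                 # dehomogenise

# Two assumed calibrated cameras one metre apart, looking along z:
f = 800.0  # focal length in pixels
P1 = np.array([[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]], dtype=float)
P2 = np.array([[f, 0, 0, -f], [0, f, 0, 0], [0, 0, 1, 0]], dtype=float)

ball = np.array([0.3, 0.2, 5.0, 1.0])   # true 3D ball position (homogeneous)
x1 = (P1 @ ball)[:2] / (P1 @ ball)[2]   # pixel coordinates in camera 1
x2 = (P2 @ ball)[:2] / (P2 @ ball)[2]   # pixel coordinates in camera 2
print(triangulate(P1, P2, x1, x2))      # ~ [0.3, 0.2, 5.0]
```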