STRANDS aims to enable a robot to achieve robust and intelligent behaviour in human environments through adaptation to, and the exploitation of, long-term experience. Our approach is based on understanding 3D space and how it changes over time, from milliseconds to months.
We will develop novel approaches to extract quantitative and qualitative spatio-temporal structure from sensor data gathered during months of autonomous operation. Extracted structure will include recurring geometric primitives, objects, people, and models of activity. We will also develop control mechanisms that exploit these structures to yield adaptive behaviour in highly demanding, real-world security and care scenarios.
The spatio-temporal dynamics presented by such scenarios (e.g. humans moving, furniture changing position, objects (re-)appearing) are largely treated as anomalous readings by state-of-the-art robots. Errors introduced by these readings accumulate over the lifetime of such systems, preventing many of them from running for more than a few hours. By autonomously modelling spatio-temporal dynamics, our robots will be able to run for significantly longer than current systems (at least 120 days by the end of the project). Long runtimes provide previously unattainable opportunities for a robot to learn about its world. Our systems will seize these opportunities, advancing long-term mapping, life-long learning about objects, person tracking, human activity recognition, and self-motivated behaviour generation.
We will integrate our advances into complete cognitive systems to be deployed and evaluated at two end-user sites. The tasks these systems will perform are impossible without long-term adaptation to spatio-temporal dynamics, yet they are tasks demanded by early adopters of cognitive robots. We will measure our progress by benchmarking these systems against detailed user requirements and a range of objective criteria including measures of system runtime and autonomous behaviour.
FP7 no. 600623