Proceedings of a conference held in Manchester, 5 April 2001. U. Nehmzow and C. Melhuish (eds.)
From the study of biological acoustic sensorimotor systems it can be seen that the Doppler shift is a rich source of information, one that is not exploited by commercial ultrasonic range sensors such as the Polaroid. In this work we present a simple collision detection and convoy controller for RoBat, a mobile robot equipped with a biomimetic sonarhead. The controller is based on the presence or absence of Doppler shifts in the echoes received while navigating in real-life scenarios. Preliminary results using another robot moving along RoBat's path show that exploiting the physics of the robot and the environment, before moving to higher levels of abstraction, yields a simple and efficient way to detect collisions and achieve smooth convoy navigation.
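The physics being exploited is simple: for an emitted carrier $f_0$ and a reflector closing at radial speed $v \ll c$, the two-way Doppler shift of the echo is approximately $\Delta f \approx 2 v f_0 / c$. Purely as an illustrative sketch of shift-based collision detection (not RoBat's actual signal processing; all parameter values below are hypothetical):

\begin{verbatim}
import numpy as np

C_AIR = 343.0      # speed of sound in air (m/s)
F_EMIT = 40_000.0  # hypothetical emitted carrier frequency (Hz)
FS = 250_000.0     # hypothetical receiver sampling rate (Hz)

def doppler_shift(echo, f_emit=F_EMIT, fs=FS):
    """Offset between the spectral peak of the received echo
    and the emitted carrier."""
    spectrum = np.abs(np.fft.rfft(echo * np.hanning(len(echo))))
    freqs = np.fft.rfftfreq(len(echo), d=1.0 / fs)
    return freqs[np.argmax(spectrum)] - f_emit

def closing_speed(shift, f_emit=F_EMIT, c=C_AIR):
    """Radial speed implied by a two-way shift: shift ~ 2*v*f0/c."""
    return shift * c / (2.0 * f_emit)

def collision_warning(echo, speed_threshold=0.1):
    """Flag an approaching obstacle (positive shift) whose implied
    closing speed exceeds the threshold."""
    return closing_speed(doppler_shift(echo)) > speed_threshold
\end{verbatim}

The sign information also suits convoying: a leader followed at matched speed produces a near-zero shift, so a controller can in principle servo on driving the measured shift towards zero.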
This work examines the ability of a biologically inspired novelty detection model to learn and detect changes in the environment of a mobile robot. The model, inspired by recent neurological findings of novelty neurons in the perirhinal cortices of monkeys, is based on calculating the energy of a Hopfield network. Experiments examine how different two stimuli must be before the model recognises one as novel, and the ability of the model to learn its environment on-line.
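A minimal sketch of the energy computation such a model relies on (network size, training set and threshold here are illustrative assumptions, not the paper's):

\begin{verbatim}
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns,
    one pattern per row."""
    n = patterns.shape[1]
    w = (patterns.T @ patterns) / n
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

def energy(w, s):
    """Standard Hopfield energy E = -1/2 * s^T W s (biases omitted)."""
    return -0.5 * s @ w @ s

def is_novel(w, s, threshold):
    """Stored (familiar) patterns lie in low-energy minima, so a
    stimulus whose energy exceeds the threshold is flagged novel."""
    return energy(w, s) > threshold
\end{verbatim}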
A major concern when building an intelligent robot is: ``How can it develop increasingly intelligent behaviours?'' This problem is widely recognised, and current research with the ARBIB autonomous robot also addresses it. In this paper, we describe how ARBIB can scale in complexity in two directions. First, by allowing its neural simulator, `HiNoon', to take advantage of distributed computer hardware, ARBIB's nervous system can attain a high degree of complexity, increasing its sensory-motor capabilities. Second, through evolution based on ideas from genetic programming, HiNoon is free to develop nervous system architectures whose complexity is no longer governed by initial human design and subsequent intervention. Hence, evolved nervous systems are supported by a simulator architecture that expands to take advantage of additional computing hardware when needed.
There are applications for autonomous mobile robots in which continuous,
unsupervised operation over long periods of time is desirable.
Examples are surveillance, monitoring and cleaning tasks. Since
real-world environments are typically changeable, and since a
modification of such environments to ease robot applications is costly,
control systems requiring pre-installed knowledge are disadvantageous.
Rather, control systems with the ability to learn are preferable. {\it
Continuous} operation over long periods of time poses particular
learning and operation problems, which are addressed in this paper.
We present a learning procedure of staged competence
acquisition, in which a complex functionality is decomposed into simpler
competences that are acquired in stages, at appropriate times during robot
operation. To control this staging, each competence has a unique
sensitisation function based upon the operational time of the robot.
In our experiments, the staged-learning procedure was applied to a
self-recharging robot which was required to wander and recharge when its
batteries were low. Experimental results indicate that the
proposed approach is suitable for controlling robots operating
continuously over long periods of time.
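The abstract does not give the form of the sensitisation functions; purely as an illustration of time-based staging, one could gate learning with bell-shaped profiles over operational time (the Gaussian form, competence names and time scales below are all assumptions):

\begin{verbatim}
import numpy as np

def sensitisation(t, onset, width):
    """Hypothetical profile over operational time t: plasticity
    peaks around `onset` and fades over `width`."""
    return np.exp(-((t - onset) / width) ** 2)

# Hypothetical schedule: earlier competences become sensitive first.
SCHEDULE = {
    "wander":   dict(onset=10.0, width=10.0),  # minutes of operation
    "recharge": dict(onset=60.0, width=20.0),
}

def active_competence(t):
    """Train whichever competence is currently most sensitised."""
    return max(SCHEDULE, key=lambda c: sensitisation(t, **SCHEDULE[c]))
\end{verbatim}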
This paper proposes a mechanism for imitating hand-object interactions such as grasps and manipulations. It is jointly based on neurophysiology and robotics. The mechanism aims to reproduce both the goal of the task and the task itself with a reasonable degree of accuracy. In addition, it addresses both the problem of how to imitate and the problem of when to imitate. The test platform consists of two simulated robots and an object that they interact with. The results presented here show successful imitation of familiar actions and poor imitation of unfamiliar interactions. It is intended that this mechanism be used to control the movements of real robots imitating other robots or humans.
Novelty detection systems detect inputs that do not conform
to an acquired model of normality. In general this is done
by training a neural network on a training set that is known
not to contain any examples of classes that should be considered
novel. Then, inputs to the filter are compared to the model,
and those that do not match are rejected. Typically, such systems
are used when there are many examples of one class but relatively
few of another, important class. This is true, for example, in
many health applications where there are many more examples of
healthy test results than unhealthy results.
This paper describes a novelty filter that can operate online,
so that the filter can assess the novelty of perceptions as they appear
with respect to the model learnt so far, and then add the current
perception into the model. A new
neural network is described that adds new nodes into the map
space in an ordered manner, so as to maintain the topology
of the input space. This forms the basis of the novelty filter.
The resulting novelty filter is applied to a robot inspection task.
A mobile robot explores an environment consisting of 300\,m of corridor,
using its sonar sensors to perceive its surroundings.
The novelty filter created using the new network
is compared to the same filter based on the Self-Organising Map.
It is shown that the new network does not have difficulties adapting
to large environments.
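A rough sketch of such an online filter is given below; it is not the paper's network (in particular it makes no attempt to preserve the topology of the input space) and its thresholds are assumptions:

\begin{verbatim}
import numpy as np

class OnlineNoveltyFilter:
    """Perceptions far from every stored prototype are reported as
    novel and then added to the model."""

    def __init__(self, match_radius=0.3, learning_rate=0.05):
        self.nodes = []  # prototype vectors learnt so far
        self.match_radius = match_radius
        self.learning_rate = learning_rate

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            return True  # the first perception is always novel
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        best = int(np.argmin(dists))
        if dists[best] > self.match_radius:
            self.nodes.append(x.copy())  # grow: insert a new node
            return True
        # familiar: nudge the winning node towards the input
        self.nodes[best] += self.learning_rate * (x - self.nodes[best])
        return False
\end{verbatim}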
Robots are like humans in that they cannot and do not pay attention to everything all the time: they do not process all the information they perceive, and must decide what is worth their attention. Placing a robot in a social environment (with other robots, or with humans) can help it analyse perceptual information: modelling attention is very hard, and a social context simplifies the problem. The purpose of this paper is to examine the effect that interacting with a robot has on the characteristics of the data the robot is exposed to. We want to show that the perceptual data is better structured when the robot is in a social situation. We analyse the data using Principal Component Analysis, a multivariate data-analysis tool. We do not present here a computational attention algorithm for analysing perceptual data, although we mention the system we are currently developing, which is based on the Self-Organising Feature Map.
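One way to make `better structured' concrete in PCA terms, offered only as an illustration, is that more of the data's variance should concentrate in the first few principal components when the robot is in a social situation. The variance-explained profile of a perception log can be computed as:

\begin{verbatim}
import numpy as np

def variance_explained(data):
    """Fraction of total variance carried by each principal
    component of a (samples x features) perception matrix."""
    centred = data - data.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending order
    return eigvals / eigvals.sum()
\end{verbatim}

Comparing, say, the cumulative sum of the first three components between social and non-social recordings would then quantify the claimed difference in structure.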
The poster reports on preliminary simulation experiments conducted in the search for a control system design that will allow any number of autonomous dirigibles to manoeuvre into a pre-defined 3D formation. 2D line, square and cross formations have been produced; the results are detailed in the poster.
This poster briefly describes a project that uses principles from the human sciences to inform the design of interactive robots and to aid in the interpretation of actions observed by a simple vision system. The vision system tracks blobs representing the head, torso, hands, legs and feet of a child, and another blob representing a robot. Using these blobs to acquire information about position, orientation and velocity, we intend to create a robot capable of engaging in simple, sub-semantic interactions that are nonetheless interesting from a human-science perspective and may create a sense of engagement with the robot for interactants. Local rules will be used to generate actions for the robot and to create expectations about what is likely to be observed next, to aid in interpretation.
Evolutionary techniques are often used in the design of mobile robot controllers. When carried out on real robots, evolution is time-consuming, so it is often done in simulated worlds. However, the problems surrounding the use of simulators are well known, and much work has gone into improving and validating these simulations. This paper reports on work that tests a methodology of evolution in simulation described previously by Ram et al. (1994). The work is recapitulated on a different simulator for a different robot, and the results of the first set of experiments, from the first group of world types, are presented. Comparing the success of this project with the previous one indicates that the method can transfer successfully to a completely different robot.
This paper describes some aspects of recent and ongoing work in the area of Cognitive Robotics in the Department of Electrical and Electronic Engineering at Imperial College. Our approach to Cognitive Robotics has been to apply abductive reasoning procedures using the Event Calculus, an extension of First Order Predicate Calculus (FOPC), to provide a unified view of several related mobile robotics tasks: sensor data assimilation, map-building, planning and navigation, and localisation within a model office environment. The first part of this paper introduces aspects of the Event Calculus, then describes how sensor events and the effects of actions are represented in the Event Calculus for a mobile robot. It then describes how abductive reasoning is applied to the important mobile robot tasks of sensor data assimilation, map-building, planning and localisation. We next describe how an Event Calculus based robot controller is interfaced to a Khepera, a real, if miniature, mobile robot. In the second part we widen the discussion by introducing a new visual Region Occlusion Calculus (ROC), and indicate how it may be used to formally describe a greater range of more complex sensory events and to generate a spatial segmentation map of the robot's environment, in which the robot may navigate and reason about its surroundings. Our approach to Cognitive Robotics depends on an explicit declarative representation. While this greatly facilitates reasoning about domain knowledge, it comes with an extra computational overhead. This is the basis of the `semantic knife-edge': maintaining a delicate balance between expressivity and efficient implementation.
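For readers unfamiliar with the Event Calculus, its core persistence axiom can be written, in one standard formulation, as
\[
\mathit{HoldsAt}(f,t) \leftarrow \mathit{Happens}(a,t_1) \wedge \mathit{Initiates}(a,f,t_1) \wedge t_1 < t \wedge \neg\mathit{Clipped}(t_1,f,t),
\]
where $\mathit{Clipped}(t_1,f,t_2)$ holds iff some event occurring between $t_1$ and $t_2$ terminates the fluent $f$. Abduction runs this machinery in reverse: given observed sensor events, the reasoner hypothesises the $\mathit{Happens}$ facts (for instance, the robot's motions or the presence of objects) that would explain them.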
Navigating a mobile robot instructed with a schematic map involves a variety of tasks. Besides route planning, it is especially challenging to align the map with the environment. Perceptual information about the shape of the passable space needs to be accumulated and transformed into a suitable representation that allows for both data-driven and model-driven recognition processes.
Successful shape matching techniques originating in computer vision serve as a starting point for deriving a new shape similarity measure, suitable both for matching shape information perceived from different viewpoints and for aligning perceived shapes with shapes represented in the schematic map. Unlike conventional scan matching approaches, shape matching produces perfect matches, so perception maps built from shape information need not be reduced to occupancy grids for navigation planning. This not only saves space, but is also crucial in providing object-oriented access to environmental data, as required by high-level applications such as map-based navigation.
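To indicate the flavour of such a measure (the turning-function signature below is a classical computer-vision device, not the measure derived in the paper, and its equal-vertex-count requirement is a simplification):

\begin{verbatim}
import numpy as np

def turning_function(poly):
    """Cumulative heading change along a closed polyline: a
    viewpoint-independent shape signature."""
    poly = np.asarray(poly, dtype=float)
    edges = np.diff(np.vstack([poly, poly[:1]]), axis=0)
    headings = np.arctan2(edges[:, 1], edges[:, 0])
    turns = np.diff(headings, append=headings[0])
    turns = (turns + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return np.cumsum(turns)

def shape_distance(a, b):
    """Crude similarity: L2 distance between turning functions.
    Assumes equal vertex counts and comparable starting vertices;
    a full matcher would optimise over rotation and starting point."""
    return np.linalg.norm(turning_function(a) - turning_function(b))
\end{verbatim}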