Behavioral experiments on fruit flies have shown that they are attracted to nearby objects and prefer front-to-back motion.
In this paper, a visual orientation model is implemented on the Eye-RIS vision system and tested on a roving platform.
Robotic experiments are used to collect statistical data on the system's behaviour: followed trajectories, dwelling
time, distribution of gaze direction, and others, in a setup closely resembling the biological experiments on the flies. The
statistical analysis was performed in different scenarios in which the robot faces different object distributions in the
arena. The acquired data were used to validate the proposed model through a comparison with the fruit-fly experiments.
To survive, animals have developed cognitive abilities that allow them to build an abstract representation of the environment.
This internal representation (IR) may contain a huge amount of information concerning the evolution of, and interactions between,
the animal and its surroundings. Temporal information is needed in IRs of dynamic environments and is one of the
most subtle points of their implementation, since the information needed to generate the IR may increase
dramatically. Some recent studies have proposed compacting the spatiotemporal information into space alone,
leading to a stable structure suitable to serve as the basis for complex cognitive processes, in what has been called a Compact
Internal Representation (CIR). The Compact Internal Representation is especially suited to implementation in
autonomous robots, as it provides global strategies for interaction with real environments. This paper describes an
FPGA implementation of a Causal Neural Network, based on a modified FitzHugh-Nagumo neuron, that generates a
Compact Internal Representation of dynamic environments for roving robots developed under the framework of the SPARK
and SPARK II European projects, allowing them to avoid both dynamic and static obstacles.
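The abstract does not reproduce the neuron equations. As a point of reference, the following is a minimal sketch of the classical (unmodified) FitzHugh-Nagumo dynamics that the Causal Neural Network builds on; the parameter values, Euler integration scheme, and initial conditions are illustrative assumptions, not the paper's modified neuron or FPGA design:

```python
def fitzhugh_nagumo(v, w, I, a=0.7, b=0.8, eps=0.08):
    """One derivative evaluation of the classical FitzHugh-Nagumo neuron.

    v: membrane potential, w: recovery variable, I: external input.
    The modified neuron used in the paper adds couplings for the
    Causal Neural Network; those details are not shown here.
    """
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    return dv, dw

def simulate(I=0.5, dt=0.1, steps=2000):
    """Forward-Euler integration; with I=0.5 the neuron oscillates."""
    v, w = -1.0, -0.5
    trace = []
    for _ in range(steps):
        dv, dw = fitzhugh_nagumo(v, w, I)
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace
```

With this classical parameterization the resting state is unstable for the chosen input, so the membrane potential settles onto a spiking limit cycle.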
To survive, animals have developed cognitive abilities that allow them to build an abstract
representation of the environment. This Internal Representation (IR) may contain a huge
amount of information concerning the evolution of, and interactions between, the elements in their
surroundings. The complexity of this information should be high enough to ensure maximum
fidelity in representing those aspects of the environment critical to the agent, yet not so
high as to prevent the management of the IR by neural processes, i.e. storing, retrieving,
etc. One of the most subtle points is the inclusion of temporal information, which is necessary in IRs of
dynamic environments. This temporal information essentially introduces the state of the
environment at each moment, so the information required to generate the IR can
increase dramatically. How this temporal information is included in biological neural
processes remains an open question. In this work we propose a new IR, the Compact Internal
Representation (CIR), based on the compaction of spatiotemporal information into space alone,
leading to a stable structure (with no temporal dimension) suitable to serve as the basis for complex
cognitive processes such as memory or learning. The Compact Internal Representation is especially
appropriate for implementation in autonomous robots because it provides global strategies for
interaction with real environments (roving robots, manipulators, etc.). This paper presents
the mathematical basis of a hardware implementation of CIR in the context of navigation in dynamic
environments. The aim of this implementation is to obtain collision-free trajectories
under the requirements of optimal performance, by means of a fast and accurate process.
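As a loose illustration of the compaction idea (not the paper's neural implementation), one can paint every cell a moving obstacle is predicted to occupy over a planning horizon into a single static grid, and then plan a collision-free path against that static map. The grid scheme, function names, and constant-velocity prediction below are invented for illustration:

```python
from collections import deque

def compact_map(size, static_obs, moving_obs, horizon):
    """Compact a dynamic scene into one static occupancy grid.

    moving_obs: list of (x, y, vx, vy) with integer cell velocities.
    Every cell an obstacle is predicted to occupy within the horizon
    is treated as permanently blocked -- a crude stand-in for the
    paper's neural compaction of spatiotemporal information.
    """
    blocked = set(static_obs)
    for x, y, vx, vy in moving_obs:
        for t in range(horizon + 1):
            px, py = x + vx * t, y + vy * t
            if 0 <= px < size and 0 <= py < size:
                blocked.add((px, py))
    return blocked

def plan(size, blocked, start, goal):
    """Breadth-first search on the compacted static grid."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None
```

Because the compacted map has no temporal dimension, any standard static planner (here plain BFS) yields a trajectory that avoids both static obstacles and the predicted sweep of moving ones.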
This paper describes how AnaFocus' Eye-RIS family of vision systems has been successfully embedded
within the roving robots developed under the framework of the SPARK and SPARK II European projects to solve the
action-oriented perception problem in real time. The Eye-RIS family is a set of vision systems
conceived for single-chip integration using CMOS technologies. The Eye-RIS systems employ a bio-inspired
architecture in which image acquisition and processing are truly intermingled and the processing itself is carried out in two
steps. In the first step, processing is fully parallel, owing to dedicated circuit structures
integrated close to the sensors; these structures handle basically analog information. In the second step, processing is
carried out on digitally coded data by means of digital processors. SPARK and SPARK II, in turn,
are European research projects whose goal is to develop completely new sensing-perceiving-moving artefacts inspired by
the basic principles of living systems and based on the concept of "self-organization". As a result, its low power
consumption together with its huge image-processing capabilities makes the Eye-RIS vision system a suitable choice to
be embedded within the roving robots developed under the framework of the SPARK projects and to implement in real time
the resulting mathematical models for action-oriented perception.
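The two-step scheme can be caricatured in software, with a vectorized NumPy operation standing in for the per-pixel analog parallelism of the first step. This is a rough analogy only; the threshold and the example decision rule are invented for illustration, not taken from the Eye-RIS design:

```python
import numpy as np

def stage1_parallel_analog(image, threshold=128):
    """Stage 1: emulate the fully parallel, near-sensor processing.

    On the Eye-RIS chip this is done by analog circuitry next to every
    pixel; here a vectorized NumPy comparison stands in for that
    per-pixel parallelism, producing a binary image.
    """
    return (image > threshold).astype(np.uint8)

def stage2_digital(binary):
    """Stage 2: digital processing on the digitally coded binary data.

    An invented example decision: report whether any object covers
    more than 5% of the field of view.
    """
    return bool(binary.mean() > 0.05)

# Usage: a synthetic 64x64 frame with one bright square
frame = np.zeros((64, 64), dtype=np.uint8)
frame[10:30, 10:30] = 200
print(stage2_digital(stage1_parallel_analog(frame)))  # bright square -> True
```

The point of the split is that the expensive per-pixel work happens before any data leaves the sensor plane, so the digital stage only sees a drastically reduced representation.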
Ángel Rodríguez-Vázquez, Rafael Domínguez-Castro, Francisco Jiménez-Garrido, Sergio Morillas, Juan Listán, Luis Alba, Cayetana Utrera, Rafael Romay, Fernando Medeiro
KEYWORDS: Image processing, Analog electronics, Sensors, Control systems, Binary data, Image sensors, Interfaces, Biomimetics, High dynamic range imaging, Optical sensors
We present architectures, CMOS circuits and CMOS chips that process image flows at very high speed. This is achieved
by exploiting bio-inspiration and performing processing tasks in parallel, concurrently with image
acquisition. A vision system is presented that makes decisions within the sub-millisecond range, making it well suited for
defense and security applications requiring segmentation and tracking of rapidly moving objects.
In order to solve the navigation problem of a mobile robot in an unstructured environment, a versatile sensory
system and efficient locomotion control algorithms are necessary. In this paper, an innovative sensory system for
action-oriented perception applied to a legged robot is presented. An important problem we address is how to
utilize a large variety and number of sensors, while having systems that can operate in real time. Our solution is
to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce
the required data exchange with the motor control layer. In particular, as concerns the visual system, we use the
Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor
chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific
sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load,
distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore,
Field Programmable Gate Array (FPGA) based hardware was used to manage the multi-sensory acquisition
and processing. This choice was made because FPGAs permit the implementation of customized digital logic
blocks that operate in parallel, allowing the sensors to be driven simultaneously. With this approach, the
proposed multi-sensory architecture can achieve real-time capabilities.
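As a software analogy for the FPGA's parallel acquisition blocks (the real design uses dedicated digital logic, not threads), each sensor can be polled concurrently and merged into one shared snapshot. The sensor names and stub read functions below are illustrative assumptions:

```python
import threading

def make_reader(name, read_fn, snapshot, lock):
    """One acquisition block: polls its sensor and writes the latest
    value into a shared snapshot, independently of the other blocks."""
    def run():
        value = read_fn()  # stand-in for a hardware register read
        with lock:
            snapshot[name] = value
    return threading.Thread(target=run)

def acquire(sensors):
    """Fire all acquisition blocks concurrently and wait for them."""
    snapshot, lock = {}, threading.Lock()
    threads = [make_reader(n, f, snapshot, lock) for n, f in sensors.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return snapshot

# Illustrative sensors: touch, heading, distance
sensors = {
    "touch": lambda: 0,
    "heading": lambda: 87.5,
    "distance": lambda: 0.42,
}
print(acquire(sensors))
```

In the FPGA each block is a hardware unit that truly runs in parallel with the others; the thread-based sketch only mimics that concurrency, but it shows why independent per-sensor blocks keep one slow sensor from stalling the rest of the acquisition.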