Thursday, April 12, 2012

Symposium on Self-* Systems – Biological Foundations and Technological Applications

The Symposium on Self-* Systems – Biological Foundations and Technological Applications was part of the European Meeting on Cybernetics and Systems Research (EMCSR 2012), which took place from April 10 to 13 in Vienna, Austria. It was organized by Vesna Sesum-Cavic, Carlos Gershenson and Wilfried Elmenreich.

Tomonori Hasegawa presented insights on the self-referential logic of self-reproduction originally formulated by John von Neumann and introduced an implementation of this abstract architecture embedded within the Avida world [1]. In the experiments, a sophisticated von Neumann self-referential machine, introduced as a seeding mechanism, could degrade to a mere copy machine that has dropped the self-referential part. Thus, with this particular implementation, in this particular world, the von Neumann architecture proved to be evolutionarily unstable and degenerated, surprisingly easily, into a primitive, non-self-referential “copying” or “template replication” mode of reproduction.
Questions arose as to whether a von Neumann self-referential machine could evolve from a simple self-copying machine in a different set-up. The von Neumann model has the advantage of enabling new and more diverse ways to change the system upon mutation - but what could be the evolutionary pressure for a von Neumann architecture to evolve in the first place?
Current experiments did not include sexual reproduction - could that facilitate the evolution of more complex architectures?
More from the group can be found at http://evosym.rince.ie/
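The evolutionary instability reported in the talk can be illustrated with a toy replicator-dynamics model. This is my own sketch, not the Avida implementation: it assumes the von Neumann machine pays a 10% replication overhead for carrying its self-referential description, that 1% of its offspring mutate into plain copiers, and that the reverse step never happens by chance.

```python
# Toy replicator dynamics: x is the fraction of plain copiers in an
# (idealized, infinite) population seeded entirely with vN machines.
FIT_COPIER = 1.0  # relative fitness of the template-replicating copier
FIT_VN = 0.9      # assumed replication overhead of the vN machine
MU = 0.01         # per-replication chance that a vN offspring degrades

x = 0.0           # the world starts with von Neumann machines only
for _ in range(200):
    mean_fitness = x * FIT_COPIER + (1 - x) * FIT_VN
    # copier offspring stem from copiers plus degraded vN offspring
    x = (x * FIT_COPIER + (1 - x) * FIT_VN * MU) / mean_fitness

print(f"copier fraction after 200 generations: {x:.3f}")
```

Even with this small mutation rate, the copiers take over completely, mirroring the degeneration observed in the experiments.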

Modern software systems suffer from increasing complexity. Large software systems are composed of many interlinked components, and the internal states of these systems contain a huge amount of information. The main obstacles lie in a lack of reliability and robustness, which leads to poor performance. For complex, intractable problems, a plain random search (Monte-Carlo method) does not perform well; new and more advanced approaches are necessary to deal with this complexity.
Milan Tuba proposed a guided Monte-Carlo search method based on a hybrid of reinforcement learning and a genetic algorithm [2]. As a proof of concept, the approach is applied to the problem of information retrieval on the internet.
The discussion addressed how the algorithm performs in comparison to commercial search providers such as Google.
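As a rough illustration of how a reinforcement signal can guide an otherwise random search, here is my sketch using population-based incremental learning on a toy objective; the actual hybrid algorithm in [2] may differ substantially. Instead of sampling candidates uniformly, a probability vector is nudged toward the best samples found so far.

```python
import random

random.seed(0)

N_BITS = 20        # bits per candidate solution (toy problem size)
LEARN_RATE = 0.1   # how strongly the best sample pulls the distribution
SAMPLES = 30       # Monte-Carlo samples drawn per iteration

def fitness(bits):
    # toy objective ("OneMax"): simply count the 1-bits
    return sum(bits)

probs = [0.5] * N_BITS  # start as an unbiased, purely random sampler
best_ever = 0
for _ in range(100):
    population = [[1 if random.random() < p else 0 for p in probs]
                  for _ in range(SAMPLES)]
    best = max(population, key=fitness)
    best_ever = max(best_ever, fitness(best))
    # reinforcement step: nudge the sampling distribution toward the best
    probs = [(1 - LEARN_RATE) * p + LEARN_RATE * b
             for p, b in zip(probs, best)]

print("best solution found:", best_ever, "of", N_BITS)
```

The guided sampler concentrates on promising regions over time, whereas an unguided Monte-Carlo search would keep drawing from the uniform distribution.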

Sander van Splunter presented ideas on coordination and self-organization in crisis management [3]. The main idea is to move from a task-oriented top-down approach towards an emergence-oriented bottom-up approach while keeping a hierarchical structure. However, higher-level entities control lower levels by policy, not directly. Policy defines the interaction, prioritization, and coordination of entities.
An entity works as an independent agent following the given policies. An important feature could be the ability to predict failures in a given subsystem.
According to Peter Csermely's EMCSR'12 keynote lecture, we have to watch for signs such as slower recovery, increased self-similarity, and increased variance of fluctuation patterns in order to predict a system's transition into a state where it can no longer handle its environment with its current policies.
In the crisis management approach proposed by van Splunter and van Veelen, a subsystem is supposed to raise a warning when local adaptation fails to handle the problem, e.g. if a small team of firefighters cannot confine a fire in their assigned area.
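The warning signs from Csermely's keynote can be monitored quite directly. The following sketch flags "critical slowing down" in a synthetic subsystem signal; the signal model, window size, and warning criterion are my illustrative assumptions, not taken from either talk.

```python
import random
import statistics

random.seed(42)

def lag1_autocorr(xs):
    # lag-1 autocorrelation: high values indicate slow recovery
    mean = statistics.fmean(xs)
    num = sum((a - mean) * (b - mean) for a, b in zip(xs, xs[1:]))
    den = sum((a - mean) ** 2 for a in xs)
    return num / den if den else 0.0

# synthetic subsystem signal: an AR(1) process whose recovery rate degrades
# over time (phi drifting toward 1 means ever slower recovery from shocks)
signal, x = [], 0.0
for t in range(400):
    phi = 0.2 + 0.7 * t / 400
    x = phi * x + random.gauss(0, 1)
    signal.append(x)

WINDOW = 100
early, late = signal[:WINDOW], signal[-WINDOW:]
# warning criterion: both variance and lag-1 autocorrelation have risen
warning = (statistics.pvariance(late) > statistics.pvariance(early)
           and lag1_autocorr(late) > lag1_autocorr(early))
print("raise warning:", warning)
```

In a real deployment the sliding windows would run continuously over live sensor data, but the comparison of an early against a late window already shows the rising-variance, rising-autocorrelation signature.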

Carlos Gershenson told us about "Living in living cities" [4]. One of the challenges of the 21st century is preventing problems in the ultra-fast-growing cities all over the world.
These problems are non-stationary (they keep changing), so traditional algorithms do not work well. Examples of such challenges are urban mobility, logistics, telecommunications, governance, safety, sustainability, society, and culture. A solution is to exploit properties of living systems, which are adaptive, learning, evolving, robust, autonomous, self-repairing, and self-reproducing, and to understand cities metaphorically as organisms.
Engineering methods cannot find a single solution to these changing problems. Instead, it is necessary to constantly adapt the solution, i.e. to have a self-organizing solution to a complex problem. Will cities become the "killer app" of cybernetics and systems research?
Discussion arose around the following issues:
But how do you get the officials and responsible parties of a city to cooperate? There is a need for a strong motivation to overcome the inertia of the system.
Could cities instead be built from scratch? No, because of legacy issues - it is not possible to just tear down a large city and build it anew every few decades.
Can we prove that the system is robust against malicious behavior? This is difficult, since the behavior of such a complex system cannot easily be predicted for all sets of possible inputs.
Are explicit measures necessary, or would people themselves care for the necessary adaptations? Without explicit measures, the living standard of the people would not increase.

Anita Sobe presented ideas on self-organizing content sharing at social events, using such interesting examples as the wedding of Kate and William of Windsor and Barack Obama's inauguration [5].
The presented approach allows people to instantly share their self-generated content, like photos or short videos, at such events. Existing platforms like Flickr or YouTube do not provide this liveness, since most content is uploaded with a delay of a few days. The proposed approach organizes the content using an artificial hormone system. The hormone distribution is sensitive to the quality of a network connection and therefore reflects the quality of service of a network path. The system is based solely on local decisions for forwarding, replicating, and moving content. Over time, the content distribution in the network is optimized to support short response times for requesters. Simulations show that the system competes well with other epidemic information dissemination methods such as Gossip.
The follow-up discussion brought up interesting questions:
How is overhead reflected in the simulation? Currently, overhead is implicitly modeled as part of the transmission cost, which is valid for a constant per-packet handling overhead.
Furthermore, the relation of the hormone-based approach to ant colony optimization (ACO) algorithms was discussed. We identified a major difference to ACO: there, typically either the network or the content is assumed to be static. However, ACO could be extended to handle the described scenario, which might inspire future work.
What is the effect if tags are (more) complex? The system was started with a predefined tag hierarchy, which can be extended to a more complex one. However, with more complex tags there is no guarantee of finding content.

In the last talk, Wilfried Elmenreich presented work on evolving a distributed control algorithm for flying UAVs addressing a coverage problem [6]. The problem of having multiple mobile agents cover (or, as we say in robotics, "sweep") an area is relevant for many applications like lawn mowing, snow removal, floor cleaning, environmental monitoring, communication assistance, and several military and security applications.
The work by Istvan Fehervari, Wilfried Elmenreich, and Evsen Yanmaz describes a simple grid-based abstraction of the problem, which was used to test two evolved and one handcrafted control algorithm against reference algorithms like random walk and random direction.
A short summary of Wilfried's talk and the slides are available here.
The talk triggered an interesting discussion involving the comparison with the “belief-based” algorithm. A further question triggering future work concerns the influence of the layout and number of sensors. What will happen if the environment changes? Since the algorithm keeps no memory of a map, a changing environment does not affect the result.
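A toy version of the grid-based abstraction makes the reference baselines concrete. The grid size, team size, and step count below are my assumptions, not the paper's parameters; the sketch measures what fraction of a grid a random-walk team covers, the simplest of the reference movement models mentioned above.

```python
import random

random.seed(7)

SIZE, AGENTS, STEPS = 20, 5, 500
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def coverage_random_walk():
    agents = [(SIZE // 2, SIZE // 2)] * AGENTS  # all start in the centre
    visited = set(agents)
    for _ in range(STEPS):
        new_agents = []
        for x, y in agents:
            dx, dy = random.choice(MOVES)
            # clamp movement to the border of the area
            nx = min(max(x + dx, 0), SIZE - 1)
            ny = min(max(y + dy, 0), SIZE - 1)
            new_agents.append((nx, ny))
            visited.add((nx, ny))
        agents = new_agents
    return len(visited) / (SIZE * SIZE)

cov = coverage_random_walk()
print(f"random-walk coverage: {cov:.0%}")
```

An evolved or handcrafted controller would replace the `random.choice(MOVES)` line with a decision based on the agent's local sensor readings; comparing the resulting coverage fractions is exactly the evaluation set-up the abstraction enables.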

References
  1. B. McMullin and T. Hasegawa. Von Neumann Redux: Revisiting the Self-referential Logic of Machine Reproduction Using the Avida World. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  2. V. Sesum-Cavic, M. Tuba, and S. Rankow. The Influence of Self-Organization on Reducing Complexity in Information Retrieval. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  3. S. van Splunter and B. van Veelen. Coordination and Self-Organisation in Crisis Management. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  4. C. Gershenson. Living in Living Cities. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  5. A. Sobe, W. Elmenreich, and M. del Fabro. Self-organizing content sharing at social events. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  6. I. Fehérvári, W. Elmenreich, and E. Yanmaz. Evolving a team of self-organizing UAVs to address spatial coverage problems. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.

Wednesday, April 11, 2012

Evolving a Team of Self-organizing UAVs to Address Spatial Coverage Problems

Typical small UAV (AscTec Pelican)
Coordinating a team of agents, which could be a search team, cleaning robots, or flying drones for surveillance or environmental monitoring, is a highly relevant problem. If the environment is unknown or subject to change, an a priori planning algorithm becomes difficult to apply. Therefore, we looked into decentralized self-organizing algorithms to do the job.
In joint work by István Fehérvári, Evsen Yanmaz, and Wilfried Elmenreich (me), we evolve controllers for a team of unmanned aerial vehicles (UAVs) with the task of observing or covering a partially obstructed area.
The agents are limited in their sensory inputs to local observations of the environment, without the ability to determine their own absolute position or the positions of others. Each agent is equipped with a number of sensors that can detect the presence of other agents, obstacles, and the border of the area.
Simulation and evaluation model
The controller of an agent is implemented as an artificial neural network. The fitness of a given configuration is derived from the average spatial coverage over several simulation runs. The area coverage performance of the evolved controllers with different numbers of sensors is compared to reference movement models such as random walk, random direction, and an algorithm based on beliefs about the intentions of agents met during the simulation. Our results show that evolved controllers can create a self-organizing, cooperating team of agents that exploits the advantages provided by their sensors, outperforms naïve coverage algorithms, and matches the performance of a recent algorithm that uses additional information.
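To make the controller set-up concrete, here is a minimal sketch of an agent's neural-network controller. The sizes (8 sensor inputs, 6 hidden neurons, 3 movement outputs) are my illustrative assumptions; the actual network topology and evolution procedure in the paper may differ. The evolved "genome" is simply the flat list of connection weights, which is what a genetic algorithm mutates and recombines.

```python
import math
import random

random.seed(5)

N_IN, N_HID, N_OUT = 8, 6, 3  # assumed sensor/hidden/action counts

def random_genome():
    # a genome is the flat weight vector of a two-layer feedforward net
    n_weights = N_IN * N_HID + N_HID * N_OUT
    return [random.uniform(-1, 1) for _ in range(n_weights)]

def act(genome, sensors):
    # decode the genome into layer weights and run a forward pass
    w1 = genome[:N_IN * N_HID]
    w2 = genome[N_IN * N_HID:]
    hidden = [math.tanh(sum(sensors[i] * w1[i * N_HID + h]
                            for i in range(N_IN)))
              for h in range(N_HID)]
    out = [sum(hidden[h] * w2[h * N_OUT + o] for h in range(N_HID))
           for o in range(N_OUT)]
    return ["left", "straight", "right"][out.index(max(out))]

# sensors report nearby agents/obstacles/borders as values in [0, 1]
sensors = [0.0] * N_IN
sensors[2] = 1.0  # e.g. another agent detected by sensor 2
decision = act(random_genome(), sensors)
print(decision)
```

Evolution then scores each genome by the average spatial coverage its team achieves in simulation and selects the best-performing weight vectors for the next generation.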

The work was presented in a talk at the European Meeting on Cybernetics and Systems Research (EMCSR 2012) in Vienna, Austria. Slides are available via slideshare: