Monday, November 26, 2012

Advent Programming Contest

Welcome, Advent Programmers!
An Advent calendar is a special calendar used to count or celebrate the days in anticipation of Christmas. Advent calendars typically run from December 1 to December 24 and have a little window for each day, which you can open to find chocolate or some other small treat. But what better way to pass the time until Christmas, Hanukkah, Yule, Kwanzaa, Diwali, Boxing Day, etc., than an Advent calendar that gives you a programming problem every day?

The Advent Programming Contest, organized by the newly formed IEEE Student Branch Klagenfurt, will provide a new problem every day from December 1st to December 24th. You can submit solutions any day until the contest ends on December 26. You can choose C, C++, Java, Python, or Perl as your programming language. The programming tasks can be solved with short programs (typically less than 100 lines of code). Until a solution is correct, you can submit your program as often as you want (but please don't spam our server). The number of tries will not be a criterion for determining your score. The idea is to do it just for fun, but we will try to announce a winner after the contest is closed.

The event is open to everyone as long as our server can handle the load. There are separate categories for pupils, university students and others.

If you want to participate, please register at
This is an individual competition, not a team contest - be fair!
(Registration will still be possible after December 1st.)

See also: Sixth IEEE Xtreme Programming Contest: Bunnies in the Forest

Sunday, November 18, 2012

Open PhD/Postdoc Positions at Alpen-Adria-Universität Klagenfurt, Austria

It could be you!

PhD researcher on Complex Systems Engineering (full-time)

We are offering a position for a researcher within the MESON (Modeling and Engineering of Self-Organizing Networks) project. We are a highly motivated international team of researchers situated at the Lakeside Science & Technology Park/University of Klagenfurt, Austria. We offer excellent working conditions, a beautiful campus with a pleasant, intercultural work environment, and a highly competitive salary. Potential candidates should have a master's degree in computer science, computer engineering, mathematics, physics, or a related field, skills in creative problem solving and Java programming, a good command of English, and knowledge in at least one of the following subjects: complex systems, machine learning, multi-agent systems, smart grids. Further information about the MESON project can be found on the project webpage:

1 Postdoc and 1 PhD researcher on Smart Energy Systems (full-time)

We work on integrating energy generators and consumers into a smart electricity grid. The goal is to build up a microgrid lab and perform applied research with industrial partners. One PhD researcher and one postdoctoral researcher will be hired. We are a highly motivated international team of researchers situated at the Lakeside Science & Technology Park/University of Klagenfurt, Austria. We offer excellent working conditions, a beautiful campus with a pleasant, intercultural work environment, and a highly competitive salary. Potential candidates should have a master's degree (for the PhD position) or a doctoral degree (for the postdoc position) in computer science, computer engineering, mathematics, physics, or a related field, skills in creative problem solving, knowledge of energy systems, and a good command of English. Further information about the project can be found on:

Information for Applicants

Research will be conducted at the Institute of Networked and Embedded Systems under the supervision of Professor Wilfried Elmenreich. The working language is English. The institute cooperates with national and international partners from industry and academia and is part of the research cluster Lakeside Labs. Women are especially encouraged to apply. Please mail applications containing a letter of interest, curriculum vitae, copies of academic certificates and courses, list of publications, and contact details of two references in a single PDF file to by the deadline of January 15th, 2013.

Monday, November 5, 2012

The Labside Lakes

Road to Lakeside Park (Foto: K. Schweiger)
Klagenfurt got some bad weather. My workplace there is usually known as the Lakeside Labs at the Alpen-Adria-Universität Klagenfurt, where we do research on self-organizing networked systems. But the last rainfall left really large puddles, so now we have the Labside Lakes instead (sorry for the terrible pun). Our researchers are excellent swimmers, though, and in the worst case we will submit our papers as messages in a bottle to the next conference!
Pathway beside University (Foto: S. Mak)

Wednesday, October 31, 2012

Vampires vs. Werewolves

Tonight is Halloween! Typical Halloween activities include telling scary stories, so I am going to tell you a story about vampires and werewolves.
Once upon a time in a valley in Complexania, there lived werewolves and vampires. They could live off the magic field in the valley, as long as they did not grow too large. The valley was also magically rolled up into a torus surface, so have no fear, kids, the creatures cannot escape. Their size is genetically given, but when they reproduce, the target size might mutate by plus or minus 10 percent. Whenever one of these creatures gathered enough magic (which is easier when they are small), an offspring was created in a free field beside it. So far it is clear that being smaller is advantageous, because you can save more energy and reproduce faster. However, a werewolf is also able to kill a smaller vampire and steal its energy. Vice versa, large vampires are able to kill and consume werewolves which are smaller than themselves. This triggered an arms race of larger and larger creatures in the valley. At some point they grew so large that they had to feed constantly on their foes, since the magic field alone could no longer satisfy their hunger for energy. So they grew and fought each other, numbers went up and down on both sides, until one species was left. Or both died.

Do you want to know who won the battle? Find out for yourself and use the simulation below:

In case you cannot see the simulation, your browser does not support applets. I made a video of the simulation for this case:

Small vampires and werewolves, which can feed sufficiently from the magic field, are characterized by light red and gray color, respectively. The red and black squares indicate larger vampires and werewolves.
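For readers who prefer code to applets, the rules of the story can be sketched in a few lines of Python. This is only a minimal reimplementation under my own assumptions - the grid size, energy thresholds, field strength, and upkeep cost below are invented parameters, not the values of the original applet:

```python
import random

N = 20          # torus grid size (my assumption; the applet's values are unknown)
MAGIC = 1.0     # energy gained from the magic field per step (assumption)
UPKEEP = 0.2    # per-step living cost proportional to size (assumption)
SPAWN_AT = 5.0  # energy needed to reproduce (assumption)

def neighbors(x, y):
    """The 4-neighborhood on the torus - the valley wraps around."""
    return [((x + dx) % N, (y + dy) % N)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def step(world):
    """One round: harvest magic, fight, starve, reproduce."""
    for pos, c in list(world.items()):
        if world.get(pos) is not c:   # this creature was killed earlier this round
            continue
        # small creatures harvest the field efficiently; big ones pay more upkeep
        c["energy"] += MAGIC / c["size"] - UPKEEP * c["size"]
        # kill one smaller adjacent creature of the other species, steal its energy
        for n in neighbors(*pos):
            prey = world.get(n)
            if prey and prey["species"] != c["species"] and prey["size"] < c["size"]:
                c["energy"] += prey["energy"]
                del world[n]
                break
        if c["energy"] <= 0:           # starved
            del world[pos]
        elif c["energy"] >= SPAWN_AT:  # offspring appears in a free field beside it
            free = [n for n in neighbors(*pos) if n not in world]
            if free:
                c["energy"] -= SPAWN_AT / 2
                world[random.choice(free)] = {
                    "species": c["species"],
                    "size": c["size"] * random.uniform(0.9, 1.1),  # +/-10% mutation
                    "energy": SPAWN_AT / 2}

random.seed(1)
world = {}
for s in ("vampire", "werewolf"):
    for _ in range(20):
        world[(random.randrange(N), random.randrange(N))] = \
            {"species": s, "size": 1.0, "energy": 1.0}
for t in range(100):
    step(world)
counts = {"vampire": 0, "werewolf": 0}
for c in world.values():
    counts[c["species"]] += 1
print(counts)
```

Who wins depends heavily on the chosen parameters and the random seed, which is exactly the point of playing with the simulation yourself.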

Have fun and happy Halloween!

Monday, October 22, 2012

Sixth IEEE Xtreme Programming Contest: Bunnies in the Forest

At the annual IEEE Xtreme Programming Contest, teams of three programmers are given a set of approximately 20 problems, each of which they have to solve by writing a program. The choice of programming language is mostly free; the contest system supports Java, C, C++, C#, PHP, Python, and Ruby. The contest goes on for exactly 24 hours, hence the "Xtreme" in its name. In the 2012 edition, there were 1900 teams worldwide. To be successful, it is necessary to work with concentration under pressure for hours and to have excellent programming skills. In fact, it is rather software engineering skills that matter, since a sloppy or ad-hoc programming style does not lead to successful solutions. Watching a good team, one can observe the classic stages of software engineering - specification of operational and performance qualification, design specification, implementation, black-box/white-box testing, and validation - in fast-forward within a few hours.

One important aspect of software engineering is also the proper specification of the intended project by the customer. A mistake in the initial specification is critical, so it is important to state a task in a clear, unambiguous manner. In practice, unfortunately, a software engineering team often has to guess what the customer really wants. This was the case for problem AA at IEEE Xtreme 2012:

In a forest, there were 'x' bunnies, 50% male, and 50% female, all adults. Bunnies doubles every 15 days, 10% of the baby rabbits dies at birth. They mature after 30 days, 30% leave the forest, and rest becomes rabbits. In every 30 days , 25% dies off due to flu. If every bunny dies off, the bunny world ends. Calculate the final number of bunnies alive after 1 year for any number of initial bunnies, x.

The problem is very interesting since it defines a simulation of an ecological system; see for comparison the description of the Lotka-Volterra system featuring rabbits and foxes. However, the problem specification is unclear in many respects. What is the essential difference between an "adult bunny", a "bunny", a "baby bunny", and a "rabbit"? It is not mentioned how to handle rounding, whether leaving the forest happens once for a group that just matured, or whether an adult rabbit is tempted to leave the forest at every interval.
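The Lotka-Volterra dynamics mentioned above are, by contrast, precisely specified and easy to simulate. Here is a short Euler-integration sketch; the coefficients and initial populations are arbitrary illustration values, not taken from any particular dataset:

```python
# Lotka-Volterra predator-prey model:
#   dR/dt = a*R - b*R*F   (rabbits grow, get eaten by foxes)
#   dF/dt = c*R*F - d*F   (foxes grow by eating rabbits, die otherwise)
a, b, c, d = 1.0, 0.1, 0.02, 0.5   # illustration values (assumption)
R, F = 40.0, 9.0                   # initial rabbits and foxes (assumption)
dt = 0.001                         # step size of the explicit Euler integrator

history = []
for _ in range(40000):             # simulate 40 time units
    R, F = R + dt * (a * R - b * R * F), F + dt * (c * R * F - d * F)
    history.append((R, F))

print("rabbits range: %.1f..%.1f" % (min(r for r, f in history),
                                     max(r for r, f in history)))
```

Unlike the contest problem, both populations oscillate but never die out here, because the continuous model has no rounding and no extinction threshold.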

The problem was complemented by two test cases:

Test Case 1: input 444, output 0
Test Case 2: input 30000, output 56854
So a group of 444 dies out after one year, while a group of 30000 almost doubles. Given that the described effects are all linearly superimposable (except for rounding effects), it seems odd that the two groups yield such different results: 30000 is around 68 times as much as 444, so the results should also differ by roughly that factor. The following Python program implements one possible interpretation of the problem:
b0 = 0              # newborn bunnies
b15 = 0             # 15-day-old bunnies
b30 = int(input())  # adult bunnies (30+ days old), read from stdin
for i in range(25):          # one year in 15-day steps
    print("t:", i*15, " bunnies:", b0+b15+b30)
    babies = int(b30*0.9)    # babies are born, 10% die at birth
    b30 = int(b30+b15*0.7)   # 15-day-olds mature, 30% leave the forest
    b15 = int(b0*0.75)       # newborns age to 15 days; 25% die of the flu
    b0 = int(babies*0.75)    # 25% die of the flu
    b30 = int(b30*0.75)      # 25% die of the flu

Running the program gives us 772 bunnies after one year with a starting population of 444, and 54816 for a starting population of 30000, both in contradiction to the test cases. Obviously, the specification is unclear or wrong. Among all 1900 participating teams, not a single one was able to find the correct solution. Poor bunnies :-)

On rabbits and foxes, see also section 2 of

Sunday, October 14, 2012

Why is it important to get cited?

♫ "Hey! I just met you, and this is crazy,
      but here's my paper, so cite me, maybe?" ♫

(to be sung to the tune of "Call me maybe" by Carly Rae Jepsen, idea for text adaptation by Nikolaj Marchenko)

If this worked, you would find a lot of people singing that tune at conferences. The reason why it is important to get cited is that the number of citations of your publications has become a measure of your scientific performance. The most famous indicator is the h-index [1]: the largest number h for which there are at least h papers with at least h citations each.
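The definition translates directly into code; a small Python sketch:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    # the (h+1)-th most-cited paper must have at least h+1 citations
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

# Five papers with 10, 8, 5, 4, and 3 citations: four papers have at
# least 4 citations each, but not five papers with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```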

20 years ago, a scientist was assessed by the number of papers she or he managed to write and publish. Getting a paper published was difficult, because there was limited space in the journals and each issue was a costly and time-consuming endeavor involving typesetting, printing, and distribution. In economic terms, this means we had a shortage of a resource, which made it valuable.

Today, things are better with regard to the cost and effort of a publication - typesetting software is fast and easy to use, and costs are lower than in the past. And when the Internet became the main medium instead of paper, printing costs vanished. If you like, you can found a new journal by just investing some time into the setup of a webpage template. Apart from your own working time, personnel costs would be no issue, since traditionally, being a journal's editor or reviewer is considered an honorary but unpaid job. This creates a quality assurance problem: if everybody can publish by themselves or provide an easy publication opportunity for others, the number of publications loses its status as a criterion for scientific quality and success. Therefore, attention has shifted to measuring the actual impact of a publication in order to infer its quality. The easy formula is: the more other works are influenced by a publication, the better this publication must have been.

This concept has its pros and cons. On the positive side, at least for all publications on the internet, the number of citations can be calculated automatically - Google Scholar does it for you. Second, there is a good correlation between successful scientists and their number of citations. Negative aspects are that citations from publications which are not online are usually not included. There is also a bias depending on the scientific field, although there is work suggesting correction factors for this bias [3]. The method further counts citations without distinguishing the quality of the citation (be it positive, negative, long, brief, etc.). And finally, counting citations primarily measures the popularity of a paper, which explains why successful (popular) scientists have lots of citations. Still, it appears that counting citations is currently the best way to assess publications with low effort. And it is a nice application of network theory.

Citations can also be used to assess journals: the more the publications in a journal are cited by others, the better the journal. If everybody tries to get their papers published in journals with high impact, i.e., many citations, the competition leads to a shortage of excellent publication venues. Interestingly, a top journal does not require more effort than one with lower impact. The self-organizing effect of authors competing for the 'best' journals puts these journals in the convenient situation that they can pick the best papers - which in turn helps them keep their position. Regardless of the flaws of citation-based impact analysis, as long as it is used by so many people, you have to play along.

Finally, some tips you might have been waiting for:
How can you push your h-index by maximizing the chance to get cited?
  • Make your publications available online (mine are here btw)
  • Discuss your work with others
  • Write good papers. Interesting comprehensive work is more likely to be cited.
  • Avoid low-impact journals and conferences
  • Publish in the language which is most common for your field of research. In most cases this is English.
  • Add your paper as reference to appropriate pages in social networks (e.g., Wikipedia)
Note that these tips are in general part of serious research work. They make sense whether you believe in the h-index or not. Don't try to fake your h-index, e.g., by massively citing yourself. Self-citations are likely to be excluded in future h-index calculations; technically, excluding self-citations would be easy for Google Scholar and co. to implement.

  1. h-index. Wikipedia
  2. Google Scholar citation count (took myself as example)
  3. J. E. Iglesias and C. Pecharromán. Scaling the h-index for different scientific ISI fields. Scientometrics, Vol. 73, No. 3 (2007)
  4. W. Elmenreich. Google Scholar, Citation Indices, and the University of Klagenfurt. TEWI-Blog. November 2011

Wednesday, October 10, 2012

Call for Papers for the 7th International Workshop on Self-organizing Systems (IWSOS 2013)

Palma de Mallorca, Spain
May 9-10, 2013

Extended Paper Submission Deadline: January 18, 2013

See you in Palma de Mallorca!
(Photo: Rafael Ortega Díaz/Wikipedia)

The main themes of IWSOS 2013 are from the fields of techno-social systems and networks-of-networks with their unique and complex blend of cognitive, social, and technological aspects. We will analyse how these systems self-organize, acquire their structure, and evolve. Thus, we aim to advance our understanding of such key infrastructures in our societies and, more generally, of these sorts of self-organizational processes in nature.

We are further interested in learning how to engineer such self-organizing networked systems to have desirable properties including dependability, predictability, and resilience in the face of the inevitable challenges that they face.

Building on the success of its predecessors, this multi-disciplinary workshop aims at bringing together leading international researchers from complex systems, distributed systems, and communication networks to create a visionary forum for discussing the future of self-organization in networked systems. We invite the submission of manuscripts that present original research results on the themes of self-organization in techno-social systems and networks-of-networks.

Key Topics

The workshop scope includes, but is not limited to, the following topical areas of self-organizing systems:

- Design and analysis of self-organizing and self-managing systems
- Inspiring models of self-organization in nature and society
- Structure, characteristics, and dynamics of self-organizing networks
- Self-organization in techno-social systems
- Self-organized social computation
- Self-organized communication systems
- Citizen Science
- Techniques and tools for modeling self-organizing systems
- Tools to quantify self-organization
- Control and control parameters of self-organizing systems
- Phase transitions in self-organizing systems
- Robustness and adaptation in self-organizing systems
- Self-organization in complex networks such as peer-to-peer, sensor,
  ad-hoc, vehicular, and social networks
- Self-organization in socio-economic systems
- User and operator-related aspects of man-made self-organizing systems
- Self-organizing multi-service networks and multi-network services
- Methods for configuration and management of large, complex networks
- Self-protection, self-configuration, diagnosis, and healing
- Self-organizing group and pattern formation
- Self-organizing mechanisms for task allocation, coordination and
  resource allocation
- Self-organizing information dissemination and content search
- Security and safety in self-organizing networked systems
- Risks and limits of self-organization
- The human in the loop of self-organizing networks
- Social, cognitive, and semantic aspects of self-organization
- Evolutionary principles of the (future, emerging) Internet
- Decentralized power management in the smart grid

Important Dates

Submission deadline: January 18, 2013 (extended)
Notification of acceptance: January 15, 2013
Camera-ready papers due: February 3, 2013
Conference dates: May 9-10, 2013


IWSOS 2013 invites the submission of manuscripts that present original research results which have not been previously published and are not currently under review by another conference or journal. Any previous or simultaneous publication of related material should be explicitly noted in the submission. All papers must be submitted in PDF format. Submissions will be peer reviewed by at least three members of the international technical program committee and judged on originality, significance, clarity, relevance, and correctness.

The Springer “LNCS Proceedings” style should be used for submission. Templates for LaTeX and Word are available at

Full papers should describe original research results. Submissions should be full-length papers up to 12 pages using the LNCS style (including figures, references, and a short abstract).

Short Papers should be position papers, challenging papers, and papers presenting first results. Short papers are up to 6 pages using the LNCS style (including figures, references, and a short abstract).


The proceedings will be published by Springer-Verlag in their Lecture Notes in Computer Science (LNCS) series. At least one of the authors of each accepted paper must attend IWSOS 2013 to present the paper.


General Chairs
Maxi San Miguel, IFISC (CSIC-University Balearic Islands), Spain
Hermann de Meer, University of Passau, Germany

Program Chairs
Falko Dressler, University of Innsbruck, Austria
Vittorio Loreto, Sapienza University of Rome, Italy

Publicity Chairs
Karin Anna Hummel, ETH Zurich, Switzerland
Carlos Gershenson, Universidad Nacional Autónoma de México, Mexico

Publication Chair
Wilfried Elmenreich, University of Passau, Germany

Local Organization Chair
Pere Colet, IFISC (CSIC-University Balearic Islands), Spain

Steering Committee
Hermann de Meer, Univ. Passau, Germany
David Hutchison, Lancaster University, UK
Bernhard Plattner, ETH Zurich, Switzerland
James Sterbenz, University of Kansas, USA
Randy Katz, UC Berkeley, USA
Georg Carle, TU Munich, Germany (IFIP TC6 Representative)
Karin Anna Hummel, ETH Zurich, Switzerland
Shlomo Havlin, Bar-Ilan University, Israel

Technical Program Committee
Karl Aberer, EPFL
Andrea Baronchelli, Northeastern University
Alain Barrat, Centre de Physique Theorique
Marc Barthelemy, Institut de Physique Théorique
Christian Bettstetter, University of Klagenfurt
Raffaele Bruno, Consiglio Nazionale delle Ricerche (CNR)
Claudio Castellano, CNR-ISC Rome
Ciro Cattuto, ISI Foundation Turin
Albert Diaz-Guilera, Universitat de Barcelona
Alois Ferscha, Johannes Kepler University Linz
Andreas Fischer, University of Passau
Santo Fortunato, Aalto University
Carlos Gershenson, Universidad Nacional Autónoma de México
Salima Hassas, University of Lyon 1
Boudewijn Haverkort, University of Twente
Poul Heegaard, Norwegian University of Science and Technology
Tom Holvoet, Katholieke Universiteit Leuven
Karin Anna Hummel, ETH Zurich
Sebastian Lehnhoff, OFFIS Institute for Information Technology
Hein Meling, University of Stavanger
Mirco Musolesi, University of Birmingham
Dimitri Papadimitriou, Alcatel-Lucent Bell
Christian Prehofer, Fraunhofer ESK
Jose Ramasco, Inst. for Cross-Disciplinary Physics and Complex Systems
Andreas Riener, Johannes Kepler University Linz
Kave Salamatian, Universite De Savoie
Hiroki Sayama, Binghamton University
Paul Smith, Austrian Institute of Technology
Bosiljka Tadic, Jozef Stefan Institute
Dirk Trossen, University of Cambridge

Wednesday, September 26, 2012

The 6-minute introduction to FREVO

Evolution is a slow process. In Earth's history, evolution took millions of years to achieve something. Computer simulations of evolutionary processes are much faster, but still take weeks of simulation time on a cluster to run some evolutionary algorithm.

We challenge this!

Our software FREVO (FRamework for EVOlutionary design) aims to reduce the time to implement, set up, and run an evolutionary algorithm that evolves an agent's behavior as a solution to a particular control problem. FREVO decomposes the task into problem definition, solution representation, and the optimization method, in order to ... let's stop talking! I will show you in just 6 minutes!
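The decomposition into problem definition, solution representation, and optimization method can be illustrated with a generic evolutionary loop. The following Python sketch is not FREVO's actual API (FREVO is a Java framework); it only shows the bare pattern on a toy problem, with a bit string as representation and a simple (1+1) mutation-only strategy as optimizer:

```python
import random

random.seed(42)

# Problem definition: a fitness function for a candidate solution.
# Toy problem ("OneMax"): count the ones in the bit string.
def fitness(genome):
    return sum(genome)

# Solution representation: a fixed-length bit string.
def random_genome(n=32):
    return [random.randint(0, 1) for _ in range(n)]

def mutate(genome, rate=1 / 32):
    # flip each bit independently with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# Optimization method: a (1+1) evolution strategy - keep the child
# whenever it is at least as fit as the current best.
best = random_genome()
for generation in range(2000):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(fitness(best))  # close to the optimum of 32 for this toy problem
```

In FREVO, each of these three parts is a pluggable component, so the same optimizer can, for instance, evolve a neural network controller instead of a bit string.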

As demonstrated in the video below, 6 minutes are sufficient to download the framework, install it, set up a simulation, evolve a neural network controller for an inverted pendulum problem and check the results! See for yourself!

The tool is available at
There are also tutorials on advanced projects with FREVO. If you are doing research in engineering complex systems, this tool might be useful for your thesis ;-)

If FREVO is useful for you, please cite this paper:

A. Sobe, I. Fehérvári, and W. Elmenreich. Frevo: A tool for evolving and evaluating self-organizing systems. In Proceedings of the 1st International Workshop on Evaluation for Self-Adaptive and Self-Organizing Systems, Lyon, France, September 2012.

Thursday, April 12, 2012

Symposium on Self-* Systems – Biological Foundations and Technological Applications

The Symposium on Self-* Systems – Biological Foundations and Technological Applications was part of the European Meeting on Cybernetics and Systems Research (EMCSR 2012) taking place from April 10-13 in Vienna, Austria. It was organized by Vesna Sesum-Cavic, Carlos Gershenson and Wilfried Elmenreich.

Tomonori Hasegawa presented insights on the self-referential logic of self-reproduction originally formulated by John von Neumann and introduced an implementation of this abstract architecture embedded within the Avida world [1]. In the experiments, a sophisticated von Neumann self-referential machine, which was introduced as a seeding mechanism, can degrade to a mere copy machine that has dropped the self-referential part. Thus, with this particular implementation, in this particular world, the von Neumann architecture proves to be evolutionarily unstable and degenerates, surprisingly easily, to a primitive, non-self-referential "copying" or "template replication" mode of reproduction.
Questions arose as to whether a von Neumann self-referential machine could evolve from a simple self-copying machine in a different set-up. The von Neumann model has the advantage of enabling new and more ways to change the system upon mutation - but what could be the evolutionary pressure for a von Neumann architecture to evolve in the first place?
Current experiments did not include sexual reproduction - could that facilitate the evolution of more complex architectures?
More from the group can be found at

Modern software systems suffer from increased complexity. Large software systems are composed of many interlinked components, and the internal states of these systems contain a huge amount of information. The main obstacles lie in the lack of reliability and robustness, which leads to poor performance. For complex, intractable problems, a random search (Monte Carlo method) does not perform well. New and advanced approaches are necessary to deal with this complexity.
Milan Tuba proposed a guided Monte Carlo search method based on a hybrid of reinforcement learning and a genetic algorithm [2]. As a proof of concept, the approach is applied to the problem of information retrieval on the internet.
In the discussion, the performance of the algorithm was compared to that of commercial search providers like Google.

Sander van Splunter presented ideas on coordination and self-organization in crisis management [3]. The main idea is to move from a task-oriented top-down approach towards an emergence-oriented bottom-up approach, while keeping a hierarchical structure. However, higher-level entities control lower levels by policy, not directly. Policy defines interaction, prioritization, and coordination of entities.
An entity works as an independent agent following the given policies. An important feature could be the ability to predict failures in a given subsystem.
According to the EMCSR'12 keynote lecture by Peter Csermely, we have to watch for signs like slower recovery, increased self-similarity, and increased variance of fluctuation patterns in order to predict a system's change into a state where it cannot handle its environment with its current policies.
In the crisis management approach proposed by van Splunter and van Veelen, a subsystem is supposed to raise a warning when local adaptation fails to handle the problem, e.g., if a small team of firefighters cannot confine a fire in their assigned area.

Carlos Gershenson told us about "Living in living cities" [4]. One of the challenges of the 21st century is preventing problems in the ultra-fast-growing cities all over the world.
Since these problems are non-stationary (they keep changing), traditional algorithms do not work well. Such challenges include urban mobility, logistics, telecommunications, governance, safety, sustainability, society, and culture. A solution is to exploit properties of living systems - which are adaptive, learning, evolving, robust, autonomous, self-repairing, and self-reproducing - and to understand cities metaphorically as organisms.
Engineering methods cannot find a single solution to these changing problems. Instead, it is necessary to constantly adapt the solution, that is, to have a self-organizing solution to a complex problem. Will cities become the "killer app" of cybernetics and systems research?
Discussion arose around the following issues:
But how do you get the officers and officials of a city to cooperate? There is a need for strong motivation to overcome the inertia of the system.
Could cities instead be built from scratch? No, because of legacy issues - it is not possible to just tear down a large city and build it anew every few decades.
Can we prove that the system is robust against malicious behavior? This is difficult, since such a complex system cannot easily be predicted for all sets of possible inputs.
Are explicit measures necessary, or would people take care of the necessary adaptations themselves? The latter alone would not increase the living standard for the people.

Anita Sobe presented ideas on self-organizing content sharing at social events, with such interesting examples as the wedding of Kate and William of Windsor or Barack Obama's inauguration [5].
The presented approach allows people to share their self-generated content, like photos or short videos, instantly at such events. Existing platforms like Flickr or YouTube do not provide this liveness, since most content is uploaded with a delay of a few days. The proposed approach organizes the content using an artificial hormone system. The hormone distribution is sensitive to the quality of a network connection and therefore reflects a quality of service for a network path. The system is based solely on local decisions for forwarding, replicating, and moving content. Over time, the content distribution in the network is optimized to support short response times for requesters. Simulations show that the system competes well with other epidemic information dissemination methods such as gossip.
The follow-up discussion brought up interesting questions:
How is overhead reflected in the simulation? Currently, overhead is implicitly modeled in the transmission cost, which is valid for a constant packet-handling overhead.
Furthermore, the relation of the hormone-based approach to ant colony optimization (ACO) algorithms was discussed. We identified a major difference to ACO, since in ACO typically either the network or the content is assumed to be static. However, ACO could be extended to handle the described scenario, which might inspire future work.
What is the effect if tags are (more) complex? The system was started with a predefined tag hierarchy, which can be extended to a more complex one. However, with more complex tags there is no guarantee of finding content.

In the last talk, Wilfried Elmenreich presented work on evolving a distributed control algorithm for flying UAV drones to solve a coverage problem [6]. The problem of having multiple mobile agents cover (or, as we say in robotics, "sweep") an area is relevant for many applications such as lawn mowing, snow removal, floor cleaning, environmental monitoring, communication assistance, and several military and security applications.
The work by István Fehérvári, Wilfried Elmenreich, and Evsen Yanmaz described a simple grid-based abstraction of the problem, which was used to test two evolved and one handcrafted control algorithm against reference algorithms like random walk and random direction.
A short summary of Wilfried's talk and the slides are available here.
The talk triggered an interesting discussion involving the comparison with the "belief-based" algorithm. A further question, triggering future work, concerns the influence of the layout and number of sensors. What happens if the environment changes? Since the algorithm keeps no map in memory, a changing environment does not affect the result.
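To make the setting concrete, here is a small Python sketch of the grid abstraction with the random-walk reference strategy. The grid size, number of agents, and step count are invented illustration values; the evolved controllers from the paper are, of course, far more involved than this baseline:

```python
import random

random.seed(7)
W, H, STEPS = 20, 20, 400   # grid size and duration (illustration values)

# four agents start in the corners of the area
agents = [(0, 0), (W - 1, 0), (0, H - 1), (W - 1, H - 1)]
visited = set(agents)       # cells swept so far

for _ in range(STEPS):
    for i, (x, y) in enumerate(agents):
        # random walk: pick one of the four directions, clamp at the border
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx = min(max(x + dx, 0), W - 1)
        ny = min(max(y + dy, 0), H - 1)
        agents[i] = (nx, ny)
        visited.add((nx, ny))

coverage = len(visited) / (W * H)
print("coverage after %d steps: %.0f%%" % (STEPS, coverage * 100))
```

Running this shows why random walk is only a baseline: agents revisit cells constantly, so coverage grows slowly, and a controller that reacts to its local sensor readings can sweep the same area far more efficiently.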

  1. B. McMullin, T. Hasegawa. Von Neumann Redux: Revisiting the Self-referential Logic of Machine Reproduction Using the Avida World. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  2. V. Sesum-Cavic, M. Tuba, and S. Rankow. The Influence of Self-Organization on Reducing Complexity in Information Retrieval. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  3. S. van Splunter, B. van Veelen. Coordination and Self-Organisation in Crisis Management. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  4. C. Gershenson. Living in Living Cities. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  5. A. Sobe, W. Elmenreich, and M. del Fabro. Self-organizing content sharing at social events. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.
  6. I. Fehérvári, W. Elmenreich, and E. Yanmaz. Evolving a team of self-organizing UAVs to address spatial coverage problems. In R. M. Bichler, S. Blachfellner, and W. Hofkirchner, editors, European Meeting on Cybernetics and Systems Research Book of Abstracts, Vienna, Austria, April 2012.

Wednesday, April 11, 2012

Evolving a Team of Self-organizing UAVs to Address Spatial Coverage Problems

Typical small UAV (AscTec Pelican)
Coordinating a team of agents, such as a search team, cleaning robots, or flying drones for surveillance or environmental monitoring, is a highly relevant problem. If the environment is unknown or subject to change, an a priori planning algorithm becomes difficult to apply. Therefore, we looked into decentralized self-organizing algorithms to do the job.
In a joint work by István Fehérvári, Evsen Yanmaz, and Wilfried Elmenreich (me), we evolve controllers for a team of unmanned aerial vehicles (UAVs) tasked with observing or covering a partially obstructed area.
The agents are limited in their sensory inputs to local observations of the environment, without the ability to determine their own absolute position or that of others. Each agent is equipped with a number of sensors that can detect the presence of other agents, obstacles, and the border of the area.
Simulation and evaluation model
The controller of an agent is implemented as an artificial neural network. The fitness of a given configuration is derived from the average spatial coverage over several simulation runs. The area coverage performance of the evolved controllers with different numbers of sensors is compared to reference movement models such as random walk, random direction, and an algorithm based on beliefs about the intentions of agents encountered during the simulation. Our results show that evolved controllers can create a self-organizing, cooperating team of agents that exploits the advantages provided by its sensors, outperforms naïve coverage algorithms, and matches the performance of a recent algorithm that uses additional information.
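The evaluation loop, mapping network weights to a fitness value, can be sketched as follows. The network size, the sensor encoding (here simply normalized distances to the borders), and all simulation parameters are illustrative stand-ins, not the configuration used in the paper, and obstacles are omitted for brevity.

```python
import math
import random

def controller(weights, inputs):
    """Tiny feedforward net: 4 sensor inputs -> 4 movement scores.

    Expects len(weights) == 16; returns the index of the chosen move.
    """
    n_in, n_out = 4, 4
    scores = []
    for o in range(n_out):
        s = sum(weights[o * n_in + i] * inputs[i] for i in range(n_in))
        scores.append(math.tanh(s))
    return scores.index(max(scores))

def fitness(weights, size=15, agents=3, steps=300, runs=3, seed=0):
    """Average spatial coverage over several simulation runs."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    total = 0.0
    for r in range(runs):
        rng = random.Random(seed + r)
        pos = [(rng.randrange(size), rng.randrange(size)) for _ in range(agents)]
        visited = set(pos)
        for _ in range(steps):
            for i, (x, y) in enumerate(pos):
                # purely local sensor readings: normalized border distances
                inputs = [x / size, (size - 1 - x) / size,
                          y / size, (size - 1 - y) / size]
                dx, dy = moves[controller(weights, inputs)]
                nx = min(max(x + dx, 0), size - 1)
                ny = min(max(y + dy, 0), size - 1)
                pos[i] = (nx, ny)
                visited.add((nx, ny))
        total += len(visited) / (size * size)
    return total / runs
```

An evolutionary algorithm would then repeatedly mutate and recombine the weight vectors, keeping those with the highest fitness.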

The work was presented in a talk at the European Meeting on Cybernetics and Systems Research (EMCSR 2012) in Vienna, Austria. Slides are available via slideshare:

Wednesday, January 18, 2012

6th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2012)

Lyon, France
10-14 September 2012 
Important Dates
Abstract submission: April 23rd, 2012
Full paper submission: April 30th, 2012
Notification of acceptance: June 20th, 2012

The aim of the SASO conference series is to provide a forum for presenting the latest results about self-adaptive and self-organizing systems, networks and services. To this end, the meeting aims to attract participants with different backgrounds, to foster cross-pollination between research fields, to expose and discuss innovative theories, frameworks, methodologies, tools, and applications, and to identify new challenges. The complexity of current and emerging computing systems has led the software engineering, distributed systems and management communities to look for inspiration in diverse fields (e.g., complex systems, control theory, artificial intelligence, sociology, biology, etc.) to find new ways of designing and managing networks, systems and services. In this endeavor, self-organization and self-adaptation have emerged as two promising interrelated facets of a paradigm shift.

Self-adaptive systems work in a top down manner. They evaluate their own global behavior and change it when the evaluation indicates that they are not accomplishing what they were intended to do, or when better function or performance is possible. A challenge is often to identify how to change specific behaviors to achieve the desired improvement. Self-organizing systems work bottom up. They are composed of a large number of components that interact locally according to typically simple rules. The global behavior of the system emerges from these local interactions. Here, a challenge is often to predict and control the resulting global behavior.
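As a toy illustration of the bottom-up principle (an assumed example, not part of the call): consider cells on a ring that repeatedly adopt the majority state of their three-cell local neighborhood. Homogeneous domains emerge from purely local interactions, without any central coordinator; the ring size and the particular update rule are of course only illustrative.

```python
import random

def self_organize(n=30, steps=50, seed=3):
    """Majority rule on a ring of binary cells.

    Each cell looks only at its left neighbor, itself, and its right
    neighbor; the global pattern of stable domains emerges from these
    local interactions.
    """
    rng = random.Random(seed)
    state = [rng.choice([0, 1]) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # local rule: adopt the majority of the 3-cell neighborhood
            neigh = state[(i - 1) % n] + state[i] + state[(i + 1) % n]
            new.append(1 if neigh >= 2 else 0)
        state = new
    return state
```

Predicting which domains survive from the initial random state alone is already non-trivial, which hints at the control challenge the call describes.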

Topics of Interest
The SASO conference is interested in both theoretical and practical aspects of systems exhibiting self-* characteristics. A particular focus is the modeling of natural, man-made and social systems that exhibit self-adaptation and self-organization characteristics as well as the constructive use of the underlying basic principles in technical systems. The sixth edition of SASO particularly encourages submissions from the following, non-exclusive list of topic areas:

- Principles, Theory, Methods and Architectures for SASO Systems
- Robustness, Resilience and Fault-Tolerance in/with Self-* Systems
- Self-* Behavior in Communication Networks
- (Self-)Control, (Self-)Observation, (Self-)Monitoring of Engineered Systems
- Collective Phenomena in Social and Socio-Technical Systems
- Self-Organization and Self-Adaptation in Biological/Natural Systems
- Applications of Spatial and Physics-Inspired Self-Organization
- SASO Principles in Cyber-Security
- SASO Principles in Collective Robotic Systems
- SASO Principles in Cyber-Physical Systems
- Real-World Experience with Engineered Systems Exhibiting Self-* Properties

All contributions must present novel theoretical or experimental results, or practical approaches and experiences in building or deploying real-world systems and applications. Contributions that contrast "conventional" engineering principles with novel approaches making use of SASO principles are especially welcome.

Submission Instructions
All submissions should be 10 pages and formatted according to the IEEE Computer Society Press proceedings style guide and submitted electronically in PDF format. Please register as authors and submit your papers using the SASO 2012 conference management system. The proceedings will be published by IEEE Computer Society Press, and made available as a part of the IEEE digital library. Note that a separate call for poster and demo submissions has also been issued.

Emerging Topic Papers
In addition to regular papers, SASO also encourages the submission of papers on emerging topics. These submissions should be clearly marked as such (indicating "Emerging Topic:" in the title) and should provide a well-rounded survey of novel questions, methods and abstractions that are relevant for the design of SASO systems along with a clear indication of the possible impact on the SASO community. In this category we particularly encourage submissions that present innovative applications of methodological frameworks being used in other fields of science that study SASO related phenomena, thus highlighting connections and potential for collaboration between different scientific communities.

Review Criteria
Papers should present novel ideas in the topic domains listed above, clearly motivated by problems from current practice or applied research. We expect claims of contribution to be clearly stated and substantiated by formal analysis, experimental evaluations, or comparative studies. Appropriate references must be made to related work. Since SASO is a cross-disciplinary conference, a criterion that will be strictly enforced by the program committee is that all papers must be understandable by researchers who are not members of a particular, highly specialized scientific community. Emphasis should instead be placed on cross-cutting aspects that are relevant to a wider audience of researchers and engineers dealing with SASO systems. Furthermore, submissions making use of principles inspired by phenomena occurring in fields like biology, physics, sociology, or economics are required to provide references for all relevant work in the respective field. Papers demonstrating SASO principles in practical applications are expected to provide an indication of the real-world relevance of the problem that is solved, including some form of evaluation of performance, usability, or superiority to alternative state-of-the-art approaches. If the application is still early work in progress, the authors are expected to provide strong arguments as to why the proposed approach will work in the chosen domain.

The program committee strongly suggests reviewing the list of common reasons for SASO submissions being rejected, which is available online. Furthermore, a collection of interdisciplinary approaches to the study of SASO-related phenomena is provided. Prospective authors are invited to check whether their research question can be related to this rich body of work, thus benefiting from tools, methods, and findings developed in various disciplines.

Technical Meeting Committee
General chairs
Salima Hassas, Universite Claude Bernard-Lyon 1, France
Paul Robertson, DOLL, USA

PC chairs
Anwitaman Datta (Distributed Systems), NTU, Singapore
Marie-Pierre Gleizes (Self-organization), Universite de Toulouse, France
Ingo Scholtes (Socio-technical Systems), ETH Zurich, Switzerland

Local chair
Gauthier Picard, Ecole Nationale Superieure des Mines de Saint-Etienne

Finance chair
Frederic Armetta, Universite Claude Bernard-Lyon 1, France

Poster chair
Stefan Dulman, Univ. Delft, Netherlands

Contest and Demos track Chairs
Olivier Simonin, LORIA, France 

Antonio Coronato, ICAR-CNR, Italy

Workshop chair
Jeremy Pitt, Imperial College London, UK

Tutorial chair
Giuseppe (Peppo) Valetto, Drexel University, USA

Publicity chair
Jose Luis Fernandez-Marquez, Univ. Geneva, Switzerland
Zhang Jie, Univ. Singapore, Singapore
Sam Malek, George Mason Univ., Fairfax, USA

Publication chair
Sven Brueckner, Jacobs Technology Inc., USA

Sponsor chair
Bob Laddaga, DOLL, USA

Web and Wiki chair
Haytham El Ghazel, Universite Claude Bernard-Lyon 1, France