Wednesday, October 31, 2012

Vampires vs. Werewolves

Tonight is Halloween! Typical Halloween activities include telling scary stories, so I am going to tell you a story about vampires and werewolves.
Once upon a time, in a valley in Complexania, there lived werewolves and vampires. They could live off the magic field in the valley as long as they did not grow too large. The valley is also magically rolled up into a torus surface, so have no fear, kids, the creatures cannot escape. Their size is genetically determined, but when they reproduce, the target size may mutate by plus or minus 10 percent. Whenever one of these creatures gathered enough magic (which is easier when they are small), an offspring was created in a free field beside it. So far, being smaller is clearly advantageous, because you can save more energy and reproduce faster. However, a werewolf is also able to kill a smaller vampire and steal its energy. Vice versa, large vampires are able to kill and consume werewolves smaller than themselves. This triggered an arms race of larger and larger creatures in the valley. At some point they grew so large that they had to feed constantly on their foes, since the magic field alone could no longer satisfy their hunger for energy. So they grew and fought each other, numbers going up and down on both sides, until one species was left. Or both died.

Do you want to know who won the battle? Find out for yourself and use the simulation below:

In case you cannot see the simulation, your browser does not support applets; for that case, I made a video of the simulation:

Small vampires and werewolves, which can feed sufficiently from the magic field, are shown in light red and gray, respectively. The red and black squares indicate larger vampires and werewolves.
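If you would like to experiment with the model yourself, the rules of the story can be sketched as a small agent-based simulation in Python. Note that the grid size, the energy-harvesting rule, and the reproduction threshold below are my own guesses, not the parameters of the applet, so the outcome may differ from the simulation above:

```python
import random

SIZE = 20     # the torus is SIZE x SIZE cells
FIELD = 10.0  # magic energy available per cell and step

class Creature:
    def __init__(self, species, size):
        self.species = species  # 'V' vampire or 'W' werewolf
        self.size = size        # genetically given target size
        self.energy = 0.0

def neighbors(x, y):
    # four neighboring cells with torus wrap-around
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def step(grid):
    for (x, y), c in list(grid.items()):
        if grid.get((x, y)) is not c:
            continue  # this creature was killed earlier in the step
        # smaller creatures harvest the magic field more easily
        c.energy += min(FIELD, FIELD / c.size)
        # kill smaller neighbors of the other species and steal their energy
        for pos in neighbors(x, y):
            other = grid.get(pos)
            if other and other.species != c.species and other.size < c.size:
                c.energy += other.energy
                del grid[pos]
        # with enough energy, reproduce into a free neighboring cell,
        # mutating the offspring's size by plus/minus 10 percent
        if c.energy > c.size:
            free = [p for p in neighbors(x, y) if p not in grid]
            if free:
                child_size = c.size * random.uniform(0.9, 1.1)
                grid[random.choice(free)] = Creature(c.species, child_size)
                c.energy -= c.size

random.seed(1)
grid = {}
for _ in range(40):  # random initial population
    pos = (random.randrange(SIZE), random.randrange(SIZE))
    grid[pos] = Creature(random.choice('VW'), random.uniform(1.0, 2.0))
for _ in range(100):
    step(grid)
print(sum(c.species == 'V' for c in grid.values()), "vampires,",
      sum(c.species == 'W' for c in grid.values()), "werewolves")
```

Depending on the random seed and the chosen parameters, one species may die out, both may coexist for a long time, or both may vanish, just as in the story.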

Have fun and happy Halloween!

Monday, October 22, 2012

Sixth IEEE Xtreme Programming Contest: Bunnies in the Forest

At the annual IEEE Xtreme Programming Contest, teams of three programmers are given a set of approximately 20 problems, for each of which they have to write a program that solves the task. The choice of programming language is largely free; the contest system supports Java, C, C++, C#, PHP, Python, and Ruby. The contest runs for exactly 24 hours, hence the "Xtreme" in its name. In the 2012 edition, 1900 teams participated worldwide. To be successful, it is necessary to work with concentration under pressure for hours and to have excellent programming skills. In fact, it is rather software engineering skills that matter, since a sloppy or ad-hoc programming style does not lead to successful solutions. Watching a good team, one can observe the classic stages of software engineering, such as specification of operational and performance qualification, design specification, implementation, black-box/white-box testing, and validation, in fast-forward within a few hours.

One important aspect of software engineering is the proper specification of the intended project by the customer. A mistake in the initial specification is critical, so it is important to state a task in a clear, unambiguous manner. In practice, unfortunately, a software engineering team often has to guess what the customer really wants. This was the case for problem AA at IEEE Xtreme 2012:

In a forest, there were 'x' bunnies, 50% male, and 50% female, all adults. Bunnies doubles every 15 days, 10% of the baby rabbits dies at birth. They mature after 30 days, 30% leave the forest, and rest becomes rabbits. In every 30 days , 25% dies off due to flu. If every bunny dies off, the bunny world ends. Calculate the final number of bunnies alive after 1 year for any number of initial bunnies, x.

The problem is very interesting since it defines a simulation of an ecological system; compare the description of the Lotka-Volterra system featuring rabbits and foxes. However, the problem specification is unclear in many respects. What is the essential difference between an "adult bunny", a bunny, a "baby bunny", and a rabbit? It is not mentioned how to handle rounding, whether leaving the forest happens only once for a group that has just matured, or whether an adult rabbit is tempted to leave the forest every period.

The problem was complemented by these two test cases:

Test Case 1: input 444, output 0
Test Case 2: input 30000, output 56854

So a group of 444 dies out after one year, while a group of 30000 almost doubles. Given that the described effects are all linearly superimposable (except for possible rounding errors), it seems odd that the two groups yield such different results: 30000 is around 68 times as much as 444, so the results should simply differ by roughly that factor as well. The following Python program implements one possible interpretation of this problem:
b0 = 0              # newborn bunnies
b15 = 0             # 15-day-old bunnies
b30 = int(input())  # 30-day-old (adult) bunnies, read from stdin
for i in range(25):                # one year in steps of 15 days
    print("t:", i * 15, " bunnies:", b0 + b15 + b30)
    born = int(b30 * 0.9)          # babies are born, 10% die at birth
    b30 = int(b30 + b15 * 0.7)     # 15-day-olds mature, 30% leave the forest
    b15 = b0                       # newborns age to 15 days
    b0 = born
    b0 = int(b0 * 0.75)            # 25% die off due to the flu
    b15 = int(b15 * 0.75)          # 25% die off due to the flu
    b30 = int(b30 * 0.75)          # 25% die off due to the flu

Running the program gives us 772 bunnies after one year for a starting population of 444 and 54816 for a starting population of 30000, both in contradiction to the test cases. Obviously, the specification is unclear or wrong. Among all 1900 participating teams, not a single one was able to find the correct solution. Poor bunnies :-)

On rabbits and foxes, see also section 2 of

Sunday, October 14, 2012

Why is it important to get cited?

♫ "Hey! I just met you, and this is crazy,
      but here's my paper, so cite me, maybe?" ♫

(to be sung to the tune of "Call me maybe" by Carly Rae Jepsen, idea for text adaptation by Nikolaj Marchenko)

If this worked, you would find a lot of people singing that tune at conferences. The reason it is important to get cited is that the number of citations of your publications has become a measure of your scientific performance. The most famous indicator is the h-index [1]: the largest number h for which there are at least h papers with at least h citations each.
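To make the definition concrete, here is a small Python function that computes the h-index from a list of citation counts; the citation counts in the example are made up for illustration:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    while h < len(cited) and cited[h] >= h + 1:
        h += 1
    return h

# Made-up example: six papers with these citation counts.
# h = 3, since at least 3 papers have >= 3 citations,
# but there are no 4 papers with >= 4 citations each.
print(h_index([25, 8, 5, 3, 3, 0]))
```

Note how one highly cited paper alone cannot raise the h-index; it rewards a sustained body of cited work.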

20 years ago, a scientist was assessed by the number of papers she or he managed to write and publish. Getting a paper published was difficult, because space in journals was limited and each issue was a costly and time-consuming endeavor involving typesetting, printing, and distribution. In economic terms, there was a shortage of a resource, which made it valuable.

Today, things are better with regard to the cost and effort of a publication: typesetting software is fast and easy to use, and costs are lower than in the past. And since the Internet became the main medium instead of paper, printing costs have vanished. If you like, you can found a new journal just by investing some time into setting up a webpage template. Apart from your own working time, personnel costs are no issue, since being a journal's editor or reviewer is traditionally considered an honorary but unpaid job. This creates a quality assurance problem: if everybody can publish by themselves or provide an easy publication opportunity for others, the number of publications loses its status as a criterion for scientific quality and success. Therefore, attention has shifted to measuring the actual impact of a publication in order to infer its quality. The simple formula is: the more other works are influenced by a publication, the better this publication must have been.

This concept has its pros and cons. On the positive side, at least for all publications on the Internet, the number of citations can be computed automatically; Google Scholar does it for you. Second, there is a good correlation between successful scientists and their number of citations. On the negative side, citations from publications which are not online are usually not included. There is also a bias depending on the scientific field, although there is work suggesting correction factors for this bias [3]. The method further counts citations without distinguishing the quality of the citation (be it positive, negative, long, brief, etc.). And finally, counting citations primarily measures the popularity of a paper, which explains why successful (popular) scientists have lots of citations. Still, counting citations currently appears to be the best way to assess publications with low effort. And it is a nice application of network theory.

Citations can also be used to assess journals: the more the publications in a journal are cited by others, the better the journal. If everybody tries to get their papers published in journals with high impact, i.e., many citations, the competition leads to a situation with a shortage of excellent publication venues. Interestingly, running a top journal does not require more effort than running one with lower impact. The self-organizing effect of authors competing for the 'best' journals puts these journals in the convenient situation that they can pick the best papers, which in turn helps them keep their position. Regardless of the flaws of citation-based impact analysis, as long as it is used by so many people, you have to play along.

Finally, some tips you might have been waiting for:
How can you push your h-index by maximizing the chance to get cited?
  • Make your publications available online (mine are here btw)
  • Discuss your work with others
  • Write good papers. Interesting comprehensive work is more likely to be cited.
  • Avoid low-impact journals and conferences
  • Publish in the language which is most common for your field of research. In most cases this is English.
  • Add your paper as reference to appropriate pages in social networks (e.g., Wikipedia)
Note that these tips are, in general, part of serious research work. They make sense whether or not you believe in the h-index. Don't try to fake your h-index, e.g., by massively citing yourself. Self-citations are likely to be excluded from future h-index calculations; technically, excluding self-citations would be easy for Google Scholar and co. to implement.

  1. h-index. Wikipedia.
  2. Google Scholar citation count (took myself as an example).
  3. J. E. Iglesias and C. Pecharromán. Scaling the h-index for different scientific ISI fields. Scientometrics, Vol. 73, No. 3, 2007.
  4. W. Elmenreich. Google Scholar, Citation Indices, and the University of Klagenfurt. TEWI-Blog, November 2011.

Wednesday, October 10, 2012

Call for Papers for the 7th International Workshop on Self-organizing Systems (IWSOS 2013)

Palma de Mallorca, Spain
May 9-10, 2013

Extended Paper Submission Deadline: January 18, 2013

See you in Palma de Mallorca!
(Photo: Rafael Ortega Díaz/Wikipedia)

The main themes of IWSOS 2013 are from the fields of techno-social systems and networks-of-networks with their unique and complex blend of cognitive, social, and technological aspects. We will analyse how these systems self-organize, acquire their structure, and evolve. Thus, we aim to advance our understanding of such key infrastructures in our societies and, more generally, of these sorts of self-organizational processes in nature.

We are further interested in learning how to engineer such self-organizing networked systems to have desirable properties including dependability, predictability, and resilience in the face of the inevitable challenges that they face.

Building on the success of its predecessors, this multi-disciplinary workshop aims at bringing together leading international researchers from complex systems, distributed systems, and communication networks to create a visionary forum for discussing the future of self-organization in networked systems. We invite the submission of manuscripts that present original research results on the themes of self-organization in techno-social systems and networks-of-networks.

Key Topics

The workshop scope includes, but is not limited to, the following topical areas of self-organizing systems:

- Design and analysis of self-organizing and self-managing systems
- Inspiring models of self-organization in nature and society
- Structure, characteristics, and dynamics of self-organizing networks
- Self-organization in techno-social systems
- Self-organized social computation
- Self-organized communication systems
- Citizen Science
- Techniques and tools for modeling self-organizing systems
- Tools to quantify self-organization
- Control and control parameters of self-organizing systems
- Phase transitions in self-organizing systems
- Robustness and adaptation in self-organizing systems
- Self-organization in complex networks such as peer-to-peer, sensor,
  ad-hoc, vehicular, and social networks
- Self-organization in socio-economic systems
- User and operator-related aspects of man-made self-organizing systems
- Self-organizing multi-service networks and multi-network services
- Methods for configuration and management of large, complex networks
- Self-protection, self-configuration, diagnosis, and healing
- Self-organizing group and pattern formation
- Self-organizing mechanisms for task allocation, coordination and
  resource allocation
- Self-organizing information dissemination and content search
- Security and safety in self-organizing networked systems
- Risks and limits of self-organization
- The human in the loop of self-organizing networks
- Social, cognitive, and semantic aspects of self-organization
- Evolutionary principles of the (future, emerging) Internet
- Decentralized power management in the smart grid

Important Dates

Submission deadline: January 18, 2013 (extended)
Notification of acceptance: January 15, 2013
Camera-ready papers due: February 3, 2013
Conference dates: May 9-10, 2013


IWSOS 2013 invites the submission of manuscripts that present original research results which have not been previously published and are not currently under review by another conference or journal. Any previous or simultaneous publication of related material should be explicitly noted in the submission. All papers must be submitted in PDF format. Submissions will be peer reviewed by at least three members of the international technical program committee and judged on originality, significance, clarity, relevance, and correctness.

The Springer “LNCS Proceedings” style should be used for submission. Templates for LaTeX and Word are available at

Full papers should describe original research results. Submissions should be full-length papers up to 12 pages using the LNCS style (including figures, references, and a short abstract).

Short Papers should be position papers, challenging papers, and papers presenting first results. Short papers are up to 6 pages using the LNCS style (including figures, references, and a short abstract).


The proceedings will be published by Springer-Verlag in their Lecture Notes in Computer Science (LNCS) series. At least one of the authors of each accepted paper must attend IWSOS 2013 to present the paper.


General Chairs
Maxi San Miguel, IFISC (CSIC-University Balearic Islands), Spain
Hermann de Meer, University of Passau, Germany

Program Chairs
Falko Dressler, University of Innsbruck, Austria
Vittorio Loreto, Sapienza University of Rome, Italy

Publicity Chairs
Karin Anna Hummel, ETH Zurich, Switzerland
Carlos Gershenson, Universidad Nacional Autónoma de México, Mexico

Publication Chair
Wilfried Elmenreich, University of Passau, Germany

Local Organization Chair
Pere Colet, IFISC (CSIC-University Balearic Islands), Spain

Steering Committee
Hermann de Meer, Univ. Passau, Germany
David Hutchison, Lancaster University, UK
Bernhard Plattner, ETH Zurich, Switzerland
James Sterbenz, University of Kansas, USA
Randy Katz, UC Berkeley, USA
Georg Carle, TU Munich, Germany (IFIP TC6 Representative)
Karin Anna Hummel, ETH Zurich, Switzerland
Shlomo Havlin, Bar-Ilan University, Israel

Technical Program Committee
Karl Aberer, EPFL
Andrea Baronchelli, Northeastern University
Alain Barrat, Centre de Physique Theorique
Marc Barthelemy, Institut de Physique Théorique
Christian Bettstetter, University of Klagenfurt
Raffaele Bruno, Consiglio Nazionale delle Ricerche (CNR)
Claudio Castellano, CNR-ISC Rome
Ciro Cattuto, ISI Foundation Turin
Albert Diaz-Guilera, Universitat de Barcelona
Alois Ferscha, Johannes Kepler University Linz
Andreas Fischer, University of Passau
Santo Fortunato, Aalto University
Carlos Gershenson, Universidad Nacional Autónoma de México
Salima Hassas, University of Lyon 1
Boudewijn Haverkort, University of Twente
Poul Heegaard, Norwegian University of Science and Technology
Tom Holvoet, Katholieke Universiteit Leuven
Karin Anna Hummel, ETH Zurich
Sebastian Lehnhoff, OFFIS Institute for Information Technology
Hein Meling, University of Stavanger
Mirco Musolesi, University of Birmingham
Dimitri Papadimitriou, Alcatel-Lucent Bell
Christian Prehofer, Fraunhofer ESK
Jose Ramasco, Inst. for Cross-Disciplinary Physics and Complex Systems
Andreas Riener, Johannes Kepler University Linz
Kave Salamatian, Universite De Savoie
Hiroki Sayama, Binghamton University
Paul Smith, Austrian Institute of Technology
Bosiljka Tadic, Jozef Stefan Institute
Dirk Trossen, University of Cambridge