but here's my paper, so cite me, maybe?" ♫
(to be sung to the tune of "Call me maybe" by Carly Rae Jepsen, idea for text adaptation by Nikolaj Marchenko)
Twenty years ago, a scientist was assessed by the number of papers she or he managed to write and publish. Getting a paper published was difficult, because space in journals was limited and each issue was a costly and time-consuming endeavor involving typesetting, printing, and distribution. In economic terms, publication space was a scarce resource, which made it valuable.
Today, things are better with regard to the cost and effort of a publication - typesetting software is fast and easy to use, and costs are lower than in the past. And since the Internet replaced paper as the main medium, printing costs have vanished. If you like, you can found a new journal just by investing some time into setting up a webpage template. Apart from your own working time, personnel costs would be no issue, since being a journal's editor or reviewer is traditionally considered an honorary but unpaid job. This raises a quality-assurance problem: if everybody can publish by themselves or provide an easy publication opportunity for others, the number of publications loses its status as a criterion for scientific quality and success. Therefore, attention has shifted to measuring the actual impact of a publication in order to infer its quality. The simple formula is: the more other works are influenced by a publication, the better this publication must have been.
This concept has its pros and cons. On the positive side, at least for publications on the Internet, the number of citations can be calculated automatically - Google Scholar does it for you. Second, there is a good correlation between successful scientists and their number of citations. On the negative side, citations from publications that are not online are usually not included. There is also a bias depending on the scientific field, although there is work suggesting correction factors for this bias. The method further counts citations without distinguishing the quality of the citation (be it positive, negative, long, brief, etc.). And finally, counting citations primarily measures the popularity of a paper, which explains why successful (popular) scientists have lots of citations. Still, it appears that counting citations is currently the best way to assess publications with low effort. And it is a nice application of network theory.
Citations can also be used to assess journals: the more the publications in a journal are cited by others, the better the journal. If everybody tries to get their papers published in journals with high impact, i.e., many citations, the competition leads to a shortage of excellent publication venues. Interestingly, running a top journal does not require more effort than running one with lower impact. The self-organizing effect of authors competing for the 'best' journals puts these journals in the convenient situation of being able to pick the best papers - which in turn helps them keep their position. Regardless of the flaws of citation-based impact analysis, as long as it is used by so many people, you have to play along.
Finally, some tips you might have been waiting for:
How can you push your h-index (the largest number h such that h of your papers have each been cited at least h times) by maximizing your chances of getting cited?
- Make your publications available online (mine are here btw)
- Discuss your work with others
- Write good papers. Interesting comprehensive work is more likely to be cited.
- Avoid low-impact journals and conferences
- Publish in the language which is most common for your field of research. In most cases this is English.
- Add your paper as a reference to appropriate pages in social networks (e.g., Wikipedia)
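The h-index mentioned above is easy to compute yourself from a list of citation counts: sort the counts in descending order and find the largest rank h at which a paper still has at least h citations. A minimal sketch in Python (the function name and example numbers are my own, not from any particular tool):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still 'carries' the index
        else:
            break  # all later papers have even fewer citations
    return h

# Example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note that adding one more citation to an already highly cited paper does not change the index; only papers near the "diagonal" (citations roughly equal to rank) move it, which is why broad, consistently cited work pays off.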
- h-index. Wikipedia.
- Google Scholar citation count (took myself as example).
- J. E. Iglesias and C. Pecharromán. Scaling the h-index for different scientific ISI fields. Scientometrics, Vol. 73, No. 3, 2007.
- W. Elmenreich. Google Scholar, Citation Indices, and the University of Klagenfurt. TEWI-Blog, November 2011.