Abstract
Though computer scientists agree that conference publications enjoy greater status in computer science than in other disciplines, there is little quantitative evidence to support this view. The importance of journal publication in academic promotion makes it a highly personal issue, since focusing exclusively on journal papers misses many significant papers published by CS conferences.
Here, we aim to quantify the relative importance of CS journal and conference papers, showing that papers in leading CS conferences match the impact of papers in mid-ranking journals and surpass the impact of papers in journals in the bottom half of the Thomson Reuters rankings (http://www.isiknowledge.com), where impact is measured in terms of Google Scholar citations. We also show that the poor correlation between this measure and conference acceptance rates indicates that conference publication is an inefficient market, one in which venues that are equally challenging in terms of rejection rates offer quite different returns in terms of citations.
How to measure the quality of academic research and performance of particular researchers has always involved debate. Many CS researchers feel that performance assessment is an exercise in futility, in part because academic research cannot be boiled down to a set of simple performance metrics, and any attempt to introduce them would expose the entire research enterprise to manipulation and gaming. On the other hand, many researchers want some reasonable way to evaluate academic performance, arguing that even an imperfect system sheds light on research quality, helping funding agencies and tenure committees make more informed decisions.
One long-standing way of evaluating academic performance is through publication output. Best practice for academics is to write up key research contributions as scholarly articles for submission to relevant journals and conferences; the peer-review model has stood the test of time in determining the quality of accepted articles. However, today's culture of academic publication accommodates a range of publication opportunities yielding a continuum of quality, with a significant gap between the lower and upper reaches of the continuum; for example, journal papers are routinely viewed as superior to conference papers, which are generally considered superior to papers at workshops and local symposia. Several techniques are used for evaluating publications and publication outlets, mostly targeting journals. For example, Thomson Reuters (formerly the Institute for Scientific Information) and other such organizations record and assess the number of citations accumulated by leading journals (and some high-ranking conferences) in the ISI Web of Knowledge (http://www.isiknowledge.com) to compute the impact factor of a journal as a measure of its ability to attract citations. Less-reliable indicators of publication quality are also available for judging conference quality; for example, a conference's rejection rate is often cited as a quality indicator on the grounds that a high rejection rate implies a more selective review process and thus higher-quality accepted papers. However, as the devil is in the details, the details in this case vary among academic disciplines and subdisciplines.
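The impact-factor computation mentioned above follows a simple recipe: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch (the function name and the example figures are illustrative, not taken from the study):

```python
def two_year_impact_factor(cites_to_prev_two_years, papers_prev_two_years):
    """Standard two-year impact factor: citations received this year to
    papers published in the previous two years, divided by the number of
    citable items published in those two years."""
    if papers_prev_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return cites_to_prev_two_years / papers_prev_two_years

# Illustrative numbers: 210 citations in 2009 to papers from 2007-2008,
# across 120 papers published in those two years.
impact = two_year_impact_factor(210, 120)  # 1.75
```

The same arithmetic applies to a conference series, which is essentially how a citation-based score can be extended beyond journals.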
Here, we examine the issue of publication quality from a CS/engineering perspective, describing how related publication practices differ from those of other disciplines, in that CS/engineering research is mainly published in conferences rather than in journals. This culture presents an important challenge when evaluating CS research because traditional impact metrics are better suited to evaluating journal rather than conference publications.
In order to legitimize the role of conference papers to the wider scientific community, we offer an impact measure based on an analysis of Google Scholar citation data suited to CS conferences. We validate this new measure with a large-scale experiment covering 8,764 conference and journal papers to demonstrate a strong correlation between traditional journal impact and our new citation score. The results highlight how leading conferences compare favorably to mid-ranking journals, surpassing the impact of journals in the bottom half of the traditional ISI Web of Knowledge ranking. We also discuss a number of interesting anomalies in the CS conference circuit, highlighting how conferences with similar rejection rates (the traditional way of evaluating conferences) can attract quite different citation counts. We also note interesting geographical distinctions in this regard, particularly with respect to European and U.S. conferences.
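The "inefficient market" claim rests on measuring how weakly rejection rates track citation returns. A minimal sketch of such a check, using the Pearson correlation coefficient over per-venue figures (the five data points below are hypothetical, purely to show the shape of the computation):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Hypothetical per-conference data: rejection rate vs. mean citations
# per paper. Numbers are invented for illustration, not from the study.
rejection_rates = [0.85, 0.80, 0.75, 0.70, 0.60]
mean_citations = [12.0, 30.0, 8.0, 25.0, 10.0]
r = pearson(rejection_rates, mean_citations)
```

A value of `r` near zero for data like this would illustrate the anomaly described above: venues with similar rejection rates attracting quite different citation counts.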