
Relative status of journal and conference publications in computer science

Published: 01 November 2010

Abstract

Though computer scientists agree that conference publications enjoy greater status in computer science than in other disciplines, there is little quantitative evidence to support this view. The importance of journal publication in academic promotion makes it a highly personal issue, since focusing exclusively on journal papers misses many significant papers published at CS conferences.

Here, we aim to quantify the relative importance of CS journal and conference papers, showing that CS papers in leading conferences match the impact of papers in mid-ranking journals and surpass the impact of papers in journals in the bottom half of the Thomson Reuters rankings (http://www.isiknowledge.com), with impact measured in terms of citations in Google Scholar. We also show that the poor correlation between this measure and conference acceptance rates indicates that conference publication is an inefficient market, in which venues that are equally challenging in terms of rejection rate offer quite different returns in terms of citations.

How to measure the quality of academic research and the performance of individual researchers has long been a matter of debate. Many CS researchers feel that performance assessment is an exercise in futility, in part because academic research cannot be boiled down to a set of simple performance metrics, and any attempt to introduce them would expose the entire research enterprise to manipulation and gaming. On the other hand, many researchers want some reasonable way to evaluate academic performance, arguing that even an imperfect system sheds light on research quality, helping funding agencies and tenure committees make more informed decisions.

One long-standing way of evaluating academic performance is through publication output. Best practice for academics is to write up key research contributions as scholarly articles for submission to relevant journals and conferences; the peer-review model has stood the test of time in determining the quality of accepted articles. However, today's culture of academic publication accommodates a range of publication opportunities yielding a continuum of quality, with a significant gap between the lower and upper reaches of the continuum; for example, journal papers are routinely viewed as superior to conference papers, which are generally considered superior to papers at workshops and local symposia.

Several techniques are used for evaluating publications and publication outlets, most of them targeting journals. For example, Thomson Reuters (formerly the Institute for Scientific Information) and other such organizations record and assess the number of citations accumulated by leading journals (and some high-ranking conferences) in the ISI Web of Knowledge (http://www.isiknowledge.com) to compute the impact factor of a journal as a measure of its ability to attract citations. Less-reliable indicators are available for judging conference quality; for example, a conference's rejection rate is often cited as a quality indicator on the grounds that a more selective review process should yield higher-quality accepted papers. However, the devil is in the details, and the details in this case vary among academic disciplines and subdisciplines.
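To make the two indicators just mentioned concrete, the following sketch computes a standard two-year journal impact factor and a conference rejection rate. It is illustrative only and is not taken from the article; all figures and function names are hypothetical.

```python
# Illustrative sketch of the two indicators discussed above; the numbers
# below are made up and are not from the article.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received in a given year to items
    published in the preceding two years, divided by the number of citable
    items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

def rejection_rate(submissions: int, accepted: int) -> float:
    """Fraction of submitted papers that were rejected."""
    return 1.0 - accepted / submissions

# A journal that received 450 citations in 2010 to the 180 articles it
# published in 2008-2009 has an impact factor of 2.5.
print(impact_factor(450, 180))   # 2.5

# A conference that accepted 75 of 300 submissions has a 75% rejection rate
# (equivalently, a 25% acceptance rate).
print(rejection_rate(300, 75))   # 0.75
```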

Here, we examine the issue of publication quality from a CS/engineering perspective, describing how publication practices in these fields differ from those of other disciplines, in that CS/engineering research is published mainly in conferences rather than in journals. This culture presents an important challenge when evaluating CS research, because traditional impact metrics are better suited to evaluating journal publications than conference publications.

In order to legitimize the role of conference papers in the eyes of the wider scientific community, we offer an impact measure suited to CS conferences, based on an analysis of Google Scholar citation data. We validate this new measure with a large-scale experiment covering 8,764 conference and journal papers, demonstrating a strong correlation between traditional journal impact and our new citation score. The results highlight how leading conferences compare favorably to mid-ranking journals, surpassing the impact of journals in the bottom half of the traditional ISI Web of Knowledge ranking. We also discuss a number of interesting anomalies in the CS conference circuit, highlighting how conferences with similar rejection rates (the traditional way of evaluating conferences) can attract quite different citation counts. We also note interesting geographical distinctions in this regard, particularly between European and U.S. conferences.
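As a rough illustration of the kind of analysis described above, the sketch below scores each venue by its mean Google Scholar citations per paper and uses Spearman rank correlation to compare that score against journal impact factors and conference rejection rates. Treating the measure as mean citations per paper is a simplifying assumption (the article develops its measure in detail later), and all venue data and numbers here are hypothetical.

```python
from statistics import mean

def venue_citation_score(citation_counts):
    """Mean Google Scholar citations per paper for one venue
    (an assumed, simplified form of the article's citation-based measure)."""
    return mean(citation_counts)

def spearman_rho(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks
    (ties are not handled in this sketch)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-venue data: citation scores for five journals and five
# conferences, the journals' ISI impact factors, and the conferences'
# rejection rates.
journal_scores = [12.1, 8.4, 5.2, 3.3, 1.9]
journal_impact_factors = [4.1, 2.8, 1.9, 1.1, 0.6]
conference_scores = [10.5, 9.8, 4.1, 3.9, 1.2]
conference_rejection_rates = [0.85, 0.60, 0.82, 0.55, 0.78]

# A strong correlation in the first case would validate the citation score
# against journal impact; a weaker one in the second case would illustrate
# the "inefficient market" point about rejection rates.
print(spearman_rho(journal_scores, journal_impact_factors))
print(spearman_rho(conference_scores, conference_rejection_rates))
```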



Published in: Communications of the ACM, Volume 53, Issue 11 (November 2010), 112 pages. ISSN: 0001-0782; EISSN: 1557-7317. DOI: 10.1145/1839676. Copyright © 2010 ACM.


Publisher: Association for Computing Machinery, New York, NY, United States.
