ABSTRACT
Software engineering researchers solve problems of several different kinds. To do so, they produce several different kinds of results, and they should develop appropriate evidence to validate these results. They often report their research in conference papers. I analyzed the abstracts of research papers submitted to ICSE 2002 in order to identify the types of research reported in the submitted and accepted papers, and I observed the program committee discussions about which papers to accept. This report presents the research paradigms of the papers, common concerns of the program committee, and statistics on success rates. This information should help researchers design better research projects and write papers that present their results to best advantage.