DOI: 10.1145/1808984.1808987

Towards pro-active adaptation with confidence: augmenting service monitoring with online testing

Published: 3 May 2010

ABSTRACT

Service-based applications need to operate in a highly dynamic and distributed world. As those applications are composed of individual services, they have to react to failures of those services to ensure that they maintain their expected functionality and quality. Self-adaptation is one solution to this problem, as it allows applications to autonomously react to failures. Currently, monitoring is typically used to identify failures and thus to trigger adaptation. However, monitoring only observes failures after they have occurred, which means that adaptation based on monitoring is reactive. This can lead to shortcomings such as user dissatisfaction, increased execution times, and late responses to critical events. Pro-active adaptation addresses those shortcomings: the application detects the need for adaptation in advance and can thus adapt before a failure occurs. However, it is important to avoid unnecessary pro-active adaptations, as they can lead to severe drawbacks, such as increased costs or follow-up failures. When taking pro-active adaptation decisions, it is therefore key to have confidence in the predicted failures, i.e., pro-active adaptation should only be performed if there is sufficient certainty that the failure would in fact occur. To avoid unnecessary adaptations, we introduce an approach that augments service monitoring with online testing to produce failure predictions with confidence. We demonstrate the applicability of our approach using a scenario from the eGovernment domain.
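The abstract describes the mechanism only at a high level, and the paper's actual prediction machinery is not reproduced on this page. Purely as an illustration of the principle, pooling monitored invocations with additional online test invocations so that a failure estimate carries a measurable confidence before adaptation is triggered, here is a minimal Python sketch. Everything in it is an assumption made for illustration: the names (ServiceEvidence, should_adapt), the thresholds, and the normal-approximation confidence measure are not taken from the paper.

import math
import random
from dataclasses import dataclass


@dataclass
class ServiceEvidence:
    """Pooled pass/fail outcomes for one service, from monitoring and online tests."""
    invocations: int = 0    # regular invocations observed by the monitor
    failures: int = 0       # failures among those invocations
    tests: int = 0          # dedicated online test invocations
    test_failures: int = 0  # failures among the online tests

    def record_monitoring(self, failed: bool) -> None:
        self.invocations += 1
        self.failures += int(failed)

    def record_online_test(self, failed: bool) -> None:
        self.tests += 1
        self.test_failures += int(failed)

    def failure_rate(self) -> float:
        """Point estimate of the failure probability from all pooled samples."""
        n = self.invocations + self.tests
        return (self.failures + self.test_failures) / n if n else 0.0

    def confidence(self) -> float:
        """Crude confidence in the estimate: 1 minus the width of a 95%
        normal-approximation confidence interval. More samples give a
        narrower interval and hence higher confidence. (Illustrative
        choice, not the paper's measure.)"""
        n = self.invocations + self.tests
        if n == 0:
            return 0.0
        p = self.failure_rate()
        half_width = 1.96 * math.sqrt(p * (1.0 - p) / n)
        return max(0.0, 1.0 - 2.0 * half_width)


def should_adapt(ev: ServiceEvidence,
                 rate_threshold: float = 0.2,
                 confidence_threshold: float = 0.8) -> bool:
    """Trigger pro-active adaptation only if a failure is predicted AND
    the prediction is backed by enough evidence."""
    return (ev.failure_rate() >= rate_threshold
            and ev.confidence() >= confidence_threshold)


if __name__ == "__main__":
    random.seed(1)
    ev = ServiceEvidence()
    # A few monitored invocations of a (simulated) degrading service.
    for _ in range(5):
        ev.record_monitoring(failed=random.random() < 0.3)
    # Monitoring alone leaves the estimate shaky; schedule online tests
    # until the confidence threshold is met or a test budget is exhausted.
    while ev.confidence() < 0.8 and ev.tests < 100:
        ev.record_online_test(failed=random.random() < 0.3)
    print(f"failure rate={ev.failure_rate():.2f}, "
          f"confidence={ev.confidence():.2f}, adapt={should_adapt(ev)}")

Under this sketch, online tests act as an on-demand source of extra samples: when monitoring alone leaves the failure estimate too uncertain, the application buys additional evidence before committing to a costly adaptation, matching the abstract's requirement that pro-active adaptation only fire on predictions held with confidence.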

Published in

SEAMS '10: Proceedings of the 2010 ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems
May 2010, 146 pages
ISBN: 9781605589718
DOI: 10.1145/1808984
General Chair: Rogério de Lemos; Program Chair: Mauro Pezzè

      Copyright © 2010 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States




Acceptance Rates

Overall acceptance rate: 17 of 31 submissions, 55%
