Research Article | Open Access
DOI: 10.1145/2517349.2522731

Consistency-based service level agreements for cloud storage

Published: 03 November 2013

ABSTRACT

Choosing a cloud storage system and specific operations for reading and writing data requires developers to make decisions that trade off consistency for availability and performance. Applications may be locked into a choice that is not ideal for all clients and changing conditions. Pileus is a replicated key-value store that allows applications to declare their consistency and latency priorities via consistency-based service level agreements (SLAs). It dynamically selects which servers to access in order to deliver the best service given the current configuration and system conditions. In application-specific SLAs, developers can request both strong and eventual consistency as well as intermediate guarantees such as read-my-writes. Evaluations running on a worldwide test bed with geo-replicated data show that the system adapts to varying client-server latencies to provide service that matches or exceeds the best static consistency choice and server selection scheme.
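To make the SLA idea concrete, the following is a minimal Python sketch of a ranked consistency/latency SLA and the kind of selection logic the abstract describes. The names, types, and selection rule are illustrative assumptions drawn only from the abstract, not the actual Pileus API.

# Hypothetical sketch, not the Pileus API: an SLA is an ordered list of
# acceptable (consistency, latency, utility) entries; the client library
# picks the highest-utility entry it predicts some reachable server can meet.
from dataclasses import dataclass

@dataclass
class SubSla:
    consistency: str   # e.g. "strong", "read-my-writes", "eventual"
    latency_ms: int    # maximum acceptable read latency for this entry
    utility: float     # value to the application if this entry is satisfied

# Example SLA: prefer strong reads within 150 ms, fall back to
# read-my-writes within 200 ms, and accept eventual reads otherwise.
shopping_cart_sla = [
    SubSla("strong", 150, 1.0),
    SubSla("read-my-writes", 200, 0.75),
    SubSla("eventual", 500, 0.25),
]

def choose_subsla(sla, predicted_latency_ms):
    """Return the highest-priority entry whose consistency level the client
    predicts a reachable server can satisfy within the latency bound.
    predicted_latency_ms maps consistency level -> estimated latency (ms)."""
    for entry in sla:                      # entries are listed in priority order
        if predicted_latency_ms.get(entry.consistency, float("inf")) <= entry.latency_ms:
            return entry
    return sla[-1]                         # last entry acts as the catch-all choice

# Usage: if the primary answers strong reads only in ~320 ms but a closer
# replica can serve read-my-writes in ~180 ms, the second entry is chosen.
print(choose_subsla(shopping_cart_sla,
                    {"strong": 320, "read-my-writes": 180, "eventual": 40}))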


Supplemental Material

d2-07-douglas-terry.mp4 (MP4, 1.3 GB)


Published in

SOSP '13: Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles
November 2013, 498 pages
ISBN: 9781450323888
DOI: 10.1145/2517349

Copyright © 2013 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 131 of 716 submissions, 18%
