Research article
DOI: 10.1145/3188745.3188752

The adaptive complexity of maximizing a submodular function

Eric Balkanski and Yaron Singer

Published: 20 June 2018

ABSTRACT

In this paper we study the adaptive complexity of submodular optimization. Informally, the adaptive complexity of a problem is the minimal number of sequential rounds required to achieve a constant factor approximation when polynomially-many queries can be executed in parallel at each round. Adaptivity is a fundamental concept that is heavily studied in computer science, largely due to the need for parallelizing computation. Somewhat surprisingly, very little is known about adaptivity in submodular optimization. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, to the best of our knowledge, all that is known to date is that the adaptive complexity is between 1 and Ω(n).
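For intuition (this illustration is ours, not the paper's): the classical greedy algorithm is highly adaptive. Each of its k iterations chooses its queries based on all previous answers, so even with unlimited parallelism inside an iteration it needs k sequential rounds, and k can be as large as Ω(n). A minimal Python sketch with a toy coverage function:

```python
def greedy(f, ground, k):
    # Classical greedy: k sequential rounds. Within one round, the
    # marginal-gain queries for all remaining elements are mutually
    # independent and could run in parallel, but the next round's
    # queries depend on this round's answers -- hence k-adaptivity.
    S = []
    for _ in range(k):
        gains = {e: f(S + [e]) - f(S) for e in ground if e not in S}
        S.append(max(gains, key=gains.get))
    return S

# Toy monotone submodular function: coverage of a union of sets.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}}
cover = lambda S: len(set().union(*(sets[e] for e in S)))
```

With this toy instance and k = 2, greedy picks element 0 (gain 2) and then element 1 (gain 1), and each pick must wait for the previous one.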

Our main result in this paper is a tight characterization showing that the adaptive complexity of maximizing a monotone submodular function under a cardinality constraint is Θ(log n):

• We describe an algorithm which requires O(log n) sequential rounds and achieves an approximation arbitrarily close to 1/3;
• We show that no algorithm can achieve an approximation better than O(1/log n) with fewer than O(log n / log log n) rounds.

Thus, when allowing for parallelization, our algorithm achieves a constant factor approximation exponentially faster than any known existing algorithm for submodular maximization.
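To make the round/query model concrete, here is a minimal illustrative sketch (our own toy construction, not the paper's algorithm): in each of a few rounds the algorithm issues a large batch of value queries that do not depend on one another (random candidate blocks scored against the current solution), and only then commits to a block, so the number of sequential rounds, rather than the number of queries, is the bottleneck.

```python
import random

def low_adaptivity_max(f, ground, k, rounds, samples_per_round=50, seed=0):
    # Illustrative round structure only (not the paper's algorithm):
    # each round issues `samples_per_round` value queries that are
    # mutually independent, so they could all run in parallel; only
    # the choice of the NEXT round's queries depends on the answers.
    rng = random.Random(seed)
    solution, remaining = [], set(ground)
    block = max(1, k // rounds)
    for _ in range(rounds):
        if len(solution) >= k or not remaining:
            break
        base = f(solution)
        best_gain, best_block = -1, []
        for _ in range(samples_per_round):  # one parallel query batch
            cand = rng.sample(sorted(remaining), min(block, len(remaining)))
            gain = f(solution + cand) - base
            if gain > best_gain:
                best_gain, best_block = gain, cand
        solution += best_block
        remaining -= set(best_block)
    return solution[:k]
```

The paper's actual algorithm uses adaptive sampling with careful filtering of low-value elements to reach a 1/3-factor in O(log n) rounds; the sketch above only mirrors its batch-then-commit round structure.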

Importantly, the approximation algorithm is achieved via adaptive sampling and complements a recent line of work on optimization of functions learned from data. In many cases we do not know the functions we optimize and learn them from labeled samples. Recent results show that no algorithm can obtain a constant factor approximation guarantee using polynomially-many labeled samples as in the PAC and PMAC models, drawn from any distribution. Since learning with non-adaptive samples over any distribution results in a sharp impossibility, we consider learning with adaptive samples where the learner obtains poly(n) samples drawn from a distribution of her choice in every round. Our result implies that in the realizable case, where there is a true underlying function generating the data, Θ(log n) batches of adaptive samples are necessary and sufficient to approximately “learn to optimize” a monotone submodular function under a cardinality constraint.
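The interaction model in this learning setting can be sketched as a simple loop (an illustration of the model only; the function names and helpers here are our own): in each of r rounds the learner picks a sampling distribution, possibly depending on all labeled samples seen so far, receives a fresh batch of labeled samples (S, f(S)), and after the last round outputs a solution.

```python
import random

def adaptive_sample_learner(f, rounds, batch, choose_sampler, output, seed=0):
    # Skeleton of learning with adaptive samples (model illustration only):
    # `choose_sampler(history)` returns a function drawing one random set,
    # i.e. the distribution the learner picks for the next batch; samples
    # within a batch are independent and could be drawn in parallel.
    rng = random.Random(seed)
    history = []  # all labeled samples (S, f(S)) seen so far
    for _ in range(rounds):
        draw = choose_sampler(history)
        history.extend((S, f(S)) for S in (draw(rng) for _ in range(batch)))
    return output(history)
```

Non-adaptive learning is the special case where every batch's distribution is fixed up front; the paper's result says Θ(log n) adaptive batches are necessary and sufficient in the realizable case.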

Supplemental Material

7c-5.mp4 (mp4, 30.1 MB)
STOC ’18, June 25–29, 2018, Los Angeles, CA, USA. Eric Balkanski and Yaron Singer.

Published in

STOC 2018: Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing
June 2018, 1332 pages
ISBN: 9781450355599
DOI: 10.1145/3188745

      Copyright © 2018 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States



Acceptance Rates

Overall acceptance rate: 1,469 of 4,586 submissions (32%)
