DOI: 10.1145/3295500.3356202
Research article

Scalable reinforcement-learning-based neural architecture search for cancer deep learning research

Published: 17 November 2019

ABSTRACT

Cancer is a complex disease, the understanding and treatment of which are being aided through increases in the volume of collected data and in the scale of deployed computing power. Consequently, there is a growing need for the development of data-driven and, in particular, deep learning methods for various tasks such as cancer diagnosis, detection, prognosis, and prediction. Despite recent successes, however, designing high-performing deep learning models for non-image and non-text cancer data is a time-consuming, trial-and-error, manual task that requires both cancer-domain and deep learning expertise. To that end, we develop a reinforcement-learning-based neural architecture search to automate deep-learning-based predictive model development for a class of representative cancer data. We develop custom building blocks that allow domain experts to incorporate characteristics specific to cancer data. We show that our approach discovers deep neural network architectures that have significantly fewer trainable parameters, shorter training time, and accuracy similar to or higher than those of manually designed architectures. We study and demonstrate the scalability of our approach on up to 1,024 Intel Knights Landing nodes of the Theta supercomputer at the Argonne Leadership Computing Facility.
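To make the core idea concrete, below is a minimal REINFORCE-style sketch of policy-gradient architecture search in plain Python with NumPy. The toy search space, the per-decision softmax policy (a stand-in for the recurrent controller typical of RL-based NAS), and the surrogate reward are all illustrative assumptions rather than the authors' implementation; in the paper, the reward for a sampled architecture is the validation performance of the decoded network trained on cancer data, and many such evaluations run asynchronously across the Theta nodes.

```python
# Minimal sketch of policy-gradient (REINFORCE) architecture search.
# The search space, policy parameterization, and reward below are toy
# assumptions for illustration; they are not the paper's implementation.
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy search space: for each of 3 layers, choose a width and an activation.
WIDTHS = [16, 64, 256]
ACTIVATIONS = ["relu", "tanh", "sigmoid"]
N_LAYERS = 3
CHOICES = [len(WIDTHS), len(ACTIVATIONS)] * N_LAYERS  # size of each decision

# Policy: independent softmax logits per decision slot (a stand-in for
# the recurrent controller used in RL-based NAS).
logits = [np.zeros(k) for k in CHOICES]

def sample_architecture():
    """Sample one action per decision slot from the current policy."""
    probs = [np.exp(v - v.max()) / np.exp(v - v.max()).sum() for v in logits]
    actions = [rng.choice(len(p), p=p) for p in probs]
    return actions, probs

def reward(actions):
    """Surrogate reward (hypothetical). In a real search this would
    train the decoded network and return its validation accuracy."""
    score = 0.0
    for i in range(N_LAYERS):
        score += float(WIDTHS[actions[2 * i]] == 64)
        score += float(ACTIVATIONS[actions[2 * i + 1]] == "relu")
    return score / (2 * N_LAYERS)

LR, baseline = 0.5, 0.0
for step in range(200):
    actions, probs = sample_architecture()
    r = reward(actions)
    baseline = 0.9 * baseline + 0.1 * r   # moving-average reward baseline
    advantage = r - baseline
    for a, p, v in zip(actions, probs, logits):
        grad = -p                          # d log softmax / d logits = e_a - p
        grad[a] += 1.0
        v += LR * advantage * grad         # REINFORCE ascent step, in place

best, _ = sample_architecture()
print([(WIDTHS[best[2 * i]], ACTIVATIONS[best[2 * i + 1]])
       for i in range(N_LAYERS)])
```

In an actual search, reward() would decode the sampled actions into a trainable network, fit it briefly on the benchmark data, and return validation accuracy; decoupling the agent's policy updates from these expensive evaluations is what allows the search to scale across many nodes.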

Published in

SC '19: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis
November 2019, 1,921 pages
ISBN: 9781450362290
DOI: 10.1145/3295500

            Copyright © 2019 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

            Publisher

            Association for Computing Machinery

            New York, NY, United States

Acceptance Rates

Overall acceptance rate: 1,516 of 6,373 submissions, 24%
