Affiliation:
1. Northwest University, USA
2. Monash University, Australia
3. Carnegie Mellon University, USA
4. Qilu University of Technology (Shandong Academy of Sciences), China
Abstract
Deep learning has made substantial breakthroughs in many fields due to its powerful automatic representation capabilities. Neural architecture design has been proven crucial to the feature representation of data and to final performance. However, neural architecture design relies heavily on researchers' prior knowledge and experience, and the limits of inherent human knowledge make it difficult for people to break out of their established thinking paradigms and design an optimal model. An intuitive idea, therefore, is to reduce human intervention as much as possible and let an algorithm design the neural architecture automatically.
Neural Architecture Search (NAS) is just such a revolutionary algorithm, and the related research work is complex and rich, so a comprehensive and systematic survey of NAS is essential. Previous surveys have classified existing work mainly by the key components of NAS: search space, search strategy, and evaluation strategy. While this classification is intuitive, it makes it difficult for readers to grasp the underlying challenges and the landmark work involved. In this survey, we therefore provide a new perspective: we begin with an overview of the characteristics of the earliest NAS algorithms, summarize the problems in those early algorithms, and then present the solutions offered by subsequent research. In addition, we conduct a detailed and comprehensive analysis, comparison, and summary of these works. Finally, we suggest some possible directions for future research.
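The three NAS components named above can be made concrete with a minimal sketch. The search space, the scoring function, and the random-search loop below are all hypothetical stand-ins (a real evaluation strategy would train each candidate network and return its validation accuracy), assumed only for illustration:

```python
import random

# Hypothetical toy search space: each key is an architectural choice.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(space):
    """Search strategy: sample one architecture uniformly at random."""
    return {name: random.choice(options) for name, options in space.items()}

def evaluate(arch):
    """Evaluation strategy: placeholder proxy score. A real NAS system
    would train the candidate network and measure validation accuracy."""
    return arch["num_layers"] * arch["width"]  # illustrative only

def random_search(space, trials=10, seed=0):
    """Run the NAS loop: sample candidates, evaluate, keep the best."""
    random.seed(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(space)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search(SEARCH_SPACE)
print(best, score)
```

Random search is only the simplest possible search strategy; the survey's taxonomy covers far more sophisticated ones (reinforcement learning, evolutionary algorithms, gradient-based methods), but all of them instantiate these same three components.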
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science
References: 162 articles.
1. S. Hochreiter and J. Schmidhuber. 1997. Long short-term memory. Neural Computation 9, 8, 1735–1780.
2. M. X. Chen, O. Firat, A. Bapna, M. Johnson, W. Macherey, G. Foster, L. Jones, N. Parmar, M. Schuster, Z. Chen, Y. Wu, and M. Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. arXiv:1804.09849. Retrieved from https://arxiv.org/pdf/1804.09849.pdf.
3. K. Simonyan and A. Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In ICLR.
Cited by
338 articles.