Abstract
We study a new technique for addressing a fundamental challenge in nanophotonic design: fast and accurate characterization of nanoscale photonic devices with minimal human intervention. Much like the fusion of Artificial Intelligence and Electronic Design Automation (EDA), many efforts have been made to apply deep neural networks (DNNs), such as convolutional neural networks, to prototype and characterize next-generation optoelectronic devices commonly found in Photonic Integrated Circuits. However, state-of-the-art DNN models are still far from directly applicable in the real world: for example, DNN-predicted correlation coefficients between target and predicted physical quantities are about 80%, which is far below what is required for reliable and reproducible nanophotonic designs. Recently, attention-based Transformer models have attracted extensive interest and have been widely used in Computer Vision and Natural Language Processing. In this work, we propose, for the first time, a Transformer model (POViT) to efficiently design and simulate photonic crystal nanocavities with multiple objectives under consideration. Unlike the standard Vision Transformer, our model takes photonic crystals as input data and replaces the GELU activation with an absolute-value function. Extensive experiments show that POViT significantly improves on results reported by previous models: correlation coefficients are increased by over 12% (i.e., to 92.0%) and prediction errors are reduced by an order of magnitude, among other key metric improvements. Our work has the potential to drive the expansion of EDA toward fully automated photonic design automation (PDA). The complete dataset and code will be released to promote research in the interdisciplinary field of materials science/physics and computer science.
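As a rough illustration of the activation swap described in the abstract, the sketch below shows a standard ViT-style feed-forward block with GELU replaced by an absolute-value activation. The module names, layer sizes, and structure are assumptions for illustration only and do not reflect the authors' released code.

```python
import torch
import torch.nn as nn


class AbsActivation(nn.Module):
    """Absolute-value activation used in place of GELU (illustrative)."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.abs(x)


class TransformerMLP(nn.Module):
    """Feed-forward block of a ViT encoder layer with the activation swapped.

    Hypothetical sketch: Linear -> activation -> Linear, following the
    standard ViT MLP layout rather than the POViT implementation.
    """

    def __init__(self, dim: int = 768, hidden_dim: int = 3072):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.act = AbsActivation()  # a standard ViT would use nn.GELU() here
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.act(self.fc1(x)))


# Example usage on a dummy batch of patch embeddings (batch, tokens, dim).
if __name__ == "__main__":
    block = TransformerMLP()
    tokens = torch.randn(2, 196, 768)
    print(block(tokens).shape)  # torch.Size([2, 196, 768])
```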
Funder
National Natural Science Foundation of China
Shenzhen Fundamental Research Fund
Shenzhen Key Laboratory Project
Longgang Key Laboratory Project
Longgang Matching Support Fund
President’s Fund
Optical Communication Core Chip Research Platform
Shenzhen Science and Technology Program
Guangdong Basic and Applied Basic Research Foundation
Shenzhen Research Institute of Big Data
Subject
General Materials Science, General Chemical Engineering
References
47 articles.
Cited by
7 articles.