Authors: Xiong Yingfei, Tian Yongqiang, Liu Yepang, Cheung Shing-Chi
Publisher: Springer Science and Business Media LLC
References (13 articles):
1. Wang Z, Yan M, Liu S, et al. Survey on testing deep learning neural networks (in Chinese). J Software, 2020, 31: 1255–1275
2. Huang X, Kroening D, Ruan W, et al. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Comput Sci Rev, 2020, 37: 100270
3. Ma L, Juefei-Xu F, Zhang F, et al. DeepGauge: multigranularity testing criteria for deep learning systems. In: Proceedings of ACM/IEEE International Conference on Automated Software Engineering, 2018. 120–131
4. Pei K, Cao Y, Yang J, et al. DeepXplore: automated white-box testing of deep learning systems. In: Proceedings of the ACM Symposium on Operating Systems Principles, 2017. 1–18
5. Raghunathan A, Xie S M, Yang F, et al. Adversarial training can hurt generalization. In: Proceedings of ICML Deep Phenomena, 2019
Cited by (4 articles):
1. Toward Understanding Deep Learning Framework Bugs. ACM Transactions on Software Engineering and Methodology, 2023-09-29
2. OrdinalFix: Fixing Compilation Errors via Shortest-Path CFL Reachability. In: Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2023-09-11
3. Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects. In: Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE), 2023-05
4. A Comprehensive Study of Real-World Bugs in Machine Learning Model Optimization. In: Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE), 2023-05