Affiliation:
1. Department of Computing, Imperial College London, UK
Abstract
We introduce an efficient and tight layer-based semidefinite relaxation for verifying local robustness of neural networks. The improved tightness results from combining semidefinite relaxations with linear cuts. We obtain a computationally efficient method by decomposing the semidefinite formulation into layerwise constraints. By leveraging chordal graph decompositions, we show that the presented formulation is provably tighter than current approaches. Experiments on a set of benchmark networks show that the proposed approach verifies more instances than other relaxation methods. The results also demonstrate that the proposed SDP relaxation is one order of magnitude faster than previous SDP methods.
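As background, the kind of per-layer semidefinite relaxation the abstract refers to can be sketched as follows. This is the standard SDP relaxation of a ReLU layer in the style of prior SDP verification work, not the paper's exact layerwise formulation (which additionally adds linear cuts and exploits chordal sparsity); the symbols $W$, $b$, $x$, $y$, $v$, and $P$ are illustrative.

```latex
% A ReLU layer y = max(0, Wx + b) is exactly characterized by three
% quadratic/linear conditions (elementwise):
\[
  y \ge 0, \qquad y \ge Wx + b, \qquad y \odot (y - Wx - b) = 0 .
\]
% Lifting v = (1, x^T, y^T)^T and P = v v^T makes the nonconvex
% complementarity condition linear in the entries of P; relaxing the
% rank-one constraint P = v v^T to
\[
  P \succeq 0, \qquad P_{11} = 1,
\]
% yields a semidefinite program. Imposing such a constraint per layer,
% rather than one large SDP over all layers jointly, is what enables a
% layerwise, chordally decomposable formulation.
```

Linear cuts then tighten this relaxation by adding valid inequalities (e.g., triangle-style bounds from interval information on $x$) that the PSD constraint alone does not imply.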
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
9 articles.
1. Chordal sparsity for SDP-based neural network verification;Automatica;2024-03
2. VeRe: Verification Guided Synthesis for Repairing Deep Neural Networks;Proceedings of the IEEE/ACM 46th International Conference on Software Engineering;2024-02-06
3. On the Verification of Embeddings with Hybrid Markov Logic;2023 IEEE International Conference on Data Mining (ICDM);2023-12-01
4. Expediting Neural Network Verification via Network Reduction;2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE);2023-09-11
5. Verification-friendly Networks: the Case for Parametric ReLUs;2023 International Joint Conference on Neural Networks (IJCNN);2023-06-18