Affiliation:
1. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
2. Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore, Singapore
3. Key Laboratory of Collaborative Intelligence Systems, Ministry of Education, Xidian University, Xi'an, China
Abstract
Network embedding (NE) aims to learn the latent properties of complex networks in a low-dimensional feature space. However, existing deep learning-based NE methods are time-consuming because they must train a dense deep neural network architecture with a large number of unknown weight parameters. This work proposes a sparse deep autoencoder (SPDNE) for dynamic NE that learns network structures while preserving node evolution at a low computational cost. SPDNE replaces the fully connected architecture of the deep autoencoder with an optimal sparse architecture while maintaining the performance of the model on dynamic NE. An adaptive simulated annealing algorithm is then proposed to find this optimal sparse architecture for the deep autoencoder. The performance of SPDNE combined with three dynamic NE models (i.e. a sparse-architecture-based deep autoencoder, DynGEM, and ElvDNE) is evaluated on three well-known benchmark networks and five real-world networks. The experimental results show that SPDNE removes about 70% of the weight parameters of the deep autoencoder architecture during training while preserving the performance of these dynamic NE models. The results also show that SPDNE achieves the highest accuracy on 72 of 96 edge-prediction and network-reconstruction tasks compared with state-of-the-art dynamic NE algorithms.
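The core idea described above, replacing a dense autoencoder with one whose weight matrices are mostly zeroed out, can be illustrated with a minimal sketch. This is not the authors' implementation: the masks here are drawn at random (whereas SPDNE searches for an optimal sparse architecture), the network sizes and the ~70% removal rate are illustrative, and all names (`forward`, `mask_enc`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, hidden = 64, 16
adjacency = rng.random((n_nodes, n_nodes))  # stand-in for a network's adjacency matrix

# Dense encoder/decoder weights, as in a fully connected deep autoencoder.
W_enc = rng.standard_normal((n_nodes, hidden))
W_dec = rng.standard_normal((hidden, n_nodes))

# Binary masks that keep only ~30% of the weights, mimicking the roughly 70%
# parameter reduction the abstract reports (here the masks are random,
# not the optimised architecture SPDNE would find).
density = 0.3
mask_enc = rng.random(W_enc.shape) < density
mask_dec = rng.random(W_dec.shape) < density

def forward(x):
    """Autoencoder forward pass with sparsity enforced by the masks."""
    h = np.tanh(x @ (W_enc * mask_enc))        # sparse encoder
    return np.tanh(h @ (W_dec * mask_dec))     # sparse decoder

recon = forward(adjacency)
kept = mask_enc.sum() + mask_dec.sum()
total = W_enc.size + W_dec.size
removed_fraction = 1 - kept / total            # close to 0.7 by construction
```

In a real training loop the same masks would be applied after every gradient update, so only the unmasked ~30% of parameters are ever learned.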
Publisher
Institution of Engineering and Technology (IET)