Abstract
Motivation
In biology and medicine, both interpretability and accuracy are important when designing predictive models. The interpretability of many machine learning models, such as neural networks, remains a challenge. Recently, many researchers have incorporated prior information such as biological pathways into neural-network-based bioinformatics methods, so that the prior knowledge can provide insight and interpretability for the models. However, prior biological knowledge may be incomplete, and unknown information remains to be explored.

Results
We proposed a novel method, named PathExpSurv, to gain insight into the black-box neural network model for cancer survival analysis. We demonstrated that PathExpSurv could not only incorporate known prior information into the model, but also explore possible expansions of the existing pathways. We performed downstream analyses based on the expanded pathways and successfully identified key genes associated with the diseases and the original pathways.

Availability
Python source code of PathExpSurv is freely available at https://github.com/Wu-Lab/PathExpSurv.

Contact
lywu@amss.ac.cn

Supplementary information
Supplementary data are available at Bioinformatics online.
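To make the idea of pathway-informed survival networks concrete, the following is a minimal conceptual sketch, not the authors' implementation: it assumes a PyTorch-style model in which gene-to-pathway weights are multiplied by a binary pathway-membership mask (so each pathway node only receives its member genes), trained with the standard Cox partial likelihood used in neural survival analysis. The names PathwayMaskedNet and cox_partial_likelihood_loss are hypothetical; expanding pathways would correspond to relaxing entries of the mask during training.

```python
import torch
import torch.nn as nn

class PathwayMaskedNet(nn.Module):
    """Hypothetical pathway-masked survival network (illustrative only)."""
    def __init__(self, mask: torch.Tensor):
        # mask: (n_genes, n_pathways) binary gene-pathway membership matrix
        super().__init__()
        self.register_buffer("mask", mask.float())
        self.gene_to_pathway = nn.Parameter(torch.randn_like(mask.float()) * 0.01)
        self.risk = nn.Linear(mask.shape[1], 1, bias=False)  # pathway scores -> log-risk

    def forward(self, x):
        # Zero out weights of genes outside each pathway; the mask could be
        # relaxed here to let the model explore pathway expansions.
        pathway_score = x @ (self.gene_to_pathway * self.mask)
        return self.risk(torch.tanh(pathway_score)).squeeze(-1)

def cox_partial_likelihood_loss(log_risk, time, event):
    """Negative Cox partial log-likelihood (Breslow approximation for ties)."""
    order = torch.argsort(time, descending=True)          # latest event times first
    log_risk, event = log_risk[order], event[order]
    log_cum_hazard = torch.logcumsumexp(log_risk, dim=0)  # log of risk-set sums
    return -((log_risk - log_cum_hazard) * event).sum() / event.sum().clamp(min=1)
```

In this sketch, interpretability comes from the fact that each hidden unit corresponds to a named pathway, so its learned score can be read as a pathway-level risk contribution.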