[1] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” Proc. 27th International Conference on Machine Learning, pp.399-406, Omnipress, 2010.
[2] J.R. Hershey, J. Le Roux, and F. Weninger, “Deep unfolding: Model-based inspiration of novel deep architectures,” arXiv preprint arXiv:1409.2574, 2014. doi: 10.48550/arXiv.1409.2574
[3] P. Sprechmann, A.M. Bronstein, and G. Sapiro, “Learning efficient sparse and low rank models,” IEEE Trans. Pattern Anal. Mach. Intell., vol.37, no.9, pp.1821-1833, 2015. doi: 10.1109/TPAMI.2015.2392779
[4] B. Xin, Y. Wang, W. Gao, D. Wipf, and B. Wang, “Maximal sparsity with deep networks?,” Advances in Neural Information Processing Systems, pp.4340-4348, 2016.
[5] M. Borgerding, P. Schniter, and S. Rangan, “AMP-inspired deep networks for sparse linear inverse problems,” IEEE Trans. Signal Process., vol.65, no.16, pp.4293-4308, 2017. doi: 10.1109/TSP.2017.2708040