1. Ahmad S, Hawkins J (2015) Properties of sparse distributed representations and their application to hierarchical temporal memory. CoRR, arXiv:1503.07469
2. Altman N, Krzywinski M (2018) The curse(s) of dimensionality. Nat Methods 15(6):399–400
3. Baevski A, Babu A, Hsu W-N, Auli M (2023) Efficient self-supervised learning with contextualized target representations for vision, speech and language. International conference on machine learning (pp. 1416–1429)
4. Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Amodei D (2020) Language models are few-shot learners. In: Larochelle H, Ranzato M, Hadsell R, Balcan M, Lin H (eds) Advances in neural information processing systems 33: Annual conference on neural information processing systems 2020, NeurIPS 2020, December 6-12, 2020, virtual
5. Fischer A, Igel C (2014) Training restricted Boltzmann machines: an introduction. Pattern Recognit 47(1):25–39. https://doi.org/10.1016/j.patcog.2013.05.025