A memristive all-inclusive hypernetwork for parallel analog deployment of full search space architectures
Published: 2024-07
Volume: 175
Page: 106312
ISSN: 0893-6080
Container-title: Neural Networks
Short-container-title: Neural Networks
Language: en
Authors: Lyu Bo (ORCID), Yang Yin, Cao Yuting, Shi Tuo, Chen Yiran, Huang Tingwen, Wen Shiping (ORCID)
References: 59 articles.
1. Ahn, J., Hong, S., Yoo, S., Mutlu, O., & Choi, K. (2015). A scalable processing-in-memory accelerator for parallel graph processing. In Proceedings of the 42nd annual international symposium on computer architecture (pp. 105–117).
2. Boroumand, A., Ghose, S., Kim, Y., Ausavarungnirun, R., Shiu, E., Thakur, R., et al. (2018). Google workloads for consumer devices: Mitigating data movement bottlenecks. In Proceedings of the twenty-third international conference on architectural support for programming languages and operating systems (pp. 316–331).
3. Chen (2014). Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning. ACM SIGARCH Computer Architecture News.
4. Chen, Y., Luo, T., Liu, S., Zhang, S., He, L., Wang, J., et al. (2014). Dadiannao: A machine-learning supercomputer. In 47th annual IEEE/ACM international symposium on microarchitecture (pp. 609–622).
5. Chen, X., Xie, L., Wu, J., & Tian, Q. (2019). Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1294–1303).