Affiliation:
1. Department of Electrical, Electronic and Communication Engineering, Kindai University, 3-4-1 Kowakae, Higashi-Osaka City, Osaka 577-8502, Japan
2. Department of Electronic and Computer Engineering, Ritsumeikan University, 1-1-1 Noji-Higashi, Kusatsu, Shiga 525-8577, Japan
3. Research Institute for Nanodevice and Bio Systems (RNBS), Hiroshima University, 1-4-2 Kagamiyama, Higashi-Hiroshima 739-8527, Japan
Abstract
Many multimedia applications, including digital image compression, video compression, and audio processing, are now implemented on mobile devices. Furthermore, Artificial Intelligence (AI) processing has grown in popularity, requiring mobile devices to process large amounts of data. The processing core of a mobile device therefore requires high performance, programmability, and versatility. Multimedia applications for mobile devices typically consist of repeated arithmetic and table-lookup coding operations. A Content Addressable Memory-based massive-parallel SIMD matriX core (CAMX) is presented to accelerate both kinds of operation. The CAMX serves as an accelerator for the CPU core of a mobile device: it supports highly parallel processing and is equipped with two CAM modules for high-speed repeated arithmetic and table-lookup coding. Because it can execute logical, arithmetic, search, and shift operations in parallel, the CAMX offers high performance, programmability, and versatility on mobile devices. This paper shows that the CAMX can process repeated arithmetic and table-lookup coding operations in parallel; for example, single-precision floating-point addition over 1024 entries completes in 5613 clock cycles without embedding a dedicated floating-point arithmetic unit. This cycle count, obtained with a two's complement instruction-reduced floating-point addition algorithm, is 59% lower than that of a straightforward floating-point addition implementation. To support this algorithm, the paper proposes an instruction-reduction architecture that modifies the CAMX so that data in the left and right CAM modules can be accessed directly from the preserve register. The CAMX thus achieves high performance, programmability, and versatility without embedding a dedicated processing unit. Moreover, assuming operating frequencies of 0.1, 0.5, 1.0, or 1.5 GHz, the CAMX outperforms an ARM core using NEON and the Vector Floating-Point (VFP) unit for floating-point additions when more than approximately 4500 data are parallelized. In addition, the CAMX is compared, at the same operating frequency, with related works that execute floating-point addition by software instructions, by a dedicated floating-point arithmetic unit, or by both. The CAMX with 128-bit, 1024-entry CAM modules achieves higher performance than the related works that use software instructions alone and those that combine software instructions with a dedicated floating-point arithmetic unit. © 2023 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC.
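For readers unfamiliar with software-only floating-point arithmetic, the following minimal C sketch illustrates how a single-precision addition decomposes into the integer, logical, and shift steps that a core without a dedicated floating-point unit must execute, and how converting the aligned mantissas to two's complement lets one signed addition replace the separate add/subtract paths. The function name soft_fadd and the scalar form are illustrative assumptions; the sketch ignores zeros, subnormals, NaN/infinity, and rounding, and does not reproduce the CAMX's parallel instruction sequence or the paper's exact instruction-reduced algorithm.

#include <stdint.h>
#include <string.h>

/* Minimal sketch, not the CAMX algorithm: single-precision addition built
 * from integer/logical/shift steps only, assuming normalized, nonzero
 * operands and ignoring rounding, subnormals, NaN, and infinity. */
static float soft_fadd(float a, float b)
{
    uint32_t ua, ub;
    memcpy(&ua, &a, sizeof ua);
    memcpy(&ub, &b, sizeof ub);

    /* Unpack biased exponents and mantissas (hidden leading 1 restored). */
    int32_t ea = (ua >> 23) & 0xFF, eb = (ub >> 23) & 0xFF;
    uint32_t fa = (ua & 0x7FFFFF) | 0x800000;
    uint32_t fb = (ub & 0x7FFFFF) | 0x800000;

    /* Align both mantissas to the larger exponent (shift clamped). */
    int32_t e = ea > eb ? ea : eb;
    int32_t da = e - ea, db = e - eb;
    fa = da >= 32 ? 0 : fa >> da;
    fb = db >= 32 ? 0 : fb >> db;

    /* Two's complement mantissas: one signed add covers both the addition
     * and the subtraction path, which is the instruction-saving idea. */
    int64_t m = ((ua >> 31) ? -(int64_t)fa : (int64_t)fa)
              + ((ub >> 31) ? -(int64_t)fb : (int64_t)fb);
    if (m == 0) return 0.0f;

    /* Back to sign-magnitude, renormalize, and repack. */
    uint32_t sign = m < 0;
    uint64_t mag = sign ? (uint64_t)(-m) : (uint64_t)m;
    while (mag >= 0x1000000) { mag >>= 1; e++; }
    while (mag <  0x800000)  { mag <<= 1; e--; }

    uint32_t ur = (sign << 31) | ((uint32_t)e << 23) | ((uint32_t)mag & 0x7FFFFF);
    float r;
    memcpy(&r, &ur, sizeof r);
    return r;
}

Per the abstract, the CAMX applies each such step across all CAM entries simultaneously, so the reported 5613 clock cycles cover the entire 1024-entry vector rather than a single addition.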
Funder
Ministry of Education, Culture, Sports, Science and Technology
Subject
Electrical and Electronic Engineering
Cited by
3 articles.