Affiliation:
1. Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424, Taiwan
Abstract
In computer vision and image processing, the shift from conventional cameras to emerging sensing modalities for tasks such as gesture recognition and object detection helps address privacy concerns. This study targets the Integrated Sensing and Communication (ISAC) era, using millimeter-wave signals as radar together with a Convolutional Neural Network (CNN) model for event sensing. Our focus is on leveraging deep learning to detect security-critical gestures, converting millimeter-wave radar parameters into point-cloud images, and improving recognition accuracy. Because CNNs are computationally demanding, we developed flexible quantization methods that simplify You Only Look Once (YOLO)-v4 operations using an 8-bit fixed-point number representation. Cross-simulation validation showed that CPU-based quantization improves inference speed by 300% with minimal accuracy loss, and even doubles the speed of the YOLO-tiny model in a GPU environment. We also built a Raspberry Pi 4-based system that combines the simplified deep learning model with Message Queuing Telemetry Transport (MQTT) Internet of Things (IoT) technology for nursing care. Our quantization method boosted identification speed by nearly 2.9 times, making millimeter-wave sensing feasible on embedded systems. In addition, we implemented hardware-based quantization that quantizes data directly from images or weight files, leading to circuit synthesis and chip design. This work integrates AI with mmWave sensors for nursing security and hardware implementation, enhancing both recognition accuracy and computational efficiency. Compared with conventional cameras that capture and analyze the appearance of patients or residents, deploying millimeter-wave radar in medical institutions or homes offers a strong answer to privacy concerns.
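The 8-bit fixed-point representation mentioned above can be illustrated with a minimal sketch. This is not the authors' exact scheme; the function names, the choice of fractional bits (`frac_bits`), and the round-then-clip strategy are illustrative assumptions about how float weights might be mapped to signed 8-bit fixed-point values and back.

```python
import numpy as np

def quantize_fixed_point(weights, frac_bits=5):
    """Illustrative sketch: map float weights to signed 8-bit fixed point.

    frac_bits fractional bits means a step size of 2**-frac_bits;
    results are clipped to the representable int8 range [-128, 127].
    """
    scale = 1 << frac_bits  # 2**frac_bits
    return np.clip(np.round(weights * scale), -128, 127).astype(np.int8)

def dequantize(q, frac_bits=5):
    """Recover approximate float values from the int8 representation."""
    return q.astype(np.float32) / (1 << frac_bits)

# Toy weights: quantize, dequantize, and inspect the rounding error.
w = np.array([0.75, -1.3, 0.031, 2.9], dtype=np.float32)
q = quantize_fixed_point(w)
w_hat = dequantize(q)
max_err = float(np.max(np.abs(w - w_hat)))  # bounded by half a step, 2**-6
```

With 5 fractional bits the worst-case rounding error per weight is half a quantization step (2⁻⁶ ≈ 0.0156), which is why an 8-bit representation can preserve detection accuracy while shrinking storage and arithmetic cost.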