Abstract
Light Detection and Ranging (LiDAR) technology is a cutting-edge capability for mobile devices, with compelling use cases that include enhancing low-light photography, capturing and sharing 3D scans of objects, and improving the augmented reality (AR) experience. Its widespread adoption, however, has been hindered by high cost and substantial power consumption. To overcome these obstacles, this paper proposes a low-power, low-cost, SPAD-based system-on-chip (SoC) that co-packages a microlens array (MLA) and incorporates a lightweight RGB-guided sparse depth completion neural network for 3D LiDAR imaging. The proposed SoC integrates an 8x8 single-photon avalanche diode (SPAD) macro-pixel array with time-to-digital converters (TDCs) and a charge pump, fabricated in a 180 nm bipolar-CMOS-DMOS (BCD) process. A random-MLA homogenizing diffuser efficiently transforms Gaussian beams into flat-top beams with a 45° field of view (FOV), enabling flash projection at the transmitter. To further enhance resolution and broaden the range of applications, the lightweight RGB-guided sparse depth completion network expands the image resolution from 8x8 to quarter video graphics array (QVGA; 256x256) level. Experimental results demonstrate the effectiveness and stability of the hardware, comprising the SoC and the optical system, as well as the compactness and accuracy of the neural network. This integrated hardware-software solution offers a promising foundation for developing consumer-level 3D imaging applications.
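The data flow described above, taking a sparse 8x8 SPAD depth map to a dense 256x256 output, can be illustrated with a minimal interpolation baseline. This sketch is not the paper's RGB-guided neural network; it is a plain bilinear upsampler (all array sizes and depth values are hypothetical) that shows only the resolution-expansion step the network replaces with a learned, RGB-guided mapping.

```python
import numpy as np

def bilinear_upsample(depth, out_h, out_w):
    """Bilinearly upsample a low-resolution depth map to (out_h, out_w).

    A naive stand-in for the paper's learned depth completion: each output
    pixel is a convex combination of its four nearest low-res neighbours.
    """
    in_h, in_w = depth.shape
    # Fractional source coordinates for every output pixel.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights, shape (out_h, 1)
    wx = (xs - x0)[None, :]  # horizontal blend weights, shape (1, out_w)
    top = depth[np.ix_(y0, x0)] * (1 - wx) + depth[np.ix_(y0, x1)] * wx
    bot = depth[np.ix_(y1, x0)] * (1 - wx) + depth[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Hypothetical 8x8 SPAD depth measurements in metres.
sparse = np.random.default_rng(0).uniform(0.5, 4.0, size=(8, 8))
dense = bilinear_upsample(sparse, 256, 256)
print(dense.shape)  # (256, 256)
```

In the actual system, an RGB image of the same scene would additionally guide the interpolation so that depth discontinuities follow object edges rather than being smoothed across them, which is what motivates the learned approach.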
Cited by 1 article.