Abstract
In this paper, we introduce Dense D2C-Net, a novel unobtrusive display-to-camera (D2C) communication scheme that embeds and extracts additional data via visual content through a deep convolutional neural network (DCNN). The encoding process of Dense D2C-Net establishes connections among all layers of the cover image and fosters feature reuse to maintain the visual quality of the image. The Y channel is employed to embed binary data owing to its resilience against distortion from image compression and its lower sensitivity to color transformations. The encoder integrates hybrid layers that combine feature maps from the cover image and the input binary data to hide the embedded data efficiently, while multiple noise layers mitigate the distortions that the optical wireless channel imposes on the transmitted data. At the decoder, a series of 2D convolutional layers extracts the output binary data from the captured image. We conducted experiments in a real-world setting using a smartphone camera and a digital display, demonstrating that the proposed scheme outperforms conventional DCNN-based D2C schemes across varying parameters such as transmission distance, capture angle, display brightness, and camera resolution.
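The abstract's choice of the Y (luma) channel for embedding can be illustrated with a minimal, non-learned sketch: the standard ITU-R BT.601 RGB-to-YCbCr conversion isolates luma, and a toy least-significant-bit write into Y stands in for the paper's learned DCNN embedding. The function names below are hypothetical and this is only an illustration of the channel choice, not the proposed method.

```python
# Toy sketch: embed one bit in the Y (luma) channel of a single pixel.
# This LSB write is a stand-in for the paper's learned embedding; the
# point is that Y survives compression and color shifts better than
# chroma, which motivates embedding there.

def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range conversion of one RGB pixel to YCbCr."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def embed_bit_in_y(y, bit):
    """Force the least significant bit of the rounded luma to `bit`."""
    return (int(round(y)) & ~1) | (bit & 1)

# Example: pure red has luma 0.299 * 255 = 76.245.
y, cb, cr = rgb_to_ycbcr(255, 0, 0)
y_marked = embed_bit_in_y(y, 1)  # rounds to 76, then sets LSB -> 77
```

In the actual Dense D2C-Net, this hand-crafted write is replaced by hybrid convolutional layers that blend binary-data feature maps into the Y channel, and noise layers train the network to survive the display-to-camera channel.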
Funder
National Research Foundation of Korea
Subject
Atomic and Molecular Physics, and Optics
Cited by
3 articles.