Abstract
A video enhancement method based on gamma correction is proposed; it processes all video frames with a single algorithm. By preserving inter-frame coherence throughout the entire video, the proposed algorithm significantly reduces the time spent searching for the optimal value of the gamma parameter, and in automatic mode it yields the highest attainable frame quality both for visual observation and for the detection of key points and the extraction of object contours in images. The method is characterized by high adaptability to sudden changes in scene lighting, preservation of inter-frame coherence, and the absence of negative side artifacts in the enhanced video. A toolkit for automatically determining the optimal value of the gamma parameter for video frames is developed. It significantly increases the efficiency of video analytics systems and of image and video segmentation and processing by reducing the negative impact of the scene lighting conditions on image quality.
Keywords: gamma correction, video analytics system, video sequence enhancement, histogram, cumulative histogram, video processing, inter-frame coherence.
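The core operation the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the `estimate_gamma` heuristic (choosing gamma so the median intensity of the corrected frame lands near mid-gray) is a common baseline assumed here for demonstration, and the paper's histogram-based criterion and inter-frame coherence mechanism are not reproduced.

```python
import numpy as np

def gamma_correct(frame: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma correction to an 8-bit frame (values in 0..255)."""
    normalized = frame.astype(np.float64) / 255.0
    return np.clip(255.0 * normalized ** gamma, 0, 255).astype(np.uint8)

def estimate_gamma(frame: np.ndarray, target: float = 0.5) -> float:
    """Illustrative heuristic (an assumption, not the paper's method):
    pick gamma so the frame's median intensity maps near `target`,
    since median**gamma == target when gamma = log(target)/log(median)."""
    median = np.median(frame) / 255.0
    median = min(max(median, 1e-3), 1.0 - 1e-3)  # guard against log(0)
    return float(np.log(target) / np.log(median))

# A dark frame is brightened toward mid-gray:
dark = np.full((4, 4), 40, dtype=np.uint8)
corrected = gamma_correct(dark, estimate_gamma(dark))
```

In a per-frame loop, the previous frame's gamma can seed the search for the next one, which is one simple way to exploit the inter-frame coherence the abstract emphasizes.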
Publisher
V.M. Glushkov Institute of Cybernetics