Author:
Alphonse P. J. A., Sriharsha K. V.
Abstract
In recent years, with increasing concern about public safety and security, human movements and action sequences are highly valued when dealing with suspicious and criminal activities. Estimating the position and orientation of human movements requires depth information, which is conventionally obtained by fusing data from multiple cameras at different viewpoints. In practice, whenever occlusion occurs in a surveillance environment, there may be no pixel-to-pixel correspondence between the two images captured by the two cameras, so the recovered depth may be inaccurate. Moreover, the use of more than one camera adds burden to the surveillance infrastructure. In this study, we present a mathematical model for acquiring object depth information with a single camera by capturing the in-focus portion of an object in a single image. With the camera in focus and the focal length fixed, the object distance is varied relative to the lens center for each aperture setting. For each aperture reading at the corresponding distance, the object distance (or depth) is estimated by relating three parameters: the lens aperture radius, the object distance, and the object size in the image plane. The results show that the distance computed from this relationship approximates the actual distance with a standard error of estimate of 2.39 to 2.54 when tested on Nikon and Canon cameras, with an accuracy of 98.1% at a 95% confidence level.
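The abstract's paper relates aperture radius, object distance, and object size in the image plane; the paper's exact model is not given here, but the underlying single-camera geometry can be sketched with the standard thin-lens equation. The function below is a hypothetical illustration only (the variable names and the example figures are assumptions, not taken from the paper): with a fixed focal length, a known real object height, and the object's measured height on the sensor, the object distance follows from the magnification.

```python
def estimate_depth(focal_length: float, real_height: float, image_height: float) -> float:
    """Object distance u via the thin-lens equation 1/f = 1/u + 1/v.

    With magnification m = v/u = image_height / real_height, substituting
    v = m*u gives 1/f = (m + 1) / (m*u), hence u = f * (1 + 1/m).
    All lengths are in metres.
    """
    m = image_height / real_height  # magnification of the in-focus object
    return focal_length * (1.0 + 1.0 / m)


# Example: a 50 mm lens imaging a 1.70 m person as 8.5 mm on the sensor.
# m = 0.005, so u = 0.05 * (1 + 200) = 10.05 m.
depth = estimate_depth(0.050, 1.70, 0.0085)
```

This baseline ignores aperture effects; the paper's contribution is precisely to bring the lens aperture radius into the distance relationship, which a thin-lens-only sketch cannot capture.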
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences,General Physics and Astronomy,General Engineering,General Environmental Science,General Materials Science,General Chemical Engineering
References: 23 articles.
1. Chaudhuri S, Rajagopalan AN (2012) Depth from defocus: a real aperture imaging approach. Springer, Germany
2. Hansard M, Lee S, Choi O, Horaud RP (2012) Time-of-flight cameras: principles, methods and applications. Springer, Germany
3. Lefloch D, Nair R, Lenzen F, Schäfer H, Streeter L, Cree MJ, Kolb A (2013) Technical foundation and calibration methods for time-of-flight cameras. In Grzegorzek M, Theobalt C, Koch R, Kolb A (eds) Time-of-flight and depth imaging. Sensors, algorithms, and applications. Springer, Berlin, Heidelberg, pp 3–24
4. Li L (2014) Time-of-flight camera – an introduction. Technical white paper (SLOA190B)
5. Fuchs S (2010) Multipath interference compensation in time-of-flight camera images. In: 2010 20th International Conference on Pattern Recognition. IEEE, pp 3583–3586
Cited by
7 articles.