Recovering pose and occlusion consistencies in augmented reality systems using affine properties
Abstract
Purpose
Augmented environments superimpose computer-generated enhancements on the real world. The pose and occlusion consistencies between virtual and real objects must be managed correctly so that users perceive a natural scene. The purpose of this paper is to describe a novel technique that resolves pose and occlusion consistencies in real time within a unified framework based on affine properties.
Design/methodology/approach
First, the method is simple and resolves pose and occlusion consistencies in a unified framework based on affine properties. It can greatly improve the three-dimensional realism of the augmented reality system while reducing computational complexity. Second, the method is robust to arbitrary camera motion and does not require multiple cameras, camera calibration, fiducials, or a structural model of the scene. Third, a novel feature tracking method is proposed that combines narrow- and wide-baseline strategies to match natural features between the reference images and the current frame directly.
Findings
The method is found to remain effective even under large changes in viewing angle, while casting off the requirement that the initial camera position be close to the reference images.
Originality/value
The paper describes experiments carried out to demonstrate the validity of the proposed approach.
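The affine property underlying the abstract's framework can be illustrated with a small sketch. Under an affine camera model, a point written as an affine combination of four non-coplanar basis points projects to the same affine combination of the basis points' 2D projections, so a virtual point can be re-rendered in every frame from tracked basis points alone, with no camera calibration. The snippet below is a minimal sketch of that invariance, not the authors' exact formulation; the function names and example coordinates are assumptions.

```python
import numpy as np

def affine_coords(P_ref, p):
    """Affine coordinates of 3D point p w.r.t. the basis P_ref
    (4 x 3: one origin point followed by three non-coplanar points)."""
    origin, axes = P_ref[0], P_ref[1:] - P_ref[0]   # three 3D basis vectors
    # Solve p - origin = axes.T @ [a, b, c]
    return np.linalg.solve(axes.T, p - origin)

def reproject(q_img, coords):
    """Re-project a virtual point from the tracked 2D projections
    q_img (4 x 2) of the basis, using affine invariance:
    x = q0 + a*(q1 - q0) + b*(q2 - q0) + c*(q3 - q0)."""
    origin, axes = q_img[0], q_img[1:] - q_img[0]   # three 2D basis vectors
    return origin + coords @ axes

# Example: anchor a virtual point to four tracked scene points.
P_ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
virtual = np.array([0.3, 0.4, 0.2])
coords = affine_coords(P_ref, virtual)              # computed once, then fixed

q_img = np.array([[120., 80.], [200., 85.], [125., 160.], [118., 60.]])
print(reproject(q_img, coords))                     # 2D location in this frame
```

Because the affine coordinates are view-independent, only the four basis projections need tracking per frame; the same mechanism resolves occlusion by comparing affine depth orderings of real and virtual points.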
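The combined wide- and narrow-baseline tracking strategy mentioned in the abstract can likewise be sketched: a wide-baseline matcher (ORB descriptors here, as a stand-in for whatever detector the paper actually uses) matches the current frame directly against a reference image for initialization or recovery after tracking loss, while a narrow-baseline tracker (pyramidal KLT optical flow) propagates features cheaply between consecutive frames. This is an assumed arrangement consistent with the abstract, not the paper's implementation.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def wide_baseline_init(ref_gray, cur_gray):
    """Match the current frame directly against a reference image;
    descriptor matching survives large viewpoint changes."""
    k_ref, d_ref = orb.detectAndCompute(ref_gray, None)
    k_cur, d_cur = orb.detectAndCompute(cur_gray, None)
    matches = matcher.match(d_ref, d_cur)
    pts = np.float32([k_cur[m.trainIdx].pt for m in matches])
    return pts.reshape(-1, 1, 2)

def narrow_baseline_track(prev_gray, cur_gray, prev_pts):
    """Propagate features with pyramidal KLT; cheap between
    consecutive frames, where the baseline is small."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return cur_pts[good], good

# Per frame (MIN_FEATURES is an assumed threshold):
#   if pts is None or len(pts) < MIN_FEATURES:
#       pts = wide_baseline_init(ref_gray, cur_gray)   # (re)initialize
#   else:
#       pts, _ = narrow_baseline_track(prev_gray, cur_gray, pts)
```

Falling back to the wide-baseline matcher only when the KLT track thins out is what removes the usual requirement that the camera start close to the reference views.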
Subject
Electrical and Electronic Engineering, Industrial and Manufacturing Engineering