Author:
Barsai Gabor, Yilmaz Alper, Nagarajan Sudhagar, Srestasathiern Panu
Abstract
Recovering the camera orientation is a fundamental problem in photogrammetry for precision 3D recovery, orthophoto generation, and image registration. In this paper, we achieve this goal by fusing the image information with information extracted from different modalities, including lidar and GIS. In contrast to other approaches, which require feature correspondences, our approach exploits edges across the modalities without the necessity to explicitly establish correspondences. In the proposed approach, extracted edges from different modalities are not required to have analytical forms. This flexibility is achieved by minimizing a new cost function using a Bayesian approach, which takes as its random variable the Euclidean distances between the projected edges extracted from the other data source and the edges extracted from the reference image. The proposed formulation iteratively minimizes the overall distances between the sets of edges, such that the end product yields the correct camera parameters for the reference image as well as matching features across the modalities. The initial solution can be obtained from GPS/IMU data. The formulation is shown to successfully handle noise and missing observations in edges. Point-matching methods may fail for oblique images, especially high-oblique images; we eliminate the requirement for exact point-to-point matching. The feasibility of the method is demonstrated on nadir and oblique images.
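
The following is a minimal sketch, not the authors' implementation, of the core idea the abstract describes: project edge points extracted from another modality (e.g., lidar) into the reference image and minimize the summed Euclidean distances to the reference image's edges. A distance transform of the edge map supplies the nearest-edge distance at every pixel, so no explicit point-to-point correspondences are needed. The pinhole camera model, parameter layout, and all function names here are illustrative assumptions.

```python
# Sketch: correspondence-free registration of 3D edge points to image edges
# by minimizing summed distances to the nearest image edge. Assumed model,
# not the paper's exact cost function.
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation


def project(points_3d, params, focal=1000.0, cx=320.0, cy=240.0):
    """Pinhole projection; params = (omega, phi, kappa, tx, ty, tz)."""
    R = Rotation.from_euler("xyz", params[:3]).as_matrix()
    cam = points_3d @ R.T + params[3:]          # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]               # perspective divide
    return np.column_stack((focal * uv[:, 0] + cx, focal * uv[:, 1] + cy))


def edge_distance_cost(params, edges_3d, dist_map):
    """Sum of distances from projected 3D edges to the nearest image edge."""
    uv = project(edges_3d, params)
    h, w = dist_map.shape
    # Clip so points projecting outside the image still incur a finite penalty.
    c = np.clip(uv[:, 0], 0, w - 1).astype(int)
    r = np.clip(uv[:, 1], 0, h - 1).astype(int)
    return dist_map[r, c].sum()


# Binary edge map from the reference photo (e.g., Canny output); here a toy edge.
edge_map = np.zeros((480, 640), dtype=bool)
edge_map[240, 100:540] = True
dist_map = distance_transform_edt(~edge_map)    # distance to nearest edge pixel

# Toy 3D edge points that should project onto the image edge when aligned.
edges_3d = np.column_stack(
    (np.linspace(-2, 2, 50), np.zeros(50), np.full(50, 10.0)))

x0 = np.array([0.05, -0.03, 0.02, 0.1, 0.0, 0.0])  # stand-in for a GPS/IMU seed
res = minimize(edge_distance_cost, x0, args=(edges_3d, dist_map),
               method="Nelder-Mead")
print("refined camera parameters:", res.x)
```

Note that indexing the distance transform at integer pixel locations makes this sketch's cost piecewise constant; sampling it with bilinear interpolation (e.g., scipy.ndimage.map_coordinates) would give a smoother objective suitable for gradient-based solvers.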
Publisher
American Society for Photogrammetry and Remote Sensing
Subject
Computers in Earth Sciences
Cited by
3 articles.