Authors:
Trusheim, P., Mehltretter, M., Rottensteiner, F., Heipke, C.
Abstract
In the context of image orientation, it is commonly assumed that the environment is completely static, which is why dynamic elements are typically filtered out using robust estimation procedures. Especially in urban areas, however, many such dynamic elements are present in the environment, leading to a considerable number of erroneous observations that have to be detected via robust adjustment. This problem is even more evident in the case of cooperative image orientation using dynamic objects as ground control points (GCPs), because such dynamic objects carry the relevant information. One way to deal with this challenge is to detect these dynamic objects prior to the adjustment and to process the related image points separately. To this end, a novel methodology to distinguish between dynamic and static image points in stereoscopic image sequences is introduced in this paper, using a neural network for the detection of potentially dynamic objects and additional checks via forward intersection. To investigate the effects of considering dynamic points in the adjustment, an image sequence of an inner-city traffic scenario is used; image orientation, as well as the 3D coordinates of tie points, is calculated via a robust bundle adjustment. It is shown that, compared to a solution without considering dynamic points, errors in the tie points are significantly reduced, while the median precision of all 3D coordinates of the tie points is improved.
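The check via forward intersection mentioned in the abstract can be illustrated with a short sketch: a tracked tie point observed in a calibrated stereo pair at two epochs is triangulated independently per epoch, and the point is flagged as potentially dynamic if its intersected 3D position moves by more than a threshold between the epochs. The function names, the simple linear (DLT) triangulation, and the motion threshold below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def triangulate_dlt(P_left, P_right, x_left, x_right):
    """Linear (DLT) forward intersection of one point from a stereo pair.
    P_left, P_right: 3x4 projection matrices; x_left, x_right: 2D image points."""
    A = np.vstack([
        x_left[0] * P_left[2] - P_left[0],
        x_left[1] * P_left[2] - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    # Solution is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

def is_dynamic(P_t0, P_t1, obs_t0, obs_t1, threshold=0.5):
    """Flag a tracked tie point as dynamic if its forward-intersected 3D position
    moves by more than `threshold` (in object-space units, e.g. metres) between
    two epochs. P_t0/P_t1: (P_left, P_right) stereo projection matrices at epochs
    t0/t1; obs_t0/obs_t1: (x_left, x_right) observations of the same feature.
    The threshold of 0.5 is an illustrative assumption, not a value from the paper."""
    X_t0 = triangulate_dlt(P_t0[0], P_t0[1], obs_t0[0], obs_t0[1])
    X_t1 = triangulate_dlt(P_t1[0], P_t1[1], obs_t1[0], obs_t1[1])
    return np.linalg.norm(X_t1 - X_t0) > threshold
```

In the described pipeline, such a geometric check would complement the neural-network detection of potentially dynamic objects, so that image points flagged as dynamic can be processed separately before the robust bundle adjustment.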
Cited by: 3 articles.