Abstract
Hyperspectral imaging and distance data have previously been used in aerial, forestry, agricultural, and medical imaging applications. Extracting meaningful information from a combination of different imaging modalities is difficult, as sensor fusion requires knowing the optical properties of the sensors, selecting suitable optics, and finding the sensors' mutual reference frame through calibration. In this research we demonstrate a method for fusing data from a Fabry–Perot interferometer hyperspectral camera and a Kinect V2 time-of-flight depth-sensing camera. We created an experimental application that uses the depth-augmented hyperspectral data to measure emission-angle-dependent reflectance from a multi-view inferred point cloud. We determined the intrinsic and extrinsic camera parameters through calibration, used global and local registration algorithms to combine point clouds from different viewpoints, created a dense point cloud, and determined the angle-dependent reflectances from it. The method successfully combined the 3D point cloud data and hyperspectral data from different viewpoints of a reference colorchecker board. The point cloud registrations achieved a fitness of 0.29–0.36 for inlier point correspondences, and the RMSE was approximately 2, which indicates a fairly reliable registration result. The RMSE of the measured reflectances between the front view and side views of the targets varied between 0.01 and 0.05 on average, and the spectral angle varied between 1.5 and 3.2 degrees. The results suggest that changing the emission angle has only a small effect on the surface reflectance intensity and spectrum shape, which was expected for the colorchecker used.
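The abstract compares front-view and side-view spectra using two metrics: the RMSE of the reflectances and the spectral angle between the spectra. A minimal sketch of how these metrics are commonly computed is shown below; the four-band spectra used here are hypothetical illustration values, not data from the paper.

```python
import numpy as np

def spectral_angle_deg(a, b):
    """Angle in degrees between two reflectance spectra (spectral angle mapper)."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def reflectance_rmse(a, b):
    """Root-mean-square error between two reflectance spectra."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

# Hypothetical example spectra (reflectance per band) for two viewpoints.
front = np.array([0.42, 0.45, 0.50, 0.48])
side  = np.array([0.40, 0.44, 0.52, 0.47])

angle = spectral_angle_deg(front, side)
rmse = reflectance_rmse(front, side)
```

A small spectral angle means the two spectra have nearly the same shape (the metric is invariant to overall intensity scaling), while the RMSE also captures differences in absolute reflectance level.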
Funder
Council of Tampere Region
European Regional Development Fund
European Commission
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
Cited by 3 articles.