Abstract
Computing 3D bone models with traditional Computed Tomography (CT) requires a high radiation dose and is costly and time-consuming. We present a fully automated, domain-agnostic method for estimating the 3D structure of a bone from a pair of 2D X-ray images. Our triplet loss-trained neural network extracts a 128-dimensional embedding of the 2D X-ray images. A classifier then finds the most closely matching 3D bone shape from a predefined set of shapes. Our predictions have an average root mean square (RMS) distance of 1.08 mm between the predicted and true shapes, making our approach more accurate than the average accuracy of eight other examined 3D bone reconstruction approaches. Each embedding extracted from a 2D bone image is optimized to uniquely identify the 3D bone CT from which the 2D image originated and can serve as a kind of fingerprint of each bone; possible applications include faster, image content-based bone database searches for forensic purposes.
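The pipeline described in the abstract is essentially a metric-learning retrieval system: an embedding network trained with a triplet loss maps a pair of X-ray views to a 128-dimensional vector, and the 3D shape is recovered by finding the nearest embedding in a library of predefined bone shapes. The sketch below illustrates this idea in PyTorch; it is not the authors' code, and all names (XrayEmbeddingNet, training_step, retrieve_shape) and architectural details are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): a triplet-loss embedding
# network for a pair of 2D X-ray views plus nearest-neighbour retrieval against
# a library of predefined 3D bone shapes. Architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 128  # the paper reports a 128-dimensional embedding

class XrayEmbeddingNet(nn.Module):
    """Maps a 2-channel image (the two X-ray views) to a 128-D embedding."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, EMBED_DIM)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return F.normalize(z, dim=1)  # unit-norm embeddings

model = XrayEmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(anchor, positive, negative):
    """Anchor/positive are X-ray pairs of the same bone; negative is another bone."""
    loss = triplet_loss(model(anchor), model(positive), model(negative))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def retrieve_shape(query_xray_pair, library_embeddings, library_shapes):
    """Return the predefined 3D shape whose embedding is closest to the query's."""
    q = model(query_xray_pair.unsqueeze(0))      # (1, 128)
    dists = torch.cdist(q, library_embeddings)   # (1, N) distances to the library
    return library_shapes[dists.argmin().item()]
```

In this retrieval framing, the embedding doubles as the bone "fingerprint" mentioned in the abstract: the same vector used to pick the closest 3D shape can index a database for content-based search.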
Publisher
Springer Science and Business Media LLC
Subject
General Agricultural and Biological Sciences; General Biochemistry, Genetics and Molecular Biology; Medicine (miscellaneous)
Cited by
7 articles.