Affiliation:
1. Machine Learning, Palantir Technologies, Washington, DC 20007, USA
2. Machine Learning Applied Science, Zillow, Seattle, WA 98133, USA
3. Department of Radiology, Northwestern University, Chicago, IL 60611, USA
Abstract
Capsule networks promise significant benefits over convolutional neural networks (CNNs) by storing stronger internal representations and routing information based on the agreement between intermediate representations' projections. Despite this, their success has been limited to small-scale classification datasets because of their computationally expensive nature. Though memory-efficient, convolutional capsules impose geometric constraints that fundamentally limit the ability of capsules to model the pose/deformation of objects. Further, they do not address the larger memory cost of class capsules when scaling up to bigger tasks such as detection or large-scale classification. Herein, a new family of capsule networks, deformable capsules (DeformCaps), is introduced to address the object detection problem in computer vision. Two new algorithms associated with DeformCaps are proposed: a novel capsule structure (SplitCaps) and a novel dynamic routing algorithm (SE-Routing), which balance computational efficiency with the need to model a large number of objects and classes; this has never been achieved with capsule networks before. The proposed methods efficiently scale up to create the first capsule network for object detection in the literature. The proposed architecture is a one-stage detection framework that obtains results on Microsoft Common Objects in Context (MS COCO) on par with state-of-the-art one-stage CNN-based methods, while producing fewer false-positive detections and generalizing better to unusual poses/viewpoints of objects.
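The "routing by agreement" the abstract refers to is the iterative dynamic routing of Sabour et al.'s original capsule networks, which the paper's SE-Routing replaces. As background, a minimal NumPy sketch of that classic routing step is given below; the array shapes and iteration count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash nonlinearity: short vectors shrink toward 0,
    # long vectors approach unit length (norm always < 1).
    norm2 = np.sum(s * s, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def route_by_agreement(u_hat, iters=3):
    # u_hat: lower-capsule predictions for each higher capsule,
    # shape (num_in, num_out, dim_out).
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits
    for _ in range(iters):
        # Coupling coefficients: softmax over output capsules.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions -> candidate outputs (num_out, dim_out).
        s = (c[..., None] * u_hat).sum(axis=0)
        v = squash(s)
        # Increase logits where predictions agree with the output.
        b = b + (u_hat * v[None]).sum(axis=-1)
    return v

rng = np.random.default_rng(0)
v = route_by_agreement(rng.normal(size=(8, 4, 16)))  # 8 input caps, 4 output caps
```

The per-pair routing logits `b` are what make this procedure memory-hungry at scale, which motivates the paper's more efficient SE-Routing.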
Funder
National Institutes of Health