Abstract
Recognizing manufacturing parts in real time requires fast, accurate, small, low-power sensors. Here, we describe a method to extract descriptors from several objects observed from a wide range of angles in three-dimensional space. These descriptors form the dataset used to train and validate a convolutional neural network. Classification is implemented in reconfigurable hardware, in an embedded system comprising an RGB sensor and a processing unit. The system achieves an accuracy of 96.67% and runs 2.25× faster than reported state-of-the-art solutions, and 655 times faster than an implementation on a PC. The presented embedded system meets real-time video-processing requirements and is suitable as an enhancement for the hand of a robotic arm in an intelligent manufacturing cell.
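The abstract does not specify the descriptor or the network architecture, so the following is only a minimal illustrative sketch of the multi-view descriptor-extraction stage it mentions. All names and parameters (number of views, histogram bins, class count, image size) are hypothetical placeholders, not the authors' method.

```python
# Illustrative sketch only: descriptor choice, view count, and dataset layout
# are assumptions, not details taken from the paper.
import numpy as np

def view_descriptor(image, bins=16):
    """Gradient-orientation histogram of a single view (placeholder descriptor)."""
    gy, gx = np.gradient(image.astype(float))
    angles = np.arctan2(gy, gx).ravel()
    magnitudes = np.hypot(gx, gy).ravel()
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=magnitudes)
    return hist / (hist.sum() + 1e-9)  # normalize the histogram

def object_descriptor(views, bins=16):
    """Concatenate per-view descriptors taken from a wide range of angles."""
    return np.concatenate([view_descriptor(v, bins) for v in views])

# Synthetic example: 3 part classes, 12 views per object, 32x32 grayscale frames.
rng = np.random.default_rng(0)
n_classes, n_views, size = 3, 12, 32
dataset, labels = [], []
for label in range(n_classes):
    for _ in range(20):  # 20 objects per class
        views = [rng.random((size, size)) + label * np.linspace(0, 1, size)
                 for _ in range(n_views)]
        dataset.append(object_descriptor(views))
        labels.append(label)
X, y = np.array(dataset), np.array(labels)
print(X.shape)  # (60, 192): one descriptor vector per object, ready for network training/validation
```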
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by
3 articles.