Affiliations:
1. Haute Ecole Arc Ingenierie; HES-SO/Integrated Systems Lab, ETH Zurich, Switzerland
2. School of Computer Science and Statistics, Trinity College Dublin, Ireland
3. Haute Ecole Arc Ingenierie; HES-SO, Switzerland
4. Nviso, Switzerland
5. Universidad de Castilla—La Mancha, Spain
6. Integrated Systems Lab, ETH Zurich, Switzerland
7. School of Computer Science & Statistics, Trinity College Dublin, Ireland
Abstract
The next generation of embedded Information and Communication Technology (ICT) systems are interconnected, collaborative systems able to perform autonomous tasks. The remarkable expansion of the embedded ICT market, together with the rise and breakthroughs of Artificial Intelligence (AI), has put the focus on the Edge, as it stands as one of the keys to the next technological revolution: the seamless integration of AI into our daily life. However, training and deploying custom AI solutions on embedded devices require a fine-grained integration of data, algorithms, and tools to achieve high accuracy and meet functional and non-functional requirements. Such integration requires a high level of expertise that becomes a real bottleneck for small and medium enterprises wanting to deploy AI solutions on the Edge, which ultimately slows down the adoption of AI in applications in our daily life.
In this work, we present a modular AI pipeline as an integrating framework to bring data, algorithms, and deployment tools together. By removing the integration barriers and lowering the required expertise, we can interconnect the stages of particular tools and provide modular end-to-end development of AI products for embedded devices. Our AI pipeline consists of four main modular steps: (i) data ingestion, (ii) model training, (iii) deployment optimization, and (iv) IoT hub integration. To show the effectiveness of our pipeline, we provide examples of different AI applications at each of the steps. In addition, we integrate our deployment framework, Low-Power Deep Neural Network (LPDNN), into the AI pipeline and present its lightweight architecture and deployment capabilities for embedded devices. Finally, we demonstrate the results of the AI pipeline by deploying several AI applications, such as keyword spotting, image classification, and object detection, on a set of well-known embedded platforms, where LPDNN consistently outperforms all other popular deployment frameworks.
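The four-step pipeline described above can be sketched as a chain of interchangeable stages, each consuming the output of the previous one. This is a minimal illustrative sketch only: the stage names, interfaces, and dummy payloads below are assumptions for exposition, not the actual Bonseyes or LPDNN API.

```python
# Hypothetical sketch of a modular four-stage AI pipeline:
# data ingestion -> model training -> deployment optimization -> IoT hub
# integration. Interfaces and values are illustrative placeholders.
from typing import Any, Callable, List


class Pipeline:
    """Chains independent stages; each stage transforms a shared context."""

    def __init__(self) -> None:
        self.stages: List[Callable[[Any], Any]] = []

    def add_stage(self, stage: Callable[[Any], Any]) -> "Pipeline":
        self.stages.append(stage)
        return self  # return self to allow fluent chaining

    def run(self, data: Any) -> Any:
        for stage in self.stages:
            data = stage(data)
        return data


# Illustrative stage implementations (placeholders, not real tooling)
def ingest(source: str) -> dict:
    # (i) data ingestion: load and describe the dataset
    return {"dataset": source, "samples": 1000}


def train(ctx: dict) -> dict:
    # (ii) model training: produce a model artifact from the dataset
    ctx["model"] = f"cnn_trained_on_{ctx['dataset']}"
    return ctx


def optimize(ctx: dict) -> dict:
    # (iii) deployment optimization: e.g. quantization for edge targets
    ctx["model"] += "_int8_quantized"
    return ctx


def deploy(ctx: dict) -> dict:
    # (iv) IoT hub integration: register the optimized model for devices
    ctx["deployed_to"] = "iot-hub"
    return ctx


pipeline = (Pipeline()
            .add_stage(ingest)
            .add_stage(train)
            .add_stage(optimize)
            .add_stage(deploy))
result = pipeline.run("speech_commands")
```

Because every stage shares the same callable interface, any single step (for example, swapping the optimizer for a different quantization tool) can be replaced without touching the rest of the chain, which is the modularity the abstract emphasizes.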
Funder
Swiss State Secretariat for Education, Research and Innovation
European Union's Horizon 2020 research and innovation programme
Publisher
Association for Computing Machinery (ACM)
Cited by: 12 articles.