Adaptive Compression-Aware Split Learning and Inference for Enhanced Network Efficiency

Authors:

Akrit Mudvari (1), Antero Vainio (2), Iason Ofeidis (1), Sasu Tarkoma (2), Leandros Tassiulas (1)

Affiliation:

1. Electrical Engineering, Yale University, New Haven, United States

2. University of Helsinki, Helsinki, Finland

Abstract

The growing number of AI-driven applications on mobile devices has led to solutions that integrate deep learning models with available edge-cloud resources. Because of benefits such as reduced on-device energy consumption, improved latency, reduced network usage, and certain privacy improvements, split learning, in which deep learning models are split between the mobile device and remote resources and computed in a distributed manner, has become an extensively explored topic. Incorporating compression-aware methods, where learning adapts to the compression level of the communicated data, makes split learning even more advantageous and can offer a viable alternative to traditional approaches such as federated learning. In this work, we develop an adaptive compression-aware split learning method ('deprune') that trains deep learning models to be far more network-efficient, making them well suited for deployment on weaker devices with the help of edge-cloud resources. We also extend this method ('prune') to train deep learning models very quickly through a transfer learning approach, trading a small amount of accuracy for much more network-efficient inference. We show that the 'deprune' method can reduce network usage by 4x compared with a split-learning approach that does not use our method, without loss of accuracy, while also improving accuracy over compression-aware split learning by up to 4 percent. Finally, we show that the 'prune' method can reduce training time for certain models by up to 6x without affecting accuracy, compared against a compression-aware split-learning approach.
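
To make the split-plus-compression idea concrete, the sketch below shows one split-inference step in PyTorch: a client-side head runs on the device, its intermediate activation is compressed before crossing the network, and a server-side tail finishes the computation. This is a minimal illustration under assumed names and layer sizes; the split point, the ClientHead/ServerTail modules, and the top-k channel masking used as a stand-in for compression are our own assumptions, not the paper's actual 'deprune' or 'prune' implementation.

    # Minimal split-inference sketch (illustrative; not the paper's code).
    import torch
    import torch.nn as nn

    class ClientHead(nn.Module):
        """Device-side part of the split model: the first few layers."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )

        def forward(self, x):
            return self.features(x)

    class ServerTail(nn.Module):
        """Edge/cloud-side part: the remaining layers plus classifier."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    def compress_activation(act, keep_channels):
        """Toy compression: keep only the highest-energy channels and zero
        the rest, shrinking what must cross the network. A real system
        would transmit only the kept channels (plus their indices) and
        could additionally quantize or entropy-code them."""
        energy = act.abs().mean(dim=(0, 2, 3))       # per-channel magnitude
        keep = energy.topk(keep_channels).indices    # channels worth sending
        mask = torch.zeros_like(energy)
        mask[keep] = 1.0
        return act * mask.view(1, -1, 1, 1)

    # Usage: run the head on-device, compress, then finish on the server.
    head, tail = ClientHead(), ServerTail()
    x = torch.randn(1, 3, 32, 32)                    # dummy input image
    activation = head(x)                             # device-side compute
    activation = compress_activation(activation, keep_channels=8)  # 8 of 32 channels kept
    logits = tail(activation)                        # server-side compute
    print(logits.shape)                              # torch.Size([1, 10])

Training the two halves jointly while varying keep_channels is, roughly, what makes such a model compression-aware: it learns to concentrate useful information in the channels that survive compression.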

Publisher

Association for Computing Machinery (ACM)
