Performance Comparison between PyTorch and MindSpore
-
Published:2022-04-30
Issue:02
Volume:14
Page:14-2
-
ISSN:0975-5985
-
Container-title:International Journal of Database Management Systems
-
Short-container-title:IJDMS
Author:
XIA Xiangyu, ZHOU Shaoxiang
Abstract
Deep learning has been widely applied in many fields. However, training neural networks requires large amounts of data, which has driven the development of many deep learning frameworks that aim to serve practitioners with more convenient and better-performing tools. MindSpore and PyTorch are both deep learning frameworks: MindSpore is developed by HUAWEI, while PyTorch is developed by Facebook. Some people believe that HUAWEI's MindSpore performs better than Facebook's PyTorch, which leaves deep learning practitioners uncertain about which to choose. In this paper, we perform an analytical and experimental comparison of the training speed of MindSpore and PyTorch on a single GPU. To make our survey as comprehensive as possible, we carefully selected neural networks from two main domains: computer vision and natural language processing (NLP). The contribution of this work is twofold: we conduct detailed benchmarking experiments on MindSpore and PyTorch to analyze the reasons for their performance differences, and we provide guidance for end users choosing between these two frameworks.
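The abstract describes measuring single-GPU training speed for selected networks. As an illustration of how such a measurement is typically taken, a minimal sketch in PyTorch follows; the model (ResNet-50), batch size, and iteration counts are assumptions chosen for illustration, not the benchmark configuration reported in the paper, and an equivalent MindSpore script would follow the same warm-up, synchronize, then time pattern.

import time

import torch
import torch.nn as nn
import torchvision.models as models

# Assumed setup: ResNet-50, batch size 32, synthetic data. These are
# illustrative choices, not the paper's benchmark configuration.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet50().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images = torch.randn(32, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (32,), device=device)

def sync():
    # GPU execution is asynchronous; synchronize so timings include all kernels.
    if device.type == "cuda":
        torch.cuda.synchronize()

def train_step():
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Warm-up iterations so one-time costs (kernel selection, caching) are excluded.
for _ in range(5):
    train_step()
sync()

iters = 50
start = time.perf_counter()
for _ in range(iters):
    train_step()
sync()
elapsed = time.perf_counter() - start
print(f"average time per training step: {elapsed / iters * 1000:.2f} ms")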
Publisher
Academy and Industry Research Collaboration Center (AIRCC)
Cited by
1 article.
1. STI: Turbocharge NLP Inference at the Edge via Elastic Pipelining; Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2; 2023-01-27