Affiliation:
1. Beijing University of Posts and Telecommunications
2. Tibet University
3. Chalmers University of Technology
4. Soochow University
Abstract
The computing power network (CPN) is a novel network technology that integrates computing power from the cloud, edge, and terminals over IP/optical cross-layer networks for distributed computing, and it offers an effective substrate for distributed model training (DMT). As a bandwidth-efficient architecture based on data parallelism, ring all-reduce (RAR) is widely used in DMT. However, any node or link failure on the ring can interrupt or block the requests deployed on that ring. Meanwhile, because batches of RAR-based DMT requests compete for resources, inappropriate scheduling strategies can also lead to low training efficiency or congestion. To the best of our knowledge, no existing research considers ring survivability in scheduling strategies for RAR-based DMT. To fill this gap, we propose a scheduling scheme for RAR-based DMT requests in CPNs that optimizes the allocation of computing and wavelength resources over the time dimension while ensuring reliability. Since service providers may focus on different performance metrics in practice, we formulate an integer linear programming (ILP) model and propose a RAR-based DMT deployment algorithm (RDDA) that solve this problem under four optimization objectives, each subject to first minimizing the blocking rate: minimum computing resource consumption, minimum wavelength resource consumption, minimum training time, and maximum reliability. Simulation results demonstrate that our scheme satisfies the reliability requirements while achieving the corresponding optimal performance for DMT requests under all four optimization objectives.
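To make the RAR communication pattern referenced above concrete, the following is a minimal single-process sketch of ring all-reduce among N workers: gradients are split into N segments and exchanged in 2*(N-1) steps (a scatter-reduce phase followed by an all-gather phase), so per-link traffic stays roughly constant as N grows. This is only an illustration of the generic RAR primitive, not the paper's RDDA algorithm or ILP model; the function name and the use of NumPy are assumptions for the sketch.

```python
# Illustrative, single-process simulation of ring all-reduce (RAR).
# Not the paper's RDDA algorithm; just the generic communication pattern.
import numpy as np

def ring_all_reduce(grads):
    """Return the element-wise sum of all workers' gradients at every worker."""
    n = len(grads)
    # Each worker splits its gradient into n segments.
    chunks = [list(np.array_split(np.asarray(g, dtype=float), n)) for g in grads]

    # Phase 1: scatter-reduce. In step s, worker r sends segment (r - s) % n to
    # its right neighbour, which adds it into the same segment index.
    for s in range(n - 1):
        msgs = [(r, (r - s) % n, chunks[r][(r - s) % n].copy()) for r in range(n)]
        for r, idx, data in msgs:
            chunks[(r + 1) % n][idx] += data
    # After n-1 steps, worker r holds the fully reduced segment (r + 1) % n.

    # Phase 2: all-gather. In step s, worker r forwards segment (r + 1 - s) % n;
    # the receiver overwrites its local copy with the reduced segment.
    for s in range(n - 1):
        msgs = [(r, (r + 1 - s) % n, chunks[r][(r + 1 - s) % n].copy()) for r in range(n)]
        for r, idx, data in msgs:
            chunks[(r + 1) % n][idx] = data

    return [np.concatenate(c) for c in chunks]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    local_grads = [rng.standard_normal(12) for _ in range(4)]  # 4 hypothetical workers
    reduced = ring_all_reduce(local_grads)
    assert all(np.allclose(r, sum(local_grads)) for r in reduced)
```

Because every worker sends and receives exactly one segment per step, a single failed node or link on the ring stalls the whole collective, which is the survivability concern the proposed scheduling scheme addresses.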
Funder
Beijing Natural Science Foundation
Fundamental Research Funds for the Central Universities
National Natural Science Foundation of China