KEYWORDS: Data communications, Education and training, Computing systems, Mathematical optimization, Systems modeling, Network architectures, Computer architecture, Parallel computing, Deep learning
With the widespread adoption of deep learning frameworks, large-scale computing and GPU programming are receiving increasing attention. For upper-layer applications such as TensorFlow and PyTorch that use GPUs for computation and communication, improving the efficiency of the underlying communication library is essential to overall framework performance. RCCL (ROCm Collective Communication Library), provided by the ROCm (Radeon Open Compute) platform, supports a variety of collective and point-to-point communication operations. Our analysis identified a problem in the initialization and use of RCCL's ring-channel network on systems with multiple network interface cards (NICs): some NICs are never used for communication, wasting system resources. To address this, we optimize at the code level, introducing data structures and algorithms that control NIC invocation and adjust the multi-NIC usage strategy of the ring-channel network without altering RCCL's original design. The optimized library was evaluated extensively on a large-scale GPU cluster and achieved significant improvements in communication performance. At a communication scale of 16 compute nodes and 64 GPUs, peak bandwidth increased from 5.28 GB/s to 7.78 GB/s, and inter-node collective communication tests showed improvements of up to 60%. The improved RCCL library provides better low-level communication performance and stronger communication support for upper-layer applications on the ROCm platform.
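The core idea described above, steering ring channels across all available NICs rather than letting them default to a single card, can be illustrated with a minimal sketch. The function name and the round-robin policy here are illustrative assumptions, not RCCL's actual code:

```python
def assign_nics(num_channels: int, num_nics: int) -> list[int]:
    """Map each ring channel to a NIC index (hypothetical sketch).

    Round-robin assignment: channel i uses NIC i % num_nics, so every
    NIC participates in communication instead of some cards sitting idle.
    """
    return [ch % num_nics for ch in range(num_channels)]

# 8 ring channels spread over 4 NICs:
# assign_nics(8, 4) -> [0, 1, 2, 3, 0, 1, 2, 3]
```

With a policy of this shape, each NIC carries an equal share of the ring channels, which is the usage-balancing behavior the optimization aims for.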
MPI (Message Passing Interface) plays a crucial role in parallel computing. The Allreduce implementation in the OpenMPI communication library handles non-power-of-two process counts poorly: its two existing algorithms exclude some processes to reach a power-of-two count, but the selection criteria are too simplistic, producing an imbalanced distribution of participating processes across nodes and significantly hurting communication efficiency. To address this, we take the layout of processes on nodes into account and redefine the set of excluded processes. Both algorithms receive generic load-balancing optimizations as well as adaptations for domestic architectures, yielding improved load balance. Experiments at a communication scale of 16 nodes show performance improvements of up to 30% for the recursive_doubling algorithm and up to 21% for the reduce_scatter_allgather algorithm.
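The node-aware exclusion idea can be sketched as follows. The helper names and the round-robin selection over nodes are hypothetical illustrations of the balancing principle, not OpenMPI's actual implementation:

```python
def pow2_floor(p: int) -> int:
    """Largest power of two <= p."""
    k = 1
    while k * 2 <= p:
        k *= 2
    return k

def node_balanced_exclusions(ranks_per_node: list[list[int]], p: int) -> list[int]:
    """Choose the p - pow2_floor(p) ranks to exclude from the power-of-two
    phase, spreading them round-robin across nodes instead of taking them
    all from the front of the rank list (hypothetical sketch).
    """
    n_excl = p - pow2_floor(p)
    excluded: list[int] = []
    i = 0
    # Visit nodes in turn, taking at most one additional rank per pass,
    # so no single node loses a disproportionate share of its processes.
    while len(excluded) < n_excl:
        node = ranks_per_node[i % len(ranks_per_node)]
        pos = i // len(ranks_per_node)
        if pos < len(node):
            excluded.append(node[pos])
        i += 1
    return sorted(excluded)

# 12 ranks on 3 nodes (4 per node): 4 ranks must sit out to reach 8 = 2^3.
# A front-of-list policy would exclude [0, 1, 2, 3] -- all on node 0 --
# while the node-aware policy spreads the exclusions:
# node_balanced_exclusions([[0,1,2,3],[4,5,6,7],[8,9,10,11]], 12) -> [0, 1, 4, 8]
```

The contrast in the final comment is the imbalance the abstract describes: excluding from the front concentrates idle processes on one node, while the node-aware choice keeps every node's participation roughly even.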