Caffe-MPI is developed by the AI & HPC Application R&D team of Inspur. It is a parallel version of Caffe for multi-node GPU clusters, designed on top of NVIDIA/Caffe, which is forked from BVLC/caffe (https://github.com/NVIDIA/caffe; for more details, please visit http://caffe.berkeleyvision.org).
Caffe-MPI is designed for high-density GPU clusters. The new version supports InfiniBand (IB) high-speed interconnects and shared storage backed by a distributed file system such as NFS or GlusterFS. The training dataset is read in parallel by each MPI process, and hierarchical communication mechanisms minimize the bandwidth requirements between computing nodes.
Supports NCCL 2.0: both inter-node and intra-node GPU communication are managed by NCCL with GPUDirect RDMA.
The AlexNet, GoogLeNet, and ResNet models have been tested with Caffe-MPI 2.0 on a GPU cluster of 4 nodes, each equipped with 4 P40 GPUs, using the ImageNet dataset. On 4 nodes with 16 GPUs, the speedups are 14.65x for AlexNet (batch size 1024), 14.25x for GoogLeNet (batch size 128), and 15.34x for ResNet (batch size 32).
Caffe-MPI retains all the features of the original Caffe architecture: the pure C++/CUDA implementation, the command-line and Python interfaces, and the various programming methods. As a result, the cluster version of the Caffe framework is user-friendly, fast, modular, and open, giving users an optimal application experience.
The program can run with as few as one MPI process.
References:
More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server
Deep Image: Scaling up Image Recognition
For reporting bugs, please use the caffe-mpi/issues page or send us an email.
Email: wush@inspur.com (Shaohua Wu)