Networking: Converged High-Speed Network

Most MPI parallel applications require a low-latency, high-bandwidth compute network, and compute-storage converged solutions are becoming popular. A low-latency, high-bandwidth network also improves the IOPS and I/O throughput of the parallel storage system.
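As a rough illustration of how the latency figures quoted in this section are typically measured, here is a minimal MPI ping-pong sketch; the message size, iteration count, and rank pairing are arbitrary choices for illustration, not part of any Sugon tooling.

    /* Minimal MPI ping-pong latency sketch (illustration only).
     * Rank 0 and rank 1 bounce a small message back and forth;
     * half the average round-trip time approximates one-way latency. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000;   /* arbitrary iteration count */
        char buf[8] = {0};         /* small message exposes latency, not bandwidth */
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("one-way latency: %.2f us\n", (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with two ranks on different nodes, this reports a figure directly comparable to the latency column in the InfiniBand table below.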

2015: EDR 100 Gb/s InfiniBand is coming!

InfiniBand, a communication standard widely used in HPC, delivers the highest bandwidth and lowest latency in the industry.

Speed | PCI-E         | Theoretical Bandwidth x IB Encoding x PCI-E Encoding = Peak Bandwidth | Latency
QDR   | PCI-E 2.0 x8  | 40 Gb/s x 8/10 x 8/10 = 3.2 GB/s        | 1.3 μs
FDR   | PCI-E 3.0 x8  | 56 Gb/s x 64/66 x 128/130 = 6.68 GB/s   | 0.7 μs
EDR   | PCI-E 3.0 x16 | 100 Gb/s x 64/66 x 128/130 = 11.93 GB/s | 0.6 μs
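To make the peak-bandwidth arithmetic in the table explicit, the sketch below recomputes that column from the link rate and the two encoding overheads; the input values are taken straight from the table above.

    /* Recompute the peak-bandwidth column of the table above:
     * peak (GB/s) = link rate (Gb/s) x IB encoding x PCI-E encoding / 8 bits per byte */
    #include <stdio.h>

    int main(void)
    {
        struct { const char *name; double gbps, ib_enc, pcie_enc; } rows[] = {
            { "QDR", 40.0,   8.0 / 10.0,   8.0 / 10.0   },  /* PCI-E 2.0 uses 8b/10b    */
            { "FDR", 56.0,  64.0 / 66.0, 128.0 / 130.0  },  /* PCI-E 3.0 uses 128b/130b */
            { "EDR", 100.0, 64.0 / 66.0, 128.0 / 130.0  },
        };

        for (int i = 0; i < 3; i++) {
            double peak = rows[i].gbps * rows[i].ib_enc * rows[i].pcie_enc / 8.0;
            printf("%s: %.2f GB/s\n", rows[i].name, peak);  /* prints 3.20, 6.68, 11.93 */
        }
        return 0;
    }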

Sugon’s supercomputer supports several types of InfiniBand topology.

Fat-Tree

  • Widely used and delivers the best performance
  • Flexible: non-blocking or various blocking ratios (see the sketch after this list)
  • ftree or updn routing algorithms
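The non-blocking versus blocking trade-off can be made concrete with a small calculation: in a two-level fat-tree, the blocking ratio follows from how each leaf switch's ports are split between hosts and uplinks. The 36-port radix below is an assumed example (typical for InfiniBand edge switches), not a statement about a specific Sugon configuration.

    /* Blocking ratio of a two-level fat-tree leaf switch (illustration only).
     * With P ports per leaf, D ports go down to hosts and U = P - D go up;
     * the blocking ratio is D:U, and 1:1 is non-blocking. */
    #include <stdio.h>

    int main(void)
    {
        const int ports = 36;                    /* assumed switch radix */
        const int down_ports[] = { 18, 24, 27 }; /* hosts attached per leaf */

        for (int i = 0; i < 3; i++) {
            int down = down_ports[i];
            int up = ports - down;               /* uplinks toward the spine */
            printf("%d down / %d up -> blocking ratio %.2f:1%s\n",
                   down, up, (double)down / up,
                   down == up ? " (non-blocking)" : "");
        }
        return 0;
    }

At 18 hosts per 36-port leaf the tree is non-blocking; attaching more hosts per leaf reduces switch count at the cost of a higher blocking ratio.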

3D-Torus

  • Most cost-effective; scales incrementally to very large node counts
  • Network-locality-aware job scheduling policy support (see the sketch after this list)
  • MPI tuning for 3D-Torus topologies
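To show why locality-aware job placement pays off on a 3D-Torus, the sketch below computes the minimum hop count between two nodes when every dimension wraps around; the 8x8x8 torus size and the coordinates are arbitrary examples, not a Sugon specification.

    /* Minimum hop count between two nodes in a 3D torus (illustration only).
     * Each dimension wraps around, so a hop can go either way around the ring. */
    #include <stdio.h>
    #include <stdlib.h>

    static int ring_hops(int a, int b, int size)
    {
        int d = abs(a - b);
        return d < size - d ? d : size - d;   /* shorter way around the ring */
    }

    int main(void)
    {
        const int X = 8, Y = 8, Z = 8;        /* assumed torus dimensions */
        int src[3] = { 0, 0, 0 };
        int dst[3] = { 7, 3, 5 };

        int hops = ring_hops(src[0], dst[0], X)
                 + ring_hops(src[1], dst[1], Y)
                 + ring_hops(src[2], dst[2], Z);

        /* (0,0,0) -> (7,3,5): 1 + 3 + 3 = 7 hops thanks to wrap-around links */
        printf("minimum hops: %d\n", hops);
        return 0;
    }

Schedulers that place the ranks of one job in a compact sub-block of the torus keep this hop count, and hence communication latency, low.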