The performance of the CPU, an essential component of computer technology, has increased rapidly.
From the perspective of applications, HPC workloads generally alternate between compute and communicate phases (compute/communicate/compute/communicate...).
A high-bandwidth, low-latency network is therefore required to keep MPI communication fast.
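The sketch below is a minimal, hypothetical illustration of this compute/communicate pattern in MPI: each rank performs a local compute step, then exchanges a value with its neighbors before the next iteration. The loop bounds and the arithmetic are placeholders, not from the original text; the point is only that every iteration blocks on network communication, which is why link bandwidth and latency directly limit application throughput.

```c
/* Minimal sketch of the compute/communicate pattern (assumed example,
 * not from the original document). Each rank computes on local data,
 * then exchanges a value with its ring neighbors via MPI_Sendrecv. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank, recv = 0.0;
    for (int step = 0; step < 10; ++step) {
        local = local * 0.5 + 1.0;                      /* compute phase */

        int next = (rank + 1) % size;
        int prev = (rank + size - 1) % size;
        MPI_Sendrecv(&local, 1, MPI_DOUBLE, next, 0,    /* communicate phase */
                     &recv,  1, MPI_DOUBLE, prev, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        local += 0.1 * recv;                            /* use neighbor data */
    }
    if (rank == 0) printf("done after 10 compute/communicate steps\n");
    MPI_Finalize();
    return 0;
}
```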
The question is how to remove the network bottleneck in HPC systems so that CPU compute performance can be utilized efficiently.
Currently, the most effective way is to use dedicated high-speed networks such as InfiniBand or Omni-Path.
In the most recently published TOP500 list, 56% of the HPC systems, including all of the top 50, use high-speed networks.
High-speed networks have the following characteristics:
• High performance: currently 40–100 Gb/s of single-link transmission capability.
• Ultra-low latency: less than 1 µs application-to-application communication latency.
• High reliability: low SER (system error rate), self-managed network with link-level flow control and congestion control.
Inspur High-Speed HPC Network
1. InfiniBand 100Gb/s EDR & 56Gb/s FDR: quick start and easy management; full-speed non-blocking solutions; enterprise-level high-speed network with high performance, low latency, high density, and low power consumption.
2. Omni-Path 100Gb/s: high performance (100Gb/s, 300–500ns latency); low cost (integrated network adapter, 48-port switch); scalability to 27,000-node clusters.
3. True Scale 40Gb/s InfiniBand: flexible switch port configuration; complete redundancy of power supplies, fans, and management modules; adaptive routing support.