GPU Computing
GPU computing is widely used in large computer clusters to enable scalable, parallel execution of applications. A basic introduction to GPUs can be found in our article GPU. This article describes how GPUs are used in large clusters for massive parallelism. There are essentially two compute node architecture designs for large clusters: a homogeneous node architecture using only identical multi-core processors, or a hybrid node architecture combining CPUs with additional GPUs. This article focuses on the hybrid design. The CPUs handle control flow and integer operations, while the GPUs act as co-processors that accelerate floating-point operations. This acceleration makes such systems well suited for the analysis of big data.
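To make the co-processor model concrete, the following minimal CUDA sketch shows the division of labor on a hybrid node: the CPU host code allocates buffers and orchestrates the computation, while the floating-point work itself runs as a kernel on the GPU. It is an illustrative example, not code from any of the systems discussed here; the kernel name vecAdd, the array size, and the launch configuration are assumptions chosen for clarity.

#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each thread performs one floating-point addition.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // 1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Host (CPU) buffers, filled with sample data.
    float *ha = (float*)malloc(bytes);
    float *hb = (float*)malloc(bytes);
    float *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) buffers: the CPU orchestrates, the GPU does the FP work.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);       // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Compiled with nvcc, each of the n additions is executed by its own GPU thread. This fine-grained data parallelism is exactly what the hybrid node design exploits: the CPU stays responsible for setup and control, and the massively parallel floating-point arithmetic is offloaded to the GPU.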
Many supercomputers and large clusters in the Top500 list adopt the hybrid design. One of the top supercomputers in the world, the Tianhe-1A system of the National Supercomputing Center in Tianjin, China, uses the hybrid node architecture. It consists of 14,336 Intel Xeon X5670 CPUs with six cores each. In addition, there are 7,168 NVIDIA Tesla M2050 GPUs with 448 CUDA cores each. Each compute node combines two Intel Xeon processors with one additional NVIDIA GPU. The system achieves a performance of 2.57 PFlops at a power consumption of 4.02 MW. The hybrid CPU and GPU design significantly influences the performance-versus-power ratio of a large cluster system, and one can observe a trade-off between these two metrics. In addition to the Top500 list there is also the Green500 list, which ranks supercomputers and large clusters by their power efficiency. This list shows that systems using the hybrid architecture design achieve markedly better power efficiency.
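As a concrete reading of this metric: dividing Tianhe-1A's reported performance by its power consumption gives 2.57 PFlops / 4.02 MW ≈ 639 MFlops per watt, which is exactly the kind of performance-per-watt figure by which the Green500 list ranks systems.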
More about GPU computing
The following video gives a further introduction to this topic: