HPC – High Performance Computing
HPC stands for High Performance Computing, one of the key technologies for performing big data analytics today. It is usually enabled by powerful supercomputers and large file systems. Supercomputers are computers at the frontline of contemporary processing capacity that provide an enormous speed of calculation, and all of them depend heavily on parallelism today. This type of parallel computing is essential because it allows calculations to run faster, enables better visualizations, and supports data processing at an incredible and ever-increasing speed. HPC is recognized for tackling complex problems and increasing insight through its unique capabilities: one can perform virtual experiments that would be too dangerous or expensive in the real world, it enables the ‘simulation of real-world phenomena’ not possible otherwise, and it automates recurring processing of large quantities of data.
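The data-parallel pattern behind this idea — split the work, process the parts concurrently, combine the partial results — can be sketched in a few lines of Python. This is a minimal illustration only; the chunk count and the sum-of-squares workload are arbitrary choices made for the example, not taken from this text:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Illustrative workload: sum of squares over one chunk of the data
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Split the data into equal chunks, one per worker process
    n_workers = 4
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

    # The chunks are processed in parallel by a pool of worker processes
    with Pool(n_workers) as pool:
        partials = pool.map(process_chunk, chunks)
    total = sum(partials)

    # The parallel result matches the sequential computation
    assert total == sum(x * x for x in data)
```

Real HPC codes use MPI or OpenMP rather than Python's `multiprocessing`, but the split/compute/combine structure is the same.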
HPC as a whole ecosystem in computing includes work on ‘four basic building blocks’: theory, technology, architectures, and software. Firstly, theory, which includes numerical laws, physical models, and speed-up performance. Secondly, technology, including multi-core and many-core systems, supercomputers, fast interconnection networks, and powerful storage systems. Thirdly, different architectures such as shared memory, distributed memory, interconnects, and, more recently, general-purpose graphics processing units (GPGPUs). Finally, all of this is accessed together via software, including libraries, schedulers, monitoring systems, and applications. One interesting metric in HPC is the ‘measure of speed’. A common measure for parallel computers, established for a long time now, is the TOP500 list. It is based on the LINPACK benchmark and ranks the 500 fastest supercomputers worldwide.
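The ‘speed-up performance’ mentioned under theory can be made concrete with Amdahl's law, a standard formula (not taken from this text) that bounds the speedup achievable when only a fraction p of a program can be parallelized across n processors:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Theoretical speedup by Amdahl's law: S = 1 / ((1 - p) + p / n).

    The serial fraction (1 - p) limits the speedup no matter how many
    processors are added.
    """
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Even with 1000 processors, a program that is 95% parallel
# speeds up by less than a factor of 20:
print(round(amdahl_speedup(0.95, 1000), 1))  # → 19.6
```

This is why the theory building block matters in practice: the serial portion of a code, however small, dominates at large processor counts.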
More Information about HPC
Please refer to the following video about this topic: