High-Performance Computing
The research and teaching in High-Performance Computing focus on
different aspects of optimizing (or “tuning”) existing applications,
or applications still under development, so that results can be
obtained faster. These applications are typically large-scale
simulations that run for days or weeks and consume large amounts of
computational resources, e.g. memory, CPU cycles, and disk space.
There are several different approaches to speeding up such calculations:
Serial tuning
This area focuses on finding the bottlenecks in sequential algorithms
and applications, and on implementing solutions that are optimized for
modern cache-based CPU architectures.
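As a minimal, generic illustration of this kind of cache-oriented
tuning (a hypothetical sketch, not code from any particular project),
the following C program sums the same matrix twice; only the loop order
differs, but the row-major version matches the memory layout of C
arrays and therefore uses the cache far more effectively.

/* Hypothetical example of cache-aware serial tuning: C stores 2D data
 * row by row, so traversing it row-wise reuses each cache line, while
 * traversing it column-wise strides through memory and misses often. */
#include <stdio.h>
#include <stdlib.h>

#define N 4096

/* Cache-unfriendly: the inner loop jumps N*sizeof(double) bytes per step. */
double sum_column_major(const double *a)
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i * N + j];
    return s;
}

/* Cache-friendly: the inner loop walks contiguous memory. */
double sum_row_major(const double *a)
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i * N + j];
    return s;
}

int main(void)
{
    double *a = malloc((size_t)N * N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++)
        a[i] = 1.0;

    /* Both calls compute the same result; on a cache-based CPU the
     * row-major version is typically several times faster. */
    printf("column-major: %f\n", sum_column_major(a));
    printf("row-major:    %f\n", sum_row_major(a));
    free(a);
    return 0;
}

Both functions return the same value; the difference in run time comes
purely from memory access order, which is exactly the kind of
bottleneck serial tuning looks for.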
Parallelization
This work involves both the design and implementation of new parallel
algorithms and the efficient parallelization of existing
implementations. The major part of the work in this area focuses on
developing parallel codes for multi-core (CPU) and many-core (GPU,
accelerator) architectures, using e.g. OpenMP, OpenCL or CUDA.
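A minimal OpenMP sketch of this approach follows (an illustrative
example, not a specific project code): a single pragma distributes the
loop iterations over all available CPU cores, and the reduction clause
gives each thread a private partial sum that is combined at the end.

/* Illustrative OpenMP example: parallel summation of a large array. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const size_t n = 50000000;          /* ~400 MB of doubles */
    double *x = malloc(n * sizeof *x);
    if (!x) return 1;
    for (size_t i = 0; i < n; i++)
        x[i] = (double)i;

    double sum = 0.0;
    double t0 = omp_get_wtime();

    /* Distribute the iterations over all cores; 'reduction' avoids a
     * data race on 'sum'. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += x[i];

    double t1 = omp_get_wtime();
    printf("sum = %.0f, %d threads, %.3f s\n",
           sum, omp_get_max_threads(), t1 - t0);
    free(x);
    return 0;
}

Built with an OpenMP-capable compiler (e.g. gcc -O2 -fopenmp), the loop
scales with the number of cores as long as memory bandwidth does not
become the limiting factor.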
Run-time optimization
Large applications can often benefit from changing or optimizing the
runtime environment, e.g. the way local disks are accessed, the memory
page size, or the runtime libraries that are loaded. This can be a good
way to improve the performance of applications whose source code is not
available.
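As one concrete example of such a tweak, the following Linux-specific C
sketch (an assumption-laden illustration, not a recipe from the group)
asks the kernel to back a large allocation with huge pages in order to
reduce TLB misses; for binaries whose source code is not available, a
comparable effect is usually sought from the outside, e.g. via
system-wide transparent-huge-page settings or by preloading an
alternative allocator library.

/* Hypothetical, Linux-only sketch: request huge pages for a large
 * working set so that fewer, larger pages cover it (fewer TLB misses). */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    const size_t len = 1UL << 28;       /* 256 MiB working set */

    /* Anonymous mapping; mmap returns page-aligned memory. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Advise the kernel to use transparent huge pages for this region;
     * if the advice is not honored, normal pages are used instead. */
    if (madvise(buf, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");   /* non-fatal */

    memset(buf, 0, len);                    /* touch the memory */
    printf("allocated %zu bytes\n", len);

    munmap(buf, len);
    return 0;
}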