pith. machine review for the scientific record.

arxiv: 1409.1510 · v1 · submitted 2014-09-04 · 💻 cs.DC · hep-lat


HISQ inverter on Intel Xeon Phi and NVIDIA GPUs

Authors on Pith: no claims yet
classification: 💻 cs.DC · hep-lat
keywords: performance, architectures, gpus, intel, kernel, nvidia, obtain, xeon
original abstract

The runtime of a Lattice QCD simulation is dominated by a small kernel, which calculates the product of a sparse matrix known as the "Dslash" operator with a vector. Therefore, this kernel is frequently optimized for various HPC architectures. In this contribution we compare the performance of the Intel Xeon Phi to current Kepler-based NVIDIA Tesla GPUs running a conjugate gradient solver. By exposing more parallelism to the accelerator through inverting multiple vectors at the same time, we obtain a performance of 250 GFlop/s on both architectures. This more than doubles the performance of the inversions. We give a short overview of both architectures, discuss some details of the implementation, and describe the effort required to obtain the achieved performance.
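The multi-vector trick the abstract describes can be illustrated with a minimal block conjugate gradient that solves several right-hand sides at once, so each matrix application is amortized over all vectors. This is only a NumPy sketch of the general idea, not the paper's HISQ/Dslash implementation; the function name and dense test matrix are illustrative assumptions.

```python
import numpy as np

def batched_cg(A, B, tol=1e-10, max_iter=1000):
    """Solve A X = B for k right-hand sides simultaneously.

    A: symmetric positive-definite matrix of shape (n, n)
    B: block of right-hand sides, shape (n, k)

    Applying A to a whole block of vectors at once is the same
    parallelism-exposing idea as the multi-vector inversions in
    the paper: one matrix apply serves all k solves.
    """
    X = np.zeros_like(B)
    R = B - A @ X                       # per-column residuals
    P = R.copy()
    rs_old = np.sum(R * R, axis=0)      # squared residual norm per column
    for _ in range(max_iter):
        AP = A @ P                      # one (batched) matrix application
        alpha = rs_old / np.sum(P * AP, axis=0)
        X += P * alpha                  # broadcast alpha over each column
        R -= AP * alpha
        rs_new = np.sum(R * R, axis=0)
        if np.all(np.sqrt(rs_new) < tol):
            break
        P = R + P * (rs_new / rs_old)   # update search directions
        rs_old = rs_new
    return X
```

In a real lattice code the dense `A @ P` would be the sparse Dslash stencil, and the payoff of batching comes from reusing the gauge-field data loaded for one site across all k vectors.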

This paper has not been read by Pith yet.

discussion (0)
