Computing hardware and software
The USQCD Collaboration designs, constructs, and operates large-scale
computing systems for lattice QCD calculations with support from the
Department of Energy and collaborators in industry. Computers for lattice QCD
can be made particularly cost-effective by taking full advantage of
simplifying features of the calculations, such as regular grids, uniform and
predictable communication patterns, and relatively low memory and I/O requirements.
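To make these features concrete, the following is a minimal, hypothetical sketch (not code from any USQCD library) of the structure they imply: a scalar field on a regular, periodic four-dimensional lattice, updated site by site from its nearest neighbors. On a parallel machine each node holds one regular sub-lattice, so the only off-node data needed are thin boundary layers from nearest-neighbor nodes, which is what makes the communication uniform and predictable.

// Minimal, illustrative sketch of the regular-grid structure of lattice QCD:
// a scalar field on a periodic 4-D lattice updated from its nearest neighbors.
// Real codes store SU(3) matrices and spinors per site, but the grid, the
// neighbor pattern, and the per-site memory footprint are just as regular
// and predictable as shown here.
#include <array>
#include <cstdio>
#include <vector>

constexpr int L = 8;                       // sites per dimension
constexpr int V = L * L * L * L;           // total sites

// Flatten a 4-D coordinate (with periodic wrapping) to a linear site index.
int site(std::array<int, 4> x) {
  int idx = 0;
  for (int mu = 3; mu >= 0; --mu) idx = idx * L + ((x[mu] % L + L) % L);
  return idx;
}

int main() {
  std::vector<double> phi(V, 1.0), phi_new(V, 0.0);

  // One relaxation sweep: every site reads only its 8 nearest neighbors.
  // In a parallel code each node owns a regular sub-lattice, so the only
  // off-node accesses are thin boundary "halos" exchanged with
  // nearest-neighbor nodes -- a uniform, predictable communication pattern.
  for (int t = 0; t < L; ++t)
    for (int z = 0; z < L; ++z)
      for (int y = 0; y < L; ++y)
        for (int x = 0; x < L; ++x) {
          std::array<int, 4> c = {x, y, z, t};
          double sum = 0.0;
          for (int mu = 0; mu < 4; ++mu) {
            std::array<int, 4> up = c, dn = c;
            ++up[mu]; --dn[mu];
            sum += phi[site(up)] + phi[site(dn)];
          }
          phi_new[site(c)] = sum / 8.0;
        }

  std::printf("sites = %d, value at origin after one sweep = %f\n",
              V, phi_new[0]);
  return 0;
}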
Community software has been developed by the USQCD Collaboration
under grants from the DOE's Scientific Discovery through Advanced Computing
(SciDAC) Program.
This software enables the development of highly efficient code for both clusters and commercial supercomputers.
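A rough sketch of the layered design this approach follows is given below; the interface and names are invented for illustration and are not the actual SciDAC library APIs. The idea is that application code is written once against a small machine-independent interface, while each platform supplies its own optimized implementation underneath.

// Illustrative (hypothetical) sketch of a layered software design:
// application code is written once against a small machine-independent
// interface, and each platform supplies an optimized implementation below it.
// The names here are invented for illustration only.
#include <cstdio>
#include <vector>

// --- machine-independent interface the application codes against ---------
struct LatticeField {
  std::vector<double> data;
};

// axpy: y <- a*x + y over all lattice sites.
void axpy(double a, const LatticeField& x, LatticeField& y);

// --- one possible backend: portable reference implementation -------------
// A port to a particular cluster or supercomputer would replace this with a
// version using vector intrinsics, threads, or an accelerator, without
// touching the application code above the interface.
void axpy(double a, const LatticeField& x, LatticeField& y) {
  for (std::size_t i = 0; i < y.data.size(); ++i) y.data[i] += a * x.data[i];
}

int main() {
  LatticeField x{std::vector<double>(1024, 1.0)};
  LatticeField y{std::vector<double>(1024, 2.0)};
  axpy(0.5, x, y);             // application-level call; backend is hidden
  std::printf("y[0] = %f\n", y.data[0]);
  return 0;
}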
[Photos: clusters at Fermilab and at JLab.]
Commodity hardware for lattice QCD has been developed by a team centered at
Fermilab and JLab.
By selecting the most cost-effective and appropriately balanced
combinations of processor and network interconnect, rather than the
products that individually had the best performance, and by taking
advantage of lattice QCD's modest requirements for memory size and disk
bandwidth, large-scale clusters have been constructed
with better price/performance than any existing general-purpose parallel
computing platform.
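A back-of-the-envelope estimate, using the commonly quoted operation and data counts for the Wilson "dslash" kernel that dominates lattice QCD run time, suggests why this balance matters more than peak processor speed; the numbers below are approximate and purely illustrative.

// Rough estimate (commonly quoted counts, not measured values) of why
// lattice QCD machines must be balanced: the Wilson "dslash" stencil that
// dominates run time performs few arithmetic operations per byte of data it
// touches, so memory and network bandwidth matter as much as raw
// floating-point speed.
#include <cstdio>

int main() {
  const double flops_per_site = 1320.0;        // standard Wilson dslash count

  // Single-precision data streamed per site: 8 SU(3) gauge links
  // (18 reals each), 8 neighbor spinors (24 reals each), 1 output spinor.
  const double reals_per_site = 8 * 18 + 8 * 24 + 24;   // = 360
  const double bytes_per_site = reals_per_site * 4.0;   // = 1440 bytes

  const double intensity = flops_per_site / bytes_per_site;   // ~0.92
  std::printf("arithmetic intensity ~ %.2f flops/byte\n", intensity);

  // A node sustaining, say, 100 Gflop/s on this kernel would need roughly
  // 110 GB/s of memory bandwidth -- which is why balanced memory and
  // interconnect bandwidth, rather than peak processor speed alone,
  // determines price/performance for these clusters.
  std::printf("bandwidth needed at 100 Gflop/s ~ %.0f GB/s\n",
              100.0 / intensity);
  return 0;
}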
In recent years, clusters with GPU accelerators have played an increasing role in the USQCD program. USQCD is teaming with lattice gauge theorists employed by the NVIDIA Corporation to optimize codes for these machines and for the GPU-based Titan supercomputer at the Oak Ridge Leadership Computing Facility.
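One optimization characteristic of GPU lattice codes, sketched below in plain C++ for illustration (this is not QUDA's actual data layout or API), is reordering field data from array-of-structures to structure-of-arrays so that adjacent GPU threads, each handling one lattice site, read contiguous memory.

// Illustrative sketch of a data-layout transformation central to GPU lattice
// codes: reorder field data from array-of-structures to structure-of-arrays
// so that adjacent GPU threads, each handling one lattice site, access
// consecutive memory locations ("coalesced" reads).
#include <cstdio>
#include <vector>

constexpr int SITES = 1024;   // lattice sites handled by one GPU
constexpr int REALS = 24;     // reals per spinor (4 spins x 3 colors x 2)

int main() {
  // Array-of-structures: all 24 reals of site 0, then all of site 1, ...
  std::vector<float> aos(SITES * REALS, 1.0f);

  // Structure-of-arrays: component 0 of every site, then component 1, ...
  std::vector<float> soa(SITES * REALS);
  for (int s = 0; s < SITES; ++s)
    for (int c = 0; c < REALS; ++c)
      soa[c * SITES + s] = aos[s * REALS + c];

  // With the SoA layout, GPU thread s reading component c touches
  // soa[c*SITES + s]; threads s, s+1, s+2, ... therefore access a contiguous
  // block of memory, which is what the GPU memory system rewards.
  std::printf("soa[0] = %f\n", soa[0]);
  return 0;
}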
[Photos: BlueGene/Q at the Argonne Leadership Computing Facility; prototype BlueGene/Q at Brookhaven.]