USQCD Call for Proposals
2017
This message is a Call for Proposals for awards of time on the USQCD
computer resources dedicated to lattice QCD and other lattice field
theories. These are the clusters at Fermilab and JLab, the GPU-clusters at
Fermilab and JLab, and the BG/Q, GPU-cluster, and future USQCD machine at
BNL.
The awards will be for calculations that further the scientific goals of
the Collaboration, as laid out in the recent white papers and USQCD
proposals that can be found at /collaboration; note that relevance to the
DOE experimental program is an important reason for funding.
In this allocation year, we expect to distribute about
17.7 M BG/Q core-hours at BNL
263 M Jpsi-core hours on clusters at FNAL and JLAB
8.4 M GPU-hours on GPU clusters at FNAL, BNL and JLAB
90.3 M KNL-core hours on a KNL cluster at JLAB
350 TB disk space and 1000 TB tape at BNL
780 TB disk space and 1000 TB tape at FNAL
1100 TB disk space and 1000 TB tape at JLAB
In addition USQCD will be purchasing a system at BNL. The architecture of
the system is not yet known but we expect to allocate 300M Jpsi-core hours
on this system. We will allocate this resource once the architecture is
known, and we will send out another email at that time.
Important dates:
============
February 13: this Call for Proposals
March 8: Type A proposals due
April 18: reports to proponents sent out
April 28-29: All Hands' Meeting at JLAB (ending ~5pm)
May 31: allocations announced
July 1: new allocations start
The web site for the All Hands' Meeting is /meetings/allHands2017/
Resources:
=========
USQCD also has community resources through the DOE INCITE program at
Argonne and Oak Ridge and on the NSF supercomputer Blue Waters at NCSA.
These INCITE and NSF resources will be used as described in the successful
proposals of the groups that submitted them, following any modifications
required as a result of the INCITE and NSF review processes. These projects
and allocations will be listed together with the allocations of the SPC on
the password-protected USQCD website. These INCITE and NSF allocations will
also be considered by the SPC when allocating other USQCD resources, in
order to achieve a balanced USQCD program that addresses the
Collaboration's scientific priorities. The SPC will allocate the
zero-priority time at Argonne made available to USQCD because of the INCITE
program, as described below.
We expect to receive in this calendar year zero-priority time on the BG/Q
at Argonne. Based on previous usage and availability, we will distribute
zero-priority time starting this year as soon as our INCITE allocation has
been consumed and zero-priority time becomes available, according to the
allocations made during last year’s allocation process. This is expected
to begin around April of this year. These existing allocations for this
year’s zero-priority time will complete June 30 of this year. As part of
the current allocations process, driven by this Call for Proposals, we will
allocate the ANL zero-priority time for the second half of this calendar
year and the first half of next year. As the total amount of time cannot
be reliably estimated, we will allocate percentages of zero-priority usage.
The SPC may readjust these percentage allocations based upon observed
usage. The Oak Ridge facility does not provide a zero-priority queue. All
PIs of INCITE projects should monitor the usage of their INCITE allocations
on the USQCD web page:
http://www.mcs.anl.gov/~osborn/usqcd-spc/2013-14-mira.html
USQCD computing resources:
The Scientific Program Committee will allocate 7200 hours/year to Type A
and Type B proposals. Of the 8766 hours in an average year, the facilities
are expected to provide 8000 hours of uptime. We then reserve 400 hours
(i.e., 5%) for each host laboratory's own use and another 400 hours for
Type C proposals and contingencies, leaving 8000 - 400 - 400 = 7200 hours.
==================================
At BNL:
BG/Q:
60% of a 1024 node BG/Q rack from July 1, 2017 to September 30, 2017
16 cores/node, up to 4 threads per core
16 GB memory/node
10% of a BNL rack with time donated to USQCD
50% of a rack owned by USQCD.
total: 1800*1024*16*0.60 = 17.7 M BG/Q core-hours = 29 M Jpsi-equivalent core-hours
Institutional Cluster (IC):
40 nodes on the BNL Institutional Cluster (IC) are dedicated to USQCD
Dual-socket Broadwell CPUs
2 NVIDIA K80 GPUs per node
128 GBytes of memory per node
EDR Infiniband interconnect
1 K80 = 2.2 K40 = 2.2 * 2.7 C2050
total: 7200*40*2*2.2*2.7 = 3.4 M GPU-hours
USQCD will be purchasing a system at BNL. The first part of the system
should be ready in the summer of 2017 and has a projected speed of 36.5
TFlops. The second part of the system is projected to be available in late
fall (contingent on funding) with a projected speed of 26.6 TFlops. The
architecture of the system is not currently known. (36.5 TFlops * 7200
hours + 26.6 TFlops * 3600 hours) / (0.00122 TFlops per Jpsi core) ≈ 300 M
Jpsi-equivalent core-hours will be allocated for this projected resource.
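As a rough illustration of this conversion (a sketch only, not an official
tool): the 0.00122 TFlops-per-Jpsi-core figure follows from the rate
1 Jpsi core-hour = 1.22 GFlop/sec-hour quoted in the conversion table later
in this Call, and the speeds and hours are the projections above.

    # Sketch: convert projected sustained TFlops and allocated hours into
    # Jpsi-equivalent core-hours (1 Jpsi core = 1.22 GFlop/s = 0.00122 TFlop/s).
    JPSI_TFLOPS = 0.00122  # TFlop/s per Jpsi-equivalent core

    def jpsi_core_hours(tflops, hours):
        """Jpsi-equivalent core-hours delivered by a machine of the given
        sustained speed running for the given number of allocated hours."""
        return tflops * hours / JPSI_TFLOPS

    total = jpsi_core_hours(36.5, 7200) + jpsi_core_hours(26.6, 3600)
    print(f"{total / 1e6:.0f} M Jpsi-equivalent core-hours")  # ~294 M, quoted above as ~300 M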
The BNL IC and the new USQCD system will have access to 150 TBytes of
storage on the BNL GPFS system, with a peak bandwidth of 24 GBytes/second.
Around 200 TBytes of storage with lower bandwidth will also be available,
along with access to tape storage.
==================================
At FNAL:
224 node cluster ("Bc")
Eight-core, quad-socket 2.8 GHz AMD Opteron (Abu Dhabi) nodes
32 cores per node
64 GB memory/node
1 core-hour = 1.48 JPsi-equivalent core-hours
total: 7200*224*32*1.48 = 76.4 M Jpsi-equivalent core-hours
314 node cluster ("Pi0")
Eight-core, dual-socket 2.6 GHz Intel Xeon (Ivy Bridge) nodes
16 cores per node
128 GB memory/node
1 core-hour = 3.14 JPsi-equivalent core-hours
total: 7200*314*16*3.14 = 113.6 M Jpsi-equivalent core-hours
32 node cluster ("Pi0g")
Eight-core, dual-socket 2.6 GHz Intel Xeon (Ivy Bridge) nodes
128 GB memory/node
4 NVIDIA K40m (Kepler Tesla) GPUs per node, GPU rating 2.6
(128 total GPUs available)
GPU memory (ECC on) 11.5 GB/GPU
Each K40 gpu-hr is equivalent to 2.6 Fermi-gpu-hr
total: 7200*128*2.6 = 2.4 M GPU-hours
These clusters will share about 780 TBytes of disk space in Lustre file
systems. 1000TBytes of tape access is also available.
For further information see https://www.usqcd.org/fnal/
==================================
At JLAB:
268 node cluster ("12s")
Eight-core, dual processor Intel Sandy Bridge nodes
16 cores per node
32 GB memory/node
QDR network card, with a full-bisection-bandwidth network fabric
1 12s core-hour = 2.3 Jpsi cores
total: 7200*268*16*2.3 = 71 M Jpsi-equivalent core hours
45 node GPU cluster at JLab ("12k")
Dual 8 core Intel Sandy Bridge host nodes
4 NVIDIA K20m (2012 Kepler Tesla) GPUs/node
128 GB memory/node
FDR network fabric with full bisection bandwidth
1 K20m = 2 C2050
168 total GPUs available: in C2050 units -> 336 GPUs total
total: 7200*45*4*2 = 2.6 M GPU hours
(equivalent to ~ 210 M Jpsi core hours from the listed conversion factors)
196 node Xeon Phi / KNL cluster ("16p")
Single-socket, 64-core KNL (AVX-512: 8 double / 16 single precision)
192 GB main memory / node
32 GB high bandwidth on package memory (6x higher bandwidth)
100 Gbps bi-directional Omnipath network fabric (total 25 GB/s/node)
32 nodes / switch, 16 up-links to core / switch
1 KNL core = 2-4 Jpsi cores; this varies with effective use of the
high-bandwidth memory, cores, and AVX-512 vector units.
We will use a charge rate of 3 Jpsi cores per KNL core; we advise benchmarking codes.
total: 7200*196*64*3 = 271 M Jpsi core hours
Shared Disk
1.1 PB total for LQCD
write-through cache option (never full: files auto-migrate to tape, and so consume tape)
volatile option (never full: least-recently-used files are auto-deleted; files within the "reserved" quota are never deleted)
both: able to burst above managed quotas when needed
Requested disk space must include anything present on disk today that you intend to keep on disk.
For further information see also http://lqcd.jlab.org. Machine descriptions can be found at
https://wiki.jlab.org/cc/external/wiki/index.php/New_Users_Start_Here
===========================================================
USQCD allocation procedures:
The remainder of this document describes USQCD allocation procedures that
are not expected to vary much from year to year. All members of the USQCD
Collaboration are eligible to submit proposals. Those interested in
joining the Collaboration should contact Paul Mackenzie
(mackenzie@fnal.gov).
Requests can be of three types:
A) requests for potentially large amounts of time on USQCD
dedicated resources and/or leadership class computers, to
support calculations of benefit for the whole USQCD
Collaboration and/or addressing critical scientific needs.
There is no minimum size to the request. However, small
requests will not be considered suitable for leadership
resources. Allocations are for one year on USQCD resources.
B) requests for medium amounts of time on USQCD dedicated
resources intended to support calculations in an early
stage of development which address, or have the potential
to address the scientific needs of the collaboration;
--- No maximum, but requests are encouraged to be 2.5 M
Jpsi-equivalent core-hours or less on clusters,
or 100 K GPU-hours or less on GPU clusters.
There is no suggested size for BNL BG/Q requests. ---
Allocations are for up to 6 months.
C) requests for exploratory calculations, such as those
needed to develop and/or benchmark code, acquire expertise
on the use of the machines, or to perform investigations of
limited scope. The amount of time used by such projects
should not exceed 100 K Jpsi core-hours on clusters or 10 K
GPU-hours on the GPU-clusters. Requests for BG/Q at BNL
should be handled on a case-by-case basis.
Requests of Type A and B must be made in writing to the Scientific Program
Committee and are subject to the policies spelled out below. These
proposals must also specify the amount of disk and tape storage they will
carry forward and the amount of each that will be created in the coming
year. Projects will be charged for new disks and tapes as well as existing
disk usage. How this will be implemented is discussed in section (iii).
Requests of Type B can be made at any time of the year, and will start in the
nearest month. Requests should be sent in an e-mail message to the head of
the SPC, currently Anna Hasenfratz (anna@eotvos.colorado.edu).
Requests of Type C should be made in an e-mail message to
Paul Mackenzie (mackenzie@fnal.gov) for clusters at FNAL,
Robert Mawhinney (rdm@physics.columbia.edu) at BNL,
Chip Watson (Chip.Watson@jlab.org) for clusters at JLAB.
Collaboration members who wish to perform calculations on USQCD hardware or
use USQCD zero-priority at Argonne can present requests according to
procedures specified below. The Scientific Program Committee would like to
handle requests and awards on leadership-class computers and clusters in
their respective units, namely Blue Gene core-hours or Cray core-hours.
Requests for the BG/Q will be handled in BG/Q core hours, and requests on
the GPU clusters will be handled in GPU hours. Conversion factors for
clusters, GPUs, and leadership class computers are given below. As
projects usually are not flexible enough to switch between running on GPUs,
BG/Q, and clusters, we choose to allocate in their respective units. In
addition, since the various GPU clusters have quite different properties,
it may be useful if proposals asking for GPU time included a preference, if
any, for a particular USQCD GPU cluster.
USQCD has adopted a policy to encourage even use of allocations throughout
the year, which is very similar to policies in use at supercomputer centers
such as NERSC. Our policy requires projects to use some of their allocation
in each calendar quarter. Projects that fail to do this will forfeit some
of their allocation for the quarter, which will be added to the allocations
of projects that are ready to run. (See
https://www.usqcd.org/reductions.html
for a detailed statement of the rules.)
--------------------------------------------
The remainder of this document deals with requests of Types A and B. It is organized as follows:
i) policy directives regarding the usage of awarded resources;
ii) guidelines for the format of the proposals and deadline for submission;
iii) procedures that will be followed to reach a consensus on the research programs and the allocations;
iv) policies for handling awards on leadership-class machines.
i) Policy directives.
1) This Call for Proposals is for calculations that will further the
physics goals of the USQCD Collaboration, as stated in the proposals for
funding submitted to the DOE (see /), and have the
potential of benefiting additional research projects by members of the
Collaboration. In particular, the scientific goals are described in the
science sections of the recent SciDAC proposals and in the recent white
papers, which are placed on the same web-site. It is important to our
success in continued funding that we demonstrate our continued importance
in helping DOE experiments to succeed.
2) Proposals of Type A are for investigations of very large scale, which
may require a substantial fraction of the available resources. Proposals
of Type B are for investigations in an early stage of development, and are
of medium to large scale, requiring a smaller amount of resources.
There is no strict lower limit for requests within Type A proposals, and
there is no upper limit on Type B Proposals. However, Type B requests for
significantly more than 2.5 M Jpsi-equivalent core-hours on clusters, or
more than 100 K GPU-hours on GPU clusters, will receive significant scrutiny.
Proposals that request time on the leadership-class computers at Argonne
and Oak Ridge should be of Type A and should demonstrate that they (i) can
efficiently make use of large partitions of leadership class computers, and
(ii) will run more efficiently on leadership class computers than on
clusters.
3) All Type A and B proposals are expected to address the scientific needs
of the USQCD Collaboration. Proposals of Type A are for investigations
that benefit the whole USQCD Collaboration. Thus it is expected that the
calculations will either produce data, such as lattice gauge fields or
quark propagators, that can be used by the entire Collaboration, or that
the calculations produce physics results listed among the Collaboration's
strategic goals. Accordingly, proponents planning to generate
multi-purpose data must describe in their proposal what data will be made
available to the whole Collaboration, and how soon, and specify clearly
what physics analyses they would like to perform in an "exclusive manner"
on these data (see below), and the expected time to complete them.
Similarly, proponents planning important physics analyses should explain
how the proposed work meets our strategic goals and how its results would
interest the broader physics community. Projects generating multi-purpose
data are clear candidates to use USQCD's award(s) on leadership-class
computers. Therefore, these proposals must provide additional information
on several fronts: they should
- demonstrate the potential to be of broad benefit, for example by
providing a list of other projects that would use the shared data, or by
explaining how the strategic scientific needs of USQCD are addressed;
- present a roadmap for future planning, giving, for example,
criteria for deciding when to stop with one ensemble and start with
another;
- discuss how they would cope with a substantial increase in allocated
resources, from the portability of the code and storage needed to the
availability of competent personnel to carry out the running.
Some projects carrying out strategic analyses are candidates for running on
the leadership-class machines. They should provide the same information as
above.
4) Proposals of Type B are not required to share data, although doing so
is a plus. Type B proposals may also be scientifically valuable even
if not closely aligned with USQCD goals. In that case the proposal should
contain a clear discussion of the physics motivations. If appropriate,
Type B proposals may discuss data-sharing and strategic importance as in
the case of Type A proposals.
5) The data that will be made available to the whole Collaboration will
have to be released promptly. "Promptly" should be interpreted with common
sense. Lattice gauge fields and propagators do not have to be released as
they are produced, especially if the group is still testing the production
environment. On the other hand, it is not considered reasonable to delay
release of, say, 444 files, just because the last 56 will not be available
for a few months. After a period during which such data will remain for
the exclusive use of the members of the USQCD Collaboration, and possibly
of members of other collaborations under reciprocal agreements, the data
will be made available worldwide as decided by the Executive Committee.
6) The USQCD Collaboration recognizes that the production of shared data
will generally entail a substantial amount of work by the investigators
generating the data. They should therefore be given priority in analyzing
the data, particularly for their principal physics interests. Thus,
proponents are encouraged to outline a set of physics analyses that they
would like to carry out with these data in an exclusive manner and the
amount of time that they would like to reserve to themselves to complete
such calculations. When using the shared data, all other members of the
USQCD collaboration agree to respect such exclusivity. Thus, they shall
refrain from using the data to reproduce the reserved or closely similar
analyses. In its evaluation of the proposals the Scientific Program
Committee will in particular examine the requests for exclusive use of the
data, and will ask the proposers to revise any request found to be too
broad or otherwise excessive. Once an accepted proposal has
been posted on the Collaboration website, it should be deemed by all
parties that the request for exclusive use has been accepted by the
Scientific Program Committee. Any dispute that may arise in regards to the
usage of such data will have to be directed to the Scientific Program
Committee for resolution and all members of the Collaboration should abide
by the decisions of this Committee.
7) Usage of the USQCD software, developed under our SciDAC grants, is
recommended, but not required. USQCD software is designed to be efficient
and portable, and its development leverages efforts throughout the
Collaboration. If you use this software, the SPC can be confident that
your project can use USQCD resources efficiently. Software developed
outside the collaboration must be documented to show that it performs
efficiently on its target platform(s). Information on portability is
welcome, but not mandatory.
8) The investigators whose proposals have been selected by the Scientific
Program Committee for a possible award of USQCD resources shall agree to
have their proposals posted on a password protected website, available only
to our Collaboration, for consideration during the All Hands' Meeting.
9) The investigators receiving a Type A allocation of time following this
Call for Proposals must maintain a public web page that reasonably
documents their plans, progress, and the availability of data. These pages
should contain information that funding agencies and review panels can use
to determine whether USQCD is a well-run organization. The public web page
need not contain unpublished scientific results, or other sensitive
information.
ii) Format of the proposals and deadline for submission.
The proposals should contain a title page with title, abstract and the
listing of all participating investigators. The body, including
bibliography and embedded figures, should not exceed 12 pages in length for
requests of Type A, and 10 pages in length for requests of Type B, with
font size of 11pt or larger. If necessary, further figures, with captions
but without text, can be appended, for a maximum of 8 additional pages.
CVs, publication lists and similar personal information are not requested
and should not be submitted. Title page, proposal body and optional
appended figures should be submitted as a single pdf file, in an attachment
to an e-mail message sent to anna@eotvos.colorado.edu.
The last sentence of the abstract must state the total amount of computer
time requested: in Jpsi-equivalent core-hours for clusters, in GPU-hours
for GPU clusters, and in BG/Q core-hours for those machines. Proposals lacking this
information will be returned without review (but will be reviewed if the
corrected proposal is returned quickly and without other changes).
The body of the proposal should contain the following information, if possible in the order below:
1) The physics goals of the calculation.
2) The computational strategy, including such details as gauge and
fermionic actions, parameters, and computational methods.
3) The software used, including a description of the main algorithms and
the code base employed. If you use USQCD software, it is not necessary to
document performance in the proposal. If you use your own code base, then
the proposal should provide enough information to show that it performs
efficiently on its target platform(s).
Information on portability is welcome, but not mandatory. As feedback for
the software development team, proposals may include an explanation of
deficiencies of the USQCD software for carrying out the proposed work.
4) The amount and type of resources requested. Here one should also state
which machine is most desirable and why, and whether it is feasible or
desirable to run some parts of the proposed work on one machine, and other
parts on another. If relevant, proposals of Type A should indicate
longer-term computing needs here.
The Scientific Program Committee will use the following table to convert:
1 J/psi core-hour = 1 Jpsi core-hour
1 12s core-hour = 2.3 Jpsi core-hour
1 XK7 core-hour = 1.0 Jpsi core-hour
1 BG/Q core-hour = 1.64 Jpsi core-hour
1 C2050 GPU hour = 82 Jpsi equivalent core-hour
1 K20 GPU hour = 172 Jpsi equivalent core-hour
1 K40 GPU hour = 224 Jpsi equivalent core-hour
1 Phi MIC hour = 164 Jpsi equivalent core-hour
(1 Jpsi core-hour = 1.22 GFlop/sec-hour)
The above numbers are based on appropriate averages of asqtad, DWF, and
Clover fermion inverters. In the case of the XK7, performance is based
on a Clover inverter run on the GPUs at leadership scale. The conversion
of GPU to Jpsi is based on the average of application performance on user
jobs across all GPU systems at FNAL and JLab (including gamer as well as
non-gamer cards). See http://lqcd.fnal.gov/performance.html for details.
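For proposers who want to double-check their totals, the following is a
minimal sketch (not an official USQCD tool; the unit names and the example
request are purely illustrative) that applies the conversion factors from
the table above:

    # Sketch: convert a mixed resource request into Jpsi-equivalent core-hours
    # using the conversion table above. Unit names and example values are illustrative.
    TO_JPSI = {
        "jpsi_core_hour": 1.0,
        "12s_core_hour": 2.3,
        "xk7_core_hour": 1.0,
        "bgq_core_hour": 1.64,
        "c2050_gpu_hour": 82.0,
        "k20_gpu_hour": 172.0,
        "k40_gpu_hour": 224.0,
        "phi_mic_hour": 164.0,
    }

    def total_jpsi(request):
        """Sum a request of the form {unit: hours} in Jpsi-equivalent core-hours."""
        return sum(TO_JPSI[unit] * hours for unit, hours in request.items())

    # Hypothetical request: 5 M 12s core-hours plus 20 K K20 GPU-hours.
    example = {"12s_core_hour": 5.0e6, "k20_gpu_hour": 2.0e4}
    print(f"{total_jpsi(example) / 1e6:.1f} M Jpsi-equivalent core-hours")  # ~14.9 M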
In addition to CPU time, proposals must specify how much mass storage is
needed. The resources section of the proposal should state how much tape
and disk storage is already in use, and how much new storage is needed, for
disk and tape, in Tbytes. In addition, please also restate the storage
request in Jpsi-equivalent core-hours, using the following conversion
factors, which reflect the current replacement costs for disk storage and
tapes:
1 Tbyte disk = 40 K Jpsi-equivalent core-hour
1 Tbyte tape = 6 K Jpsi-equivalent core-hour
Projects using disk storage will be charged 25% of these costs every three
months. Tape usage will be charged at the full cost of tape storage when a
file is written; when tape files are deleted, projects will
receive a 40% refund of the charge. Proposals should discuss whether these
files will be used by one, a few, or several project(s). The cost for
files (e.g., gauge configurations) that are used by several projects will
be borne by USQCD and not a specific physics project. The charge for files
used by a single project will be deducted from the computing allocation:
projects are thus encouraged to figure out whether it is more
cost-effective to store or re-compute a file. If a few (2-3) projects
share a file, they will share the charge.
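As a worked illustration of these charging rules (a sketch under the rates
stated above; the 25% quarterly disk charge and the 40% tape refund are as
described in the preceding paragraph, and the example volumes are
hypothetical):

    # Sketch: storage charges in Jpsi-equivalent core-hours, per the rates above.
    DISK_JPSI_PER_TB = 40_000  # 1 TByte disk = 40 K Jpsi-equivalent core-hours
    TAPE_JPSI_PER_TB = 6_000   # 1 TByte tape = 6 K Jpsi-equivalent core-hours

    def disk_charge_per_quarter(tb_on_disk):
        """25% of the disk replacement cost is charged every three months."""
        return 0.25 * DISK_JPSI_PER_TB * tb_on_disk

    def tape_charge(tb_written, tb_deleted=0.0):
        """Full tape cost is charged when files are written; deletions earn a 40% refund."""
        return TAPE_JPSI_PER_TB * (tb_written - 0.40 * tb_deleted)

    # Hypothetical project: 50 TB kept on disk all year, 100 TB written to tape.
    yearly = 4 * disk_charge_per_quarter(50) + tape_charge(100)
    print(f"{yearly / 1e6:.1f} M Jpsi-equivalent core-hours")  # 2.6 M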
Projects that expect to have large I/O requirements, such as those that use
eigenvalue and deflation methods, are requested to note that in their
proposal and to work with the site managers to handle these needs as
painlessly as possible.
5) If relevant, what data will be made available to the entire
Collaboration, and the schedule for sharing it.
6) What calculations the investigators would like to perform in an
"exclusive manner" (see above in the section on policy directives), and for
how long they would like to reserve to themselves this exclusive right.
iii) Procedure for the awards.
The Scientific Program Committee will receive proposals until the deadline.
Proposals not stating the total request in the last sentence of the
abstract will be returned without review (but will be reviewed if the
corrected proposal is returned quickly and without other changes).
Proposals that are considered meritorious and conforming to the goals of
the Collaboration will be posted on the web at /, in
the Collaboration's password-protected area. Proposals recommended for
awards in previous years can be found there too.
The Scientific Program Committee (SPC) will make a preliminary assessment
of the proposals. Before the All Hands' Meeting, the SPC will send a report
to the proponents raising any concerns about the proposal.
Following the All Hands' Meeting the SPC will determine a set of
recommendations on the awards. The quality of the initial proposal, the
proponents' response to concerns raised in the written report, and the
views of the Collaboration expressed at the All Hands’ Meeting will all
influence the outcome. The SPC will send its recommendations to the
Executive Committee after the All Hands' Meeting, and inform the proponents
once the recommendations have been accepted by the Executive Committee.
The successful proposals and the size of their awards will be posted on the
web.
Scientific publications describing calculations carried out with these
awards should acknowledge the use of USQCD resources, by including the
following sentence in the Acknowledgments:
"Computations for this work were carried out in part on facilities of the
USQCD Collaboration, which are funded by the Office of Science of the U.S.
Department of Energy."
Projects whose sole source of computing is USQCD should omit the phrase "in part".