Future of HEP Computing
"Old Farts:
Stu Loken
Wolfgang von Rüden
David Williams
Richard Mount
Roman Tirler (will be absent)
Tom Nash
Fresh new faces chosen by the panel
based on the talks at the conference:
Michael Ernst
Bob Jacobsen
Abstract
In the distant past, HEP was one of a few isolated
fields that were extremely demanding of computing and
in which computing was integrated in an essential way
into the day-to-day workings of the field. HEP was
thereby rather independent of computing in the broader
world. Our experiments and labs built their own software,
established their own networks, and at times even built
specialized processing hardware.
One cannot define a moment when this situation changed.
But it is clear today that HEP computing is very much a
creature of the tremendous movements taking place in
commercially driven computing technology: the rapid
spread of computing, whether explicit or embedded, into
a ubiquitous place in much of the world's daily life.
This panel will focus on how this ongoing revolution in
the broader world of computing will affect computing
within our field.
Questions
- What are the main new challenges that HEP computing
faces over the next decade?
- At least one of them will be maximizing the
cohesiveness and effectiveness of huge, geographically
dispersed and disparate collaborations.
(David Williams, 5-10 minutes)
- How long should we expect the extremely rapid changes
in technology to continue, and what projections do you
believe are valid over the next 10 years in the
following areas:
- Collaboration tools, shared databases, etc. - Stu Loken
- Software engineering, OOPS,
Java - Wolfgang von Rüden
- Processor/storage curves -
Roman Tirler (shown by Nash?)
- What do you anticipate will be the impact of changes
in the wider world of computing, in each of these
technological areas, on the HEP challenges we have
identified?
- What are the problems that
HEP computing will need to address by its own efforts (beyond
the usual integration of commercially available components) to
meet the challenges we have identified?
- What will the geographical model for data be?
- Centralized data vs partially centralized data
(Jürgen Knobloch)
- Impact on network and computing
iron/storage requirements
- What is a regional center?
- Will OO DBs be lightweight enough to be used
throughout the data cycle?
- Java and C++
- What will be (should be) the
place for each?
- (When) will C++ code become
legacy?
- Will Java always be slower
than C++?
- Grad students should learn
both (to be more employable)?
- (When) are learning curves worth the payback?
- Should physicists learn (computing)
analysis skills?
- Role of computer scientists?
Consultants?
- How do you know you have found
a good one?
- Requirements? When in the process?
- Waterfall vs iterative development?
- Rapid prototyping?
- Reviews and checkpoints: electronic
only?
- Process?
- Let them code first? Will
they ever design?
- Daily, weekly, monthly build?
- (Why) are our release cycles slower than Netscape's
and Microsoft's?
- Will complex, do-everything programs die, or evolve
into component software?
- Word, GEANT, experiment analysis packages
- Chosen, focused functionality
- What is the appropriate level of component granularity?
- Future (appropriate) roles of NT vs Unix vs
(Java + browser)?
- Integration of HEP computing
into HEP
- What is the trend in computing's percentage of
experiment cost?
- Should (can, will) computing be included in the TPC
(total project cost)?
- in the project work breakdown structure (WBS)?
- As leaders in CHEP, what should we be saying about
computing issues to the leaders of HEP?