ACAT 2005

Abstracts of Talks in Session 1 by Author

Quick links:
Programme overview
Detailed timetable
Abstracts of Plenary and Invited Talks
Programme and Abstracts of Session 1
Programme and Abstracts of Session 2
Programme and Abstracts of Session 3

Title The Graphics Editor in ROOT
Speaker Antcheva, Ilka
Institution CERN
Abstract
The ROOT graphics editor is split into discrete units, so-called
object editors. This makes the graphical user interface easier to design
and to adapt to different user profiles.
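
As a minimal sketch of this design (assuming the ROOT convention that an
object editor derives from TGedFrame and is refreshed via SetModel(); the
class below is hypothetical, and the base-class signatures vary between
ROOT versions):

  #include "TGedFrame.h"
  #include "TAttLine.h"

  // Hypothetical object editor for line attributes. Each such unit edits
  // exactly one attribute group, so the GUI is composed from small,
  // reusable pieces instead of one monolithic editor.
  class MyLineEditor : public TGedFrame {
  public:
     MyLineEditor(const TGWindow *p) : TGedFrame(p) { /* build widgets */ }
     void SetModel(TObject *obj) override {
        // Called when the selection changes: pick up the new model object
        // and refresh the widgets from its attributes.
        fLine = dynamic_cast<TAttLine *>(obj);
     }
  private:
     TAttLine *fLine = nullptr;   // attributes edited by this unit
  };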

Title Parallel interactive and batch HEP data analysis with PROOF
Speaker Biskup, Marek
Institution CERN
Abstract
The Parallel ROOT Facility, PROOF, enables a physicist to analyze
and understand much larger data sets on a shorter time scale. It makes use
of the inherent parallelism in event data and implements an architecture
that optimizes I/O and CPU utilization in heterogeneous clusters with
distributed storage. The system provides transparent and interactive access
to gigabytes of data today. Being part of the ROOT framework, PROOF inherits
the benefits of a performant object storage system and a wealth
of statistical and visualization tools.

In this talk we will describe the latest developments on closer integration
of PROOF into the ROOT user environment, e.g. support for the popular
TTree::Draw() interface for PROOF-based trees, easy PROOF-based tree access
via the tree viewer GUI, and PROOF session access via the ROOT browser. We
will also outline how we plan to extend PROOF to support an "interactive"
batch mode in which the user can disconnect from and reconnect to several
long-running PROOF sessions. This feature is especially interesting in a
Grid environment where the data is globally distributed.
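
As a minimal sketch of the TTree::Draw() usage mentioned above (the
cluster address, file locations and branch names are placeholders, not
PROOF defaults):

  // ROOT macro: run an ordinary TTree::Draw() query in parallel via PROOF.
  {
     TProof::Open("proofmaster.example.org");   // connect to the cluster
     TChain *chain = new TChain("T");           // "T": placeholder tree name
     chain->Add("root://store.example.org//data/run*.root");
     chain->SetProof();                         // route queries through PROOF
     chain->Draw("E_t", "n_jets > 2");          // executed on the workers
  }
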
Title DAQ software for SND detector
Speaker Bogdanchikov, Alexander
Institution Budker Institute of Nuclear Physics
Abstract
The report describes the data acquisition system software for
the SND detector experiments at the new e+e- collider VEPP-2000
(Novosibirsk), which will operate in the energy range 0.4-2.0 GeV
with an expected luminosity of 10^32 cm^-2 s^-1. The system architecture
is presented and an overview of its features is given.

The distinctive features of the SND data acquisition system are the
following. Deep buffering of readout events decouples data reading from
data processing. The computer farm for event processing and selection is
implemented in such a way as to allow linear scaling of the computing
power. The operator interface is implemented with Web technologies. A
state machine, a process starter, and process control and recovery
services are designed to control the system processes. The system
configuration and data-taking conditions are stored in a relational (SQL)
database, which is accessed through an object-oriented API designed for
this project. The event processing and selection modules are embedded
into a highly configurable software framework.
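
As an illustration of the state-machine service mentioned above, a minimal
run-control sketch could look as follows (the states and transitions are
our own illustration, not the actual SND design):

  #include <map>
  #include <set>

  enum class DaqState { Idle, Configured, Running, Error };

  class RunControl {
  public:
     // Accept only legal transitions; anything else drops the system into
     // Error, where the recovery service can restart the affected processes.
     bool Transition(DaqState to) {
        static const std::map<DaqState, std::set<DaqState>> allowed = {
           {DaqState::Idle,       {DaqState::Configured}},
           {DaqState::Configured, {DaqState::Running, DaqState::Idle}},
           {DaqState::Running,    {DaqState::Configured, DaqState::Idle}},
        };
        auto it = allowed.find(fState);
        if (it == allowed.end() || !it->second.count(to)) {
           fState = DaqState::Error;
           return false;
        }
        fState = to;
        return true;
     }
  private:
     DaqState fState = DaqState::Idle;
  };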

The DAQ software provides a high level of robustness, flexibility and
scalability.

Title Towards the operation of the Italian Tier-1 for CMS: lessons learned from the CMS Data Challenge
Speaker Bonacorsi, Daniele
Institution CNAF - INFN Italy
Abstract
After the CMS Data Challenge in 2004 (DC04) - which was devised to
test several key aspects of the CMS Computing Model - a deeper insight into
most of the crucial issues in the operation of a Tier-1 within the overall
CMS infrastructure was achieved. In particular, at the Italian CNAF-INFN
Tier-1 many improvements have been made in the year since then, concerning
the data management and the distribution topology using the CMS PhEDEx
tool, the coexistence of traditional local farm operations with official
Grid-based CMS Monte Carlo production, the development and use of the CRAB
tool to give distributed users efficient access to DST data for analysis
via Grid tools, the long-term local archiving and custodial responsibility
(e.g. MSS with a Castor back-end), the daily CMS operations on Tier-1
resources shared with the other LHC (and non-LHC) experiments, and so on.
The INFN Tier-1 resources, set-up and configuration are reviewed and
discussed here, with a view to the overall operation of the regional center
in the near future, when real data from the LHC will be available.

Title The CMS analysis chain in a distributed environment
Speaker De Filippis, Nicola
Institution Dipartimento di Fisica dell'Universita' e del Politecnico di Bari e INFN
Abstract
The CMS (Compact Muon Solenoid) collaboration is making a big effort
to define the analysis model and to develop software tools that will allow
several million simulated and real data events to be analysed by a large
number of people at many geographically distributed sites.
From the computing point of view, the most complex issue in remote
analysis is data discovery and data access. Software tools were
developed to move data, make them available to the full
international community, and validate them for the subsequent analysis.
The batch analysis processing is performed with purpose-built workload
management tools, which are mainly responsible for job preparation and
job submission. Job monitoring and output management are implemented as
the last part of the analysis chain. Grid tools provided by the LCG
project are being tried out to gain access to the data and the resources,
providing a user-friendly interface to the physicists submitting the
analysis jobs.
An overview of the current implementation and of the interactions among
the above components of the CMS analysis system is presented in this
work.
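
As a toy illustration of the job-preparation step (the event counts and
the first-event/max-events convention are illustrative, not the actual
parameters of the CMS tools):

  #include <algorithm>
  #include <cstdio>

  int main() {
     const long totalEvents  = 5000000;   // events in the dataset
     const long eventsPerJob = 50000;     // chunk handled by one grid job
     const long nJobs = (totalEvents + eventsPerJob - 1) / eventsPerJob;
     // Emit one (first event, event count) pair per grid job.
     for (long j = 0; j < nJobs; ++j) {
        long first = j * eventsPerJob;
        long count = std::min(eventsPerJob, totalEvents - first);
        std::printf("job %ld: firstEvent=%ld maxEvents=%ld\n", j, first, count);
     }
  }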

Title Interactive Analysis Environment of Unified Accelerator Libraries
Speaker Fine, Valeri
Institution Brookhaven National Laboratory
Abstract
Unified Accelerator Libraries (UAL, http://www.ual.bnl.gov) is an
open accelerator simulation
environment addressing a broad spectrum of accelerator tasks, ranging
from online-oriented efficient modeling to full-scale realistic beam
dynamics studies. The paper introduces a new package integrating UAL
simulation algorithms with a Qt-based Graphical User Interface and an
open collection of analysis and visualization components. The primary
user application is implemented as an interactive and configurable
Accelerator Physics Player whose extensibility is provided by a plug-in
architecture. Its interface to data analysis and visualization modules
is based on the Qt layer (http://root.bnl.gov) developed by the STAR
experiment. The present version embodies the ROOT data analysis framework
(http://root.cern.ch), the Qt/Root package supported
by STAR (http://www.star.bnl.gov), and the Coin3D
(http://www.coin3d.org) graphics library.
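
A minimal sketch of what such a plug-in contract can look like (the names
are our own illustration, not UAL's actual API):

  // The Player sees only this base class, so new analysis or visualization
  // modules can be added without touching the application itself.
  class PlayerPlugin {
  public:
     virtual ~PlayerPlugin() = default;
     virtual const char *Name() const = 0;   // label shown by the Player
     virtual void Run() = 0;                 // one simulation/analysis task
  };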

Title Grid Technology in Production at DESY
Speaker Gellrich, Andreas
Institution DESY
Abstract
DESY is one of the world's leading centers for research with particle
accelerators and a center for research with synchrotron light. The
hadron-electron collider HERA houses three experiments which are
taking data and will be operated until 2007.
 
The H1 and ZEUS collaborations face a growing demand for Monte Carlo
events after the recent luminosity upgrade of the collider. Grid technology
turns out to be an attractive way to meet this
challenge. The core site at DESY acts as a central hub, sending production
jobs to sites which contribute Grid resources to the dedicated HERA VOs.

The DESY Grid Infrastructure deploys the LCG-2 middleware, giving DESY a
spot on the worldwide map of active LCG-2 sites. The DESY Production Grid
provides Grid core services, including all components to make DESY a
complete and independent Grid site. In addition to hosting and supporting
dedicated VOs for H1 and ZEUS, DESY fosters the Grid activities of the LQCD
community and the International Linear Collider Group.

Data management is a key aspect of Grid computing in HEP. In cooperation
with Fermilab, DESY has developed a Storage Element (SE) which consists of
dCache as the core storage system and an implementation of the Storage
Resource Manager (SRM). Access to the entire DESY data space of 0.5 PB is
provided by a dCache-based SE.

In this contribution to ACAT 2005 we will describe the DESY Grid
infrastructure in the context of the DESY Grid activities and present
operational experience and future plans.

Title Optimization of Lattice QCD codes for the AMD Opteron processor
Speaker Koma, Miho
Institution DESY
Abstract
We report the current status of the new Opteron cluster at DESY
Hamburg, including benchmarks.
Details of the optimization using SSE/SSE2 instructions and
the effective use of prefetch instructions are discussed.
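
As a generic illustration of this kind of optimization (a simple axpy
kernel, not the actual DESY lattice code; it assumes 16-byte-aligned
arrays and an even length n):

  #include <emmintrin.h>   // SSE2 intrinsics
  #include <xmmintrin.h>   // _mm_prefetch

  // y[i] += a * x[i], two doubles per SSE2 register, with data for later
  // iterations prefetched into cache while the current pair is computed.
  void axpy(double a, const double *x, double *y, long n) {
     const __m128d va = _mm_set1_pd(a);        // broadcast a into both lanes
     for (long i = 0; i < n; i += 2) {
        _mm_prefetch((const char *)(x + i + 64), _MM_HINT_T0);
        _mm_prefetch((const char *)(y + i + 64), _MM_HINT_T0);
        __m128d vx = _mm_load_pd(x + i);       // aligned 16-byte load
        __m128d vy = _mm_load_pd(y + i);
        vy = _mm_add_pd(vy, _mm_mul_pd(va, vx));
        _mm_store_pd(y + i, vy);
     }
  }
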
Title Analysis of SCTP and TCP based communication in high-speed cluster
Speaker Kozlovszky, Miklos
Institution BUTE
Abstract
Performance and financial constraints are pushing modern DAQs (Data
Acquisition Systems) to use distributed cluster environments instead of
monolithic one-box systems. Inside the cluster, the communication layer of
the nodes must support outstanding performance requirements. We are
currently investigating different network protocols that could meet the
requirements of high-speed/low-latency peer-to-peer communication within a
DAQ system. We have carried out various performance measurements with TCP
and SCTP over Gigabit Ethernet. We focus on Gigabit Ethernet
because this transport medium is broadly deployed, cost efficient, and has
a much better cost/throughput ratio than other available communication
alternatives (e.g. Myrinet, Infiniband).
To reach the highest throughput and minimize latency during data transfer,
we have applied both software and hardware tuning to the pilot system. On
the hardware side we have increased the number of network interface cards,
the memory buffers, and the CPU performance. On the software side we have
used independent pending queues, multi-streaming, and multi-threading for
both protocols.
The major topics investigated include: blocking versus non-blocking
communication, multi-rail versus single-rail connections, and jumbo frame
usage. We discuss the performance results of single/multi-stream peer-to-
peer communication with TCP and SCTP and give an overview of protocol
overhead, CPU and memory usage.
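
For illustration, sending on a chosen SCTP stream with the Linux lksctp
API looks roughly as follows (socket setup and the peer address are
omitted; the per-stream assignment is the point):

  #include <netinet/in.h>
  #include <netinet/sctp.h>   // link with -lsctp

  // Give each logical data flow its own SCTP stream, so a lost packet in
  // one flow does not head-of-line block the others.
  int send_on_stream(int sd, const sockaddr_in &peer,
                     const void *buf, size_t len, uint16_t stream) {
     return sctp_sendmsg(sd, buf, len,
                         (sockaddr *)&peer, sizeof(peer),
                         0 /*ppid*/, 0 /*flags*/, stream,
                         0 /*ttl*/, 0 /*context*/);
  }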

Title DaqProVis, a toolkit for acquisition, interactive analysis, processing and visualization of multidimensional data
Speaker Morhac, Miroslav
Institution Institute of Physics, Slovak Academy of Sciences
Abstract
In this contribution we present the data acquisition, processing and
visualization system being built at the Institute of Physics,
Slovak Academy of Sciences, Bratislava and FLNR JINR, Dubna. DaqProVis is
well suited for the interactive analysis of multiparameter data from small
and medium-sized experiments in nuclear physics. However, it can also
analyse event data from big experiments, e.g. from GAMMASPHERE. The system
is continuously being developed, improved and supplemented with new
functions and capabilities.
The data acquisition part of the system allows one to acquire multiparameter
events either directly from the experiment, from a list file of events, or
from another DaqProVis working in server mode. The capability of DaqProVis
to work simultaneously in both client and server mode enables us to
realize remote as well as distributed acquisition, processing and
visualization systems.
The raw events coming from one of the above-mentioned data sources
can be sorted according to predefined criteria (gates) and written to
sorted streams as well. The event variables can be analysed to create one-
to five-parameter histograms (spectra), analysed and compressed using an
on-line compression procedure (the amplitude analysis is carried out
simultaneously with the compression, event by event, in on-line acquisition
mode), or sampled using various sampler modes (sampling, multiscaling, or
stability measurement of a chosen event variable).
From acquired multidimensional spectra one can make slices of lower
dimensionality. Continuous scanning aimed at looking for and localizing
interesting parts of multidimensional spectra, with an automatic stop when
an attached condition is fulfilled, is also possible.
Once collected, the analysed data can be further processed using
sophisticated background elimination, deconvolution, peak searching and
fitting algorithms. A comprehensive set of both conventional and newly
developed spectrum-processing algorithms has been implemented in the system.
The system allows one to display one- to five-parameter spectra using a
great variety of conventional as well as sophisticated (shaded isosurface,
volume rendering, etc.) visualization techniques. It supports various
graphical formats (pcx, ps, jpg, bmp). If desired, all changes of individual
pictures or of the entire screen can be recorded in an AVI file. This proved
to be very efficient, e.g. in the analysis of iterative processing methods
(deconvolution, fitting).
The modular structure of the DaqProVis system provides great
flexibility for both experimental and post-experimental configurations. To
write the software we have employed an object-oriented approach. Objects
such as detection line, event, gate/condition, filter, analyser, sampler,
compressor, spectrum, picture, etc. are internally represented by
structures. The experimental, processing and visualization configurations
are stored entirely in networks of these structures.
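
A toy sketch of the gating and sorting step described above (the data
structures are our own illustration, not DaqProVis internals):

  #include <cstddef>
  #include <functional>
  #include <vector>

  struct Event { std::vector<double> par; };         // one event's variables
  using Gate = std::function<bool(const Event &)>;   // predefined condition

  // Route each raw event into every sorted stream whose gate it passes.
  void SortEvents(const std::vector<Event> &raw,
                  const std::vector<Gate> &gates,
                  std::vector<std::vector<Event>> &streams) {
     streams.assign(gates.size(), {});
     for (const Event &e : raw)
        for (std::size_t g = 0; g < gates.size(); ++g)
           if (gates[g](e)) streams[g].push_back(e);
  }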

Title Performance Comparison of the LCG2 and gLite File Catalogues
Speaker Munro, Craig
Institution Brunel University
Abstract
File catalogues are presently one of the core components of the Grid
middleware, and their performance is crucial to the performance of the
entire system. We present a detailed comparison of the
performance of the LCG File Catalogue (LFC) with that of the gLite FiReMan
catalogue developed in the EGEE project. A detailed discussion of
the merits and shortcomings of the two approaches is given,
with an emphasis on the different access protocols.

Title ILDG: DataGrids for Lattice QCD
Speaker Pleiter, Dirk
Institution NIC / DESY Zeuthen
Abstract
As the need for computing resources to carry out numerical simulations of
QCD formulated on a lattice has increased significantly, efficient use
of the generated data has become a major concern. To improve on this,
groups plan to share their configurations on a worldwide level within
the International Lattice DataGrid (ILDG). Doing so requires a standardized
description of the configurations, standards for binary file formats, and
common middleware interfaces. In this talk we will detail the requirements
for the ILDG, describe the problems and discuss the solutions. Furthermore,
we will give an overview of the implementation of the LatFor DataGrid
(LDG), which will be one of the grids within ILDG's grid-of-grids. The
implementation of LDG is a common project of DESY (Hamburg/Zeuthen),
FZJ/ZAM (Juelich), NIC (Zeuthen/Juelich) and ZIB (Berlin).

Title Storage resources management and access at TIER1 CNAF
Speaker Ricci, Pier Paolo
Institution INFN CNAF
Abstract
At present, at the LCG Tier-1 at CNAF we have two main mass
storage systems for archiving the HEP experiment data: an HSM software
system (CASTOR) and about 200 TB of different storage devices over SAN.
This paper briefly describes our hardware and software environment and
summarizes the simple technical improvements we have implemented in order
to obtain better availability and the best data-access throughput from the
front-end machines. Some test results for different file systems over SAN
are also reported.

Title Evolution of the configuration database design
Speaker Salnikov, Andrei
Institution SLAC
Abstract
The BaBar experiment at SLAC has been successfully collecting physics
data since 1999. One of the major parts of its on-line system is
the configuration database, which provides other parts of the
system with the configuration data necessary for data taking.
Originally the configuration database was implemented
in the Objectivity/DB ODBMS. Recently BaBar performed a
successful migration of its event store from Objectivity/DB
to ROOT, and this prompted a complete phase-out of
Objectivity/DB in all other BaBar databases. It required
a complete redesign of the configuration database to hide
any implementation details and to support multiple
implementations of the same interface. In this paper we
describe the result of the migration of the configuration database,
its new design, its implementation strategy and details.
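
A minimal sketch of the interface-hiding idea (the names are hypothetical;
the point is that clients never see which backend stores the data):

  #include <memory>
  #include <string>

  // Abstract interface seen by the on-line system; no storage details leak.
  class ConfigDb {
  public:
     virtual ~ConfigDb() = default;
     virtual std::string Get(const std::string &key) const = 0;
  };

  // One concrete backend; an Objectivity/DB one could sit alongside it.
  class RootConfigDb : public ConfigDb {
  public:
     std::string Get(const std::string &key) const override {
        return {};   // would read the value from a ROOT file
     }
  };

  // The only place that knows which implementation is in use.
  std::unique_ptr<ConfigDb> MakeConfigDb() {
     return std::make_unique<RootConfigDb>();
  }
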
Title Metadata Services on the Grid
Speaker Santos, Nuno
Institution CERN
Abstract
We present the design of a metadata service for the Grid which has been
developed in the ARDA project and which is now evolving as a common effort
together with the gLite Data Management team. The results of extensive
performance studies with our implementation of the service are shown,
including a comparison of the SOAP-based implementation of the interface
with an implementation based on TCP streaming. This allows us to quantify
the implications of using SOAP as a metadata access
protocol. Finally, the activity of the ARDA team on metadata services within
the HEP community is reviewed.
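
To make the streaming alternative concrete: framing of the following kind
avoids SOAP's per-message XML envelope and parsing cost (an illustrative
format, not the ARDA wire protocol):

  #include <arpa/inet.h>   // htonl
  #include <cstdint>
  #include <vector>

  // Append one length-prefixed record to an outgoing TCP buffer; many
  // records can then be streamed back-to-back over a single connection.
  void AppendRecord(std::vector<char> &out, const char *data, uint32_t len) {
     const uint32_t be = htonl(len);      // length in network byte order
     const char *p = reinterpret_cast<const char *>(&be);
     out.insert(out.end(), p, p + sizeof be);
     out.insert(out.end(), data, data + len);
  }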

Title InfiniBand
Speaker Schwickerath, Ulrich
Institution Forschungszentrum Karlsruhe
Abstract
InfiniBand is an emerging technology which is becoming more and more
interesting for both high-performance and high-throughput applications,
due to its good performance and falling prices. The
Institute for Scientific Computing (IWR) of the Forschungszentrum
Karlsruhe was amongst the first adopters of 4x InfiniBand in Germany.
In this presentation, experiences with MPI-based applications
and performance results of our own developments on various platforms are
presented, and recent developments in the field are reviewed.
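
A generic ping-pong benchmark of the kind commonly used for such
interconnect measurements (a sketch, not IWR's actual test code):

  #include <mpi.h>
  #include <cstdio>
  #include <vector>

  int main(int argc, char **argv) {
     MPI_Init(&argc, &argv);
     int rank;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     const int n = 1 << 20;               // 1 MiB message
     std::vector<char> buf(n);
     if (rank == 0) {
        double t0 = MPI_Wtime();
        MPI_Send(buf.data(), n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf.data(), n, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::printf("round trip: %g s\n", MPI_Wtime() - t0);
     } else if (rank == 1) {
        MPI_Recv(buf.data(), n, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf.data(), n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
     }
     MPI_Finalize();
  }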

Title The apeNEXT Project
Speaker Simma, Hubert
Institution DESY
Abstract
Numerical simulations in theoretical high-energy physics (Lattice QCD) require huge computing resources. 
Several generations of massively parallel computers optimised for these applications have been developed 
within the APE (array processor experiment) project. Large prototype systems of the latest generation, 
apeNEXT, are currently being assembled and tested.

This talk provides an overview of the hardware and software architecture of apeNEXT, describes its new features,
such as the SPMD programming model and the C compiler, and reports on the current status.

Title Monte Carlo Mass Production for the ZEUS experiment on the Grid
Speaker Stadie, Hartmut
Institution DESY
Abstract
The detector and collider upgrades for HERA-II have drastically
increased the demand on computing resources for Monte Carlo production for
the ZEUS experiment. To close the gap, the existing production system was
extended to use Grid resources. This extended system has been used in
production since November 2004. Using 25 different LHC Computing Grid (LCG)
sites, more than 100 million events have been simulated and reconstructed,
exceeding the capacity of the old system. We will present the production
setup and introduce the toolkit that was developed by ZEUS to use the
existing Grid middleware efficiently. Finally, we will report on our
experience of running mass production on the Grid and on our future plans.

Title Grid Middleware Configuration at the KIPT CMS Linux Cluster
Speaker Zub, Stanislav
Institution Institute of High Energy Physics and Nuclear Physics (NSC KIPT)
Abstract
Problems associated with the storage, processing and analysis of the huge
data samples expected in experiments planned at the Large Hadron Collider
(LHC) are discussed. The current status of, and problems associated with,
the installation of LCG middleware on the KIPT CMS Linux Cluster (KCLC),
which is a part of the Moscow distributed regional center for LHC data
analysis, are outlined. The configuration and testing of the LHC Computing
Grid middleware at the KCLC are described. The participation of the KCLC
in CMS Monte Carlo event production is presented.



Last updated: Tue Sep 27 15:04:55 2005