SCinet
High-Performance
Bandwidth Challenge
Continuing the tradition started at SC2000, SCinet
and Qwest Communications are sponsoring the Third
Annual High-Performance Bandwidth Challenge. For the
Bandwidth Challenge, applicants from science and engineering
research communities across the globe will use the
unique SCinet infrastructure to demonstrate emerging
techniques or applications, many of which consume
enormous amounts of network resources. At SC2001,
a single application consumed more than 3 billion
bits per second (3 Gbps). Another application creatively
consumed (squandered?) significant aggregate amounts
of bandwidth by connecting to a large number of remote
sites.
For
SC2002, applicants are challenged to significantly
stress the SCinet network infrastructure while delivering
innovative application value on an OC-48 or higher
(OC-192!) interconnect. In turn, SCinet facilitates
access to the networks, provides technical support
to applicants, and arranges equipment, floor space,
and rack space for applicants with demonstrable
needs.
Qwest
Communications is sponsoring the award of one or more
monetary prizes for the applications that make the
most effective and/or courageous use of SCinet resources.
The primary measure of performance will be the verifiable
network throughput as measured from the contestant's
booth through the SCinet switches and routers to external
connections.
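For illustration, aggregate throughput of this kind is typically derived from interface byte counters sampled at fixed intervals. The sketch below (Python, Linux-only, reading /proc/net/dev) shows the arithmetic involved; the interface name "eth0" is a placeholder, and SCinet's official figures come from its own switch and router instrumentation, not from participant-side code like this.

    import time

    def iface_bytes(iface):
        """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])
        raise ValueError("interface %s not found" % iface)

    def throughput_gbps(iface="eth0", interval=1.0):
        """Sample the counters twice and convert the delta to gigabits per second."""
        rx0, tx0 = iface_bytes(iface)
        time.sleep(interval)
        rx1, tx1 = iface_bytes(iface)
        return ((rx1 - rx0) + (tx1 - tx0)) * 8 / interval / 1e9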
Seven
full proposals were received prior to the initial
application deadline. The abstracts received with
these proposals are provided here.
Bandwidth
Gluttony - Distributed Grid-Enabled Particle Physics
Event Analysis over enhanced TCP/IP
Submitted by Julian Bunn, California Institute of
Technology
"Using distributed databases at Caltech, CERN,
possibly UCSD and other HEP institutes, we will show
a particle physics analysis application that issues
remote database selection queries, prepares virtual
data object collections, moves those collections across
the WAN using specially enhanced TCP/IP stacks, and
renders the results in real time on the analysis client
workstation in Baltimore. This scheme is a preview
of a general "Grid Enabled Analysis Environment"
that is being developed for CERN's LHC experiments.
It makes use of modified TCP with Adaptive Queue Management
to achieve excellent throughput for large data transfers
in the WAN."
Bandwidth
to the World
Submitted by Les Cottrell, Stanford Linear Accelerator
Center
"The unprecedented avalanche of data already
being generated by and for new and future High Energy
and Nuclear Physics (HENP) experiments at Labs such
as SLAC, FNAL, KEK and CERN is demanding new strategies
for how the data is collected, shared, analyzed and
presented. For example, the SLAC BaBar experiment
and JLab are each already collecting over a TByte/day,
and BaBar expects to increase by a factor of 2 in
the coming year. The Fermilab CDF and D0 experiments
are ramping up to collect similar amounts of data,
and the CERN LHC experiment expects to collect over
ten million TBytes. The strategy being adopted to
analyze and store this unprecedented amount of data
is the coordinated deployment of Grid technologies
such as those being developed for the Particle Physics
Data Grid and the Grid Physics Network. It is anticipated
that these technologies will be deployed at hundreds
of institutes that will be able to search out and
analyze information from an interconnected worldwide
grid of tens of thousands of computers and storage
devices. This in turn will require the ability to
sustain over long periods the transfer of large amounts
of data between collaborating sites with relatively
low latency. The Bandwidth to the World project is
designed to demonstrate the current data transfer
capabilities to several sites with high performance
links, worldwide. In a sense the site at SC2002 is
acting like a HENP tier 0 or tier 1 site (an accelerator
or major computation site) in distributing copies
of the raw data to multiple replica sites. The demonstration
will be over real live production networks with no
efforts to manually limit other traffic. Since we will be able
to saturate our 1 Gbps link to SCinet by turning off the
10Gbps link, and since we control the router in our booth
at one end of the congested link, we also hope to
investigate and demonstrate the effectiveness of the
QBone Scavenger Service (QBSS) in managing competing
traffic flows and its effect on the response time of
lower-volume interactive traffic on high-performance
links."
Data
Services and Visualization
Submitted by Helen Chen, Sandia National Laboratories
"The ability to do effective problem setup and
analysis of simulation results is critical to a complete,
balanced problem-solving environment for ASCI. Research
scientists and analysts must be able to efficiently
design complex computational experiments, as well
as be able to see and understand the results of those
experiments in a manner that enables unprecedented
confidence in simulation results. We demonstrate two
visualization techniques that utilize a 10Gbps network
infrastructure to deliver robust data services for
the management and comprehension of very large and
complex data sets. The LBL Visapult application visualizes
a large scientific data set that is stored in an InfinARRAY
File System (IAFS). IAFS aggregates, over TCP/IP,
the storage of a number of physical file systems into
a single virtual file system. Its performance is determined
by the interconnect bandwidth, and the number of parallel
storage processors in the system, thereby offering
a very scalable solution for servicing scientific
supercomputing. A possible alternative to IAFS is
to employ remote storage, using the emerging IP storage
technology such as iFCP and iSCSI as the data source
to Visapult. The Sandia V2001 is an Interactive Video
System that transports high-resolution video over
local- or wide-area IP networks. A V2001 Encoder connects
to the DVI or RGB video from a computer as if it were
a flat panel monitor. It delivers the video signal
presented at its video port over Gigabit Ethernet.
At the visualization end, a V2001 Decoder displays
the received signal to a panel monitor connected via
its DVI or RGB port. The V2001 Decoder provides low-latency
interactivity on remote images using a USB-based keyboard
and mouse."
Global
Telescience featuring IPv6
Submitted by Mark Ellisman, National Center for Microscopy
and Imaging Research (NCMIR)
"The NCMIR Lab at the University of California
(UCSD) and the San Diego Supercomputer Center (SDSC)
intends to demonstrate another "real" scientific
application utilizing native IPv6 and a mixture of
high bandwidth and low latency. In our demonstration
we will feature a network enabled end-to-end system
for 3D tomography utilizing network resources to remotely
control an Intermediate Voltage Electron Microscope,
transfer data to remote storage resources and complete
compute jobs on distributed heterogeneous computer
systems. The process of finding features using the
microscope is a visually guided task in which users
must distinguish features in a low-contrast, high-noise
medium. The nature of this task calls for the highest
video quality possible when navigating the specimen
in the microscope. To address these challenges, the
Telescience system is actively integrating digital
video over native IPv6 networks, providing high-quality,
low-latency video for use in navigating the specimen
in the microscope. Similar to our presentation last
year, we will continue to improve upon past achievements.
This year's demo will feature higher bandwidth usage
as well as other technological improvements."
Grid
Datafarm for a HEP Application
Submitted by Osamu Tatebe, National Institute of Advanced
Industrial Science and Technology (AIST)
"This is a high-energy physics application that
simulates the ATLAS detector, which will be operational
by 2007 at CERN. Currently, six clusters, three in
the US and three in Japan, comprise a cluster-of-cluster
filesystem (Gfarm filesystem). The FADS/Goofy simulation
code based on the Geant4 toolkit simulates the ATLAS
detector and generates a hits collection (raw data)
in this Gfarm filesystem. Physically, each cluster
generates the corresponding part of the hits collection
and stores it in its cluster filesystem, which will
be replicated to all other cluster filesystems, although
every operation is uniformly performed on the Gfarm
cluster-of-cluster filesystem. Replicated files will
be transparently accessed, and used for fault tolerance
and load balancing."
Real-Time
Terascale Computation for 3D Tele-Immersion
Submitted by Herman Towles, University of North Carolina
at Chapel Hill
"Tele-Immersion, an ability to share presence
with distant individuals, situations and environments,
may provide vast new possibilities for human experience
and interaction in the near future. Combining correlation-based
3D reconstruction and view-dependent stereo display
technologies, the University of Pennsylvania and the
University of North Carolina have previously demonstrated
a prototype 3D tele-immersion system running over
Abilene between Philadelphia and Chapel Hill. In collaboration
with the Pittsburgh Supercomputing Center, we are
now working to harness the massive computational power
of PSC's new Terascale Computing System (LeMieux)
to improve our reconstruction volume, reconstruction
quality and frame rate performance. We propose to
demonstrate 3D tele-immersion using the remote computational
resources of the Pittsburgh Supercomputing Center
to do high-quality, 3D reconstruction of a dynamically
changing, room-size environment at SC2002 and to display
it with view-dependent stereo at a nearby office-like
booth. We will set up both of these environments at SC2002:
an office for acquisition and an office for display.
In the latter office, there will be a large projective
stereo display providing a live, 3D portal into the
acquisition office. The display will use passive stereo
technology with position tracking of the user for
view-dependent rendering. The first office and occupant(s)
will be acquired with an array of up to 45 cameras.
Raw 2D images from the acquisition office (at SC2002
in Baltimore) will be transported to PSC (in Pittsburgh)
and processed into a 3D scene description in real-time.
This 3D scene description will then be transported
back to the display office (at SC2002 in Baltimore)
for 3D rendering. In addition to this remote interactive
compute mode, we will provide a local reconstruction
mode (that runs without PSC) and a local playback
mode of pre-computed reconstructions for those times
when remote computing or networking resources are
not available."
Wide
Area Distributed Simulations using Cactus, Globus
and Visapult
Submitted by John Shalf, Lawrence Berkeley National
Laboratory
"We will perform a "hero calculation"
of unprecedented scale that will consume unprecedented
amounts of network bandwidth. The calculation will
model gravitational waves generated during the collision
of black holes. A single simulation will be distributed
amongst several MPP supercomputers at several sites.
The distributed simulation will send simulation results
over multiple high-capacity network links to the SC02
show floor for visualization and analysis. We expect
this year's entry to set new records for extreme bandwidth
consumption. This year's entry builds on the effort
of our winning SC2001 entry in several important aspects.
We will demonstrate a system that provides tracking,
remote monitoring, management interfaces, and high
performance parallel visualization of supercomputing
simulations that span across multiple parallel supercomputers
on the Grid. We will also demonstrate that both fault-resilient
unreliable protocols and even some custom reliable
protocols are capable of using all available networking
resources with a high degree of efficiency. This endeavor
will lay the groundwork for the kind of tools necessary
for making efficient use of high performance research
networks to support Grid and metacomputing activities.
The entry will consist of the Cactus code framework,
the Visapult parallel visualization system and Globus
Grid infrastructure. The Cactus code will run across
multiple parallel supercomputers at different sites
including NERSC, LBNL, Sandia, Poland and other similar
resources using MPICH-G2 for MPI and Globus GRAM for
job launching/management. The Cactus steering infrastructure
will be used to remotely control the running code
and the Visapult system will provide extremely high
performance parallel visualization of the current
physics being evolved by Cactus using cluster computers
located on the SC2002 show floor."
Data
Reservoir
Submitted by Kei Hiraki, University of Tokyo
Data Reservoir is a data sharing system that uses
very high-speed internet connections (up to 100Gbps)
between distant locations. The Data Reservoir utilizes
the low-level iSCSI protocol and has filesystem transparency.
Parallel data transfer with hierarchical data striping
is a key factor in achieving the full bandwidth of the
high-speed network. Our system has two nodes that are
connected by a 10Gbps link, and each node consists
of dozens of 1U IA-servers and a 10Gbps-capable switch.
Our software environment is Red Hat Linux and uses
NFS and the iSCSI driver.
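Parallel transfer with data striping, as described above, can be sketched as one TCP stream per receiver, each stream carrying every Nth block of the source file; the hosts, port, and block size below are placeholders, not the actual Data Reservoir design.

    import socket, threading

    BLOCK = 1024 * 1024                               # 1 MB blocks (illustrative)
    PEERS = [("node0", 5001), ("node1", 5001),
             ("node2", 5001), ("node3", 5001)]        # hypothetical receivers

    def send_stripe(path, stripe, nstripes, peer):
        """Scan the file and transmit only this stream's share of the blocks."""
        with socket.create_connection(peer) as sock, open(path, "rb") as f:
            block_no = 0
            while True:
                data = f.read(BLOCK)
                if not data:
                    break
                if block_no % nstripes == stripe:
                    sock.sendall(data)
                block_no += 1

    def parallel_send(path):
        """Run one sending thread per peer so the streams proceed concurrently."""
        threads = [threading.Thread(target=send_stripe, args=(path, i, len(PEERS), peer))
                   for i, peer in enumerate(PEERS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()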
Below
is a schedule for presentation of the High-Performance
Bandwidth Challenge Entries. All demonstrations will
take place on the exhibit hall show floor.
Tuesday, November 19
  2:00pm    Entry 1    Kei Hiraki, "Data Reservoir"
  3:00pm    Entry 2    Mark Ellisman, "Global Telescience Featuring IPv6"
  4:00pm    Entry 3    currently unused

Wednesday, November 20
  11:00am   Entry 4    Osamu Tatebe, "Grid Datafarm for a HEP Application"
  12:00pm   Entry 5    Les Cottrell, "Bandwidth to the World"
  1:00pm    Entry 6    Herman Towles, "Terascale Computation for 3D Tele-Immersion"
  2:00pm    Entry 7    Helen Chen, "Data Services and Visualization"
  3:00pm    Entry 8    Julian Bunn, "Bandwidth Gluttony"
  4:00pm    Entry 9    John Shalf, "Simulations using Cactus, Globus and Visapult"

SCinet will attempt to take all measurements during the above time slots. If network or operational problems are encountered, the judges will set aside additional time slots on Tuesday and Wednesday evening.

Thursday, November 21
  8:30am - 11:30am    Judging
  11:30am - 1:00pm    SC2002 Netcasting (please keep bandwidth utilization to a MINIMUM during this period; SCinet will be watching!)
  1:30pm              Awards Ceremony
  3:00pm              All entries are encouraged to run simultaneously.