Third Workshop on Massively Parallel Processing (WMPP)

Saturday 26 April 2003 at the
International Parallel and Distributed Processing Symposium
Nice, FRANCE



Message from the General and Program Chairs
Welcome to the Third Workshop on Massively Parallel Processing!
This year's workshop builds
on the success of the
First and
Second Workshops on Massively Parallel Processing, held as part of
IPDPS'01 and
IPDPS'02. The first and second
workshops featured invited keynote talks by Peter Kogge of Notre Dame and David Bader of the
University of New Mexico, respectively. Each workshop had around 10
papers, and following the usual practice at IPDPS, the paper
abstracts were published in a hardcopy volume and the
complete papers on an accompanying CD-ROM.
As you can see from the schedule below,
the workshop has grown to 15 papers this year,
and features an invited keynote talk by
Thomas
Sterling (Caltech and JPL) that reflects recent
initiatives by Caltech, Cray, and other institutions under the DARPA
High Productivity Computing Systems Program.
We hope that you enjoy the workshop!
Schedule
Session 1 (Opening Session) -- 8:00-9:10
(Session Chair -- Johnnie Baker, Kent State University)
- Opening Remarks
Johnnie Baker,
Kent State University
-
KEYNOTE ADDRESS: Next Generation MPP Architecture in the Petaflops Decade
Thomas Sterling,
California Institute of Technology & NASA Jet Propulsion Laboratory
Abstract:
The MPP has displaced all previous forms of
parallel processing architecture and has dominated high-end computing for
more than a decade. But the low efficiencies experienced on some
important classes of applications with the largest ASCI MPPs, and the
rapid growth of commodity cluster computing with its high
performance-to-cost ratio, challenge the future of the MPP even as it
leads the world in sustained performance with the Japanese Earth
Simulator. Commodity clusters (including Constellations) now comprise
approximately half of the Top-500 list of computers measured by the
Linpack benchmark and benefit from much of the same technology
employed by MPPs without the development costs or lead time. However,
the source of the strength of clusters, their sole use of COTS
subsystems, is also their principal weakness and is not shared by
MPPs. The future generation of MPPs as represented by the DARPA High
Productivity Computing Systems Program is transforming high end computer
architecture with dramatic new strategies addressing the drivers of
performance degradation resulting from low efficiency and exploiting
new opportunities in VLSI computing structures as yet untapped by
conventional microprocessor architecture to deliver unprecedented
performance-to-cost. This talk will present a set of innovations in
MPP architecture being explored by the Cray Cascade project that may
catalyze a renaissance in high end computer architecture and determine
the next generation of MPP architecture as it enters the
trans-Petaflops performance regime of the next decade.
Break -- 9:10-9:25
Session 2 (Architecture) -- 9:25-10:05
(Session Chair -- David Andrews, University of Kansas)
- A Fine-Grained Parallel Pipelined Karhunen-Loeve Transform
Martin Fleury, Bob Self, Andy Downton,
University of Essex
- Trident: Technology-Scalable Architecture for Data Parallel Applications
Stanislav Sedukhin and Mostofa Soliman,
University of Aizu
Break -- 10:05-10:30
Session 3 (Architecture) -- 10:30-12:05
(Session Chair -- Robert Walker, Kent State University)
- Parallel Cellular Programming for Developing Massively Parallel Emergent Systems
Domenico Talia,
Università della Calabria
- Architectural Frameworks for MPP System on a Chip
David Andrews and Douglas Niehaus,
University of Kansas
- Importance of SIMD Computation Reconsidered
Will Meilander, Johnnie Baker, and Mingxian Jin,
Kent State University
- Multiple Instruction Stream Control for an Associative Model of Parallel Computation
M. Scherger, J. Baker, and J. Potter,
Kent State University
- Implementing a Scalable ASC Processor (*)
Hong Wang and Robert A. Walker,
Kent State University
Break (lunch on your own) -- 12:05-1:30
Session 4 (System Management in MPP) -- 1:30-3:10
(Session Chair -- Jie Wu, Florida Atlantic University )
- System Management in the BlueGene/L Supercomputer
The IBM BlueGene/L Team,
IBM
- An Executable Analytical Performance Evaluation Approach for Early Performance Prediction
A. Jacquet, V. Janot, R. Govindarajan, C. Leung, G. Gao, and T. Sterling,
University of Delaware
- Automatic Resource Management using an Adaptive Parallel Environment
David Wangerin and Isaac Scherson,
University of California-Irvine
- Partitioning with Space-Filling Curves on the Cubed-Sphere
John Dennis,
National Center for Atmospheric Research
- A Decentralized Hierarchical Scheduler for a Grid-based Clearinghouse
Xavier Percival, Cai Wentong, and Francis Lee Bu Sung,
Nanyang Technological University
Break -- 3:10-3:40
Session 5 (Algorithms and Models for MPP) -- 3:40-4:30
(Session Chair -- Johnnie Baker, Kent State University)
- Parallel Algorithms to Find the Voronoi Diagram and the Order-k Voronoi Diagram (*)
Christian Trefftz and Joseph Szakas,
Grand Valley State University
- GCA: A Massively Parallel Model (*)
Wolfgang Heenes, Rolf Hoffmann, and Klaus-Peter Volkmann,
Darmstadt University of Technology
- On Self-similarity and Hamiltonicity of Dual-Cubes
Changfu Wu and Jie Wu,
Florida Atlantic University
Post-Workshop Wrap-up & Planning for Next Year -- 4:30-5:00
(*) = short presentation (15 minutes)
Workshop Organizers
- Organizing Committee:
- Program Committee:
- Nael Abu-Ghazaleh, SUNY Binghamton
- Johnnie Baker, Kent State University
- Thomas Bräunl, The University of Western Australia
- Ray Hoare, University of Pittsburgh
- Mahmut Kandemir, Penn State University
- Peter Kogge, University of Notre Dame
- H. J. Siegel, Colorado State University
- Theo Ungerer, University of Karlsruhe
- Robert Walker, Kent State University
- Philip A. Wilsey, University of Cincinnati
- Publicity Committee:
- Nael Abu-Ghazaleh, SUNY Binghamton
- Ray Hoare, University of Pittsburgh
Web pages for the KSU Parallel Processing Group
(http://www.mcs.kent.edu/~parallel)
are maintained by
parallel@mcs.kent.edu