This course provides graduate students in computer science and in other fields of science and engineering with experience in parallel and distributed computing. It gives an overview of parallel and distributed computers and of parallel computation, and addresses architectures, languages, environments, communications, and parallel programming. Emphasis is placed on understanding parallel and distributed computers and on portable parallel programming with MPI.
Two 500-level CS courses or consent of the instructor.
No previous experience with parallel computers is necessary. Programming skill in a high-level programming language such as C or Fortran is required.
Students will learn about parallel and distributed computers. They will be able to write portable programs for parallel or distributed architectures using the Message-Passing Interface (MPI) library.
Introduction to Parallel Computing (tentative)
by Vipin Kumar, Ananth Grama, Anshul Gupta, and George Karypis
Using MPI: Portable Parallel Programming with the Message-Passing Interface (optional)
by William Gropp, Ewing Lusk, and Anthony Skjellum
This is a sample outline. The exact outline will be determined by the instructor offering the course:
Parallel architectures and communications:
1. limitations of sequential computers
2. SISD, SIMD, MIMD and networked computers
3. shared memory and distributed memory computers
4. static and dynamic interconnections
5. message routing schemes
Performance and scalability:
1. speedup, granularity, cost-optimality
2. isoefficiency functions
Amdahl's law and its applicability
Fast Fourier Transform (FFT)
Cost-effectiveness of meshes and hypercubes for FFT
Dense matrix computations:
1. striped and checkerboard partitionings
2. matrix transposition
3. matrix-vector and matrix-matrix multiplications
Cannon's and Fox's algorithms
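As a preview of Cannon's algorithm from the list above, the following is a serial simulation in C of its skew-and-shift structure, under the simplifying assumption of an N x N process grid holding one matrix element per "process" (in the real algorithm each process holds a block). All names here are illustrative:

```c
#define N 3  /* simulated N x N process grid, one element per process */

/* Serial simulation of Cannon's algorithm for C = A * B.
 * Communication between grid neighbors becomes a cyclic
 * shift of the rows of A and the columns of B. */
void cannon(double A[N][N], double B[N][N], double C[N][N]) {
    double a[N][N], b[N][N];

    /* Initial alignment: shift row i of A left by i positions,
     * and column j of B up by j positions. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = A[i][(j + i) % N];
            b[i][j] = B[(i + j) % N][j];
            C[i][j] = 0.0;
        }

    /* N compute-and-shift steps. */
    for (int step = 0; step < N; step++) {
        double ta[N][N], tb[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                C[i][j] += a[i][j] * b[i][j]; /* local multiply-accumulate */
                ta[i][j] = a[i][(j + 1) % N]; /* shift A left by one */
                tb[i][j] = b[(i + 1) % N][j]; /* shift B up by one */
            }
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                a[i][j] = ta[i][j];
                b[i][j] = tb[i][j];
            }
    }
}
```

In an actual MPI implementation, each shift would typically be an MPI_Sendrecv_replace between neighbors obtained from MPI_Cart_shift on a Cartesian communicator, which is exactly the style of program developed later in the MPI portion of the course.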
Message-Passing Interface:
1. basic MPI functions
2. blocking and nonblocking communications
3. local and global communication functions
4. groups and communicators
5. applications and case studies
Advanced Message Passing in MPI:
1. matrix algorithms
2. linear systems
The exact details of graded work in this course will be determined by the instructor offering the course and will be made available in the syllabus at the first class meeting. Typically, a student's grade is a weighted average of homework assignments, programming projects, and midterm and final examinations. A typical weighting is:
Homework - 20%
Programming Projects - 30%
Midterm Examination - 20%
Final Examination - 30%
Letter Grades: A: 100-86, B: 85-76, C: 75-60, E: 59-0.