Fall 2024 D2 Course Plan. Instructor: Vlad Wojcik. Support: Mubashir Murshed. Lecture Room: TH248
INSTRUCTOR:
Wlodzimierz ("Vlad") WOJCIK, vwojcik@cogeco.ca, Office: J213; Teaching Assistant: Mubashir Murshed
TIMES AND LOCATIONS:
Lectures and Seminars: Room TH248, Tuesdays and Fridays, 4 PM to 5:30 PM. Mode of delivery is lecture, face-to-face. All students are expected and required to attend lectures. To pass the course, all students must take the tests and the final exam and submit seminar posters.
Office Consultation Hours: Tuesdays and Fridays, 3 PM to 4 PM (just before lectures), Room J213. Students unable to attend these consultation hours are asked to contact the Instructor via e-mail to make alternate arrangements.
Test Locations: as per course plan (link above).
COURSE OBJECTIVE:
To familiarize students with basic ideas pertaining to parallel computation. We will depart from the standard architecture of the von Neumann-type digital computer. A number of parallel computing architectures will be discussed, including SISD, SIMD, MIMD and data flow machines. Multiprocessor organizations: arrays, meshes of trees, hypercubes. Biological inspirations for multiprocessor configurations. Issues of dynamic and static machine reconfigurations. Concepts of diameter, bandwidth, and bisection width of the multiprocessor configuration. Parallel algorithms and their performance estimation: big O, Theta and Omega notations. Introduction to the theory of parallel languages with some exposure to Ada 2012.
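For quick reference, these performance notations carry their standard meanings (a textbook reminder, not course-specific material):

\[ f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c\, g(n) \ \text{for all}\ n \ge n_0 \]
\[ f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c\, g(n) \ \text{for all}\ n \ge n_0 \]
\[ f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \text{and}\ f(n) \in \Omega(g(n)) \]

For a parallel algorithm on $p$ processors with running time $T_p$, the usual derived measures are speedup $S(p) = T_1 / T_p$ and efficiency $E(p) = S(p) / p$.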
TOPICS COVERED:
Origins of parallelism, classification of algorithm designs, characterization of performance. Pipelined computers: selection and comparison, case studies. Processor arrays, switching networks, case studies. Parallel languages: general principles, parallel constructs, vectorizing compilers, issues of portability. Some exposure to parallel programming language (Ada 2012) and parallel computing hardware. Parallel algorithms: general principles, recurrences, parallel approach to data structures and computational structures. Future trends: technology, design limitations, future supercomputers.
PREREQUISITE:
COSC 2P13 (minimum 60%). NOTE: In case of any discrepancies, the University Calendar prevails.
LECTURE NOTES, TRANSPARENCIES, ETC.:
- The Modeling Process: A Meta-Model
- Measures of Parallelism: Basic Concepts
- Parallel Computers: A Taxonomy
- Parallel Computers: Memories
- Parallel Computers: Interconnect Architectures
- Parallel Computers: Programming Methodology
- Parallel Computers: Benchmarking
- Parallel Computers: Programming Tools
- Parallel Languages: Compositional C++
- Process Algebras: CSP
REFERENCES AND RECOMMENDED READING:
- Introduction to Ada: Set of PPT slides, courtesy of the late Robert Dewar
- J. Barnes: Programming in Ada 2012, Cambridge University Press 2014, ISBN 9781107424814.
- T.G. Mattson, B.A. Sanders, B.L. Massingill: Patterns for Parallel Programming, Addison-Wesley 2005, ISBN 0-321-22811-1.
- S.H. Roosta: Parallel Processing and Parallel Algorithms, Springer Verlag 2000, ISBN 0-387-98716-9
- I. Foster: Designing and Building Parallel Programs, Addison-Wesley Publishing, 1995, ISBN 0-201-57594-9
- M.A. Smith: Object Oriented Programming in Ada 2005
- R. Riehle: Ada Distilled: An Introduction to Ada Programming for Experienced Computer Programmers, AdaWorks Software Engineering 2002
- Quentin Ochem: Ada for the C++ or Java Developer, AdaCore 2013.
- Wikibook: Ada Programming.
- C. Xavier, S.S. Iyengar: Introduction to Parallel Algorithms, Wiley 1998, ISBN 0-471-25182-8
- H.F. Jordan, G. Alaghband: Fundamentals of Parallel Processing, Prentice Hall 2003, ISBN 0-13-901158-7.
- A. Grama, A. Gupta, G. Karypis, V. Kumar: Introduction to Parallel Computing, Addison-Wesley 2ed., 2003, ISBN 0-201-64865-2.
- B.M. Brosgol: Ada-Java comparison, 2000
- Peter S. Pacheco: An Introduction to Parallel Programming, Morgan Kaufmann 2011, ISBN-10 0123742609, ISBN-13 978-0123742605.
- Victor Eijkhout, Robert van de Geijn, Edmond Chow: Introduction to High Performance Scientific Computing (2nd edition), Lulu 2015, ISBN 9781257992546.
- Michael McCool, James Reinders, Arch Robison: Structured Parallel Programming: Patterns for Efficient Computation, Morgan Kaufmann 2012, ISBN 9780124159938 (paperback), ISBN 9780123914439 (eBook).
- Blaise Barney: Introduction to Parallel Computing, Lawrence Livermore National Laboratory. On-line.
- T. Rauber, G. Rünger: Parallel Programming for Multicore and Cluster Systems, Springer 2013. http://dl.icdst.org/pdfs/files4/13902d00e9216df705275e19452dbb53.pdf
- S. Gossett: 12 Parallel Processing Examples and Applications, Built In 2019. https://builtin.com/hardware/parallel-processing-example
USEFUL NOTES AND PROGRAMMING EXAMPLES:
- Refresher on Semaphores
- Tasking in Ada (see the sketch after this list)
- Use of Attributes in Ada
- Garbage collection in Ada: programming example
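To give a first taste of the "Tasking in Ada" and "Refresher on Semaphores" notes above, here is a minimal illustrative sketch (an instructive example only, not one of the official course examples): two Ada 2012 tasks share the console through a protected object, Ada's built-in mutual-exclusion mechanism.

with Ada.Text_IO;

procedure Hello_Tasks is

   --  Protected object serializing access to standard output;
   --  it plays the role of the binary semaphore from the refresher.
   protected Console is
      procedure Say (Message : String);
   end Console;

   protected body Console is
      procedure Say (Message : String) is
      begin
         Ada.Text_IO.Put_Line (Message);
      end Say;
   end Console;

   --  A task type parameterized by a discriminant identifying the worker.
   task type Worker (Id : Natural);

   task body Worker is
   begin
      Console.Say ("Hello from worker" & Natural'Image (Id));
   end Worker;

   --  Both tasks start running as soon as they are elaborated.
   W1 : Worker (1);
   W2 : Worker (2);

begin
   null;  --  The main program completes only after both workers terminate.
end Hello_Tasks;

Compiled with GNAT (e.g. gnatmake hello_tasks.adb), the program prints the two greetings in a nondeterministic order, while the protected object guarantees that the calls to Say never interleave.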
SEMINARS:
- [22 Oct 2024] Daniel Arana Charlebois, David Fawzy: OpenCL: History of the OpenCL API and its usage. This will include both a general overview of the OpenCL API and more specific examples and applications in implementing parallel algorithms.
- [22 Oct 2024] Bryan Bedard, Wickham Gibbs: AI and Parallel Programming: Examples of the usage of parallel programming in AI applications, with an explanation of how common or uncommon its usage is. We will also explain the pros and cons of its use as compared to sequential processing. In this vein, we will also come to our own conclusion about which is better for AI applications. Finally, we will open the floor to discussion amongst the class and ask other students to share examples they can think of where parallel programming is or isn't ideal in AI use cases.
- [22 Oct 2024] Rouvin Rebello, Anjali Sabu: Parallel Computing in High-Performance Computing (HPC) - How parallel computing is used in HPC systems for scientific and large-scale computations: Parallel computing is at the heart of high-performance computing (HPC), enabling systems to tackle scientific challenges and large-scale computations that would be impossible with traditional computing methods. By splitting tasks across multiple processors, parallel computing allows HPC systems to solve problems faster and more efficiently. In this seminar, we explore real-world examples of how parallel computing is used in HPC, such as climate modeling, drug discovery, and astrophysics, where HPC has driven transformative scientific advancements. We break down key concepts such as task and data parallelism and look at how modern HPC systems use multi-core processors and distributed memory to achieve incredible performance. Ultimately, this seminar will show how parallel computing boosts processing power, shortens computation times, and handles massive data workloads.
- [25 Oct 2024] Adam Shariff, Maxwell Young: Parallel Search Algorithms. We will present a few different parallel search algorithms, look at the advantages and disadvantages of each, and examine how they compare to sequential search algorithms.
- [25 Oct 2024] Colin Doubrough, Nicolas Wong: A discussion of parallel approaches to graph challenges such as MST and weighted search algorithms. The advantages and disadvantages of these approaches will be discussed, a practical example of a parallel BFS algorithm will be presented, and a comparison will be made to its sequential version.
- [25 Oct 2024] Gideon Oludeyi, Manan Yayeshbhai Patel: Parallel Programming Approaches with UNIX: We provide a brief history of the UNIX system, and discuss features of the UNIX model that lend themselves suitably to the development of parallel programs.
- [29 Oct 2024] Jaden Kuhn, Abdel Zahran: Parallel Computing with MPI: We will explain what the Message Passing Interface (MPI) is and how it is used in parallel computing. We will also go over the advantages of MPI and possibly some alternatives to it for parallel computing.
- [29 Oct 2024] Brendan Kane: Parallelism in Cloud Computing: We will discuss the role parallel computing plays in cloud computing. Specifically:
How parallelism increases performance and scalability;
How parallelism can benefit from resource management capabilities in cloud computing;
How cloud computing abstracts and provides services to parallelism;
Finally, the challenges when combining the two.
- [29 Oct 2024] Alexandre Reuillon, Chidera Nwana: The Architecture of a GPU: A look at the underlying hardware a GPU is composed of and how that hardware is used to process large amounts of data through parallelization.
- [01 Nov 2024] Dan Christian Lagria, Alexis Gobin: Usage of Parallelism in Most GPUs Today: Why GPUs are effective for producing images, especially at high frame rates in video games. We will cover the basics of shading and rendering 3D perspectives onto a 2D screen, and dive into how parallelism works at the software level in a GPU. Specifically, we will explain how GPUs handle thousands of operations simultaneously, allowing for faster image processing and efficient rendering. We will also discuss recent technological advancements like Ray Tracing and Frame Generation.
- Vlad Wojcik: Scene Segmentation
Seminar topic suggestions:
- Cluster Connectivity: Quadrics, Myrinet, Infiniband, Dolphinics, etc ... (several seminars possible here)
- Hottest Processors of the Day ...
- Parallel Programming Approaches with one of: UNIX, MPI, OpenMP, PCN, PVM, Linda (including C-Linda and/or FORTRAN-Linda), CHARM ...
- Parallel Programming Languages (select one of): Occam, OpenCL, Modula-2, Modula-3, Java, Fortran M, High Performance Fortran, Compositional C++, Data-flow programming in VAL, Functional programming in SISAL, Data parallel programming with C*, Data parallel programming with Fortran 90 ...
- AI and Parallel Processing
- Selected Parallel Search Algorithms
- Selected Parallel Graph Algorithms
- Parallel Programming Approaches with UNIX
- Measuring Computational Performance
- Parallel Computing with MPI
- OSCAR (Open Source Cluster Application Resources)
- Cluster Management (IPMI, etc.)
- Cluster Booting Issues (PXE, etc.)
- Other (please suggest...)
MARKING SCHEME:
- Two Tests @ 20% each = 40%
- One Seminar (delivered by two cooperating students): 20%
- Final Exam: 40% (you must score at least 40% on the final exam to pass the course!)
CAUTION: The Department reserves the right to scan submissions using electronic means, in order to ensure the originality of students' work.
NOTES:
A student who perceives a given mark as unjust or unclear is encouraged to see the instructor to discuss the issue. Depending on the case the student is able to make, the mark may be modified. The deadline to contact the instructor on these matters is one week after the mark has been issued. Marks not disputed within this period will be considered final.
PENALTIES:
Lateness in assignment submission is counted in days, with each day's period ending at 4 PM. The penalty for late submission of up to three days (or part of a day) is 25%. After that period the penalty is 100%.
While honest cooperation between students is considered appropriate, the Faculty of Mathematics and Sciences considers plagiarism and other forms of academic misconduct to be grave offenses. For clarification on these issues you are directed to Section VII, "Academic Misconduct", in the "Academic Regulations and University Policies" entry in the University Calendar, for a fuller description of prohibited actions and the associated procedures and penalties.
Information on what constitutes academic integrity is available on the Brock University Academic Integrity website.
Instructor: Vlad Wojcik
Revised: 14 October, 2024 9:02 AM
Copyright © 2024 Vlad Wojcik