Parallel Algorithms and Systems (MAE840)

General

School

School of Science

Academic Unit

Department of Mathematics

Level of Studies

Undergraduate

Course Code

MAE840

Semester

8

Course Title

Parallel Algorithms and Systems

Independent Teaching Activities

Lectures-Laboratory (Weekly Teaching Hours: 3, Credits: 6)

Course Type

Special Background

Prerequisite Courses

Introduction to Programming, Introduction to Computers, Database Systems and Web Applications Development

Language of Instruction and Examinations

Greek

Is the Course Offered to Erasmus Students

Yes (in English)

Course Website (URL)

See eCourse, the Learning Management System maintained by the University of Ioannina.

Learning Outcomes

Learning outcomes

Students acquire knowledge of:

  • Parallel algorithmic methods, multitasking and thread programming, resource contention/congestion, and mechanisms for avoiding contention and congestion
  • Understanding of the basic functional parts of a parallel and a distributed system.
  • Understanding of the basic concepts and of the programming, communication, and transparency techniques used in both parallel and distributed systems.
  • Programming parallel tasks using parallel programming libraries such as OpenMP and distributed programming tools such as MPI.
  • Parallel algorithms, parallel architectures, parallel algorithm development, parallel selection, parallel merging, parallel sorting, parallel searching, parallel algorithms of computational geometry. Parallel iterative methods for solving linear systems.
  • Parallel and Distributed Systems and Architectures. Performance of Parallel and Distributed Systems and Applications.
  • Threading / multitasking and programming of parallel and distributed algorithms using OpenMP and MPI.

General Competences

  • Data search, analysis and synthesis using Information Technologies
  • Decision making
  • Project design and implementation
  • Working independently

Syllabus

  1. Historical review of parallel and distributed processing.
  2. The Von Neumann model. Flynn's taxonomy. Pipelining. Multiprocessors, multicomputers.
  3. Distributed- and shared-memory systems. Memory architectures with uniform and non-uniform access time (UMA/NUMA).
  4. Performance calculations and metrics. System scalability, partitioning and optimization. Interconnection networks for parallel computers.
  5. Grosch's law, Amdahl's law, and the Gustafson-Barsis law (see the speedup formulas after this list). Design of parallel applications.
  6. Program parallelization with MPI. Synchronization. Dependency graphs, shared resources and race conditions. Scheduling. Shared-memory affinity. MESI. Parallel processing using Parallella FPGA cores.
  7. Models and mechanisms of process communication. Vector processing. Computing clusters and grids. Examples of application parallelization. Synchronization issues.
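
For reference on item 5, the two classical speedup laws, stated here only for orientation in their standard textbook form (f denotes the parallelizable fraction of the work and p the number of processors):

    S_{\text{Amdahl}}(p) = \frac{1}{(1 - f) + f/p},
    \qquad
    S_{\text{Gustafson-Barsis}}(p) = (1 - f) + f\,p

Amdahl's law bounds the speedup of a fixed-size problem, whereas the Gustafson-Barsis law describes the scaled speedup obtained when the problem size grows with the number of processors.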

Course laboratory part

  1. Introductory programming concepts using gcc. Pointers, classes, dynamic structures. Creating processes in Linux, separating user-space and kernel-space concepts, spawning processes and parent-child relationships, process management (a minimal fork() sketch, together with sketches for several of the later sessions, follows this list).
  2. Containers, templates, the STL (C++ Standard Template Library).
  3. Introduction to Boost and advanced C++ topics.
  4. Introduction to the C++ Armadillo library.
  5. Process intercommunication: static memory areas, pipes, shared memory segments, process signalling (pipe sketch below).
  6. Thread creation and thread management. Shared thread memory areas, critical sections, the producer-consumer model, thread signalling (producer-consumer sketch below).
  7. Thread management and synchronization; protection of critical sections using mutex locks and semaphores. Presentation of condition variables and synchronization barriers.
  8. Introduction to MPI, MPI setup, presentation of key MPI features, first MPI programs (MPI sketches below).
  9. Presentation of the basic synchronous (blocking) methods for sending and receiving messages in MPI. Presentation of asynchronous (non-blocking) send methods. Examples.
  10. Use of the collective operations Gather, Scatter, Reduce, and Broadcast, with examples.
  11. Basic structures for organizing distributed programs. Examples of distributed computations. Advanced data types in MPI. Creating complex data structures with MPI and sending data-structure messages.
  12. Parallel programming with OpenMP, the Epiphany SDK, and BSP.
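
The sketches below are indicative, minimal examples for several of the laboratory sessions above; they are not taken from the course material, and the specific names, values, and program structure are assumptions. For session 1, a child process can be created and reaped with the POSIX fork()/waitpid() calls (compiled with g++ on Linux):

    // Assumed example, not course code: create a child process with fork(),
    // print the parent-child relationship, and reap the child with waitpid().
    #include <unistd.h>     // fork, getpid, getppid
    #include <sys/wait.h>   // waitpid
    #include <cstdio>

    int main() {
        pid_t pid = fork();                  // duplicate the calling process
        if (pid < 0) return 1;               // fork failed
        if (pid == 0) {                      // child branch
            std::printf("child  pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
            return 0;
        }
        int status = 0;                      // parent branch: wait for the child
        waitpid(pid, &status, 0);
        std::printf("parent pid=%d reaped child %d\n", (int)getpid(), (int)pid);
        return 0;
    }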
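
For session 2, a minimal illustration of an STL container and a generic algorithm (again an assumed example):

    // Assumed example: an STL container plus a generic algorithm over iterators.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v{5, 1, 4, 2, 3};    // dynamic container from the STL
        std::sort(v.begin(), v.end());        // generic sorting algorithm
        for (int x : v) std::printf("%d ", x);
        std::printf("\n");
        return 0;
    }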
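
For session 5, one of the inter-process communication mechanisms listed, an anonymous pipe between parent and child, can be sketched as follows (assumed example using the POSIX pipe()/read()/write() calls):

    // Assumed example: parent and child communicate through an anonymous pipe.
    #include <unistd.h>
    #include <sys/wait.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd[2];
        if (pipe(fd) == -1) return 1;        // fd[0]: read end, fd[1]: write end
        if (fork() == 0) {                   // child writes a message
            close(fd[0]);
            const char* msg = "hello from child";
            write(fd[1], msg, std::strlen(msg) + 1);
            close(fd[1]);
            return 0;
        }
        close(fd[1]);                        // parent reads the message
        char buf[64] = {0};
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) std::printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(nullptr);
        return 0;
    }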
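
For sessions 6 and 7, a single-producer / single-consumer sketch using C++11 threads, with a mutex protecting the critical section and a condition variable for signalling (the course may instead use POSIX threads; this is an assumed, equivalent illustration):

    // Assumed example: producer-consumer with std::thread, std::mutex and
    // std::condition_variable; the shared queue is the critical resource.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::queue<int> buffer;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    void producer() {
        for (int i = 0; i < 5; ++i) {
            {
                std::lock_guard<std::mutex> lock(m);   // critical section
                buffer.push(i);
            }
            cv.notify_one();                           // signal the consumer
        }
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_one();
    }

    void consumer() {
        while (true) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return !buffer.empty() || done; });
            while (!buffer.empty()) {                  // drain available items
                std::printf("consumed %d\n", buffer.front());
                buffer.pop();
            }
            if (done) break;
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
        return 0;
    }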
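
For sessions 8 and 9, a minimal MPI program with a blocking point-to-point exchange (assumed example; typically built with mpic++ and launched with mpirun -np 2):

    // Assumed example: MPI initialisation plus a blocking Send/Recv pair.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0 && size > 1) {
            int value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   // blocking send to rank 1
        } else if (rank == 1) {
            int value = 0;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }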
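
For session 10, a sketch of two of the collective operations, Broadcast and Reduce (assumed example; Scatter and Gather follow the same calling pattern):

    // Assumed example: broadcast a value from rank 0, then sum per-rank
    // contributions back onto rank 0 with MPI_Reduce.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int base = (rank == 0) ? 10 : 0;
        MPI_Bcast(&base, 1, MPI_INT, 0, MPI_COMM_WORLD);   // every rank now holds 10

        int local = base + rank;                           // each rank contributes
        int total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            std::printf("sum over %d ranks = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }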
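
For session 11, a sketch of describing a struct to MPI with MPI_Type_create_struct so that whole records can be sent as single messages (assumed example; the Particle struct is hypothetical and the program expects two ranks):

    // Assumed example: build an MPI derived datatype matching a C++ struct.
    #include <mpi.h>
    #include <cstddef>   // offsetof
    #include <cstdio>

    struct Particle { double x; double y; int id; };

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int          lengths[3] = {1, 1, 1};
        MPI_Aint     displs[3]  = {offsetof(Particle, x),
                                   offsetof(Particle, y),
                                   offsetof(Particle, id)};
        MPI_Datatype types[3]   = {MPI_DOUBLE, MPI_DOUBLE, MPI_INT};
        MPI_Datatype particle_t;
        MPI_Type_create_struct(3, lengths, displs, types, &particle_t);
        MPI_Type_commit(&particle_t);

        Particle p{1.0, 2.0, rank};
        if (rank == 0)
            MPI_Send(&p, 1, particle_t, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1) {
            MPI_Recv(&p, 1, particle_t, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("received particle (%.1f, %.1f) from rank %d\n", p.x, p.y, p.id);
        }

        MPI_Type_free(&particle_t);
        MPI_Finalize();
        return 0;
    }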
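
For session 12, a minimal OpenMP example, a parallel loop with a reduction (assumed example; built with a compiler flag such as -fopenmp):

    // Assumed example: an OpenMP parallel for with a reduction clause.
    #include <omp.h>
    #include <cstdio>

    int main() {
        const int n = 1000000;
        double sum = 0.0;

        // iterations are split across threads; partial sums are combined safely
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i)
            sum += 1.0 / (i + 1.0);

        std::printf("threads available: %d, harmonic sum: %f\n",
                    omp_get_max_threads(), sum);
        return 0;
    }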

Teaching and Learning Methods - Evaluation

Delivery

Classroom

Use of Information and Communications Technology

Use of the Micro-computers Laboratory
Teaching Methods

Activity and Semester Workload (hours):
  • Lectures: 39
  • Working Independently: 78
  • Exercises-Homework: 33
  • Course total: 150
Student Performance Evaluation
  • Use of new ICT and of the metrics of the asynchronous e-learning platform (30%)
  • Examination of laboratory exercises (20%)
  • Semester written examination (50%)

Attached Bibliography

See the official Eudoxus site or the local repository of Eudoxus lists per academic year, maintained by the Department of Mathematics. Books and other resources not provided through Eudoxus:

  • Parallel Scientific Computing in C++ and MPI: A Seamless Approach to Parallel Algorithms and their Implementation, G.E. Karniadakis and R.M. Kirby, 2003, Cambridge University Press, ISBN: 0-521-81754-4
  • Using OpenMP: Portable Shared Memory Parallel Programming, B. Chapman, G. Jost and R. van der Pas, 2008, MIT Press, ISBN: 9780262533027
  • Learning Boost C++ Libraries, A. Mukherjee, 2015, PACKT, ISBN: 978-1-78355-121-7
  • Boost C++ Application Development Cookbook: Recipes to Simplify Your Application Development, 2nd Edition, A. Polukhin, 2017, PACKT, ISBN: 978-1-78728-224-7
  • C++17 STL Cookbook, J. Galowicz, 2017, PACKT, ISBN: 978-1-78712-049-5