High Performance Computing Systems (CMSC714)
Group Project
The group project should be implemented in C/C++ (or Fortran) and use a
parallel programming model such as MPI, OpenMP, Charm++, or CUDA. The
project will be done in teams of two or three people. The final deliverable will be a
report and the code (with a Makefile and clear instructions for running it).
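As an illustrative starting point only (not a requirement), the sketch below shows a minimal MPI program in C, the simplest form of the kind of parallel code the project is expected to build on; the file name is hypothetical.

    /* hello_mpi.c - minimal MPI sketch: each rank reports its ID. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);                /* start the MPI runtime */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }

Such a program would typically be compiled with "mpicc -O2 -o hello_mpi hello_mpi.c" (a one-line Makefile rule suffices) and launched with, e.g., "mpirun -np 4 ./hello_mpi". Your submitted instructions should state the exact compile and run commands for your own code.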
Deadlines
- Project description - due on October 14, 2019 @ 5:00 PM
- Submit a PDF (one per group) with the project title, a description (under 500 words), and the list of group members by e-mail to bhatele@cs.umd.edu.
- Interim report - due on November 18, 2019 @ 5:00 PM
- Demos in class on December 3 and 5, 2019 @ 9:30 AM
- Final project - due on December 11, 2019 @ 5:00 PM
Project Ideas
- Parallel Genetic Algorithms for Inverse Boggle Solving
- Parallel Implementation of Sequential Minimal Optimization (SMO) Algorithm and Model Selection for Support Vector Machines
- A Parallel Hybrid Framework for Graph Processing
- Parallel Patch Matching for Image Segmentation
- Parallel Graph-based Semi-supervised Learning
- Load Balancing in Distributed Computing
- Auto-tuning for scalable parallel 3-D FFT
- Parallel implementation of Machine Learning algorithms using Spark/Hadoop
- CPU-GPU Dynamic Approximation for Parallel Applications
- Algebraic Multigrid with OpenMP, OpenACC and MPI
- Developing Parallel Algorithms for Creating and Solving Sudoku Puzzles
- Distributed Learning for Deep Neural Networks
- A High Performance Concurrent Thread-Safe Hash Table
- A Visual Debugging Tool for MPI Programs
- Graph Partitioning using Parallel Clustering for Distributed Databases
- A Study on Memory and Compute Bound Kernels on CPU and GPU Hardware
- Online Auto-Tuning of Collective Communication Operations in MPI
- Parallel Simulation of Information Diffusion on Large Social Network Graphs
Other Suggestions
- Application performance studies across one or more parallel machines - e.g. satellite data processing, parallel search, computer vision algorithms, bioinformatics
- Application performance studies on GPUs
- Reproduce results from a paper and extend them to current systems - e.g., a CPU vs. GPU comparison paper (pick a small number of application kernels)
- Debunking a published paper