The purpose of this programming assignment is to gain experience in writing GPU kernels. This assignment is split into two different problems. Students in CMSC416 will be simulating Game of Life. Students in CMSC818X will be applying image effects to videos.
The Game of Life simulates simple cellular automata. The game is played on a rectangular board containing cells. At the start, some of the cells are occupied and the rest are empty. The game consists of constructing successive generations of the board. Each cell has up to eight neighbors (the cells surrounding it), and the standard rules for constructing the next generation from the previous one are:
- A live cell with fewer than two live neighbors dies.
- A live cell with two or three live neighbors stays alive.
- A live cell with more than three live neighbors dies.
- A dead cell with exactly three live neighbors becomes live.
You are provided with starter code to handle reading/writing data and copying it between the CPU and GPU. Your job is to write the GPU kernel compute_on_gpu, which computes a single iteration of Game of Life. The starter code contains the function compute, a serial CPU implementation of what you are going to implement on the GPU.
The GPU kernel is structured as follows:
__global__ void compute_on_gpu(int *life, int *previous_life, int X_limit, int Y_limit) {
/* your code here */
}
life is the board you are writing into, previous_life holds the board's values from the previous iteration, and X_limit and Y_limit give the board dimensions.
compute_on_gpu will be called with block size 16x16 and grid size ⌈X_limit/16⌉x⌈Y_limit/16⌉.
You can change the block size with the blockDimSize variable.
Your kernel should be able to handle the case where there are more cells than threads launched on the GPU (i.e., you need to implement striding).
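As a rough, non-authoritative sketch of the striding structure only (the neighbor counting, update rules, and array indexing should follow the provided serial compute function, since the starter code's board layout is not reproduced here), the kernel body might be organized like this:

__global__ void compute_on_gpu(int *life, int *previous_life, int X_limit, int Y_limit) {
    // Grid-stride loops: each thread covers multiple cells whenever the board
    // has more cells than the total number of launched threads.
    for (int x = blockIdx.x * blockDim.x + threadIdx.x; x < X_limit;
         x += gridDim.x * blockDim.x) {
        for (int y = blockIdx.y * blockDim.y + threadIdx.y; y < Y_limit;
             y += gridDim.y * blockDim.y) {
            // Count the live neighbors of cell (x, y) in previous_life, apply the
            // update rules, and write the new state into life
            // (indexing must match the starter code's layout; see compute()).
        }
    }
}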
Your program should read in a file containing the coordinates of the initial cells. Sample files are located here: life.1.256x256.data and life.2.256x256.data (256x256 board). Each line in this file represents the coordinates of a cell on the board that is live. For instance, the following entry:
1,3
means that the cell at position [1, 3] is live in the initial state. You can
also find many other sample patterns on the web (use your favorite search
engine on "game of life" and/or "Conway").
Your program should take five command line arguments: the input data file name, the number of generations to iterate, X_limit, Y_limit, and the output file name. To be more specific, the command line of your program should be:
./life <input file name> <# of generations> <X_limit> <Y_limit> <output file name>
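For example, a run on the provided 256x256 sample input for 500 generations (the generation count and output file name here are arbitrary, illustrative choices) would look like:

./life life.1.256x256.data 500 256 256 life.out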
To get a GPU in your sbatch job, add the following setting to the top of your job script:
#SBATCH --gres=gpu:1
If you use interactive jobs, then add the --gres=gpu:1 argument when you run salloc.
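A minimal batch script might look like the sketch below; the time limit, module name, and program arguments are illustrative assumptions and should be adapted to your runs:

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --gres=gpu:1

# Load the CUDA toolkit (adjust the module name if your cluster setup differs).
module load cuda

# Example run; input file, generation count, and output name are placeholders.
./life life.1.256x256.data 500 256 256 life.out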
Your program should write <output file name>, containing comma-separated values that represent the board. There should be one line (the x coordinate, a comma, and then the y coordinate) for each occupied cell at the end of the last iteration.
Sample output files are available:
The only print from your program to standard output should be the GPU timing (up to five decimal places), formatted like this:
TIME: 4.529 s
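If the starter code does not already produce this line (an assumption worth checking before adding your own timing), CUDA events are one common way to time just the GPU work. The fragment below would wrap the kernel launches in the existing host code:

// Time only the GPU portion using CUDA events (elapsed time is reported in ms).
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
/* ... launch compute_on_gpu for all generations ... */
cudaEventRecord(stop);
cudaEventSynchronize(stop);

float elapsed_ms = 0.0f;
cudaEventElapsedTime(&elapsed_ms, start, stop);
printf("TIME: %.3f s\n", elapsed_ms / 1000.0f);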
Three input files that will be used for testing correctness are available via these links:
life.22x22.data
life.32x32.data
final.512x512.data
You must submit the following files and no other files:
- game-of-life.cu: starter code with compute_on_gpu implemented.
- Makefile: a Makefile that will compile your code successfully on deepthought2 when using nvcc. You can see a sample Makefile here. Make sure that the executable name is game-of-life, and do not include the executables in the tarball.
- LastName-FirstName-report.pdf: a report with performance results (one line plot).
The line plot should present the execution times of the GPU version on the input file final.512x512.data for different block sizes. You can change the block size by changing the blockDimSize variable.
In the report, you should mention:
Put all of the above files in a single directory (LastName-FirstName-assign4), compress it to .tar.gz (LastName-FirstName-assign4.tar.gz), and upload that to ELMS.
If you want to try a bigger board, try running on the input file life.1024x1024.data.
The project will be graded as follows:
| Component | Percentage |
|---|---|
| Runs correctly with 22x22 board | 30 |
| Runs correctly with 32x32 board | 30 |
| Runs correctly with 512x512 board | 30 |
| Writeup | 10 |
For this assignment you will implement convolutions with image kernels on a video file. Image kernels are used for blurring, edge detection, and other effects. An image kernel is a square matrix that is applied to each pixel in an image: each output pixel is set to the dot product of the input pixel's neighborhood and the kernel. Wikipedia and this visualization contain further reading.
For example, if k is a 3x3 kernel, then pixel i, j of img would be computed as
new_img[i][j] = img[i-1][j-1]*k[0][0] + img[i-1][j]*k[0][1] + img[i-1][j+1]*k[0][2] +
img[i][j-1]*k[1][0] + img[i][j]*k[1][1] + img[i][j+1]*k[1][2] +
img[i+1][j-1]*k[2][0] + img[i+1][j]*k[2][1] + img[i+1][j+1]*k[2][2]
The same pattern extends to kernels of size 5x5, 7x7, and so on (see the general form sketched below). In this assignment you will apply a kernel to each pixel of each frame in a video. The pixels are in BGR format. You should convolve each color channel independently. Additionally, border pixels will be ignored.
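For reference, one way to write the general form for a K x K kernel (per-channel handling and the ignored border omitted), with radius r = K/2 using integer division, is:

// General K x K convolution at pixel (i, j); with K = 3 and r = 1 this
// reduces to the 3x3 expression above.
new_img[i][j] = 0;
for (int a = 0; a < K; a++)
    for (int b = 0; b < K; b++)
        new_img[i][j] += img[i + a - r][j + b - r] * k[a][b];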
Your task is to implement this procedure on the GPU. We have provided starter code to handle reading/writing the video file (driver.cc) and moving data on/off the GPU (video-effect.cu). You need to implement the convolveGPU kernel in video-effect.cu. A serial implementation of this algorithm is provided in the convolveCPU function. driver.cc contains code to read/write the video and does not need to be edited.
The GPU kernel is structured as follows:
__global__ void convolveGPU(float const* in, float *out, int width, int height, float const* kernel, int kernelWidth, int kernelHeight) {
/* your code here */
}
in is the video frame you are convolving. out is the frame you are storing results into. The frames are in packed BGR format (see the convolveCPU example for how to read each color channel). width and height give the dimensions of the video frame. kernel, kernelWidth, and kernelHeight provide the image kernel. You can assume the kernel is square with an odd dimension >= 3.
convolveGPU will be called with block size 8x8 and grid size ⌈height/8⌉x⌈width/8⌉. You can change the block size with the blockDimSize variable. Your kernel should be able to handle the case where there are more pixels than threads launched on the GPU (i.e., you need to implement striding).
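As a non-authoritative sketch of this striding structure (the exact channel indexing and boundary handling should follow the provided convolveCPU function; in particular, the packed-BGR index expression below is an assumption to be verified against the starter code), the kernel could be organized like this:

__global__ void convolveGPU(float const* in, float *out, int width, int height,
                            float const* kernel, int kernelWidth, int kernelHeight) {
    int radius = kernelWidth / 2;
    // Grid-stride loops over rows (i) and columns (j) of the frame.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < height;
         i += gridDim.x * blockDim.x) {
        for (int j = blockIdx.y * blockDim.y + threadIdx.y; j < width;
             j += gridDim.y * blockDim.y) {
            // Border pixels are ignored; their output values are whatever the
            // starter code puts there.
            if (i < radius || i >= height - radius || j < radius || j >= width - radius)
                continue;
            // Convolve each of the three BGR channels independently.
            for (int c = 0; c < 3; c++) {
                float sum = 0.0f;
                for (int ki = 0; ki < kernelHeight; ki++) {
                    for (int kj = 0; kj < kernelWidth; kj++) {
                        // Assumed packed-BGR layout: pixel (row, col), channel c
                        // lives at (row * width + col) * 3 + c; check convolveCPU.
                        sum += in[((i + ki - radius) * width + (j + kj - radius)) * 3 + c]
                               * kernel[ki * kernelWidth + kj];
                    }
                }
                out[(i * width + j) * 3 + c] = sum;
            }
        }
    }
}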
The starter code is set up to take an input video file name, output file name, and kernel name. The available kernel names are blur, edge, sharpen, and identity. You can also provide an optional frame number; that frame will be dumped into a CSV file, which can be used to check for correctness. The command line arguments are:
./video-effect <input_filename> <output_filename> <kernel_name> <optional:frame_number> <optional:frame_filename>
You can check your output for this video (download link). We provide the output for frame 100 with the edge kernel here.
To get a GPU in your sbatch job, add the following setting to the top of your job script:
#SBATCH --gres=gpu:1
If you use interactive jobs, then add the --gres=gpu:1 argument when you run salloc.
For this code you need to load cuda and opencv with module load cuda opencv.
An example sbatch script is provided here.
The program will write out a video file with _out appended to the base filename.
For instance, running ./video-effect video.mp4 blur will write out video_out.mp4 with the blur effect.
You can copy videos off of DeepThought2 to view them.
Running ./video-effect video.mp4 edge 100 frame.csv will write the 100th frame into frame.csv. Your output should match forest-edge-100.csv.
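One simple way to compare your dump against the reference (assuming your CSV formatting matches the reference exactly; small floating-point formatting differences would call for a tolerance-based comparison instead) is:

diff frame.csv forest-edge-100.csv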
The program will output to stdout:
Total time: 3.221 s
Frames processed per second: 80.039 frames/s
You must submit the following files and no other files:
- video-effect.cu: file with convolveGPU implemented.
- Makefile: a Makefile that will compile your code successfully on deepthought2 when using nvcc. A Makefile is provided here. Make sure that the executable name is video-effect, and do not include the executables in the tarball.
- LastName-FirstName-report.pdf: a report with performance results (one line plot).
The line plot should present the execution times of the GPU version on the input file video.mp4 for different block sizes. You can change the block size by changing the blockDimSize variable.
In the report, you should mention:
Put all of the above files in a single directory (LastName-FirstName-assign4), compress it to .tar.gz (LastName-FirstName-assign4.tar.gz), and upload that to ELMS.
Feel free to try different videos and image kernels. Keep in mind that some videos may drastically increase in file size if the kernel reduces the compression rate of the video data.
The project will be graded as follows:
| Component | Percentage |
|---|---|
| Runs correctly with the identity kernel | 15 |
| Runs correctly with the other three kernels | 75 (25 per kernel) |
| Writeup | 10 |