Project 2: Lan Lou #10

Open · wants to merge 9 commits into `master`
147 changes: 141 additions & 6 deletions README.md
@@ -3,12 +3,147 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* Lan Lou
* Tested on: Windows 10, i7-6700HQ @ 2.6GHz 16GB, GTX 1070 8GB (Personal Laptop)

## Project Features:

- CPU scan: the most basic way of computing a prefix sum, a single sequential loop that accumulates on the CPU (see the sketch below).
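A minimal standalone sketch of that loop (essentially what the `scan()` added to `cpu.cu` later in this diff does; the function name here is illustrative):

```
// exclusive prefix sum on the CPU: one sequential pass
void cpuScan(int n, int* odata, const int* idata) {
    odata[0] = 0;                               // exclusive scan starts at the identity
    for (int i = 1; i < n; ++i)
        odata[i] = odata[i - 1] + idata[i - 1]; // running total of everything before i
}
```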
- Naive scan: what differentiates this from the plain CPU version is that it runs on the GPU, using the following algorithm (pseudocode), which lets each depth's additions run in parallel:
```
for d = 1 to log2(n)
  for all k in parallel
    if k >= 2^(d-1)
      x[k] = x[k - 2^(d-1)] + x[k]
```
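A hedged CUDA sketch of one depth step (kernel name and host-loop details are illustrative, not the exact code in `naive.cu`); two buffers are ping-ponged because threads read and write the same depth concurrently:

```
__global__ void kernNaiveScanStep(int n, int offset, int* out, const int* in) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= n) return;
    if (k >= offset)                     // offset = 2^(d-1)
        out[k] = in[k - offset] + in[k];
    else
        out[k] = in[k];                  // below the offset, values pass through
}
// host loop (sketch): for d = 1 .. ceil(log2(n)), launch with offset = 1 << (d - 1),
// then swap the in/out pointers
```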

- Efficient GPU scan: splitting the naive algorithm into two phases, an up-sweep followed by a down-sweep over a balanced binary tree, reduces the total work from O(n log n) additions to O(n). Pseudocode for the two phases (a kernel sketch follows the down-sweep):

#### upsweep:
```
for d = 0 to log2(n) - 1
  for all k = 0 to n - 1 by 2^(d+1) in parallel
    x[k + 2^(d+1) - 1] += x[k + 2^d - 1]
```

#### downsweep:
```
x[n - 1] = 0
for d = log2(n) - 1 down to 0
  for all k = 0 to n - 1 by 2^(d+1) in parallel
    t = x[k + 2^d - 1]
    x[k + 2^d - 1] = x[k + 2^(d+1) - 1]
    x[k + 2^(d+1) - 1] += t
```
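A hedged CUDA sketch of the two phases before optimization (kernel names are illustrative; the real ones live in `efficient.cu`), with one launch per depth on an array padded to a power of two:

```
__global__ void kernUpSweep(int n, int stride, int* x) {       // stride = 2^(d+1)
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k < n && k % stride == 0)
        x[k + stride - 1] += x[k + (stride >> 1) - 1];         // parent += left child
}

__global__ void kernDownSweep(int n, int stride, int* x) {     // stride = 2^(d+1)
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k < n && k % stride == 0) {
        int t = x[k + (stride >> 1) - 1];                      // save left child
        x[k + (stride >> 1) - 1] = x[k + stride - 1];          // left <- parent
        x[k + stride - 1] += t;                                // right <- parent + saved left
    }
}
```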
- Efficient scan (optimized) (extra credit): once the version above was finished, I discovered it was not even faster than the CPU methods, so I went for the optimized version as extra credit. The cause is that an up-sweep or down-sweep kernel never uses all of its threads: the branch inside it leaves many threads with nothing to do. So instead of testing inside the kernel whether a thread's index is a multiple of ```2^(d+1)```, I moved that logic out into the scan function. There, the block count for each depth is computed as ```blocknum = (adjustlen/interval + blocksize) / blocksize;```, dividing the total array length by ```interval = 2^(d-1)```, and inside the kernel each thread index is multiplied by ```2^(d-1)``` so that it always lands on a valid multiple; everything else stays the same. A sketch of this indexing scheme follows below.
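A minimal sketch of that indexing change (my own stride bookkeeping, written with the up-sweep stride ```2^(d+1)``` rather than the ```2^(d-1)``` counting used in the paragraph above; kernel and variable names are illustrative):

```
__global__ void kernUpSweepCompact(int n, int stride, int* x) {
    // every launched thread maps to a valid node: the index is scaled, not tested
    int k = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (k + stride - 1 < n)
        x[k + stride - 1] += x[k + (stride >> 1) - 1];
}

// host side, per depth d (sketch):
// int stride = 1 << (d + 1);
// int blocknum = (adjustlen / stride + blocksize - 1) / blocksize;
// kernUpSweepCompact<<<blocknum, blocksize>>>(adjustlen, stride, dev_x);
```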

- Compaction (CPU & work-efficient GPU): the algorithm is similar in both cases. First, map the input to a buffer of 1s and 0s by testing whether each element is non-zero; next, scan that buffer with the corresponding scan method; finally, treat the scanned values as output indices and scatter into the output buffer only those elements whose boolean flag is 1. A sketch of the GPU pipeline follows below.
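A hedged sketch of the GPU pipeline, composing the kernels added in `common.cu` with a device-side scan; `scanOnDevice` is a hypothetical wrapper around the work-efficient scan, the include path is assumed, and error checking is omitted:

```
#include <cuda_runtime.h>
#include "stream_compaction/common.h"

int compactSketch(int n, int* dev_out, const int* dev_in) {
    const int blocksize = 256;
    const int blocks = (n + blocksize - 1) / blocksize;
    int *dev_bools, *dev_indices;
    cudaMalloc(&dev_bools, n * sizeof(int));
    cudaMalloc(&dev_indices, n * sizeof(int));

    // map: flag the elements to keep
    StreamCompaction::Common::kernMapToBoolean<<<blocks, blocksize>>>(n, dev_bools, dev_in);
    // scan: exclusive prefix sum of the flags = destination indices
    scanOnDevice(n, dev_indices, dev_bools);              // hypothetical wrapper
    // scatter: write kept elements to their destinations
    StreamCompaction::Common::kernScatter<<<blocks, blocksize>>>(n, dev_out, dev_in, dev_bools, dev_indices);

    // compacted count = last index + last flag
    int lastIdx, lastBool;
    cudaMemcpy(&lastIdx, dev_indices + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(&lastBool, dev_bools + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev_bools);
    cudaFree(dev_indices);
    return lastIdx + lastBool;
}
```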

## Questions:
- *Roughly optimize the block sizes of each of your implementations for minimal run time on your GPU.*

Block size | Naive scan (ms) | Efficient scan (ms)
-----|-----|----
128 | 1.911 | 0.541
256 | 1.908 | 0.533
512 | 1.92 | 0.545
1024 | 1.915 | 0.559

- The chart below shows how block size affects both the naive and the optimized scans. The differences are not dramatic, but a block size of 256 looks best suited to my machine and these implementations.

![](https://github.com/LanLou123/Project2-Stream-Compaction/raw/master/blocksizechart.JPG)

- *Compare all of these GPU Scan implementations (Naive, Work-Efficient, and Thrust) to the serial CPU version of Scan. Plot a graph of the comparison (with array size on the independent axis).*

Array size (2^n) | Naive (ms) | Work-efficient, not optimized (ms) | Work-efficient, optimized (ms) | Thrust (ms)
----|----|----|----|----
3 | 0.032 | 0.029 | 0.093 | 0.085
6 | 0.049 | 0.046 |0.058| 0.065
9 | 0.076 | 0.075 | 0.068 |0.249
12 | 0.09 | 0.137 | 0.135| 0.255
15 | 0.128 | 0.131 | 0.118| 0.263
18 | 0.306 | 0.471 | 0.167 |0.36
21 | 3.89 | 2.311 | 1.008 |0.37
24 | 35.408 | 19.495 |7.925| 0.932
27 | 317.784 | 141.549 |61.03| 5.704

- The corresponding graph:

![](https://github.com/LanLou123/Project2-Stream-Compaction/raw/master/arraysize.JPG)

- From the graph, the GPU methods ranked by speed are: Thrust > efficient (optimized) > efficient (not optimized) > naive.

- *Write a brief explanation of the phenomena you see here.*

As mentioned above, the performance problem with the unoptimized work-efficient scan comes from launching threads that do no work, which wastes much of the GPU. Separately, I noticed the program crashes once the array size reaches 2^30; since this happens for every method, I suspect memory capacity rather than the algorithms is the bottleneck: 2^30 four-byte ints is already 4 GiB per buffer, so even a couple of device buffers approach the GTX 1070's 8 GB.


## Results

### I got this result using the following settings:

- ```blocksize``` = 1024
- ```SIZE``` (in main.cpp) = 1 << 25

```
****************
** SCAN TESTS **
****************
[ 18 25 1 39 31 37 5 10 42 11 4 11 12 ... 37 0 ]
==== cpu scan, power-of-two ====
elapsed time: 221.761ms (std::chrono Measured)
[ 0 18 43 44 83 114 151 156 166 208 219 223 234 ... 821806793 821806830 ]
==== cpu scan, non-power-of-two ====
elapsed time: 77.1578ms (std::chrono Measured)
[ 0 18 43 44 83 114 151 156 166 208 219 223 234 ... 821806711 821806713 ]
passed
==== naive scan, power-of-two ====
elapsed time: 73.3082ms (CUDA Measured)
[ 0 18 43 44 83 114 151 156 166 208 219 223 234 ... 821806793 821806830 ]
passed
==== naive scan, non-power-of-two ====
elapsed time: 75.7463ms (CUDA Measured)
[ 0 18 43 44 83 114 151 156 166 208 219 223 234 ... 0 0 ]
passed
==== work-efficient scan, power-of-two ====
elapsed time: 15.5116ms (CUDA Measured)
[ 0 18 43 44 83 114 151 156 166 208 219 223 234 ... 821806793 821806830 ]
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 15.3897ms (CUDA Measured)
[ 0 18 43 44 83 114 151 156 166 208 219 223 234 ... 821806711 821806713 ]
passed
==== thrust scan, power-of-two ====
elapsed time: 1.63021ms (CUDA Measured)
[ 0 18 43 44 83 114 151 156 166 208 219 223 234 ... 821806793 821806830 ]
passed
==== thrust scan, non-power-of-two ====
elapsed time: 1.63738ms (CUDA Measured)
[ 0 18 43 44 83 114 151 156 166 208 219 223 234 ... 821806711 821806713 ]
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 0 0 3 1 2 3 0 0 3 1 0 2 2 ... 1 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 121.362ms (std::chrono Measured)
[ 3 1 2 3 3 1 2 2 1 1 2 2 1 ... 3 1 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 121.729ms (std::chrono Measured)
[ 3 1 2 3 3 1 2 2 1 1 2 2 1 ... 2 3 ]
passed
==== cpu compact with scan ====
elapsed time: 293.673ms (std::chrono Measured)
[ 3 1 2 3 3 1 2 2 1 1 2 2 1 ... 3 1 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 15.202ms (CUDA Measured)
[ 3 1 2 3 3 1 2 2 1 1 2 2 1 ... 3 1 ]
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 15.2422ms (CUDA Measured)
[ 3 1 2 3 3 1 2 2 1 1 2 2 1 ... 2 3 ]
passed
```
Binary file added arraysize.JPG
Binary file added blocksizechart.JPG
26 changes: 13 additions & 13 deletions src/main.cpp
@@ -13,7 +13,7 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int SIZE = 1 << 22; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
@@ -36,13 +36,13 @@ int main(int argc, char* argv[]) {
// At first all cases passed because b && c are all zeroes.
zeroArray(SIZE, b);
printDesc("cpu scan, power-of-two");
StreamCompaction::CPU::scan(SIZE, b, a);
StreamCompaction::CPU::scan(SIZE, b, a,true);
printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
printArray(SIZE, b, true);

zeroArray(SIZE, c);
printDesc("cpu scan, non-power-of-two");
StreamCompaction::CPU::scan(NPOT, c, a);
StreamCompaction::CPU::scan(NPOT, c, a ,true);
printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
printArray(NPOT, b, true);
printCmpResult(NPOT, b, c);
@@ -51,7 +51,7 @@ int main(int argc, char* argv[]) {
printDesc("naive scan, power-of-two");
StreamCompaction::Naive::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

/* For bug-finding only: Array of 1s to help find bugs in stream compaction or scan
@@ -64,35 +64,35 @@ int main(int argc, char* argv[]) {
printDesc("naive scan, non-power-of-two");
StreamCompaction::Naive::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(NPOT, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient scan, power-of-two");
StreamCompaction::Efficient::scan(SIZE, c, a);
StreamCompaction::Efficient::realscan(SIZE, c, a,true);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient scan, non-power-of-two");
StreamCompaction::Efficient::scan(NPOT, c, a);
StreamCompaction::Efficient::realscan(NPOT, c, a,true);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(NPOT, c, true);
printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);

zeroArray(SIZE, c);
printDesc("thrust scan, power-of-two");
StreamCompaction::Thrust::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
printDesc("thrust scan, non-power-of-two");
StreamCompaction::Thrust::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(NPOT, c, true);
printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);

printf("\n");
@@ -137,14 +137,14 @@ int main(int argc, char* argv[]) {
printDesc("work-efficient compact, power-of-two");
count = StreamCompaction::Efficient::compact(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(count, c, true);
printArray(count, c, true);
printCmpLenResult(count, expectedCount, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient compact, non-power-of-two");
count = StreamCompaction::Efficient::compact(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(count, c, true);
printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);

system("pause"); // stop Win32 console from closing on exit
2 changes: 1 addition & 1 deletion stream_compaction/CMakeLists.txt
@@ -13,5 +13,5 @@ set(SOURCE_FILES

cuda_add_library(stream_compaction
${SOURCE_FILES}
OPTIONS -arch=sm_20
OPTIONS -arch=sm_60
)
12 changes: 12 additions & 0 deletions stream_compaction/common.cu
@@ -24,6 +24,12 @@ namespace StreamCompaction {
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int idx = blockIdx.x * blockDim.x + threadIdx.x;
if (idx < n)
{
// write an explicit 0 as well, so the buffer never carries stale values
bools[idx] = idata[idx] ? 1 : 0;
}
}

/**
@@ -33,6 +39,12 @@
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
int idx = blockIdx.x*blockDim.x + threadIdx.x;
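// scatter: each flagged element is copied to its scanned destination index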
if (idx < n)
{
if (bools[idx])
odata[indices[idx]] = idata[idx];
}
}

}
41 changes: 38 additions & 3 deletions stream_compaction/cpu.cu
@@ -17,9 +17,17 @@ namespace StreamCompaction {
* For performance analysis, this is supposed to be a simple for loop.
* (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
*/
void scan(int n, int *odata, const int *idata) {
void scan(int n, int *odata, const int *idata, bool istimer) {
if (istimer)
timer().startCpuTimer();
// TODO
// exclusive prefix sum: odata[i] holds the sum of idata[0 .. i-1]
odata[0] = 0;
for (int i = 1; i < n; ++i)
{
odata[i] = odata[i - 1] + idata[i - 1];
}
if (istimer)
timer().endCpuTimer();
}

@@ -31,10 +39,21 @@
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int idx = 0;
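// single pass: append each non-zero element at the next free output slot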
for (int i = 0; i < n; ++i)
{
if (idata[i] == 0) continue;
else
{
odata[idx] = idata[i];
idx++;
}
}
timer().endCpuTimer();
return -1;
return idx;
}


/**
* CPU stream compaction using scan and scatter, like the parallel version.
*
@@ -43,8 +62,24 @@
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
//map: build a 0/1 flag array marking the elements to keep
int *checker = new int[n];
for (int i = 0; i < n; ++i)
{
checker[i] = idata[i] ? 1 : 0;
}
//scan: exclusive prefix sum of the flags gives each kept element's destination
int *indices = new int[n];
scan(n, indices, checker, false);
//count: last scanned index, plus one if the final element is itself kept
int num = indices[n - 1] + (idata[n - 1] ? 1 : 0);
//scatter: copy kept elements to their scanned destinations
for (int i = 0; i < n; ++i)
{
if (idata[i]) odata[indices[i]] = idata[i];
}
delete[] checker;
delete[] indices;
timer().endCpuTimer();
return -1;
return num;
}
}
}
2 changes: 1 addition & 1 deletion stream_compaction/cpu.h
@@ -6,7 +6,7 @@ namespace StreamCompaction {
namespace CPU {
StreamCompaction::Common::PerformanceTimer& timer();

void scan(int n, int *odata, const int *idata);
void scan(int n, int *odata, const int *idata, bool istimer);

int compactWithoutScan(int n, int *odata, const int *idata);
