The source paper[^1] for this implementation can be found here, and the output can be accessed using this link. If you wish to fork and run the demo yourself, you may jump directly to here. A task list details the steps taken to arrive at the current state of this repo.
Noise % | True Image | Noisy Image | SMF Filter | MMAPF Image |
---|---|---|---|---|
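For context, the SMF column above refers to a standard median filter baseline. As a rough illustration only (not the repository's actual implementation, which follows the MMAPF paper), a plain 3×3 median filter over a grayscale image can be sketched as:

```python
import numpy as np

def simple_median_filter(img, k=3):
    """Apply a k x k median filter to a 2-D grayscale image (edge-padded).

    Illustrative sketch of the SMF baseline; the repo's MMAPF filter is
    more involved than this.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Median of the k x k neighbourhood centred on (i, j)
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

A lone salt pixel in a flat region is wiped out by the median, which is why median-type filters work so well on salt-and-pepper noise.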
The code is developed and tested on Python 3.10, and the imported modules are version-locked in the requirements.txt file. To run the demo, a virtual environment is recommended (though not required), and the steps to set one up are covered below. It is also recommended not to run the video processing unless you are willing to commit a couple of hours to it (someone please optimize that!).
This demo assumes that the reader can install Python and has basic familiarity with the terminal. To check that the correct Python version is installed, run `python3 --version`.
This is a local instance of the environment in which Python runs, independent of the modules available to the global instance. It is the recommended option to avoid installing bloat globally. In the terminal, in the folder where you have forked this repo, type:
python3 -m venv .venv
It should create a hidden folder by the name of `.venv`. The next step is to activate the virtual environment; the steps differ between operating systems. For Windows and Unix-based systems, the following should be sufficient.
For Windows: .\.venv\Scripts\activate
For Unix (macOS/Linux): source ./.venv/bin/activate
Double-check that the venv (virtual environment) is active. One way to check this is to type `pip -V`. The path shown should contain `.venv`. If it does, all good! If it doesn't, recheck the steps above.
The modules required are standard Python modules. They are installed into the `.venv` folder, under `.venv/lib/site-packages`. Once your virtual environment is active, run:
pip install -r requirements.txt
While that installs, a few notes. I would recommend running only the image processing helpers and avoiding the video processing unless you REALLY want to: it is slow and prone to... hiccups on lower-end computers. The forked code comes with the video processing function commented out to avoid any hassles, so all should be good!
Another point to note: to change the noise density, navigate to src/classes/ImageHandler.py and change the `noise_factor` default value in the `add_noise` function to whatever you desire. Note that the images are overwritten, so save the previous images in a folder if you like! Also, the number of processes active at a time is given as the first argument to the various functions; change that as appropriate as well.
To start the program you need to:
python main.py
That's about it. A folder by the name of `Test Data Out` should be created in the main project folder (given no errors occur and write permissions are enabled), inside of which should be the true, noisy, and denoised images!
Delete the folder. Everything was saved and operated upon locally, so there is no bloatware that persists. Also, I would love to hear feedback and merge changes if someone spends the time to make this better [:3](https://opensource.org/).
- Flow of the README
- Problem statement
- How widespread is the problem + Affected domains
- Usual solutions
- Proposed Solution
- Along with diagram of the concept and source idea
- Dataset used and backlinks to the source files for the datasets
- Demonstration
- Improvement images (colored and gray), and video
- Metrics that are reproducible by forking
- Code - cleanup
- Code for running testing
- MMAPF Implementation
- Code for running colored images
- Code for running videos
- Error for colored images
- Speeding up algo for video processing
- Sub-divide video processing generator to handle batches of frames
- Parallelize the batch process into mp.Process objects
- Save colored images and bnw images in respective folders
- Posting to Social Media
- LinkedIn post with the research paper, GitHub link and an example
- Twitter Post tweeting the implementation and the paper link
- Making a YouTube video for the implementation
- Video Processor Not Working
- Video Processor needs OS specific file format and encoder additions
- Video Processor needs file process updates like a bar that fills according to how much progress made
- Format the code with a goal specific architecture
- Use folders for bnw, clr, and video processing and its classes
- Output Formatting
  - Output Files need better formatting and clarity
    - Output files should add to the `data` folder
    - `<image_name>_<bnw | clr>_<true | noisy>_<0 for true image | 1 for noisy image | noise_density between 0 and 1 for filtered image>_<filter_used>`
  - Better I/O organisation
    - Option for selecting an `input` folder should be provided, with auto-sorting of the input data into types
    - Option for a destination folder should be provided, and should be separated from `sample_data`
    - `test_input` should contain sample testing data (should be limited to 1-2 images of each kind)
    - `test_output` should contain a subset of outputs, ideally processed from `test_input`
- Videos are intensive processes. The video section should be converted to handle GIFs instead; this provides better GitHub README.md integration as well.
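The "speeding up the algo" items above (sub-dividing the video generator into batches of frames, then parallelizing across processes) could be sketched as below. This is a hypothetical shape using a `multiprocessing.Pool` rather than raw `mp.Process` objects; the names `denoise_frame`, `process_in_batches`, and `_process_batch` are illustrative, not functions from this repo:

```python
import multiprocessing as mp
import numpy as np

def denoise_frame(frame):
    """Placeholder per-frame filter; the repo would call its MMAPF routine here."""
    return frame  # identity, for illustration only

def _process_batch(batch):
    # Each worker process filters one batch of frames sequentially.
    return [denoise_frame(f) for f in batch]

def process_in_batches(frames, batch_size=8, workers=4):
    """Split the frame list into batches and filter them in a process pool."""
    batches = [frames[i:i + batch_size] for i in range(0, len(frames), batch_size)]
    with mp.Pool(processes=workers) as pool:
        results = pool.map(_process_batch, batches)
    # Flatten the per-batch results back into a single frame sequence
    return [f for batch in results for f in batch]
```

Batching keeps inter-process pickling overhead down compared to shipping one frame per task, which matters because each frame is a full numpy array.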
[^1]: DOI: [10.1109/LSP.2020.3016868](https://doi.org/10.1109/LSP.2020.3016868)