Dev christoph #17
Merged
Conversation
Add functionality to predict.py in save_img_mat() to save the image as a .nii file. The file type is specified in the config with the parameter save_predict_img_datatype.
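A minimal sketch of how such a branch in save_img_mat() could look, assuming nibabel is used for NIfTI output; the function signature, variable names and the .mat fallback are illustrative, not the actual code in predict.py:

```python
import os
import nibabel as nib
import numpy as np
from scipy.io import savemat

def save_img_mat(img, out_dir, name, config):
    """Save a predicted image either as .mat (default) or as .nii."""
    if config.get('save_predict_img_datatype', 'mat') == 'nii':
        # build a NIfTI image with an identity affine and write it to disk
        nii_img = nib.Nifti1Image(np.asarray(img, dtype=np.float32), affine=np.eye(4))
        nib.save(nii_img, os.path.join(out_dir, name + '.nii'))
    else:
        savemat(os.path.join(out_dir, name + '.mat'), {'img': np.asarray(img)})
```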
The new version of h5py caused an error, so the old version of the package is now pinned in requirements.txt.
Create the function sort_by_informativeness() and the accompanying function uncertainty_sampling() as a first step of the active learning implementation. The idea is for sort_by_informativeness() to sort the training patches by a value that represents each patch's potential to benefit the model during training.
Correct/finish the function uncertainty_sampling() in active_learning.py, which calculates an uncertainty value for a given prediction tensor. It now first calculates a value for every pixel and then averages over the entire image to get a single value per image.
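A possible shape of the two functions, assuming an entropy-based per-pixel uncertainty over a channels-last softmax output; the function names match the commit messages, everything else is an assumption:

```python
import tensorflow as tf

def uncertainty_sampling(prediction):
    """Return one scalar uncertainty value per image in the prediction tensor."""
    eps = 1e-8
    # per-pixel entropy of the predicted class probabilities (channels last)
    pixel_entropy = -tf.reduce_sum(prediction * tf.math.log(prediction + eps), axis=-1)
    # average over the spatial dimensions -> one value per image
    return tf.reduce_mean(pixel_entropy, axis=[1, 2])

def sort_by_informativeness(patches, predictions):
    """Sort training patches so the most informative (most uncertain) come first."""
    values = uncertainty_sampling(predictions).numpy()
    order = values.argsort()[::-1]  # descending uncertainty
    return [patches[i] for i in order]
```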
Establish the first part of loading data from TFRecord files for the subsequent prediction needed for active learning. The process is heavily inspired by pipeline.py and predict.py.
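A rough sketch of what the TFRecord loading for this prediction step could look like; the feature keys, patch shape and batch size are placeholders, since the real parsing logic lives in pipeline.py:

```python
import tensorflow as tf

def load_tfrecord_for_prediction(tfrecord_paths, patch_shape=(64, 64, 1), batch_size=16):
    # assumed feature layout: a raw image patch plus its patch index
    feature_spec = {
        'image': tf.io.FixedLenFeature([], tf.string),
        'index': tf.io.FixedLenFeature([3], tf.int64),
    }

    def _parse(example_proto):
        parsed = tf.io.parse_single_example(example_proto, feature_spec)
        image = tf.io.decode_raw(parsed['image'], tf.float32)
        image = tf.reshape(image, patch_shape)
        return image, parsed['index']

    dataset = tf.data.TFRecordDataset(tfrecord_paths)
    return dataset.map(_parse).batch(batch_size).prefetch(tf.data.AUTOTUNE)
```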
To avoid redundant code, redesign the implementation of active learning in train.py. Building the training pipeline and fitting the model now happen in the same for loop, regardless of whether active learning is on or off.
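Schematically, the unified loop in train.py could look like this; apart from query_training_patches(), the function and parameter names are placeholders:

```python
def train(model, config, patch_pool=None):
    n_iterations = config.get('al_iterations', 1) if config.get('active_learning') else 1
    for iteration in range(n_iterations):
        if config.get('active_learning'):
            # pick the next batch of informative patches before building the pipeline
            selected = query_training_patches(model, patch_pool)
        else:
            selected = None
        # pipeline build and model fitting share the same loop,
        # whether active learning is on or off
        train_ds = build_training_pipeline(config, selected_patches=selected)
        model.fit(train_ds, epochs=config['epochs'])
```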
Rename some variables for better readability. Turn off random shifting of patches in the patching function. Add a prediction statement (doesn't work at the moment).
Fix a prediction error in active_learning.py by casting the index list to float32. Add a part that calculates an uncertainty value for every image.
In active_learning.py, add code that selects the patches with the highest uncertainty value for training and returns them to train.py.
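An illustrative sketch of the cast and the selection step; the variable names and the n_patches parameter are assumptions:

```python
import numpy as np
import tensorflow as tf

def select_most_uncertain(patches, indices, uncertainties, n_patches):
    # the index list has to be float32 before it can be fed to the network
    indices = tf.cast(indices, tf.float32)
    # positions of the n patches with the highest uncertainty value
    top = np.argsort(np.asarray(uncertainties))[::-1][:n_patches]
    return [patches[i] for i in top], tf.gather(indices, top)
```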
For some networks the index list should be regularized before prediction, so add this step to the prediction in active_learning.py. The process is inspired by/copied from predict_image() in predict.py. To keep the code more readable, put this in an extra function predict().
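A hedged sketch of the extra predict() helper; "regularizing" the index list is assumed here to mean scaling the indices by the image extent, mirroring what predict_image() in predict.py reportedly does for some networks:

```python
import tensorflow as tf

def predict(model, patches, indices, image_shape, regularize_indices=True):
    if regularize_indices:
        # scale each index component by the image extent so values lie in [0, 1]
        indices = tf.cast(indices, tf.float32) / tf.constant(image_shape, dtype=tf.float32)
    return model.predict([patches, indices])
```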
Introduce the class PatchPool, which contains the patching-relevant parameters and keeps track of all patches, their properties, and whether they were already used in a prediction or should be used in the next one. For this, implement a second class Patch that represents a single patch with its relevant information. The PatchPool class contains the method get_unused_patches_indices(), which outputs all patches of an image that haven't been used in training the network. Use this method in query_training_patches() to predict only the next possible candidates for every iteration. The method select_patches() is supposed to pick the n best patches for training (not tested yet).
Rebuild the PatchPool class. Use a dictionary that contains one dictionary per image holding that image's patches; this makes access to specific patches saved in the pool much easier and more efficient. To this end, implement a method get_pos_key() that creates a key for a given patch index. Also rewrite the methods calculate_values(), get_unused_patches_indices() and select_patches() accordingly, and use them in query_training_patches(). Note: the PatchPool generally works, but the indices are initialized before the actual image is patched; if the indices change during patching, an error occurs. This will be changed in the following commit.
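A condensed sketch of how the Patch/PatchPool design described in the two commits above could look; image_shapes is assumed to be keyed by image number, and all attributes beyond the names given in the commit messages are assumptions:

```python
class Patch:
    def __init__(self, index):
        self.index = index   # spatial position of the patch in the image
        self.value = 0.0     # current informativeness / uncertainty value
        self.used = False    # whether the patch was already used in training


class PatchPool:
    def __init__(self, patch_size, image_shapes):
        self.patch_size = patch_size
        # one inner dictionary per image, keyed by image number; each maps a
        # position key to a Patch, which makes access to a specific patch cheap
        self.pool = {image_number: {} for image_number in image_shapes}

    def get_pos_key(self, index):
        """Create a hashable key for a given patch index."""
        return tuple(int(i) for i in index)

    def calculate_values(self, image_number, indices, values):
        """Store or update the informativeness value of every predicted patch."""
        for index, value in zip(indices, values):
            key = self.get_pos_key(index)
            patch = self.pool[image_number].setdefault(key, Patch(index))
            patch.value = float(value)

    def get_unused_patches_indices(self, image_number):
        """All patches of an image that have not been used in training yet."""
        return [p.index for p in self.pool[image_number].values() if not p.used]

    def select_patches(self, image_number, n):
        """Pick the n best (most informative) unused patches of an image."""
        candidates = [p for p in self.pool[image_number].values() if not p.used]
        return sorted(candidates, key=lambda p: p.value, reverse=True)[:n]
```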
Don't initialize the patches in PatchPool's self.pool when the class is constructed, but instead the first time the values of the patches in self.pool are updated. Modify get_unused_patches to return the ideal list of indices if this initialization hasn't happened yet. To keep track of this, introduce self.patches_set_up, a list recording for which images the patches have already been initialized.
Create a method that returns the patches selected for training for a specific image number and edits the pool accordingly.
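Building on the sketch above, the lazy initialization and the per-image selection method could look roughly like this (written as a subclass purely for illustration; in the repository these changes go into PatchPool itself, and ideal_indices() is a hypothetical helper returning all possible patch positions of an image):

```python
class PatchPoolLazy(PatchPool):
    def __init__(self, patch_size, image_shapes):
        super().__init__(patch_size, image_shapes)
        # image numbers whose patches have already been initialized
        self.patches_set_up = []

    def update_values(self, image_number, indices, values):
        """Create the Patch objects the first time values arrive for an image."""
        if image_number not in self.patches_set_up:
            for index in indices:
                self.pool[image_number][self.get_pos_key(index)] = Patch(index)
            self.patches_set_up.append(image_number)
        self.calculate_values(image_number, indices, values)

    def get_unused_patches_indices(self, image_number):
        # before initialization, fall back to the ideal list of indices
        if image_number not in self.patches_set_up:
            return self.ideal_indices(image_number)  # hypothetical helper
        return super().get_unused_patches_indices(image_number)

    def get_training_patches(self, image_number, n):
        """Return the patches selected for training for one image and edit the pool."""
        selected = self.select_patches(image_number, n)
        for patch in selected:
            patch.used = True
        return [p.index for p in selected]
```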
Only create a PatchPool object if active learning is activated, and pass the object to the pipeline in that case. Passing the object replaces the parameter active_learning as the marker in the pipeline that AL is desired.
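The changed call site in train.py might then look roughly like this; build_training_pipeline() is a placeholder name:

```python
# only build the pool when active learning is requested in the config
patch_pool = PatchPool(patch_size, image_shapes) if config.get('active_learning') else None
# inside the pipeline, "patch_pool is not None" now marks that AL is wanted
train_ds = build_training_pipeline(config, patch_pool=patch_pool)
```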
Make the modified function get_patches_data and its usage in the pipeline compatible again. Change the order of arguments in the pipeline and edit the return statement of the function. Adjust query_training_patches() accordingly.
Modify the function so that it only returns the information on how the patch indices will turn out after patching. The idea was to use this functionality to build the patch pool for active learning; for now, however, another technique is used.
Get changes from the dev_christoph branch (commit f0ca9f8) into the test branch. This mainly includes options to save the used patches and their origin for later analysis.
Move the parameters that determine the number of patches used to the config (as temporary parameters). Also add the settings for the first Exp9-1 run.
Change the filename in which mosaic plots are saved so that it includes the name of the experiment (according to the config), allowing the predictions of different experiments to be distinguished.
Also add settings for plotting the Exp9 slices (Predict_slices_Exp9).
Small corrections to the results of merging the test branch with the development branch. The result is the current version of the AL pipeline at the end of my bachelor's thesis.
Compare with the last master commit that was merged into this branch and match them as much as possible, again removing unnecessary changes
Notes:
thomaskuestner requested changes on Mar 26, 2021
Add the newly provided config parameters to the config check for backward compatibility. The new config parameters active_learning and max_patch_num might not be present in some config files, which is why these checks are needed.
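One way such a backward-compatibility check could look; the parameter names come from the review comment, the defaults are assumptions:

```python
# fall back to safe defaults when old config files lack the new parameters
if 'active_learning' not in config:
    config['active_learning'] = False   # old configs: active learning off
if 'max_patch_num' not in config:
    config['max_patch_num'] = None      # no limit on the number of patches
```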
thomaskuestner approved these changes on Mar 30, 2021
@all-contributors please add @CDStark for code, maintenance
I've put up a pull request to add @CDStark! 🎉
AL as another option for training