- Code for fast watersheds. Based on the watershed implementation from https://bitbucket.org/poozh/watershed, described in http://arxiv.org/abs/1505.00249. For use in https://github.com/naibaf7/PyGreentea.
conda install -c conda-forge zwatershed
pip install zwatershed
- clone the repository
- run ./make.sh
- numpy, h5py, cython
- if using the parallel watershed, multiprocessing or pyspark is also required
- building the Cython extension requires a C++ compiler and boost
(segs, rand) = zwatershed_and_metrics(segTrue, aff_graph, eval_thresh_list, seg_save_thresh_list)
- returns segmentations and metrics
    - segs: list of segmentations; len(segs) == len(seg_save_thresh_list)
    - rand: dict
        - rand['V_Rand']: V_Rand score (scalar)
        - rand['V_Rand_split']: list of score values; len(rand['V_Rand_split']) == len(eval_thresh_list)
        - rand['V_Rand_merge']: list of score values; len(rand['V_Rand_merge']) == len(eval_thresh_list)
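A minimal usage sketch (the affinity-graph shape, dtypes, and top-level import path are assumptions, not specified above):

```python
import numpy as np
from zwatershed import zwatershed_and_metrics  # import path assumed

# toy inputs: a uint32 ground-truth volume and a float32 affinity graph of shape (3, z, y, x)
segTrue = np.zeros((32, 32, 32), dtype=np.uint32)
aff_graph = np.random.rand(3, 32, 32, 32).astype(np.float32)

eval_thresh_list = [100, 1000, 10000]      # thresholds at which scores are computed
seg_save_thresh_list = [1000, 10000]       # thresholds at which segmentations are returned

(segs, rand) = zwatershed_and_metrics(segTrue, aff_graph,
                                      eval_thresh_list, seg_save_thresh_list)
assert len(segs) == len(seg_save_thresh_list)
print(rand['V_Rand'], rand['V_Rand_split'], rand['V_Rand_merge'])
```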
segs = zwatershed(aff_graph, seg_save_thresh_list)
- returns segmentations
    - segs: list of segmentations; len(segs) == len(seg_save_thresh_list)
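If no metrics are needed, the call below sketches the segmentation-only path (same input assumptions as the sketch above):

```python
from zwatershed import zwatershed  # import path assumed

segs = zwatershed(aff_graph, seg_save_thresh_list)
assert len(segs) == len(seg_save_thresh_list)
```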
rand = zwatershed_and_metrics_h5(segTrue, aff_graph, eval_thresh_list, seg_save_thresh_list, seg_save_path)
zwatershed_h5(aff_graph, eval_thresh_list, seg_save_path)
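The h5 variants write the thresholded segmentations to seg_save_path instead of returning them in memory; a sketch, with the output path chosen here purely for illustration:

```python
from zwatershed import zwatershed_and_metrics_h5, zwatershed_h5  # import path assumed

seg_save_path = 'out/segs.h5'  # hypothetical output location

# metrics come back as a dict; segmentations are written to seg_save_path
rand = zwatershed_and_metrics_h5(segTrue, aff_graph, eval_thresh_list,
                                 seg_save_thresh_list, seg_save_path)

# segmentation-only variant, also written to seg_save_path
zwatershed_h5(aff_graph, eval_thresh_list, seg_save_path)
```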
(segs, rand) = zwatershed_and_metrics_arb(segTrue, node1, node2, edgeWeight, eval_thresh_list, seg_save_thresh_list)
segs = zwatershed_arb(seg_shape, node1, node2, edgeWeight, seg_save_thresh_list)
rand = zwatershed_and_metrics_h5_arb(segTrue, node1, node2, edgeWeight, eval_thresh_list, seg_save_thresh_list, seg_save_path)
zwatershed_h5_arb(seg_shape, node1, node2, edgeWeight, eval_thresh_list, seg_save_path)
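The _arb variants operate on an arbitrary edge list rather than a dense affinity graph; the sketch below assumes the edges are given as parallel node-ID arrays with one float weight per edge (the exact dtypes are an assumption):

```python
import numpy as np
from zwatershed import zwatershed_arb  # import path assumed

seg_shape = (32, 32, 32)                                   # shape of the output segmentation
node1 = np.array([0, 1, 2], dtype=np.uint64)               # first endpoint of each edge
node2 = np.array([1, 2, 3], dtype=np.uint64)               # second endpoint of each edge
edgeWeight = np.array([0.9, 0.5, 0.1], dtype=np.float32)   # affinity of each edge

segs = zwatershed_arb(seg_shape, node1, node2, edgeWeight, seg_save_thresh_list)
```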
- a full example is given in par_ex.ipynb
- Partition the subvolumes
partition_data = partition_subvols(pred_file,out_folder,max_len)
- evenly divides the data in pred_file with the constraint that no dimension of any subvolume is longer than max_len
- Zwatershed the subvolumes
eval_with_spark(partition_data[0])
- with spark
eval_with_par_map(partition_data[0],NUM_WORKERS)
- with python multiprocessing map
- after evaluation, subvolumes are saved into the out_folder directory, named by their smallest index in each dimension (e.g. path/to/out_folder/0_0_0_vol)
- Stitch the subvolumes together
stitch_and_save(partition_data,outname)
- stitch together the subvolumes in partition_data
- save to the hdf5 file outname
- outname['starts'] = list of min_indices of each subvolume
- outname['ends'] = list of max_indices of each subvolume
- outname['seg'] = full stitched segmentation
- outname['seg_sizes'] = array of size of each segmentation
- outname['rg_i'] = region graph for ith subvolume
- Threshold individual subvolumes by merging
seg_merged = merge_by_thresh(seg,seg_sizes,rg,thresh)
- load these arguments from outname
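Put together, the steps above look roughly like the sketch below; the import path, file names, worker count, threshold, and the exact HDF5 dataset name for the per-subvolume region graph are all assumptions (par_ex.ipynb shows the exact usage):

```python
import h5py
from zwatershed import (partition_subvols, eval_with_par_map,  # import path assumed
                        stitch_and_save, merge_by_thresh)

pred_file = 'path/to/affinities.h5'    # hypothetical input predictions
out_folder = 'path/to/out_folder/'
outname = 'path/to/stitched.h5'

# 1. partition so that no subvolume dimension exceeds max_len
partition_data = partition_subvols(pred_file, out_folder, max_len=300)

# 2. zwatershed each subvolume with a python multiprocessing pool (8 workers here)
eval_with_par_map(partition_data[0], 8)

# 3. stitch the subvolume segmentations together and save to an hdf5 file
stitch_and_save(partition_data, outname)

# 4. threshold by merging, using the arrays saved in the stitched file
with h5py.File(outname, 'r') as f:
    seg = f['seg'][:]
    seg_sizes = f['seg_sizes'][:]
    rg = f['rg_0'][:]                  # region graph of subvolume 0 (dataset name assumed)
seg_merged = merge_by_thresh(seg, seg_sizes, rg, 1000)
```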