imports + strict & fixing Docs (#5)
* absolute imports
* raise Sphinx warnings
* fixing docs & config
* docs: fix links & API folder
* move figures
* pandas: fix as_matrix
* update CI
* pep8
Borda authored Oct 28, 2020
1 parent 68c7ffa commit 5479ac1
Showing 29 changed files with 243 additions and 124 deletions.
28 changes: 28 additions & 0 deletions .pep8speaks.yml
@@ -0,0 +1,28 @@
# File : .pep8speaks.yml

scanner:
    diff_only: True  # If False, the entire file touched by the Pull Request is scanned for errors. If True, only the diff is scanned.
    linter: pycodestyle  # Other option is flake8

pycodestyle:  # Same as scanner.linter value. Other option is flake8
    max-line-length: 110  # Default is 79 in PEP 8
    ignore:  # Errors and warnings to ignore
        - W504  # line break after binary operator
        - E402  # module level import not at top of file
        - E731  # do not assign a lambda expression, use a def
        - C406  # Unnecessary list literal - rewrite as a dict literal.
        - E741  # ambiguous variable name

no_blank_comment: True  # If True, no comment is made on PR without any errors.
descending_issues_order: False  # If True, PEP 8 issues in message will be displayed in descending order of line numbers in the file

message:  # Customize the comment made by the bot,
    opened:  # Messages when a new PR is submitted
        header: "Hello @{name}! Thanks for opening this PR. "
        # The keyword {name} is converted into the author's username
        footer: "Do see the [Hitchhiker's guide to code style](https://goo.gl/hqbW4r)"
        # The messages can be written as they would over GitHub
    updated:  # Messages when new commits are added to the PR
        header: "Hello @{name}! Thanks for updating this PR. "
        footer: ""  # Why to comment the link to the style guide everytime? :)
    no_errors: "There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers: "
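The file above only configures the review bot. The same checks can be reproduced locally through pycodestyle's Python API — a minimal sketch, assuming `pycodestyle` is installed (note that `C406` is a flake8-comprehensions code, so pycodestyle itself never raises it):

```python
import pycodestyle

# Mirror the bot's settings: 110-character lines and the same ignore list.
# W504/E402/E731/E741 are pycodestyle codes; C406 comes from
# flake8-comprehensions and is listed in the config only for parity.
style = pycodestyle.StyleGuide(
    max_line_length=110,
    ignore=['W504', 'E402', 'E731', 'E741'],
)
report = style.check_files(['bpdl', 'experiments'])
print('total issues:', report.total_errors)
```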
2 changes: 1 addition & 1 deletion .travis.yml
@@ -18,9 +18,9 @@ sudo: false
python:
-- 2.7
# - 3.4 # will be deprecated for pandas
- 3.5
- 3.6
- 3.7
+- 3.8

# See http://docs.travis-ci.com/user/caching/#pip-cache
cache: pip
2 changes: 2 additions & 0 deletions MANIFEST.in
@@ -32,6 +32,8 @@ recursive-include figures *.png
# Include the sample data
recursive-include data_images *.yml *.yaml

+recursive-include assets *.png

prune .git
prune venv
prune build
12 changes: 6 additions & 6 deletions README.md
@@ -14,7 +14,7 @@

We present the final step of an image-processing pipeline which accepts a large number of images containing spatial expression information for thousands of genes in Drosophila imaginal discs. We assume that the gene activations are binary and can be expressed as a union of a small set of non-overlapping spatial patterns, yielding a compact representation of the spatial activation of each gene. This lends itself well to further automatic analysis, with the hope of discovering new biological relationships. Traditionally, the images were labelled manually, which was very time-consuming. The key part of our work is a binary pattern dictionary learning algorithm that takes a set of binary images and determines a set of patterns which can be used to represent the input images with small error.

-![schema](figures/pipeline_schema.png)
+![schema](assets/pipeline_schema.png)

For the image segmentation and individual object detection, we used [Image segmentation toolbox](https://borda.github.io/pyImSegm/).
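The encoding described above is easy to state in numpy terms: a label atlas assigns each pixel to one pattern, and a per-image binary weight vector says which patterns are switched on. A toy sketch of the reconstruction (illustrative data, mirroring the `lut[atlas]` idiom used in the package doctests):

```python
import numpy as np

# Toy atlas of non-overlapping patterns: 0 = background, 1..K = patterns.
atlas = np.zeros((6, 10), dtype=int)
atlas[1:4, 1:4] = 1
atlas[2:5, 5:9] = 2

# A binary weight vector per image selects the active patterns;
# index 0 keeps the background switched off.
lut = np.array([0, 1, 0])  # this image activates pattern 1 only
image = lut[atlas]         # reconstruction: union of the active patterns

assert image.max() <= 1 and image[2, 2] == 1 and image[3, 6] == 0
```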

@@ -86,13 +86,13 @@ python experiments/run_dataset_generate.py \
```

**Sample atlases**
-![atlases](figures/synth_atlases.png)
+![atlases](assets/synth_atlases.png)

**Sample binary images**
-![binary samples](figures/synth_samples_binary.png)
+![binary samples](assets/synth_samples_binary.png)

**Sample fuzzy images**
-![fuzzy samples](figures/synth_samples_fuzzy.png)
+![fuzzy samples](assets/synth_samples_fuzzy.png)

To add Gaussian noise with given sigmas, use the following script:
```bash
@@ -101,7 +101,7 @@ python experiments/run_dataset_add_noise.py \
-d apdDataset_vX --sigma 0.01 0.1 0.2
```

-![gauss noise](figures/synth_gauss-noise.png)
+![gauss noise](assets/synth_gauss-noise.png)

### Real images

@@ -226,7 +226,7 @@ python experiments/run_reconstruction.py \
--nb_workers 1 --visual
```

-![reconstruction](figures/reconst_imag-disc.png)
+![reconstruction](assets/reconst_imag-disc.png)

### Aggregating results

File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
31 changes: 16 additions & 15 deletions bpdl/data_utils.py
@@ -1,20 +1,21 @@
"""
The basic module for generating synthetic images and also loading / exporting
-Copyright (C) 2015-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
+Copyright (C) 2015-2020 Jiri Borovec <jiri.borovec@fel.cvut.cz>
"""
+from __future__ import absolute_import

-import os
import glob
-import logging
# import warnings
import itertools
+import logging
import multiprocessing as mproc
+import os
from functools import partial

# to suppress all visual, has to be on the beginning
import matplotlib

if os.environ.get('DISPLAY', '') == '' and matplotlib.rcParams['backend'] != 'agg':
print('No display found. Using non-interactive Agg backend.')
# https://matplotlib.org/faq/usage_faq.html
@@ -31,7 +32,7 @@
from imsegm.utilities.experiments import WrapExecuteSequence
from imsegm.utilities.data_io import io_imread, io_imsave

-from .utilities import create_clean_folder
+from bpdl.utilities import create_clean_folder

NB_WORKERS = mproc.cpu_count()
IMAGE_SIZE_2D = (128, 128)
@@ -154,7 +155,7 @@ def image_deform_elastic(im, coef=0.5, grid_size=(20, 20), rand_seed=None):
>>> img = np.zeros((10, 15), dtype=int)
>>> img[2:8, 3:7] = 1
>>> img[6:, 9:] = 2
->>> image_deform_elastic(img, coef=0.3, grid_size=(5, 5), rand_seed=0)
+>>> image_deform_elastic(img, coef=0.3, grid_size=(2, 2), rand_seed=0)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
@@ -164,7 +165,7 @@
[0, 0, 0, 1, 1, 1, 1, 0, 0, 2, 2, 2, 2, 2, 0],
[0, 0, 0, 1, 1, 1, 1, 0, 0, 2, 2, 2, 2, 2, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 0],
-[0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2]], dtype=uint8)
+[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> img = np.zeros((10, 15, 5), dtype=int)
>>> img[2:8, 3:7, :] = 1
>>> im = image_deform_elastic(img, coef=0.2, grid_size=(4, 5), rand_seed=0)
@@ -734,19 +735,19 @@ def export_image(path_out, img, im_name, name_template=SEGM_PATTERN,
(5, 10)
>>> np.round(im.astype(float), 1).tolist() # doctest: +NORMALIZE_WHITESPACE
[[0.6, 0.7, 0.6, 0.6, 0.4, 0.7, 0.4, 0.9, 1.0, 0.4],
[0.8, 0.5, 0.6, 0.9, 0.1, 0.1, 0.0, 0.8, 0.8, 0.9],
[1.0, 0.8, 0.5, 0.8, 0.1, 0.7, 0.1, 1.0, 0.5, 0.4],
[0.3, 0.8, 0.5, 0.6, 0.0, 0.6, 0.6, 0.6, 1.0, 0.7],
[0.4, 0.4, 0.7, 0.1, 0.7, 0.7, 0.2, 0.1, 0.3, 0.4]]
>>> img = np.random.randint(0, 9, [5, 10])
>>> path_img = export_image('.', img, 'testing-image', stretch_range=False)
>>> name, im = load_image(path_img, fuzzy_val=False)
>>> im.tolist() # doctest: +NORMALIZE_WHITESPACE
[[4, 4, 6, 4, 4, 3, 4, 4, 8, 4],
[3, 7, 5, 5, 0, 1, 5, 3, 0, 5],
[0, 1, 2, 4, 2, 0, 3, 2, 0, 7],
[5, 0, 2, 7, 2, 2, 3, 3, 2, 3],
[4, 1, 2, 1, 4, 6, 8, 2, 3, 0]]
>>> os.remove(path_img)
Image - TIFF
@@ -1135,7 +1136,7 @@ def dataset_load_weights(path_base, name_csv=CSV_NAME_WEIGHTS, img_names=None):
encoding = np.array([[int(x) for x in c.split(';')] for c in coding])
# the new encoding with pattern names
else:
-encoding = df.as_matrix()
+encoding = df.values
return np.array(encoding)


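The last hunk replaces `DataFrame.as_matrix()`, which pandas deprecated in 0.23 and removed in 1.0, with the `.values` attribute. A small illustration of the migration (the frame here is toy data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ptn_1': [1, 0], 'ptn_2': [0, 1]})

# pandas < 1.0: encoding = df.as_matrix()  # removed in pandas 1.0
encoding = df.values   # drop-in replacement used by this commit
alt = df.to_numpy()    # the spelling documented since pandas 0.24

assert isinstance(encoding, np.ndarray)
assert np.array_equal(encoding, alt)
```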
15 changes: 8 additions & 7 deletions bpdl/dictionary_learning.py
@@ -2,16 +2,17 @@
The main module for Atomic pattern dictionary, joining the atlas estimation
and computing the encoding / weights
-Copyright (C) 2015-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
+Copyright (C) 2015-2020 Jiri Borovec <jiri.borovec@fel.cvut.cz>
"""
+from __future__ import absolute_import

+import logging
import os
import time
-import logging

# to suppress all visual, has to be on the beginning
import matplotlib

if os.environ.get('DISPLAY', '') == '' and matplotlib.rcParams['backend'] != 'agg':
print('No display found. Using non-interactive Agg backend.')
# https://matplotlib.org/faq/usage_faq.html
@@ -25,14 +26,14 @@
# using https://github.com/Borda/pyGCO
from gco import cut_general_graph, cut_grid_graph_simple

-from .pattern_atlas import (
+from bpdl.pattern_atlas import (
compute_positive_cost_images_weights, edges_in_image2d_plane, init_atlas_mosaic,
atlas_split_indep_ptn, reinit_atlas_likely_patterns, compute_relative_penalty_images_weights)
-from .pattern_weights import (
+from bpdl.pattern_weights import (
weights_image_atlas_overlap_major, weights_image_atlas_overlap_partial)
-from .metric_similarity import compare_atlas_adjusted_rand
-from .data_utils import export_image
-from .registration import register_images_to_atlas_demons
+from bpdl.metric_similarity import compare_atlas_adjusted_rand
+from bpdl.data_utils import export_image
+from bpdl.registration import register_images_to_atlas_demons

NB_GRAPH_CUT_ITER = 5
TEMPLATE_NAME_ATLAS = 'BPDL_{}_{}_iter_{:04d}'
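All the import hunks in this file follow one pattern: explicit relative imports (`from .pattern_atlas import ...`) become absolute ones (`from bpdl.pattern_atlas import ...`). A short sketch of why, using names from this module (it assumes the repository root is on `sys.path`):

```python
# Relative form: resolves only when the module is loaded as part of the
# 'bpdl' package; running `python bpdl/dictionary_learning.py` directly
# fails with "attempted relative import with no known parent package".
# from .pattern_atlas import init_atlas_mosaic

# Absolute form used after this commit: resolves identically under
# `import bpdl.dictionary_learning` and under direct script execution.
from bpdl.pattern_atlas import init_atlas_mosaic
```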
2 changes: 1 addition & 1 deletion bpdl/metric_similarity.py
@@ -1,7 +1,7 @@
"""
Introducing some similarity measures for atlases etc.
-Copyright (C) 2015-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
+Copyright (C) 2015-2020 Jiri Borovec <jiri.borovec@fel.cvut.cz>
"""

# from __future__ import absolute_import
12 changes: 6 additions & 6 deletions bpdl/pattern_atlas.py
@@ -1,19 +1,19 @@
"""
Estimating the pattern dictionary module
-Copyright (C) 2015-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
+Copyright (C) 2015-2020 Jiri Borovec <jiri.borovec@fel.cvut.cz>
"""
# from __future__ import absolute_import
import logging

# import numba
import numpy as np
-from sklearn.decomposition import SparsePCA, FastICA, DictionaryLearning, NMF
-from skimage import morphology, measure, segmentation, filters
from scipy import ndimage as ndi
+from skimage import morphology, measure, segmentation, filters
+from sklearn.decomposition import SparsePCA, FastICA, DictionaryLearning, NMF

-from .data_utils import image_deform_elastic, extract_image_largest_element
-from .pattern_weights import (
+from bpdl.data_utils import image_deform_elastic, extract_image_largest_element
+from bpdl.pattern_weights import (
weights_label_atlas_overlap_threshold, convert_weights_binary2indexes)

REINIT_PATTERN_COMPACT = True
@@ -403,7 +403,7 @@ def init_atlas_sparse_pca(imgs, nb_patterns, nb_iter=5, bg_threshold=0.1):
>>> atlas[3:7, 6:12] = 2
>>> luts = np.array([[0, 1, 0]] * 99 + [[0, 0, 1]] * 99 + [[0, 1, 1]] * 99)
>>> imgs = [lut[atlas] for lut in luts]
->>> init_atlas_sparse_pca(imgs, 2)
+>>> init_atlas_sparse_pca(imgs, 2, bg_threshold=0.05)
array([[0, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0],
[0, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0],
[0, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0],
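`init_atlas_sparse_pca`, whose doctest now passes `bg_threshold=0.05` explicitly, seeds the atlas from sklearn's SparsePCA. A rough, hypothetical sketch of the idea — fit sparse components on flattened binary images, then take the per-pixel argmax and threshold weak responses to background — not the package's exact implementation:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

def sparse_pca_atlas(imgs, nb_patterns, nb_iter=5, bg_threshold=0.1):
    """Hypothetical sketch: derive a label atlas from binary images."""
    shape = imgs[0].shape
    data = np.array([im.ravel() for im in imgs], dtype=float)
    spca = SparsePCA(n_components=nb_patterns, max_iter=nb_iter, random_state=0)
    spca.fit(data)
    # Reshape each component back to image space and assign labels 1..K.
    comps = np.abs(spca.components_).reshape((nb_patterns,) + shape)
    atlas = np.argmax(comps, axis=0) + 1
    atlas[comps.max(axis=0) < bg_threshold] = 0  # weak response -> background
    return atlas
```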
2 changes: 1 addition & 1 deletion bpdl/pattern_weights.py
@@ -1,7 +1,7 @@
"""
Estimating pattern weight vector for each image
-Copyright (C) 2015-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
+Copyright (C) 2015-2020 Jiri Borovec <jiri.borovec@fel.cvut.cz>
"""

# from __future__ import absolute_import
7 changes: 3 additions & 4 deletions bpdl/registration.py
@@ -5,21 +5,21 @@
* http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/
* https://bic-berkeley.github.io/psych-214-fall-2016/dipy_registration.html
-Copyright (C) 2017-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
+Copyright (C) 2017-2020 Jiri Borovec <jiri.borovec@fel.cvut.cz>
"""

-import time
import logging
+import time
# import multiprocessing as mproc
from functools import partial

import numpy as np
-from scipy import ndimage, interpolate
# from scipy.ndimage import filters
from dipy.align import VerbosityLevels
from dipy.align.imwarp import SymmetricDiffeomorphicRegistration, DiffeomorphicMap
from dipy.align.metrics import SSDMetric
from imsegm.utilities.experiments import WrapExecuteSequence, nb_workers
+from scipy import ndimage, interpolate

NB_WORKERS = nb_workers(0.8)

@@ -93,7 +93,6 @@ def register_demons_sym_diffeom(img_sense, img_ref, smooth_sigma=1.,
[ 0., 0., 0., 0., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
->>> np.round(img_warp - img_sense, 1) # doctest: +SKIP
>>> img_sense = np.zeros(img_ref.shape, dtype=int)
>>> img_sense[4:9, 3:10] = 1
>>> img_sense
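This module wraps dipy's symmetric diffeomorphic (demons-style) registration via the imports shown above. A minimal sketch of that dipy API on synthetic 2-D images (the parameter values are illustrative, not the module's defaults):

```python
import numpy as np
from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
from dipy.align.metrics import SSDMetric

# Two shifted binary squares stand in for the atlas / sensed image.
static = np.zeros((32, 32))
static[8:20, 8:20] = 1.0
moving = np.zeros((32, 32))
moving[10:22, 11:23] = 1.0

metric = SSDMetric(static.ndim)  # sum-of-squared-differences metric
sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[50, 25, 10])
mapping = sdr.optimize(static, moving)  # returns a DiffeomorphicMap
warped = mapping.transform(moving)      # moving image warped onto static
```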
14 changes: 8 additions & 6 deletions bpdl/utilities.py
@@ -1,24 +1,26 @@
"""
The basic module for generating synthetic images and also loading / exporting
-Copyright (C) 2015-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
+Copyright (C) 2015-2020 Jiri Borovec <jiri.borovec@fel.cvut.cz>
"""

import logging
# from __future__ import absolute_import
import os
import re
import types
import logging
import shutil
# import multiprocessing.pool
# import multiprocessing as mproc
# from functools import wraps
import types

import numpy as np
from scipy import stats
from scipy.spatial import distance


# import multiprocessing.pool
# import multiprocessing as mproc
# from functools import wraps


# def update_path(path_file, lim_depth=5, absolute=True):
# """ bubble in the folder tree up intil it found desired file
# otherwise return original one