Releases · MisaOgura/flashtorch
0.1.3
Install steps
pip install flashtorch
Upgrade steps
pip install flashtorch -U
Breaking changes
- None
New features
- None
Bug fixes
- None
Improvements
- Requested improvement: #30
- Implemented by: #31
- Quick summary: flashtorch.saliency.Backprop can now handle models with mono-channel/grayscale input images (see the sketch below).
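A minimal sketch of what this enables; the toy single-channel model and shapes below are illustrative, not taken from the release:

import torch
import torch.nn as nn
from flashtorch.saliency import Backprop

# A toy grayscale classifier: note the single input channel
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
)

# N x 1 x H x W input, i.e. a grayscale image tensor
input_ = torch.randn(1, 1, 28, 28, requires_grad=True)

backprop = Backprop(model)
gradients = backprop.calculate_gradients(input_, target_class=0)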
Other changes
- None
0.1.2
Install steps
pip install flashtorch
Upgrade steps
pip install flashtorch -U
Breaking changes
- None
New features
- None
Bug fixes
- Reported bug: #18
- Fixed by: #25
- Quick summary: flashtorch.saliency.Backprop.visualize now correctly passes the use_gpu flag down to calculate_gradients (see the usage sketch below).
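A quick usage sketch, assuming a pretrained AlexNet from torchvision; the image path and target class are placeholders:

import torchvision.models as models
from flashtorch.saliency import Backprop
from flashtorch.utils import apply_transforms, load_image

model = models.alexnet(pretrained=True)
backprop = Backprop(model)

# Load and preprocess an input image (path is a placeholder)
input_ = apply_transforms(load_image('/path/to/image.jpg'))

# use_gpu is now forwarded to the underlying calculate_gradients call
backprop.visualize(input_, target_class=24, use_gpu=True)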
Improvements
- None
Other changes
- None
0.1.1
Install steps
pip install flashtorch
Upgrade steps
pip install flashtorch -U
Breaking changes
- None
New features
- None
Bug fixes
- Removes the dependency on README.md in setup.py, to avoid a Unicode decoding error (reported in #14). setup.py now gets the long_description from its docstring (see the sketch below).
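A minimal sketch of the pattern described; this is illustrative, not the actual setup.py:

"""FlashTorch - visualization toolkit for neural networks in PyTorch."""
from setuptools import setup, find_packages

setup(
    name='flashtorch',
    packages=find_packages(),
    # Read long_description from the module docstring above instead of
    # opening README.md, which triggered the Unicode decoding error
    long_description=__doc__,
)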
Improvements
- None
Other changes
- None
0.1.0
Install steps
pip install flashtorch
Upgrade steps
pip install flashtorch -U
Breaking changes
- flashtorch.utils.visualize: This functionality was specific to creating saliency maps, and has therefore been moved to a method on flashtorch.saliency.Backprop (see the sketch after the notebook links).
Refer to the notebooks below for details and how to use it:
- Image-specific class saliency map with backpropagation
- Google Colab version: best for playing around
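A sketch of the new call site, assuming a pretrained AlexNet and a placeholder image path:

import torchvision.models as models
from flashtorch.saliency import Backprop
from flashtorch.utils import apply_transforms, load_image

model = models.alexnet(pretrained=True)
backprop = Backprop(model)

owl = apply_transforms(load_image('/path/to/great_grey_owl.jpg'))

# Previously flashtorch.utils.visualize; now a method on Backprop
backprop.visualize(owl, target_class=24)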
New features
- flashtorch.activmax.GradientAscent: This is a new API which implements activation maximization via gradient ascent. It has three public-facing APIs (see the usage sketch after the notebook links):
  - GradientAscent.optimize: generates an image that maximally activates the target filter.
  - GradientAscent.visualize: optimizes for the target layer/filter and visualizes the output.
  - GradientAscent.deepdream: creates DeepDream images.
Refer to the notebooks below for details and how to use it:
- Activation maximization
- Google Colab version: best for playing around
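A minimal usage sketch, assuming a pretrained VGG16 from torchvision; the layer and filter indices are arbitrary choices for illustration:

import torchvision.models as models
from flashtorch.activmax import GradientAscent

model = models.vgg16(pretrained=True)
g_ascent = GradientAscent(model.features)

# Pick a conv layer and a couple of its filters to optimize for
target_layer = model.features[24]
g_ascent.visualize(target_layer, [45, 271], title='VGG16 conv layer')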
Bug fixes
- None
Improvements
- flashtorch.utils.standardize_and_clip: Users can optionally set the saturation and brightness (see the sketch below).
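A small sketch of the new knobs; the parameter names follow the note above, but the values are arbitrary:

import torch
from flashtorch.utils import standardize_and_clip

tensor = torch.randn(3, 224, 224)

# Normalize into a displayable range, with optional saturation/brightness
output = standardize_and_clip(tensor, saturation=0.2, brightness=0.6)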
Other changes
- None
0.0.8
Install steps
pip install flashtorch
Upgrade steps
pip install flashtorch -U
Breaking changes
- None
New features
- None
Bug fixes
- Fixes #2
Improvements
- Users can explicitly set a device to use when calculating gradients with an instance of Backprop, by setting use_gpu=True. If it is True and torch.cuda.is_available(), the computation is moved to the GPU. It defaults to False if not provided.

from flashtorch.saliency import Backprop

...  # Prepare input_ and target_class beforehand

model = Model()  # placeholder: construct your model here
backprop = Backprop(model)
gradients = backprop.calculate_gradients(input_, target_class, use_gpu=True)
Other changes
- setup.py now has better indications of the supported Python versions (see the sketch below).
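A sketch of how such indications typically look in setup.py; the exact version ranges are assumptions, not taken from this release:

from setuptools import setup

setup(
    name='flashtorch',
    # Declare supported interpreters so pip can refuse incompatible ones
    python_requires='>=3.5',
    classifiers=[
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
    ],
)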