Merge pull request #28 from moskomule/dev
v0.7
  • Loading branch information
moskomule authored Oct 4, 2019
2 parents 59eab55 + dee3a84 commit 81d9ffb
Showing 60 changed files with 1,339 additions and 581 deletions.
3 changes: 1 addition & 2 deletions .circleci/config.yml
@@ -14,8 +14,7 @@ jobs:
command: |
python -m venv venv
. venv/bin/activate
pip install -U https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-linux_x86_64.whl
pip install -U https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-linux_x86_64.whl
pip install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install -U pytest
pip install -U .
- save_cache:
3 changes: 2 additions & 1 deletion .gitignore
@@ -3,4 +3,5 @@
docs/build/html/.doctrees
docs/build/html/_sources
docs/build/doctrees
docs/source
docs/source/*
!docs/source/index.rst
83 changes: 70 additions & 13 deletions README.md
@@ -1,8 +1,6 @@
# Homura [![CircleCI](https://circleci.com/gh/moskomule/homura/tree/master.svg?style=svg)](https://circleci.com/gh/moskomule/homura/tree/master)
# Homura [![CircleCI](https://circleci.com/gh/moskomule/homura/tree/master.svg?style=svg)](https://circleci.com/gh/moskomule/homura/tree/master) [![document](https://img.shields.io/static/v1?label=doc&message=homura&color=blue)](https://moskomule.github.io/homura)

[document](https://moskomule.github.io/homura)

**homura** is a library for prototyping DL research.
**homura** is a library for fast prototyping of DL research.

🔥🔥🔥🔥 *homura* (焰) is *flame* or *blaze* in Japanese. 🔥🔥🔥🔥

@@ -12,17 +10,19 @@

```
Python>=3.7
PyTorch>=1.1.0
torchvision>=0.3.0
PyTorch>=1.2.0
torchvision>=0.4.0
tqdm # automatically installed
tensorboard # automatically installed
```

### optional

```
miniargs
colorlog
miniargs (to run samples)
colorlog (to log with colors)
faiss (for faster kNN)
accimage (for faster image pre-processing)
```

To enable distributed training with automatic mixed precision (AMP), install apex.
@@ -82,9 +82,10 @@ with reporters.TensorboardReporter([callbacks.AccuracyCallback(),
Now the trainer's `iteration` can be updated as follows:

```python
from typing import Mapping, Tuple

import torch
from homura.trainers import TrainerBase, SupervisedTrainer
from homura.utils.containers import Map

def iteration(trainer: Trainer, data: Tuple[torch.Tensor]) -> Mapping[torch.Tensor]:
def iteration(trainer: TrainerBase, data: Tuple[torch.Tensor, torch.Tensor]) -> Mapping[str, torch.Tensor]:
input, target = data
output = trainer.model(input)
loss = trainer.loss_f(output, target)
@@ -93,6 +94,7 @@ def iteration(trainer: Trainer, data: Tuple[torch.Tensor]) -> Mapping[torch.Tens
trainer.optimizer.zero_grad()
loss.backward()
trainer.optimizer.step()
# iteration returns at least (loss, output)
# registered values can be accessed in callbacks
results.user_value = user_value
return results
@@ -102,7 +104,44 @@ SupervisedTrainer.iteration = iteration
trainer.update_iteration(iteration)
```

Also, `dict` of models, optimizers, loss functions are supported.
`callbacks.Callback` can access model parameters, losses, model outputs, and other user-defined values.

In most cases, `callbacks.metric_callback_decorator` is useful; the returned values are accumulated.

```python
from homura import callbacks

# Note that `iteration` has `user_value`

callbacks.metric_callback_decorator(lambda data: data["user_value"], name='user_value')

# or equivalently,

@callbacks.metric_callback_decorator
def user_value(data):
return data["user_value"]
```
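The accumulation behavior can be sketched in plain Python. This is a simplified, hypothetical stand-in (not homura's actual implementation): per-iteration values are collected, then averaged at the end of the epoch.

```python
# Simplified sketch of metric accumulation; NOT homura's real implementation.
# The wrapped function is evaluated each iteration and its values are
# averaged per epoch.

def metric_callback_decorator(func, name=None):
    class MetricCallback:
        def __init__(self):
            self.name = name or func.__name__
            self._values = []

        def after_iteration(self, data):
            # collect the metric value for this iteration
            self._values.append(func(data))

        def after_epoch(self, data):
            # report the accumulated average, then reset for the next epoch
            avg = sum(self._values) / len(self._values)
            self._values = []
            return {self.name: avg}

    return MetricCallback()

cb = metric_callback_decorator(lambda data: data["user_value"], name="user_value")
cb.after_iteration({"user_value": 1.0})
cb.after_iteration({"user_value": 3.0})
# cb.after_epoch({}) now yields the epoch average, {"user_value": 2.0}
```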

`callbacks.Callback` has the methods `before_all`, `before_iteration`, `before_epoch`, `after_all`, `after_iteration`, and `after_epoch`. For example, `callbacks.WeightSave` looks like:

```python
from homura.callbacks import Callback
class WeightSave(Callback):
...

def after_epoch(self, data: Mapping):
self._epoch = data["epoch"]
self._step = data["step"]
if self.save_freq > 0 and data["epoch"] % self.save_freq == 0:
self.save(data, f"{data['epoch']}.pkl")

def after_all(self, data: Mapping):
if self.save_freq == -1:
self.save(data, "weight.pkl")
```
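The hook protocol above can be illustrated with a minimal, self-contained driver loop. This is a hypothetical sketch of how a trainer invokes each hook; homura's actual loop differs in detail.

```python
# Hypothetical sketch of how a trainer drives Callback hooks;
# illustrative only, not homura's actual training loop.

class Callback:
    def before_all(self, data): pass
    def before_epoch(self, data): pass
    def before_iteration(self, data): pass
    def after_iteration(self, data): pass
    def after_epoch(self, data): pass
    def after_all(self, data): pass

class CountingCallback(Callback):
    """Records which hooks fire, in order."""
    def __init__(self):
        self.calls = []
    def after_iteration(self, data):
        self.calls.append(("iter", data["step"]))
    def after_epoch(self, data):
        self.calls.append(("epoch", data["epoch"]))

def run(callback, num_epochs, iters_per_epoch):
    step = 0
    callback.before_all({})
    for epoch in range(num_epochs):
        callback.before_epoch({"epoch": epoch})
        for _ in range(iters_per_epoch):
            callback.before_iteration({"step": step})
            # ... forward/backward/optimizer.step() would happen here ...
            callback.after_iteration({"step": step})
            step += 1
        callback.after_epoch({"epoch": epoch, "step": step})
    callback.after_all({"step": step})

counting_cb = CountingCallback()
run(counting_cb, num_epochs=2, iters_per_epoch=3)
# counting_cb.calls records 6 iteration events and 2 epoch events
```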


`dict`s of models, optimizers, and loss functions are also supported.

```python
trainer = CustomTrainer({"generator": generator, "discriminator": discriminator},
@@ -111,21 +150,28 @@ trainer = CustomTrainer({"generator": generator, "discriminator": discriminator}
**kwargs)
```
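One common way to support both call styles is to normalize a single object into a one-entry `dict`. The helper below is a hypothetical sketch of that pattern, not homura's code:

```python
# Hypothetical sketch of accepting either a single model or a dict of models;
# not homura's actual implementation.

def as_dict(obj, default_key="model"):
    """Wrap a bare object into a one-entry dict; pass dicts through."""
    return obj if isinstance(obj, dict) else {default_key: obj}

class FakeModel:  # stand-in for an nn.Module
    pass

single = as_dict(FakeModel())                      # {"model": <FakeModel>}
pair = as_dict({"generator": FakeModel(),
                "discriminator": FakeModel()})     # passed through unchanged
```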

### distributed training

An easy distributed-training initializer, `homura.init_distributed()`, is available.
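Such initializers typically read the process-group configuration from environment variables before calling `torch.distributed.init_process_group`. The sketch below shows only that environment-reading step and is a hedged assumption about the pattern, not `homura.init_distributed`'s actual logic:

```python
# Hedged sketch of the environment variables a distributed initializer
# typically consumes; NOT homura.init_distributed's actual implementation.
import os

def read_distributed_env(env=os.environ):
    rank = int(env.get("RANK", 0))              # this process's global rank
    world_size = int(env.get("WORLD_SIZE", 1))  # total number of processes
    is_distributed = world_size > 1
    return rank, world_size, is_distributed

rank, world_size, dist = read_distributed_env({"RANK": "1", "WORLD_SIZE": "4"})
```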


## reproducibility

This context manager makes randomness deterministic within its scope.

```python
from homura.reproductivity import set_deterministic
from homura.utils.reproducibility import set_deterministic
with set_deterministic(seed):
something()
```
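The idea behind such a context manager can be shown with only the stdlib `random` module. This is a minimal stand-in under stated assumptions: homura additionally seeds numpy/torch and toggles cuDNN determinism, which this sketch omits.

```python
# Minimal stand-in for a seed-scoping context manager; homura's version
# additionally handles numpy/torch RNGs and cuDNN flags.
import random
from contextlib import contextmanager

@contextmanager
def set_deterministic(seed):
    state = random.getstate()   # save the current RNG state
    random.seed(seed)           # make draws inside the context reproducible
    try:
        yield
    finally:
        random.setstate(state)  # restore the outer RNG state on exit

with set_deterministic(0):
    a = random.random()
with set_deterministic(0):
    b = random.random()
# a == b: same seed, same draw
```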

## debugger
## debugging

```python
>>> import torch
>>> from torch import nn
>>> from homura import debug
>>> debug.module_debugger(nn.Sequential(nn.Linear(10, 5),
...                                     nn.Linear(5, 1)),
...                       torch.randn(4, 10))

[homura.debug|2019-02-25 17:57:06|DEBUG] Start forward calculation
[homura.debug|2019-02-25 17:57:06|DEBUG] forward> name=Sequential(1)
[homura.debug|2019-02-25 17:57:06|DEBUG] forward> name=Linear(2)
@@ -143,7 +189,6 @@ See [examples](examples).

* [cifar10.py](examples/cifar10.py): training ResNet-20 or WideResNet-28-10 with random crop on CIFAR10
* [imagenet.py](examples/imagenet.py): training a CNN on ImageNet on multi GPUs (single and multi process)
* [gap.py](examples/gap.py): better implementation of generative adversarial perturbation

For [imagenet.py](examples/imagenet.py), if you want

@@ -169,3 +214,15 @@ run

Here, `0<$RANK<$NUM_NODES`.

# Citing

```bibtex
@misc{homura,
author = {Ryuichiro Hataya},
title = {homura},
year = {2018},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://GitHub.com/moskomule/homura}},
}
```
2 changes: 2 additions & 0 deletions docs/Makefile
@@ -16,4 +16,6 @@ help:
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
rm $(wildcard $(SOURCEDIR)/homura*.rst)
sphinx-apidoc -f -o "$(SOURCEDIR)" "../homura"
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
6 changes: 6 additions & 0 deletions docs/build/html/_modules/homura/callbacks/saver.html
@@ -193,10 +193,16 @@ <h1>Source code for homura.callbacks.saver</h1><div class="highlight"><pre>
<span class="n">data</span><span class="p">:</span> <span class="n">Mapping</span><span class="p">,</span>
<span class="n">file_name</span><span class="p">:</span> <span class="nb">str</span><span class="p">):</span>
<span class="k">try</span><span class="p">:</span>
<span class="c1"># scheduler is not a must</span>
<span class="n">scheduler_state_dict</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">SCHEDULER</span><span class="p">)</span>
<span class="k">if</span> <span class="n">scheduler_state_dict</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">scheduler_state_dict</span> <span class="o">=</span> <span class="n">scheduler_state_dict</span><span class="o">.</span><span class="n">state_dict</span><span class="p">()</span>

<span class="n">torch</span><span class="o">.</span><span class="n">save</span><span class="p">({</span><span class="s2">&quot;git&quot;</span><span class="p">:</span> <span class="n">get_git_hash</span><span class="p">(),</span>
<span class="s2">&quot;args&quot;</span><span class="p">:</span> <span class="n">get_args</span><span class="p">(),</span>
<span class="n">MODEL</span><span class="p">:</span> <span class="n">data</span><span class="p">[</span><span class="n">MODEL</span><span class="p">]</span><span class="o">.</span><span class="n">state_dict</span><span class="p">(),</span>
<span class="n">OPTIMIZER</span><span class="p">:</span> <span class="n">data</span><span class="p">[</span><span class="n">OPTIMIZER</span><span class="p">]</span><span class="o">.</span><span class="n">state_dict</span><span class="p">(),</span>
<span class="n">SCHEDULER</span><span class="p">:</span> <span class="n">scheduler_state_dict</span><span class="p">,</span>
<span class="n">EPOCH</span><span class="p">:</span> <span class="bp">self</span><span class="o">.</span><span class="n">_epoch</span><span class="p">,</span>
<span class="n">STEP</span><span class="p">:</span> <span class="bp">self</span><span class="o">.</span><span class="n">_step</span><span class="p">},</span>
<span class="bp">self</span><span class="o">.</span><span class="n">save_path</span> <span class="o">/</span> <span class="n">file_name</span><span class="p">)</span>
1 change: 0 additions & 1 deletion docs/build/html/_modules/homura/metrics/segmentation.html
@@ -208,7 +208,6 @@ <h1>Source code for homura.metrics.segmentation</h1><div class="highlight"><pre>
<span class="k">raise</span> <span class="ne">RuntimeError</span><span class="p">(</span><span class="n">f</span><span class="s2">&quot;Dimension of target is expected to be 3, but got {target.dim()}&quot;</span><span class="p">)</span>

<span class="n">cm</span> <span class="o">=</span> <span class="n">confusion_matrix</span><span class="p">(</span><span class="nb">input</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span><span class="o">.</span><span class="n">float</span><span class="p">()</span>
<span class="nb">print</span><span class="p">(</span><span class="n">cm</span><span class="p">)</span>
<span class="k">return</span> <span class="n">cm</span><span class="o">.</span><span class="n">diag</span><span class="p">()</span> <span class="o">/</span> <span class="p">(</span><span class="n">cm</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> <span class="o">+</span> <span class="n">cm</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span> <span class="o">-</span> <span class="n">cm</span><span class="o">.</span><span class="n">diag</span><span class="p">())</span></div>


@@ -151,7 +151,7 @@
<div itemprop="articleBody">

<h1>Source code for homura.modules.functional.discretization</h1><div class="highlight"><pre>
<span></span><span class="c1"># some functions to discretize input tensors</span>
<span></span><span class="c1"># some functions to discretize input_forward tensors</span>

<span class="kn">import</span> <span class="nn">random</span>

@@ -163,7 +163,8 @@ <h1>Source code for homura.modules.functional.discretization</h1><div class="hig
<span class="n">__all__</span> <span class="o">=</span> <span class="p">[</span><span class="s2">&quot;gumbel_softmax&quot;</span><span class="p">,</span> <span class="s2">&quot;gumbel_sigmoid&quot;</span><span class="p">,</span> <span class="s2">&quot;straight_through_estimator&quot;</span><span class="p">,</span> <span class="s2">&quot;semantic_hashing&quot;</span><span class="p">]</span>


<div class="viewcode-block" id="gumbel_sigmoid"><a class="viewcode-back" href="../../../../homura.modules.functional.html#homura.modules.functional.discretization.gumbel_sigmoid">[docs]</a><span class="k">def</span> <span class="nf">gumbel_sigmoid</span><span class="p">(</span><span class="nb">input</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">,</span> <span class="n">temp</span><span class="p">:</span> <span class="nb">float</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<div class="viewcode-block" id="gumbel_sigmoid"><a class="viewcode-back" href="../../../../homura.modules.functional.html#homura.modules.functional.discretization.gumbel_sigmoid">[docs]</a><span class="k">def</span> <span class="nf">gumbel_sigmoid</span><span class="p">(</span><span class="nb">input</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">,</span>
<span class="n">temp</span><span class="p">:</span> <span class="nb">float</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<span class="sd">&quot;&quot;&quot; gumbel sigmoid function</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">return</span> <span class="n">RelaxedBernoulli</span><span class="p">(</span><span class="n">temp</span><span class="p">,</span> <span class="n">probs</span><span class="o">=</span><span class="nb">input</span><span class="o">.</span><span class="n">sigmoid</span><span class="p">())</span><span class="o">.</span><span class="n">rsample</span><span class="p">()</span></div>
@@ -174,11 +175,13 @@ <h1>Source code for homura.modules.functional.discretization</h1><div class="hig
<span class="sd"> &quot;&quot;&quot;</span>

<span class="nd">@staticmethod</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="n">ctx</span><span class="p">,</span> <span class="nb">input</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="n">ctx</span><span class="p">,</span>
<span class="nb">input</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<span class="k">return</span> <span class="p">(</span><span class="nb">input</span> <span class="o">&gt;</span> <span class="mi">0</span><span class="p">)</span><span class="o">.</span><span class="n">float</span><span class="p">()</span>

<span class="nd">@staticmethod</span>
<span class="k">def</span> <span class="nf">backward</span><span class="p">(</span><span class="n">ctx</span><span class="p">,</span> <span class="n">grad_output</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<span class="k">def</span> <span class="nf">backward</span><span class="p">(</span><span class="n">ctx</span><span class="p">,</span>
<span class="n">grad_output</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<span class="k">return</span> <span class="n">F</span><span class="o">.</span><span class="n">hardtanh</span><span class="p">(</span><span class="n">grad_output</span><span class="p">)</span>


@@ -194,7 +197,7 @@ <h1>Source code for homura.modules.functional.discretization</h1><div class="hig


<span class="k">def</span> <span class="nf">_saturated_sigmoid</span><span class="p">(</span><span class="nb">input</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<span class="c1"># max(0, min(1, 1.2 * input.sigmoid() - 0.1))</span>
<span class="c1"># max(0, min(1, 1.2 * input_forward.sigmoid() - 0.1))</span>
<span class="k">return</span> <span class="n">F</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="mi">1</span> <span class="o">-</span> <span class="n">F</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="mf">1.1</span> <span class="o">-</span> <span class="mf">1.2</span> <span class="o">*</span> <span class="nb">input</span><span class="o">.</span><span class="n">sigmoid</span><span class="p">()))</span>


@@ -220,7 +223,9 @@ <h1>Source code for homura.modules.functional.discretization</h1><div class="hig
<span class="k">return</span> <span class="n">v1</span></div>


<div class="viewcode-block" id="gumbel_softmax"><a class="viewcode-back" href="../../../../homura.modules.functional.html#homura.modules.functional.discretization.gumbel_softmax">[docs]</a><span class="k">def</span> <span class="nf">gumbel_softmax</span><span class="p">(</span><span class="nb">input</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">,</span> <span class="n">dim</span><span class="p">:</span> <span class="nb">int</span><span class="p">,</span> <span class="n">temp</span><span class="p">:</span> <span class="nb">float</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<div class="viewcode-block" id="gumbel_softmax"><a class="viewcode-back" href="../../../../homura.modules.functional.html#homura.modules.functional.discretization.gumbel_softmax">[docs]</a><span class="k">def</span> <span class="nf">gumbel_softmax</span><span class="p">(</span><span class="nb">input</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">,</span>
<span class="n">dim</span><span class="p">:</span> <span class="nb">int</span><span class="p">,</span>
<span class="n">temp</span><span class="p">:</span> <span class="nb">float</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">:</span>
<span class="sd">&quot;&quot;&quot; gumbel softmax</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">return</span> <span class="n">RelaxedOneHotCategorical</span><span class="p">(</span><span class="n">temp</span><span class="p">,</span> <span class="nb">input</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">dim</span><span class="o">=</span><span class="n">dim</span><span class="p">))</span><span class="o">.</span><span class="n">rsample</span><span class="p">()</span></div>
