Add firing rate scaling option to Converter #134

Merged
merged 9 commits into from Mar 5, 2020
2 changes: 1 addition & 1 deletion .codecov.yml
@@ -11,7 +11,7 @@ coverage:
   project:
     default:
       enabled: yes
-      target: auto
+      target: 100%
   patch:
     default:
       enabled: yes
3 changes: 2 additions & 1 deletion .nengobones.yml
@@ -217,7 +217,8 @@ travis_yml:
   deploy_dists:
     - sdist

-codecov_yml: {}
+codecov_yml:
+  abs_target: 100%

 setup_py:
   include_package_data: True
8 changes: 5 additions & 3 deletions .templates/gpu.sh.template
@@ -13,9 +13,11 @@
echo "Waiting for lock on GPU $GPU_NUM"
(
flock -x -w 540 200 || exit 1
CUDA_VISIBLE_DEVICES="$GPU_NUM" pytest $TEST_ARGS nengo_dl/tests/test_benchmarks.py::test_performance --performance -v --durations 20 --color=yes || exit 1
CUDA_VISIBLE_DEVICES="$GPU_NUM" pytest $TEST_ARGS nengo_dl -v --durations 20 --color=yes --cov=nengo_dl --cov-report=xml --cov-report=term-missing || exit 1
CUDA_VISIBLE_DEVICES="$GPU_NUM" pytest $TEST_ARGS --pyargs nengo -v --durations 20 --color=yes --cov=nengo_dl --cov-report=xml --cov-report=term-missing --cov-append || exit 1
export CUDA_VISIBLE_DEVICES="$GPU_NUM"
export TF_FORCE_GPU_ALLOW_GROWTH=true
pytest $TEST_ARGS nengo_dl/tests/test_benchmarks.py::test_performance --performance -v --durations 20 --color=yes || exit 1
pytest $TEST_ARGS nengo_dl -v -n 2 --durations 20 --color=yes --cov=nengo_dl --cov-report=xml --cov-report=term-missing || exit 1
pytest $TEST_ARGS --pyargs nengo -v -n 2 --durations 20 --color=yes --cov=nengo_dl --cov-report=xml --cov-report=term-missing --cov-append || exit 1
) 200>/var/lock/.travis-ci.exclusivelock."$GPU_NUM" || REMOTE_STATUS=1
{% endblock %}
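
For reference, the ``TF_FORCE_GPU_ALLOW_GROWTH`` setting exported above can also be enabled from inside a TensorFlow program; a minimal sketch (not part of this diff)::

    import os

    # Equivalent of `export TF_FORCE_GPU_ALLOW_GROWTH=true`; must be set
    # before TensorFlow initializes the GPU.
    os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

    import tensorflow as tf

    # Alternatively, request memory growth explicitly per device.
    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)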

2 changes: 1 addition & 1 deletion .templates/remote-script.sh.template
@@ -23,7 +23,7 @@
echo "Waiting for lock on GPU $GPU_NUM"
(
flock -x -w 540 200 || exit 1
CUDA_VISIBLE_DEVICES="$GPU_NUM" bash .ci/{{ remote_script }}.sh script || exit 1
CUDA_VISIBLE_DEVICES="$GPU_NUM" TF_FORCE_GPU_ALLOW_GROWTH=true bash .ci/{{ remote_script }}.sh script || exit 1
) 200>/var/lock/.travis-ci.exclusivelock."$GPU_NUM" || REMOTE_STATUS=1
{% endblock %}

9 changes: 9 additions & 0 deletions CHANGES.rst
@@ -35,6 +35,12 @@ Release history
   (`#126`_)
 - Added support for leaky ReLU Keras layers to ``nengo_dl.Converter``. (`#126`_)
 - Added a new ``remove_reset_incs`` graph simplification step. (`#129`_)
+- Added support for UpSampling layers to ``nengo_dl.Converter``. (`#130`_)
+- Added tolerance parameters to ``nengo_dl.Converter.verify``. (`#130`_)
+- Added ``scale_firing_rates`` option to ``nengo_dl.Converter``. (`#134`_)
+- Added ``Converter.layers`` attribute, which maps Keras layers/tensors to
+  the converted Nengo objects, making it easier to access converted components.
+  (`#134`_)

 **Changed**

@@ -57,6 +63,7 @@ Release history
 - Reduced the amount of state that needs to be stored in the simulation. (`#129`_)
 - Added more information to the error message when loading saved parameters that
   don't match the current model. (`#129`_)
+- More efficient implementation of convolutional biases in the Converter. (`#130`_)

 **Fixed**

@@ -74,6 +81,8 @@ Release history
 .. _#126: https://github.com/nengo/nengo-dl/pull/126
 .. _#128: https://github.com/nengo/nengo-dl/pull/128
 .. _#129: https://github.com/nengo/nengo-dl/pull/129
+.. _#130: https://github.com/nengo/nengo-dl/pull/130
+.. _#134: https://github.com/nengo/nengo-dl/pull/134
 .. _#136: https://github.com/nengo/nengo-dl/pull/136
 .. _Nengo#1591: https://github.com/nengo/nengo/pull/1591

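To illustrate the two ``Converter`` additions from #134 above, a minimal usage sketch (the toy Keras model is a placeholder, and the comments paraphrase the changelog rather than the exact implementation)::

    import tensorflow as tf

    import nengo_dl

    # A toy Keras model to convert (placeholder architecture).
    inp = tf.keras.Input(shape=(4,))
    dense = tf.keras.layers.Dense(10, activation=tf.nn.relu)(inp)
    model = tf.keras.Model(inputs=inp, outputs=dense)

    # scale_firing_rates scales up neuron activity in the converted network
    # (with compensating scaling on the outputs) so that spiking versions of
    # the model fire fast enough to transmit information.
    converter = nengo_dl.Converter(model, scale_firing_rates=100)

    # Converter.layers maps Keras layers/tensors to the converted Nengo
    # objects, making it easier to access converted components (e.g. to
    # probe the converted equivalent of the `dense` tensor).
    converted_dense = converter.layers[dense]
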
17 changes: 17 additions & 0 deletions docs/tips.rst
@@ -98,3 +98,20 @@ When debugging spiking performance issues, here are some things to think about:
    we use both of these techniques. Again, however, as with any hyperparameters these
    will likely need to be adjusted depending on the application if we want to
    maximize performance.
+4. **Firing rates**. Non-spiking neurons output continuous values every timestep, so
+   it doesn't make much difference whether they are outputting a value of 1 or 100.
+   However, spiking neurons communicate via discrete events, and the rate of those
+   events is proportional to the continuous output value of the corresponding
+   non-spiking counterpart. So a spiking neuron emitting spikes at 1 Hz is very
+   different from one emitting spikes at 100 Hz. Imagine we're simulating the model
+   for 100 timesteps with a simulation timestep of 0.001s. The 1 Hz neuron is only
+   expected to spike once every 1000 timesteps, so it may not spike at all in our
+   100 timestep window, meaning that we really have no information about what value
+   that neuron is outputting. Even if a neuron spikes once or twice, that still
+   doesn't provide much information. The 100 Hz neuron, on the other hand, would
+   spike about 10 times in our 100 timestep window, allowing us to estimate its
+   firing rate fairly accurately. In conclusion, it is important to look at the
+   firing rates of neurons in your model, and make sure they are spiking fast
+   enough to provide useful information. If they are not spiking fast enough,
+   consider adjusting Ensemble parameterizations (before or after training) or
+   adding regularization terms during training to encourage higher firing rates.
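
As a concrete illustration of point 4, a minimal sketch of measuring firing rates in a simple spiking model (the network and its parameters are illustrative only)::

    import nengo

    import nengo_dl

    with nengo.Network() as net:
        stim = nengo.Node(0.5)
        ens = nengo.Ensemble(100, 1, neuron_type=nengo.SpikingRectifiedLinear())
        nengo.Connection(stim, ens)
        spike_probe = nengo.Probe(ens.neurons)

    with nengo_dl.Simulator(net) as sim:
        sim.run_steps(100)  # 100 timesteps at the default dt of 0.001s

    # Nengo represents each spike as an impulse with area 1 (amplitude 1/dt),
    # so averaging the probed spike trains over time gives firing rates in Hz.
    rates = sim.data[spike_probe].mean(axis=0)
    print("mean firing rate: %.1f Hz" % rates.mean())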