
Releases: RatInABox-Lab/RatInABox

v1.12.1

19 Feb 19:18

Faster animations and bug fixes

  • New Agent._history_arrays attribute (the same exists for Neurons too). It is identical to Agent.history except for two things:
    • It is a dictionary of arrays, not lists.
    • It should only be accessed via its getter function _history_arrays = self.get_history_arrays().
      For now this API is intended to be mostly internal. We don't recommend users access it directly; instead, stick to using self.history as before. Internally, however, it speeds up animation a lot because the getter checks a cache to see whether the list --> array conversion has been done recently and only repeats it if it hasn't (a minimal sketch is given after this list).
  • Plotting rate maps via the history method now shows regions the Agent never visited as grey rather than black (i.e. zero firing rate). This is more realistic, shows the data more honestly and is closer to how experimentalists do it (i.e. by binning spikes and dividing by position occupancy); see the occupancy sketch after this list.
    • Additionally, users can specify the discretisation (bin size) of the rate map:
PlaceCells.plot_rate_map(method="history", bin_size=0.06) # default is 0.04, i.e. 4 cm
  • Bug fixes relating to #104 and some other minor stuff I hadn't spotted in 1.12.
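A minimal sketch of the new (mostly internal) getter, assuming a short simulation and the standard 'pos' history key (illustrative only, not a recommended user-facing pattern):

from ratinabox import Environment, Agent

Env = Environment()
Ag = Agent(Env)
for _ in range(100):
    Ag.update()

# Conventional access: dictionary of lists, exactly as before
positions_list = Ag.history["pos"]

# New (internal) access: cached dictionary of numpy arrays
history_arrays = Ag.get_history_arrays()  # converts lists --> arrays and caches the result
positions_array = history_arrays["pos"]   # numpy array of shape (n_timesteps, 2)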
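The occupancy logic behind the grey masking amounts to something like the following (an illustrative numpy sketch, not the library's exact code; array shapes are assumptions):

import numpy as np

def history_rate_map(positions, spikes, extent=(0, 1, 0, 1), bin_size=0.04):
    # positions: (T, 2) array of visited positions; spikes: (T,) spike counts per timestep
    bins = [np.arange(extent[0], extent[1] + bin_size, bin_size),
            np.arange(extent[2], extent[3] + bin_size, bin_size)]
    occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=bins)
    spike_count, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=bins, weights=spikes)
    rate_map = np.divide(spike_count, occupancy,
                         out=np.full_like(spike_count, np.nan),  # NaN where the Agent never went
                         where=occupancy > 0)
    return rate_map  # NaN bins can be rendered grey rather than as zero firing rate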

Full Changelog: 1.12.0...v1.12.1

v1.12.0

02 Feb 17:23

Main changes include:

  • Significantly modularised the Agent.update() function; it is now more readable. This should be fully backwards compatible for conventional users of RiaB. For power users, a few internal variables were renamed (e.g. save_velocity --> measured_velocity), but really very few. It also accepts (but doesn't recommend) users forcing the next position of the Agent via (see the sketch below):
Agent.update(forced_next_position = <next_pos>)

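A minimal sketch of forcing an Agent along a prescribed trajectory (the circular path here is purely illustrative):

import numpy as np
from ratinabox import Environment, Agent

Env = Environment()
Ag = Agent(Env)

# Drive the Agent around a small circle instead of letting it move stochastically
duration = 10  # seconds
for i in range(int(duration / Ag.dt)):
    theta = 2 * np.pi * i * Ag.dt / duration
    next_pos = np.array([0.5 + 0.2 * np.cos(theta), 0.5 + 0.2 * np.sin(theta)])
    Ag.update(forced_next_position=next_pos)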

  • Small bug fixes from @colleenjg to do with warnings when params['n'] is incompatible with the specific class.
  • Bug fix from @gsivori to make position sampling acknowledge and account for the existence of holes in the Environment more naturally than was done before.
  • Minor changes to some plotting functions. More params have been moved from args to kwargs to clean up the docstrings, and some (previously fixed) constants can now be set via kwargs. This both simplifies and improves plotting.

What's Changed

  • Position sampling in Environments with holes. by @gsivori in #100
  • Small fixes and new warning for Neurons classes. by @colleenjg in #101
  • Fixes spurious warning raised when initializing VectorCells objects. by @colleenjg in #103
  • For spike rasters or rate maps plotted via the 'history' method, there is a hidden option (via a kwarg) to set a different Agent from which to draw position data; in case, say, you want to plot the spikes of Neurons1 against the positions of Agent2.


Full Changelog: v1.11.4...v1.12.0

v1.11.4

06 Dec 23:42

1D Objects (thanks @colleenjg) and a minor bug fix for place cell distributions relating to #96

v1.11.3

05 Dec 14:43

Updates to the website, including new pages for demos and testimonials, and a rescaled logo

v1.11.2a

05 Dec 14:28

No change to the code. Just a bug fix for the website https://ratinabox-lab.github.io/RatInABox/

v1.11.2

05 Dec 14:17

Recurrent inputs for FeedForwardLayer

Previously, FeedForwardLayers would throw recursion errors when plotting their rate maps if any of their inputs were recurrent (or ultimately part of a recurrent loop), since the get_state() method would call the input layer, which would call the input layer, which would... you get the idea.

This has been elegantly fixed by @colleenjg. Flag an input as recurrent when you add it, and at rate-map evaluation time specify the recursion depth you want to go to.

Before

Env = Environment()
Ag = Agent(Env)
PCs = PlaceCells(Ag)
FFL = FeedForwardLayer(Ag)

FFL.add_input(PCs)
FFL.add_input(FFL) #< a recurrent input!!!

FFL.plot_rate_map()

returns

RecursionError: maximum recursion depth exceeded in comparison

Now

Env = Environment()
Ag = Agent(Env)
PCs = PlaceCells(Ag)
FFL = FeedForwardLayer(Ag)

FFL.add_input(PCs)
FFL.add_input(FFL,recurrent=True) #< a recurrent input, flag it as such!!!

FFL.plot_rate_map(max_recurrence=0) # max number of times to pass through the recurrent input loop before then ignoring it
FFL.plot_rate_map(max_recurrence=1) # 1 pass through the loop
FFL.plot_rate_map(max_recurrence=100) # 100 passes 


v1.11.1

20 Nov 21:56

New plotting functions and a demo focussing on head direction

Head direction selectivity can be plotted for all cells.

Env = Environment()
Ag = Agent(Env)
HDCs = HeadDirectionCells(Ag, params={'n':3,'color':'C5'}) # or any other cell class
HDCs.plot_angular_rate_map()


Spatial rate maps can be plotted averaged over all head directions (useful when rate maps are both position- and head-direction-selective).

This is supported by a new internal function get_head_direction_averaged_state() which calculates the state at all head directions from 0 to 2π and then averages over these to return the spatial receptive field.

Env = Environment()
Ag = Agent(Env)
GCs = GridCells(Ag, params={'n':3}) # or any other cell class
GCs.plot_rate_map(method="groundtruth_headdirectionaveraged")

(In this case it's actually redundant because grid cells have no head direction selectivity, but the demo shows a more involved use case.)

New Conjunctive grid cells demo

In this demo we showcase FeedForwardLayer in probably its simplest use case: combining head direction cells and grid cells non-linearly to make head-direction-selective grid cells (conjunctive grid cells). A minimal sketch of the wiring is given below.
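A minimal sketch of the wiring, with the parameter choices here purely illustrative; the linked demo is the authoritative version and additionally configures the layer's nonlinearity and input weights, which are left at their defaults in this sketch:

from ratinabox import Environment, Agent
from ratinabox.Neurons import GridCells, HeadDirectionCells, FeedForwardLayer

Env = Environment()
Ag = Agent(Env)

GCs = GridCells(Ag, params={'n':10})
HDCs = HeadDirectionCells(Ag, params={'n':4})

# A downstream layer receiving both grid and head direction input.
# The nonlinearity that makes the combination truly conjunctive is configured in the demo.
ConjunctiveGCs = FeedForwardLayer(Ag, params={'n':10})
ConjunctiveGCs.add_input(GCs)
ConjunctiveGCs.add_input(HDCs)

ConjunctiveGCs.plot_rate_map(method="groundtruth_headdirectionaveraged")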

v1.11.0

12 Nov 16:17

AgentVectorCells and FieldOfViewAVCs

AgentVectorCells are like ObjectVectorCells but respond to other Agents in the Environment.
FieldOfViewAVCs are a subclass of AVCs arranged in a "field of view" as shown in the attached video. A demo can be found here (scroll to the bottom). This relates to #89 and closes #91.

(video: trajectory_1606.mp4)
from ratinabox.Neurons import AgentVectorCells, FieldOfViewAVCs

Env = Environment()
Ag1 = Agent(Env)
Ag2 = Agent(Env)
AVCs_1to2 = FieldOfViewAVCs(Ag1, Other_Agent=Ag2)
AVCs_2to1 = FieldOfViewAVCs(Ag2, Other_Agent=Ag1)

#remember to update both agents and neurons in the update loop: 
for _ in range(int(20/Ag1.dt)):
    Ag1.update()
    Ag2.update()
    AVCs_1to2.update()
    AVCs_2to1.update()

Other minor changes

  • Changes to the VectorCells and GridCells API for how parameters are initialised/sampled (should be backwards compatible; a warning might be thrown if you use old parameter names).
  • Bug fixes
  • New DNN demo

RatInABox website and 1.10.1 bug fix

16 Oct 22:01
Pre-release

We made a website for RatInABox which will eventually host all the documentation, demos, etc. It's hosted on GitHub Pages and its source lives inside docs: https://ratinabox-lab.github.io/RatInABox

Then I uploaded v1.10.1, but for some reason the source distribution files included all the demos etc. and blew up in size to 90 MB (I have no idea what changed?!). I fixed that by adding a MANIFEST.in, but in the long run I'd like to fix the bug another way.

v1.10.0

12 Oct 00:15

Random spatially tuned Neurons

In this version, besides minor bug fixes, we are releasing a new Neurons subclass called RandomSpatialNeurons for when you require spatially tuned neurons which aren't necessarily place cells, grid cells, etc.

Users specify a lengthscale and these neurons sample a smooth random function from a Gaussian process with a squared-exponential covariance function (roughly analogous to a weighted sum of Gaussians). This is a much more "assumption-free" way to model spatially tuned inputs and should be useful to a lot of users.

Note that walls still act correctly (the covariance between points on opposite sides of a wall is appropriately reduced), and this works in 1D too.
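Purely to illustrate the idea (this is not the library's implementation and it ignores walls), sampling a smooth 1D tuning curve from a Gaussian process with a squared-exponential covariance looks like this:

import numpy as np

lengthscale = 0.1
x = np.linspace(0, 1, 200)  # positions at which to evaluate the function
# Squared-exponential covariance: nearby positions are strongly correlated
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * lengthscale**2))
rate = np.random.multivariate_normal(mean=np.zeros_like(x), cov=K + 1e-8 * np.eye(len(x)))
# "rate" is a smooth random spatial tuning curve; larger lengthscales give smoother curves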

Import it like any other Neurons class:

from ratinabox.Neurons import RandomSpatialNeurons

And use as follows:

Env = Environment()
Env.add_wall([[0.3,0.35],[0.3,0.85]])
Ag = Agent(Env)
RSNs = RandomSpatialNeurons(Ag,
                            params = {'n':3,'lengthscale':0.1,},)

RSNs.plot_rate_map()


RSNs = RandomSpatialNeurons(Ag,
                            params = {'n':3,'lengthscale':0.2,},)


Env = Environment(params={'dimensionality':'1D'})
Ag = Agent(Env)
RSNs = RandomSpatialNeurons(Ag,
                            params = {'n':10,'lengthscale':0.02,},)
