
Access to postsynaptic variables with heterogeneous delay #629

Merged 44 commits on Jul 8, 2024

Changes from all commits
0a1a7a0
extended test_forward_den_delay to test dendritic delays *and* batching
neworderofjamie Jul 3, 2024
39e2241
fixed typos in indexing code for reading from and writing to dendriti…
neworderofjamie Jul 3, 2024
f08ed49
simplified quantifiers in type system - as this system is purely goin…
neworderofjamie Jun 12, 2024
5934360
gave function type set of flags with various attributes (currently va…
neworderofjamie Jun 14, 2024
0038502
fixed typo in CUDA backend using old Qualifiers
neworderofjamie Jun 14, 2024
e8f34f6
added support for type checking and parsing of function [ index ] exp…
neworderofjamie Jun 14, 2024
1491cdd
initial unit test of array subscript overload indexing
neworderofjamie Jun 14, 2024
5fc6903
hopefully fixed issues with [] overload
neworderofjamie Jun 18, 2024
1d9a8aa
added a couple more unit tests
neworderofjamie Jun 18, 2024
c6e224f
added error message if maximum dendritic delay isn't set but weight u…
neworderofjamie Jun 18, 2024
536892a
added test of max dendritic delay setting error
neworderofjamie Jun 18, 2024
f053c7c
updated other unit tests which use weight update groups with dendriti…
neworderofjamie Jun 18, 2024
bcb2cc2
added helper function to detect whether identifiers are referenced wi…
neworderofjamie Jun 19, 2024
f8c8e0a
extend target neuron group delays to encompass max dendritic delay ti…
neworderofjamie Jun 21, 2024
e2a49d0
tweaked error message
neworderofjamie Jun 22, 2024
48fdceb
notes and todos
neworderofjamie Jun 22, 2024
b1d8955
replace some long types with ``auto``
neworderofjamie Jun 22, 2024
ac8fb36
added parameterisable types for addToXXX and addToXXXDelay functions
neworderofjamie Jun 5, 2024
4905f45
added parameterisable type for array subscript override functions
neworderofjamie Jun 24, 2024
b749d50
initial sketch of a addVarRefsHet method in environment
neworderofjamie Jun 24, 2024
c564c3e
First attempt at code generation of heterogeneous variable indices
neworderofjamie Jun 27, 2024
7cf9007
fixed deprecated syntax in feature tests
neworderofjamie Jun 29, 2024
a268ca4
feature test for continuously back-propagating heterogeneously delayed…
neworderofjamie Jun 29, 2024
79f2d45
removed one layer of unnecessary helper methods
neworderofjamie Jul 2, 2024
4ff0489
started adding support for heterogeneously delayed access to postsyna…
neworderofjamie Jul 2, 2024
388e69d
SynapseGroup::isDendriticDelayRequired is a bit ambiguous, renamed to…
neworderofjamie Jul 2, 2024
e0220de
removed duplicate logic from pre and postsynaptic wum variable alloca…
neworderofjamie Jul 2, 2024
69dd41b
saner names and included heterogeneous delay check in SynapseWUPostVa…
neworderofjamie Jul 2, 2024
79c2904
tidying
neworderofjamie Jul 2, 2024
b8494d6
no need to make this generic
neworderofjamie Jul 2, 2024
32d523d
fixed typo
neworderofjamie Jul 2, 2024
e640067
tidied delay handling in EnvironmentLocalVarCache - removed more dupl…
neworderofjamie Jul 2, 2024
4b9dbbb
Logic for heterogeneously delayed access to postsynaptic WUM variables
neworderofjamie Jul 2, 2024
f38a4de
test for accessing weight update model postsynaptic variables via het…
neworderofjamie Jul 2, 2024
d371be0
expose ``SynapseGroup::isWUPostVarHeterogeneouslyDelayed`` to python …
neworderofjamie Jul 2, 2024
0288d5d
extended test_wu_var_cont test to also test accessing postsynaptic we…
neworderofjamie Jul 2, 2024
09ac6d6
added fusion unit test
neworderofjamie Jul 2, 2024
fb4b3ac
+
neworderofjamie Jul 2, 2024
a10ac9b
track whether spikes and spike events need queuing in neuron group (l…
neworderofjamie Jul 7, 2024
11b1bb2
updated runtime to allocate correct sized event-related data structures
neworderofjamie Jul 7, 2024
809e8f1
updated code generation
neworderofjamie Jul 7, 2024
51717db
fixed typo
neworderofjamie Jul 7, 2024
31436b6
re-order
neworderofjamie Jul 8, 2024
0f4fafb
additional corner case suggested by Thomas
neworderofjamie Jul 8, 2024
30 changes: 12 additions & 18 deletions include/genn/genn/code_generator/environment.h
@@ -807,41 +807,35 @@ class VarCachePolicy
     using GroupInternal = typename G::GroupInternal;
     using AdapterDef = typename std::invoke_result_t<decltype(&A::getDefs), A>::value_type;
     using AdapterAccess = typename AdapterDef::AccessType;
-    using GetIndexFn = std::function<std::string(const std::string&, AdapterAccess)>;
-    using ShouldAlwaysCopyFn = std::function<bool(const std::string&, AdapterAccess)>;
+    using GetIndexFn = std::function<std::string(const std::string&, AdapterAccess, bool)>;
 
     VarCachePolicy(GetIndexFn getReadIndex, GetIndexFn getWriteIndex,
-                   ShouldAlwaysCopyFn shouldAlwaysCopy = ShouldAlwaysCopyFn())
+                   bool alwaysCopyIfDelayed = true)
     :   m_GetReadIndex(getReadIndex), m_GetWriteIndex(getWriteIndex),
-        m_ShouldAlwaysCopy(shouldAlwaysCopy)
+        m_AlwaysCopyIfDelayed(alwaysCopyIfDelayed)
     {}
 
-    VarCachePolicy(GetIndexFn getIndex, ShouldAlwaysCopyFn shouldAlwaysCopy = ShouldAlwaysCopyFn())
+    VarCachePolicy(GetIndexFn getIndex, bool alwaysCopyIfDelayed = true)
     :   m_GetReadIndex(getIndex), m_GetWriteIndex(getIndex),
-        m_ShouldAlwaysCopy(shouldAlwaysCopy)
+        m_AlwaysCopyIfDelayed(alwaysCopyIfDelayed)
     {}
 
     //------------------------------------------------------------------------
     // Public API
     //------------------------------------------------------------------------
-    bool shouldAlwaysCopy(G&, const AdapterDef &var) const
+    bool shouldAlwaysCopy(G &group, const AdapterDef &var) const
     {
-        if(m_ShouldAlwaysCopy) {
-            return m_ShouldAlwaysCopy(var.name, var.access);
-        }
-        else {
-            return false;
-        }
+        return A(group.getArchetype()).isVarDelayed(var.name) && m_AlwaysCopyIfDelayed;
     }
 
-    std::string getReadIndex(G&, const AdapterDef &var) const
+    std::string getReadIndex(G &group, const AdapterDef &var) const
     {
-        return m_GetReadIndex(var.name, var.access);
+        return m_GetReadIndex(var.name, var.access, A(group.getArchetype()).isVarDelayed(var.name));
     }
 
-    std::string getWriteIndex(G&, const AdapterDef &var) const
+    std::string getWriteIndex(G &group, const AdapterDef &var) const
     {
-        return m_GetWriteIndex(var.name, var.access);
+        return m_GetWriteIndex(var.name, var.access, A(group.getArchetype()).isVarDelayed(var.name));
     }
 
     const Runtime::ArrayBase *getArray(const Runtime::Runtime &runtime, const GroupInternal &g, const AdapterDef &var) const
@@ -855,7 +849,7 @@ class VarCachePolicy
     //------------------------------------------------------------------------
     GetIndexFn m_GetReadIndex;
     GetIndexFn m_GetWriteIndex;
-    ShouldAlwaysCopyFn m_ShouldAlwaysCopy;
+    bool m_AlwaysCopyIfDelayed;
 };
 
 //------------------------------------------------------------------------
27 changes: 5 additions & 22 deletions include/genn/genn/code_generator/synapseUpdateGroupMerged.h
@@ -41,28 +41,8 @@ class GENN_EXPORT SynapseGroupMergedBase : public GroupMerged<SynapseGroupIntern
     //! Should the Toeplitz connectivity initialization parameter be implemented heterogeneously?
     bool isToeplitzConnectivityInitDerivedParamHeterogeneous(const std::string &paramName) const;
 
-    std::string getPreSlot(unsigned int batchSize) const;
-    std::string getPostSlot(unsigned int batchSize) const;
-
-    std::string getPreVarIndex(unsigned int batchSize, VarAccessDim varDims, const std::string &index) const
-    {
-        return getPreVarIndex(getArchetype().getSrcNeuronGroup()->isDelayRequired(), batchSize, varDims, index);
-    }
-
-    std::string getPostVarIndex(unsigned int batchSize, VarAccessDim varDims, const std::string &index) const
-    {
-        return getPostVarIndex(getArchetype().getTrgNeuronGroup()->isDelayRequired(), batchSize, varDims, index);
-    }
-
-    std::string getPreWUVarIndex(unsigned int batchSize, VarAccessDim varDims, const std::string &index) const
-    {
-        return getPreVarIndex(getArchetype().getAxonalDelaySteps() != 0, batchSize, varDims, index);
-    }
-
-    std::string getPostWUVarIndex(unsigned int batchSize, VarAccessDim varDims, const std::string &index) const
-    {
-        return getPostVarIndex(getArchetype().getBackPropDelaySteps() != 0, batchSize, varDims, index);
-    }
+    std::string getPreSlot(bool delay, unsigned int batchSize) const;
+    std::string getPostSlot(bool delay, unsigned int batchSize) const;
 
     std::string getPostDenDelayIndex(unsigned int batchSize, const std::string &index, const std::string &offset) const;
 
@@ -89,6 +69,9 @@ class GENN_EXPORT SynapseGroupMergedBase : public GroupMerged<SynapseGroupIntern
         return ((batchSize == 1) ? "" : "$(_pre_batch_offset) + ") + index;
     }
 
+    std::string getPostVarHetDelayIndex(unsigned int batchSize, VarAccessDim varDims,
+                                        const std::string &index) const;
+
     std::string getSynVarIndex(unsigned int batchSize, VarAccessDim varDims, const std::string &index) const;
     std::string getKernelVarIndex(unsigned int batchSize, VarAccessDim varDims, const std::string &index) const;

16 changes: 8 additions & 8 deletions include/genn/genn/customConnectivityUpdateInternal.h
@@ -85,13 +85,13 @@
     //----------------------------------------------------------------------------
     VarLocation getLoc(const std::string &varName) const{ return m_CU.getPreVarLocation(varName); }
 
-    std::vector<Models::Base::Var> getDefs() const{ return m_CU.getModel()->getPreVars(); }
+    auto getDefs() const{ return m_CU.getModel()->getPreVars(); }
 
-    const std::map<std::string, InitVarSnippet::Init> &getInitialisers() const{ return m_CU.getPreVarInitialisers(); }
+    const auto &getInitialisers() const{ return m_CU.getPreVarInitialisers(); }
 
-    bool isVarDelayed(const std::string &) const { return false; }
+    bool isVarDelayed(const std::string&) const { return false; }
 
-    const CustomConnectivityUpdate &getTarget() const{ return m_CU; }
+    const auto &getTarget() const{ return m_CU; }
 
     VarAccessDim getVarDims(const Models::Base::Var &var) const{ return getVarAccessDim(var.access); }
 
@@ -116,13 +116,13 @@
     //----------------------------------------------------------------------------
     VarLocation getLoc(const std::string &varName) const{ return m_CU.getPostVarLocation(varName); }
 
-    std::vector<Models::Base::Var> getDefs() const{ return m_CU.getModel()->getPostVars(); }
+    auto getDefs() const{ return m_CU.getModel()->getPostVars(); }
 
-    const std::map<std::string, InitVarSnippet::Init> &getInitialisers() const{ return m_CU.getPostVarInitialisers(); }
+    const auto &getInitialisers() const{ return m_CU.getPostVarInitialisers(); }
 
-    bool isVarDelayed(const std::string &) const { return false; }
+    bool isVarDelayed(const std::string&) const { return false; }
 
-    const CustomConnectivityUpdate &getTarget() const{ return m_CU; }
+    const auto &getTarget() const{ return m_CU; }
 
     VarAccessDim getVarDims(const Models::Base::Var &var) const{ return getVarAccessDim(var.access); }

3 changes: 3 additions & 0 deletions include/genn/genn/gennUtils.h
@@ -46,6 +46,9 @@ GENN_EXPORT bool areTokensEmpty(const std::vector<Transpiler::Token> &tokens);
 //! Checks whether the sequence of tokens references a given identifier
 GENN_EXPORT bool isIdentifierReferenced(const std::string &identifierName, const std::vector<Transpiler::Token> &tokens);
 
+//! Checks whether the sequence of tokens references a given identifier with a delay
+GENN_EXPORT bool isIdentifierDelayed(const std::string &identifierName, const std::vector<Transpiler::Token> &tokens);
+
 //! Checks whether the sequence of tokens includes an RNG function identifier
 GENN_EXPORT bool isRNGRequired(const std::vector<Transpiler::Token> &tokens);

16 changes: 16 additions & 0 deletions include/genn/genn/neuronGroup.h
@@ -154,6 +154,9 @@ class GENN_EXPORT NeuronGroup
     // Set a variable as requiring queueing
     void setVarQueueRequired(const std::string &varName){ m_VarQueueRequired.insert(varName); }
 
+    void setSpikeQueueRequired(){ m_SpikeQueueRequired = true; }
+    void setSpikeEventQueueRequired(){ m_SpikeEventQueueRequired = true; }
+
     void addInSyn(SynapseGroupInternal *synapseGroup){ m_InSyn.push_back(synapseGroup); }
     void addOutSyn(SynapseGroupInternal *synapseGroup){ m_OutSyn.push_back(synapseGroup); }
 
@@ -224,6 +227,13 @@ class GENN_EXPORT NeuronGroup
 
     bool isVarQueueRequired(const std::string &var) const;
 
+    bool isSpikeQueueRequired() const{ return m_SpikeQueueRequired; }
+
+    bool isSpikeEventQueueRequired() const{ return m_SpikeEventQueueRequired; }
+
+    bool isSpikeDelayRequired() const{ return isDelayRequired() && isSpikeQueueRequired(); }
+    bool isSpikeEventDelayRequired() const{ return isDelayRequired() && isSpikeEventQueueRequired(); }
+
     //! Updates hash with neuron group
     /*! \note this can only be called after model is finalized */
     boost::uuids::detail::sha1::digest_type getHashDigest() const;
@@ -272,6 +282,12 @@ class GENN_EXPORT NeuronGroup
 
     //! Set of names of variables requiring queueing
     std::set<std::string> m_VarQueueRequired;
+
+    //! Is queueing required for spikes?
+    bool m_SpikeQueueRequired;
+
+    //! Is queueing required for spike-like events?
+    bool m_SpikeEventQueueRequired;
 
     //! Should zero-copy memory (if available) be used
     //! for spike and spike-like event recording?
6 changes: 6 additions & 0 deletions include/genn/genn/neuronGroupInternal.h
@@ -23,6 +23,8 @@ class NeuronGroupInternal : public NeuronGroup
 
     using NeuronGroup::checkNumDelaySlots;
     using NeuronGroup::setVarQueueRequired;
+    using NeuronGroup::setSpikeQueueRequired;
+    using NeuronGroup::setSpikeEventQueueRequired;
     using NeuronGroup::addInSyn;
     using NeuronGroup::addOutSyn;
     using NeuronGroup::finalise;
@@ -49,6 +51,10 @@ class NeuronGroupInternal : public NeuronGroup
     using NeuronGroup::isRecordingEnabled;
     using NeuronGroup::isVarInitRequired;
     using NeuronGroup::isVarQueueRequired;
+    using NeuronGroup::isSpikeQueueRequired;
+    using NeuronGroup::isSpikeEventQueueRequired;
+    using NeuronGroup::isSpikeDelayRequired;
+    using NeuronGroup::isSpikeEventDelayRequired;
     using NeuronGroup::getHashDigest;
     using NeuronGroup::getInitHashDigest;
     using NeuronGroup::getSpikeQueueUpdateHashDigest;
19 changes: 15 additions & 4 deletions include/genn/genn/synapseGroup.h
@@ -1,6 +1,7 @@
 #pragma once
 
 // Standard includes
+#include <optional>
 #include <map>
 #include <string>
 #include <vector>
@@ -150,7 +151,7 @@ class GENN_EXPORT SynapseGroup
     unsigned int getAxonalDelaySteps() const{ return m_AxonalDelaySteps; }
     unsigned int getMaxConnections() const{ return m_MaxConnections; }
     unsigned int getMaxSourceConnections() const{ return m_MaxSourceConnections; }
-    unsigned int getMaxDendriticDelayTimesteps() const{ return m_MaxDendriticDelayTimesteps; }
+    unsigned int getMaxDendriticDelayTimesteps() const{ return m_MaxDendriticDelayTimesteps.value_or(1); }
     SynapseMatrixType getMatrixType() const{ return m_MatrixType; }
     const auto &getKernelSize() const { return m_KernelSize; }
     size_t getKernelSizeFlattened() const;
@@ -329,8 +330,14 @@ class GENN_EXPORT SynapseGroup
     //! model been fused with those from other synapse groups?
     bool isWUPostModelFused() const { return m_FusedWUPostTarget != nullptr; }
 
-    //! Does this synapse group require dendritic delay?
-    bool isDendriticDelayRequired() const;
+    //! Is this synapse group's output dendritically delayed?
+    bool isDendriticOutputDelayRequired() const;
+
+    //! Is the named postsynaptic weight update model variable heterogeneously delayed?
+    bool isWUPostVarHeterogeneouslyDelayed(const std::string &var) const;
+
+    //! Are any postsynaptic weight update model variables heterogeneously delayed?
+    bool areAnyWUPostVarHeterogeneouslyDelayed() const;
 
     //! Does this synapse group provide presynaptic output?
     bool isPresynapticOutputRequired() const;
@@ -465,7 +472,7 @@ class GENN_EXPORT SynapseGroup
     unsigned int m_MaxSourceConnections;
 
     //! Maximum dendritic delay timesteps supported for synapses in this population
-    unsigned int m_MaxDendriticDelayTimesteps;
+    std::optional<unsigned int> m_MaxDendriticDelayTimesteps;
 
     //! Kernel size
     std::vector<unsigned int> m_KernelSize;
@@ -502,6 +509,10 @@ class GENN_EXPORT SynapseGroup
     //! Initialiser used for creating toeplitz connectivity
     InitToeplitzConnectivitySnippet::Init m_ToeplitzConnectivityInitialiser;
 
+    //! Set of names of postsynaptic weight update
+    //! model variables which are heterogeneously delayed
+    std::set<std::string> m_HeterogeneouslyDelayedWUPostVars;
+
     //! Location of individual per-synapse state variables.
     /*! This is ignored for simulations on hardware with a single memory space */
     LocationContainer m_WUVarLocation;
6 changes: 4 additions & 2 deletions include/genn/genn/synapseGroupInternal.h
@@ -58,7 +58,9 @@ class SynapseGroupInternal : public SynapseGroup
     using SynapseGroup::isPreSpikeFused;
     using SynapseGroup::isWUPreModelFused;
     using SynapseGroup::isWUPostModelFused;
-    using SynapseGroup::isDendriticDelayRequired;
+    using SynapseGroup::isDendriticOutputDelayRequired;
+    using SynapseGroup::isWUPostVarHeterogeneouslyDelayed;
+    using SynapseGroup::areAnyWUPostVarHeterogeneouslyDelayed;
     using SynapseGroup::isPresynapticOutputRequired;
     using SynapseGroup::isPostsynapticOutputRequired;
     using SynapseGroup::isProceduralConnectivityRNGRequired;
@@ -247,7 +249,7 @@ class SynapseWUPostVarAdapter
 
     const SynapseGroup &getTarget() const{ return m_SG.getFusedWUPostTarget(); }
 
-    bool isVarDelayed(const std::string&) const{ return (m_SG.getBackPropDelaySteps() != 0); }
+    bool isVarDelayed(const std::string &name) const{ return (m_SG.getBackPropDelaySteps() != 0) || m_SG.isWUPostVarHeterogeneouslyDelayed(name); }
 
     VarAccessDim getVarDims(const Models::Base::Var &var) const{ return getVarAccessDim(var.access); }
