Transform kernels and convolutions using a transformer before code generation #1050

Status: Draft. Wants to merge 19 commits into base: master.

Commits:
- 4bd5b42 (Aug 27, 2024): transform kernels and convolutions using a transformer before code ge…
- f4f30f9 (Aug 30, 2024): transform kernels and convolutions using a transformer before code ge…
- 11aed59 (Aug 31, 2024): transform kernels and convolutions using a transformer before code ge…
- 3afcb68 (Aug 31, 2024): transform kernels and convolutions using a transformer before code ge…
- 7474d38 (Aug 31, 2024): transform kernels and convolutions using a transformer before code ge…
- a25a199 (Sep 1, 2024): transform kernels and convolutions using a transformer before code ge…
- 002395f (Sep 1, 2024): split up NESTMLPrinter
- 4c33037 (Sep 2, 2024): transform kernels and convolutions using a transformer before code ge…
- 9af2a8c (Sep 3, 2024): transform kernels and convolutions using a transformer before code ge…
- fb19510 (Sep 20, 2024): transform kernels and convolutions using a transformer before code ge…
- d67bf55 (Sep 20, 2024): Merge remote-tracking branch 'upstream/master' into kernel-transformer
- 5c40cd9 (Sep 22, 2024): transform kernels and convolutions using a transformer before code ge…
- 0ff3456 (Sep 29, 2024): run context condition checks only once, after model parsing
- 8d90026 (Oct 2, 2024): transform kernels and convolutions using a transformer before code ge…
- e04cad2 (Oct 2, 2024): Merge remote-tracking branch 'clinssen/symboltable_checks' into kerne…
- 2563e63 (Sep 29, 2024): run context condition checks only once, after model parsing
- bbc5509 (Oct 4, 2024): Merge remote-tracking branch 'upstream/master' into symboltable_checks
- f41b023 (Oct 4, 2024): Merge remote-tracking branch 'clinssen/symboltable_checks' into kerne…
- 6053cf5 (Oct 4, 2024): transform kernels and convolutions using a transformer before code ge…
2 changes: 2 additions & 0 deletions doc/nestml_language/neurons_in_nestml.rst
@@ -141,6 +141,8 @@ Note that in this example, the intended physical unit (pA) was assigned by multi
(Re)setting synaptic integration state
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

XXXXXXXXXXXXXX: this needs to be rewritten

When convolutions are used, additional state variables are required for each pair *(kernel, spike input port)* that appears as the pair of parameters in a convolution. These variables track the dynamical state of that kernel for that input port. The number of variables created corresponds to the dimensionality of the kernel. For example, in the code block above, the one-dimensional kernel ``G`` is used in a convolution with the spiking input port ``spikes``. During code generation, a new state variable called ``G__conv__spikes`` is created for this combination by joining the name of the kernel with the name of the spike buffer using (by default) the string “__conv__”. If the same kernel is later used in a convolution with another spiking input port, say ``spikes_GABA``, the resulting generated variable is called ``G__conv__spikes_GABA``, allowing independent synaptic integration between input ports while still allowing the same kernel to be used more than once.
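The naming scheme described here can be sketched as a small helper (hypothetical code for illustration only; the actual name joining happens inside the code generator):

```python
def conv_state_variable_name(kernel_name: str, port_name: str,
                             joiner: str = "__conv__") -> str:
    """Join a kernel name and a spike input port name into the name of
    the generated state variable, e.g. ("G", "spikes") -> "G__conv__spikes"."""
    return kernel_name + joiner + port_name
```

For example, ``conv_state_variable_name("G", "spikes_GABA")`` yields ``G__conv__spikes_GABA``.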

The process of generating extra state variables for keeping track of convolution state is normally hidden from the user. For some models, however, it might be required to set or reset the state of synaptic integration, which is stored in these internally generated variables. For example, we might want to set the synaptic current (and its rate of change) to 0 when firing a dendritic action potential. Although we would like to set the generated variable ``G__conv__spikes`` to 0 in the running example, a variable by this name is only generated during code generation, and does not exist in the namespace of the NESTML model to begin with. To still allow referring to this state in the context of the model, it is recommended to use an inline expression, with only a convolution on the right-hand side.
8 changes: 4 additions & 4 deletions doc/running/running_nest.rst
@@ -110,13 +110,13 @@ After generating and building the model code, a ``receptor_type`` entry is avail

Note that in multisynapse neurons, receptor ports are numbered starting from 1.

- We furthermore wish to record the synaptic currents ``I_kernel1``, ``I_kernel2`` and ``I_kernel3``. During code generation, one buffer is created for each combination of (kernel, spike input port) that appears in convolution statements. These buffers are named by joining together the name of the kernel with the name of the spike buffer using (by default) the string "__X__". The variables to be recorded are thus named as follows:
+ We furthermore wish to record the synaptic currents ``I_kernel1``, ``I_kernel2`` and ``I_kernel3``. During code generation, one buffer is created for each combination of (kernel, spike input port) that appears in convolution statements. These buffers are named by joining together the name of the kernel with the name of the spike buffer using (by default) the string "__conv__". The variables to be recorded are thus named as follows: XXX: add reference to the part of the docs that describe convolutions

.. code-block:: python

- mm = nest.Create('multimeter', params={'record_from': ['I_kernel1__X__spikes_1',
-                                                        'I_kernel2__X__spikes_2',
-                                                        'I_kernel3__X__spikes_3'],
+ mm = nest.Create('multimeter', params={'record_from': ['I_kernel1__conv__spikes_1',
+                                                        'I_kernel2__conv__spikes_2',
+                                                        'I_kernel3__conv__spikes_3'],
'interval': .1})
nest.Connect(mm, neuron)


Large diffs are not rendered by default.

@@ -1347,7 +1347,7 @@
" NESTCodeGeneratorUtils.generate_code_for(nestml_neuron_model,\n",
" nestml_synapse_model,\n",
" codegen_opts=codegen_opts,\n",
" logging_level=\"INFO\") # try \"INFO\" or \"DEBUG\" for more debug information"
" logging_level=\"WARNING\") # try \"INFO\" or \"DEBUG\" for more debug information"
]
},
{
38 changes: 8 additions & 30 deletions pynestml/cocos/co_co_all_variables_defined.py
@@ -41,11 +41,10 @@ class CoCoAllVariablesDefined(CoCo):
"""

@classmethod
- def check_co_co(cls, node: ASTModel, after_ast_rewrite: bool = False):
+ def check_co_co(cls, node: ASTModel):
"""
Checks if this coco applies for the handed over neuron. Models which contain undefined variables are not correct.
:param node: a single neuron instance.
- :param after_ast_rewrite: indicates whether this coco is checked after the code generator has done rewriting of the abstract syntax tree. If True, checks are not as rigorous. Use False where possible.
"""
# for each variable in all expressions, check if the variable has been defined previously
expression_collector_visitor = ASTExpressionCollectorVisitor()
@@ -62,32 +61,6 @@ def check_co_co(cls, node: ASTModel, after_ast_rewrite: bool = False):

# test if the symbol has been defined at least
if symbol is None:
- if after_ast_rewrite: # after ODE-toolbox transformations, convolutions are replaced by state variables, so cannot perform this check properly
- symbol2 = node.get_scope().resolve_to_symbol(var.get_name(), SymbolKind.VARIABLE)
- if symbol2 is not None:
- # an inline expression defining this variable name (ignoring differential order) exists
- if "__X__" in str(symbol2): # if this variable was the result of a convolution...
- continue
- else:
- # for kernels, also allow derivatives of that kernel to appear
-
- inline_expr_names = []
- inline_exprs = []
- for equations_block in node.get_equations_blocks():
- inline_expr_names.extend([inline_expr.variable_name for inline_expr in equations_block.get_inline_expressions()])
- inline_exprs.extend(equations_block.get_inline_expressions())
-
- if var.get_name() in inline_expr_names:
- inline_expr_idx = inline_expr_names.index(var.get_name())
- inline_expr = inline_exprs[inline_expr_idx]
- from pynestml.utils.ast_utils import ASTUtils
- if ASTUtils.inline_aliases_convolution(inline_expr):
- symbol2 = node.get_scope().resolve_to_symbol(var.get_name(), SymbolKind.VARIABLE)
- if symbol2 is not None:
- # actually, no problem detected, skip error
- # XXX: TODO: check that differential order is less than or equal to that of the kernel
- continue
-
# check if this symbol is actually a type, e.g. "mV" in the expression "(1 + 2) * mV"
symbol2 = var.get_scope().resolve_to_symbol(var.get_complete_name(), SymbolKind.TYPE)
if symbol2 is not None:
@@ -106,9 +79,14 @@
# in this case its ok if it is recursive or defined later on
continue

+ if symbol.is_predefined:
+ continue
+
+ if symbol.block_type == BlockType.LOCAL and symbol.get_referenced_object().get_source_position().before(var.get_source_position()):
+ continue

# check if it has been defined before usage, except for predefined symbols, input ports and variables added by the AST transformation functions
- if (not symbol.is_predefined) \
- and symbol.block_type != BlockType.INPUT \
+ if symbol.block_type != BlockType.INPUT \
and not symbol.get_referenced_object().get_source_position().is_added_source_position():
# except for parameters, those can be defined after
if ((not symbol.get_referenced_object().get_source_position().before(var.get_source_position()))
1 change: 1 addition & 0 deletions pynestml/cocos/co_co_function_unique.py
@@ -65,4 +65,5 @@ def check_co_co(cls, model: ASTModel):
log_level=LoggingLevel.ERROR,
message=message, code=code)
checked.append(funcA)

checked_funcs_names.append(func.get_name())
6 changes: 3 additions & 3 deletions pynestml/cocos/co_co_illegal_expression.py
@@ -18,13 +18,13 @@
#
# You should have received a copy of the GNU General Public License
# along with NEST. If not, see <http://www.gnu.org/licenses/>.
- from pynestml.meta_model.ast_inline_expression import ASTInlineExpression

- from pynestml.utils.ast_source_location import ASTSourceLocation
- from pynestml.meta_model.ast_declaration import ASTDeclaration
from pynestml.cocos.co_co import CoCo
+ from pynestml.meta_model.ast_declaration import ASTDeclaration
+ from pynestml.meta_model.ast_inline_expression import ASTInlineExpression
from pynestml.symbols.error_type_symbol import ErrorTypeSymbol
from pynestml.symbols.predefined_types import PredefinedTypes
+ from pynestml.utils.ast_source_location import ASTSourceLocation
from pynestml.utils.logger import LoggingLevel, Logger
from pynestml.utils.logging_helper import LoggingHelper
from pynestml.utils.messages import Messages
49 changes: 36 additions & 13 deletions pynestml/cocos/co_co_no_kernels_except_in_convolve.py
@@ -22,11 +22,14 @@
from typing import List

from pynestml.cocos.co_co import CoCo
+ from pynestml.meta_model.ast_declaration import ASTDeclaration
from pynestml.meta_model.ast_external_variable import ASTExternalVariable
from pynestml.meta_model.ast_function_call import ASTFunctionCall
from pynestml.meta_model.ast_kernel import ASTKernel
+ from pynestml.meta_model.ast_model import ASTModel
from pynestml.meta_model.ast_node import ASTNode
from pynestml.meta_model.ast_variable import ASTVariable
+ from pynestml.symbols.predefined_functions import PredefinedFunctions
from pynestml.symbols.symbol import SymbolKind
from pynestml.utils.logger import Logger, LoggingLevel
from pynestml.utils.messages import Messages
@@ -89,24 +92,44 @@ def visit_variable(self, node: ASTNode):
if not (isinstance(node, ASTExternalVariable) and node.get_alternate_name()):
code, message = Messages.get_no_variable_found(kernelName)
Logger.log_message(node=self.__neuron_node, code=code, message=message, log_level=LoggingLevel.ERROR)

continue

if not symbol.is_kernel():
continue

if node.get_complete_name() == kernelName:
parent = node.get_parent()
if parent is not None:
parent = node
correct = False
while parent is not None and not isinstance(parent, ASTModel):
parent = parent.get_parent()
assert parent is not None

if isinstance(parent, ASTDeclaration):
for lhs_var in parent.get_variables():
if kernelName == lhs_var.get_complete_name():
# kernel name appears on lhs of declaration, assume it is initial state
correct = True
parent = None # break out of outer loop
break

if isinstance(parent, ASTKernel):
continue
grandparent = parent.get_parent()
if grandparent is not None and isinstance(grandparent, ASTFunctionCall):
grandparent_func_name = grandparent.get_name()
if grandparent_func_name == 'convolve':
continue
code, message = Messages.get_kernel_outside_convolve(kernelName)
Logger.log_message(code=code,
message=message,
log_level=LoggingLevel.ERROR,
error_position=node.get_source_position())
# kernel name is used inside kernel definition, e.g. for a node ``g``, it appears in ``kernel g'' = -1/tau**2 * g - 2/tau * g'``
correct = True
break

if isinstance(parent, ASTFunctionCall):
func_name = parent.get_name()
if func_name == PredefinedFunctions.CONVOLVE:
# kernel name is used inside convolve call
correct = True

if not correct:
code, message = Messages.get_kernel_outside_convolve(kernelName)
Logger.log_message(code=code,
message=message,
log_level=LoggingLevel.ERROR,
error_position=node.get_source_position())
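The ancestor walk implemented above can be sketched in isolation; the ``Node`` class and the kind strings below are simplified stand-ins for the NESTML AST classes, not the real API:

```python
class Node:
    """Minimal stand-in for an AST node with a parent pointer."""

    def __init__(self, kind: str, parent: "Node" = None):
        self.kind = kind
        self.parent = parent


def kernel_use_is_legal(node: "Node") -> bool:
    """Walk up from an occurrence of a kernel name: the use is legal if
    some ancestor is a convolve() call or a kernel definition, and
    illegal if the model root is reached first."""
    current = node
    while current is not None:
        if current.kind in ("convolve_call", "kernel_definition"):
            return True
        if current.kind == "model":
            return False
        current = current.parent
    return False
```

For instance, a kernel variable nested inside a ``convolve_call`` node is accepted, while the same variable hanging directly under the ``model`` root would be reported as an error.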


class KernelCollectingVisitor(ASTVisitor):
Expand Down
1 change: 1 addition & 0 deletions pynestml/cocos/co_co_odes_have_consistent_units.py
@@ -50,6 +50,7 @@ def visit_ode_equation(self, node):
:param node: A single ode equation.
:type node: ast_ode_equation
"""
+ return
variable_name = node.get_lhs().get_name()
variable_symbol = node.get_lhs().get_scope().resolve_to_symbol(variable_name, SymbolKind.VARIABLE)
if variable_symbol is None:
3 changes: 0 additions & 3 deletions pynestml/cocos/co_co_v_comp_exists.py
@@ -43,9 +43,6 @@ def check_co_co(cls, neuron: ASTModel):
Models which are supposed to be compartmental but do not contain
state variable called v_comp are not correct.
:param neuron: a single neuron instance.
- :param after_ast_rewrite: indicates whether this coco is checked
- after the code generator has done rewriting of the abstract syntax tree.
- If True, checks are not as rigorous. Use False where possible.
"""
from pynestml.codegeneration.nest_compartmental_code_generator import NESTCompartmentalCodeGenerator

13 changes: 9 additions & 4 deletions pynestml/cocos/co_cos_manager.py
@@ -69,6 +69,7 @@
from pynestml.cocos.co_co_priorities_correctly_specified import CoCoPrioritiesCorrectlySpecified
from pynestml.meta_model.ast_model import ASTModel
from pynestml.frontend.frontend_configuration import FrontendConfiguration
+ from pynestml.utils.logger import Logger


class CoCosManager:
@@ -123,12 +124,12 @@ def check_state_variables_initialized(cls, model: ASTModel):
CoCoStateVariablesInitialized.check_co_co(model)

@classmethod
- def check_variables_defined_before_usage(cls, model: ASTModel, after_ast_rewrite: bool) -> None:
+ def check_variables_defined_before_usage(cls, model: ASTModel) -> None:
"""
Checks that all variables are defined before being used.
:param model: a single model.
"""
- CoCoAllVariablesDefined.check_co_co(model, after_ast_rewrite)
+ CoCoAllVariablesDefined.check_co_co(model)

@classmethod
def check_v_comp_requirement(cls, neuron: ASTModel):
@@ -402,17 +403,19 @@ def check_input_port_size_type(cls, model: ASTModel):
CoCoVectorInputPortsCorrectSizeType.check_co_co(model)

@classmethod
- def post_symbol_table_builder_checks(cls, model: ASTModel, after_ast_rewrite: bool = False):
+ def check_cocos(cls, model: ASTModel, after_ast_rewrite: bool = False):
"""
Checks all context conditions.
:param model: a single model object.
"""
+ Logger.set_current_node(model)
+
cls.check_each_block_defined_at_most_once(model)
cls.check_function_defined(model)
cls.check_variables_unique_in_scope(model)
cls.check_inline_expression_not_assigned_to(model)
cls.check_state_variables_initialized(model)
- cls.check_variables_defined_before_usage(model, after_ast_rewrite)
+ cls.check_variables_defined_before_usage(model)
if FrontendConfiguration.get_target_platform().upper() == 'NEST_COMPARTMENTAL':
# XXX: TODO: refactor this out; define a ``cocos_from_target_name()`` in the frontend instead.
cls.check_v_comp_requirement(model)
@@ -452,3 +455,5 @@ def post_symbol_table_builder_checks(cls, model: ASTModel, after_ast_rewrite: bo
cls.check_co_co_priorities_correctly_specified(model)
cls.check_resolution_func_legally_used(model)
cls.check_input_port_size_type(model)
+
+ Logger.set_current_node(None)
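The paired ``set_current_node(model)`` / ``set_current_node(None)`` calls added in this hunk follow a set-up/tear-down pattern that could also be written as a context manager. A minimal sketch (the ``Logger`` class below is a stand-in for pynestml's logger, not the real class):

```python
from contextlib import contextmanager


class Logger:
    """Minimal stand-in for pynestml's Logger, holding a current-node slot."""

    current_node = None

    @classmethod
    def set_current_node(cls, node):
        cls.current_node = node


@contextmanager
def current_node(node):
    # Set the node context for the duration of the block and always
    # clear it afterwards, even if a context condition check raises.
    Logger.set_current_node(node)
    try:
        yield
    finally:
        Logger.set_current_node(None)
```

With this, a method like ``check_cocos`` could wrap all its checks in ``with current_node(model): ...`` and the tear-down would be exception-safe.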
4 changes: 2 additions & 2 deletions pynestml/codegeneration/builder.py
@@ -20,12 +20,12 @@
# along with NEST. If not, see <http://www.gnu.org/licenses/>.

from __future__ import annotations
- import subprocess
- import os

from typing import Any, Mapping, Optional

from abc import ABCMeta, abstractmethod
+ import os
+ import subprocess

from pynestml.exceptions.invalid_target_exception import InvalidTargetException
from pynestml.frontend.frontend_configuration import FrontendConfiguration
1 change: 0 additions & 1 deletion pynestml/codegeneration/code_generator.py
@@ -113,7 +113,6 @@ def _setup_template_env(self, template_files: List[str], templates_root_dir: str
# Environment for neuron templates
env = Environment(loader=FileSystemLoader(_template_dirs))
env.globals["raise"] = self.raise_helper
env.globals["is_delta_kernel"] = ASTUtils.is_delta_kernel

# Load all the templates
_templates = list()