diff --git a/dev/custom/custom-addons/index.html b/dev/custom/custom-addons/index.html index 96889eb7f..6d36db803 100644 --- a/dev/custom/custom-addons/index.html +++ b/dev/custom/custom-addons/index.html @@ -21,4 +21,4 @@ end

Step 3: Computing products

The goal is to update the AddonCount structure when we multiply two messages. As a result, we need to write a function that defines this behaviour. This function is called multiply_addons and accepts five arguments. In our example this becomes

function multiply_addons(left_addon::AddonCount, right_addon::AddonCount, new_dist, left_dist, right_dist)
     return AddonCount(left_addon.count + right_addon.count + 1)
end

Here we add the number of operations from the addons that are being multiplied, plus one for the current operation. We are aware that this is likely not valid for iterative message passing schemes, but it still serves as a nice example. The left_addon and right_addon arguments specify the AddonCount objects that are being multiplied. Corresponding to these addons, there are the distributions left_dist and right_dist, which might contain information for computing the product. The new distribution new_dist ∝ left_dist * right_dist is also passed along for potentially reusing the result of earlier computations.
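
To make this concrete, here is a minimal usage sketch; the counts are purely illustrative and we pass nothing for the distribution arguments, since this particular implementation ignores them:

left  = AddonCount(3) # three operations recorded so far on the left message
right = AddonCount(2) # two operations recorded so far on the right message

combined = multiply_addons(left, right, nothing, nothing, nothing)
combined.count        # 3 + 2 + 1 = 6, the extra one counts the current product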

More information

For more advanced information check the implementation of the log-scale or memory addons.

diff --git a/dev/custom/custom-functional-form/index.html b/dev/custom/custom-functional-form/index.html index ddafc7f4e..b5ad615a5 100644 --- a/dev/custom/custom-functional-form/index.html +++ b/dev/custom/custom-functional-form/index.html @@ -1,5 +1,5 @@ -Custom functional form · ReactiveMP.jl

Custom Functional Form Specification

In a nutshell, a functional form constraint defines a function that approximates the product of colliding messages and computes a posterior marginal that can be used later on during the inference procedure. An important part of the functional form constraint implementation is the prod function. More information about the prod function is available in the Prod Implementation section. For example, if we refer to our CustomFunctionalForm as f, we can see the whole functional form constraint pipeline as:

\[q(x) = f\left(\frac{\overrightarrow{\mu}(x)\overleftarrow{\mu}(x)}{\int \overrightarrow{\mu}(x)\overleftarrow{\mu}(x) \mathrm{d}x}\right)\]

Interface

ReactiveMP.jl, however, uses some extra utility functions to define functional form constraint behaviour. Here we briefly describe all utility functions. If you are only interested in the concrete example, you may directly head to the Custom Functional Form example at the end of this section.

Abstract super type

ReactiveMP.AbstractFormConstraintType
AbstractFormConstraint

Every functional form constraint is a subtype of AbstractFormConstraint abstract type.

Note: this is not strictly necessary, but it makes automatic dispatch easier and compatible with the CompositeFormConstraint.

See also: CompositeFormConstraint

source
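
As a sketch, declaring a new constraint is simply a matter of subtyping AbstractFormConstraint. The name MyFormConstraint is hypothetical and is reused in the snippets below:

using ReactiveMP

# A hypothetical functional form constraint with no fields
struct MyFormConstraint <: AbstractFormConstraint end
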
ReactiveMP.CompositeFormConstraintType
CompositeFormConstraint

Creates a composite form constraint that applies form constraints in order. The composed form constraints must be compatible and have the exact same form_check_strategy. Any functional form constraint that defines is_point_mass_form_constraint() = true may be used only as the last element of the composition.

source

Form check strategy

Every custom functional form must implement a new method for the default_form_check_strategy function that returns either FormConstraintCheckEach or FormConstraintCheckLast, as sketched after the list below.

  • FormConstraintCheckLast: q(x) = f(μ1(x) * μ2(x) * μ3(x))
  • FormConstraintCheckEach: q(x) = f(f(μ1(x) * μ2(x)) * μ3(x))
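
For example, a constraint that only needs to inspect the final result of the product sequence can opt for the FormConstraintCheckLast strategy. A minimal sketch for the hypothetical MyFormConstraint defined above:

# Check (and constrain) the functional form only once, after the full product sequence
ReactiveMP.default_form_check_strategy(::MyFormConstraint) = FormConstraintCheckLast()
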
ReactiveMP.FormConstraintCheckEachType
FormConstraintCheckEach

This form constraint check strategy checks the functional form of the message product after each product in an equality chain. Usually, if a variable has been connected to multiple nodes, we want to perform multiple prod operations to obtain a posterior marginal. With this form check strategy, the constrain_form function will be executed after each subsequent prod function.

See also: FormConstraintCheckLast, default_form_check_strategy, constrain_form

source
ReactiveMP.FormConstraintCheckLastType
FormConstraintCheckLast

This form constraint check strategy checks the functional form of the last message product in the equality chain. Usually, if a variable has been connected to multiple nodes, we want to perform multiple prod operations to obtain a posterior marginal. With this form check strategy, the constrain_form function will be executed only once, after all subsequent prod functions have been executed.

See also: FormConstraintCheckEach, default_form_check_strategy, constrain_form

source

Prod constraint

Every custom functional form must implement a new method for the default_prod_constraint function that returns a proper prod_constraint object.
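
A sketch for the hypothetical MyFormConstraint from above; here we assume that the closed-form product rules are sufficient, which corresponds to the ProdAnalytical prod constraint:

# Use closed-form (analytical) product rules for the products of colliding messages
ReactiveMP.default_prod_constraint(::MyFormConstraint) = ProdAnalytical()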

Constrain form, a.k.a. f

The main function that a custom functional form constraint must implement, which we referred to as f at the beginning of this section, is the constrain_form function.
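
A sketch of the corresponding method for the hypothetical MyFormConstraint; a real implementation would approximate the incoming product and return the constrained marginal instead of passing it through:

function ReactiveMP.constrain_form(::MyFormConstraint, distribution)
    # approximate `distribution` here and return the constrained posterior marginal
    return distribution
end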

Is point mass form constraint (optional)

Every custom functional form may implement a new method for the is_point_mass_form_constraint function that returns either true or false. This is a utility function that simplifies computation of the Bethe Free Energy and is not strictly necessary.

Compatibility with @constraints macro (optional)

To make a custom functional form constraint compatible with the @constraints macro, it must implement a new method for the make_form_constraint function.

ReactiveMP.make_form_constraintFunction
make_form_constraint(::Type, args...; kwargs...)

Creates a form constraint object based on the passed type with the given args and kwargs. Used to simplify form constraint specification.

As an example:

make_form_constraint(PointMass)

creates an instance of PointMassFormConstraint and

make_form_constraint(SampleList, 5000, LeftProposal())

should create an instance of SampleListFormConstraint.

See also: AbstractFormConstraint

source

Custom Functional Form Example

In this demo we show how to build a custom functional form constraint that is compatible with the ReactiveMP.jl inference backend. An important part of the functional form constraint implementation is the prod function. More information about the prod function is available in the Prod Implementation section. We show a relatively simple use-case, which might not be very useful in practice, but serves as a simple step-by-step guide. Assume that we want a specific posterior marginal of some random variable in our model to have a specific Gaussian parametrisation, for example mean-precision. We can use the built-in NormalMeanPrecision distribution, but we still need to define our custom functional form constraint:

using ReactiveMP
 
 # First we define our functional form structure with no fields
 struct MeanPrecisionFormConstraint <: AbstractFormConstraint end

Next we define the behaviour of our functional form constraint:

ReactiveMP.is_point_mass_form_constraint(::MeanPrecisionFormConstraint) = false
@@ -17,4 +17,4 @@
 function ReactiveMP.constrain_form(::MeanPrecisionFormConstraint, distribution::DistProduct)
     # DistProduct is a special case; read more about this type in the corresponding documentation section
     # ...
end
diff --git a/dev/extra/contributing/index.html b/dev/extra/contributing/index.html index 79c4a4839..4178b1bf2 100644 --- a/dev/extra/contributing/index.html +++ b/dev/extra/contributing/index.html @@ -1,2 +1,2 @@ -Contributing · ReactiveMP.jl
diff --git a/dev/extra/contributing/index.html b/dev/extra/contributing/index.html index 79c4a4839..4178b1bf2 100644 --- a/dev/extra/contributing/index.html +++ b/dev/extra/contributing/index.html @@ -1,2 +1,2 @@ -Contributing · ReactiveMP.jl

Contribution guidelines

We welcome all possible contributors. This page details some of the guidelines that should be followed when contributing to this package.

Reporting bugs

We track bugs using GitHub issues. We encourage you to write complete, specific, reproducible bug reports. Mention the versions of Julia and ReactiveMP for which you observe unexpected behavior. Please provide a concise description of the problem and complement it with code snippets, test cases, screenshots, tracebacks or any other information that you consider relevant. This will help us to replicate the problem and narrow the search space for solutions.

Suggesting features

We welcome new feature proposals. However, before submitting a feature request, consider a few things:

  • Does the feature require changes in the core ReactiveMP.jl code? If it doesn't (for example, you would like to add a factor node for a particular application), you can add local extensions in your script/notebook or consider making a separate repository for your extensions.
  • If you would like to add an implementation of a feature that changes a lot in the core ReactiveMP.jl code, please open an issue on GitHub and describe your proposal first. This will allow us to discuss your proposal with you before you invest your time in implementing something that may be difficult to merge later on.

Contributing code

Installing ReactiveMP

We suggest that you use the dev command from the new Julia package manager to install ReactiveMP.jl for development purposes. To work on your fork of ReactiveMP.jl, use your fork's URL in the dev command, for example:

] dev git@github.com:your_username/ReactiveMP.jl.git

The dev command clones ReactiveMP.jl to ~/.julia/dev/ReactiveMP. All local changes to ReactiveMP code will be reflected in imported code.

Note

It might also be useful to install the Revise.jl package, as it allows you to modify code and use the changes without restarting Julia.

Committing code

We use the standard GitHub Flow workflow where all contributions are added through pull requests. In order to contribute, first fork the repository, then commit your contributions to your fork, and then create a pull request on the master branch of the ReactiveMP.jl repository.

Before opening a pull request, please make sure that all tests pass without failing! All demos (can be found in /demo/ directory) and benchmarks (can be found in /benchmark/ directory) have to run without errors as well.

Style conventions

Note

ReactiveMP.jl repository contains scripts to automatically format code according to our guidelines. Use make format command to fix code style. This command overwrites files.

We use the default Julia style guide. We list here a few important points and our modifications to the Julia style guide:

  • Use 4 spaces for indentation
  • Type names use UpperCamelCase. For example: AbstractFactorNode, RandomVariable, etc.
  • Function names are lowercase with underscores, when necessary. For example: activate!, randomvar, as_variable, etc.
  • Variable names and function arguments use snake_case
  • The name of a method that modifies its argument(s) must end in !

Unit tests

We use the test-driven development (TDD) methodology for ReactiveMP.jl development. The test coverage should be as complete as possible. Please make sure that you write tests for each piece of code that you want to add.

All unit tests are located in the /test/ directory. The /test/ directory follows the structure of the /src/ directory. Each test file should have the following filename format: test_*.jl. Some tests are also present in jldoctest annotations directly in the source code. See Julia's documentation about doctests.

The tests can be evaluated by running the following command in the Julia REPL:

] test ReactiveMP

In addition, tests can be evaluated by running the following command in the ReactiveMP root directory:

make test

Fixes to external libraries

If a bug has been discovered in an external dependency of ReactiveMP.jl, it is best to open an issue directly in the dependency's GitHub repository. You can use the fixes.jl file for hot-fixes before a new release of the broken dependency is available.

Makefile

ReactiveMP.jl uses Makefile for most common operations:

  • make help: Shows help snippet
  • make test: Run tests, supports extra arguments
    • make test test_args="distributions:normal_mean_variance" would run tests only from distributions/test_normal_mean_variance.jl
    • make test test_args="distributions:normal_mean_variance models:lgssm" would run tests both from distributions/test_normal_mean_variance.jl and models/test_lgssm.jl
  • make docs: Compile documentation
  • make benchmark: Run simple benchmark
  • make lint: Check codestyle
  • make format: Check and fix codestyle
diff --git a/dev/extra/extensions/index.html b/dev/extra/extensions/index.html index c75cbc45a..654291627 100644 --- a/dev/extra/extensions/index.html +++ b/dev/extra/extensions/index.html @@ -1,2 +1,2 @@ -Extensions and interaction with the Julia ecosystem · ReactiveMP.jl

Extensions and interaction with the Julia ecosystem

ReactiveMP.jl exports extra functionality if other Julia packages are loaded in the same environment.

Optimisers.jl

The Optimisers.jl package defines many standard gradient-based optimisation rules, and tools for applying them to deeply nested models. The optimisers defined in Optimisers.jl are compatible with the CVI approximation method.

Zygote.jl

The Zygote.jl package provides source-to-source automatic differentiation (AD) in Julia. If loaded in the current Julia session, the ZygoteGrad option becomes available with the CVI approximation method.

DiffResults.jl (loaded automatically with ForwardDiff.jl)

The DiffResults.jl package provides the DiffResult type, which can be passed to in-place differentiation methods instead of an output buffer. If loaded in the current Julia session, it enables faster derivatives with the ForwardDiffGrad option in the CVI approximation method (in the Gaussian case).

diff --git a/dev/index.html b/dev/index.html index 075c30313..5007778b4 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Introduction · ReactiveMP.jl

ReactiveMP.jl

A Julia package implementing a reactive message passing Bayesian inference engine on a factor graph.

Note

This package exports only an inference engine; for the full ecosystem with convenient model and constraints specification we refer the user to the RxInfer.jl package and its documentation.

Examples and tutorials

Tutorials and examples are available in the RxInfer documentation.

Table of Contents

Index

diff --git a/dev/lib/algebra/common/index.html b/dev/lib/algebra/common/index.html index 6a2fec83e..0514ff9ab 100644 --- a/dev/lib/algebra/common/index.html +++ b/dev/lib/algebra/common/index.html @@ -1,2 +1,2 @@ -Algebra utils · ReactiveMP.jl

Algebra common utilities

diageye

ReactiveMP.diageyeFunction
diageye(::Type{T}, n::Int)

An alias for the Matrix{T}(I, n, n). Returns a matrix of size n x n with ones (of type T) on the diagonal and zeros everywhere else.

source
diageye(n::Int)

An alias for the Matrix{Float64}(I, n, n). Returns a matrix of size n x n with ones (of type Float64) on the diagonal and zeros everywhere else.

source
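
For illustration, a short usage sketch of both methods:

using ReactiveMP

ReactiveMP.diageye(Float32, 2) # 2×2 identity matrix with Float32 entries
ReactiveMP.diageye(3)          # 3×3 identity matrix with Float64 entries
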
diff --git a/dev/lib/helpers/index.html b/dev/lib/helpers/index.html index ca2393f4b..e46b3508a 100644 --- a/dev/lib/helpers/index.html +++ b/dev/lib/helpers/index.html @@ -1,5 +1,5 @@ -Helper utils · ReactiveMP.jl

Helper utilities

ReactiveMP implements various structures/functions/methods as "helper" structures that might be useful in various contexts.

SkipIndexIterator

ReactiveMP.SkipIndexIteratorType
SkipIndexIterator

A special type of iterator that simply iterates over an internal iterator, but skips the index skip.

Arguments

  • iterator: internal iterator
  • skip: index to skip (integer)

See also: skipindex

source
ReactiveMP.skipindexFunction
skipindex(iterator, skip)

Creation operator for SkipIndexIterator.

julia> s = ReactiveMP.skipindex(1:3, 2)
 2-element ReactiveMP.SkipIndexIterator{Int64, UnitRange{Int64}}:
  1
  3
@@ -7,8 +7,8 @@
 julia> collect(s)
 2-element Vector{Int64}:
  1
 3

See also: SkipIndexIterator

source

deep_eltype

deep_eltype

ReactiveMP.deep_eltypeFunction
deep_eltype

Returns the eltype of the first container in the nested hierarchy.

julia> ReactiveMP.deep_eltype([ [1, 2], [2, 3] ])
 Int64
 
 julia> ReactiveMP.deep_eltype([[[ 1.0, 2.0 ], [ 3.0, 4.0 ]], [[ 5.0, 6.0 ], [ 7.0, 8.0 ]]])
 Float64
source

FunctionalIndex

ReactiveMP.FunctionalIndexType
FunctionalIndex

A special type of an index that represents a function that can be used only in pair with a collection. An example of a FunctionalIndex can be firstindex or lastindex, but more complex use cases are possible too, e.g. firstindex + 1. An important part of the implementation is that the resulting structure satisfies isbitstype(...) = true, which allows storing it in a parametric type as a valtype.

One use case for this structure is to dispatch on and to replace begin or end (or more complex use cases, e.g. begin + 1) markers in constraints specification language.

source
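
A small sketch based on the description above; the constructor shown here is an internal detail and may differ between versions, so treat it as illustrative:

# Wraps `firstindex` as an isbits value that can be applied to a collection later
idx = ReactiveMP.FunctionalIndex{:begin}(firstindex)

idx((10, 20, 30)) # applies `firstindex` to the tuple and returns 1
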
diff --git a/dev/lib/message/index.html b/dev/lib/message/index.html index 9a8e764cd..7da91022b 100644 --- a/dev/lib/message/index.html +++ b/dev/lib/message/index.html @@ -1,5 +1,5 @@ -Messages · ReactiveMP.jl

Messages implementation

In our message passing framework one of the most important concepts is the message (wow!). Messages flow along edges of a factor graph and hold information about the part of the graph that they originate from. Usually this information is in the form of a probability distribution. Two common messages are belief propagation messages and variational messages, which are computed differently as shown below.

Abstract message type

Both belief propagation and variational messages are subtypes of the AbstractMessage supertype.

Belief-Propagation (or Sum-Product) message

[figure: Belief propagation message]

Variational message

[figure: Variational message with structured factorisation q(x, y)q(z) assumption]

Message type

All messages are encoded with the type Message.

ReactiveMP.MessageType
Message{D, A} <: AbstractMessage

Message structure encodes a Belief Propagation message, which holds some data that is usually a probability distribution, but can also be an arbitrary object. Message acts as a proxy structure to the data object and proxies most of the statistical functions, e.g. mean, mode, cov etc.

Arguments

  • data::D: message always holds some data object associated with it
  • is_clamped::Bool, specifies if this message is clamped
  • is_initial::Bool, specifies if this message is initial
  • addons::A, specifies the addons of the message

Example

julia> distribution = Gamma(10.0, 2.0)
 Gamma{Float64}(α=10.0, θ=2.0)
 
 julia> message = Message(distribution, false, true, nothing)
@@ -16,5 +16,5 @@
 
 julia> is_initial(message)
 true

See also: AbstractMessage, ReactiveMP.materialize!

source

From an implementation point of view, the Message structure does nothing but hold some data object and redirect most of the statistics-related functions to that data object. However, this object is used extensively in Julia's multiple dispatch. Our implementation also uses the extra is_initial and is_clamped fields to determine if the product of two messages results in an is_initial or is_clamped posterior marginal. The final field contains the addons. These contain additional information on top of the functional form of the distribution, such as its scaling or computation history.

distribution = NormalMeanPrecision(0.0, 1.0)
message      = Message(distribution, false, true, nothing)
Message(NormalMeanPrecision{Float64}(μ=0.0, w=1.0))
mean(message), precision(message)
(0.0, 1.0)
logpdf(message, 1.0)
-1.4189385332046727
is_clamped(message), is_initial(message)
(false, true)

The user should not really interact with the Message structure while working with ReactiveMP, unless doing some advanced inference procedures that involve prediction.

diff --git a/dev/lib/methods/index.html b/dev/lib/methods/index.html index 62639abb3..4bf7ff2e9 100644 --- a/dev/lib/methods/index.html +++ b/dev/lib/methods/index.html @@ -97,7 +97,6 @@ GaussianMixtureNode GaussianWeighteMeanPrecision GenericLogPdfVectorisedProduct -HalfNormal IMPLY ImportanceSamplingApproximation IncludeAll @@ -294,4 +293,4 @@ weightedmean_precision weightedmean_std weightedmean_var -weights +weights

diff --git a/dev/lib/nodes/flow/index.html b/dev/lib/nodes/flow/index.html index b515776b4..c0a7be853 100644 --- a/dev/lib/nodes/flow/index.html +++ b/dev/lib/nodes/flow/index.html @@ -1,9 +1,9 @@ -Flow · ReactiveMP.jl

Flow node

See also Flow tutorial for a comprehensive guide on using flows in ReactiveMP.

ReactiveMP.PlanarFlowType

The PlanarFlow function is defined as

\[f({\bf{x}}) = {\bf{x}} + {\bf{u}} \tanh({\bf{w}}^\top {\bf{x}} + b)\]

with input and output dimension $D$. Here ${\bf{x}}\in \mathbb{R}^D$ represents the input of the function. Furthermore ${\bf{u}}\in \mathbb{R}^D$, ${\bf{w}}\in \mathbb{R}^D$ and $b\in\mathbb{R}$ represent the parameters of the function. The function contracts and expands the input space.

This function has been introduced in:

Rezende, Danilo, and Shakir Mohamed. "Variational inference with normalizing flows." International conference on machine learning. PMLR, 2015.

source
ReactiveMP.RadialFlowType

The RadialFlow function is defined as

\[f({\bf{x}}) = {\bf{x}} + \frac{\beta({\bf{z}} - {\bf{z}}_0)}{\alpha + |{\bf{z}} - {\bf{z}}_0|}\]

with input and output dimension $D$. Here ${\bf{x}}\in \mathbb{R}^D$ represents the input of the function. Furthermore ${\bf{z}}_0\in \mathbb{R}^D$, $\alpha\in \mathbb{R}$ and $\beta\in\mathbb{R}$ represent the parameters of the function. The function contracts and expands the input space.

This function has been introduced in:

Rezende, Danilo, and Shakir Mohamed. "Variational inference with normalizing flows." International conference on machine learning. PMLR, 2015.

source
ReactiveMP.FlowModelType

The FlowModel structure is the most generic type of Flow model, in which the layers are not constrained to be of a specific type. The FlowModel structure contains the input dimensionality and a tuple of layers and can be constructed as FlowModel( dim, (layer1, layer2, ...) ).

Note: this model can be specialized by constraining the types of layers. This potentially allows for more efficient specialized methods that can deal with specifics of these layers, such as triangular jacobian matrices.

source
ReactiveMP.CompiledFlowModelType

The CompiledFlowModel structure is the most generic type of compiled Flow model, in which the layers are not constrained to be of a specific type. The FlowModel structure contains the input dimension and a tuple of compiled layers. Do not manually create a CompiledFlowModel! Instead create a FlowModel first and compile it with compile(model::FlowModel). This will make sure that all layers/mappings are configured with the proper dimensionality and with randomly sampled parameters. Alternatively, if you would like to pass your own parameters, call compile(model::FlowModel, params::Vector).

Note: this model can be specialized by constraining the types of layers. This potentially allows for more efficient specialized methods that can deal with specifics of these layers, such as triangular jacobian matrices.

source
ReactiveMP.compileFunction

compile() compiles a model by setting its parameters. It randomly sets parameter values in the layers and flows such that the model can be used for inference.

Input arguments

  • model::FlowModel - a model of which the dimensionality of its layers/flows has been initialized, but its parameters have not been set.

Return arguments

  • ::CompiledFlowModel - a compiled model with set parameters, such that it can be used for processing data.
source

compile(model::FlowModel, params::Vector) lets you initialize a model model with a vector of parameters params.

Input arguments

  • model::FlowModel - a model of which the dimensionality of its layers/flows has been initialized, but its parameters have not been set.
  • params::Vector - a vector of parameters with which the model should be compiled.

Return arguments

  • ::CompiledFlowModel - a compiled model with set parameters, such that it can be used for processing data.
source
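
Putting the pieces together, a minimal sketch that builds and compiles a single-layer model using the constructors documented on this page; the layer choice and dimensionality are purely illustrative:

using ReactiveMP

f     = PlanarFlow()             # coupling function
layer = AdditiveCouplingLayer(f) # invertible coupling layer
model = FlowModel(2, (layer, ))  # model with input dimensionality 2

compiled = compile(model)        # randomly samples parameters for all layers
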
ReactiveMP.AdditiveCouplingLayerType

The additive coupling layer specifies an invertible function ${\bf{y}} = g({\bf{x}})$ following the specific structure (for the mapping $g: \mathbb{R}^2 \rightarrow \mathbb{R}^2$):

\[ \begin{align} y_1 &= x_1 \\ y_2 &= x_2 + f(x_1) \end{align}\]

where $f(\cdot)$ denotes an arbitrary function with mapping $f: \mathbb{R} \rightarrow \mathbb{R}$. This function can be arbitrarily complex. Non-linear functions (neural networks) are often chosen to model complex relationships. From the definition of the model, invertibility can be easily achieved as

\[ \begin{align} x_1 &= y_1 \\ x_2 &= y_2 - f(y_1) \end{align}\]

The current implementation only allows for the mapping $g: \mathbb{R}^2 \rightarrow \mathbb{R}^2$, although this layer can be generalized for arbitrary input dimensions.

AdditiveCouplingLayer(f <: AbstractCouplingFlow) creates the layer structure with function f.

Example

f = PlanarFlow()
layer = AdditiveCouplingLayer(f)

This layer structure has been introduced in:

Dinh, Laurent, David Krueger, and Yoshua Bengio. "Nice: Non-linear independent components estimation." arXiv preprint arXiv:1410.8516 (2014).

source
ReactiveMP.PermutationLayerType

The permutation layer specifies an invertible mapping ${\bf{y}} = g({\bf{x}}) = P{\bf{x}}$ where $P$ is a permutation matrix.

source
ReactiveMP.FlowMetaType

The FlowMeta structure contains the meta data of the Flow factor node. More specifically, it contains the model of the Flow factor node. The FlowMeta structure can be constructed as FlowMeta(model). Make sure that the flow model has been compiled.

The FlowMeta structure is required for the Flow factor node and can be included with the Flow node as: y ~ Flow(x) where { meta = FlowMeta(...) }

source
diff --git a/dev/lib/nodes/nodes/index.html b/dev/lib/nodes/nodes/index.html index 69a845a94..3a659f622 100644 @@ -6,7 +6,7 @@
 # Node's tag/name     Node's type        A fixed set of edges
 # Another possible    The very first edge (in this example `x`) is considered
 # value is            to be the output of the node
-# `Deterministic`
+#                       `Deterministic`

This expression registers a new node that can be used with the inference engine. Note, however, that the @node macro does not generate any message passing update rules. These must be defined using the @rule macro.
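
For illustration, a hedged sketch of such a registration; the node type and edge names here are hypothetical:

struct MyNormal end # hypothetical distribution type

# Registers a stochastic node with three edges; the first edge `out` is the output
@node MyNormal Stochastic [ out, μ, v ]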

Node types

We distinguish different types of factor nodes in order to have better control over Bethe Free Energy computation. Each factor node has either the Deterministic or Stochastic functional form type.

ReactiveMP.DeterministicType
Deterministic

Deterministic object used to parametrize a factor node object with a deterministic type of relationship between variables.

See also: Stochastic, isdeterministic, isstochastic, sdtype

source
ReactiveMP.StochasticType
Stochastic

Stochastic object used to parametrize a factor node object with a stochastic type of relationship between variables.

See also: Deterministic, isdeterministic, isstochastic, sdtype

source
ReactiveMP.isdeterministicFunction
isdeterministic(node)

Function used to check if a factor node object is deterministic or not. Returns true or false.

See also: Deterministic, Stochastic, isstochastic, sdtype

source
ReactiveMP.isstochasticFunction
isstochastic(node)

Function used to check if a factor node object is stochastic or not. Returns true or false.

See also: Deterministic, Stochastic, isdeterministic, sdtype

source
ReactiveMP.sdtypeFunction
sdtype(object)

Returns either Deterministic or Stochastic for a given object (if defined).

See also: Deterministic, Stochastic, isdeterministic, isstochastic

source

For example the + node has the Deterministic type:

plus_node = make_node(+)
 
 println("Is `+` node deterministic: ", isdeterministic(plus_node))
 println("Is `+` node stochastic: ", isstochastic(plus_node))
Is `+` node deterministic: true
@@ -16,7 +16,7 @@
 println("Is `Bernoulli` node stochastic: ", isstochastic(bernoulli_node))
Is `Bernoulli` node deterministic: false
 Is `Bernoulli` node stochastic: true

To get an actual instance of the type object we use sdtype function:

println("sdtype() of `+` node is ", sdtype(plus_node))
 println("sdtype() of `Bernoulli` node is ", sdtype(bernoulli_node))
sdtype() of `+` node is Deterministic()
sdtype() of `Bernoulli` node is Stochastic()

Node functional dependencies pipeline

The generic implementation of factor nodes in ReactiveMP supports custom functional dependency pipelines. Briefly, the functional dependencies pipeline defines what dependencies are needed to compute a single message. As an example, consider the belief-propagation message update equation for a factor node $f$ with three edges: $x$, $y$ and $z$:

\[\mu(x) = \int \mu(y) \mu(z) f(x, y, z) \mathrm{d}y \mathrm{d}z\]

Here we see that in the standard setting for the belief-propagation message out of edge $x$, we need only messages from the edges $y$ and $z$. In contrast, consider the variational message update rule equation with mean-field assumption:

\[\mu(x) = \exp \int q(y) q(z) \log f(x, y, z) \mathrm{d}y \mathrm{d}z\]

We see that in this setting, we do not need messages $\mu(y)$ and $\mu(z)$, but only the marginals $q(y)$ and $q(z)$. The purpose of a functional dependencies pipeline is to determine functional dependencies (a set of messages or marginals) that are needed to compute a single message. By default, ReactiveMP.jl uses so-called DefaultFunctionalDependencies that correctly implements belief-propagation and variational message passing schemes (including both mean-field and structured factorisations). The full list of built-in pipelines is presented below:

ReactiveMP.DefaultFunctionalDependenciesType
DefaultFunctionalDependencies

This pipeline translates directly to enforcing a variational message passing scheme. In order to compute a message out of some edge, this pipeline requires messages from edges within the same edge-cluster and marginals over other edge-clusters.

See also: ReactiveMP.RequireMessageFunctionalDependencies, ReactiveMP.RequireMarginalFunctionalDependencies, ReactiveMP.RequireEverythingFunctionalDependencies

source
ReactiveMP.RequireMessageFunctionalDependenciesType
RequireMessageFunctionalDependencies(indices::Tuple, start_with::Tuple)

The same as DefaultFunctionalDependencies, but in order to compute a message out of some edge, it also requires the inbound message on this edge.

Arguments

  • indices::Tuple, tuple of integers, which indicates what edges should require inbound messages
  • start_with::Tuple, tuple of nothing or <:Distribution, which specifies the initial inbound messages for edges in indices

Note: start_with uses the setmessage! mechanism, hence it can be visible to other listeners on the same edge. An explicit call to setmessage! overwrites whatever has been passed in start_with.

@model macro accepts a simplified construction of this pipeline:

@model function some_model()
     # ...
     y ~ NormalMeanVariance(x, τ) where {
         pipeline = RequireMessage(x = vague(NormalMeanPrecision),     τ)
@@ -25,7 +25,7 @@
                                   # and initialise with `vague(...)`  but here we skip initialisation
     }
     # ...
end

Deprecation warning: RequireInboundFunctionalDependencies has been deprecated in favor of RequireMessageFunctionalDependencies.

See also: ReactiveMP.DefaultFunctionalDependencies, ReactiveMP.RequireMarginalFunctionalDependencies, ReactiveMP.RequireEverythingFunctionalDependencies

source
ReactiveMP.RequireMarginalFunctionalDependenciesType
RequireMarginalFunctionalDependencies(indices::Tuple, start_with::Tuple)

Similar to DefaultFunctionalDependencies, but in order to compute a message out of some edge it also requires the posterior marginal on that edge.

Arguments

  • indices::Tuple, a tuple of integers that indicates which edges should require their own marginals
  • start_with::Tuple, a tuple of nothing or <:Distribution entries that specifies the initial marginal for the edges in indices

Note: start_with uses the setmarginal! mechanism, hence it can be visible to other listeners on the same edge. An explicit call to setmarginal! overwrites whatever has been passed in start_with.
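
As above, the pipeline can also be constructed explicitly from the documented (indices, start_with) signature. A minimal sketch, assuming the first interface is the one that should track its own marginal:

using ReactiveMP

# Require the posterior marginal on the first interface and initialise it with
# a vague Normal; use `nothing` in `start_with` to skip initialisation.
pipeline = RequireMarginalFunctionalDependencies(
    (1,),                          # indices: interfaces that require their own marginals
    (vague(NormalMeanPrecision),)  # start_with: one initial marginal per index
)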

The @model macro accepts a simplified construction of this pipeline:

@model function some_model()
     # ...
     y ~ NormalMeanVariance(x, τ) where {
         pipeline = RequireMarginal(x = vague(NormalMeanPrecision), τ)
                    # `x` requires its own marginal and is initialised with `vague(...)`;
                    # `τ` requires its marginal too, but here we skip the initialisation
     }
     # ...
end

Note: The simplified construction in the @model macro syntax is only available in GraphPPL.jl versions >2.2.0.

See also: ReactiveMP.DefaultFunctionalDependencies, ReactiveMP.RequireMessageFunctionalDependencies, ReactiveMP.RequireEverythingFunctionalDependencies

source
ReactiveMP.RequireEverythingFunctionalDependenciesType

RequireEverythingFunctionalDependencies

This pipeline specifies that, in order to compute a message out of some edge, the update rules request everything that is available locally. This includes all inbound messages (including the one on the same edge) and marginals over all local edge-clusters (which may or may not include marginals on single edges, depending on the local factorisation constraint).

See also: DefaultFunctionalDependencies, RequireMessageFunctionalDependencies, RequireMarginalFunctionalDependencies

source

Node traits

Each factor node has to define the ReactiveMP.as_node_functional_form trait function and specify a ReactiveMP.ValidNodeFunctionalForm singleton as the return object. By default, ReactiveMP.as_node_functional_form returns ReactiveMP.UndefinedNodeFunctionalForm. Objects that do not specify this property correctly cannot be used in model specification.

Note

The @node macro does this automatically.
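
As a sketch, defining the trait by hand for a custom node type could look as follows; MyCustomNode is a hypothetical example, and dispatching on the node type (rather than an instance) is an assumption consistent with nodes being referred to by their types:

using ReactiveMP

struct MyCustomNode end

# Allow `MyCustomNode` to be used as a factor node in model specification;
# the `@node` macro normally generates an equivalent definition for you.
ReactiveMP.as_node_functional_form(::Type{MyCustomNode}) = ReactiveMP.ValidNodeFunctionalForm()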

ReactiveMP.ValidNodeFunctionalFormType
ValidNodeFunctionalForm

Trait specification for an object that can be used in model specification as a factor node.

See also: ReactiveMP.as_node_functional_form, ReactiveMP.UndefinedNodeFunctionalForm

source
ReactiveMP.UndefinedNodeFunctionalFormType
UndefinedNodeFunctionalForm

Trait specification for an object that cannot be used in model specification as a factor node.

See also: ReactiveMP.as_node_functional_form, ReactiveMP.ValidNodeFunctionalForm

source
ReactiveMP.as_node_functional_formFunction
as_node_functional_form(object)

Determines the node functional form trait specification of object. Returns either ValidNodeFunctionalForm() or UndefinedNodeFunctionalForm().

See also: ReactiveMP.ValidNodeFunctionalForm, ReactiveMP.UndefinedNodeFunctionalForm

source

Prod implementation · ReactiveMP.jl

Prod implementation

Base.prodMethod
prod(strategy, left, right)

The prod function is used to find the product of two probability distributions (or any other objects) over the same variable (e.g. 𝓝(x|μ1, σ1) × 𝓝(x|μ2, σ2)). There are multiple strategies for the prod function, e.g. ProdAnalytical, ProdGeneric or ProdPreserveType.

Examples:

using ReactiveMP
 
 product = prod(ProdAnalytical(), NormalMeanVariance(-1.0, 1.0), NormalMeanVariance(1.0, 1.0))
 
 mean(product), var(product)
 
 # output
(0.0, 0.5)

See also: prod_analytical_rule, ProdAnalytical, ProdGeneric

source
ReactiveMP.ProdAnalyticalType
ProdAnalytical

ProdAnalytical is one of the strategies for the prod function. This strategy uses analytical prod methods but does not constrain the output to be of any specific form. It throws a NoAnalyticalProdException if no analytical rule is available; use the ProdGeneric strategy to fall back to approximation methods.

Note: ProdAnalytical ignores missing values and simply returns the non-missing argument. It returns missing in case both arguments are missing.
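
A short illustration of the note above; the behaviour is as documented and the concrete distribution is arbitrary:

using ReactiveMP

d = NormalMeanVariance(0.0, 1.0)

prod(ProdAnalytical(), missing, d)       # returns `d` unchanged
prod(ProdAnalytical(), missing, missing) # returns `missing`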

See also: prod, ProdPreserveType, ProdGeneric

source
ReactiveMP.ProdFinalType
ProdFinal{T}

The ProdFinal is a wrapper around a distribution. By passing it as a message along an edge of the graph, the corresponding marginal is calculated as the distribution of the ProdFinal. In a sense, the ProdFinal ignores any further prod with any other distribution when calculating the marginal and only checks the variate types of the two distributions. Trying to prod two instances of ProdFinal will result in an error. Note: ProdFinal is not a prod strategy, as opposed to ProdAnalytical and ProdGeneric.

See also: [BIFM]

source
ReactiveMP.ProdPreserveTypeLeftType
ProdPreserveTypeLeft

ProdPreserveTypeLeft is one of the strategies for the prod function. This strategy constrains the output of prod to have the same functional form as the left argument. By default it falls back to the ProdPreserveType strategy and converts the output to a prespecified type, but it can be overwritten for some distributions for better performance.
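
A minimal sketch of the intended behaviour; both distribution types ship with ReactiveMP, and the parameter values are arbitrary:

using ReactiveMP

left  = NormalMeanVariance(0.0, 2.0)
right = NormalWeightedMeanPrecision(1.0, 1.0)

# The output is constrained to the functional form of the left argument,
# i.e. it is returned as (or converted to) a `NormalMeanVariance`.
product = prod(ProdPreserveTypeLeft(), left, right)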

See also: prod, ProdPreserveType, ProdPreserveTypeRight

source
ReactiveMP.ProdPreserveTypeRightType
ProdPreserveTypeRight

ProdPreserveTypeRight is one of the strategies for the prod function. This strategy constrains the output of prod to have the same functional form as the right argument. By default it falls back to the ProdPreserveType strategy and converts the output to a prespecified type, but it can be overwritten for some distributions for better performance.

See also: prod, ProdPreserveType, ProdPreserveTypeLeft

source

Dist product

ReactiveMP.DistProductType
DistProduct

If the inference backend cannot return an analytical solution for a product of two distributions, it may fall back to the DistProduct structure. DistProduct is useful for propagating the exact forms of two messages until they hit some approximation method or form constraint. However, DistProduct cannot be used to compute statistics such as the mean or variance. It has to be approximated before being used in the actual inference procedure.

The backend exploits the form constraints specification, which usually helps to deal with intractable products of distributions.

See also: prod, ProdGeneric

source
ReactiveMP.ProdGenericType
ProdGeneric{C}

ProdGeneric is one of the strategies for the prod function. This strategy does not fail when no analytical rule is available, but simply creates a product tree, where all nodes represent the prod function and all leaves are valid Distribution objects. This object does not define any statistical properties (such as mean or var) and cannot be used during the inference procedure. However, it plays an important part in the functional form constraints implementation: in a few words, it keeps all the information about a product of messages and propagates this information to the functional form constraint.

ProdGeneric has a "fallback" method, which it may or may not use under some circumstances. For example, if the fallback method is ProdAnalytical (which is the default one), ProdGeneric will try to optimize the product tree with analytical solutions where possible.
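
A minimal sketch of how ProdGeneric degrades gracefully; whether a particular pair of distributions has an analytical rule depends on the ReactiveMP version, so the second call is illustrative only:

using ReactiveMP, Distributions

# An analytical rule exists for two Gaussians, so this behaves like `ProdAnalytical`:
p1 = prod(ProdGeneric(), NormalMeanVariance(-1.0, 1.0), NormalMeanVariance(1.0, 1.0))

# Assuming no analytical rule exists for this pair, no error is thrown; instead a
# lazy product tree is returned, which must be approximated by a functional form
# constraint before statistics such as `mean` can be computed:
p2 = prod(ProdGeneric(), Beta(2.0, 2.0), Gamma(2.0, 2.0))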

See also: prod, DistProduct, ProdAnalytical, ProdPreserveType, prod_analytical_rule, GenericLogPdfVectorisedProduct

source
ReactiveMP.GenericLogPdfVectorisedProductType
GenericLogPdfVectorisedProduct

An efficient linearized implementation of a product of multiple generic log-pdf objects. This structure prevents the DistProduct tree from growing too much in the case of identical log-pdf objects. This trick significantly reduces Julia compilation times when analytical product rules are not available but messages are of the same type. Essentially, this structure linearizes the leaves of the DistProduct tree when it sees objects of the same type (via dispatch).

See also: DistProduct

source

Message update rules · ReactiveMP.jl

Rules implementation

Message update rules

ReactiveMP.ruleFunction
rule(fform, on, vconstraint, mnames, messages, qnames, marginals, meta, addons, __node)

This function is used to compute an outbound message for a given node.

Arguments

  • fform: Functional form of the node in the form of the node type, e.g. ::Type{ <: NormalMeanVariance } or ::typeof(+)
  • on: Outbound interface's tag for which a message has to be computed, e.g. ::Val{:out} or ::Val{:μ}
  • vconstraint: Variable constraint for an outbound interface, e.g. Marginalisation or MomentMatching
  • mnames: Ordered message names in the form of the Val type, e.g. ::Val{ (:mean, :precision) }
  • messages: Tuple of messages of the same length as mnames used to compute an outbound message
  • qnames: Ordered marginal names in the form of the Val type, e.g. ::Val{ (:mean, :precision) }
  • marginals: Tuple of marginals of the same length as qnames used to compute an outbound message
  • meta: Extra meta information
  • addons: Extra addons information
  • __node: Node reference

See also: @rule, marginalrule, @marginalrule

source
ReactiveMP.@ruleMacro
@rule NodeType(:Edge, Constraint) (Arguments..., [ meta::MetaType ]) = begin
     # rule body
     return ...
 end

The @rule macro helps to define new methods for the rule function. It works particularly well in combination with the @node macro. It has a specific structure, which must specify:

  • NodeType: must be a valid Julia type. If one attempts to define a rule for a Julia function (for example +), use typeof(+)
  • Edge: edge label; usually edge labels are defined with the @node macro
  • Constraint: DEPRECATED, please just use the Marginalisation label
  • Arguments: defines a list of the input arguments for the rule
    • the m_* prefix indicates that the argument is of type Message from the edge *
    • the q_* prefix indicates that the argument is of type Marginal on the edge *
  • meta::MetaType: optionally, a user can specify a Meta object of type MetaType. This can be useful if one wants to try different rules with different approximation methods, or if the rule itself requires some temporary storage or cache. The default meta is nothing.

Here are various examples of the @rule macro usage:

  1. Belief-Propagation (or Sum-Product) message update rule for the NormalMeanVariance node towards the :μ edge with the Marginalisation constraint. Input arguments are m_out and m_v, which are the messages from the corresponding edges out and v and have the type PointMass.
@rule NormalMeanVariance(:μ, Marginalisation) (m_out::PointMass, m_v::PointMass) = NormalMeanVariance(mean(m_out), mean(m_v))
  2. Mean-field message update rule for the NormalMeanVariance node towards the :μ edge with the Marginalisation constraint. Input arguments are q_out and q_v, which are the marginals on the corresponding edges out and v of type Any.
@rule NormalMeanVariance(:μ, Marginalisation) (q_out::Any, q_v::Any) = NormalMeanVariance(mean(q_out), mean(q_v))
  3. Structured Variational message update rule for the NormalMeanVariance node towards the :out edge with the Marginalisation constraint. Input arguments are m_μ, which is a message from the μ edge of type UnivariateNormalDistributionsFamily, and q_v, which is a marginal on the v edge of type Any.
@rule NormalMeanVariance(:out, Marginalisation) (m_μ::UnivariateNormalDistributionsFamily, q_v::Any) = begin
     m_μ_mean, m_μ_cov = mean_cov(m_μ)
     return NormalMeanVariance(m_μ_mean, m_μ_cov + mean(q_v))
end

See also: rule, marginalrule, @marginalrule, @call_rule

source
ReactiveMP.@call_ruleMacro
@call_rule NodeType(:edge, Constraint) (argument1 = value1, argument2 = value2, ..., [ meta = ... ])

The @call_rule macro helps to call the rule method with an easier syntax. The structure of the macro is almost the same as in the @rule macro, but there is no begin ... end block; instead, each argument must have a value specified with the = operator.
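
For instance, the Belief-Propagation rule from the first @rule example above can be invoked as follows (a sketch; this particular rule ships with ReactiveMP):

using ReactiveMP

m = @call_rule NormalMeanVariance(:μ, Marginalisation) (m_out = PointMass(1.0), m_v = PointMass(2.0))
# `m` equals `NormalMeanVariance(1.0, 2.0)`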

See also: @rule, rule, @call_marginalrule

source

Marginal update rules

ReactiveMP.marginalruleFunction
marginalrule(fform, on, mnames, messages, qnames, marginals, meta, __node)

This function is used to compute a local joint marginal for a given node

Arguments

  • fform: Functional form of the node in the form of the node type, e.g. ::Type{ <: NormalMeanVariance } or ::typeof(+)
  • on: Local joint marginal tag, e.g. ::Val{ :mean_precision } or ::Val{ :out_mean_precision }
  • mnames: Ordered message names in the form of the Val type, e.g. ::Val{ (:mean, :precision) }
  • messages: Tuple of messages of the same length as mnames used to compute the local joint marginal
  • qnames: Ordered marginal names in the form of the Val type, e.g. ::Val{ (:mean, :precision) }
  • marginals: Tuple of marginals of the same length as qnames used to compute the local joint marginal
  • meta: Extra meta information
  • __node: Node reference

See also: rule, @rule, @marginalrule

source
ReactiveMP.@marginalruleMacro
@marginalrule NodeType(:Cluster) (Arguments..., [ meta::MetaType ]) = begin
     # rule body
     return ...
 end

The @marginalrule macro helps to define new methods for the marginalrule function. It works particularly well in combination with the @node macro. It has a specific structure, which must specify:

  • NodeType: must be a valid Julia type. If one attempts to define a rule for a Julia function (for example +), use typeof(+)
  • Cluster: edge cluster that contains the edge labels joined with the _ symbol. Usually edge labels are defined with the @node macro
  • Arguments: defines a list of the input arguments for the rule
    • the m_* prefix indicates that the argument is of type Message from the edge *
    • the q_* prefix indicates that the argument is of type Marginal on the edge *
  • meta::MetaType: optionally, a user can specify a Meta object of type MetaType. This can be useful if one wants to try different rules with different approximation methods, or if the rule itself requires some temporary storage or cache. The default meta is nothing.

The @marginalrule can return a NamedTuple in the return statement. This indicates that some variables in the joint marginal for the Cluster are independent and that the joint itself is factorised. For example, when computing a marginal for q(x, y), it is possible to return (x = ..., y = ...) as the result of the computation to indicate that q(x, y) = q(x)q(y).

Here are various examples of the @marginalrule macro usage:

  1. Marginal computation rule around the NormalMeanPrecision node for q(out, μ). The rule accepts the arguments m_out and m_μ, which are the messages from the out and μ edges respectively, and q_τ, which is the marginal on the edge τ.
@marginalrule NormalMeanPrecision(:out_μ) (m_out::UnivariateNormalDistributionsFamily, m_μ::UnivariateNormalDistributionsFamily, q_τ::Any) = begin
     # ... (the body, which computes the joint weighted mean `xi` and precision `W`
     #      from `m_out`, `m_μ` and `mean(q_τ)`, is not visible in the source) ...
     return MvNormalWeightedMeanPrecision(xi, W)
end
  2. Marginal computation rule around the NormalMeanPrecision node for q(out, μ). The rule accepts the arguments m_out and m_μ, which are the messages from the out and μ edges respectively, and q_τ, which is the marginal on the edge τ. In this example the result of the computation is a NamedTuple.
@marginalrule NormalMeanPrecision(:out_μ) (m_out::PointMass, m_μ::UnivariateNormalDistributionsFamily, q_τ::Any) = begin
     return (out = m_out, μ = prod(ProdAnalytical(), NormalMeanPrecision(mean(m_out), mean(q_τ)), m_μ))
end
source
ReactiveMP.@call_marginalruleMacro
@call_marginalrule NodeType(:edge) (argument1 = value1, argument2 = value2, ..., [ meta = ... ])

The @call_marginalrule macro helps to call the marginalrule method with an easier syntax. The structure of the macro is almost the same as in the @marginalrule macro, but there is no begin ... end block; instead, each argument must have a value specified with the = operator.
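
For instance, the NamedTuple-returning rule from the second @marginalrule example above can be invoked as follows (a sketch under the same argument types):

using ReactiveMP

q = @call_marginalrule NormalMeanPrecision(:out_μ) (m_out = PointMass(1.0), m_μ = NormalMeanPrecision(0.0, 1.0), q_τ = PointMass(1.0))
# `q` is the local joint marginal over `out` and `μ`, here factorised as a `NamedTuple`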

See also: @marginalrule, marginalrule, @call_rule

source

Testing utilities for the update rules

ReactiveMP.@test_rulesMacro
@test_rules [options] rule [ test_entries... ]

The @test_rules macro generates test cases for message update rules for probabilistic programming models that follow the "message passing" paradigm. It takes a rule specification as input and generates a set of tests based on that specification. This macro is provided by ReactiveMP.

Note: The Test module must be imported explicitly. The @test_rules macro tries to use the @test macro, which must be defined globally.

Arguments

The macro takes three arguments:

  • options: An optional argument that specifies the options for the test generation process. See below for details.
  • rule: A rule specification in the same format as the @rule macro, e.g. Beta(:out, Marginalisation) or NormalMeanVariance(:μ, Marginalisation).
  • test_entries: An array of named tuples (input = ..., output = ...). The input entry has the same format as the input for the @rule macro. The output entry specifies the expected output.

Options

The following options are available:

  • check_type_promotion: By default, this option is set to false. If set to true, the macro generates an extensive list of extra tests that aim to check the correct type promotion within the tests. For example, if all inputs are of type Float32, then the expected output should also be of type Float32. See the paramfloattype and convert_paramfloattype functions for details.
  • atol: Sets the desired accuracy for the tests. The tests use the custom_isapprox function from ReactiveMP to check if outputs are approximately the same. This argument can be either a single number or an array of key => value pairs.
  • extra_float_types: A set of extra float types to be used in the check_type_promotion tests. This argument has no effect if check_type_promotion is set to false.

The default values for the atol option are:

  • Float32: 1e-4
  • Float64: 1e-6
  • BigFloat: 1e-8

Examples
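
A minimal setup sketch first; the Test import is required, as noted above, and the rule and entry mirror the larger example that follows:

using Test, ReactiveMP, Distributions

@test_rules Beta(:out, Marginalisation) [
    (input = (m_a = PointMass(1.0), m_b = PointMass(2.0)), output = Beta(1.0, 2.0))
]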


 @test_rules [check_type_promotion = true] Beta(:out, Marginalisation) [
     (input = (m_a = PointMass(1.0), m_b = PointMass(2.0)), output = Beta(1.0, 2.0)),
     (input = (m_a = PointMass(2.0), m_b = PointMass(2.0)), output = Beta(2.0, 2.0)),
     # ... (the remaining entries of this list and the header of a second list
     #      for the `q_*` inputs are not visible in the source) ...
     (input = (q_a = PointMass(1.0), q_b = PointMass(2.0)), output = Beta(1.0, 2.0)),
     (input = (q_a = PointMass(2.0), q_b = PointMass(2.0)), output = Beta(2.0, 2.0)),
     (input = (q_a = PointMass(3.0), q_b = PointMass(3.0)), output = Beta(3.0, 3.0))
 ]

See also: ReactiveMP.@test_marginalrules

source