docs: cleaning up documentation and dead links.
DoktorMike committed Jun 17, 2024
1 parent 190dbed commit c33b8f8
Showing 3 changed files with 8 additions and 7 deletions.
10 changes: 5 additions & 5 deletions src/dense.jl
@@ -14,7 +14,7 @@ The out `y` will be a vector of length `out*4`, or a batch with
 The output will have applied the function `σ(y)` to each row/element of `y` except the first `out` ones.
 Keyword `bias=false` will switch off trainable bias for the layer.
 The initialisation of the weight matrix is `W = init(out*4, in)`, calling the function
-given to keyword `init`, with default [`glorot_uniform`](@doc Flux.glorot_uniform).
+given to keyword `init`, with default [`glorot_uniform`].
 The weight matrix and/or the bias vector (of length `out`) may also be provided explicitly.
 Remember that in this case the number of rows in the weight matrix `W` MUST be a multiple of 4.
 The same holds true for the `bias` vector.
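The "multiple of 4" constraint follows from how the layer's output rows are consumed downstream: `predict` in `src/utils.jl` unpacks four NIG parameters (γ, ν, α, β). A minimal sketch of such a forward pass, using hypothetical names and plain arrays rather than this package's actual layer type:

```julia
# Sketch (assumed interpretation): the out*4 output rows split into the four
# NIG parameters γ, ν, α, β, with softplus applied to all rows but the first
# `out` ones, as the docstring above describes.
softplus(x) = log1p(exp(x))

function nig_forward(W, b, x, out)
    y = W * x .+ b                       # size (out*4, batch)
    γ = y[1:out, :]                      # first `out` rows left untransformed
    rest = softplus.(y[out+1:end, :])    # positivity-constrained rows
    ν, α, β = rest[1:out, :], rest[out+1:2out, :], rest[2out+1:3out, :]
    return γ, ν, α, β
end

W = randn(8, 3); b = zeros(8); x = randn(3, 5)   # out = 2, in = 3, batch of 5
γ, ν, α, β = nig_forward(W, b, x, 2)
size(γ)   # (2, 5)
```

This is only an illustration of the shape bookkeeping; the package's real layer may constrain the parameters differently (e.g. shifting α).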
@@ -79,7 +79,7 @@ The out `y` will be a vector of length `out`, or a batch with
 The output will have applied the function `softplus(y)` to each row/element of `y`.
 Keyword `bias=false` will switch off trainable bias for the layer.
 The initialisation of the weight matrix is `W = init(out, in)`, calling the function
-given to keyword `init`, with default [`glorot_uniform`](@doc Flux.glorot_uniform).
+given to keyword `init`, with default [`glorot_uniform`].
 The weight matrix and/or the bias vector (of length `out`) may also be provided explicitly.

 # Arguments:
@@ -119,12 +119,12 @@ distribution whose forward pass is simply given by:
 The input `x` should be a vector of length `in`, or batch of vectors represented
 as an `in × N` matrix, or any array with `size(x,1) == in`.
-The out `y` will be a vector of length `out*4`, or a batch with
-`size(y) == (out*4, size(x)[2:end]...)`
+The out `y` will be a vector of length `out*2`, or a batch with
+`size(y) == (out*2, size(x)[2:end]...)`
 The output will have applied the function `σ(y)` to each row/element of `y` except the first `out` ones.
 Keyword `bias=false` will switch off trainable bias for the layer.
 The initialisation of the weight matrix is `W = init(out*4, in)`, calling the function
-given to keyword `init`, with default [`glorot_uniform`](@doc Flux.glorot_uniform).
+given to keyword `init`, with default `glorot_uniform`.
 The weight matrix and/or the bias vector (of length `out`) may also be provided explicitly.
 Remember that in this case the number of rows in the weight matrix `W` MUST be a multiple of 2.
 The same holds true for the `bias` vector.
3 changes: 2 additions & 1 deletion src/losses.jl
@@ -140,7 +140,8 @@ mveloss(y, μ, σ) = (1 / 2) * (((y - μ) .^ 2) ./ σ + log.(σ))
 """
     mveloss(y, μ, σ, β)

-DOCSTRING
+Calculates the Mean-Variance loss for a Normal distribution. This is merely the negative log likelihood.
+This loss should be used with the MVE network type.

 # Arguments:
 - `y`: targets
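The three-argument form shown in the hunk header can be exercised directly (a sketch; here `σ` plays the role of the predicted variance, so the expression is the Normal negative log likelihood up to an additive constant):

```julia
# Definition copied from the hunk header above:
# mveloss(y, μ, σ) = ½ ((y − μ)² / σ + log σ), element-wise.
mveloss(y, μ, σ) = (1 / 2) * (((y - μ) .^ 2) ./ σ + log.(σ))

y = [1.0, 2.0]; μ = [1.0, 1.0]; σ = [1.0, 2.0]
mveloss(y, μ, σ)   # first entry is 0.0, since y₁ == μ₁ and log(1) == 0
```

A perfectly calibrated prediction (matching mean, unit variance) therefore contributes zero loss; over- or under-estimating the variance is penalised through the `log.(σ)` and `1/σ` terms respectively.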
2 changes: 1 addition & 1 deletion src/utils.jl
@@ -93,9 +93,9 @@ Returns the predictions along with the epistemic and aleatoric uncertainty.
 - `m`: the model which has to have the last layer be Normal Inverse Gamma(NIG) layer
 - `x`: the input data which has to be given as an array or vector
 """
-predict(m, x) = predict(last_type(m), m, x)
 last_type(m::Chain) = last_type(m[end])
 last_type(m) = typeof(m)
+predict(m, x) = predict(last_type(m), m, x)

 function predict(::Type{<:NIG}, m, x)
     #(pred = γ, eu = uncertainty(ν, α, β), au = uncertainty(α, β))
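This hunk only moves the `predict` entry point below its helpers; the underlying trick is dispatch on the type of the model's last layer. A self-contained sketch with hypothetical stand-in types (a tuple substitutes for `Flux.Chain` to avoid the dependency):

```julia
# Sketch of dispatch-on-last-layer: `predict` inspects the type of the final
# layer and forwards to the method specialised for that layer family.
struct DummyNIG end   # hypothetical stand-ins for the package's layer types
struct DummyMVE end

last_type(layers::Tuple) = typeof(last(layers))
predict(layers::Tuple, x) = predict(last_type(layers), layers, x)

predict(::Type{DummyNIG}, layers, x) = "NIG prediction path"
predict(::Type{DummyMVE}, layers, x) = "MVE prediction path"

predict((DummyNIG(),), [1.0])   # → "NIG prediction path"
```

Adding support for a new head then only requires defining one more `predict(::Type{...}, m, x)` method, without touching the generic entry point.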
