
simple prior use test #461

Merged — 31 commits merged into master on Dec 25, 2019
Conversation

@Affie (Member) commented Dec 3, 2019

I feel this test should pass

@dehann (Member) commented Dec 3, 2019

Hi, oh great, thanks! Will resolve today.

@dehann dehann added this to the v0.8.3 milestone Dec 3, 2019
@dehann (Member) commented Dec 4, 2019

Hi, didn't get through this today, but will look first thing in the morning.


@test isapprox(x0_m, 0.0, atol = 0.1)
@test isapprox(x1_m, 0.0, atol = 0.1)
@test isapprox(x2_m, 0.0, atol = 0.1)
@dehann (Member) commented Dec 4, 2019

@Affie, first question: if you are putting two priors at the ends of a three-variable chain (x0, x1, x2), at -1 and +1 respectively, why would you expect them all to average 0? I would expect x0 and x2 to sit at -1 and +1 respectively, with x1 floating somewhere near 0 in the middle.

This was based on me reading the code.

Member

x0 and x2 should be near 0, and only x1 should be "on" 0. Testing with a loose atol=0.1 is a bit hacky, but okay :-) x0 and x2 should be around 0.1 or something like that.

Member Author

Please note, I changed the standard deviation to 0.01.
So it's more like -0.01, 0, 0.01 for x0, x1, x2.

-1/1 <-x0-> 1/0.01 <-x1-> 1/0.01 <-x2-> 1/1

Member

I'm assuming you did the calculation for multiplying two Gaussians, and found that the new mean should be at ±0.01? What I mean is, if you were to multiply N(-1, 0.1) * N(+1, 0.1), the answer will be at zero with a new standard deviation slightly smaller than 0.1.
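For reference, the product of two 1-D Gaussians stays Gaussian, with a precision-weighted mean and a shrunk variance. A quick sketch of the calculation referenced above (the helper function is illustrative, not from the package):

```python
import numpy as np

def gaussian_product(mu1, s1, mu2, s2):
    # Renormalized product N(mu1, s1^2) * N(mu2, s2^2):
    # precisions add, and the mean is precision-weighted.
    var = 1.0 / (1.0 / s1**2 + 1.0 / s2**2)
    mu = var * (mu1 / s1**2 + mu2 / s2**2)
    return mu, np.sqrt(var)

mu, s = gaussian_product(-1.0, 0.1, +1.0, 0.1)
print(mu, s)  # mean 0.0, std 0.1/sqrt(2) ~ 0.0707, smaller than 0.1
```

With equal standard deviations the means average exactly, which is why the product of N(-1, 0.1) and N(+1, 0.1) lands at zero rather than at ±0.01.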

@dehann (Member) commented Dec 5, 2019

First test graph:

[Screenshot from 2019-12-04 20-07-58]

@Affie (Member Author) commented Dec 5, 2019

I don't agree on the changes.
What about this: changing the variable ordering changes the answers.

[image]

GraphInit = true

  • Left: Means: x0: -0.41846814644201763, x1: -0.3480670649963539, x2: -0.1642135339004435
  • Right: Means: x0: 0.031117603087722025, x1: 0.1416983152637742, x2: 0.23945144191614984

GraphInit = false

  • Left: Means: x0: -0.33884567700033713, x1: -0.281886695060186, x2: -0.19827585825173885
  • Right: Means: x0: 0.07968446871195635, x1: 0.14465511208606313, x2: 0.22910276233886342

Also, forcing it into one clique gives this answer:
Means: x0: -0.051543580782896746, x1: -0.04257426664936715, x2: -0.037174528557433466
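For comparison, the exact linear-Gaussian MAP solution of this chain is independent of the variable ordering. A minimal sketch, assuming priors N(-1, 1) on x0 and N(+1, 1) on x2 and zero-mean between factors with variance 0.01 (parameters inferred from the test setup, so treat them as assumptions):

```python
import numpy as np

def solve(order):
    # Build the whitened least-squares system for the assumed chain
    # and solve it with the variables in the given order.
    idx = {v: i for i, v in enumerate(order)}
    rows, rhs, sig = [], [], []
    def factor(coeffs, z, var):
        r = np.zeros(3)
        for v, c in coeffs.items():
            r[idx[v]] = c
        rows.append(r); rhs.append(z); sig.append(np.sqrt(var))
    factor({"x0": 1.0}, -1.0, 1.0)               # prior on x0
    factor({"x2": 1.0}, +1.0, 1.0)               # prior on x2
    factor({"x1": 1.0, "x0": -1.0}, 0.0, 0.01)   # between x0-x1
    factor({"x2": 1.0, "x1": -1.0}, 0.0, 0.01)   # between x1-x2
    A = np.array(rows) / np.array(sig)[:, None]  # whiten rows by sigma
    b = np.array(rhs) / np.array(sig)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return {v: x[idx[v]] for v in order}

print(solve(["x0", "x1", "x2"]))
print(solve(["x2", "x0", "x1"]))  # identical means regardless of ordering
```

Under these assumptions the exact means are symmetric about zero (roughly -0.0099, 0, +0.0099) for every ordering, so the ordering-dependent numbers above point at the solver rather than the problem.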

@Affie (Member Author) commented Dec 5, 2019

If #221 is the issue I brought up, it is related.
And the second graph tests for #458.

@dehann (Member) commented Dec 6, 2019

Adding @tonioteran here for interest's sake.

I strongly agree that changing the variable ordering should not change the numerical result. Thanks for describing, I definitely need to spend time on this and make sure we understand what is happening here.

@Affie (Member Author) commented Dec 6, 2019

It looks like it either has to do with the structure or the priors on the down solve.
Hand calculations show the tree structure is correct in the first graph. I believe the message priors are used incorrectly on the down solve, causing the small errors we see.
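For intuition, the up/down message passes on this chain can be written out in closed form for the linear-Gaussian case. A hypothetical sketch (same assumed parameters as before: priors N(-1, 1) on x0 and N(+1, 1) on x2, zero-mean between factors with variance Q = 0.01) of what a correct down solve should return:

```python
import numpy as np

Q = 0.01  # assumed between-factor variance

def through_between(mu, var):
    # Marginalizing a Gaussian belief through a zero-mean between
    # factor keeps the mean and inflates the variance by Q.
    return mu, var + Q

def product(m1, v1, m2, v2):
    # Renormalized product of two 1-D Gaussians.
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    return v * (m1 / v1 + m2 / v2), v

# Up pass: push the end priors toward the middle variable.
m01 = through_between(-1.0, 1.0)   # message x0 -> x1
m21 = through_between(+1.0, 1.0)   # message x2 -> x1
x1 = product(*m01, *m21)           # marginal of x1

# Down pass: combine each local prior with the message from the far end.
x0 = product(-1.0, 1.0, *through_between(*m21))
x2 = product(+1.0, 1.0, *through_between(*m01))

print(x0[0], x1[0], x2[0])  # ~ -0.0099, 0.0, +0.0099, symmetric about zero
```

If the down solve mixes the message priors incorrectly, the symmetry between x0 and x2 (and the ordering invariance) is exactly what breaks first, which matches the asymmetric means reported above.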

@dehann (Member) commented Dec 10, 2019

Acknowledged, thanks; definitely will do a deep dive on this soon.

@dehann (Member) commented Dec 15, 2019

xref #462

@dehann (Member) commented Dec 18, 2019

xref #485

@dehann (Member) commented Dec 18, 2019

xref #487

@codecov-io commented Dec 19, 2019

Codecov Report

Merging #461 into master will increase coverage by 1.65%.
The diff coverage is 21.62%.


@@            Coverage Diff             @@
##           master     #461      +/-   ##
==========================================
+ Coverage   45.04%   46.69%   +1.65%     
==========================================
  Files          29       29              
  Lines        4234     3735     -499     
==========================================
- Hits         1907     1744     -163     
+ Misses       2327     1991     -336
Impacted Files Coverage Δ
src/IncrementalInference.jl 66.66% <ø> (ø) ⬆️
src/AdditionalUtils.jl 0% <ø> (ø) ⬆️
src/FGOSUtils.jl 25.92% <ø> (+2.84%) ⬆️
src/SubGraphFunctions.jl 56.25% <0%> (+15.4%) ⬆️
src/Deprecated.jl 0% <0%> (ø) ⬆️
src/JunctionTree.jl 72.86% <100%> (+2.36%) ⬆️
src/CliqStateMachine.jl 49.67% <100%> (+0.1%) ⬆️
src/SolveTree01.jl 59.76% <100%> (-0.19%) ⬇️
src/CliqStateMachineUtils.jl 29.42% <61.53%> (+1.47%) ⬆️
... and 3 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update b339a5e...4594d34.

@dehann dehann merged commit d0ede42 into master Dec 25, 2019
@dehann dehann mentioned this pull request Dec 25, 2019
@dehann dehann mentioned this pull request Dec 27, 2019
@dehann dehann deleted the test/priorusetest branch June 15, 2020 03:02
4 participants