Add methodology documentation #32
Conversation
reformat for github
Codecov Report
```diff
@@            Coverage Diff             @@
##             main      #32      +/-   ##
==========================================
+ Coverage   82.56%   85.96%   +3.40%
==========================================
  Files           5        6       +1
  Lines         453      520      +67
==========================================
+ Hits          374      447      +73
+ Misses         79       73       -6
```
docs/discussion.md
Outdated
## 1-D Linear
We do not know the true function $f$. We only know values of $f$ at a set of grid points $x_i$.

To find $f(x)$ for $x$ in $(x_0,x_0)$,
Did you mean (x_0, x_1)?
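For reference, here is a minimal sketch of the 1-D linear interpolation this excerpt describes, for a query point between two neighboring grid points; all names are illustrative and are not taken from the btwxt codebase.

```c++
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch: linearly interpolate tabulated values f[i] given at
// grid points x[i], for a query point lying between x[i0] and x[i0 + 1].
double interpolate_linear_1d(const std::vector<double>& x,
                             const std::vector<double>& f,
                             std::size_t i0, double x_query)
{
    assert(i0 + 1 < x.size() && x.size() == f.size());
    // Fractional position of the query point within [x_{i0}, x_{i0+1}].
    const double mu = (x_query - x[i0]) / (x[i0 + 1] - x[i0]);
    // Weighted average of the two bounding function values.
    return (1.0 - mu) * f[i0] + mu * f[i0 + 1];
}
```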
docs/discussion.md
Outdated
We need additional information, so let's use the slopes at $f(x_0)$ and $f(x_1)$: $f'(x_0)$ and $f'(x_1)$.

Warning of notation change. I am shortening $f(x_i)$ to $f(i)$ at this point.
"We are" shortening? (Or some phrase not in the first person.)
docs/figs/fig_1Dcubic.png
Outdated
These figures are super. We also need to check in the source code used to generate them.
docs/discussion.md
Outdated
return f[x];

## Hypercube
I would find it useful to have a reminder of what the hypercube is actually storing, especially since it's cached for efficiency later. Also, maybe this could go in a separate "definitions" section.
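To make this request concrete, here is a purely hypothetical sketch of what a hypercube could store; none of the type or member names come from btwxt.

```c++
#include <cstddef>
#include <vector>

// Hypothetical illustration: the hypercube is the set of grid vertices that
// surround the query point (2^N for linear, 4^N for cubic in N dimensions),
// with the function value at each vertex cached so it can be reused when
// only the query point's fractional position changes.
struct HypercubeVertex {
    std::vector<std::size_t> grid_index; // one grid index per axis
    double value;                        // cached function value at this vertex
};

struct Hypercube {
    std::vector<HypercubeVertex> vertices; // 2^N or 4^N entries
};
```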
docs/discussion.md
Outdated
Rewrite the polynomial:
$f(\mu)=c_{0}\cdot f(0)+c_{1}\cdot f(1)+d_{0}\cdot s_0\cdot (f(1)-f(-1))+d_{1}\cdot s_1\cdot (f(2)-f(0))$

Group the coefficients:
:)
docs/discussion.md
Outdated
The terms $d_{0}\cdot s_0$ and $d_{1}\cdot s_1$ are referred to as *cubic_slope_coefficients*.

## 2-D Cubic, with pictures
There are 4<sup>N</sup> points to weigh. Number of calculations grows faster than that.
A few sentences seem to have gotten fragmented; these just need to read more fluidly.
docs/figs/btwxt presentation.pptx
Outdated
I think I made a mistake in viewing the figures - I thought the png files were from your own code that had generated plots of the different cubic splines. If so, then you'd check in that code (Python or whatever). The pptx file is binary and I don't think it makes sense to commit it to the repository. @nealkruis what's your opinion?
docs/discussion.md
Outdated
<img width="70%" src="./figs/fig_2Dlinear.png">
</p>

Four independent terms, one for each vertex.
Perhaps something like "Four independent terms, one for each vertex, describe a weight that can be assigned to each one. We will extend this logic to an N-dimensional system."
docs/discussion.md
Outdated
+&\mu_1\cdot [(1-\mu_{0})\cdot f(x_{01}) + \mu_0 \cdot f(x_{11})]
\end{align*}$$

Or, to do the calculation in one pass:
Aligning the equation along the vertex descriptors (caveat: I only tested this rendering in VSCode) drives the point home better:
Or, expanding the equation to do the calculation in one pass:
$$\begin{align*}
f(\mu)=(1-\mu_0)\cdot (1-\mu_1)\cdot \mathbf{f(x_{00})}& \ + \\
\mu_0\cdot (1-\mu_1)\cdot \mathbf{f(x_{10})}& \ + \\
(1-\mu_0)\cdot \mu_1\cdot \mathbf{f(x_{01})}& \ + \\
\mu_0 \cdot \mu_1\cdot \mathbf{f(x_{11})}
\end{align*}$$
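A sketch of that one-pass evaluation in code, with illustrative names (not taken from the library); each vertex weight is the product of one weighting factor per axis.

```c++
// Illustrative one-pass bilinear evaluation. f00 = f(x_00), f10 = f(x_10),
// f01 = f(x_01), f11 = f(x_11); mu0 and mu1 are the fractional positions
// of the query point along the two axes.
double bilinear(double mu0, double mu1,
                double f00, double f10, double f01, double f11)
{
    return (1.0 - mu0) * (1.0 - mu1) * f00
         +        mu0  * (1.0 - mu1) * f10
         + (1.0 - mu0) *        mu1  * f01
         +        mu0  *        mu1  * f11;
}
```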
docs/discussion.md
Outdated
$f(\mu)=a\cdot \mu ^3+b\cdot \mu ^2+c\cdot \mu+d$

Note that
$f'=\dfrac{df}{dx}=\dfrac{d\mu}{dx} \cdot \dfrac{df}{d\mu}=\dfrac{1}{\delta_x}\cdot\dfrac{df}{d\mu}$:
Did you mean \Delta x instead of \delta_x?
Either way, I'm not convinced this shortcut adds much - later on we need to use denominators that also subtract one x from another, and the delta_x is less informative than the actual subtraction (I think that
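For what it's worth, writing the chain rule out with the usual definition $\mu = (x - x_0)/(x_1 - x_0)$ (an assumption here, but consistent with the rest of the document) keeps the subtraction explicit:

$$f' = \dfrac{df}{dx} = \dfrac{d\mu}{dx}\cdot\dfrac{df}{d\mu} = \dfrac{1}{x_1 - x_0}\cdot\dfrac{df}{d\mu}$$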
docs/discussion.md
Outdated
## N-D Cubic/Linear Pseudocode
The following pseudocode is much better than what's in the slides! The un-introduced function names were no help.
docs/discussion.md
Outdated
## 2-D Linear pseudocode
We left off with:
We didn't leave off with anything, since this isn't a talk. :)
docs/discussion.md
Outdated
## N-D Linear pseudocode
```c++
f(x) = 0;
for each vertex x[i][j] in the hypercube:
```
This pseudocode is maybe useful, but it's not descriptive enough. What does `dim` range over? `vertex_weight` could use an earlier mention. `i` and `j` don't imply N dimensions to me, which is I think what we're after here (the slides used `...k` as well, but then didn't carry it through the loop).
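To illustrate the kind of detail being asked for, here is a hedged sketch of N-D linear interpolation in which `dim` explicitly ranges over the grid axes and `vertex_weight` is built up before it is used; every name here is illustrative rather than btwxt's.

```c++
#include <cstddef>
#include <vector>

// Illustrative N-D linear interpolation over the 2^N hypercube vertices.
// mu[dim] is the query point's fractional position along axis `dim`;
// vertex.corner[dim] is 0 or 1, selecting the lower or upper grid point on
// that axis; vertex.value is the tabulated function value at that corner.
struct Vertex {
    std::vector<int> corner; // one 0/1 entry per dimension
    double value;
};

double interpolate_nd_linear(const std::vector<Vertex>& hypercube,
                             const std::vector<double>& mu)
{
    double result = 0.0;
    for (const Vertex& vertex : hypercube) {
        double vertex_weight = 1.0;
        for (std::size_t dim = 0; dim < mu.size(); ++dim) { // dim ranges over the N axes
            vertex_weight *= (vertex.corner[dim] == 0) ? (1.0 - mu[dim]) : mu[dim];
        }
        result += vertex_weight * vertex.value;
    }
    return result;
}
```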
docs/discussion.md
Outdated
Four independent terms, one for each vertex.

## 2-D Linear pseudocode
Actually, I think the 2-D linear pseudocode is distracting; in real code you'd unroll the loop rather than think of it in this conditional format. Maybe we remove this?
docs/discussion.md
Outdated
## **How the btwxt library works**
## Big Ladder Software

## 1-D Linear
Should there be a hyphen between the "1" and "D"? It looks funny but I guess we'd want it for "N-D"...
docs/discussion.md
Outdated
## Linear Interpolation
Linear interpolation along an axis is often sufficient if either of the following conditions apply:
1) the known funciton values are closely spaced, such that variations between neighboring points are within acceptable tolerances, or
typo: funciton
docs/discussion.md
Outdated
\end{align*}

Terms $c_{0}$ and $c_{1}$ are referred to as *interpolation coefficients*.
interpolation_coefficients (with the underscore, as below)
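For the 1-D linear case this excerpt closes out, the two interpolation_coefficients are presumably just

$$c_0 = 1 - \mu, \qquad c_1 = \mu, \qquad f(\mu) = c_0\cdot f(0) + c_1\cdot f(1),$$

which is consistent with the $(1-\mu_i)$ and $\mu_i$ factors in the 2-D formula above and makes the unit sum noted in the next excerpt immediate.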
docs/discussion.md
Outdated
\end{align*}

Notice that the sum of these coefficients is unity, ensuring that the interpolation returns a weighted average of the neighboring function values. We therefore refer to these coefficients as *weights*, or *weight factors*. The linearly interpolated function $f(\mu)$ consists of a train of straight-line segments connecting each pair of neighboring points.
I don't see "weight factors" used in the code, but "weight coefficients" is used further in the document.
Really minor change requests this time. I really like how this flows; I can understand our assumptions and the algebra has just the right amount of explicitness for my personal taste!
done
This is very close! Thanks, @spahrenk!
Mostly minor details here.
Do we want to include source code changes here too?
docs/discussion.md
Outdated
<p align="center">
<img width="70%" src="./figs/fig_1D_cubic.png">
</p>
This is really helpful! Shouldn't the slope be the approximate slope, though?
Not sure if this was changed or not. I don't see the approximate slopes here.
docs/discussion.md
Outdated
<img width="50%" src="./figs/fig_2D_cubic_pic1.png">
</p>

As for the 2-D linear case, the coefficient for each vertex is the product of the coefficients computed for each axis, separately. Therefore, we first find the $2\cdot4=8$ single-axis coefficients, then we evaluate each of the $4^2=16$ pair products, which are the combined weighting factors for the 2-D vertices.
Should this say "As for the 2-D cubic case..."? What coefficients are we referring to here? I think we're talking about weighting factors (per the following paragraph)?
This doesn't seem to be clarified in the changes. I suspect "coefficient" is really talking about the "weighting factor".
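As an illustration of the counting in this excerpt (all names are mine, not the library's): each of the two axes contributes four single-axis weighting factors, and the $4^2=16$ vertex weights are all pairwise products.

```c++
#include <array>

// Illustrative 2-D cubic weighting: axis0 and axis1 each hold the four
// single-axis weighting factors (2 axes * 4 factors = 8 values in total).
// The 4^2 = 16 vertex weights are their pairwise products.
std::array<std::array<double, 4>, 4>
vertex_weights_2d(const std::array<double, 4>& axis0,
                  const std::array<double, 4>& axis1)
{
    std::array<std::array<double, 4>, 4> weights{};
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            weights[i][j] = axis0[i] * axis1[j];
        }
    }
    return weights;
}
```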
Several changes that you may or may not favor.
<p align="center">
<img width="60%" src="./figs/fig_cubic.png">
</p>
This still doesn't look like it's matching the approximated slopes from the previous figure.
\end{align*}
$$

The coefficient multiplying each function value comprises the *weight* of the *vertex* located at the corresponding 2-D grid-point coordinates. The weight for that vertex is simply the product of the `weighting factors` associated with each of the two axes. Notice that there are two unique `weighting factors` (i.e., $\mu_i$ and $1-\mu_i$) associated with each axis, so we must evaluate $2+2=4$ single-axis weights, in total. These are then combined to form $2\cdot 2=4$ weights for the full rectangle.
👍
docs/figs/fig_2D_linear.png
Outdated
I kinda liked the first image, but probably the new one is more descriptive. :)
curve colors had been reversed
I don't have any context for what this commit was intended to accomplish, so I can't give a relevant review. In the future, you could add an explanatory comment of your own to the "conversation" section of the PR, and/or be a little more descriptive in the commit message. (Also, please use our internal message style, starting with a capital letter: “{Present-tense action} {on the thing this commit fixes}.”)
Curve colors had been reversed in the most recent commit. An attempt was made to correct this.
Replaced actual slopes with estimated slopes on cubic figure.
Description
Author Progress Checklist:
Reviewer Checklist: