
pyinform.error.InformError: an inform error occurred - "negative state in timeseries" #34

Open
fishbacp opened this issue Dec 18, 2021 · 2 comments

@fishbacp

I have two simple time series, xs and ys, having 5000 samples each. I attempted to compute the transfer entropy via

T = transfer_entropy(xs, ys, k)

using various values of the history length k. Each attempt yielded the following error message:

Traceback (most recent call last):
  File "/Users/fishbacp/Desktop/Python2022/transfer_entropy.py", line 16, in <module>
    T=transfer_entropy(x_source,x_target,1)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pyinform/transferentropy.py", line 222, in transfer_entropy
    error_guard(e)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pyinform/error.py", line 63, in error_guard
    raise InformError(e, func)
pyinform.error.InformError: an inform error occurred - "negative state in timeseries"

Any insights as to the error source? Should I be adjusting other keyword arguments?

@Lszzz

Lszzz commented Jan 25, 2023

Have you solved the problem?

@jakehanson
Contributor

Hi there,

The error means that your observations contain negative values, which are not allowed. I believe the reason is that we are enumerating the states of the system as non-negative integers, so a negative value is unexpected; more specifically, a negative value throws off the attempt to calculate the base of the logarithm used in the calculation.
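You can reproduce it with a toy series containing a negative state (a minimal sketch; the call mirrors the one in your traceback):

from pyinform.transferentropy import transfer_entropy

xs = [0, -1, -1, -1, -1, 0, 0, 0, 0]  # contains negative states
ys = [0, 0, 1, 1, 1, 1, 0, 0, 0]

try:
    transfer_entropy(xs, ys, k=2)
except Exception as err:
    print(err)  # InformError: an inform error occurred - "negative state in timeseries"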

If your observations contain negative values, the solution is simply to remap the observed values to non-negative integers using pyinform.utils.coalesce_series.

For example:

from pyinform.transferentropy import transfer_entropy
from pyinform import utils

xs = [0, -1, -1, -1, -1, 0, 0, 0, 0]
ys = [0, 0, 1, 1, 1, 1, 0, 0, 0]

# Remap xs onto {0, ..., b-1}; b is the base of the coalesced series
coal_xs, b = utils.coalesce_series(xs)
transfer_entropy(coal_xs, ys, k=2)

returns the correct answer:

0.6792696431662097

In general, all that matters for information-theoretic calculations is the distribution of states and not the actual value of the states. So the entropy of xs = [-1, -1, 0] is the same as [+1, +1, 0] since they both result in the probability distribution [2/3, 1/3].
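You can check this directly with block_entropy (with k=1 it reduces to the Shannon entropy of the state distribution); a quick sketch:

from pyinform.blockentropy import block_entropy
from pyinform import utils

a, _ = utils.coalesce_series([-1, -1, 0])  # remaps to [0, 0, 1]
b, _ = utils.coalesce_series([+1, +1, 0])  # already non-negative: [1, 1, 0]

print(block_entropy(a, k=1))  # ~0.918 bits
print(block_entropy(b, k=1))  # identical, since the distributions match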

One last thing to note is that your observations should consist of discrete states. If you are working with continuous-valued observations, you will want to bin them first using pyinform.utils.binning.
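For example (a sketch using the bin_series helper from pyinform.utils; here b=2 asks for two uniformly sized bins over the range of the data):

from pyinform import utils

data = [0.13, -0.57, 0.21, 1.42, -0.30, 0.88]  # continuous-valued observations
binned, step = utils.bin_series(data, b=2)     # discretize into 2 uniform bins
# binned is now a series over {0, 1} and can be passed to transfer_entropy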
