
Configuration for 'eval' and 'gen', Sampling in Intensity-Free Models, and compacted event times in Multi-Step Inference #13

Open
junwoopark92 opened this issue Nov 25, 2023 · 7 comments
Labels: bug (Something isn't working)

@junwoopark92

junwoopark92 commented Nov 25, 2023

Hello. Thank you for your efforts in TPP benchmarking.

I have a few questions.

Some models have all of train, eval, and gen in examples/configs/experiment_config.yaml, but for models that are missing one of them (eval or gen), how should this be handled?

For Intensity-Free (IFTPP), it seems that thinning is not used because the model does not define an intensity. In that case, how should sampling be done when only the density is known? Looking at the EasyTPP paper, it seems you have addressed this somehow, given that RMSE and ACC are reported.

In the case of multi-step inference, most of the sampled events seem to be clustered around the initial event. Is this a natural phenomenon? I observed the same behavior in both ODETPP and NHP.

I would really appreciate it if you could provide answers.

@iLampard
Collaborator

Hi,

For IFTPP, we have to follow the original author's approach to sampling, which is in fact not compatible with our current framework. That's why the master branch has no such code yet. We are considering pushing it to a new branch in the future. For the moment, you can use the author's code directly.
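For context on why no thinning is needed here: IFTPP (Shchur et al.) parameterizes the inter-event time density directly as a log-normal mixture, so one can sample from it in closed form. Below is a minimal sketch, assuming hypothetical decoder outputs `log_weights`, `means`, `log_scales` of shape (..., K); the names are illustrative and not EasyTPP's or the original repository's API:

```python
import torch

def sample_lognormal_mixture(log_weights, means, log_scales, num_samples=1):
    """Draw inter-event times from a log-normal mixture density.

    log_weights, means, log_scales: tensors of shape (..., K), one entry
    per mixture component, as produced by a hypothetical decoder head.
    Returns positive samples of shape (..., num_samples).
    """
    # Choose a mixture component per draw from the categorical weights.
    comp = torch.distributions.Categorical(logits=log_weights).sample((num_samples,))
    comp = comp.movedim(0, -1)                        # (..., num_samples)
    mu = torch.gather(means, -1, comp)                # selected component means
    sigma = torch.gather(log_scales, -1, comp).exp()  # selected component scales
    # Reparameterized log-normal draw: exp(mu + sigma * eps), eps ~ N(0, 1).
    eps = torch.randn_like(mu)
    return torch.exp(mu + sigma * eps)
```

Because the density is sampled directly, no intensity upper bound or rejection step is involved, which is why thinning never appears in the intensity-free setting.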

For multi-step sampling, we have indeed noticed similar behavior, but we have not found any bug yet (if you find one, please let us know). The only difference between our version and the original versions (e.g., https://github.com/ant-research/hypro_tpp/blob/main/hypro_tpp/lib/thinning.py and https://github.com/yangalan123/anhp-andtt/blob/master/anhp/esm/thinning.py) is that we perform the prediction batch-wise. We will test this part of the code closely again.

@iLampard
Collaborator

We will look at the multi-step generation code and get back to you shortly.

@junwoopark92
Author

Thank you for the response.

Over the past week, I have been trying to identify why the sampled delta times are consistently small.

I discovered that there is no accumulation step when sampling from Exp(lambda*) in the thinning algorithm.

In my opinion, the sampled dt values should be accumulated.

`exp_numbers = self.sample_exp_distribution(intensity_upper_bound)`

After the line above, I think we need to add the following line:

`exp_numbers = torch.cumsum(exp_numbers, dim=-1)`

Could you please review this once?
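To make the role of the accumulation concrete: the exponential draws are the inter-arrival gaps of a homogeneous Poisson process with rate `intensity_upper_bound`, so the candidate times must be the running sum of those gaps. A minimal sketch with hypothetical names modeled on the snippet above, not the repository's exact code:

```python
import torch

def propose_candidate_times(last_event_time, intensity_upper_bound, num_exp=100):
    """Propose candidate event times for one step of thinning.

    Gaps between candidates are i.i.d. Exp(intensity_upper_bound); the
    candidate times are their cumulative sum offset from the last event.
    """
    gaps = torch.distributions.Exponential(intensity_upper_bound).sample((num_exp,))
    # Without the cumsum, every candidate sits exactly one gap after the
    # last event, which piles all samples near it (the clustering above).
    candidate_times = last_event_time + torch.cumsum(gaps, dim=-1)
    return candidate_times
```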

@iLampard
Collaborator

iLampard commented Dec 4, 2023

Thanks for pointing this out. Let me test it.

@iLampard added the bug label on Dec 4, 2023
@iLampard
Collaborator

I added this line, `exp_numbers = torch.cumsum(exp_numbers, dim=-1)`, but I found the results become even more clustered.

I am still working on this issue and will get back to you when I fix it.

@junwoopark92
Copy link
Author

The more clustered result seems unusual, because accumulating the sampled deltas should clearly produce a larger variance.
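To spell out the variance argument: the sum of k i.i.d. Exp(lambda) gaps follows a Gamma(k, lambda) distribution, whose variance k/lambda^2 grows with k, whereas a single gap has variance 1/lambda^2, so accumulation spreads the candidate times out rather than clustering them.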

In fact, after making the code change, our results more closely approximate the true delta distribution than before.

In the figures, the orange line denotes the true distribution of inter-event times (i.e., deltas) and the blue line denotes the distribution of sampled deltas.

  • Before: [figure: sampled (blue) vs. true (orange) inter-event time distributions before the fix]
  • After: [figure: sampled (blue) vs. true (orange) inter-event time distributions after the fix]

And, as seen in the pseudocode below, accumulating the deltas is standard practice in the thinning algorithm.
[figure: pseudocode of the thinning algorithm, including the accumulation step]
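For reference, a generic rendition of that thinning loop (Ogata, 1981) with the accumulation step marked; this is a sketch, not the repository's implementation:

```python
import random

def ogata_thinning(intensity_fn, lambda_ub, t_start, t_end):
    """Sample the next event time on (t_start, t_end] by Ogata's thinning.

    intensity_fn(t) is the conditional intensity and lambda_ub is an
    upper bound on it over the interval. Returns the accepted event
    time, or None if no event occurs before t_end.
    """
    t = t_start
    while True:
        # Accumulation step: advance t by an Exp(lambda_ub) gap.
        t += random.expovariate(lambda_ub)
        if t >= t_end:
            return None
        # Accept the candidate with probability intensity(t) / lambda_ub.
        if random.random() * lambda_ub <= intensity_fn(t):
            return t
```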

I haven't tidied this up thoroughly, but we are fairly confident in these results.

@iLampard
Collaborator

Hi,

Thanks for the analysis. Great work.

I do see that the accumulation of the sampled dt values is missing in the code. I am working on the bug these days and have also run some tests.

Besides this potential bug, there is also a padding problem in the multi-step generation code.

We hope to fix all of these in the next version.

