Increase speed for large data sets #2
Comments
Haven't tried this exact package yet, but I tried dbscan::pointdensity, which uses the same C++ library (ANN). It's roughly twice as fast when plotting ~100k points. I'm currently testing it.
I added …
How do you use the kde2d method? I've downloaded the latest version of this package from GitHub, but …
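For context, the kde2d idea is a 2D kernel density estimate evaluated at each data point. A minimal sketch of that concept in Python using scipy's gaussian_kde (illustration only; this is not the R package's actual API, and the data here is made up):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic 2D point cloud; gaussian_kde expects shape (dims, n_points).
rng = np.random.default_rng(1)
xy = rng.normal(size=(2, 2_000))

# Fit a 2D kernel density estimate, then evaluate it at every data point,
# giving one density value per point (usable as a plot color).
kde = gaussian_kde(xy)
density = kde(xy)
```

This per-point evaluation is itself O(n²), which is why the binned and tree-based approaches discussed below matter at scale.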
Hi @olechnwin,
I've been testing some alternatives because …
Some other options are summarized in this (in-progress?) paper: https://vita.had.co.nz/papers/density-estimation.pdf
When plotting millions of points, counting the number of neighbors of each point is extremely slow. The current algorithm computes the pairwise distances between all points, which is O(n²). This could be optimized, for instance with this approach. Other ideas are welcome.
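A standard way to avoid the full pairwise distance matrix is a spatial index: build a k-d tree once, then do a fixed-radius neighbor query per point, which is roughly O(n log n) overall. A sketch in Python using scipy's cKDTree (illustrative only; the package and ANN are C++/R, and the radius here is arbitrary):

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbor_counts(x, y, radius):
    """Count, for each point, how many other points lie within `radius`,
    using a k-d tree instead of all pairwise distances."""
    pts = np.column_stack([x, y])
    tree = cKDTree(pts)
    # query_ball_point returns each point's neighbor indices,
    # including the point itself, so subtract 1.
    return np.array([len(nbrs) - 1 for nbrs in tree.query_ball_point(pts, radius)])

rng = np.random.default_rng(42)
x = rng.uniform(size=5_000)
y = rng.uniform(size=5_000)
counts = neighbor_counts(x, y, radius=0.05)
```

Unlike the binned approximation, this gives exact neighbor counts; the tree build and queries are where the speedup over the pairwise approach comes from.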