anomalist: Open issues and enhancements #3436
Other minor things to work on:
Hi! I wanted to share my experience with Anomalist in an edge case, the Multidimensional Poverty Index. It's an edge case because the data shows indicators for only one year (current margin estimates), or for 2, 3, or 4 years at most (harmonized over time). I expected to see a comparison of the old and new indicators in the latter category, but I can't see that. I suppose that's because there are no anomalies there, right? But what I do see are potential anomalies in the time change and Gaussian process checks that should probably not be there. The pictures above show minimal variations flagged as anomalies (perhaps it's a problem of how small the numbers are?), and only for the indicators of the previous version of the dataset, 6125. I can't see the analysis for the updated dataset, even when I used Indicator Upgrader. Am I doing something wrong? Thanks!
Hi @paarriagadap, thanks for reporting that.

Why you see anomalies of the old dataset
Normally, you don't need to include the old dataset (6125) to calculate the anomalies. Anomalist will automatically detect whether there is an old version of your new dataset.

Why version change anomalies do not appear
In principle, Anomalist will:

Have the names changed? If not, maybe you ran Indicator Upgrader after Anomalist? (And therefore Anomalist didn't know what to map.)

Why you see anomalies that are not important
This is indeed a tricky case. If I understand correctly, you have an indicator with very little data, and the range of values is very small. So, even if the anomaly is relatively small, there is not enough context to realise that. Normally you would have data for many countries, small anomalies would be small relative to the overall scale, and their weighted score would hence be small. Please let me know if you need more clarifications.
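For intuition, here's a minimal sketch of the effect described above (the scoring and weighting functions are illustrative assumptions, not Anomalist's actual implementation): with only 2-4 points and a tiny value range, a change-based score normalized by the series' own spread blows up, while weighting by the amount of available data damps it again.

```python
# Minimal sketch (not Anomalist's actual code) of why a short series with a
# tiny value range can look anomalous: normalizing year-over-year changes by
# the series' own spread inflates scores when that spread is near zero.
import numpy as np

def time_change_score(values: np.ndarray) -> np.ndarray:
    """Absolute year-over-year change, scaled by the series' own std."""
    return np.abs(np.diff(values)) / (values.std() + 1e-12)

def damped_score(raw: np.ndarray, n_points: int) -> np.ndarray:
    """Illustrative weighting: series with few points get little weight."""
    weight = min(n_points / 30, 1.0)
    return raw * weight

# An MPI-like indicator: 3 years of data, values close together near zero.
tiny = np.array([0.012, 0.013, 0.012])
raw = time_change_score(tiny)
print(raw)                           # ~[2.1, 2.1]: "anomalous" despite 0.001 changes
print(damped_score(raw, tiny.size))  # ~[0.21, 0.21]: damped by the short series
```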
Thanks @pabloarosado. Some additional comments here:
Hi @paarriagadap I have reset the anomalies table and recalculated them. Now it's working again. |
Thank you, @pabloarosado! Most of the anomalies detected are expected, but now I can see them. I will take a closer look to see if there are any that are unexpected.
@paarriagadap Thanks for giving it a try! It was very helpful in fixing all sorts of bugs and improving performance. You should give it a second try next week, once we merge all of those fixes.
I'll drop here some ideas for future improvements that came up during the last data call:
Following up on #3436 (comment), I've added an 'export as csv' option and enabled faceting when there are multiple time series added! Thanks @pabloarosado
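For context, a CSV export plus faceting of multiple time series might look roughly like this with pandas and Plotly (a hedged sketch; the dataframe layout and column names are assumptions, not the actual Anomalist code):

```python
# Illustrative sketch of "export as csv" + faceted time-series plots; the
# dataframe layout is an assumption, not Anomalist's actual schema.
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "country": ["France", "France", "Chad", "Chad"],
    "year": [2019, 2021, 2019, 2021],
    "value": [0.012, 0.013, 0.532, 0.517],
})

# Export the selected series to CSV.
df.to_csv("anomalist_export.csv", index=False)

# One facet per country keeps multiple added time series readable.
px.line(df, x="year", y="value", facet_col="country").show()
```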
One-liner
Open issues and possible enhancements of Anomalist
Context & details
See more details in #3340
Open issues
- (Unlikely to happen) If you filter by a certain indicator and then unselect its dataset, Anomalist fails.
- (Unlikely to happen) If you have already started Anomalist and want to add a new dataset to the list, pressing "Detect anomalies" does nothing. The only option is to re-scan all datasets (which takes a long time).
- Add a "hide anomaly" button to each anomaly.
- Improvements on AI summary (see conversation):
- Improve the Anomalist workflow. Mojmir is working on letting Anomalist be triggered automatically for any new dataset in a staging server, but the UI is not yet adapted accordingly. Also, clarify what happens if the user, e.g., adds a new dataset to the list.