Annotations are not being used #1679
Comments
Who gets notified when an annotation is made? I don't recall getting any emails. If email is too intrusive, how about a notification upon sign-in, as you are doing with the "other stuff in bulkloaders"?
That depends on what's being annotated, but primarily the specimen's collection's data quality contact. And that part is easy to adjust.
If I go to review annotations, I have to search every collection individually (or the search returns too many), and for many of the collections there are no annotations - I spend a lot of time going collection by collection with no results. Could we search by institution OR collection? At times I'd like to be able to see just the UTEP annotations, or just MSB; other times I may be more focused on a single collection. Also, we need a way to mark them resolved, needs research, etc., or the number will just continue to increase forever. They should never go away (we had a looooong discussion about this at SPNHC/TDWG), but we need a way to sort them better so that we aren't constantly reviewing them over and over.
Yea, sure - I'll do that now. The search form is more or less an afterthought - the original intention was that we'd respond to the emails (eg, instant gratification for the user who's bothered to leave us information). That's obviously not happening to the degree it should, I'm happy to consider WHATEVER.
We can talk about that too, but there is a way to mark them as "reviewed" and filter on that. Not all will be resolvable, and I'm not really convinced we can categorize that sort of thing - my initial reaction is that "someone's looked at this and responded to the degree current data and resources allow" is good enough.
They don't and won't - annotations become a part of the specimen record.
Lots of those discussions tend to be about "problems" we've long since solved....
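The "mark as reviewed and filter on that" workflow described above can be sketched with a small sqlite3 demo. The table and column names here (`annotations`, `reviewed_fg`) are assumptions for illustration and will not match Arctos's real schema exactly:

```python
import sqlite3

# Hypothetical schema; real Arctos table/column names differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE annotations (
        annotation_id INTEGER PRIMARY KEY,
        specimen_id   INTEGER,
        annotation    TEXT,
        reviewed_fg   INTEGER DEFAULT 0  -- 0 = unreviewed, 1 = reviewed
    )
""")
conn.executemany(
    "INSERT INTO annotations (specimen_id, annotation, reviewed_fg) VALUES (?, ?, ?)",
    [(1, "coordinates outside county", 0),
     (2, "taxon misspelled", 1),
     (3, "date precedes collector's birth", 0)],
)

# "Someone's looked at this" filter: show only what still needs review.
unreviewed = conn.execute(
    "SELECT annotation_id, annotation FROM annotations WHERE reviewed_fg = 0"
).fetchall()
print(unreviewed)  # the two unreviewed rows
```

A single reviewed/unreviewed flag keeps the review queue short without ever deleting anything, which matches the "annotations never go away" position above.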
I think it would be helpful if this were more obvious. Having to enter "NOT NULL" is not very intuitive. Could there just be a checkbox where this happens in the background? [ ] "Check here to see only annotations which have no review comments." We should think about adding to our annotations so that they can be more easily sorted. Just as we have templates for GitHub issues, templates for coordinate, taxon, and other annotations would help those making them as well as those reviewing them.
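The checkbox idea above is essentially a bit of query-building glue: checking the box adds the NULL test behind the scenes so the user never types "NOT NULL". A hedged sketch (the column name `review_comment` is an assumption, not the real Arctos field):

```python
def annotation_search_sql(unreviewed_only: bool) -> str:
    """Build the query the hypothetical checkbox would generate.

    Checking "see only annotations which have no review comments"
    translates to an IS NULL test added behind the scenes.
    """
    sql = "SELECT * FROM annotations"
    if unreviewed_only:
        sql += " WHERE review_comment IS NULL"
    return sql

print(annotation_search_sql(True))
# SELECT * FROM annotations WHERE review_comment IS NULL
```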
You can just click the link, but I can do whatever in the UI. There are also things like... which have been reviewed, but may need work anyway - which is why the default view shows more than you may need.
I don't see a technical problem with that, but I'm not sure I understand what/how that'd work. Demo/example?
I had no idea, and I'd like to make it as easy to understand as possible. NULL isn't a word that resonates with a lot of people.
[screenshot, 2018-09-12] Yep, and those are scary. Will what I do to fix mine affect anyone else? Do all of these share a set of coordinates, or are they a bunch of separate coordinates that don't map to Washington? It is overwhelming, and probably means that no matter how many times I see it I am just going to move along to something I feel more comfortable with. And if I was the one who fixed mine, it's going to frustrate me to keep seeing it. I know what you are getting at, but I think we need to be as focused as possible, especially when these come from within Arctos. Remember, we have limited resources and time. We'd like to be perfect, but we are really just scrambling to get last year's (decade's?) data into the system in the first place.
I'll see if I can set up a template example in the next week or so...
Yea, the parenthetical bits are meant to help address that.
Hopefully! Ideally the one locality would be fixed, everyone gets emails, yay everybody, yay shared data, all done. It seems like reality doesn't always quite get there. The data are stored denormalized as multiple annotations. The screenshot above is built from...
... with the expectation that multi-specimen annotations would usually involve data shared between specimens. (That one's almost certainly from the "find not in the shape" link on edit geography.) It wouldn't be much of a problem for me to allow resolving annotations for individual specimens, but that would require you doing something for each of the 472 specimens involved in https://arctos.database.museum/info/reviewAnnotation.cfm?ANNOTATION_GROUP_ID=701 rather than just doing something for the group.
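The group-vs-individual trade-off above can be sketched concretely: with annotations stored denormalized (one row per specimen, sharing an `annotation_group_id`), resolving the group is one statement instead of 472. Table and column names are assumptions for illustration:

```python
import sqlite3

# Hypothetical denormalized storage: one annotation row per specimen,
# all sharing an annotation_group_id (as in the 472-specimen group above).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE annotations (
        annotation_id       INTEGER PRIMARY KEY,
        specimen_id         INTEGER,
        annotation_group_id INTEGER,
        reviewed_fg         INTEGER DEFAULT 0
    )
""")
conn.executemany(
    "INSERT INTO annotations (specimen_id, annotation_group_id) VALUES (?, ?)",
    [(sid, 701) for sid in range(1, 473)],  # 472 specimens, one group
)

# Group-level resolution: one UPDATE instead of 472 per-specimen actions.
cur = conn.execute(
    "UPDATE annotations SET reviewed_fg = 1 WHERE annotation_group_id = ?",
    (701,),
)
print(cur.rowcount)  # 472
```

Allowing per-specimen resolution would just change the WHERE clause to `specimen_id = ?`, at the cost of 472 separate actions for the reviewer.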
Update: Added annotations to the report (currently in Random).
This is a staff time issue. We do see the annotations come through email, but have to find chunks of time to address them. Obviously time is a limitation.
On Wed, Oct 9, 2019 at 8:50 AM dustymc wrote:

select REVIEWED_FG, count(*) from annotations group by REVIEWED_FG;

REVIEWED_FG   COUNT(*)
----------- ----------
          1        988
          0      14190
Yep, no surprise there. The "any..." fields may help mitigate known-conflicting data a bit - I was going to mention this in that thread - but they may also make it worse, depending on whether, e.g., the coordinates or the description is actually the problem. I've been looking for a way to fund this at the level of Arctos, but I'm not sure that's possible. Figuring out the problem may involve digging through your paper files, which probably means this is something each collection will have to deal with individually. If we can get around that, "these are known problems; fixing them, developing the infrastructure to detect/prevent/fix similar problems, and maybe digitizing those paper files would make our data capable of doing more things, but we don't have the resources to do that" seems like a sell-able proposal.
Why is there no documentation on this? I can't find any information on what exactly I am supposed to do with these records. |
If anyone is poking around in there and wants to write up a quick "this is what I did" as a shared Google Doc, we can write up something quickly to add to the documentation.
Clearly we need better documentation and probably still a bit more tweaking to this system. I am assigning this to myself, with no promise to tackle it immediately, but maybe by the end of the year....
Latest stats, if anyone's keeping track: [stats image not preserved]
The first notifications went out last night. I'm not sure we can do more; this can probably be closed or moved to docs. I did notice that several collections have no active data quality contact, so they aren't getting the emails; perhaps this is a communication problem as much as anything. @campmlc, are you volunteering? If not, I will throw together something minimal for the handbook. I agree this should have basic documentation NOW; we can expand and clean up later.
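Finding the collections whose notifications go nowhere - the "no active data quality contact" problem above - is a straightforward anti-join. A hedged sketch with invented tables and sample collection names; Arctos's real schema and role names differ:

```python
import sqlite3

# Hypothetical tables for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE collection (collection_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE contact (collection_id INTEGER, role TEXT, active_fg INTEGER);
    INSERT INTO collection VALUES (1, 'UTEP:Herps'), (2, 'MSB:Mamm'), (3, 'DMNS:Bird');
    INSERT INTO contact VALUES (1, 'data quality', 1),  -- active contact
                               (2, 'data quality', 0);  -- contact left/inactive
""")

# Collections with no *active* data quality contact: their annotation
# notification emails have nowhere to go.
orphaned = conn.execute("""
    SELECT c.name FROM collection c
    WHERE NOT EXISTS (
        SELECT 1 FROM contact t
        WHERE t.collection_id = c.collection_id
          AND t.role = 'data quality'
          AND t.active_fg = 1
    )
    ORDER BY c.collection_id
""").fetchall()
print(orphaned)  # [('MSB:Mamm',), ('DMNS:Bird',)]
```

Note the check has to test for an *active* contact, not just any contact row: a collection whose only data quality contact has gone inactive is silently dropped from notifications, exactly the failure mode described above.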
I don't think there's more to be done here; getting collections to not ignore bad data does not seem to have a technical solution. Closing.
Arctos has a sophisticated system for annotating various data objects. Submitting an annotation alerts anyone who might have an interest. They're still mostly ignored.
Most everyone has a few unreviewed annotations hanging around:
Is there some way I can help, something we could be doing differently, ???? I don't imagine that our current response inspires users to keep submitting annotations.
(And maybe this should be a section of that publication. We built annotations when a major project to do about the same thing never happened, it's a form of crowdsourcing, grouping multiple annotations is a novel thing that essentially allows annotating arbitrary queries, etc. - AFAIK no other system has anything remotely similar.)