This PR introduces an additional model called `funding-acknowledgement` to parse the content of the funding and acknowledgement sections. This includes identification of the mentioned entities (person, affiliation/institution, project), with a particular effort on funder and funding information: funder name, grant number, funded project, funding program and grant name are recognized. Results are serialized in the TEI output, with the list of funders in the TEI header:
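As an illustration of this serialization, a funder entry in the TEI header could look like the following sketch (the funder name and identifier are invented; the exact element layout follows the model's actual TEI output):

```xml
<titleStmt>
  <!-- one <funder> entry per identified funder; @ref points to the
       corresponding funding description (illustrative values) -->
  <funder ref="#_fund1">
    <orgName type="full">Hypothetical Science Foundation</orgName>
  </funder>
</titleStmt>
```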
The `@ref` attribute links each funder to one or more funding elements, which describe the funding with (when identified) the grant number, grant name, funded project and name of the funding program. The acknowledgement and funding sections themselves are enriched with inline mark-up corresponding to the identified entities.
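A sketch of what this inline mark-up can look like (element and attribute names are illustrative here, and the sentence is invented):

```xml
<div type="acknowledgement">
  <!-- identified entities are wrapped in inline elements;
       the funder reference ties back to the header entry -->
  <p>This work was supported by the
     <rs type="funder" ref="#_fund1">Hypothetical Science Foundation</rs>
     under grant <rs type="grantNumber">HSF-12345</rs>.</p>
</div>
```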
In addition, the identified funders can be consolidated by a look-up, currently limited to the CrossRef funder registry, using the CrossRef REST API. When the funder name matches a registered CrossRef funder with sufficient confidence, we add the DOI of the funder as well as its normalized name, acronym and country.
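To make the look-up concrete, here is a minimal Python sketch (not the PR's actual implementation; function names are ours) that queries the CrossRef funder registry and derives the funder DOI from the registry identifier, which CrossRef mints under the `10.13039` prefix:

```python
import json
import urllib.parse
import urllib.request

CROSSREF_FUNDERS_API = "https://api.crossref.org/funders"

def funder_query_url(name, rows=5):
    """Build a CrossRef funder registry search URL for a raw funder name."""
    query = urllib.parse.urlencode({"query": name, "rows": rows})
    return f"{CROSSREF_FUNDERS_API}?{query}"

def funder_doi(funder_id):
    """CrossRef funder registry identifiers map to DOIs under the 10.13039 prefix."""
    return f"10.13039/{funder_id}"

def lookup_funder(name):
    """Return the best-matching registered funder record, or None (network call)."""
    with urllib.request.urlopen(funder_query_url(name)) as resp:
        items = json.load(resp)["message"]["items"]
    return items[0] if items else None
```

In a real pipeline the match would of course be filtered by a similarity threshold between the extracted funder name and the registry entry before accepting the DOI.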
Consolidation will be improved via biblio-glutton in a future phase.
The PR also includes a complete revision of the segmentation training data for acknowledgement and funding sections, and a set of around 1,500 manually annotated funding and acknowledgement sections.
Standalone model accuracy (strict field matching): the winner is SciBERT+CRF, as usual for NER on scientific text.