update biblio
ceteri committed Jan 8, 2024
1 parent 33c84cc commit b0fcff9
Showing 2 changed files with 44 additions and 32 deletions.
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -9,6 +9,7 @@ repos:
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
        exclude: ^docs/
      - id: check-builtin-literals
      - id: check-executables-have-shebangs
      - id: check-merge-conflict
75 changes: 43 additions & 32 deletions docs/biblio.md
@@ -3,11 +3,9 @@
Where possible, the bibliography entries follow the conventions at
<https://www.bibsonomy.org/>
for [*citation keys*](https://bibdesk.sourceforge.io/manual/BibDeskHelp_2.html).

Journal abbreviations come from
<https://academic-accelerator.com/Journal-Abbreviation/System>
based on the [*ISO 4*](https://en.wikipedia.org/wiki/ISO_4) standard.

Links to online versions of cited works use
[DOI](https://www.doi.org/)
for [*persistent identifiers*](https://www.crossref.org/education/metadata/persistent-identifiers/).
@@ -20,118 +18,131 @@
URLs are listed.

### aarsen2023ner

["SpanMarker for Named Entity Recognition"](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
**Tom Aarsen**
["SpanMarker for Named Entity Recognition"](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
**Tom Aarsen**
*Radboud University* (2023-06-01)
> A span-level Named Entity Recognition (NER) model that aims to improve performance while reducing computational requirements. SpanMarker leverages special marker tokens and utilizes BERT-style encoders with position IDs and attention mask matrices to capture contextual information effectively
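
A minimal inference sketch using the `span_marker` package; the checkpoint name below is an assumption, standing in for any published SpanMarker model:

```python
# Minimal inference sketch for the span_marker library; the checkpoint
# name is illustrative, any published SpanMarker model should work.
from span_marker import SpanMarkerModel

model = SpanMarkerModel.from_pretrained(
    "tomaarsen/span-marker-bert-base-fewnerd-fine-super"  # assumed checkpoint
)
entities = model.predict("Tom Aarsen wrote this thesis at Radboud University.")

for ent in entities:
    # each prediction is a dict with the span text, label, and confidence
    print(ent["span"], ent["label"], ent["score"])
```
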
### aue07dbpedia

["DBpedia: A Nucleus for a Web of Open Data"](https://doi.org/10.1007/978-3-540-76298-0_52)
**Sören Auer**, **Christian Bizer**, **Georgi Kobilarov**, **Jens Lehmann**, **Richard Cyganiak**, **Zachary Ives**
["DBpedia: A Nucleus for a Web of Open Data"](https://doi.org/10.1007/978-3-540-76298-0_52)
**Sören Auer**, **Christian Bizer**, **Georgi Kobilarov**, **Jens Lehmann**, **Richard Cyganiak**, **Zachary Ives**
*ISWC* (2007-11-11)
> DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data.
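
A hedged sketch of the kind of query the abstract describes, run against the public DBpedia SPARQL endpoint; the query itself is illustrative, not taken from the paper:

```python
# Query the public DBpedia SPARQL endpoint for the five most populous
# German cities; the query is illustrative.
import requests

QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?city ?population WHERE {
  ?city a dbo:City ;
        dbo:country dbr:Germany ;
        dbo:populationTotal ?population .
}
ORDER BY DESC(?population)
LIMIT 5
"""

resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": QUERY, "format": "application/sparql-results+json"},
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])
```
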
## – B –

### bachbhg17

["Hinge-Loss Markov Random Fields and Probabilistic Soft Logic"](https://arxiv.org/abs/1505.04406)
**Stephen Bach**, **Matthias Broecheler**, **Bert Huang**, **Lise Getoor**
["Hinge-Loss Markov Random Fields and Probabilistic Soft Logic"](https://arxiv.org/abs/1505.04406)
**Stephen Bach**, **Matthias Broecheler**, **Bert Huang**, **Lise Getoor**
*JMLR* (2017–11–17)
> We introduce two new formalisms for modeling structured data, and show that they can both capture rich structure and scale to big data. The first, hinge-loss Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model that generalizes different approaches to convex inference.
## – C –

### cabot2023redfm

["RED<sup>FM</sup>: a Filtered and Multilingual Relation Extraction Dataset"](https://arxiv.org/abs/2306.09802)
**Pere-Lluís Huguet Cabot**, **Simone Tedeschi**, **Axel-Cyrille Ngonga Ngomo**, **Roberto Navigli**
["RED<sup>FM</sup>: a Filtered and Multilingual Relation Extraction Dataset"](https://arxiv.org/abs/2306.09802)
**Pere-Lluís Huguet Cabot**, **Simone Tedeschi**, **Axel-Cyrille Ngonga Ngomo**, **Roberto Navigli**
_ACL_ (2023-06-19)
> Relation Extraction (RE) is a task that identifies relationships between entities in a text, enabling the acquisition of relational facts and bridging the gap between natural language and structured knowledge. However, current RE models often rely on small datasets with low coverage of relation types, particularly when working with languages other than English. In this paper, we address the above issue and provide two new resources that enable the training and evaluation of multilingual RE systems.
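
A sketch of loading the dataset via the Hugging Face `datasets` library; the repository id and config name are assumptions, so check the paper's resource links for the actual identifiers:

```python
# Hedged sketch: load RED^FM with the Hugging Face datasets library.
# The repo id and config name below are assumptions, not confirmed by
# the paper text.
from datasets import load_dataset

redfm = load_dataset("Babelscape/REDFM", "en")  # assumed repo id and config
print(redfm["train"][0])
```
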
## – E –

### erxlebengkmv14

["Introducing Wikidata to the Linked Data Web"](https://doi.org/10.1007/978-3-319-11964-9_4)
**Fredo Erxleben**, **Michael Günther**, **Markus Krötzsch**, **Julian Mendez**, **Denny Vrandečić**
*ISWC* (2014-10-19)
> We introduce new RDF exports that connect Wikidata to the Linked Data Web. We explain the data model of Wikidata and discuss its encoding in RDF. Moreover, we introduce several partial exports that provide more selective or simplified views on the data.
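
A small sketch of consuming these RDF exports via Wikidata's `Special:EntityData` endpoint; Q42 (Douglas Adams) is just an example item:

```python
# Fetch the RDF (Turtle) export for one Wikidata entity; Q42 is an
# arbitrary example item (Douglas Adams).
import requests

url = "https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl"
ttl = requests.get(url, timeout=30).text
print(ttl[:400])  # show the first few prefix declarations and triples
```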

## – G –

### galkin2023ultra

["Towards Foundation Models for Knowledge Graph Reasoning"](https://arxiv.org/abs/2310.04562)
**Mikhail Galkin**, **Xinyu Yuan**, **Hesham Mostafa**, **Jian Tang**, **Zhaocheng Zhu**
["Towards Foundation Models for Knowledge Graph Reasoning"](https://arxiv.org/abs/2310.04562)
**Mikhail Galkin**, **Xinyu Yuan**, **Hesham Mostafa**, **Jian Tang**, **Zhaocheng Zhu**
preprint (2023–10–06)
> ULTRA builds relational representations as a function conditioned on their interactions. Such a conditioning strategy allows a pre-trained ULTRA model to inductively generalize to any unseen KG with any relation vocabulary and to be fine-tuned on any graph.
## – H –

### hahnr88

["Automatic generation of hypertext knowledge bases"](https://doi.org/10.1145/966861.45429)
**Udo Hahn**, **Ulrich Reimer**
["Automatic generation of hypertext knowledge bases"](https://doi.org/10.1145/966861.45429)
**Udo Hahn**, **Ulrich Reimer**
_ACM SIGOIS_ 9:2 (1988-04-01)
> The condensation process transforms the text representation structures resulting from the text parse into a more abstract thematic description of what the text is about, filtering out irrelevant knowledge structures and preserving only the most salient concepts.
### hamilton2020grl

[*Graph Representation Learning*](https://www.cs.mcgill.ca/~wlh/grl_book/)
**William Hamilton**
*Morgan & Claypool* (pre-print 2020)
> A brief but comprehensive introduction to graph representation learning, including methods for embedding graph data, graph neural networks, and deep generative models of graphs.

### hangyyls19

["OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction"](https://doi.org/10.18653/v1/D19-3029)
**Xu Han**, **Tianyu Gao**, **Yuan Yao**, **Deming Ye**, **Zhiyuan Liu**, **Maosong Sun**
["OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction"](https://doi.org/10.18653/v1/D19-3029)
**Xu Han**, **Tianyu Gao**, **Yuan Yao**, **Deming Ye**, **Zhiyuan Liu**, **Maosong Sun**
*EMNLP* (2019-11-03)
> OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE).
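
A short inference sketch following the pattern in the project README; the checkpoint name is assumed from the project's pretrained models:

```python
# Relation extraction sketch in the style of the OpenNRE README; the
# checkpoint name is assumed, and entity spans are character offsets.
import opennre

model = opennre.get_model("wiki80_cnn_softmax")
result = model.infer({
    "text": "Bill Gates founded Microsoft in 1975.",
    "h": {"pos": (0, 10)},   # head entity span: "Bill Gates"
    "t": {"pos": (19, 28)},  # tail entity span: "Microsoft"
})
print(result)  # expected: a (relation_label, confidence) pair
```
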
### honnibal2020spacy

["spaCy: Industrial-strength Natural Language Processing in Python"](https://doi.org/10.5281/zenodo.1212303)
**Matthew Honnibal**, **Ines Montani**, **Sofie Van Landeghem**, **Adriane Boyd**
["spaCy: Industrial-strength Natural Language Processing in Python"](https://doi.org/10.5281/zenodo.1212303)
**Matthew Honnibal**, **Ines Montani**, **Sofie Van Landeghem**, **Adriane Boyd**
*Explosion AI* (2016-10-18)
> spaCy is a library for advanced Natural Language Processing in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products.
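
A minimal usage sketch, assuming the `en_core_web_sm` model has been installed:

```python
# Minimal spaCy usage: parse a text and read off its named entities.
# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("spaCy was created by Explosion AI in Berlin.")

for ent in doc.ents:
    print(ent.text, ent.label_)
```
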
## – L –

### lee2023ingram

["InGram: Inductive Knowledge Graph Embedding via Relation Graphs"](https://arxiv.org/abs/2305.19987)
**Jaejun Lee**, **Chanyoung Chung**, **Joyce Jiyoung Whang**
["InGram: Inductive Knowledge Graph Embedding via Relation Graphs"](https://arxiv.org/abs/2305.19987)
**Jaejun Lee**, **Chanyoung Chung**, **Joyce Jiyoung Whang**
_ICML_ (2023–08–17)
> In this paper, we propose an INductive knowledge GRAph eMbedding method, InGram, that can generate embeddings of new relations as well as new entities at inference time.
## – M –

### mihalcea04textrank

["TextRank: Bringing Order into Text"](https://www.aclweb.org/anthology/W04-3252/)
**Rada Mihalcea**, **Paul Tarau**
*EMNLP* pp. 404-411 (2004-07-25)
["TextRank: Bringing Order into Text"](https://www.aclweb.org/anthology/W04-3252/)
**Rada Mihalcea**, **Paul Tarau**
*EMNLP* pp. 404-411 (2004-07-25)
> In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications.
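
A toy sketch of the graph-based ranking idea, simplified from the paper's formulation (no part-of-speech filtering or window tuning), using `networkx` for the PageRank step:

```python
# Toy TextRank-style ranking: build a token co-occurrence graph within
# a small sliding window, then rank nodes with PageRank.
import networkx as nx

tokens = ("graph based ranking model for text processing "
          "brings order into text").split()
window = 2

graph = nx.Graph()
for i, tok in enumerate(tokens):
    for j in range(i + 1, min(i + window + 1, len(tokens))):
        graph.add_edge(tok, tokens[j])

scores = nx.pagerank(graph)
for tok, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{tok}\t{score:.4f}")
```
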
## – N –

### nathan2016ptr

["PyTextRank, a Python implementation of TextRank for phrase extraction and summarization of text documents"](https://doi.org/10.5281/zenodo.4637885)
["PyTextRank, a Python implementation of TextRank for phrase extraction and summarization of text documents"](https://doi.org/10.5281/zenodo.4637885)
**Paco Nathan**, et al.
*Derwen* (2016-10-03)
> Python implementation of TextRank algorithms ("textgraphs") for phrase extraction
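
A minimal usage sketch, assuming spaCy and its `en_core_web_sm` model are installed alongside `pytextrank`:

```python
# PyTextRank as a spaCy pipeline component: importing pytextrank
# registers the "textrank" factory, and ranked phrases then appear on
# the doc's extension attributes.
import spacy
import pytextrank  # noqa: F401  (registers the "textrank" component)

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("textrank")

doc = nlp("TextRank is a graph-based ranking model for text processing.")
for phrase in doc._.phrases[:3]:
    print(phrase.text, phrase.rank)
```
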
### nathan2023glod

["Graph Levels of Detail"](https://blog.derwen.ai/graph-levels-of-detail-ea4226abba55)
**Paco Nathan**
["Graph Levels of Detail"](https://blog.derwen.ai/graph-levels-of-detail-ea4226abba55)
**Paco Nathan**
*Derwen* (2023-11-12)
> How can we work with graph data in more abstracted, aggregate perspectives? While we can run queries on graph data to compute aggregate measures, we don’t have programmatic means of “zooming out” to consider a large graph the way that one zooms out when using an online map.
## – W –

### warmerdam2023pydata

["Natural Intelligence is All You Need™"](https://youtu.be/C9p7suS-NGk?si=7Ohq3BV654ia2Im4)
**Vincent Warmerdam**
["Natural Intelligence is All You Need™"](https://youtu.be/C9p7suS-NGk?si=7Ohq3BV654ia2Im4)
**Vincent Warmerdam**
*PyData Amsterdam* (2023-09-15)
> In this talk I will try to show you what might happen if you allow yourself the creative freedom to rethink and reinvent common practices once in a while. As it turns out, in order to do that, natural intelligence is all you need. And we may start needing a lot of it in the near future.
### wolf2020transformers

["Transformers: State-of-the-Art Natural Language Processing"](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
**Thomas Wolf**, **Lysandre Debut**, **Victor Sanh**, **Julien Chaumond**, **Clement Delangue**, **Anthony Moi**, **Pierric Cistac**, **Tim Rault**, **Remi Louf**, **Morgan Funtowicz**, **Joe Davison**, **Sam Shleifer**, **Patrick von Platen**, **Clara Ma**, **Yacine Jernite**, **Julien Plu**, **Canwen Xu**, **Teven Le Scao**, **Sylvain Gugger**, **Mariama Drame**, **Quentin Lhoest**, **Alexander Rush**
["Transformers: State-of-the-Art Natural Language Processing"](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
**Thomas Wolf**, **Lysandre Debut**, **Victor Sanh**, **Julien Chaumond**, **Clement Delangue**, **Anthony Moi**, **Pierric Cistac**, **Tim Rault**, **Remi Louf**, **Morgan Funtowicz**, **Joe Davison**, **Sam Shleifer**, **Patrick von Platen**, **Clara Ma**, **Yacine Jernite**, **Julien Plu**, **Canwen Xu**, **Teven Le Scao**, **Sylvain Gugger**, **Mariama Drame**, **Quentin Lhoest**, **Alexander Rush**
*EMNLP* (2020-11-16)
> The library consists of carefully engineered state-of-the art Transformer architectures under a unified API. Backing this library is a curated collection of pretrained models made by and available for the community.
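
The unified API is most visible in the `pipeline` helper; a minimal sketch, where omitting the model name falls back to the task's default checkpoint:

```python
# One-line access to a pretrained model via the unified pipeline API;
# with no model name given, the task's default checkpoint is downloaded.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Transformers provides a unified API over pretrained models."))
```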
