feat: get details about a paper's citations
1 parent 6b4f2c7 · commit 0397761
Showing 5 changed files with 168,436 additions and 0 deletions.
@@ -0,0 +1,13 @@
+from semanticscholar.Paper import Paper
+from semanticscholar.BaseReference import BaseReference
+
+
+class Citation(BaseReference):
+    '''
+    This class abstracts a citation.
+    '''
+
+    def __init__(self, data: dict) -> None:
+        super().__init__(data)
+        if 'citingPaper' in data:
+            self._paper = Paper(data['citingPaper'])
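As a rough usage sketch (not part of the diff): the new class wraps the citing paper found in one item of a citations response, and delegates the remaining citation metadata to BaseReference. The module path semanticscholar.Citation and the Paper attribute accessed at the end are assumptions based on the imports shown above, not something this commit confirms.

from semanticscholar.Citation import Citation  # assumed module path for the new class

# Minimal dict mirroring one entry of a citations response (fields trimmed).
data = {
    "isInfluential": True,
    "intents": ["methodology", "background"],
    "contexts": ["...utilizing only the decoder part of the transformer..."],
    "citingPaper": {
        "paperId": "bdcbc589c8d1e4dd0c6572630f8511557406ad2f",
        "title": "Learning to Throw With a Handful of Samples Using Decision Transformers",
    },
}

citation = Citation(data)
# __init__ stores Paper(data['citingPaper']) in _paper; whether a public
# accessor exists on Citation or BaseReference is not shown in this diff.
# Assumes Paper exposes the title field parsed from the wrapped dict.
print(citation._paper.title)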
@@ -0,0 +1 @@
+{"isInfluential": true, "contexts": ["later introduced some changes to the original Transformer by utilizing only the decoder part of the transformer and applying causal masking [15].", "In particular, an autoregressive model termed Generative Pre-trained Transformer (GPT) [15] is responsible for significant breakthrough in textto-image [16] and language models [17].", "A GPT architecture later introduced some changes to the original Transformer by utilizing only the decoder part of the transformer and applying causal masking [15].", "The DT, based on the GPT architecture, is illustrated in Fig.", "(GPT) [15] is responsible for significant breakthrough in textto-image [16] and language models [17].", "In particular, the DT architecture, based on GPT, was chosen to be with embedded dimension of 128, 1 hidden layer, 1 attention head and a ReLU activation function."], "intents": ["methodology", "background"], "citingPaper": {"paperId": "bdcbc589c8d1e4dd0c6572630f8511557406ad2f", "externalIds": {"DBLP": "journals/ral/MonastirskyAS23", "DOI": "10.1109/LRA.2022.3229266", "CorpusId": 254723237}, "corpusId": 254723237, "publicationVenue": {"id": "93c335b7-edf4-45f5-8ddc-7c5835154945", "name": "IEEE Robotics and Automation Letters", "alternate_names": ["IEEE Robot Autom Lett"], "issn": "2377-3766", "url": "https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=PER481-ELE", "alternate_urls": ["http://ieeexplore.ieee.org/servlet/opac?punumber=7083369"]}, "url": "https://www.semanticscholar.org/paper/bdcbc589c8d1e4dd0c6572630f8511557406ad2f", "title": "Learning to Throw With a Handful of Samples Using Decision Transformers", "abstract": "Throwing objects by a robot extends its reach and has many industrial applications. While analytical models can provide efficient performance, they require accurate estimation of system parameters. Reinforcement Learning (RL) algorithms can provide an accurate throwing policy without prior knowledge. However, they require an extensive amount of real world samples which may be time consuming and, most importantly, pose danger. Training in simulation, on the other hand, would most likely result in poor performance on the real robot. In this letter, we explore the use of Decision Transformers (DT) and their ability to transfer from a simulation-based policy into the real-world. Contrary to RL, we re-frame the problem as sequence modelling and train a DT by supervised learning. The DT is trained off-line on data collected from a far-from-reality simulation through random actions without any prior knowledge on how to throw. Then, the DT is fine-tuned on an handful ($\\sim 5$) of real throws. Results on various objects show accurate throws reaching an error of approximately 4 cm. Also, the DT can extrapolate and accurately throw to goals that are out-of-distribution to the training data. We additionally show that few expert throw samples, and no pre-training in simulation, are sufficient for training an accurate policy.", "venue": "IEEE Robotics and Automation Letters", "year": 2023, "referenceCount": 31, "citationCount": 1, "influentialCitationCount": 0, "isOpenAccess": false, "openAccessPdf": null, "fieldsOfStudy": ["Computer Science"], "s2FieldsOfStudy": [{"category": "Computer Science", "source": "external"}, {"category": "Computer Science", "source": "s2-fos-model"}], "publicationTypes": ["JournalArticle"], "publicationDate": "2023-02-01", "journal": {"volume": "8", "pages": "576-583", "name": "IEEE Robotics and Automation Letters"}, "authors": [{"authorId": "97501516", "name": "M. Monastirsky"}, {"authorId": "2049081755", "name": "Osher Azulay"}, {"authorId": "38111870", "name": "A. Sintov"}]}}
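The single JSON line above looks like a stored fixture: one raw item from a citations response, presumably used to exercise the new Citation class. A hypothetical test against it could look like the sketch below; the fixture path, test names, and the Paper attribute checked at the end are placeholders and assumptions, not taken from the diff.

import json
import unittest

from semanticscholar.Citation import Citation  # assumed module path


class CitationTest(unittest.TestCase):

    def test_wraps_citing_paper(self):
        # Placeholder path; the actual fixture file name is not visible above.
        with open('tests/data/Citation.json', encoding='utf-8') as file:
            data = json.load(file)
        citation = Citation(data)
        self.assertTrue(data['isInfluential'])
        # Assumes Paper exposes the paperId parsed from the wrapped dict.
        self.assertEqual(
            citation._paper.paperId,
            'bdcbc589c8d1e4dd0c6572630f8511557406ad2f')


if __name__ == '__main__':
    unittest.main()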