[Task Submission] Multilingual SCAN (multilingual_scan) #19
Conversation
Thanks for submitting a task to GenBench. It seems that you're submitting the data files of the task in your PR. For the final submission, you will need to host the dataset files somewhere else (preferably as a HuggingFace dataset; see the loading sketch after this conversation).
Hello! We are getting quite close to the deadline (September 1, 11:59 PM anywhere on earth), which is why I wanted to remind you that your PR still needs some attention: see Amir's message above. Please don't forget to submit your accompanying paper to OpenReview via https://openreview.net/group?id=GenBench.org/2023/Workshop by September 1. Good luck finalising your PR and paper, and feel free to tag us if you have questions.
Thanks for the heads-up, this is now fixed!
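To illustrate the hosting request above, here is a minimal sketch of how the task data could be consumed once it is published externally, assuming (hypothetically) a Hugging Face dataset repository named username/multilingual_scan with one configuration per language; the repository ID, configuration name, and column names are illustrative assumptions, not part of this submission.

```python
# Hypothetical sketch: read the Multilingual SCAN splits from a Hugging Face
# dataset repository instead of committing the data files to the PR.
# The repo ID ("username/multilingual_scan"), the config name ("fr"), and the
# column names ("commands", "actions") are assumptions for illustration only.
from datasets import load_dataset

dataset = load_dataset("username/multilingual_scan", name="fr")  # one config per language (assumed)
train, test = dataset["train"], dataset["test"]

print(train[0])  # e.g. {"commands": "...", "actions": "I_JUMP I_JUMP"} (assumed schema)
```

Hosting the files this way keeps the PR itself free of data and lets the task point at a versioned, publicly downloadable source.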
Multilingual SCAN
It is widely acknowledged that fine-tuning a pretrained model generally results in better performance on a given task than training the model from scratch. Evidence suggests that this is also the case for compositional generalization tasks. However, it has also been shown that multilingual models may not exhibit consistent performance across languages, with low-resource languages often faring worse. Can we expect similar variations between languages when testing a multilingual model for compositionality?
The majority of research on compositional generalization has focused on English data and models. With the ambition to gain a deeper understanding of this issue from a multilingual perspective, we aim to adapt SCAN, an existing compositionality benchmark, into multiple languages in order to evaluate multilingual LLMs for compositional generalization.
Authors
Implementation
N/A
Usage
N/A
Checklist:
I have tested the task with the genbench-cli test-task tool.