Updating Paper Metadata for 2024.eacl-demo.23 #3177
firojalam added the correction (for corrections submitted to the anthology) and metadata (Correction to metadata) labels on Apr 2, 2024
@firojalam Could you highlight the missing or incorrect author names?
@anthology-assist The missing author is Hawasly, Majd. Thank you.
xinru1414 pushed a commit that referenced this issue on May 22, 2024
anthology-assist added the pending (has been dealt with and is awaiting a PR to be merged) label on May 22, 2024
This correction was made to the wrong paper. Can we please carefully check the author lists for the following two papers?
Dear Matt,
Thanks for your email.
Unfortunately, the bibs are now incorrect for both of our papers.
Regards,
Firoj
................
Firoj Alam, PhD
http://sites.google.com/site/firojalam/
…On Thu, 4 Jul 2024 at 5:21 AM Matt Post ***@***.***> wrote:
This correction was made to the wrong paper. Can we please carefully check
the author lists for the following two papers?
- https://aclanthology.org/2024.eacl-demo.23/
- https://aclanthology.org/2024.eacl-long.30/
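For re-checking the two papers mentioned above, one option is to pull the Anthology's BibTeX export for each ID and print the author fields side by side. A minimal Python sketch, assuming the aclanthology.org/<anthology-id>.bib export URL pattern and the quoted author = "... and ..." field format seen in the BibTeX entry quoted later in this issue:

import re
import urllib.request

# The two papers whose author lists were questioned in this thread.
PAPERS = ["2024.eacl-demo.23", "2024.eacl-long.30"]

def fetch_authors(anthology_id):
    """Download the Anthology's BibTeX export and return its author list."""
    url = f"https://aclanthology.org/{anthology_id}.bib"  # assumed export URL pattern
    bib = urllib.request.urlopen(url).read().decode("utf-8")
    match = re.search(r'author\s*=\s*"([^"]+)"', bib)
    if not match:
        return []
    # BibTeX separates authors with "and"; collapse the line-wrapping whitespace.
    return [" ".join(a.split()) for a in re.split(r"\s+and\s+", match.group(1))]

for paper in PAPERS:
    authors = fetch_authors(paper)
    print(f"{paper} ({len(authors)} authors)")
    for name in authors:
        print("  ", name)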
mjpost added a commit that referenced this issue on Jul 9, 2024
mjpost pushed a commit that referenced this issue on Jul 9, 2024
* Paper Revision: 2024.lrec-main.336, closes #3352.
* Paper Metadata: 2023.findings-emnlp.410, closes #3353.
* Paper Metadata: 2024.sigul-1.27, closes #3354.
* Paper Metadata: 2023.acl-long.735, closes #3361.
* Paper Revision 2023.acl-long.519, closes #3362.
* Paper Revision 2024.lrec-main.1021, closes #3365.
* Paper Metadata: 2024.lrec-main.1021, closes #3364.
* Paper meta data 2024.lrec-main.424, closes #3366.
* Paper Metadata: 2024.signlang-1.15, closes #3368.
* Paper Revision of 2023.acl-long.599, closes #3373.
* Paper Revision: 2023.findings-emnlp.986, closes #3374.
* Paper Revision: 2023.repl4nlp-1.23, closes #3375.
* Paper Revision 2024.eacl-tutorials.1, closes #3376.
* Paper Metadata: 2024.eacl-tutorials.1, closes #3377.
* Paper Revision 2023.alp-1.6, closes #3378.
* Paper Revision 2022.lrec-1.571, closes #3379.
* Paper revision for 2024.findings-eacl.100, closes #3384.
* Paper Revision: 2024.lrec-main.405, closes #3389.
* 2024.lrec-main.919, closes #3385.
* Paper Metadata: 2024.ecnlp-1.8, closes #3386.
* Paper Metadata: 2024.determit-1.5, closes #3387.
* Author meta data for Wasi Ahmad.
* Paper Revision 2024.lrec-main.872, closes #3396.
* Paper Revision: 2024.lrec-main.1536, closes #3397.
* Paper Metadata: 2024.cogalex-1.13, closes #3399.
* Herman -> Hermann
* Paper Revision 2024.naacl-long.41, closes #3400.
* Paper Revision: 2024.humeval-1.25, closes #3538.
* Paper Revision: 2024.naacl-long.159, closes #3562.
* Paper Revision 2024.naacl-long.53, closes #3406.
* Paper Revision 2024.lrec-main.1250, closes #3459.
* Paper erratum for 2024.sigul-1.38, closes #3394.
* Paper Revision 2024.naacl-long.467, closes #3576.
* Paper Revision 2024.naacl-long.117, closes #3573.
* Paper Metadata: 2022.findings-acl.196, closes #3579.
* Paper Metadata: 2024.naacl-long.23, closes #3559.
* Paper meta data: 2024.ltedi-1.17, closes #3578.
* Paper Metadata: 2022.ltedi-1.43, closes #3584.
* Paper Metadata: 2024.naacl-long.417, closes #3580.
* Author Metadata: Xuan Long Do, closes #3558.
* Author Metadata: Ivan P. Yamshchikov, closes #3544.
* Paper Metadata: 2024.naacl-long.211, closes #3535.
* Paper Metadata: 2024.naacl-srw.11, closes #3531.
* Paper abstract correction for 2024.eurali-1.1, closes #3526.
* Author Metadata: Joel Ruben Antony Moniz, closes #3499, #3497, #3498.
* Paper Metadata: 2022.nllp-1.13, closes #3492.
* Paper Metadata: 2022.findings-acl.131, closes #3491.
* Paper Metadata: 2023.findings-emnlp.892, closes #3473.
* Paper Metadata: 2023.findings-emnlp.850, closes #3442.
* Paper Revision 2023.emnlp-main.413, closes #3596.
* Paper Revision 2021.emnlp-main.636, closes #3595.
* Paper Revision 2024.naacl-long.351, closes #3594.
* Paper Revision 2024.naacl-long.24, closes #3592.
* Paper Revision 2024.naacl-long.236, closes #3591.
* Author Metadata: Manuel Vargas Guzmán, closes #3601.
* Author Metadata: 2024.naacl-long.441, closes #3600.
* Paper Metadata: 2024.naacl-long.236, closes #3589.
* Author name meta data for 2024.findings-naacl.258, closes #3588.
* Author Metadata: Haraldur Hauksson, closes #3577.
* Paper Metadata: 2024.findings-naacl.253, closes #3570.
* Paper Metadata: 2024.naacl-long.342, closes #3569.
* Author Metadata: Md Omar Faruqe, closes #3606.
* Paper Metadata: 2024.bea-1.51, closes #3604.
* Paper Metadata: 2024.readi-1.4, closes #3609.
* Author Metadata: Roshan Sharma, closes #3566.
* Author Metadata: Derry Wijaya, closes #3557.
* Author Metadata: HyoJung Han, closes #3553.
* Author Metadata: G. M. Shibli, closes #3509.
* Name variant: Chandra Kiran Reddy Evuru
* Name correction (#3177)
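The merged fix lives in the Anthology's XML data rather than in the BibTeX export, so one way to verify the corrected author list is against a checkout of this repository. A minimal sketch, assuming the collection file sits at data/xml/2024.eacl.xml and uses the usual volume/paper/author (first/last) layout; both the path and the schema details are assumptions, not taken from this issue:

import xml.etree.ElementTree as ET

# Assumed location of the EACL 2024 collection in an acl-anthology checkout.
XML_PATH = "data/xml/2024.eacl.xml"

root = ET.parse(XML_PATH).getroot()

# Find paper 23 in the "demo" volume and print its author names.
for volume in root.iter("volume"):
    if volume.get("id") != "demo":
        continue
    for paper in volume.iter("paper"):
        if paper.get("id") != "23":
            continue
        for author in paper.findall("author"):
            first = author.findtext("first", default="")
            last = author.findtext("last", default="")
            print(f"{first} {last}".strip())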
This issue has been resolved.
Confirm that this is a metadata correction
Anthology ID
2024.eacl-demo.23
Type of Paper Metadata Correction
Correction to Author Name(s)
Correction to Paper Title
No response
Correction to Paper Abstract
No response
Correction to Author Name(s)
@inproceedings{dalvi-etal-2024-llmebench,
title = "{LLM}e{B}ench: A Flexible Framework for Accelerating {LLM}s Benchmarking",
author = "Dalvi, Fahim and
Hasanain, Maram and
Boughorbel, Sabri and
Mousi, Basel and
Abdaljalil, Samir and
Nazar, Nizi and
Abdelali, Ahmed and
Chowdhury, Shammur Absar and
Mubarak, Hamdy and
Ali, Ahmed and
Hawasly, Majd and
Durrani, Nadir and
Alam, Firoj",
editor = "Aletras, Nikolaos and
De Clercq, Orphee",
booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = mar,
year = "2024",
address = "St. Julians, Malta",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.eacl-demo.23",
pages = "214--222",
abstract = "The recent development and success of Large Language Models (LLMs) necessitate an evaluation of their performance across diverse NLP tasks in different languages. Although several frameworks have been developed and made publicly available, their customization capabilities for specific tasks and datasets are often complex for different users. In this study, we introduce the LLMeBench framework, which can be seamlessly customized to evaluate LLMs for any NLP task, regardless of language. The framework features generic dataset loaders, several model providers, and pre-implements most standard evaluation metrics. It supports in-context learning with zero- and few-shot settings. A specific dataset and task can be evaluated for a given LLM in less than 20 lines of code while allowing full flexibility to extend the framework for custom datasets, models, or tasks. The framework has been tested on 31 unique NLP tasks using 53 publicly available datasets within 90 experimental setups, involving approximately 296K data points. We open-sourced LLMeBench for the community (https://github.com/qcri/LLMeBench/) and a video demonstrating the framework is available online (https://youtu.be/9cC2m{_}abk3A)).",
}
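To sanity-check the corrected entry above before it is applied, one can parse it and confirm that the author field now includes Hawasly, Majd. A small sketch, assuming the bibtexparser 1.x API and a hypothetical local file 2024.eacl-demo.23.bib containing the entry above (both assumptions, not from this issue):

import re
import bibtexparser  # pip install bibtexparser (1.x API assumed)

# Hypothetical local copy of the corrected BibTeX entry shown above.
with open("2024.eacl-demo.23.bib", encoding="utf-8") as f:
    db = bibtexparser.loads(f.read())

entry = db.entries[0]
# BibTeX joins authors with "and"; normalize the line-wrapping whitespace.
authors = [" ".join(a.split()) for a in re.split(r"\s+and\s+", entry["author"])]

print(f"{entry['ID']}: {len(authors)} authors")
assert "Hawasly, Majd" in authors, "missing author not present in the corrected entry"

The same check could be pointed at the live aclanthology.org export once the correction is merged.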