I am new to Spider2 and looking for guidance on how to use it effectively for benchmarking our existing Text-to-SQL model. While I have successfully set up Spider2 locally, I am unsure about the next steps to proceed with benchmarking.
Here is what I have done so far:
Followed the steps mentioned in the Spider2 Lite documentation.
Copied the SQLite files and executed evaluate.py.
However, I am not sure how to run Spider2 to evaluate my own Text-to-SQL model. Could you please guide me on:
The steps required to configure Spider2 to evaluate a custom Text-to-SQL model.
How to integrate my model into the Spider2 benchmarking pipeline.
Any resources, tutorials, or documentation that would help me better understand and use Spider2 for this purpose.
Best practices for benchmarking Text-to-SQL models using Spider2.
I would greatly appreciate any pointers or advice to get started.
Thank you in advance for your time and help!
Thank you for your prompt response, I really appreciate it. I have installed the dependencies in dinsql and tried to run the run.sh script, but I am getting a token-limit-exceeded error. It is more of a GPT issue, and I am now trying to process the prompt in chunks, roughly like the sketch below.
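This is just a rough sketch of what I mean by chunking; the 6,000-token budget and the four-characters-per-token heuristic are placeholders I picked, not values from Spider2 or DIN-SQL:

```python
# Split an oversized prompt on line boundaries so each chunk stays
# under a rough token budget. ~4 characters per token is a crude
# approximation, not a real tokenizer.
def chunk_prompt(prompt: str, max_tokens: int = 6000) -> list[str]:
    max_chars = max_tokens * 4
    chunks, current, current_len = [], [], 0
    for line in prompt.splitlines(keepends=True):
        if current and current_len + len(line) > max_chars:
            chunks.append("".join(current))
            current, current_len = [], 0
        current.append(line)
        current_len += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

A couple more questions: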
Is there a minimum dataset size or number of SQL queries required for evaluation?
Do we need to provide preprocessed JSON files, or will these be created from a SQL dump if we supply a dump of our own database? (I have sketched below the kind of schema extraction I imagine.)
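To make that question concrete, this is the kind of schema extraction I imagine doing myself if preprocessed files are required; the output keys ("table", "columns") are my own guess at a format, not Spider2's actual preprocessed JSON:

```python
import json
import sqlite3

def dump_schema(db_path: str, out_path: str) -> None:
    """Write table/column metadata from a SQLite file as JSON."""
    conn = sqlite3.connect(db_path)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    schema = []
    for table in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
        cols = conn.execute(f"PRAGMA table_info('{table}')").fetchall()
        schema.append({
            "table": table,
            "columns": [{"name": c[1], "type": c[2]} for c in cols],
        })
    conn.close()
    with open(out_path, "w") as f:
        json.dump(schema, f, indent=2)
```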
I am new to this field, so you might find my questions basic.