[SIP-128] AI/LLM query generation in SQL lab #28167
Comments
This will need to be put up for a DISCUSS thread on the mailing list to move forward, but I think the proposal needs more detail/resolution. We'll need to cover a lot of things that are missing from this SIP:
• What are the security/privacy implications?
• How do we (as an open-source solution) stay vendor-agnostic here? What's the abstraction layer?
Two options so far (draft) - feel free to add what you think. (I probably need to find a way to make this collaborative.) If maintainers can create a shared sheet, that works; otherwise I can create a spreadsheet for evaluating and suggesting the various approaches and implementation ideas.
Based on my evaluation, using LangChain is the better option.
@surapuramakhil thanks. I think it makes sense to update the description with all the new info and make sure you are covering all the technical/architectural considerations. First question that comes to mind: how do you intend to pull the right metadata from the database for the LLM to use? There is a limited context window, and you just can't pull in the whole schema, for both context and performance reasons.
@geido based on my research, LangChain already solves this. They wrote pipelines for generating queries from text that work with any LLM model, so we can just piggyback on that. All I am planning is to have an llm_provider or llm_factory that creates an LLM based on user needs and passes it to their pipeline.
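To make the metadata question concrete, here is a minimal sketch (not from the thread) of how the schema could be narrowed before prompting, assuming LangChain's SQLDatabase wrapper; the naive keyword ranking is just a stand-in for a real retrieval step:

```python
from langchain_community.utilities import SQLDatabase

def pick_relevant_tables(question: str, all_tables: list[str], k: int = 5) -> list[str]:
    # Naive keyword overlap; embeddings over table names/descriptions
    # would work better, but this shows the shape of the step.
    words = set(question.lower().split())
    scored = sorted(all_tables, key=lambda t: -len(words & set(t.lower().split("_"))))
    return scored[:k]

def build_scoped_db(uri: str, question: str, all_tables: list[str]) -> SQLDatabase:
    # include_tables restricts what get_table_info() emits, keeping the
    # prompt small even for warehouses with hundreds of tables.
    relevant = pick_relevant_tables(question, all_tables)
    return SQLDatabase.from_uri(uri, include_tables=relevant)
```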
@geido I have updated the description as you suggested. Let's try LangChain and see the results.
It looks more like a toy for now:
This won't work for production databases that might have hundreds of tables and columns.
I think having LangChain in the repo might be a nice thing to enable LLM-related capabilities. However, that would be a separate SIP to illustrate how LangChain could be leveraged in the repo. It looks like starting from SQL generation is hard.
Why do you think so? It's the first use case that Apache Superset needs.
As someone who has actually implemented this exact idea in Superset for a hackathon a few months back, this is a pipe dream at best (to be fairly blunt). Using RAG to pull relevant table metadata at prompt time still led to unmanageable levels of LLM hallucination that only grows worse as the size of the warehouse being queried increases. Something like this may be feasible for a user with a handful of tables, but at scale it simply doesn't work. And a query that is 99% correct is functionally worthless if this is intended to be utilized by folks who don't have the skills necessary to parse through AI-generated SQL.
This is a known problem with language models. That's exactly why the choice of LLM is given to users: if the scale is high, the best they can do is use a high-context-size model like Gemini 1.5 Pro.

This is a separate data science problem that Apache Superset doesn't need to solve. The LangChain community (quite popular in data science) is already solving it, and we can just leverage their work. Query checking might protect against hallucination: https://python.langchain.com/docs/use_cases/sql/query_checking/ As both evolve over time, the quality of the generated queries will get better and better.

I agree with you that this doesn't fully solve the problem for those who lack the knowledge needed to understand AI-generated SQL. It's a copilot, not an autopilot.
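For reference, the query-checking idea in that link boils down to a second LLM pass over the generated SQL. A minimal sketch, assuming a chat model is available (the model name and prompt wording here are illustrative, not from the thread):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

check_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Double check the user's {dialect} query for common mistakes, such as "
     "NULL handling, join columns, quoting, and function arguments. "
     "Rewrite the query if needed; otherwise return it unchanged."),
    ("human", "{query}"),
])

# Pipe the prompt into the model; the output is the (possibly corrected) SQL.
checker = check_prompt | llm | StrOutputParser()
fixed_sql = checker.invoke({
    "dialect": "postgresql",
    "query": "SELECT * FROM orders WHERE total > 100",
})
```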
Ah, I have found this: it's a premium feature of Preset.
@surapuramakhil do you still intend to move forward with this?
Hi @rusackas. We have implemented LLM-based query generation for our use case using a self-hosted model. We have also developed an adapter that can support popular LLM-as-a-service platforms like ChatGPT through API key configuration. What is the process to move this SIP forward now? Should we create a PR?
@ved-kashyap-samsung are you on Apache Superset Slack? You can find me there as "Diego Pucci". I was the lead engineer for AI Assist for Preset, so I should be able to help with getting the SIP right. Please get in touch.
@ved-kashyap-samsung I think adding AI to Superset requires a proposal for consensus on the approach. If you want to open a PR with what you have, you're more than welcome to, but it's unlikely it'll get merged without going through a SIP process. You can add your details/approach here if you want to use this SIP, or you can open your own SIP. Please reach out on Slack if you'd like assistance.
Last call for interested parties to sign up and dial in this proposal. In a couple more weeks, this will have gone 6 months without being brought up for discussion on the ASF mailing list, and will be closed as inactive. Thanks to everyone, however it plays out :)
Maybe a less exciting idea: could we just use a Chrome extension for this purpose? It would be much easier to use alongside Superset, without worrying about keeping up with updates to Superset itself.
You can absolutely use/create a Chrome plugin. I think it could be a Superset plugin if you want to author such a thing, or something more generalized. Either way, you wouldn't need a SIP, but we'd be happy to help evangelize the effort if it comes to fruition.
Our team has started work on a Superset plugin. If anyone has made progress and wants to collaborate, let's talk. Thanks.
@mkrishna23 how should interested parties get in touch?
Closing this one out as Discarded since there seems to be no interest.
Please make sure you are familiar with the SIP process documented here. The SIP will be numbered by a committer upon acceptance.
[SIP] Proposal for AI/LLM query generation in SQL Lab
Motivation
To make Apache Superset dashboard/chart creation possible for users without a dev/SQL background.
#27272
Proposed Change
Describe how the feature will be implemented, or the problem will be solved. If possible, include mocks, screenshots, or screencasts (even if from different tools).
This is the current SQL Lab view, showing the SQL editor box:
Forward user prompts to the LLM model along with other system prompts that share database schema information (consider it a form of RAG) for quality responses. Optionally, add whatever extra queries/prompts are required for understanding the data, e.g. sharing the first 10 rows of a table, or the distinct values of a column (whatever is necessary).
Populate the editor with the query generated by the model.
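A minimal sketch of what that prompt assembly could look like, assuming LangChain's SQLDatabase helper (the connection URI and prompt text are placeholders):

```python
from langchain_community.utilities import SQLDatabase

# sample_rows_in_table_info appends example rows to each table's schema
# description, giving the model a feel for the actual values.
db = SQLDatabase.from_uri(
    "sqlite:///superset_examples.db",
    sample_rows_in_table_info=10,
)

system_prompt = (
    "You translate natural-language questions into SQL "
    "for the following schema:\n\n"
    f"{db.get_table_info()}"
)
```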
Query Generation
There are already pipelines in LangChain for this: https://python.langchain.com/docs/use_cases/sql/quickstart/#convert-question-to-sql-query.
We can use these pipelines to generate queries from text. They work with any LLM model, so we can just piggyback on that. All I am planning is to have an llm_provider or llm_factory that creates an LLM based on user needs and passes it to their pipeline, as sketched below.
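A rough sketch of that plan (the llm_factory name and provider list are hypothetical, not an existing Superset API; create_sql_query_chain is the LangChain pipeline from the link above):

```python
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase

def llm_factory(provider: str, **kwargs):
    # Each branch returns a LangChain chat model; the providers shown
    # are examples, and more could be registered the same way.
    if provider == "openai":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(**kwargs)
    if provider == "anthropic":
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(**kwargs)
    raise ValueError(f"Unknown LLM provider: {provider}")

db = SQLDatabase.from_uri("sqlite:///superset_examples.db")
llm = llm_factory("openai", model="gpt-3.5-turbo", temperature=0)

# create_sql_query_chain builds the question -> SQL chain from the docs.
chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "How many customers placed an order last month?"})
```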
A Few Technical Implementation Considerations
Exposing LLM access as an API gives users the choice of using existing services rather than deploying a model themselves. Packaging an LLM inside the Superset deployment is not feasible.
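For illustration only, configuration for this could live in superset_config.py; none of these settings exist today, they just show the shape of an API-key-based setup:

```python
import os

# Hypothetical settings -- not part of Superset's current config.
LLM_QUERY_GENERATION_ENABLED = True
LLM_PROVIDER = "openai"                   # or "anthropic", "self-hosted", ...
LLM_API_BASE = "https://api.openai.com/v1"
LLM_API_KEY = os.environ["LLM_API_KEY"]   # read from the environment; never hardcode secrets
LLM_MODEL = "gpt-3.5-turbo"
```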
Backend Architecture Diagram
New or Changed Public Interfaces
Describe any new additions to the model, views or REST endpoints. Describe any changes to existing visualizations, dashboards and React components. Describe changes that affect the Superset CLI and how Superset is deployed.

New dependencies
Describe any npm/PyPI packages that are required. Are they actively maintained? What are their licenses?

Migration Plan and Compatibility
Describe any database migrations that are necessary, or updates to stored URLs.
Rejected Alternatives
Describe alternative approaches that were considered and rejected.