Full query string helpers #1604
Conversation
Considering the limitations of what we already have, these additions are most welcome. Ping me if you ever want to pair on a Cookbook addition for this.
I'm not using Haskell at work any more and I don't have time to revive this PR myself, but I still think it's a worthwhile addition to servant, and I'm available to help anyone who wants to get it over the finish line.
🎉 🎉 🎉 🎉
@tchoutri Since this PR broke backwards compatibility, we need to do a major version bump. Do we do that now, or just before the release? The advantage of doing it now is that we don't have to remember later that this PR changed the interface.
@ysangkok Yep, let's do it right away. If you can't do it in the next 10 hours, I'll do it in my morning. :)
In order for LLMs to interact with the REST API (which is one of the goals of the so-called LAG project), we need to provide an OpenAPI 3-compatible API. We do this by depending on 'servant-openapi3' and adding the appropriate instances for the servant REST API. Because of some issues, it seems impossible (or very difficult) to make custom GPTs invoke REST endpoints with JSON bodies, so for now we simply send all parameters via query parameters. Until the next servant release, we have to explicitly name all query parameters. See haskell-servant/servant#1604 for the PR that we are interested in.
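For context, a minimal sketch of that approach, assuming servant-openapi3's toOpenApi entry point and aeson for serialisation; SearchAPI and its explicitly named query parameters are made up for illustration and are not part of the PR or the referencing project:

```haskell
{-# LANGUAGE DataKinds     #-}
{-# LANGUAGE TypeOperators #-}

module OpenApiSketch where

import Data.Aeson (encode)
import qualified Data.ByteString.Lazy.Char8 as BL
import Data.Proxy (Proxy (..))
import Data.Text (Text)
import Servant.API
import Servant.OpenApi (toOpenApi)

-- Hypothetical endpoint: every input is an explicitly named query
-- parameter rather than a JSON request body.
type SearchAPI =
       "search"
    :> QueryParam "query" Text
    :> QueryParam "limit" Int
    :> Get '[JSON] [Text]

-- Print the generated OpenAPI 3 document as JSON.
main :: IO ()
main = BL.putStrLn (encode (toOpenApi (Proxy :: Proxy SearchAPI)))
```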
The goal of this PR is to address a use-case that is not currently covered by servant: query strings with dynamic parameter names. QueryParam and its friends are super useful, but they extract the parameter name from a type-level string; that covers many cases, but not every use-case. A typical example is a search filter whose parameters are not static and can't be enumerated ahead of time. I propose two helpers, directly derived from a use case that came up at work.

Raw query string capture
QueryString gives access to the full query string as a [(ByteString, Maybe ByteString)] value. It cannot fail and provides raw access to the query string: QueryString :> Get '[JSON] [Book]. This was the first version that I implemented for my concrete use-case at work, to get things going.

Deep object query parsing
DeepQuery parses deep objects from the query string, using [] for a list of parameters: books?filter[search]=value&filter[author][name]=value. The corresponding type would be DeepQuery "filter" BookQuery :> Get '[JSON] [Book]. This is based on what's currently used for my use-case at work.
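As a rough sketch of how these two helpers could be wired into a server, assuming they are exported from Servant as proposed here. Book, BookQuery, the "books" path segment, and the empty handler bodies are hypothetical placeholders; serving the DeepQuery endpoint would additionally need an instance of the parsing class that turns the nested filter parameters into a BookQuery.

```haskell
{-# LANGUAGE DataKinds     #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE TypeOperators #-}

module QueryHelpersSketch where

import Data.Aeson (ToJSON)
import Data.ByteString (ByteString)
import GHC.Generics (Generic)
import Servant

-- Hypothetical domain types, only for illustration.
data Book = Book { title :: String } deriving (Generic)
instance ToJSON Book

data BookQuery = BookQuery
  { search     :: Maybe String
  , authorName :: Maybe String
  }

-- Raw capture: GET /books?anything=goes hands the handler every
-- key/value pair of the query string untouched.
type RawAPI = "books" :> QueryString :> Get '[JSON] [Book]

rawHandler :: [(ByteString, Maybe ByteString)] -> Handler [Book]
rawHandler _params = pure []  -- real filtering logic would go here

-- Deep objects: GET /books?filter[search]=x&filter[author][name]=y is
-- decoded into a BookQuery before the handler runs (given a parsing
-- instance for BookQuery).
type DeepAPI = "books" :> DeepQuery "filter" BookQuery :> Get '[JSON] [Book]

deepHandler :: BookQuery -> Handler [Book]
deepHandler _query = pure []  -- real filtering logic would go here
```

The split mirrors the two helpers described above: QueryString never fails and leaves all interpretation to the handler, while DeepQuery moves that parsing into the API type.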