Add openAPI spec for Connector Builder Server #17535
Conversation
    $ref: "#/components/responses/ExceptionResponse"
  "422":
    $ref: "#/components/responses/InvalidInputResponse"
/v1/streams/list:
This endpoint solves two problems listed in the tech spec:
- (tech spec link) allows the webapp to populate the stream dropdown without having to manually parse the connector definition yaml file
- (tech spec link) provides the URLs for each stream so that the webapp does not need to manually parse the connector definition yaml file
Since the backend implementation of this endpoint is just parsing the connector definition file to extract the stream info, it should be very fast. This is important because this request will likely need to be submitted on every change to the yaml contents in the webapp (potentially after some short delay), so that the stream list and URLs are always accurate.
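For illustration, a response from this endpoint might look roughly like the following; the field names here are a sketch based on the discussion above, not necessarily what the final spec uses:

# illustrative response shape only -- field names are assumptions, not the final spec
streams:
  - name: repositories
    url: "https://api.github.com/orgs/airbytehq/repos"
  - name: issues
    url: "https://api.github.com/repos/airbytehq/airbyte/issues"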
type: object
required:
  - request
  - response
I decided to combine a request and the response for that request into a single object here, since I think every request should have a response and it may be more helpful to our users if they are presented as such. But let me know if anyone disagrees with this approach, or if it is overly complex to implement for any reason.
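As a rough sketch of the combined shape (extending the snippet above; the HttpRequest/HttpResponse schema names are assumptions, not necessarily what the spec defines):

# illustrative sketch -- the HttpRequest/HttpResponse schema names are assumptions
type: object
required:
  - request
  - response
properties:
  request:
    $ref: "#/components/schemas/HttpRequest"
  response:
    $ref: "#/components/schemas/HttpResponse"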
body:
  type: object
  description: The body of the HTTP request, if present
headers:
  type: object
  description: The headers of the HTTP request
I wasn't super sure if these should be JSON objects or just strings. I went with JSON objects here because it looks like that's what they are in the debug log messages of the read command, but open to feedback here
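For example, modeled as a JSON object the headers field would carry structured key/value pairs rather than one pre-formatted string (the header names below are just an example):

# example only -- any real request would have its own headers
headers:
  Accept: "application/json"
  Authorization: "Bearer <token>"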
results:
  type: array
  description: The RECORD and STATE AirbyteMessages coming from the read operation
  items:
    type: string
logs:
  type: array
  description: The LOG AirbyteMessages coming from the read operation
  items:
    type: string
I set these to string, because I didn't see an obvious way to reference just the AirbyteMessage or AirbyteLogMessage schema from the protocol file, but I'm open to suggestions here
I figured out how to do this after all! Just needed to reference the whole airbyte protocol in one schema, and then add more slashes when referencing that schema to access its children
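Concretely, this is the pattern that shows up later in the thread: one schema that $refs the whole protocol file, and child definitions referenced through it with extra path segments:

# AirbyteProtocol is a schema that $refs the airbyte protocol yaml (path omitted here);
# its children can then be referenced with additional path segments, e.g.:
items:
  $ref: "#/components/schemas/AirbyteProtocol/definitions/AirbyteMessage"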
Ideas from BG:
url: "https://docs.airbyte.com/connector-development/config-based/overview/"
paths:
  /v1/stream/read:
@girarda @brianjlai see this commit for the changes to this endpoint coming from our Backlog Grooming discussion: 2b52fb0
Here's a screenshot of the updated Swagger view of this spec since it's a little easier to see what the final product looks like there:
I tried a nested grouping here of pages nested inside of slices, with a sliceId and pageId in each nested object.
What do you think of this structure? I thought it made sense because if a user is doing both stream slicing and pagination, then seeing each page broken down by slice makes sense. If they are not using one of those, then there will just be one element at that level, e.g. if they are not using pagination then there will just be a single element in pages.
I also wasn't super sure about sliceId and pageId -- I figured we wanted some way to identify which slices and pages are which, but I wasn't sure if we have something to use as the "IDs" here. Since they are just strings, maybe we can just put in the URL parameter / header that changes for each slice/page?
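A rough sketch of the nesting being described, with illustrative values (note that sliceId and pageId end up being removed later in this thread):

# illustrative shape only -- sliceId/pageId were later dropped
slices:
  - sliceId: "created_at=2021-06-29"
    pages:
      - pageId: "1"
      - pageId: "2"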
in practice, stream slices correspond to states, so we could identify them using the state object, for example:
{
"repository": "airbytehq/integration-test",
"created_at": "2021-06-29T03:44:45Z"
}
we don't currently persist or keep track of the number of pages within a slice, but each page corresponds to a single request/response so we could identify them using a counter
Got it, I think that makes sense as a way to differentiate them
I decided to just remove the sliceId and pageId fields from this. I didn't feel like they would add much value, because the slice "state" can be extracted from the results, and the page number is basically just the index of the page in the pages array.
> the slice "state" can be extracted from the results
the slice represents a range ("2021-05-29T03:44:45Z" - "2021-06-29T03:44:45Z"), whereas the output state only represents the end state.
Do we only care about showing the end state here?
numSlices:
  type: integer
  description: Number of stream slices to read from the source
  default: 1
numPages:
  type: integer
  description: Number of pages to read from the source
  default: 1
Added these request params for the eventual goal of allowing users to specify the number of slices/pages to request when testing
state:
  type: object
  description: State blob to use for incremental streams
Added this state request param as well for testing incremental streams. I'll bring this up in our Low Code Builder Sync, but we will probably eventually want to have a text input where users can enter a JSON state blob, which can be passed in to this parameter.
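For example, a user testing an incremental stream could paste a blob like the state example shown earlier in this thread:

# example state blob (same shape as the state example earlier in this thread)
state:
  repository: "airbytehq/integration-test"
  created_at: "2021-06-29T03:44:45Z"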
I think this is a good addition, but also noting that we'll need to make sure the pane for specifying state is only visible/editable when a stream supports incremental and not just full refresh
stream:
  type: string
  description: Name of the stream to read
numSlices:
open question: do we want to request a number of slices or specify which slice to request?
What would it look like to specify a specific slice to request here? Passing in a state blob, or something else?
Also, based on our discussion this morning around the Next page / Next slice behavior, I think we'll want to update this part of the API to accommodate that behavior
I'm not sure we get much benefit from allowing developers to specify arbitrary middle slices as input via a literal number (or middle pages, for that matter). But maybe I'm not thinking about every scenario.
I like the idea of a state object being passed in during testing; it seems like it would address both cases: testing incremental, but also serving as a means to test a slice in the middle if necessary
definition:
  $ref: "#/components/schemas/ConnectorDefinitionBody"
  description: The config-based connector definition contents
StreamsListRead:
is this how the server will share the schema (whenever we support schema detection)?
I don't think so, because detecting the schema will require making an API call to the source.
This /streams/list endpoint is meant to be very fast, as it will likely be called on every change to the yaml editor contents in order to keep the stream list up to date. I think for schema detection, we will likely just want to do that as part of the /stream/read API call, e.g. add another property to the return value of that function that contains the schema of the results.
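If we go that route, it might look like one more property on the read response, for example (hypothetical, not part of the current spec):

# hypothetical property -- sketch of returning a detected schema from /stream/read
inferredSchema:
  type: object
  description: JSON schema inferred from the records returned by the read operation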
url: "https://docs.airbyte.com/connector-development/config-based/overview/"
paths:
  /v1/stream/read:
should we add endpoints for check and discovery? An alternative would be to specify the operation as part of the request, but I think they're different enough to warrant different endpoints
Note from discussion: check/discover/spec endpoints can be added in a later phase. Phase 1 will only be focused on read. Will add a note about this to the tech spec as well
definition:
  $ref: "#/components/schemas/ConnectorDefinitionBody"
  description: The config-based connector definition contents
StreamsListRead:
would it be valuable to also return the stream's slices so they can be displayed?
What do you mean exactly by "return the stream's slices"? Are you suggesting that this should return all of the possible slices that are configured for a given stream, e.g. if they are using a list slicer? And this would be displayed somewhere in the testing panel?
Discussed this over zoom - the idea here is to have this return a list of all the slices that will be read for this stream, so that they can be displayed somewhere to provide visual feedback on how far through the stream slicing the user has made it. Added a commented-out block here as this can be done in a future phase (will add a note to the tech spec as well)
description: The HTTP request sent to the source API for this page
required:
  - url
  - headers
I'm not 100% sure if headers should be required or not, but it should be consistent with the body and request parameters fields
paths:
  /v1/stream/read:
    post:
how come we want these endpoints defined as POST calls? Because we're using a request body? Or I guess a real sync is sort of a POST call since the end result has records written to a destination, even if the test one doesn't?
Yeah I was sort of following the convention in our main Airbyte API where we use POST calls and pass all of the inputs in the request body. Especially for this API, since we need to pass the entire connector definition contents to the server, I felt like that needed to go in a request body as opposed to being something like a URL parameter. And we can't have request bodies for GET requests.
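For illustration, a /v1/stream/read request body would bundle the full definition with the other parameters discussed in this thread, roughly along these lines (abbreviated sketch, not the exact final spec):

# abbreviated request body sketch based on the fields discussed in this PR
definition: {}      # the full config-based connector definition contents
stream: "issues"    # example stream name
numSlices: 1
state: {}           # optional state blob for incremental streams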
default: 1
numPages:
  type: integer
  description: Number of pages to read from the source
> Number of pages to read per-slice from the source
We should probably be very explicit in the description about what this means. Presumably we mean the number of pages we will retrieve per slice, i.e. slices = 3, pages = 5 means that for each slice we'll get 5 pages worth of records.
As it reads now, it might be conflated to mean the total number of pages, but pages are really a subprocess of slicing
airbyteMessages:
  type: array
  description: The RECORD/STATE/LOG AirbyteMessages coming from the read operation for this page
  items:
    $ref: "#/components/schemas/AirbyteProtocol/definitions/AirbyteMessage"
Since I was able to figure out how to reference the actual airbyte protocol yaml definition for AirbyteMessage in this API spec, I decided to just consolidate the results and logs fields here into a single airbyteMessages field, which will contain all of the AirbyteMessages coming from the connector, and the frontend can just filter down to specific types if needed
@brianjlai @girarda I've updated this spec based on your feedback and what we discussed on zoom, and I think this is ready for another look now. Here are the changes I made this time, not too many: 423097e
items:
  type: object
  required:
    - pages
I think this should have another field describing the slice (it's defined as a JSON object)
@girarda By "describing the slice", are you talking about the state object that is emitted at the end of the slice? If so, that will be contained in the last page's airbyteMessages field.
Or is there some other information about the slice that should be returned here?
I mean the exact slice object, e.g.:
{
start_time: "2021-01-01",
end_time: "2021-01-31"
}
Ah, I understand now. Yeah, I think that would be useful information to show to the user. I will add a sliceDescriptor here.
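A sketch of what that could look like in the spec; the description wording is a guess, and the properties inside the descriptor depend on the slicer (e.g. the start_time/end_time example above):

# sketch only -- the description wording and placement are assumptions
sliceDescriptor:
  type: object
  description: An object describing the slice, e.g. its start and end values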
type: object
description: The headers of the HTTP response, if any
ConnectorDefinitionBody:
  $ref: ../../../../airbyte-cdk/python/airbyte_cdk/sources/declarative/config_component_schema.json
we'll need to make sure to wire up the builds to regen if this referenced file changes
@sherifnada Do you have an example or know how that would be done? If not I can make a ticket
type: integer
description: Number of pages to read from the source for each slice
default: 1
state:
should this be a proper STATE message?
Good callout! Yes it should be, will update
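Presumably that update swaps the plain object for a reference into the protocol schema, mirroring how airbyteMessages references AirbyteMessage elsewhere in this spec (the exact definition name below is an assumption):

state:
  # assumption: reference the protocol's state message definition, in the same way
  # airbyteMessages references AirbyteProtocol/definitions/AirbyteMessage above
  $ref: "#/components/schemas/AirbyteProtocol/definitions/AirbyteStateMessage"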
stream:
  type: string
  description: Name of the stream to read
numPages:
this currently does not exist as a first-class concept in the CDK, i.e. we'd need to make some non-trivial changes to the backend to do this. Any reason we can't just use limit to control the number of records returned?
The idea was that users may want to test out pagination by first fetching the first page of data, then clicking a "Next page" button to load the next page. Since we don't have any good way of fetching a specific page, we thought that it would be easier to implement changes to the CDK to limit the number of pages.
So when the user clicks "Next page", we would fetch 2 pages of data and display the second. Then if they click "Next page" again, we would fetch 3 pages of data and display the third, and so on.
The other motivation for this is that we may not want to fetch all pages of data every time the user clicks "Test", because some APIs may have hundreds or thousands of pages, which could make requesting all of those pages take many seconds or minutes, which wouldn't be a great user experience. So we wanted to limit the number of pages actually being fetched from the source API to mitigate this and make the "Test" experience snappy.
Though, I think it would be fine for the first MVP to not implement this page-limiting behavior and just fetch all pages instead, and maybe this is something we can add later if there is a need for it. So I can comment out this parameter for now
type: object
required:
  - airbyteMessages
  - request
@girarda is there already a plan for how these objects specifically will be returned from the CDK?
I'm pretty sure that'll require changes on multiple layers of the CDK because Sources are not aware of HTTP requests and responses.
I have a rough proposal here: #17839.
Will bring it up for grooming
* master: (304 commits)
  Bump helm chart version reference to 0.40.27 (#18152)
  Bump helm chart version reference to 0.40.26 (#18094)
  Update deployment.yaml (#18151)
  Publishes Postgres, MySQL, MSSQL source with changes from #18041 (#18086)
  Fix minor DBT Cloud Errors. (#18147)
  Sentry Integration : Stop reporting all non system-error error types. (#18133)
  Docs: Fix backoff stategy docs (#18143)
  🐛 Destination GCS: Fix error logs to log 'Gcs' rather than 'AWS' (#17901)
  Add openAPI spec for Connector Builder Server (#17535)
  Alex/mvp UI for dbt cloud integration (#18095)
  increased timeout for sat tests (#18128)
  Bmoric/remove dep connector worker (#17977)
  `recordsRead` should be a long (#18123)
  doc_update_oath_issue_gsc (#17967)
  🎉 Source Zendesk Chat: engagements data fix infinity looping + gradlew format (#18121)
  🐛 Source Zendesk Chat: engagements data fix infinity looping (#17745)
  Custom APM Tracing (#17947)
  11679 BigQuery-Denormalized Destination: improve code coverage (#17827)
  increased timeout for sat tests (#18114)
  docs: clarify language (#18090)
  ...
* add openapi spec
* add 'a'
* rename stream test to stream read and add logs
* move logs
* group results by slice/page and add more request params
* address PR/zoom feedback
* move request and response into their own definitions
* add sliceDescriptor
* fix type of state prop and remove numPages
* change order
What
Resolves #17424
Adds an OpenAPI spec for the Connector Builder Server, which will provide the backend functionality for the Connector Builder web application, as described in the tech spec: https://docs.google.com/document/d/11HrieUnA7oa6YsDhOVZpAVkOoZt2RWQQWJLPf7xQodY/edit#heading=h.ugjhw5e99cnr