[Low-Code Migration QA] On-the-fly data comparison to QA connectors #22690
@lazebnyi Can you take a look at this issue? The goal is to use this doc as the basis, but we want a tool that ensures no data is written to storage; instead, all the data has to be held in memory. Does this make sense?
cc'ing @evantahler as well, FYI (and also in case you have ideas to share on the topic)
Well, if the goal is to use seeded sandbox data, and all streams are already tested via SAT/Connector Acceptance, isn't running the acceptance tests enough? If expected_records match, then we are good. If not all streams are well seeded, perhaps the thing to do is to run the connector via Docker and pipe the raw output to a file for both the old and new versions. Then we can use a diff tool to see whether there are any changes.
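The pipe-and-diff idea above could be sketched roughly as follows. This is a minimal sketch, not an official tool: the `run_connector` helper and its Docker arguments are hypothetical, and it assumes each connector version prints newline-delimited Airbyte messages on stdout.

```python
import difflib
import subprocess


def run_connector(image: str, args: list[str]) -> list[str]:
    """Run a connector image via Docker and capture its raw stdout lines.

    Hypothetical helper: real invocations would also mount config/catalog files.
    """
    out = subprocess.run(
        ["docker", "run", "--rm", image, *args],
        capture_output=True,
        text=True,
        check=True,
    )
    return out.stdout.splitlines()


def diff_outputs(old_lines: list[str], new_lines: list[str]) -> str:
    """Unified diff of the two raw outputs; an empty string means no changes."""
    return "\n".join(
        difflib.unified_diff(
            old_lines, new_lines, fromfile="old", tofile="new", lineterm=""
        )
    )
```

With this shape, `diff_outputs(run_connector(old_image, read_args), run_connector(new_image, read_args))` returning an empty string would indicate the two versions produced identical raw output.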
@evantahler This is a related but separate effort, and I think both should happen in parallel. We're working on seeding the accounts, but this effort allows us to use customer data for QA without requiring approval from the customers. I'm looping you into a doc, and we can get back to this issue after we are aligned.
Done - #24421
Based on the latest conversation around the QA topic, we would like to explore creating a tool that compares the results of the low-code connector and the Python connector without writing any data to storage. All comparisons should happen in memory, and the tool should produce a report highlighting the differences between the two.
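One possible shape for the in-memory comparison described above, as a sketch only: it assumes both connectors emit standard Airbyte `RECORD` messages as JSON lines, and the function and report-key names are hypothetical.

```python
import json
from collections import defaultdict


def compare_records(old_lines: list[str], new_lines: list[str]) -> dict:
    """Group RECORD messages by stream and diff the two runs entirely in memory.

    Returns, per stream, the records seen only in one run plus a match flag.
    No data is ever written to disk or external storage.
    """

    def by_stream(lines: list[str]) -> dict:
        streams = defaultdict(list)
        for line in lines:
            msg = json.loads(line)
            if msg.get("type") == "RECORD":
                rec = msg["record"]
                streams[rec["stream"]].append(rec["data"])
        return streams

    old, new = by_stream(old_lines), by_stream(new_lines)
    report = {}
    for stream in sorted(set(old) | set(new)):
        # Canonicalize each record so dict key order does not affect comparison.
        old_set = {json.dumps(r, sort_keys=True) for r in old.get(stream, [])}
        new_set = {json.dumps(r, sort_keys=True) for r in new.get(stream, [])}
        report[stream] = {
            "only_in_old": sorted(old_set - new_set),
            "only_in_new": sorted(new_set - old_set),
            "match": old_set == new_set,
        }
    return report
```

A per-stream report like this could then be rendered as the summary the tool prints, with non-matching streams flagged for review.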