[CT-2749] [Spike] Investigate merging `merge_from_artifact` and `add_from_artifact` #7965
Labels: stale (Issues that have gone stale)
github-actions bot changed the title from "[Spike] Investigate merging merge_from_artifact and add_from_artifact" to "[CT-2749] [Spike] Investigate merging merge_from_artifact and add_from_artifact" on Jun 27, 2023.
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on the issue, or else it will be closed in 7 days.

Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest. Just add a comment to notify the maintainers.
Thinking more about this: Do we actually need separate methods for `merge_from_artifact` and `add_from_artifact`? Or could it simply be a single method keyed on the `defer_relation` attribute? Then, all tasks could call this same method. We don't need the divergent behavior between `clone` and `run`, and we set ourselves up more nicely for future work around contract inference, dev/prod diff, ...

Pseudo code:
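A minimal sketch of what that unified method might look like. This is hypothetical pseudo code in the spirit of the proposal, not dbt-core's actual API: the `Manifest`, `Node`, and `Relation` shapes here are simplified stand-ins, and `resolve_relation` is an illustrative helper.

```python
# Hypothetical sketch (not dbt-core's actual API): one unified merge method
# that replaces both merge_from_artifact and add_from_artifact by recording
# the state artifact's relation on a defer_relation attribute per node.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Relation:
    database: str
    schema: str
    identifier: str


@dataclass
class Node:
    unique_id: str
    relation: Relation
    # Populated from the state artifact; None if the node has no counterpart.
    defer_relation: Optional[Relation] = None


class Manifest:
    def __init__(self, nodes):
        self.nodes = nodes  # dict: unique_id -> Node

    def merge_from_artifact(self, other: "Manifest") -> None:
        """Unified merge: set defer_relation from the other (state) manifest
        instead of branching per task (clone vs. run)."""
        for uid, node in self.nodes.items():
            other_node = other.nodes.get(uid)
            if other_node is not None:
                node.defer_relation = other_node.relation


def resolve_relation(node: Node, favor_state: bool = False) -> Relation:
    """Every task can resolve a ref the same way: use the deferred relation
    when it exists and the task favors state, otherwise the local one."""
    if favor_state and node.defer_relation is not None:
        return node.defer_relation
    return node.relation
```

With this shape, `clone`, `run`, and any future task would differ only in how they call `resolve_relation`, not in which merge method they invoke.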
The complication I could foresee here: traditional defer behavior requires the population of the adapter cache (for the "nonexisting" part). I had suggested doing things differently in `clone`'s `before_run` setup so that we defer first, and then cache, so that we can also cache the "other" schemas (indicated by `defer_relation`). I'm not sure if that's actually necessary, or if it's extra complication that isn't really worth it.

Originally posted by @jtcohen6 in #7881 (comment)
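The ordering idea above (defer first, then cache) can be sketched as follows. This is a self-contained, hypothetical illustration: `Adapter`, `schemas_to_cache`, and `before_run` are invented names standing in for dbt-core's real machinery.

```python
# Hypothetical sketch of the caching-order idea: apply deferral first, then
# populate the adapter cache, so the "other" schemas indicated by
# defer_relation are cached too. All names are illustrative, not dbt-core's.

from collections import namedtuple

Relation = namedtuple("Relation", ["database", "schema", "identifier"])


class Node:
    def __init__(self, relation, defer_relation=None):
        self.relation = relation
        self.defer_relation = defer_relation


class Adapter:
    def __init__(self):
        self.cached_schemas = set()

    def cache_schemas(self, schemas):
        self.cached_schemas.update(schemas)


def schemas_to_cache(nodes):
    """Collect (database, schema) pairs for each node's target relation and,
    when set, its deferred ("other") relation."""
    schemas = set()
    for node in nodes:
        schemas.add((node.relation.database, node.relation.schema))
        if node.defer_relation is not None:
            schemas.add((node.defer_relation.database, node.defer_relation.schema))
    return schemas


def before_run(adapter, nodes, apply_defer):
    # Step 1: defer first, so defer_relation is set from the state artifact.
    apply_defer(nodes)
    # Step 2: then cache, which now also covers the deferred schemas.
    adapter.cache_schemas(schemas_to_cache(nodes))
```

If caching ran before deferral, the schemas reachable only via `defer_relation` would be missed, which is exactly the complication the comment raises.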