Conversation
…ts per host and current task
… latest versions for all subsets of entered assets
…for specific asset by entered workfile variants
…ing by current context
This is working really well !!!
I've tried with multiple configurations on multiple tasks and it got what I expected every time. Well done!
So let's just deal with your original points to discuss and it will be ready to merge.
Let's drop the workfile creation altogether. I think this should simply load all the expected content into the current file and let user deal with the actual saving. I can very much imagine using this on later version of the file, if I need to reset for example.
that's fine, but let's move the preset from
Not entirely sure what this PR is intended to do - admittedly the code was a bit hard to read through to get a glimpse of what exactly it is trying to do. Anyway, I did have some notes on code that stood out as I skimmed it.
Also, this could really benefit from a README.md or something to explain how to use it. Hehe.
pype/lib.py
Outdated
valid_subsets_by_id[subset_id] = subset
variants_per_subset_id[subset_id] = variant

# break variants loop if got here
This comment could be improved by describing why it breaks there, as opposed to stating that it breaks the loop - right now it only restates what the `break` statement does. Should it maybe instead say e.g. `# break variants loop on finding the first matching variant`?
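To make the suggestion concrete, here is a minimal sketch of such a matching loop with a "why", not "what", comment at the break. The function and field names (`first_matching_variant`, `repre_names`) are hypothetical simplifications, not the actual pype code:

```python
def first_matching_variant(variants, available_repre_names):
    """Return the first variant whose representation names are all available.

    Hypothetical simplification of the variant-matching loop discussed above.
    """
    matched = None
    for variant in variants:
        if set(variant["repre_names"]).issubset(available_repre_names):
            matched = variant
            # Break on the first matching variant: variants are ordered by
            # priority, so later entries are fallbacks that must not
            # override an earlier match.
            break
    return matched
```

The comment now tells a future reader that ordering matters, which the bare `break` did not.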
There's also been a conversation about turning this into its own class (which I agree with, btw), but unfortunately it fell in the battle of converting repos to public... meaning I accidentally deleted the convo :). @antirotor can you please have a look at the code one more time and comment? We need to keep in mind that this will likely be heavily expanded in the future with things like linking assets together, loaded asset packages, and possibly triggering things like publishing (auto scene build and send to farm, anyone?), like what was discussed in getavalon/core#130. @jezscha I've tested this in nuke as well and it works really well. Can you please have a look? Is there anything in particular you need it to return to be able to hook the kind of scene build we have in nuke now onto it? As far as I can tell you can do as you please with the loaded containers, considering the function returns them after it runs.
I have a few remarks about the code structure:

- I would get it out of `lib.py`. This deserves its own package. It is quite messy to track changes in a file with 500+ lines of code, and this one has ~1300.
- There is no ftrack- or host-specific code, so maybe it is not necessary to have it as a class. But I would nevertheless mark functions as private - and I believe most of them are. If there is even the slightest chance that this will have to use host-specific code, I would wrap it as a class so each host-specific implementation can inherit and override what's necessary without needing to import specific functions from some package.
- I would break `load_containers_by_asset_data()` into more pieces. The linter warning about too-high complexity is in this case very true - consider writing a test for such a function; it would be impossible to test all lines of this complex function. Most of it is for loops that can be simplified or made more readable by using filter functions, generators, etc.
- More docstrings 🙏
- Some value pairs used throughout the code could be written as tuples rather than independent pairs of variables, as they are connected. For example:

  variant_loader_names = variant["loaders"]
  variant_loader_count = len(variant_loader_names)

  can be:

  var_loader = (variant["loaders"], len(variant["loaders"]))

  As a tuple is immutable, it ensures those two values belong together, and using it later takes less code to write (accessing `var_loader[0]` is shorter than `variant_loader_names`). Yes, that is cosmetics 😺
- If this code gets taken out into a separate class or package, it is good practice to define all regexes somewhere at the top. And when they are used inside loops, it's best to compile them first (`re.compile`).

And one last thing - I sort of dislike the name "variant". I think that in this case it isn't something variable per se; maybe something like "data template" or "subset template" would be more correct.
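To illustrate two of the remarks above (the class wrapper for host-specific overrides, and module-level precompiled regexes), here is a minimal sketch. All names here (`AbstractWorkfileBuilder`, `NukeWorkfileBuilder`, `SUBSET_NAME_PATTERN`) are hypothetical, not the actual pype API:

```python
import re

# Define and compile regexes once at module level rather than inside loops;
# this pattern and its groups are illustrative only.
SUBSET_NAME_PATTERN = re.compile(r"^(?P<family>[a-z]+)(?P<variant>.*)$")


class AbstractWorkfileBuilder:
    """Host-agnostic skeleton; hosts subclass and override what they need."""

    def collect_families(self, subset_docs):
        # Shared, host-independent logic lives in the base class.
        return [
            match.group("family")
            for doc in subset_docs
            for match in [SUBSET_NAME_PATTERN.match(doc["name"])]
            if match
        ]

    def load_containers(self, families):
        # Host-specific step: each DCC integration provides its own loading.
        raise NotImplementedError


class NukeWorkfileBuilder(AbstractWorkfileBuilder):
    def load_containers(self, families):
        # A real implementation would call the host's load API here.
        return ["loaded:{}".format(name) for name in families]
```

The base class keeps the shared pipeline in one place while each host only overrides the pieces that genuinely differ.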
pype/lib.py
Outdated
subset = valid_subsets_by_id[subset_id]
subset_name = subset["name"]

variant = variants_per_subset_id[subset_id]

variant_loader_names = variant["loaders"]
variant_loader_count = len(variant_loader_names)

variant_repre_names = variant["repre_names"]
variant_repre_count = len(variant_repre_names)
Suggested change - replace:

subset = valid_subsets_by_id[subset_id]
subset_name = subset["name"]
variant = variants_per_subset_id[subset_id]
variant_loader_names = variant["loaders"]
variant_loader_count = len(variant_loader_names)
variant_repre_names = variant["repre_names"]
variant_repre_count = len(variant_repre_names)

with:

subset = (valid_subsets_by_id[subset_id], valid_subsets_by_id[subset_id]["name"])
variant = variants_per_subset_id[subset_id]
variant_loader = (variant["loaders"], len(variant["loaders"]))
variant_repre = (variant["repre_names"], len(variant["repre_names"]))
see my review comment about using tuples
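As an alternative to the bare tuples in the suggestion above, a `namedtuple` keeps the immutability and pairing while staying self-documenting at the call site. This is only a sketch; `VariantLoaders` and the sample data are hypothetical:

```python
from collections import namedtuple

# Pair the connected values under one immutable, named type.
VariantLoaders = namedtuple("VariantLoaders", ["names", "count"])

variant = {"loaders": ["ReferenceLoader", "ImportLoader"]}
variant_loader = VariantLoaders(
    names=variant["loaders"],
    count=len(variant["loaders"]),
)

# Immutable like a plain tuple, but `variant_loader.names` reads better
# than `variant_loader[0]`, and positional access still works.
```

This sidesteps the readability cost of indexing into anonymous tuples that the review discussion touches on.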
re-submitted review text by accident, please ignore.
precompile regex Co-Authored-By: Ondřej Samohel <33513211+antirotor@users.noreply.github.com>
The current state of this PR was "for testing" whether it works, because I wasn't sure I fully understood the feature and my implementation, so I'm glad most of the comments are not logic-based :)
I realized I didn't understand the feature properly: both methods
yes exactly. Even though I can very much imagine that extending this by actually saving the workfile will be desirable in some cases, this framework should focus on finding and loading required content only. For example, to recreate the behavior of the current
fully agreed. We should come up with a name that we can use across pype for these types of things. It's the same type of preset as is used for - might be a good candidate for a name here as well? "Create a new scene" building sounds pretty reasonable to me
It is because I'm not using the same method as uses
… `create_first_workfile`
…r reorganized a little bit
…ere post processing can be implemented by host
PR in pype-config: https://bitbucket.org/pypeclub/pype-config/pull-requests/149/draft-pype-611-build-workfile
Implemented a few methods that can help to build the first workfile.
1.) Create first workfile:
2.) Load containers for first workfile:
- `family` - main family of published subset
- `subset_name_filters` - regexes of subset name; OPTIONAL, skipped if not set
- `family` and `regex` must match (at least one combination)
- (`get_asset_link` returns empty list)
- `asset_entity` and `containers`
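A minimal sketch of how the filter semantics described above could look - a subset must match the `family` exactly, and when `subset_name_filters` is set, at least one of its regexes must match the subset name. The helper name and document shape are assumptions, not the actual pype implementation:

```python
import re


def subset_matches(subset, family, subset_name_filters=None):
    """Return True if the subset passes the family + name-regex filters.

    - family: main family of the published subset (must match exactly)
    - subset_name_filters: optional list of regexes; skipped when not set,
      otherwise at least one regex must match the subset name.
    Hypothetical helper mirroring the preset fields described above.
    """
    if subset["family"] != family:
        return False
    if not subset_name_filters:
        # Filters are OPTIONAL: no filters means any name is accepted.
        return True
    return any(
        re.search(pattern, subset["name"])
        for pattern in subset_name_filters
    )
```

With this shape, "at least one combination must match" falls out of the `any()` over the regex list.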