This repository has been archived by the owner on Sep 20, 2024. It is now read-only.

Houdini: simple bgeo publishing #4588

Merged
merged 33 commits into develop from feature/OP-3129_houdini-bgeo-publishing
Jul 17, 2023

Conversation

@antirotor (Member) commented Mar 7, 2023

Brief description

Support for simple publishing of bgeo files.

Description

This adds basic support for bgeo publishing in Houdini. It allows publishing bgeo in all supported formats (selectable in the creator options). If the selected node has an Output node on the SOP level, its path is used automatically as the SOP path on the file node.
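For illustration, a rough sketch of that behaviour from a Python shell (the node lookup, names, and the choice of a Geometry ROP are simplified assumptions, not the actual creator code):

import hou

# Hypothetical illustration: take the selected object-level node, prefer an
# Output SOP inside it, otherwise fall back to the display node, and use that
# path as the SOP path on e.g. a Geometry ROP.
selected = hou.selectedNodes()[0]
outputs = [child for child in selected.children()
           if child.type().name() == "output"]
sop_path = outputs[0].path() if outputs else selected.displayNode().path()

rop = hou.node("/out").createNode("geometry", "bgeo_example")
rop.parm("soppath").set(sop_path)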

Additional info

For future work it might be good to rethink how the "pointcache" family is implemented with regard to the various format and validation needs. For example, a default option for the file format could be added to the settings, along with some additional validation, and so on.

Testing notes:

  1. Create some geometry
  2. Select it
  3. Create BGEO publishing instance
  4. Publish
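Steps 1 and 2 above can also be done from a Python shell for quick testing (node names are arbitrary):

import hou

# Create a simple piece of geometry and select the object-level node,
# equivalent to steps 1 and 2 above.
geo = hou.node("/obj").createNode("geo", "bgeo_test")
geo.createNode("box")
geo.setSelected(True, clear_all_selected=True)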

Resolves #3098

@antirotor added the host: Houdini and type: feature (larger, user-affecting changes and completely new things) labels on Mar 7, 2023
@antirotor self-assigned this on Mar 7, 2023
@ynbot (Contributor) commented Mar 7, 2023

Task linked: OP-3129 Houdini bgeo publishing

@BigRoy (Collaborator) left a comment


Some early comments

Review threads (outdated, resolved):
openpype/hosts/houdini/plugins/create/create_bgeo.py
openpype/hosts/houdini/plugins/publish/extract_bgeo.py (2 threads)
@antirotor marked this pull request as ready for review on March 8, 2023 16:30
@moonyuet (Member) commented Mar 22, 2023

Tested on my machine with one of my old projects.
It was a bit confusing the first time I tested it, but it works for exporting simple bgeo.
It would be great, though, to have a validator that checks whether the user has set the SOP path on the bgeo instance.
Something like this:

# -*- coding: utf-8 -*-
"""Validator plugin for SOP Path Setting in bgeo isntance."""
import pyblish.api
from openpype.pipeline import PublishValidationError


class ValidateNoSOPPath(pyblish.api.InstancePlugin):
    """Validate if SOP Path in bgeo isntance exists."""

    order = pyblish.api.ValidatorOrder
    families = ["bgeo"]
    hosts = ["houdini"]
    label = "BGEO SOP Path"

    def process(self, instance):

        import hou

        node = hou.node(instance.data.get("instance_node"))
        sop_path = node.evalParm("soppath")
        if not sop_path:
            raise PublishValidationError("Empty SOP Path found in "
                                         "the bgeo instance Geometry")

If the user didn't set the SOP path and there is no validator to catch it, no bgeo gets exported to the pyblish staging directory (and the publish can't pass integrate.py).


@BigRoy (Collaborator) commented Apr 3, 2023

@antirotor Are you expecting to make additional changes based on the open comments on this PR, or does this just need some testing before it's ready to merge?

@antirotor (Member, Author) commented

> @antirotor Are you expecting to make additional changes based on the open comments on this PR, or does this just need some testing before it's ready to merge?

Yep, I would like to add the validator and logic around the output node index.

@antirotor requested a review from BigRoy on June 21, 2023 15:39
@antirotor (Member, Author) commented

@BigRoy I think this is resolved, just want to check with you

@BigRoy (Collaborator) commented Jun 28, 2023

> I think this is resolved, just want to check with you

I still have some issues with adding the extra family into the families data on Create. It's just so non-dynamic, even though it seems that for a given creator we always want that extra family. There are so many edge cases I can think of where this logic could break. The most apparent one currently is that I suspect any non-legacy pointcache instances already created with the new publisher in Houdini prior to this PR will now break, because they don't have the additional family stored in the node's data. To me this just doesn't seem like "instance data", since it's not a toggle within the creator; the creator itself should always contain that family. As such there's also no need to store that data on the actual node at all (I'd even say it clutters the data there, plus a user could potentially break it by just removing the value).

A collector doing this would already be safer, plus it wouldn't require us to store this basically static, non-changing data on the node (which makes no sense, because it's just an implementation detail). A simple example collector could do something like:

# e.g. inside a collector's process(); assumes the instance data
# carries the creator identifier
creator_identifier = instance.data.get("creator_identifier")
families = instance.data.setdefault("families", [])
if creator_identifier == "io.openpype.creators.houdini.bgeo":
    families += ["bgeo"]
elif creator_identifier == "io.openpype.creators.houdini.pointcache":
    families += ["abc"]

Not perfect either, because then the new publisher can't expose user parameters based on that collector; there's unfortunately no notion of pre-collector plug-ins that run after the creator but before the user interaction.

Anyway, let's at the very least fix up the (what I suspect will be an) issue with existing pointcache instances in the new publisher, and then I'd say this is an "ok" implementation.


For more context on storing data on the node:

I have the same issue with instance_node being stored as a parameter on the node, holding the node's name: if I change the node's name, the parameter is out of sync and you can't publish in Houdini anymore. As such, you can't rename a publish ROP node at all.

Similarly with instance_id: you can't duplicate instances, because it is stored on the node, so after duplicating in Houdini (and Fusion too) you end up with two nodes sharing the same instance id. The publisher doesn't understand that, and neither does the artist, even though as far as I know the instance_id is just an implementation detail for a single publishing session, which could simply be regenerated on "reset" of the publisher and thus not be stored on the node at all.

Since node.path() is always unique in Houdini, I'd actually argue that both instance_node and instance_id should simply be the node.path(): not stored as parameters nor read from them, but always derived from the actual node.
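As a tiny sketch of that idea (not the actual OpenPype data model; the node path is just an example):

import hou

# Derive the instance identity from the live node rather than from
# parameters stored on it, so renames and duplicates stay consistent.
rop_node = hou.node("/out/pointcacheMain")  # example path
instance_data = {
    "instance_node": rop_node.path(),  # node paths are unique in Houdini
    "instance_id": rop_node.path(),    # no persisted id on the node
}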

I can see how it might differ in other hosts, but this is basically what is already breaking Houdini and Fusion (and likely the Maya new publisher too?). We should be storing as little data as possible in the scene, while still being able to produce reproducible publishes.

@Minkiu (Contributor) left a comment


Haven't tested, but code looks good.

@antirotor (Member, Author) commented

> I still have some issues with adding the extra family into the families data on Create. It's just so non-dynamic, even though it seems that for a given creator we always want that extra family. There are so many edge cases I can think of where this logic could break.

I've moved the logic to a collector, and it should handle legacy instances. The problem with additional families, collection, and plugin parameters is still there and has no solution currently. In this particular case it doesn't matter that much, because there are no additional parameters defined down the line - but in theory there should be; for example, the format specification for bgeo should probably be a parameter of the bgeo extractor plugin.
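For reference, a minimal sketch of what such a collector could look like (the plugin name, order, and de-duplication are assumptions; only the identifier-to-family mapping is taken from the discussion above, this is not the actual OpenPype code):

import pyblish.api


class CollectFormatFamily(pyblish.api.InstancePlugin):
    """Add a format-specific family based on the creator identifier."""

    order = pyblish.api.CollectorOrder
    hosts = ["houdini"]
    label = "Collect Format Family"

    def process(self, instance):
        families = instance.data.setdefault("families", [])
        extra = {
            "io.openpype.creators.houdini.bgeo": "bgeo",
            "io.openpype.creators.houdini.pointcache": "abc",
        }.get(instance.data.get("creator_identifier"))
        if extra and extra not in families:
            families.append(extra)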

I don't mind having those families set by the creator - you can argue that it is static, but moving it to a collector (in this particular case) just moves the static stuff down the line: testing creator_identifier and assigning a family based on it is the same piece of code, just moved further along. Yes, you can do a lot of other things in collectors if you wish, but nothing prevents doing so even when the family is already defined in the creator. A clean data-driven workflow isn't there anyway, because of the settings and various assumptions.

I totally agree on what we are storing on the instance itself - it should be as minimal as possible.

@antirotor dismissed BigRoy's stale review on July 14, 2023 10:45

Requested changes were incorporated in the PR

@antirotor merged commit b26b5fd into develop on Jul 17, 2023
@antirotor deleted the feature/OP-3129_houdini-bgeo-publishing branch on July 17, 2023 14:38
@ynbot added this to the next-patch milestone on Jul 17, 2023
@BigRoy (Collaborator) commented Mar 4, 2024

> I tested the bgeo publisher with one of my old scenes (which is quite huge), with two scenarios (with a geometry node and a sopoutput node) after studying the code. The publisher doesn't error out and the bgeo is published successfully. The only thing is that it errors on increment-and-save, since the geometry setup includes a merge node before the sopoutput and the node cook is still in progress. We may need to introduce a node cook (i.e. node.cook(force=True)) in increment_current_file.py before performing the save.
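A minimal sketch of that cook-before-save idea (the node path is just an example; this is not the actual increment_current_file.py code):

import hou

# Force-cook the SOP feeding the output before saving, so the save does
# not run while a cook is still in progress.
sop = hou.node("/obj/geo1/OUT")  # example path
if sop is not None:
    sop.cook(force=True)
hou.hipFile.save()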


So we've just hit this error for the first time now, when publishing USD files on our end.
I wonder if anyone else has run into this since, and whether there are any suitable/safe workarounds for the code?

@moonyuet have you had this since?


Side note: I've asked SideFX support about this as well, ticket: #135661
