
Workflow

Roy Nieterau edited this page Sep 19, 2019 · 12 revisions

Some minor notes on Workflow with colorbleed-config. Will move to extended documentation at some point.

At Colorbleed we distinguish a couple of regular stages in production:

Modeling

At Colorbleed we mostly model in Maya. For generating a model publish we have a relatively strict colorbleed.model family that performs many checks on your geometry, e.g. UVs, naming conventions, freeze transforms, invalid polygons, etc. This ensures that output models are consistently clean and that you can have clear expectations of the output.

Note: For output content that is less strict and/or animated you can use the colorbleed.pointcache family, which outputs only an Alembic (.abc) pointcache. However, we do recommend the model workflow as it ensures clean propagation of input models to rigging and other stages of production.

Rigging

The colorbleed.rig family generates a mayaAscii file that contains a rig with controls. Upon loading a rig it will automatically create a publish instance for that character, so that when the animator is done the output can be published directly and consistently.

The requirement for publishing a rig is that the publish instance's out_SET and controls_SET are at least populated with some content.

  • controls_SET: This should house the control transforms (not parent groups). It is validated to have no keys on unlocked keyable attributes, default values for all transform controls (zero translate, zero rotate, one scale) and a locked Visibility attribute (so one doesn't accidentally key it).
  • out_SET: This holds the publishable content that the rig will output after animation. Usually this is the parent group that contains all output geometry in the rig. This will end up generating an Alembic pointcache from the animation scene.
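The transform-defaults part of the controls_SET validation can be illustrated in plain Python. This is a simplified, standalone sketch under assumed names (TRANSFORM_DEFAULTS and invalid_control_values are illustrative, not the actual validator code):

```python
# Default values required for transform controls on publish:
# zero translate, zero rotate, one scale (illustrative sketch).
TRANSFORM_DEFAULTS = {
    "translateX": 0.0, "translateY": 0.0, "translateZ": 0.0,
    "rotateX": 0.0, "rotateY": 0.0, "rotateZ": 0.0,
    "scaleX": 1.0, "scaleY": 1.0, "scaleZ": 1.0,
}


def invalid_control_values(attribute_values):
    """Return the attributes that deviate from their required defaults.

    `attribute_values` is a mapping of attribute name to current value,
    as one might query from a control transform.
    """
    return {
        attr: value
        for attr, value in attribute_values.items()
        if attr in TRANSFORM_DEFAULTS and value != TRANSFORM_DEFAULTS[attr]
    }
```

For example, a control left at translateX = 1.0 would be reported as invalid, whereas a control at all defaults passes.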

Lookdev

The Look Development is the process of generating textures and shaders to apply to a model. Our look development process is built around Maya and is published as the colorbleed.look family.

The look development process is renderer-agnostic and should support basically all render engines that apply shaders as shadingEngines to the geometry. This is how Maya by default assigns shaders internally.

The colorbleed.look family output results in:

  • A mayaAscii (.ma) file that contains the Maya shaders.
  • A JSON shader relationships (.json) file that holds which shader should be applied to which mesh by .cbId.
  • A resources/ folder that contains any textures (incl. sequences or UDIM) used by the shaders (from Maya file nodes)
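Conceptually, the relationships file maps each shading engine to the .cbId values of the meshes it should be assigned to. The snippet below sketches reading such a mapping; the keys and schema here are illustrative, and the real .json file may store additional data:

```python
import json

# Hypothetical, simplified relationships data; shading engine names and
# cbId values are made up for illustration.
relationships_json = """
{
    "heroBody_SG": {"cbIds": ["59d6c4a8b1a2c3:0001", "59d6c4a8b1a2c3:0002"]},
    "heroEyes_SG": {"cbIds": ["59d6c4a8b1a2c3:0003"]}
}
"""


def shader_for_cbid(relationships, cb_id):
    """Find which shading engine a mesh with the given cbId should get."""
    for shading_engine, data in relationships.items():
        if cb_id in data["cbIds"]:
            return shading_engine
    return None


relationships = json.loads(relationships_json)
```

Looking up `shader_for_cbid(relationships, "59d6c4a8b1a2c3:0003")` would resolve to `"heroEyes_SG"`, which is essentially what look assignment does per mesh.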

Look variations (subsets) per Renderlayer

It is possible to publish different look subsets from a single file, e.g. a blue and a red variation for the same asset. To support this the creation of the look through Avalon > Create.. > Look imprints the currently active renderlayer into the instance. This is visible in the Attribute Editor for the instance* as the renderLayer attribute.

For example one can use the masterLayer for the blue subset and a separate redLayer renderlayer for the red subset. To do this:

  1. Go to masterLayer.
  2. Create look lookBlue.
  3. Go to redLayer.
  4. Create look lookRed.

When publishing now the active shaders will be collected and published from their respective renderlayers, as can be seen in the instance's attribute editor.

*Note: due to a bug in Maya 2018/2019 custom attributes on objectSets are visible under the "Arnold Attributes" tab whenever Arnold is loaded.

Assigning a published look

To assign a published look the easiest way for an artist would be to use the Maya Look Assigner. With this interface one can list all available looks for the current geometry in the scene and assign it right away.

To assign the shader with the Look Assigner, right-click the looks listed on the right-hand side and click "Assign looks.."

Assigning a published look in code

This can also be done with the colorbleed.maya.lib.assign_look function, for example:

from maya import cmds
import colorbleed.maya.lib as lib
import avalon.io as io

# Get all DAG nodes for current selection
nodes = cmds.ls(dag=True, selection=True, long=True)

# Get the latest version for look default for asset Hero
# using avalon.io
asset = io.find_one({"name": "Hero", 
                     "type": "asset"})
subset = io.find_one({"name": "lookDefault",
                      "type": "subset",
                      "parent": asset["_id"]})
version = io.find_one({"type": "version", 
                       "parent": subset["_id"]}, 
                       sort=[("name", -1)])

# Assign this version to all nodes
# note: nodes that are not included as related nodes
#       in that specific look are skipped.
lib.assign_look(nodes, version)

Grooming (Yeti)

The Grooming workflow in Maya with Yeti works by:

  1. Creating a "yeti groom rig" (yetiRig) for an asset.
  2. Loading that into a simulation scene, connecting it with the published character alembic (attach it to a moving character)
  3. Cache out "yeti groom cache" (yetiCache) from that scene.

Warning: There is a known bug in Yeti: whenever two different textures with the same name are on the search path (even in different Yeti nodes!) it will re-use the first texture it finds. Please make sure that the textures are named differently for each asset, e.g. hero_arms_DSP.exr as opposed to arms_DSP.exr.
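Because the collision is purely name-based, a simple pre-flight check over a groom's texture paths can catch it before caching. This helper is a standalone sketch, not part of the config:

```python
import os
from collections import Counter


def duplicate_texture_names(texture_paths):
    """Return texture basenames that occur more than once across paths.

    Yeti resolves textures by file name on its search path, so two files
    with identical names (even in different folders or different Yeti
    nodes) will collide.
    """
    counts = Counter(os.path.basename(path) for path in texture_paths)
    return sorted(name for name, count in counts.items() if count > 1)
```

For example, `["hero/arms_DSP.exr", "villain/arms_DSP.exr"]` reports `["arms_DSP.exr"]`, while uniquely prefixed names pass cleanly.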

Yeti Rig (yetiRig family)

The Yeti Rig family results in a .ma file that can be loaded up for simulating moving characters. It will also export a single frame cache that lookdev artists can use to apply shaders and publish the regular colorbleed.look shader assignments.

Any textures that are loaded into the Yeti Graph will be included in the publish in its resources folder. Upon loading the cache files for the Yeti Rig or Yeti Cache family the loaded yeti node will have this resources folder set in the search path. (Note: this is currently a full absolute path; so watch out with mixed-OS farms!)
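One way to cope with that absolute search path on a mixed-OS farm is to remap known project roots per platform. The sketch below only illustrates the idea; the root mapping is made up and the config does not ship such a helper:

```python
# Hypothetical root remapping for an absolute resources path; real
# project roots would come from studio configuration, not this dict.
ROOT_MAP = {
    "P:/projects": "/mnt/projects",  # Windows root -> Linux root
}


def remap_search_path(path):
    """Rewrite a Windows-rooted search path for a Linux render node."""
    normalized = path.replace("\\", "/")
    for windows_root, linux_root in ROOT_MAP.items():
        if normalized.startswith(windows_root):
            return linux_root + normalized[len(windows_root):]
    return normalized
```

A path that already matches the Linux root is returned unchanged, so the same remap can run safely on every node.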

We recommend structuring the yeti rig in such a way that you have an "inputs_GRP" that makes it easy in the simulation scene to know which meshes should be connected to it and where. Usually we put the Yeti Nodes in an "out_GRP" to indicate that it will be the output to cache of that rig.

The Yeti Rig family automatically creates an input_SET inside the publish instance. It's recommended to put the meshes in that set that should end up getting connections from the animation scenes (e.g. the geometry in the "inputs_GRP" described above). This input_SET was intended for a "connection manager" to automate the connections to a loaded animation, but we never ended up streamlining that workflow as the need wasn't high enough. The prototype exists but we are not actively using it in production.

Unlike animation rigs, loading a Yeti rig does not automatically create a publish instance for it. You need to manually create the yetiCache instances.

Yeti Cache (yetiCache family)

The Yeti Cache family results in a .fur sequence (or a single frame for a still) that you can load up.

Note that publishing the yetiCache family will force a new simulation. There's currently no implementation that publishes a cache that has already been cached locally on disk; it will always perform the extraction first.

  • The loaded published yetiCache content does not need the Yeti Interactive license but uses the Render license, as it doesn't compute the .fur cache. You will only need an Interactive license for the yetiRig.
  • The cache can be loaded and rendered as is. It does not need to be reconnected to input geometry at this stage.

FX

The majority of our FX are produced with SideFX Houdini. However, they could also be created in Maya, since most of our effects work (smoke, demolitions, jiggle sims, fire and so forth) results in an Alembic pointcache or OpenVDB sequence. As such, whatever generates those outputs can be used in production.

Publishing in Houdini

To publish content in Houdini you create the output instance through Avalon > Create... The instances will be generated as ROP nodes in Houdini's /out context.

For example, publishing a Pointcache will generate an Alembic ROP node in /out where the SOP Path attribute is (by default) set to the currently selected node upon creation of the instance. This SOP Path refers to the Houdini node path that it will publish.

Lighting (rendering)

The Colorbleed-config supports rendering in Maya and is relatively renderer-agnostic, so it should work with most renderers. It has been tested and used with V-Ray (vrayformaya), Redshift and Arnold (mtoa). We usually refer to this task as lighting.

Submitting renders to Deadline

To submit your renders the first thing you'll need to do is create a "renderglobalsDefault" instance. This is what the pipeline uses to recognize it needs to process the renderlayers for submission at publish time.

You can create it through:

  • Colorbleed > Rendering > DL Submission Settings UI (recommended, as this allows you to customize how your scene gets submitted), also available as a separate tool: maya-deadline-submission-settings
  • OR Avalon > Create.. > Render Globals

With the render globals created you can do Avalon > Publish.. to start submitting your scene.

Rendering custom frame ranges (explicit frame list)

You can use Deadline Frame List Formatting to specify custom frames for rendering as opposed to a start-end frame range. This implementation allows for holes in the render range and you can submit a sequence as 1-100,300-400 for example.

To activate this, enable useCustomFrameList on the renderglobalsDefault instance and then set the frameList string attribute to your custom frame list.
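Expanding such a frame list into explicit frames is straightforward. A minimal sketch (Deadline's frame list formatting supports more variants, such as step syntax like 1-10x2, which this ignores):

```python
def expand_frame_list(frame_list):
    """Expand a Deadline-style frame list ("1-3,10") into explicit frames.

    Only plain frames and start-end ranges are handled here; this is a
    simplified illustration, not Deadline's actual parser.
    """
    frames = []
    for token in frame_list.split(","):
        token = token.strip()
        if "-" in token:
            start, end = token.split("-")
            frames.extend(range(int(start), int(end) + 1))
        else:
            frames.append(int(token))
    return frames
```

For example, "1-100,300-400" expands to 201 frames with the hole between 101 and 299 left unrendered.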

Rendering multiple cameras per layer

With the support for multiple cameras per renderlayer it is now possible to submit a single layer with multiple cameras. To do this:

  1. Ensure multiple cameras are set as renderable in the layer.
  2. Update the "filename" prefix to support multiple cameras. The easiest way is to run "Publish" and validate; it will likely fail on "Filename Prefix", which you can then right-click and repair.
  3. Submit your render.

It's important to understand that the resulting published subset will be prefixed with the camera name. As such, it's good to ensure your camera name/hierarchy is as you expect on the first submission and to keep it the same over time.

Deadline Web Service (Submit to Deadline)

The code that colorbleed-config uses to submit to Deadline runs through the Deadline Web Service as opposed to the local Deadline commands. This means you will need to have the Deadline Web Service running on the server, and you'll have to set the environment variable AVALON_DEADLINE to your Deadline Web Service's address, e.g. http://servername:8082.

Note: The Deadline Web Service does not come installed with the Deadline Repository installation but is part of the Deadline Client installation files. As such, even when you run the Web Service on the same machine that hosts the Deadline Repository, you will need to install the Deadline Web Service from the Client installation files.

Compositing

Fusion is our main tool for compositing and is implemented within the Avalon pipeline and colorbleed-config. As such it is possible to load published image sequences (renders) through the Avalon Loader and manage them in your comp with the Avalon Manager. Similarly, publishing is supported in Fusion to render locally or submit to Deadline and publish the resulting output as the composited image sequence.

Blackmagic Design's Fusion has no scripting support for importing Alembic, so it is impossible to load Alembic files through its API. As such the Colorbleed-config does not support loading Alembic files (or Maya cameras) through the Avalon Loader; these can currently only be loaded manually through the Fusion interface.