diff --git a/CHANGELOG.md b/CHANGELOG.md index d118bbff546..9b5d40a52fc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,35 +1,75 @@ # Changelog -## [3.12.0-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD) +## [3.12.1-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD) -[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.1...HEAD) +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.0...HEAD) ### 📖 Documentation +- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441) + +**🆕 New features** + +- Maya: Add VDB to Arnold loader [\#3433](https://github.com/pypeclub/OpenPype/pull/3433) + +**🚀 Enhancements** + +- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426) +- Blender: pre pyside install for all platforms [\#3400](https://github.com/pypeclub/OpenPype/pull/3400) + +**🐛 Bug fixes** + +- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450) +- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447) +- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443) +- Nuke: Slate frame is integrated [\#3427](https://github.com/pypeclub/OpenPype/pull/3427) + +**🔀 Refactored code** + +- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458) +- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457) +- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436) +- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435) + +## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28) + +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0) + +### 📖 Documentation + +- Fix typo in documentation: pyenv on mac [\#3417](https://github.com/pypeclub/OpenPype/pull/3417) - Linux: update OIIO package [\#3401](https://github.com/pypeclub/OpenPype/pull/3401) -- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366) -- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350) **🚀 Enhancements** +- Webserver: Added CORS middleware [\#3422](https://github.com/pypeclub/OpenPype/pull/3422) +- Attribute Defs UI: Files widget show what is allowed to drop in [\#3411](https://github.com/pypeclub/OpenPype/pull/3411) +- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366) - Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357) -- Maya: Allow more data to be published along camera 🎥 [\#3304](https://github.com/pypeclub/OpenPype/pull/3304) +- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350) **🐛 Bug fixes** +- NewPublisher: Fix subset name change on change of creator plugin [\#3420](https://github.com/pypeclub/OpenPype/pull/3420) +- Bug: fix invalid avalon import [\#3418](https://github.com/pypeclub/OpenPype/pull/3418) - Nuke: Fix keyword argument in query function [\#3414](https://github.com/pypeclub/OpenPype/pull/3414) +- Houdini: fix loading and updating vbd/bgeo sequences [\#3408](https://github.com/pypeclub/OpenPype/pull/3408) - Nuke: Collect representation files based on Write 
[\#3407](https://github.com/pypeclub/OpenPype/pull/3407) - General: Filter representations before integration start [\#3398](https://github.com/pypeclub/OpenPype/pull/3398) - Maya: look collector typo [\#3392](https://github.com/pypeclub/OpenPype/pull/3392) - TVPaint: Make sure exit code is set to not None [\#3382](https://github.com/pypeclub/OpenPype/pull/3382) - Maya: vray device aspect ratio fix [\#3381](https://github.com/pypeclub/OpenPype/pull/3381) +- Flame: bunch of publishing issues [\#3377](https://github.com/pypeclub/OpenPype/pull/3377) - Harmony: added unc path to zifile command in Harmony [\#3372](https://github.com/pypeclub/OpenPype/pull/3372) - Standalone: settings improvements [\#3355](https://github.com/pypeclub/OpenPype/pull/3355) - Nuke: Load full model hierarchy by default [\#3328](https://github.com/pypeclub/OpenPype/pull/3328) **🔀 Refactored code** +- Unreal: Use client query functions [\#3421](https://github.com/pypeclub/OpenPype/pull/3421) +- General: Move editorial lib to pipeline [\#3419](https://github.com/pypeclub/OpenPype/pull/3419) - Kitsu: renaming to plural func sync\_all\_projects [\#3397](https://github.com/pypeclub/OpenPype/pull/3397) +- Houdini: Use client query functions [\#3395](https://github.com/pypeclub/OpenPype/pull/3395) - Hiero: Use client query functions [\#3393](https://github.com/pypeclub/OpenPype/pull/3393) - Nuke: Use client query functions [\#3391](https://github.com/pypeclub/OpenPype/pull/3391) - Maya: Use client query functions [\#3385](https://github.com/pypeclub/OpenPype/pull/3385) @@ -43,6 +83,7 @@ **Merged pull requests:** +- Sync Queue: Added far future value for null values for dates [\#3371](https://github.com/pypeclub/OpenPype/pull/3371) - Maya - added support for single frame playblast review [\#3369](https://github.com/pypeclub/OpenPype/pull/3369) ## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20) @@ -73,7 +114,6 @@ - nuke: adding extract thumbnail settings 3.10 [\#3347](https://github.com/pypeclub/OpenPype/pull/3347) - General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345) - Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336) -- Nuke: fixing farm publishing if review is disabled [\#3306](https://github.com/pypeclub/OpenPype/pull/3306) **🔀 Refactored code** @@ -83,11 +123,6 @@ [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0) -### 📖 Documentation - -- Documentation: Add app key to template documentation [\#3299](https://github.com/pypeclub/OpenPype/pull/3299) -- doc: adding royal render and multiverse to the web site [\#3285](https://github.com/pypeclub/OpenPype/pull/3285) - **🚀 Enhancements** - Settings: Settings can be extracted from UI [\#3323](https://github.com/pypeclub/OpenPype/pull/3323) @@ -95,12 +130,6 @@ - Ftrack: Action to easily create daily review session [\#3310](https://github.com/pypeclub/OpenPype/pull/3310) - TVPaint: Extractor use mark in/out range to render [\#3309](https://github.com/pypeclub/OpenPype/pull/3309) - Ftrack: Delivery action can work on ReviewSessions [\#3307](https://github.com/pypeclub/OpenPype/pull/3307) -- Maya: Look assigner UI improvements [\#3298](https://github.com/pypeclub/OpenPype/pull/3298) -- Ftrack: Action to transfer values of hierarchical attributes [\#3284](https://github.com/pypeclub/OpenPype/pull/3284) -- Maya: better handling of legacy review subsets names [\#3269](https://github.com/pypeclub/OpenPype/pull/3269) -- General: Updated 
windows oiio tool [\#3268](https://github.com/pypeclub/OpenPype/pull/3268) -- Unreal: add support for skeletalMesh and staticMesh to loaders [\#3267](https://github.com/pypeclub/OpenPype/pull/3267) -- Maya: reference loaders could store placeholder in referenced url [\#3264](https://github.com/pypeclub/OpenPype/pull/3264) **🐛 Bug fixes** @@ -108,26 +137,14 @@ - Houdini: Fix Houdini VDB manage update wrong file attribute name [\#3322](https://github.com/pypeclub/OpenPype/pull/3322) - Nuke: anatomy compatibility issue hacks [\#3321](https://github.com/pypeclub/OpenPype/pull/3321) - hiero: otio p3 compatibility issue - metadata on effect use update 3.11 [\#3314](https://github.com/pypeclub/OpenPype/pull/3314) -- General: Vendorized modules for Python 2 and update poetry lock [\#3305](https://github.com/pypeclub/OpenPype/pull/3305) -- Fix - added local targets to install host [\#3303](https://github.com/pypeclub/OpenPype/pull/3303) -- Settings: Add missing default settings for nuke gizmo [\#3301](https://github.com/pypeclub/OpenPype/pull/3301) -- Maya: Fix swaped width and height in reviews [\#3300](https://github.com/pypeclub/OpenPype/pull/3300) -- Maya: point cache publish handles Maya instances [\#3297](https://github.com/pypeclub/OpenPype/pull/3297) -- Global: extract review slate issues [\#3286](https://github.com/pypeclub/OpenPype/pull/3286) -- Webpublisher: return only active projects in ProjectsEndpoint [\#3281](https://github.com/pypeclub/OpenPype/pull/3281) -- Hiero: add support for task tags 3.10.x [\#3279](https://github.com/pypeclub/OpenPype/pull/3279) -- General: Fix Oiio tool path resolving [\#3278](https://github.com/pypeclub/OpenPype/pull/3278) -- Maya: Fix udim support for e.g. uppercase \ tag [\#3266](https://github.com/pypeclub/OpenPype/pull/3266) **🔀 Refactored code** - Blender: Use client query functions [\#3331](https://github.com/pypeclub/OpenPype/pull/3331) -- General: Define query functions [\#3288](https://github.com/pypeclub/OpenPype/pull/3288) **Merged pull requests:** - Maya: add pointcache family to gpu cache loader [\#3318](https://github.com/pypeclub/OpenPype/pull/3318) -- Maya look: skip empty file attributes [\#3274](https://github.com/pypeclub/OpenPype/pull/3274) ## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26) diff --git a/inno_setup.iss b/inno_setup.iss index ead9907955e..fa050ef1d6c 100644 --- a/inno_setup.iss +++ b/inno_setup.iss @@ -18,7 +18,8 @@ AppPublisher=Orbi Tools s.r.o AppPublisherURL=http://pype.club AppSupportURL=http://pype.club AppUpdatesURL=http://pype.club -DefaultDirName={autopf}\{#MyAppName} +DefaultDirName={autopf}\{#MyAppName}\{#AppVer} +UsePreviousAppDir=no DisableProgramGroupPage=yes OutputBaseFilename={#MyAppName}-{#AppVer}-install AllowCancelDuringInstall=yes @@ -27,7 +28,7 @@ AllowCancelDuringInstall=yes PrivilegesRequiredOverridesAllowed=dialog SetupIconFile=igniter\openpype.ico OutputDir=build\ -Compression=lzma +Compression=lzma2 SolidCompression=yes WizardStyle=modern @@ -37,6 +38,11 @@ Name: "english"; MessagesFile: "compiler:Default.isl" [Tasks] Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked +[InstallDelete] +; clean everything in previous installation folder +Type: filesandordirs; Name: "{app}\*" + + [Files] Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs ; NOTE: Don't use "Flags: ignoreversion" on any shared system files diff --git a/openpype/client/entities.py 
b/openpype/client/entities.py index 9864fee469f..8b0c2598170 100644 --- a/openpype/client/entities.py +++ b/openpype/client/entities.py @@ -15,12 +15,15 @@ from openpype.lib.mongo import OpenPypeMongoConnection -def _get_project_connection(project_name=None): +def _get_project_database(): db_name = os.environ.get("AVALON_DB") or "avalon" - mongodb = OpenPypeMongoConnection.get_mongo_client()[db_name] - if project_name: - return mongodb[project_name] - return mongodb + return OpenPypeMongoConnection.get_mongo_client()[db_name] + + +def _get_project_connection(project_name): + if not project_name: + raise ValueError("Invalid project name {}".format(str(project_name))) + return _get_project_database()[project_name] def _prepare_fields(fields, required_fields=None): @@ -55,7 +58,7 @@ def _convert_ids(in_ids): def get_projects(active=True, inactive=False, fields=None): - mongodb = _get_project_connection() + mongodb = _get_project_database() for project_name in mongodb.collection_names(): if project_name in ("system.indexes",): continue @@ -146,6 +149,7 @@ def _get_assets( project_name, asset_ids=None, asset_names=None, + parent_ids=None, standard=True, archived=False, fields=None @@ -161,6 +165,7 @@ def _get_assets( project_name (str): Name of project where to look for queried entities. asset_ids (list[str|ObjectId]): Asset ids that should be found. asset_names (list[str]): Name assets that should be found. + parent_ids (list[str|ObjectId]): Parent asset ids. standard (bool): Query standart assets (type 'asset'). archived (bool): Query archived assets (type 'archived_asset'). fields (list[str]): Fields that should be returned. All fields are @@ -196,6 +201,12 @@ def _get_assets( return [] query_filter["name"] = {"$in": list(asset_names)} + if parent_ids is not None: + parent_ids = _convert_ids(parent_ids) + if not parent_ids: + return [] + query_filter["data.visualParent"] = {"$in": parent_ids} + conn = _get_project_connection(project_name) return conn.find(query_filter, _prepare_fields(fields)) @@ -205,6 +216,7 @@ def get_assets( project_name, asset_ids=None, asset_names=None, + parent_ids=None, archived=False, fields=None ): @@ -219,6 +231,7 @@ def get_assets( project_name (str): Name of project where to look for queried entities. asset_ids (list[str|ObjectId]): Asset ids that should be found. asset_names (list[str]): Name assets that should be found. + parent_ids (list[str|ObjectId]): Parent asset ids. archived (bool): Add also archived assets. fields (list[str]): Fields that should be returned. All fields are returned if 'None' is passed. @@ -229,7 +242,13 @@ def get_assets( """ return _get_assets( - project_name, asset_ids, asset_names, True, archived, fields + project_name, + asset_ids, + asset_names, + parent_ids, + True, + archived, + fields ) @@ -237,6 +256,7 @@ def get_archived_assets( project_name, asset_ids=None, asset_names=None, + parent_ids=None, fields=None ): """Archived assets for specified project by passed filters. @@ -250,6 +270,7 @@ def get_archived_assets( project_name (str): Name of project where to look for queried entities. asset_ids (list[str|ObjectId]): Asset ids that should be found. asset_names (list[str]): Name assets that should be found. + parent_ids (list[str|ObjectId]): Parent asset ids. fields (list[str]): Fields that should be returned. All fields are returned if 'None' is passed. 
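As an aside, a minimal sketch of how the new `parent_ids` filter can be used (the project and asset names here are hypothetical, and a configured MongoDB connection is assumed):

```python
from openpype.client import get_assets

# Hypothetical names; assumes a reachable Mongo backend ("AVALON_DB").
project_name = "demo_project"
for sequence_doc in get_assets(
    project_name, asset_names=["sq010"], fields=["_id"]
):
    # Children are assets whose 'data.visualParent' points to the sequence.
    for shot_doc in get_assets(
        project_name, parent_ids=[sequence_doc["_id"]], fields=["name"]
    ):
        print(shot_doc["name"])
```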
@@ -259,7 +280,7 @@ def get_archived_assets( """ return _get_assets( - project_name, asset_ids, asset_names, False, True, fields + project_name, asset_ids, asset_names, parent_ids, False, True, fields ) @@ -991,8 +1012,10 @@ def get_representations_parents(project_name, representations): versions_by_subset_id = collections.defaultdict(list) subsets_by_subset_id = {} subsets_by_asset_id = collections.defaultdict(list) + output = {} for representation in representations: repre_id = representation["_id"] + output[repre_id] = (None, None, None, None) version_id = representation["parent"] repres_by_version_id[version_id].append(representation) @@ -1022,7 +1045,6 @@ def get_representations_parents(project_name, representations): project = get_project(project_name) - output = {} for version_id, representations in repres_by_version_id.items(): asset = None subset = None @@ -1062,7 +1084,7 @@ def get_representation_parents(project_name, representation): parents_by_repre_id = get_representations_parents( project_name, [representation] ) - return parents_by_repre_id.get(repre_id) + return parents_by_repre_id[repre_id] def get_thumbnail_id_from_source(project_name, src_type, src_id): diff --git a/openpype/hooks/pre_global_host_data.py b/openpype/hooks/pre_global_host_data.py index ea5e290d6ff..6577e37cbe6 100644 --- a/openpype/hooks/pre_global_host_data.py +++ b/openpype/hooks/pre_global_host_data.py @@ -1,11 +1,10 @@ -from openpype.api import Anatomy from openpype.lib import ( PreLaunchHook, EnvironmentPrepData, prepare_app_environments, prepare_context_environments ) -from openpype.pipeline import AvalonMongoDB +from openpype.pipeline import AvalonMongoDB, Anatomy class GlobalHostDataHook(PreLaunchHook): diff --git a/openpype/host/__init__.py b/openpype/host/__init__.py new file mode 100644 index 00000000000..84a2fa930a9 --- /dev/null +++ b/openpype/host/__init__.py @@ -0,0 +1,13 @@ +from .host import ( + HostBase, + IWorkfileHost, + ILoadHost, + INewPublisher, +) + +__all__ = ( + "HostBase", + "IWorkfileHost", + "ILoadHost", + "INewPublisher", +) diff --git a/openpype/host/host.py b/openpype/host/host.py new file mode 100644 index 00000000000..48907e7ec7e --- /dev/null +++ b/openpype/host/host.py @@ -0,0 +1,524 @@ +import logging +import contextlib +from abc import ABCMeta, abstractproperty, abstractmethod +import six + +# NOTE can't import 'typing' because of issues in Maya 2020 +# - shiboken crashes on 'typing' module import + + +class MissingMethodsError(ValueError): + """Exception when host miss some required methods for specific workflow. + + Args: + host (HostBase): Host implementation where are missing methods. + missing_methods (list[str]): List of missing methods. + """ + + def __init__(self, host, missing_methods): + joined_missing = ", ".join( + ['"{}"'.format(item) for item in missing_methods] + ) + message = ( + "Host \"{}\" miss methods {}".format(host.name, joined_missing) + ) + super(MissingMethodsError, self).__init__(message) + + +@six.add_metaclass(ABCMeta) +class HostBase(object): + """Base of host implementation class. + + Host is pipeline implementation of DCC application. This class should help + to identify what must/should/can be implemented for specific functionality. + + Compared to 'avalon' concept: + What was before considered as functions in host implementation folder. The + host implementation should primarily care about adding ability of creation + (mark subsets to be published) and optionaly about referencing published + representations as containers. 
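For context, the reworked `get_representations_parents` above now pre-fills its output with a `(None, None, None, None)` tuple for every queried representation id, so callers can index without a fallback. A sketch, assuming the plural helper is re-exported from `openpype.client` like its singular counterpart:

```python
from openpype.client import get_representations, get_representations_parents

project_name = "demo_project"  # hypothetical project
repre_docs = list(get_representations(project_name))

parents_by_repre_id = get_representations_parents(project_name, repre_docs)
for repre_doc in repre_docs:
    # Every id is guaranteed a key; missing parents come back as Nones.
    version_doc, subset_doc, asset_doc, project_doc = parents_by_repre_id[
        repre_doc["_id"]
    ]
```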
+
+    A host may need to extend some functionality, like working with workfiles
+    or loading. Not all host implementations allow that; for those purposes
+    the logic can be extended by implementing functions for the purpose.
+    There are prepared interfaces to identify what must be implemented to be
+    able to use that functionality.
+    - currently it is not required to inherit from the interfaces,
+      but all of the methods are validated (only their existence!)
+
+    # Installation of host before (avalon concept):
+    ```python
+    from openpype.pipeline import install_host
+    import openpype.hosts.maya.api as host
+
+    install_host(host)
+    ```
+
+    # Installation of host now:
+    ```python
+    from openpype.pipeline import install_host
+    from openpype.hosts.maya.api import MayaHost
+
+    host = MayaHost()
+    install_host(host)
+    ```
+
+    Todo:
+        - move content of 'install_host' into a method of this class
+            - register host object
+            - install legacy_io
+            - install global plugin paths
+            - store registered plugin paths to this object
+        - handle current context (project, asset, task)
+            - this must be done in many separated steps
+        - have its own object of host tools instead of using globals
+
+    This implementation will probably change over time as more
+    functionality and responsibility is added.
+    """
+
+    _log = None
+
+    def __init__(self):
+        """Initialization of host.
+
+        Register DCC callbacks, host-specific plugin paths, targets etc.
+        (Part of what 'install' did in the 'avalon' concept.)
+
+        Note:
+            At this moment the global "installation" must happen before host
+            installation. Because of this limitation it is recommended to
+            implement an 'install' method which is triggered after the
+            global 'install'.
+        """
+
+        pass
+
+    @property
+    def log(self):
+        if self._log is None:
+            self._log = logging.getLogger(self.__class__.__name__)
+        return self._log
+
+    @abstractproperty
+    def name(self):
+        """Host name."""
+
+        pass
+
+    def get_current_context(self):
+        """Get current context information.
+
+        This method should be used to get the current context of the host.
+        Using it can be crucial for host implementations in DCCs where
+        multiple workfiles can be opened at the same time and a change of
+        context can't be caught properly.
+
+        Default implementation returns values from 'legacy_io.Session'.
+
+        Returns:
+            dict: Context with 3 keys 'project_name', 'asset_name' and
+                'task_name'. All of them can be 'None'.
+        """
+
+        from openpype.pipeline import legacy_io
+
+        if not legacy_io.is_installed():
+            legacy_io.install()
+
+        return {
+            "project_name": legacy_io.Session["AVALON_PROJECT"],
+            "asset_name": legacy_io.Session["AVALON_ASSET"],
+            "task_name": legacy_io.Session["AVALON_TASK"]
+        }
+
+    def get_context_title(self):
+        """Context title shown for UI purposes.
+
+        Should return the current context title if possible.
+
+        Note:
+            This method is used only for UI purposes, so it is possible to
+            return some logical title for contextless cases.
+            It is not meant for a "Context menu" label.
+
+        Returns:
+            str: Context title.
+            None: Default title is used based on UI implementation.
+        """
+
+        # Use current context to fill the context title
+        current_context = self.get_current_context()
+        project_name = current_context["project_name"]
+        asset_name = current_context["asset_name"]
+        task_name = current_context["task_name"]
+        items = []
+        if project_name:
+            items.append(project_name)
+        if asset_name:
+            items.append(asset_name)
+        if task_name:
+            items.append(task_name)
+        if items:
+            return "/".join(items)
+        return None
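To make the base class concrete, a minimal (hypothetical) host implementation might look like the following sketch:

```python
from openpype.host import HostBase

class ExampleHost(HostBase):
    """Bare-bones sketch; "example" is a made-up DCC name."""

    name = "example"

    def install(self):
        # A real host would register plugin paths, callbacks and menus here.
        self.log.info("Installing the example host integration")
```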
+ """ + + # Use current context to fill the context title + current_context = self.get_current_context() + project_name = current_context["project_name"] + asset_name = current_context["asset_name"] + task_name = current_context["task_name"] + items = [] + if project_name: + items.append(project_name) + if asset_name: + items.append(asset_name) + if task_name: + items.append(task_name) + if items: + return "/".join(items) + return None + + @contextlib.contextmanager + def maintained_selection(self): + """Some functionlity will happen but selection should stay same. + + This is DCC specific. Some may not allow to implement this ability + that is reason why default implementation is empty context manager. + + Yields: + None: Yield when is ready to restore selected at the end. + """ + + try: + yield + finally: + pass + + +class ILoadHost: + """Implementation requirements to be able use reference of representations. + + The load plugins can do referencing even without implementation of methods + here, but switch and removement of containers would not be possible. + + Questions: + - Is list container dependency of host or load plugins? + - Should this be directly in HostBase? + - how to find out if referencing is available? + - do we need to know that? + """ + + @staticmethod + def get_missing_load_methods(host): + """Look for missing methods on "old type" host implementation. + + Method is used for validation of implemented functions related to + loading. Checks only existence of methods. + + Args: + Union[ModuleType, HostBase]: Object of host where to look for + required methods. + + Returns: + list[str]: Missing method implementations for loading workflow. + """ + + if isinstance(host, ILoadHost): + return [] + + required = ["ls"] + missing = [] + for name in required: + if not hasattr(host, name): + missing.append(name) + return missing + + @staticmethod + def validate_load_methods(host): + """Validate implemented methods of "old type" host for load workflow. + + Args: + Union[ModuleType, HostBase]: Object of host to validate. + + Raises: + MissingMethodsError: If there are missing methods on host + implementation. + """ + missing = ILoadHost.get_missing_load_methods(host) + if missing: + raise MissingMethodsError(host, missing) + + @abstractmethod + def get_containers(self): + """Retreive referenced containers from scene. + + This can be implemented in hosts where referencing can be used. + + Todo: + Rename function to something more self explanatory. + Suggestion: 'get_containers' + + Returns: + list[dict]: Information about loaded containers. + """ + + pass + + # --- Deprecated method names --- + def ls(self): + """Deprecated variant of 'get_containers'. + + Todo: + Remove when all usages are replaced. + """ + + return self.get_containers() + + +@six.add_metaclass(ABCMeta) +class IWorkfileHost: + """Implementation requirements to be able use workfile utils and tool.""" + + @staticmethod + def get_missing_workfile_methods(host): + """Look for missing methods on "old type" host implementation. + + Method is used for validation of implemented functions related to + workfiles. Checks only existence of methods. + + Args: + Union[ModuleType, HostBase]: Object of host where to look for + required methods. + + Returns: + list[str]: Missing method implementations for workfiles workflow. 
+ """ + + if isinstance(host, IWorkfileHost): + return [] + + required = [ + "open_file", + "save_file", + "current_file", + "has_unsaved_changes", + "file_extensions", + "work_root", + ] + missing = [] + for name in required: + if not hasattr(host, name): + missing.append(name) + return missing + + @staticmethod + def validate_workfile_methods(host): + """Validate methods of "old type" host for workfiles workflow. + + Args: + Union[ModuleType, HostBase]: Object of host to validate. + + Raises: + MissingMethodsError: If there are missing methods on host + implementation. + """ + + missing = IWorkfileHost.get_missing_workfile_methods(host) + if missing: + raise MissingMethodsError(host, missing) + + @abstractmethod + def get_workfile_extensions(self): + """Extensions that can be used as save. + + Questions: + This could potentially use 'HostDefinition'. + """ + + return [] + + @abstractmethod + def save_workfile(self, dst_path=None): + """Save currently opened scene. + + Args: + dst_path (str): Where the current scene should be saved. Or use + current path if 'None' is passed. + """ + + pass + + @abstractmethod + def open_workfile(self, filepath): + """Open passed filepath in the host. + + Args: + filepath (str): Path to workfile. + """ + + pass + + @abstractmethod + def get_current_workfile(self): + """Retreive path to current opened file. + + Returns: + str: Path to file which is currently opened. + None: If nothing is opened. + """ + + return None + + def workfile_has_unsaved_changes(self): + """Currently opened scene is saved. + + Not all hosts can know if current scene is saved because the API of + DCC does not support it. + + Returns: + bool: True if scene is saved and False if has unsaved + modifications. + None: Can't tell if workfiles has modifications. + """ + + return None + + def work_root(self, session): + """Modify workdir per host. + + Default implementation keeps workdir untouched. + + Warnings: + We must handle this modification with more sofisticated way because + this can't be called out of DCC so opening of last workfile + (calculated before DCC is launched) is complicated. Also breaking + defined work template is not a good idea. + Only place where it's really used and can make sense is Maya. There + workspace.mel can modify subfolders where to look for maya files. + + Args: + session (dict): Session context data. + + Returns: + str: Path to new workdir. + """ + + return session["AVALON_WORKDIR"] + + # --- Deprecated method names --- + def file_extensions(self): + """Deprecated variant of 'get_workfile_extensions'. + + Todo: + Remove when all usages are replaced. + """ + return self.get_workfile_extensions() + + def save_file(self, dst_path=None): + """Deprecated variant of 'save_workfile'. + + Todo: + Remove when all usages are replaced. + """ + + self.save_workfile() + + def open_file(self, filepath): + """Deprecated variant of 'open_workfile'. + + Todo: + Remove when all usages are replaced. + """ + + return self.open_workfile(filepath) + + def current_file(self): + """Deprecated variant of 'get_current_workfile'. + + Todo: + Remove when all usages are replaced. + """ + + return self.get_current_workfile() + + def has_unsaved_changes(self): + """Deprecated variant of 'workfile_has_unsaved_changes'. + + Todo: + Remove when all usages are replaced. + """ + + return self.workfile_has_unsaved_changes() + + +class INewPublisher: + """Functions related to new creation system in new publisher. 
+@six.add_metaclass(ABCMeta)
+class INewPublisher:
+    """Functions related to the new creation system in the new publisher.
+
+    The new publisher stores not only information about each created
+    instance but also some global data. At this moment the data relate only
+    to context publish plugins, but that can be extended in the future.
+    """
+
+    @staticmethod
+    def get_missing_publish_methods(host):
+        """Look for missing methods on an "old type" host implementation.
+
+        The method is used for validation of implemented functions related
+        to new publish creation. It checks only the existence of the methods.
+
+        Args:
+            host (Union[ModuleType, HostBase]): Host module where to look
+                for required methods.
+
+        Returns:
+            list[str]: Missing method implementations for the new publisher
+                workflow.
+        """
+
+        if isinstance(host, INewPublisher):
+            return []
+
+        required = [
+            "get_context_data",
+            "update_context_data",
+        ]
+        missing = []
+        for name in required:
+            if not hasattr(host, name):
+                missing.append(name)
+        return missing
+
+    @staticmethod
+    def validate_publish_methods(host):
+        """Validate implemented methods of an "old type" host.
+
+        Args:
+            host (Union[ModuleType, HostBase]): Host module to validate.
+
+        Raises:
+            MissingMethodsError: If there are missing methods on the host
+                implementation.
+        """
+        missing = INewPublisher.get_missing_publish_methods(host)
+        if missing:
+            raise MissingMethodsError(host, missing)
+
+    @abstractmethod
+    def get_context_data(self):
+        """Get global data related to creation-publishing from the workfile.
+
+        These data are not related to any created instance but to the whole
+        publishing context. If they are not saved and returned, each reset
+        of publishing resets all values to the defaults.
+
+        Context data can contain information about enabled/disabled publish
+        plugins or other values that can be filled in by the artist.
+
+        Returns:
+            dict: Context data stored using 'update_context_data'.
+        """
+
+        pass
+
+    @abstractmethod
+    def update_context_data(self, data, changes):
+        """Store global context data to the workfile.
+
+        Called when some values in the context data have changed.
+
+        If the values are not stored in a way that 'get_context_data' can
+        return them, each reset of publishing loses the values the artist
+        filled in. Best practice is to store the values into the workfile,
+        if possible.
+
+        Args:
+            data (dict): New data as a whole.
+            changes (dict): Only the data that have changed. Each value is a
+                tuple of '(<old>, <new>)' values.
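A sketch of how a host might satisfy this pair of methods; this version only keeps the data in memory, whereas a real implementation would persist it into the workfile:

```python
from openpype.host import INewPublisher

class ExamplePublishHost(INewPublisher):
    """In-memory sketch; a real host would write into the workfile."""

    def __init__(self):
        self._context_data = {}

    def get_context_data(self):
        return dict(self._context_data)

    def update_context_data(self, data, changes):
        # 'changes' maps each changed key to an (<old>, <new>) tuple; this
        # sketch simply stores the new data wholesale.
        self._context_data = dict(data)
```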
+ """ + + pass diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py index 93d81145bc3..ea405b028e1 100644 --- a/openpype/hosts/blender/api/pipeline.py +++ b/openpype/hosts/blender/api/pipeline.py @@ -93,7 +93,7 @@ def set_start_end_frames(): # Default scene settings frameStart = scene.frame_start frameEnd = scene.frame_end - fps = scene.render.fps + fps = scene.render.fps / scene.render.fps_base resolution_x = scene.render.resolution_x resolution_y = scene.render.resolution_y @@ -116,7 +116,8 @@ def set_start_end_frames(): scene.frame_start = frameStart scene.frame_end = frameEnd - scene.render.fps = fps + scene.render.fps = round(fps) + scene.render.fps_base = round(fps) / fps scene.render.resolution_x = resolution_x scene.render.resolution_y = resolution_y diff --git a/openpype/hosts/blender/hooks/pre_pyside_install.py b/openpype/hosts/blender/hooks/pre_pyside_install.py index a37f8f03793..e5f66d2a26e 100644 --- a/openpype/hosts/blender/hooks/pre_pyside_install.py +++ b/openpype/hosts/blender/hooks/pre_pyside_install.py @@ -1,6 +1,7 @@ import os import re import subprocess +from platform import system from openpype.lib import PreLaunchHook @@ -13,12 +14,9 @@ class InstallPySideToBlender(PreLaunchHook): For pipeline implementation is required to have Qt binding installed in blender's python packages. - - Prelaunch hook can work only on Windows right now. """ app_groups = ["blender"] - platforms = ["windows"] def execute(self): # Prelaunch hook is not crucial @@ -34,25 +32,28 @@ def inner_execute(self): # Get blender's python directory version_regex = re.compile(r"^[2-3]\.[0-9]+$") + platform = system().lower() executable = self.launch_context.executable.executable_path - if os.path.basename(executable).lower() != "blender.exe": + expected_executable = "blender" + if platform == "windows": + expected_executable += ".exe" + + if os.path.basename(executable).lower() != expected_executable: self.log.info(( - "Executable does not lead to blender.exe file. Can't determine" - " blender's python to check/install PySide2." + f"Executable does not lead to {expected_executable} file." + "Can't determine blender's python to check/install PySide2." 
)) return - executable_dir = os.path.dirname(executable) + versions_dir = os.path.dirname(executable) + if platform == "darwin": + versions_dir = os.path.join( + os.path.dirname(versions_dir), "Resources" + ) version_subfolders = [] - for name in os.listdir(executable_dir): - fullpath = os.path.join(name, executable_dir) - if not os.path.isdir(fullpath): - continue - - if not version_regex.match(name): - continue - - version_subfolders.append(name) + for dir_entry in os.scandir(versions_dir): + if dir_entry.is_dir() and version_regex.match(dir_entry.name): + version_subfolders.append(dir_entry.name) if not version_subfolders: self.log.info( @@ -72,16 +73,21 @@ def inner_execute(self): version_subfolder = version_subfolders[0] - pythond_dir = os.path.join( - os.path.dirname(executable), - version_subfolder, - "python" - ) + python_dir = os.path.join(versions_dir, version_subfolder, "python") + python_lib = os.path.join(python_dir, "lib") + python_version = "python" + + if platform != "windows": + for dir_entry in os.scandir(python_lib): + if dir_entry.is_dir() and dir_entry.name.startswith("python"): + python_lib = dir_entry.path + python_version = dir_entry.name + break # Change PYTHONPATH to contain blender's packages as first python_paths = [ - os.path.join(pythond_dir, "lib"), - os.path.join(pythond_dir, "lib", "site-packages"), + python_lib, + os.path.join(python_lib, "site-packages"), ] python_path = self.launch_context.env.get("PYTHONPATH") or "" for path in python_path.split(os.pathsep): @@ -91,7 +97,15 @@ def inner_execute(self): self.launch_context.env["PYTHONPATH"] = os.pathsep.join(python_paths) # Get blender's python executable - python_executable = os.path.join(pythond_dir, "bin", "python.exe") + python_bin = os.path.join(python_dir, "bin") + if platform == "windows": + python_executable = os.path.join(python_bin, "python.exe") + else: + python_executable = os.path.join(python_bin, python_version) + # Check for python with enabled 'pymalloc' + if not os.path.exists(python_executable): + python_executable += "m" + if not os.path.exists(python_executable): self.log.warning( "Couldn't find python executable for blender. {}".format( @@ -106,7 +120,15 @@ def inner_execute(self): return # Install PySide2 in blender's python - self.install_pyside_windows(python_executable) + if platform == "windows": + result = self.install_pyside_windows(python_executable) + else: + result = self.install_pyside(python_executable) + + if result: + self.log.info("Successfully installed PySide2 module to blender.") + else: + self.log.warning("Failed to install PySide2 module to blender.") def install_pyside_windows(self, python_executable): """Install PySide2 python module to blender's python. @@ -144,19 +166,41 @@ def install_pyside_windows(self, python_executable): lpDirectory=os.path.dirname(python_executable) ) process_handle = process_info["hProcess"] - obj = win32event.WaitForSingleObject( - process_handle, win32event.INFINITE - ) + win32event.WaitForSingleObject(process_handle, win32event.INFINITE) returncode = win32process.GetExitCodeProcess(process_handle) - if returncode == 0: - self.log.info( - "Successfully installed PySide2 module to blender." 
-            )
-            return
+            return returncode == 0
         except pywintypes.error:
             pass
 
-        self.log.warning("Failed to install PySide2 module to blender.")
+
+    def install_pyside(self, python_executable):
+        """Install PySide2 python module to blender's python."""
+        try:
+            # Parameters
+            # - run pip as a module ("-m pip") to install PySide2, and pass
+            #   "--ignore-installed" to force installing the module into
+            #   blender's site-packages and make sure it is binary
+            #   compatible
+            args = [
+                python_executable,
+                "-m",
+                "pip",
+                "install",
+                "--ignore-installed",
+                "PySide2",
+            ]
+            process = subprocess.Popen(
+                args, stdout=subprocess.PIPE, universal_newlines=True
+            )
+            process.communicate()
+            return process.returncode == 0
+        except PermissionError:
+            self.log.warning(
+                "Permission denied with command:"
+                "\"{}\".".format(" ".join(args))
+            )
+        except OSError as error:
+            self.log.warning(f"OS error has occurred: \"{error}\".")
+        except subprocess.SubprocessError:
+            pass
 
     def is_pyside_installed(self, python_executable):
         """Check if PySide2 module is in blender's pip list.
@@ -169,7 +213,7 @@ def is_pyside_installed(self, python_executable):
         args = [python_executable, "-m", "pip", "list"]
         process = subprocess.Popen(args, stdout=subprocess.PIPE)
         stdout, _ = process.communicate()
-        lines = stdout.decode().split("\r\n")
+        lines = stdout.decode().split(os.linesep)
         # The second line contains dashes that define the maximum length of
         # the module name. The second column of dashes defines the maximum
         # length of the module version.
         package_dashes, *_ = lines[1].split(" ")
diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
index 0aca7c38d5c..5db89a0ab96 100644
--- a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
+++ b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
@@ -1,8 +1,11 @@
 import re
 import pyblish
 import openpype.hosts.flame.api as opfapi
 from openpype.hosts.flame.otio import flame_export
-import openpype.lib as oplib
+from openpype.pipeline.editorial import (
+    is_overlapping_otio_ranges,
+    get_media_range_with_retimes
+)
 
 # # developer reload modules
 from pprint import pformat
@@ -75,6 +78,12 @@ def process(self, context):
                 marker_data["handleEnd"]
             )
 
+            # make sure None values are replaced with 0
+            if head is None:
+                head = 0
+            if tail is None:
+                tail = 0
+
             # make sure value is absolute
             if head != 0:
                 head = abs(head)
@@ -125,7 +134,8 @@ def process(self, context):
                 "flameAddTasks": self.add_tasks,
                 "tasks": {
                     task["name"]: {"type": task["type"]}
-                    for task in self.add_tasks}
+                    for task in self.add_tasks},
+                "representations": []
             })
 
             self.log.debug("__ inst_data: {}".format(pformat(inst_data)))
@@ -271,7 +281,7 @@ def _get_head_tail(self, clip_data, otio_clip, handle_start, handle_end):
 
         # HACK: it is here to serve for versions below 2021.1
         if not any([head, tail]):
-            retimed_attributes = oplib.get_media_range_with_retimes(
+            retimed_attributes = get_media_range_with_retimes(
                 otio_clip, handle_start, handle_end)
             self.log.debug(
                 ">> retimed_attributes: {}".format(retimed_attributes))
@@ -370,7 +380,7 @@ def _get_otio_clip_instance_data(self, clip_data):
                 continue
             if otio_clip.name not in segment.name.get_value():
                 continue
-            if oplib.is_overlapping_otio_ranges(
+            if is_overlapping_otio_ranges(
                     parent_range, timeline_range, strict=True):
 
                 # add pypedata marker to otio_clip metadata
diff --git a/openpype/hosts/flame/plugins/publish/extract_subset_resources.py
b/openpype/hosts/flame/plugins/publish/extract_subset_resources.py index 6ad8f8cf966..d34f5d58545 100644 --- a/openpype/hosts/flame/plugins/publish/extract_subset_resources.py +++ b/openpype/hosts/flame/plugins/publish/extract_subset_resources.py @@ -23,6 +23,8 @@ class ExtractSubsetResources(openpype.api.Extractor): hosts = ["flame"] # plugin defaults + keep_original_representation = False + default_presets = { "thumbnail": { "active": True, @@ -45,7 +47,9 @@ class ExtractSubsetResources(openpype.api.Extractor): export_presets_mapping = {} def process(self, instance): - if "representations" not in instance.data: + + if not self.keep_original_representation: + # remove previeous representation if not needed instance.data["representations"] = [] # flame objects @@ -82,7 +86,11 @@ def process(self, instance): # add default preset type for thumbnail and reviewable video # update them with settings and override in case the same # are found in there - export_presets = deepcopy(self.default_presets) + _preset_keys = [k.split('_')[0] for k in self.export_presets_mapping] + export_presets = { + k: v for k, v in deepcopy(self.default_presets).items() + if k not in _preset_keys + } export_presets.update(self.export_presets_mapping) # loop all preset names and @@ -218,9 +226,14 @@ def process(self, instance): opfapi.export_clip( export_dir_path, exporting_clip, preset_path, **export_kwargs) + repr_name = unique_name # make sure only first segment is used if underscore in name # HACK: `ftrackreview_withLUT` will result only in `ftrackreview` - repr_name = unique_name.split("_")[0] + if ( + "thumbnail" in unique_name + or "ftrackreview" in unique_name + ): + repr_name = unique_name.split("_")[0] # create representation data representation_data = { @@ -259,7 +272,7 @@ def process(self, instance): if os.path.splitext(f)[-1] == ".mov" ] # then try if thumbnail is not in unique name - or unique_name == "thumbnail" + or repr_name == "thumbnail" ): representation_data["files"] = files.pop() else: diff --git a/openpype/hosts/fusion/api/lib.py b/openpype/hosts/fusion/api/lib.py index 29f3a3a3eb3..001eb636ee3 100644 --- a/openpype/hosts/fusion/api/lib.py +++ b/openpype/hosts/fusion/api/lib.py @@ -3,9 +3,16 @@ import re import contextlib -from bson.objectid import ObjectId from Qt import QtGui +from openpype.client import ( + get_asset_by_name, + get_subset_by_name, + get_last_version_by_subset_id, + get_representation_by_id, + get_representation_by_name, + get_representation_parents, +) from openpype.pipeline import ( switch_container, legacy_io, @@ -93,13 +100,16 @@ def switch_item(container, raise ValueError("Must have at least one change provided to switch.") # Collect any of current asset, subset and representation if not provided - # so we can use the original name from those. + # so we can use the original name from those. 
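To make the Flame preset filtering above concrete, a standalone example with hypothetical preset names — a settings key such as "thumbnail_png" suppresses the built-in "thumbnail" default before the mapping is merged in:

```python
from copy import deepcopy

default_presets = {"thumbnail": {"active": True}}
export_presets_mapping = {"thumbnail_png": {"active": False}}

# The token before the first underscore decides which default is replaced.
_preset_keys = [k.split("_")[0] for k in export_presets_mapping]
export_presets = {
    k: v for k, v in deepcopy(default_presets).items()
    if k not in _preset_keys
}
export_presets.update(export_presets_mapping)
print(export_presets)  # {'thumbnail_png': {'active': False}}
```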
+    project_name = legacy_io.active_project()
     if any(not x for x in [asset_name, subset_name, representation_name]):
-        _id = ObjectId(container["representation"])
-        representation = legacy_io.find_one({
-            "type": "representation", "_id": _id
-        })
-        version, subset, asset, project = legacy_io.parenthood(representation)
+        repre_id = container["representation"]
+        representation = get_representation_by_id(project_name, repre_id)
+        repre_parent_docs = get_representation_parents(
+            project_name, representation
+        )
+        if repre_parent_docs:
+            version, subset, asset, _ = repre_parent_docs
+        else:
+            version = subset = asset = None
 
         if asset_name is None:
             asset_name = asset["name"]
@@ -111,39 +121,26 @@
             representation_name = representation["name"]
 
     # Find the new one
-    asset = legacy_io.find_one({
-        "name": asset_name,
-        "type": "asset"
-    })
+    asset = get_asset_by_name(project_name, asset_name, fields=["_id"])
     assert asset, ("Could not find asset in the database with the name "
                    "'%s'" % asset_name)
 
-    subset = legacy_io.find_one({
-        "name": subset_name,
-        "type": "subset",
-        "parent": asset["_id"]
-    })
+    subset = get_subset_by_name(
+        project_name, subset_name, asset["_id"], fields=["_id"]
+    )
     assert subset, ("Could not find subset in the database with the name "
                     "'%s'" % subset_name)
 
-    version = legacy_io.find_one(
-        {
-            "type": "version",
-            "parent": subset["_id"]
-        },
-        sort=[('name', -1)]
+    version = get_last_version_by_subset_id(
+        project_name, subset["_id"], fields=["_id"]
     )
-
     assert version, "Could not find a version for {}.{}".format(
         asset_name, subset_name
     )
 
-    representation = legacy_io.find_one({
-        "name": representation_name,
-        "type": "representation",
-        "parent": version["_id"]}
+    representation = get_representation_by_name(
+        project_name, representation_name, version["_id"]
     )
-
     assert representation, ("Could not find representation in the database "
                             "with the name '%s'" % representation_name)
diff --git a/openpype/hosts/fusion/plugins/load/load_sequence.py b/openpype/hosts/fusion/plugins/load/load_sequence.py
index b860abd88b6..abd0f4e411e 100644
--- a/openpype/hosts/fusion/plugins/load/load_sequence.py
+++ b/openpype/hosts/fusion/plugins/load/load_sequence.py
@@ -1,6 +1,7 @@
 import os
 import contextlib
 
+from openpype.client import get_version_by_id
 from openpype.pipeline import (
     load,
     legacy_io,
@@ -123,7 +124,7 @@ def loader_shift(loader, frame, relative=True):
 class FusionLoadSequence(load.LoaderPlugin):
     """Load image sequence into Fusion"""
 
-    families = ["imagesequence", "review", "render"]
+    families = ["imagesequence", "review", "render", "plate"]
     representations = ["*"]
 
     label = "Load sequence"
@@ -211,10 +212,8 @@ def update(self, container, representation):
         path = self._get_first_image(root)
 
         # Get start frame from version data
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
+        project_name = legacy_io.active_project()
+        version = get_version_by_id(project_name, representation["parent"])
         start = version["data"].get("frameStart")
         if start is None:
             self.log.warning("Missing start frame for updated version"
diff --git a/openpype/hosts/fusion/scripts/fusion_switch_shot.py b/openpype/hosts/fusion/scripts/fusion_switch_shot.py
index 704f4207965..52a157c56eb 100644
--- a/openpype/hosts/fusion/scripts/fusion_switch_shot.py
+++ b/openpype/hosts/fusion/scripts/fusion_switch_shot.py
@@ -4,6 +4,11 @@
 import logging
 
 # Pipeline imports
+from openpype.client import (
+    get_project,
+    get_asset_by_name,
+    get_versions,
+)
 from openpype.pipeline
import ( legacy_io, install_host, @@ -164,9 +169,9 @@ def update_frame_range(comp, representations): """ - version_ids = [r["parent"] for r in representations] - versions = legacy_io.find({"type": "version", "_id": {"$in": version_ids}}) - versions = list(versions) + project_name = legacy_io.active_project() + version_ids = {r["parent"] for r in representations} + versions = list(get_versions(project_name, version_ids)) versions = [v for v in versions if v["data"].get("frameStart", None) is not None] @@ -203,11 +208,12 @@ def switch(asset_name, filepath=None, new=True): # Assert asset name exists # It is better to do this here then to wait till switch_shot does it - asset = legacy_io.find_one({"type": "asset", "name": asset_name}) + project_name = legacy_io.active_project() + asset = get_asset_by_name(project_name, asset_name) assert asset, "Could not find '%s' in the database" % asset_name # Get current project - self._project = legacy_io.find_one({"type": "project"}) + self._project = get_project(project_name) # Go to comp if not filepath: diff --git a/openpype/hosts/fusion/utility_scripts/switch_ui.py b/openpype/hosts/fusion/utility_scripts/switch_ui.py index 70eb3d0a19c..01d55db6473 100644 --- a/openpype/hosts/fusion/utility_scripts/switch_ui.py +++ b/openpype/hosts/fusion/utility_scripts/switch_ui.py @@ -7,6 +7,7 @@ import qtawesome as qta +from openpype.client import get_assets from openpype import style from openpype.pipeline import ( install_host, @@ -142,7 +143,7 @@ def _refresh(self): # Clear any existing items self._assets.clear() - asset_names = [a["name"] for a in self.collect_assets()] + asset_names = self.collect_asset_names() completer = QtWidgets.QCompleter(asset_names) self._assets.setCompleter(completer) @@ -165,8 +166,14 @@ def collect_slap_comps(self, directory): items = glob.glob("{}/*.comp".format(directory)) return items - def collect_assets(self): - return list(legacy_io.find({"type": "asset"}, {"name": True})) + def collect_asset_names(self): + project_name = legacy_io.active_project() + asset_docs = get_assets(project_name, fields=["name"]) + asset_names = { + asset_doc["name"] + for asset_doc in asset_docs + } + return list(asset_names) def populate_comp_box(self, files): """Ensure we display the filename only but the path is stored as well diff --git a/openpype/hosts/hiero/api/lib.py b/openpype/hosts/hiero/api/lib.py index 8c8c31bc4c3..2f66f3ddd71 100644 --- a/openpype/hosts/hiero/api/lib.py +++ b/openpype/hosts/hiero/api/lib.py @@ -19,8 +19,9 @@ get_last_versions, get_representations, ) -from openpype.pipeline import legacy_io -from openpype.api import (Logger, Anatomy, get_anatomy_settings) +from openpype.settings import get_anatomy_settings +from openpype.pipeline import legacy_io, Anatomy +from openpype.api import Logger from . 
import tags try: diff --git a/openpype/hosts/hiero/plugins/publish/precollect_instances.py b/openpype/hosts/hiero/plugins/publish/precollect_instances.py index b891a37d9dd..2d0ec6fc997 100644 --- a/openpype/hosts/hiero/plugins/publish/precollect_instances.py +++ b/openpype/hosts/hiero/plugins/publish/precollect_instances.py @@ -1,5 +1,5 @@ import pyblish -import openpype +from openpype.pipeline.editorial import is_overlapping_otio_ranges from openpype.hosts.hiero import api as phiero from openpype.hosts.hiero.api.otio import hiero_export import hiero @@ -275,7 +275,7 @@ def test_any_audio(self, track_item): parent_range = otio_audio.range_in_parent() # if any overaling clip found then return True - if openpype.lib.is_overlapping_otio_ranges( + if is_overlapping_otio_ranges( parent_range, timeline_range, strict=False): return True @@ -304,7 +304,7 @@ def get_otio_clip_instance_data(self, track_item): continue self.log.debug("__ parent_range: {}".format(parent_range)) self.log.debug("__ timeline_range: {}".format(timeline_range)) - if openpype.lib.is_overlapping_otio_ranges( + if is_overlapping_otio_ranges( parent_range, timeline_range, strict=True): # add pypedata marker to otio_clip metadata diff --git a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py index 202287f1c39..d7d1c79d731 100644 --- a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py +++ b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py @@ -5,8 +5,7 @@ import colorbleed.usdlib as usdlib from openpype.client import get_asset_by_name -from openpype.api import Anatomy -from openpype.pipeline import legacy_io +from openpype.pipeline import legacy_io, Anatomy class AvalonURIOutputProcessor(base.OutputProcessorBase): diff --git a/openpype/hosts/maya/api/__init__.py b/openpype/hosts/maya/api/__init__.py index 5d76bf0f04f..a6c5f50e1a8 100644 --- a/openpype/hosts/maya/api/__init__.py +++ b/openpype/hosts/maya/api/__init__.py @@ -5,11 +5,11 @@ """ from .pipeline import ( - install, uninstall, ls, containerise, + MayaHost, ) from .plugin import ( Creator, @@ -40,11 +40,11 @@ __all__ = [ - "install", "uninstall", "ls", "containerise", + "MayaHost", "Creator", "Loader", diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 43f738bfc78..34340a13a53 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -3224,7 +3224,7 @@ def maintained_time(): cmds.currentTime(ct, edit=True) -def iter_visible_in_frame_range(nodes, start, end): +def iter_visible_nodes_in_range(nodes, start, end): """Yield nodes that are visible in start-end frame range. - Ignores intermediateObjects completely. diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index d9276ddf4a1..d08e8d19261 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -1,13 +1,15 @@ import os -import sys import errno import logging +import contextlib from maya import utils, cmds, OpenMaya import maya.api.OpenMaya as om import pyblish.api +from openpype.settings import get_project_settings +from openpype.host import HostBase, IWorkfileHost, ILoadHost import openpype.hosts.maya from openpype.tools.utils import host_tools from openpype.lib import ( @@ -28,6 +30,14 @@ ) from openpype.hosts.maya.lib import copy_workspace_mel from . 
import menu, lib +from .workio import ( + open_file, + save_file, + file_extensions, + has_unsaved_changes, + work_root, + current_file +) log = logging.getLogger("openpype.hosts.maya") @@ -40,49 +50,121 @@ AVALON_CONTAINERS = ":AVALON_CONTAINERS" -self = sys.modules[__name__] -self._ignore_lock = False -self._events = {} +class MayaHost(HostBase, IWorkfileHost, ILoadHost): + name = "maya" -def install(): - from openpype.settings import get_project_settings + def __init__(self): + super(MayaHost, self).__init__() + self._op_events = {} - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) - # process path mapping - dirmap_processor = MayaDirmap("maya", project_settings) - dirmap_processor.process_dirmap() + def install(self): + project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + # process path mapping + dirmap_processor = MayaDirmap("maya", project_settings) + dirmap_processor.process_dirmap() - pyblish.api.register_plugin_path(PUBLISH_PATH) - pyblish.api.register_host("mayabatch") - pyblish.api.register_host("mayapy") - pyblish.api.register_host("maya") + pyblish.api.register_plugin_path(PUBLISH_PATH) + pyblish.api.register_host("mayabatch") + pyblish.api.register_host("mayapy") + pyblish.api.register_host("maya") - register_loader_plugin_path(LOAD_PATH) - register_creator_plugin_path(CREATE_PATH) - register_inventory_action_path(INVENTORY_PATH) - log.info(PUBLISH_PATH) + register_loader_plugin_path(LOAD_PATH) + register_creator_plugin_path(CREATE_PATH) + register_inventory_action_path(INVENTORY_PATH) + self.log.info(PUBLISH_PATH) - log.info("Installing callbacks ... ") - register_event_callback("init", on_init) + self.log.info("Installing callbacks ... ") + register_event_callback("init", on_init) - if lib.IS_HEADLESS: - log.info(("Running in headless mode, skipping Maya " - "save/open/new callback installation..")) + if lib.IS_HEADLESS: + self.log.info(( + "Running in headless mode, skipping Maya save/open/new" + " callback installation.." 
+ )) - return + return + + _set_project() + self._register_callbacks() + + menu.install() + + register_event_callback("save", on_save) + register_event_callback("open", on_open) + register_event_callback("new", on_new) + register_event_callback("before.save", on_before_save) + register_event_callback("taskChanged", on_task_changed) + register_event_callback("workfile.save.before", before_workfile_save) + + def open_workfile(self, filepath): + return open_file(filepath) + + def save_workfile(self, filepath=None): + return save_file(filepath) - _set_project() - _register_callbacks() + def work_root(self, session): + return work_root(session) - menu.install() + def get_current_workfile(self): + return current_file() - register_event_callback("save", on_save) - register_event_callback("open", on_open) - register_event_callback("new", on_new) - register_event_callback("before.save", on_before_save) - register_event_callback("taskChanged", on_task_changed) - register_event_callback("workfile.save.before", before_workfile_save) + def workfile_has_unsaved_changes(self): + return has_unsaved_changes() + + def get_workfile_extensions(self): + return file_extensions() + + def get_containers(self): + return ls() + + @contextlib.contextmanager + def maintained_selection(self): + with lib.maintained_selection(): + yield + + def _register_callbacks(self): + for handler, event in self._op_events.copy().items(): + if event is None: + continue + + try: + OpenMaya.MMessage.removeCallback(event) + self._op_events[handler] = None + except RuntimeError as exc: + self.log.info(exc) + + self._op_events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback( + OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save + ) + + self._op_events[_before_scene_save] = ( + OpenMaya.MSceneMessage.addCheckCallback( + OpenMaya.MSceneMessage.kBeforeSaveCheck, + _before_scene_save + ) + ) + + self._op_events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback( + OpenMaya.MSceneMessage.kAfterNew, _on_scene_new + ) + + self._op_events[_on_maya_initialized] = ( + OpenMaya.MSceneMessage.addCallback( + OpenMaya.MSceneMessage.kMayaInitialized, + _on_maya_initialized + ) + ) + + self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback( + OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open + ) + + self.log.info("Installed event handler _on_scene_save..") + self.log.info("Installed event handler _before_scene_save..") + self.log.info("Installed event handler _on_scene_new..") + self.log.info("Installed event handler _on_maya_initialized..") + self.log.info("Installed event handler _on_scene_open..") def _set_project(): @@ -106,44 +188,6 @@ def _set_project(): cmds.workspace(workdir, openWorkspace=True) -def _register_callbacks(): - for handler, event in self._events.copy().items(): - if event is None: - continue - - try: - OpenMaya.MMessage.removeCallback(event) - self._events[handler] = None - except RuntimeError as e: - log.info(e) - - self._events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback( - OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save - ) - - self._events[_before_scene_save] = OpenMaya.MSceneMessage.addCheckCallback( - OpenMaya.MSceneMessage.kBeforeSaveCheck, _before_scene_save - ) - - self._events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback( - OpenMaya.MSceneMessage.kAfterNew, _on_scene_new - ) - - self._events[_on_maya_initialized] = OpenMaya.MSceneMessage.addCallback( - OpenMaya.MSceneMessage.kMayaInitialized, _on_maya_initialized - ) - - self._events[_on_scene_open] = 
OpenMaya.MSceneMessage.addCallback( - OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open - ) - - log.info("Installed event handler _on_scene_save..") - log.info("Installed event handler _before_scene_save..") - log.info("Installed event handler _on_scene_new..") - log.info("Installed event handler _on_maya_initialized..") - log.info("Installed event handler _on_scene_open..") - - def _on_maya_initialized(*args): emit_event("init") @@ -475,7 +519,6 @@ def on_task_changed(): workdir = legacy_io.Session["AVALON_WORKDIR"] if os.path.exists(workdir): log.info("Updating Maya workspace for task change to %s", workdir) - _set_project() # Set Maya fileDialog's start-dir to /scenes diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index f05893a7b46..9280805945f 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -9,8 +9,8 @@ LoaderPlugin, get_representation_path, AVALON_CONTAINER_ID, + Anatomy, ) -from openpype.api import Anatomy from openpype.settings import get_project_settings from .pipeline import containerise from . import lib diff --git a/openpype/hosts/maya/api/setdress.py b/openpype/hosts/maya/api/setdress.py index bea8f154b14..159bfe9eb30 100644 --- a/openpype/hosts/maya/api/setdress.py +++ b/openpype/hosts/maya/api/setdress.py @@ -296,12 +296,9 @@ def update_package_version(container, version): assert current_representation is not None, "This is a bug" - repre_parents = get_representation_parents( - project_name, current_representation + version_doc, subset_doc, asset_doc, project_doc = ( + get_representation_parents(project_name, current_representation) ) - version_doc = subset_doc = asset_doc = project_doc = None - if repre_parents: - version_doc, subset_doc, asset_doc, project_doc = repre_parents if version == -1: new_version = get_last_version_by_subset_id( diff --git a/openpype/hosts/maya/plugins/create/create_review.py b/openpype/hosts/maya/plugins/create/create_review.py index fbf3399f61d..ba51ffa0093 100644 --- a/openpype/hosts/maya/plugins/create/create_review.py +++ b/openpype/hosts/maya/plugins/create/create_review.py @@ -15,6 +15,8 @@ class CreateReview(plugin.Creator): keepImages = False isolate = False imagePlane = True + Width = 0 + Height = 0 transparency = [ "preset", "simple", @@ -33,6 +35,8 @@ def __init__(self, *args, **kwargs): for key, value in animation_data.items(): data[key] = value + data["review_width"] = self.Width + data["review_height"] = self.Height data["isolate"] = self.isolate data["keepImages"] = self.keepImages data["imagePlane"] = self.imagePlane diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py b/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py new file mode 100644 index 00000000000..d458c5abda4 --- /dev/null +++ b/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py @@ -0,0 +1,139 @@ +import os + +from openpype.api import get_project_settings +from openpype.pipeline import ( + load, + get_representation_path +) +# TODO aiVolume doesn't automatically set velocity fps correctly, set manual? 
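Before moving on to the new VDB loaders: the callback bookkeeping in `_register_callbacks` above boils down to the Maya API's register/remove pair. A minimal standalone sketch of that pattern, assuming a running Maya session and the API 2.0 module (the pipeline's own `OpenMaya` import is not visible in this hunk, so the import line is an assumption):

```python
from maya.api import OpenMaya  # assumption: API 2.0 module

_events = {}  # handler -> callback id, mirroring MayaHost._op_events


def _on_scene_open(*args):
    print("scene opened")


def register_scene_open_callback():
    # Remove any previously installed callback first, as the cleanup
    # loop in _register_callbacks does, so handlers never stack up.
    callback_id = _events.get(_on_scene_open)
    if callback_id is not None:
        try:
            OpenMaya.MMessage.removeCallback(callback_id)
        except RuntimeError:
            pass

    _events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
        OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
    )
```

Keeping the callback ids keyed by handler is what lets the method above run safely more than once per session.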
+
+
+class LoadVDBtoArnold(load.LoaderPlugin):
+    """Load OpenVDB for Arnold in aiVolume"""
+
+    families = ["vdbcache"]
+    representations = ["vdb"]
+
+    label = "Load VDB to Arnold"
+    icon = "cloud"
+    color = "orange"
+
+    def load(self, context, name, namespace, data):
+
+        from maya import cmds
+        from openpype.hosts.maya.api.pipeline import containerise
+        from openpype.hosts.maya.api.lib import unique_namespace
+
+        try:
+            family = context["representation"]["context"]["family"]
+        except KeyError:
+            family = "vdbcache"
+
+        # Check if the Arnold plug-in is available on this machine
+        try:
+            cmds.loadPlugin("mtoa", quiet=True)
+        except Exception as exc:
+            self.log.error("Encountered exception:\n%s" % exc)
+            return
+
+        asset = context['asset']
+        asset_name = asset["name"]
+        namespace = namespace or unique_namespace(
+            asset_name + "_",
+            prefix="_" if asset_name[0].isdigit() else "",
+            suffix="_",
+        )
+
+        # Root group
+        label = "{}:{}".format(namespace, name)
+        root = cmds.group(name=label, empty=True)
+
+        settings = get_project_settings(os.environ['AVALON_PROJECT'])
+        colors = settings['maya']['load']['colors']
+
+        c = colors.get(family)
+        if c is not None:
+            cmds.setAttr(root + ".useOutlinerColor", 1)
+            cmds.setAttr(root + ".outlinerColor",
+                         (float(c[0]) / 255),
+                         (float(c[1]) / 255),
+                         (float(c[2]) / 255)
+                         )
+
+        # Create the aiVolume shape under the root group
+        grid_node = cmds.createNode("aiVolume",
+                                    name="{}Shape".format(root),
+                                    parent=root)
+
+        self._set_path(grid_node,
+                       path=self.fname,
+                       representation=context["representation"])
+
+        # Lock the shape node so the user can't delete the transform/shape
+        # as if it was referenced
+        cmds.lockNode(grid_node, lock=True)
+
+        nodes = [root, grid_node]
+        self[:] = nodes
+
+        return containerise(
+            name=name,
+            namespace=namespace,
+            nodes=nodes,
+            context=context,
+            loader=self.__class__.__name__)
+
+    def update(self, container, representation):
+
+        from maya import cmds
+
+        path = get_representation_path(representation)
+
+        # Find the aiVolume node
+        members = cmds.sets(container['objectName'], query=True)
+        grid_nodes = cmds.ls(members, type="aiVolume", long=True)
+        assert len(grid_nodes) == 1, "This is a bug"
+
+        # Update the aiVolume file path
+        self._set_path(grid_nodes[0], path=path, representation=representation)
+
+        # Update container representation
+        cmds.setAttr(container["objectName"] + ".representation",
+                     str(representation["_id"]),
+                     type="string")
+
+    def switch(self, container, representation):
+        self.update(container, representation)
+
+    def remove(self, container):
+
+        from maya import cmds
+
+        # Get all members of the avalon container, ensure they are unlocked
+        # and delete everything
+        members = cmds.sets(container['objectName'], query=True)
+        cmds.lockNode(members, lock=False)
+        cmds.delete([container['objectName']] + members)
+
+        # Clean up the namespace
+        try:
+            cmds.namespace(removeNamespace=container['namespace'],
+                           deleteNamespaceContent=True)
+        except RuntimeError:
+            pass
+
+    @staticmethod
+    def _set_path(grid_node,
+                  path,
+                  representation):
+        """Apply the settings for the VDB path to the aiVolume node"""
+        from maya import cmds
+
+        if not os.path.exists(path):
+            raise RuntimeError("Path does not exist: %s" % path)
+
+        is_sequence = bool(representation["context"].get("frame"))
+        cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
+
+        # Set file path
+        cmds.setAttr(grid_node + ".filename", path, type="string")
diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py b/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py
index 70bd9d22e2a..c6a69dfe354 100644
--- a/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py
+++ b/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py
@@ -1,11 +1,21 @@
 import os
 
 from openpype.api import get_project_settings
-from openpype.pipeline import load
+from openpype.pipeline import (
+    load,
+    get_representation_path
+)
 
 
 class LoadVDBtoRedShift(load.LoaderPlugin):
-    """Load OpenVDB in a Redshift Volume Shape"""
+    """Load OpenVDB in a Redshift Volume Shape
+
+    Note that the RedshiftVolumeShape is created without a RedshiftVolume
+    shader assigned. To get the Redshift volume to render correctly, assign
+    a RedshiftVolume shader (in the Hypershade) and set the density, scatter
+    and emission channels to the channel names of the volumes in the VDB file.
+
+    """
 
     families = ["vdbcache"]
     representations = ["vdb"]
@@ -55,7 +65,7 @@ def load(self, context, name=None, namespace=None, data=None):
 
         # Root group
         label = "{}:{}".format(namespace, name)
-        root = cmds.group(name=label, empty=True)
+        root = cmds.createNode("transform", name=label)
 
         settings = get_project_settings(os.environ['AVALON_PROJECT'])
         colors = settings['maya']['load']['colors']
@@ -74,9 +84,9 @@ def load(self, context, name=None, namespace=None, data=None):
                                        name="{}RVSShape".format(label),
                                        parent=root)
 
-        cmds.setAttr("{}.fileName".format(volume_node),
-                     self.fname,
-                     type="string")
+        self._set_path(volume_node,
+                       path=self.fname,
+                       representation=context["representation"])
 
         nodes = [root, volume_node]
         self[:] = nodes
@@ -87,3 +97,56 @@ def load(self, context, name=None, namespace=None, data=None):
             nodes=nodes,
             context=context,
             loader=self.__class__.__name__)
+
+    def update(self, container, representation):
+        from maya import cmds
+
+        path = get_representation_path(representation)
+
+        # Find the RedshiftVolumeShape
+        members = cmds.sets(container['objectName'], query=True)
+        grid_nodes = cmds.ls(members, type="RedshiftVolumeShape", long=True)
+        assert len(grid_nodes) == 1, "This is a bug"
+
+        # Update the RedshiftVolumeShape file path
+        self._set_path(grid_nodes[0], path=path, representation=representation)
+
+        # Update container representation
+        cmds.setAttr(container["objectName"] + ".representation",
+                     str(representation["_id"]),
+                     type="string")
+
+    def remove(self, container):
+        from maya import cmds
+
+        # Get all members of the avalon container, ensure they are unlocked
+        # and delete everything
+        members = cmds.sets(container['objectName'], query=True)
+        cmds.lockNode(members, lock=False)
+        cmds.delete([container['objectName']] + members)
+
+        # Clean up the namespace
+        try:
+            cmds.namespace(removeNamespace=container['namespace'],
+                           deleteNamespaceContent=True)
+        except RuntimeError:
+            pass
+
+    def switch(self, container, representation):
+        self.update(container, representation)
+
+    @staticmethod
+    def _set_path(grid_node,
+                  path,
+                  representation):
+        """Apply the settings for the VDB path to the RedshiftVolumeShape"""
+        from maya import cmds
+
+        if not os.path.exists(path):
+            raise RuntimeError("Path does not exist: %s" % path)
+
+        is_sequence = bool(representation["context"].get("frame"))
+        cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
+
+        # Set file path
+        cmds.setAttr(grid_node + ".fileName", path, type="string")
diff --git a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py
index 15b89ad53cc..eb872c29358 100644
--- a/openpype/hosts/maya/plugins/publish/collect_review.py
+++ b/openpype/hosts/maya/plugins/publish/collect_review.py
@@ -71,6 +71,8 @@ def
process(self, instance): data['handles'] = instance.data.get('handles', None) data['step'] = instance.data['step'] data['fps'] = instance.data['fps'] + data['review_width'] = instance.data['review_width'] + data['review_height'] = instance.data['review_height'] data["isolate"] = instance.data["isolate"] cmds.setAttr(str(instance) + '.active', 1) self.log.debug('data {}'.format(instance.context[i].data)) diff --git a/openpype/hosts/maya/plugins/publish/extract_animation.py b/openpype/hosts/maya/plugins/publish/extract_animation.py index b0beb5968e6..8ed2d8d7a38 100644 --- a/openpype/hosts/maya/plugins/publish/extract_animation.py +++ b/openpype/hosts/maya/plugins/publish/extract_animation.py @@ -7,7 +7,7 @@ extract_alembic, suspended_refresh, maintained_selection, - iter_visible_in_frame_range + iter_visible_nodes_in_range ) @@ -83,7 +83,7 @@ def process(self, instance): # flag does not filter out those that are only hidden on some # frames as it counts "animated" or "connected" visibilities as # if it's always visible. - nodes = list(iter_visible_in_frame_range(nodes, + nodes = list(iter_visible_nodes_in_range(nodes, start=start, end=end)) diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py index 4110ad474d1..b744bfd0feb 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py @@ -30,7 +30,7 @@ def process(self, instance): # get cameras members = instance.data['setMembers'] - cameras = cmds.ls(members, leaf=True, shapes=True, long=True, + cameras = cmds.ls(members, leaf=True, long=True, dag=True, type="camera") # validate required settings @@ -61,10 +61,30 @@ def process(self, instance): if bake_to_worldspace: job_str += ' -worldSpace' - for member in member_shapes: - self.log.info(f"processing {member}") + + # if baked, drop the camera hierarchy to maintain + # clean output and backwards compatibility + camera_root = cmds.listRelatives( + camera, parent=True, fullPath=True)[0] + job_str += ' -root {0}'.format(camera_root) + + for member in members: + descendants = cmds.listRelatives(member, + allDescendents=True, + fullPath=True) or [] + shapes = cmds.ls(descendants, shapes=True, + noIntermediate=True, long=True) + cameras = cmds.ls(shapes, type="camera", long=True) + if cameras: + if not set(shapes) - set(cameras): + continue + self.log.warning(( + "Camera hierarchy contains additional geometry. 
" + "Extraction will fail.") + ) transform = cmds.listRelatives( - member, parent=True, fullPath=True)[0] + member, parent=True, fullPath=True) + transform = transform[0] if transform else member job_str += ' -root {0}'.format(transform) job_str += ' -file "{0}"'.format(path) diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py index 1cb30e65ea2..8d6c4b5f3ce 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py @@ -172,18 +172,19 @@ def process(self, instance): dag=True, shapes=True, long=True) + + attrs = {"backgroundColorR": 0.0, + "backgroundColorG": 0.0, + "backgroundColorB": 0.0, + "overscan": 1.0} + # Fix PLN-178: Don't allow background color to be non-black - for cam in cmds.ls( + for cam, (attr, value) in itertools.product(cmds.ls( baked_camera_shapes, type="camera", dag=True, - shapes=True, long=True): - attrs = {"backgroundColorR": 0.0, - "backgroundColorG": 0.0, - "backgroundColorB": 0.0, - "overscan": 1.0} - for attr, value in attrs.items(): - plug = "{0}.{1}".format(cam, attr) - unlock(plug) - cmds.setAttr(plug, value) + long=True), attrs.items()): + plug = "{0}.{1}".format(cam, attr) + unlock(plug) + cmds.setAttr(plug, value) self.log.info("Performing extraction..") cmds.select(cmds.ls(members, dag=True, diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index 246b2cc0f19..233a0b60c2d 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -1,4 +1,7 @@ import os +import glob +import contextlib + import clique import capture @@ -48,8 +51,32 @@ def process(self, instance): ['override_viewport_options'] ) preset = lib.load_capture_preset(data=self.capture_preset) - + # Grab capture presets from the project settings + capture_presets = self.capture_preset + # Set resolution variables from capture presets + width_preset = capture_presets["Resolution"]["width"] + height_preset = capture_presets["Resolution"]["height"] + # Set resolution variables from asset values + asset_data = instance.data["assetEntity"]["data"] + asset_width = asset_data.get("resolutionWidth") + asset_height = asset_data.get("resolutionHeight") + review_instance_width = instance.data.get("review_width") + review_instance_height = instance.data.get("review_height") preset['camera'] = camera + + # Tests if project resolution is set, + # if it is a value other than zero, that value is + # used, if not then the asset resolution is + # used + if review_instance_width and review_instance_height: + preset['width'] = review_instance_width + preset['height'] = review_instance_height + elif width_preset and height_preset: + preset['width'] = width_preset + preset['height'] = height_preset + elif asset_width and asset_height: + preset['width'] = asset_width + preset['height'] = asset_height preset['start_frame'] = start preset['end_frame'] = end camera_option = preset.get("camera_option", {}) diff --git a/openpype/hosts/maya/plugins/publish/extract_pointcache.py b/openpype/hosts/maya/plugins/publish/extract_pointcache.py index 7aa3aaee2a0..775b5e99396 100644 --- a/openpype/hosts/maya/plugins/publish/extract_pointcache.py +++ b/openpype/hosts/maya/plugins/publish/extract_pointcache.py @@ -7,7 +7,7 @@ extract_alembic, suspended_refresh, maintained_selection, - iter_visible_in_frame_range + 
iter_visible_nodes_in_range ) @@ -86,7 +86,7 @@ def process(self, instance): # flag does not filter out those that are only hidden on some # frames as it counts "animated" or "connected" visibilities as # if it's always visible. - nodes = list(iter_visible_in_frame_range(nodes, + nodes = list(iter_visible_nodes_in_range(nodes, start=start, end=end)) diff --git a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py index 20a226796a4..4f28aa167cb 100644 --- a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py @@ -58,7 +58,29 @@ def process(self, instance): "overscan": 1.0, "depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)), } - + capture_presets = capture_preset + # Set resolution variables from capture presets + width_preset = capture_presets["Resolution"]["width"] + height_preset = capture_presets["Resolution"]["height"] + # Set resolution variables from asset values + asset_data = instance.data["assetEntity"]["data"] + asset_width = asset_data.get("resolutionWidth") + asset_height = asset_data.get("resolutionHeight") + review_instance_width = instance.data.get("review_width") + review_instance_height = instance.data.get("review_height") + # Tests if project resolution is set, + # if it is a value other than zero, that value is + # used, if not then the asset resolution is + # used + if review_instance_width and review_instance_height: + preset['width'] = review_instance_width + preset['height'] = review_instance_height + elif width_preset and height_preset: + preset['width'] = width_preset + preset['height'] = height_preset + elif asset_width and asset_height: + preset['width'] = asset_width + preset['height'] = asset_height stagingDir = self.staging_dir(instance) filename = "{0}".format(instance.name) path = os.path.join(stagingDir, filename) diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py index 87712a4cea7..5f6faddbe7a 100644 --- a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py @@ -51,18 +51,7 @@ def get_invalid(cls, instance): raise RuntimeError("No cameras found in empty instance.") if not cls.validate_shapes: - cls.log.info("Not validating shapes in the content.") - - for member in members: - parents = cmds.ls(member, long=True)[0].split("|")[1:-1] - parents_long_named = [ - "|".join(parents[:i]) for i in range(1, 1 + len(parents)) - ] - if cameras[0] in parents_long_named: - cls.log.error( - "{} is parented under camera {}".format( - member, cameras[0])) - invalid.extend(member) + cls.log.info("not validating shapes in the content") return invalid # non-camera shapes diff --git a/openpype/hosts/maya/plugins/publish/validate_frame_range.py b/openpype/hosts/maya/plugins/publish/validate_frame_range.py index 98b5b4d79b2..c51766379e3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_frame_range.py +++ b/openpype/hosts/maya/plugins/publish/validate_frame_range.py @@ -27,6 +27,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin): "yeticache"] optional = True actions = [openpype.api.RepairAction] + exclude_families = [] def process(self, instance): context = instance.context @@ -56,7 +57,9 @@ def process(self, instance): # compare with data on instance errors = [] - + if [ef for ef in self.exclude_families + if instance.data["family"] in ef]: + return 
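Both `extract_playblast.py` and `extract_thumbnail.py` above now repeat the same three-way resolution fallback: a per-instance review override wins, then the project capture preset, then the asset's own resolution. A hedged sketch of that priority order as a single helper (the helper name and argument shapes are illustrative only, not part of this changeset):

```python
def resolve_review_resolution(instance_data, capture_preset, asset_data):
    """Pick the review resolution in the order the extractors above use.

    Returns (None, None) when nothing is set, so that the capture preset
    defaults can apply.
    """
    candidates = (
        # Instance values filled by the review creator
        (instance_data.get("review_width"),
         instance_data.get("review_height")),
        # Project-wide capture preset resolution
        (capture_preset["Resolution"]["width"],
         capture_preset["Resolution"]["height"]),
        # Resolution stored on the asset entity
        (asset_data.get("resolutionWidth"),
         asset_data.get("resolutionHeight")),
    )
    for width, height in candidates:
        # Zero or missing values mean "not set" and fall through
        if width and height:
            return width, height
    return None, None
```

With such a helper, both extractors would reduce to one call plus a `None` check instead of the duplicated `if`/`elif` chains.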
if(inst_start != frame_start_handle): errors.append("Instance start frame [ {} ] doesn't " "match the one set on instance [ {} ]: " diff --git a/openpype/hosts/maya/plugins/publish/validate_visible_only.py b/openpype/hosts/maya/plugins/publish/validate_visible_only.py index 2a10171113a..59a7f976ab7 100644 --- a/openpype/hosts/maya/plugins/publish/validate_visible_only.py +++ b/openpype/hosts/maya/plugins/publish/validate_visible_only.py @@ -1,7 +1,7 @@ import pyblish.api import openpype.api -from openpype.hosts.maya.api.lib import iter_visible_in_frame_range +from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range import openpype.hosts.maya.api.action @@ -40,7 +40,7 @@ def get_invalid(cls, instance): nodes = instance[:] start, end = cls.get_frame_range(instance) - if not any(iter_visible_in_frame_range(nodes, start, end)): + if not any(iter_visible_nodes_in_range(nodes, start, end)): # Return the nodes we have considered so the user can identify # them with the select invalid action return nodes diff --git a/openpype/hosts/maya/startup/userSetup.py b/openpype/hosts/maya/startup/userSetup.py index a3ab483add1..10e68c2ddbe 100644 --- a/openpype/hosts/maya/startup/userSetup.py +++ b/openpype/hosts/maya/startup/userSetup.py @@ -1,10 +1,11 @@ import os from openpype.api import get_project_settings from openpype.pipeline import install_host -from openpype.hosts.maya import api +from openpype.hosts.maya.api import MayaHost from maya import cmds -install_host(api) +host = MayaHost() +install_host(host) print("starting OpenPype usersetup") diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 45a1e727039..f565ec85468 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -20,21 +20,23 @@ ) from openpype.api import ( Logger, - Anatomy, BuildWorkfile, get_version_from_path, - get_anatomy_settings, get_workdir_data, get_asset, get_current_project_settings, ) from openpype.tools.utils import host_tools from openpype.lib.path_tools import HostDirmap -from openpype.settings import get_project_settings +from openpype.settings import ( + get_project_settings, + get_anatomy_settings, +) from openpype.modules import ModulesManager from openpype.pipeline import ( discover_legacy_creator_plugins, legacy_io, + Anatomy, ) from . 
import gizmo_menu
diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py
index 5ea7c352b97..fc16e189fbd 100644
--- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py
+++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py
@@ -104,7 +104,10 @@ def process(self, instance):
                 self, instance, o_name, o_data["extension"],
                 multiple_presets)
 
-            if "render.farm" in families:
+            if (
+                "render.farm" in families or
+                "prerender.farm" in families
+            ):
                 if "review" in instance.data["families"]:
                     instance.data["families"].remove("review")
diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
index e0c4bdb9535..6d930d358d1 100644
--- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
+++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
@@ -4,6 +4,7 @@
 import copy
 
 import pyblish.api
+import six
 
 import openpype
 from openpype.hosts.nuke.api import (
@@ -12,7 +13,6 @@
     get_view_process_node
 )
 
-
 class ExtractSlateFrame(openpype.api.Extractor):
     """Extracts movie and thumbnail with baked in luts
 
@@ -236,6 +236,48 @@ def _render_slate_to_sequence(self, instance):
             int(slate_first_frame)
         )
 
+        # Add file to representation files
+        # - get write node
+        write_node = instance.data["writeNode"]
+        # - evaluate filepaths for first frame and slate frame
+        first_filename = os.path.basename(
+            write_node["file"].evaluate(first_frame))
+        slate_filename = os.path.basename(
+            write_node["file"].evaluate(slate_first_frame))
+
+        # Find matching representation based on first filename
+        matching_repre = None
+        is_sequence = None
+        for repre in instance.data["representations"]:
+            files = repre["files"]
+            if (
+                not isinstance(files, six.string_types)
+                and first_filename in files
+            ):
+                matching_repre = repre
+                is_sequence = True
+                break
+
+            elif files == first_filename:
+                matching_repre = repre
+                is_sequence = False
+                break
+
+        if not matching_repre:
+            self.log.info((
+                "Matching representation was not found."
+                " Representation files were not filled with slate."
+ )) + return + + # Add frame to matching representation files + if not is_sequence: + matching_repre["files"] = [first_filename, slate_filename] + elif slate_filename not in matching_repre["files"]: + matching_repre["files"].insert(0, slate_filename) + + self.log.warning("Added slate frame to representation files") + def add_comment_slate_node(self, instance, node): comment = instance.context.data.get("comment") diff --git a/openpype/hosts/nuke/plugins/publish/precollect_writes.py b/openpype/hosts/nuke/plugins/publish/precollect_writes.py index 049958bd07d..a97f34b3704 100644 --- a/openpype/hosts/nuke/plugins/publish/precollect_writes.py +++ b/openpype/hosts/nuke/plugins/publish/precollect_writes.py @@ -35,6 +35,7 @@ def process(self, instance): if node is None: return + instance.data["writeNode"] = node self.log.debug("checking instance: {}".format(instance)) # Determine defined file type diff --git a/openpype/hosts/resolve/api/lib.py b/openpype/hosts/resolve/api/lib.py index 22f83c6eedc..c4717bd370a 100644 --- a/openpype/hosts/resolve/api/lib.py +++ b/openpype/hosts/resolve/api/lib.py @@ -4,7 +4,7 @@ import os import contextlib from opentimelineio import opentime -import openpype +from openpype.pipeline.editorial import is_overlapping_otio_ranges from ..otio import davinci_export as otio_export @@ -824,7 +824,7 @@ def get_otio_clip_instance_data(otio_timeline, timeline_item_data): continue if otio_clip.name not in timeline_item.GetName(): continue - if openpype.lib.is_overlapping_otio_ranges( + if is_overlapping_otio_ranges( parent_range, timeline_range, strict=True): # add pypedata marker to otio_clip metadata diff --git a/openpype/hosts/resolve/plugins/load/load_clip.py b/openpype/hosts/resolve/plugins/load/load_clip.py index cf88b14e812..86dd6850e7f 100644 --- a/openpype/hosts/resolve/plugins/load/load_clip.py +++ b/openpype/hosts/resolve/plugins/load/load_clip.py @@ -1,6 +1,10 @@ from copy import deepcopy from importlib import reload +from openpype.client import ( + get_version_by_id, + get_last_version_by_subset_id, +) from openpype.hosts import resolve from openpype.pipeline import ( get_representation_path, @@ -96,10 +100,8 @@ def update(self, container, representation): namespace = container['namespace'] timeline_item_data = resolve.get_pype_timeline_item_by_name(namespace) timeline_item = timeline_item_data["clip"]["item"] - version = legacy_io.find_one({ - "type": "version", - "_id": representation["parent"] - }) + project_name = legacy_io.active_project() + version = get_version_by_id(project_name, representation["parent"]) version_data = version.get("data", {}) version_name = version.get("name", None) colorspace = version_data.get("colorspace", None) @@ -138,19 +140,22 @@ def update(self, container, representation): @classmethod def set_item_color(cls, timeline_item, version): - # define version name version_name = version.get("name", None) # get all versions in list - versions = legacy_io.find({ - "type": "version", - "parent": version["parent"] - }).distinct('name') - - max_version = max(versions) + project_name = legacy_io.active_project() + last_version_doc = get_last_version_by_subset_id( + project_name, + version["parent"], + fields=["name"] + ) + if last_version_doc: + last_version = last_version_doc["name"] + else: + last_version = None # set clip colour - if version_name == max_version: + if version_name == last_version: timeline_item.SetClipColor(cls.clip_color_last) else: timeline_item.SetClipColor(cls.clip_color) diff --git 
a/openpype/hosts/tvpaint/plugins/load/load_workfile.py b/openpype/hosts/tvpaint/plugins/load/load_workfile.py
index 462f12abf06..c6dc765a27b 100644
--- a/openpype/hosts/tvpaint/plugins/load/load_workfile.py
+++ b/openpype/hosts/tvpaint/plugins/load/load_workfile.py
@@ -10,8 +10,8 @@
 from openpype.pipeline import (
     registered_host,
     legacy_io,
+    Anatomy,
 )
-from openpype.api import Anatomy
 from openpype.hosts.tvpaint.api import lib, pipeline, plugin
 
 
diff --git a/openpype/hosts/unreal/api/rendering.py b/openpype/hosts/unreal/api/rendering.py
index b2732506fce..29e4747f6ec 100644
--- a/openpype/hosts/unreal/api/rendering.py
+++ b/openpype/hosts/unreal/api/rendering.py
@@ -2,7 +2,7 @@
 
 import unreal
 
-from openpype.api import Anatomy
+from openpype.pipeline import Anatomy
 from openpype.hosts.unreal.api import pipeline
 
 
diff --git a/openpype/hosts/unreal/plugins/create/create_render.py b/openpype/hosts/unreal/plugins/create/create_render.py
index a3e125a94e5..950799cc108 100644
--- a/openpype/hosts/unreal/plugins/create/create_render.py
+++ b/openpype/hosts/unreal/plugins/create/create_render.py
@@ -1,6 +1,5 @@
 import unreal
 
-from openpype.pipeline import legacy_io
 from openpype.hosts.unreal.api import pipeline
 from openpype.hosts.unreal.api.plugin import Creator
 
diff --git a/openpype/hosts/unreal/plugins/load/load_camera.py b/openpype/hosts/unreal/plugins/load/load_camera.py
index e93be486b06..ca6b0ce7368 100644
--- a/openpype/hosts/unreal/plugins/load/load_camera.py
+++ b/openpype/hosts/unreal/plugins/load/load_camera.py
@@ -6,7 +6,7 @@
 from unreal import EditorAssetLibrary
 from unreal import EditorLevelLibrary
 from unreal import EditorLevelUtils
-
+from openpype.client import get_assets, get_asset_by_name
 from openpype.pipeline import (
     AVALON_CONTAINER_ID,
     legacy_io,
@@ -24,14 +24,6 @@ class CameraLoader(plugin.Loader):
     icon = "cube"
     color = "orange"
 
-    def _get_data(self, asset_name):
-        asset_doc = legacy_io.find_one({
-            "type": "asset",
-            "name": asset_name
-        })
-
-        return asset_doc.get("data")
-
     def _set_sequence_hierarchy(
         self, seq_i, seq_j, min_frame_j, max_frame_j
     ):
@@ -177,6 +169,19 @@ def load(self, context, name, namespace, data):
         EditorLevelLibrary.save_all_dirty_levels()
         EditorLevelLibrary.load_level(level)
 
+        project_name = legacy_io.active_project()
+        # TODO refactor
+        # - Creation of hierarchy should be a function in unreal integration
+        # - it's used in multiple loaders but must not be loader's logic
+        # - hard to say what the purpose of the loop is
+        # - variables do not match their meaning
+        # - why is scene stored to sequences?
+        # - asset documents vs. elements
+        # - cleanup variable names in whole function
+        # - e.g. 'asset', 'asset_name', 'asset_data', 'asset_doc'
+        # - really inefficient queries of asset documents
+        # - existing asset in scene is considered as "with correct values"
+        # - variable 'elements' is modified during its loop
         # Get all the sequences in the hierarchy. It will create them, if
         # they don't exist.
sequences = [] @@ -201,26 +206,30 @@ def load(self, context, name, namespace, data): factory=unreal.LevelSequenceFactoryNew() ) - asset_data = legacy_io.find_one({ - "type": "asset", - "name": h.split('/')[-1] - }) - - id = asset_data.get('_id') + asset_data = get_asset_by_name( + project_name, + h.split('/')[-1], + fields=["_id", "data.fps"] + ) start_frames = [] end_frames = [] - elements = list( - legacy_io.find({"type": "asset", "data.visualParent": id})) + elements = list(get_assets( + project_name, + parent_ids=[asset_data["_id"]], + fields=["_id", "data.clipIn", "data.clipOut"] + )) + for e in elements: start_frames.append(e.get('data').get('clipIn')) end_frames.append(e.get('data').get('clipOut')) - elements.extend(legacy_io.find({ - "type": "asset", - "data.visualParent": e.get('_id') - })) + elements.extend(get_assets( + project_name, + parent_ids=[e["_id"]], + fields=["_id", "data.clipIn", "data.clipOut"] + )) min_frame = min(start_frames) max_frame = max(end_frames) @@ -256,7 +265,7 @@ def load(self, context, name, namespace, data): sequences[i], sequences[i + 1], frame_ranges[i + 1][0], frame_ranges[i + 1][1]) - data = self._get_data(asset) + data = get_asset_by_name(project_name, asset)["data"] cam_seq.set_display_rate( unreal.FrameRate(data.get("fps"), 1.0)) cam_seq.set_playback_start(0) diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py index c65cd25ac83..3f16a68ead6 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout.py +++ b/openpype/hosts/unreal/plugins/load/load_layout.py @@ -1,6 +1,5 @@ # -*- coding: utf-8 -*- """Loader for layouts.""" -import os import json from pathlib import Path @@ -12,6 +11,7 @@ from unreal import FBXImportType from unreal import MathLibrary as umath +from openpype.client import get_asset_by_name, get_assets from openpype.pipeline import ( discover_loader_plugins, loaders_from_representation, @@ -88,15 +88,6 @@ def _get_abc_loader(loaders, family): return None - @staticmethod - def _get_data(asset_name): - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) - - return asset_doc.get("data") - @staticmethod def _set_sequence_hierarchy( seq_i, seq_j, max_frame_i, min_frame_j, max_frame_j, map_paths @@ -364,26 +355,30 @@ def _generate_sequence(self, h, h_dir): factory=unreal.LevelSequenceFactoryNew() ) - asset_data = legacy_io.find_one({ - "type": "asset", - "name": h_dir.split('/')[-1] - }) - - id = asset_data.get('_id') + project_name = legacy_io.active_project() + asset_data = get_asset_by_name( + project_name, + h_dir.split('/')[-1], + fields=["_id", "data.fps"] + ) start_frames = [] end_frames = [] - elements = list( - legacy_io.find({"type": "asset", "data.visualParent": id})) + elements = list(get_assets( + project_name, + parent_ids=[asset_data["_id"]], + fields=["_id", "data.clipIn", "data.clipOut"] + )) for e in elements: start_frames.append(e.get('data').get('clipIn')) end_frames.append(e.get('data').get('clipOut')) - elements.extend(legacy_io.find({ - "type": "asset", - "data.visualParent": e.get('_id') - })) + elements.extend(get_assets( + project_name, + parent_ids=[e["_id"]], + fields=["_id", "data.clipIn", "data.clipOut"] + )) min_frame = min(start_frames) max_frame = max(end_frames) @@ -659,7 +654,8 @@ def load(self, context, name, namespace, options): frame_ranges[i + 1][0], frame_ranges[i + 1][1], [level]) - data = self._get_data(asset) + project_name = legacy_io.active_project() + data = get_asset_by_name(project_name, asset)["data"] 
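The Unreal loader hunks above swap `legacy_io.find_one`/`find` for `openpype.client` query functions, and they walk the clip hierarchy by extending `elements` while iterating over it, which is effectively a breadth-first descent. A hedged equivalent of that traversal using the same query calls the diff introduces (the helper and the asset name "sh010" are made up for illustration):

```python
import collections

from openpype.client import get_asset_by_name, get_assets
from openpype.pipeline import legacy_io


def iter_clip_descendants(project_name, root_id):
    """Breadth-first walk over asset children (hypothetical helper).

    Yields the same documents the extend-while-iterating loops above
    collect, one level of the hierarchy at a time.
    """
    queue = collections.deque([root_id])
    while queue:
        parent_id = queue.popleft()
        for child in get_assets(
            project_name,
            parent_ids=[parent_id],
            fields=["_id", "data.clipIn", "data.clipOut"]
        ):
            yield child
            queue.append(child["_id"])


project_name = legacy_io.active_project()
# "sh010" is a made-up asset name for illustration.
asset_doc = get_asset_by_name(
    project_name, "sh010", fields=["_id", "data.fps"]
)
clip_ins = [
    child.get("data", {}).get("clipIn")
    for child in iter_clip_descendants(project_name, asset_doc["_id"])
]
```

An explicit queue avoids mutating a list while looping over it, which is one of the cleanups the TODO above asks for.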
shot.set_display_rate( unreal.FrameRate(data.get("fps"), 1.0)) shot.set_playback_start(0) diff --git a/openpype/hosts/unreal/plugins/publish/collect_render_instances.py b/openpype/hosts/unreal/plugins/publish/collect_render_instances.py index 9fb45ea7a7e..cb28f4bf60f 100644 --- a/openpype/hosts/unreal/plugins/publish/collect_render_instances.py +++ b/openpype/hosts/unreal/plugins/publish/collect_render_instances.py @@ -3,7 +3,7 @@ import unreal -from openpype.api import Anatomy +from openpype.pipeline import Anatomy from openpype.hosts.unreal.api import pipeline import pyblish.api diff --git a/openpype/hosts/unreal/plugins/publish/extract_layout.py b/openpype/hosts/unreal/plugins/publish/extract_layout.py index 87e6693a97b..8924df36a7f 100644 --- a/openpype/hosts/unreal/plugins/publish/extract_layout.py +++ b/openpype/hosts/unreal/plugins/publish/extract_layout.py @@ -9,6 +9,7 @@ from unreal import EditorLevelLibrary as ell from unreal import EditorAssetLibrary as eal +from openpype.client import get_representation_by_name import openpype.api from openpype.pipeline import legacy_io @@ -34,6 +35,7 @@ def process(self, instance): "Wrong level loaded" json_data = [] + project_name = legacy_io.active_project() for member in instance[:]: actor = ell.get_actor_reference(member) @@ -57,17 +59,13 @@ def process(self, instance): self.log.error("AssetContainer not found.") return - parent = eal.get_metadata_tag(asset_container, "parent") + parent_id = eal.get_metadata_tag(asset_container, "parent") family = eal.get_metadata_tag(asset_container, "family") - self.log.info("Parent: {}".format(parent)) - blend = legacy_io.find_one( - { - "type": "representation", - "parent": ObjectId(parent), - "name": "blend" - }, - projection={"_id": True}) + self.log.info("Parent: {}".format(parent_id)) + blend = get_representation_by_name( + project_name, "blend", parent_id, fields=["_id"] + ) blend_id = blend["_id"] json_element = {} diff --git a/openpype/lib/anatomy.py b/openpype/lib/anatomy.py index 3fbc05ee881..6d339f058fd 100644 --- a/openpype/lib/anatomy.py +++ b/openpype/lib/anatomy.py @@ -1,1260 +1,38 @@ -import os -import re -import copy -import platform -import collections -import numbers +"""Code related to project Anatomy was moved +to 'openpype.pipeline.anatomy' please change your imports as soon as +possible. File will be probably removed in OpenPype 3.14.* +""" -from openpype.settings.lib import ( - get_default_anatomy_settings, - get_anatomy_settings -) -from .path_templates import ( - TemplateUnsolved, - TemplateResult, - TemplatesDict, - FormatObject, -) -from .log import PypeLogger +import warnings +import functools -log = PypeLogger().get_logger(__name__) -try: - StringType = basestring -except NameError: - StringType = str +class AnatomyDeprecatedWarning(DeprecationWarning): + pass -class ProjectNotSet(Exception): - """Exception raised when is created Anatomy without project name.""" +def anatomy_deprecated(func): + """Mark functions as deprecated. - -class RootCombinationError(Exception): - """This exception is raised when templates has combined root types.""" - - def __init__(self, roots): - joined_roots = ", ".join( - ["\"{}\"".format(_root) for _root in roots] - ) - # TODO better error message - msg = ( - "Combination of root with and" - " without root name in AnatomyTemplates. {}" - ).format(joined_roots) - - super(RootCombinationError, self).__init__(msg) - - -class Anatomy: - """Anatomy module helps to keep project settings. 
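The remainder of this changeset turns `openpype/lib/anatomy.py` into a deprecation shim; the class itself now lives in the pipeline package. For downstream code the migration is a one-line import change, as the host plugin diffs above already show (the project name here is hypothetical):

```python
# Before (deprecated, the shim notes removal around OpenPype 3.14):
# from openpype.api import Anatomy

# After the move to the pipeline package:
from openpype.pipeline import Anatomy

anatomy = Anatomy("my_project")  # hypothetical project name
```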
- - Wraps key project specifications, AnatomyTemplates and Roots. - - Args: - project_name (str): Project name to look on overrides. - """ - - root_key_regex = re.compile(r"{(root?[^}]+)}") - root_name_regex = re.compile(r"root\[([^]]+)\]") - - def __init__(self, project_name=None, site_name=None): - if not project_name: - project_name = os.environ.get("AVALON_PROJECT") - - if not project_name: - raise ProjectNotSet(( - "Implementation bug: Project name is not set. Anatomy requires" - " to load data for specific project." - )) - - self.project_name = project_name - - self._data = self._prepare_anatomy_data( - get_anatomy_settings(project_name, site_name) - ) - self._site_name = site_name - self._templates_obj = AnatomyTemplates(self) - self._roots_obj = Roots(self) - - # Anatomy used as dictionary - # - implemented only getters returning copy - def __getitem__(self, key): - return copy.deepcopy(self._data[key]) - - def get(self, key, default=None): - return copy.deepcopy(self._data).get(key, default) - - def keys(self): - return copy.deepcopy(self._data).keys() - - def values(self): - return copy.deepcopy(self._data).values() - - def items(self): - return copy.deepcopy(self._data).items() - - @staticmethod - def default_data(): - """Default project anatomy data. - - Always return fresh loaded data. May be used as data for new project. - - Not used inside Anatomy itself. - """ - return get_default_anatomy_settings(clear_metadata=False) - - @staticmethod - def _prepare_anatomy_data(anatomy_data): - """Prepare anatomy data for further processing. - - Method added to replace `{task}` with `{task[name]}` in templates. - """ - templates_data = anatomy_data.get("templates") - if templates_data: - # Replace `{task}` with `{task[name]}` in templates - value_queue = collections.deque() - value_queue.append(templates_data) - while value_queue: - item = value_queue.popleft() - if not isinstance(item, dict): - continue - - for key in tuple(item.keys()): - value = item[key] - if isinstance(value, dict): - value_queue.append(value) - - elif isinstance(value, StringType): - item[key] = value.replace("{task}", "{task[name]}") - return anatomy_data - - def reset(self): - """Reset values of cached data in templates and roots objects.""" - self._data = self._prepare_anatomy_data( - get_anatomy_settings(self.project_name, self._site_name) - ) - self.templates_obj.reset() - self.roots_obj.reset() - - @property - def templates(self): - """Wrap property `templates` of Anatomy's AnatomyTemplates instance.""" - return self._templates_obj.templates - - @property - def templates_obj(self): - """Return `AnatomyTemplates` object of current Anatomy instance.""" - return self._templates_obj - - def format(self, *args, **kwargs): - """Wrap `format` method of Anatomy's `templates_obj`.""" - return self._templates_obj.format(*args, **kwargs) - - def format_all(self, *args, **kwargs): - """Wrap `format_all` method of Anatomy's `templates_obj`.""" - return self._templates_obj.format_all(*args, **kwargs) - - @property - def roots(self): - """Wrap `roots` property of Anatomy's `roots_obj`.""" - return self._roots_obj.roots - - @property - def roots_obj(self): - """Return `Roots` object of current Anatomy instance.""" - return self._roots_obj - - def root_environments(self): - """Return OPENPYPE_ROOT_* environments for current project in dict.""" - return self._roots_obj.root_environments() - - def root_environmets_fill_data(self, template=None): - """Environment variable values in dictionary for rootless path. 
- - Args: - template (str): Template for environment variable key fill. - By default is set to `"${}"`. - """ - return self.roots_obj.root_environmets_fill_data(template) - - def find_root_template_from_path(self, *args, **kwargs): - """Wrapper for Roots `find_root_template_from_path`.""" - return self.roots_obj.find_root_template_from_path(*args, **kwargs) - - def path_remapper(self, *args, **kwargs): - """Wrapper for Roots `path_remapper`.""" - return self.roots_obj.path_remapper(*args, **kwargs) - - def all_root_paths(self): - """Wrapper for Roots `all_root_paths`.""" - return self.roots_obj.all_root_paths() - - def set_root_environments(self): - """Set OPENPYPE_ROOT_* environments for current project.""" - self._roots_obj.set_root_environments() - - def root_names(self): - """Return root names for current project.""" - return self.root_names_from_templates(self.templates) - - def _root_keys_from_templates(self, data): - """Extract root key from templates in data. - - Args: - data (dict): Data that may contain templates as string. - - Return: - set: Set of all root names from templates as strings. - - Output example: `{"root[work]", "root[publish]"}` - """ - - output = set() - if isinstance(data, dict): - for value in data.values(): - for root in self._root_keys_from_templates(value): - output.add(root) - - elif isinstance(data, str): - for group in re.findall(self.root_key_regex, data): - output.add(group) - - return output - - def root_value_for_template(self, template): - """Returns value of root key from template.""" - root_templates = [] - for group in re.findall(self.root_key_regex, template): - root_templates.append("{" + group + "}") - - if not root_templates: - return None - - return root_templates[0].format(**{"root": self.roots}) - - def root_names_from_templates(self, templates): - """Extract root names form anatomy templates. - - Returns None if values in templates contain only "{root}". - Empty list is returned if there is no "root" in templates. - Else returns all root names from templates in list. - - RootCombinationError is raised when templates contain both root types, - basic "{root}" and with root name specification "{root[work]}". - - Args: - templates (dict): Anatomy templates where roots are not filled. - - Return: - list/None: List of all root names from templates as strings when - multiroot setup is used, otherwise None is returned. - """ - roots = list(self._root_keys_from_templates(templates)) - # Return empty list if no roots found in templates - if not roots: - return roots - - # Raise exception when root keys have roots with and without root name. - # Invalid output example: ["root", "root[project]", "root[render]"] - if len(roots) > 1 and "root" in roots: - raise RootCombinationError(roots) - - # Return None if "root" without root name in templates - if len(roots) == 1 and roots[0] == "root": - return None - - names = set() - for root in roots: - for group in re.findall(self.root_name_regex, root): - names.add(group) - return list(names) - - def fill_root(self, template_path): - """Fill template path where is only "root" key unfilled. - - Args: - template_path (str): Path with "root" key in. - Example path: "{root}/projects/MyProject/Shot01/Lighting/..." - - Return: - str: formatted path - """ - # NOTE does not care if there are different keys than "root" - return template_path.format(**{"root": self.roots}) - - @classmethod - def fill_root_with_path(cls, rootless_path, root_path): - """Fill path without filled "root" key with passed path. 
- - This is helper to fill root with different directory path than anatomy - has defined no matter if is single or multiroot. - - Output path is same as input path if `rootless_path` does not contain - unfilled root key. - - Args: - rootless_path (str): Path without filled "root" key. Example: - "{root[work]}/MyProject/..." - root_path (str): What should replace root key in `rootless_path`. - - Returns: - str: Path with filled root. - """ - output = str(rootless_path) - for group in re.findall(cls.root_key_regex, rootless_path): - replacement = "{" + group + "}" - output = output.replace(replacement, root_path) - - return output - - def replace_root_with_env_key(self, filepath, template=None): - """Replace root of path with environment key. - - # Example: - ## Project with roots: - ``` - { - "nas": { - "windows": P:/projects", - ... - } - ... - } - ``` - - ## Entered filepath - "P:/projects/project/asset/task/animation_v001.ma" - - ## Entered template - "<{}>" - - ## Output - "/project/asset/task/animation_v001.ma" - - Args: - filepath (str): Full file path where root should be replaced. - template (str): Optional template for environment key. Must - have one index format key. - Default value if not entered: "${}" - - Returns: - str: Path where root is replaced with environment root key. - - Raise: - ValueError: When project's roots were not found in entered path. - """ - success, rootless_path = self.find_root_template_from_path(filepath) - if not success: - raise ValueError( - "{}: Project's roots were not found in path: {}".format( - self.project_name, filepath - ) - ) - - data = self.root_environmets_fill_data(template) - return rootless_path.format(**data) - - -class AnatomyTemplateUnsolved(TemplateUnsolved): - """Exception for unsolved template when strict is set to True.""" - - msg = "Anatomy template \"{0}\" is unsolved.{1}{2}" - - -class AnatomyTemplateResult(TemplateResult): - rootless = None - - def __new__(cls, result, rootless_path): - new_obj = super(AnatomyTemplateResult, cls).__new__( - cls, - str(result), - result.template, - result.solved, - result.used_values, - result.missing_keys, - result.invalid_types - ) - new_obj.rootless = rootless_path - return new_obj - - def validate(self): - if not self.solved: - raise AnatomyTemplateUnsolved( - self.template, - self.missing_keys, - self.invalid_types - ) - - -class AnatomyTemplates(TemplatesDict): - inner_key_pattern = re.compile(r"(\{@.*?[^{}0]*\})") - inner_key_name_pattern = re.compile(r"\{@(.*?[^{}0]*)\}") - - def __init__(self, anatomy): - super(AnatomyTemplates, self).__init__() - self.anatomy = anatomy - self.loaded_project = None - - def __getitem__(self, key): - return self.templates[key] - - def get(self, key, default=None): - return self.templates.get(key, default) - - def reset(self): - self._raw_templates = None - self._templates = None - self._objected_templates = None - - @property - def project_name(self): - return self.anatomy.project_name - - @property - def roots(self): - return self.anatomy.roots - - @property - def templates(self): - self._validate_discovery() - return self._templates - - @property - def objected_templates(self): - self._validate_discovery() - return self._objected_templates - - def _validate_discovery(self): - if self.project_name != self.loaded_project: - self.reset() - - if self._templates is None: - self._discover() - self.loaded_project = self.project_name - - def _format_value(self, value, data): - if isinstance(value, RootItem): - return self._solve_dict(value, data) - - result = 
super(AnatomyTemplates, self)._format_value(value, data) - if isinstance(result, TemplateResult): - rootless_path = self._rootless_path(result, data) - result = AnatomyTemplateResult(result, rootless_path) - return result - - def set_templates(self, templates): - if not templates: - self.reset() - return - - self._raw_templates = copy.deepcopy(templates) - templates = copy.deepcopy(templates) - v_queue = collections.deque() - v_queue.append(templates) - while v_queue: - item = v_queue.popleft() - if not isinstance(item, dict): - continue - - for key in tuple(item.keys()): - value = item[key] - if isinstance(value, dict): - v_queue.append(value) - - elif ( - isinstance(value, StringType) - and "{task}" in value - ): - item[key] = value.replace("{task}", "{task[name]}") - - solved_templates = self.solve_template_inner_links(templates) - self._templates = solved_templates - self._objected_templates = self.create_ojected_templates( - solved_templates - ) - - def default_templates(self): - """Return default templates data with solved inner keys.""" - return self.solve_template_inner_links( - self.anatomy["templates"] - ) - - def _discover(self): - """ Loads anatomy templates from yaml. - Default templates are loaded if project is not set or project does - not have set it's own. - TODO: create templates if not exist. - - Returns: - TemplatesResultDict: Contain templates data for current project of - default templates. - """ - - if self.project_name is None: - # QUESTION create project specific if not found? - raise AssertionError(( - "Project \"{0}\" does not have his own templates." - " Trying to use default." - ).format(self.project_name)) - - self.set_templates(self.anatomy["templates"]) - - @classmethod - def replace_inner_keys(cls, matches, value, key_values, key): - """Replacement of inner keys in template values.""" - for match in matches: - anatomy_sub_keys = ( - cls.inner_key_name_pattern.findall(match) - ) - if key in anatomy_sub_keys: - raise ValueError(( - "Unsolvable recursion in inner keys, " - "key: \"{}\" is in his own value." - " Can't determine source, please check Anatomy templates." - ).format(key)) - - for anatomy_sub_key in anatomy_sub_keys: - replace_value = key_values.get(anatomy_sub_key) - if replace_value is None: - raise KeyError(( - "Anatomy templates can't be filled." - " Anatomy key `{0}` has" - " invalid inner key `{1}`." - ).format(key, anatomy_sub_key)) - - valid = isinstance(replace_value, (numbers.Number, StringType)) - if not valid: - raise ValueError(( - "Anatomy templates can't be filled." - " Anatomy key `{0}` has" - " invalid inner key `{1}`" - " with value `{2}`." - ).format(key, anatomy_sub_key, str(replace_value))) - - value = value.replace(match, str(replace_value)) - - return value - - @classmethod - def prepare_inner_keys(cls, key_values): - """Check values of inner keys. - - Check if inner key exist in template group and has valid value. - It is also required to avoid infinite loop with unsolvable recursion - when first inner key's value refers to second inner key's value where - first is used. 
- """ - keys_to_solve = set(key_values.keys()) - while True: - found = False - for key in tuple(keys_to_solve): - value = key_values[key] - - if isinstance(value, StringType): - matches = cls.inner_key_pattern.findall(value) - if not matches: - keys_to_solve.remove(key) - continue - - found = True - key_values[key] = cls.replace_inner_keys( - matches, value, key_values, key - ) - continue - - elif not isinstance(value, dict): - keys_to_solve.remove(key) - continue - - subdict_found = False - for _key, _value in tuple(value.items()): - matches = cls.inner_key_pattern.findall(_value) - if not matches: - continue - - subdict_found = True - found = True - key_values[key][_key] = cls.replace_inner_keys( - matches, _value, key_values, - "{}.{}".format(key, _key) - ) - - if not subdict_found: - keys_to_solve.remove(key) - - if not found: - break - - return key_values - - @classmethod - def solve_template_inner_links(cls, templates): - """Solve templates inner keys identified by "{@*}". - - Process is split into 2 parts. - First is collecting all global keys (keys in top hierarchy where value - is not dictionary). All global keys are set for all group keys (keys - in top hierarchy where value is dictionary). Value of a key is not - overridden in group if already contain value for the key. - - In second part all keys with "at" symbol in value are replaced with - value of the key afterward "at" symbol from the group. - - Args: - templates (dict): Raw templates data. - - Example: - templates:: - key_1: "value_1", - key_2: "{@key_1}/{filling_key}" - - group_1: - key_3: "value_3/{@key_2}" - - group_2: - key_2": "value_2" - key_4": "value_4/{@key_2}" - - output:: - key_1: "value_1" - key_2: "value_1/{filling_key}" - - group_1: { - key_1: "value_1" - key_2: "value_1/{filling_key}" - key_3: "value_3/value_1/{filling_key}" - - group_2: { - key_1: "value_1" - key_2: "value_2" - key_4: "value_3/value_2" - """ - default_key_values = templates.pop("defaults", {}) - for key, value in tuple(templates.items()): - if isinstance(value, dict): - continue - default_key_values[key] = templates.pop(key) - - # Pop "others" key before before expected keys are processed - other_templates = templates.pop("others") or {} - - keys_by_subkey = {} - for sub_key, sub_value in templates.items(): - key_values = {} - key_values.update(default_key_values) - key_values.update(sub_value) - keys_by_subkey[sub_key] = cls.prepare_inner_keys(key_values) - - for sub_key, sub_value in other_templates.items(): - if sub_key in keys_by_subkey: - log.warning(( - "Key \"{}\" is duplicated in others. Skipping." 
- ).format(sub_key)) - continue - - key_values = {} - key_values.update(default_key_values) - key_values.update(sub_value) - keys_by_subkey[sub_key] = cls.prepare_inner_keys(key_values) - - default_keys_by_subkeys = cls.prepare_inner_keys(default_key_values) - - for key, value in default_keys_by_subkeys.items(): - keys_by_subkey[key] = value - - return keys_by_subkey - - def _dict_to_subkeys_list(self, subdict, pre_keys=None): - if pre_keys is None: - pre_keys = [] - output = [] - for key in subdict: - value = subdict[key] - result = list(pre_keys) - result.append(key) - if isinstance(value, dict): - for item in self._dict_to_subkeys_list(value, result): - output.append(item) - else: - output.append(result) - return output - - def _keys_to_dicts(self, key_list, value): - if not key_list: - return None - if len(key_list) == 1: - return {key_list[0]: value} - return {key_list[0]: self._keys_to_dicts(key_list[1:], value)} - - def _rootless_path(self, result, final_data): - used_values = result.used_values - missing_keys = result.missing_keys - template = result.template - invalid_types = result.invalid_types - if ( - "root" not in used_values - or "root" in missing_keys - or "{root" not in template - ): - return - - for invalid_type in invalid_types: - if "root" in invalid_type: - return - - root_keys = self._dict_to_subkeys_list({"root": used_values["root"]}) - if not root_keys: - return - - output = str(result) - for used_root_keys in root_keys: - if not used_root_keys: - continue - - used_value = used_values - root_key = None - for key in used_root_keys: - used_value = used_value[key] - if root_key is None: - root_key = key - else: - root_key += "[{}]".format(key) - - root_key = "{" + root_key + "}" - output = output.replace(str(used_value), root_key) - - return output - - def format(self, data, strict=True): - copy_data = copy.deepcopy(data) - roots = self.roots - if roots: - copy_data["root"] = roots - result = super(AnatomyTemplates, self).format(copy_data) - result.strict = strict - return result - - def format_all(self, in_data, only_keys=True): - """ Solves templates based on entered data. - - Args: - data (dict): Containing keys to be filled into template. - - Returns: - TemplatesResultDict: Output `TemplateResult` have `strict` - attribute set to False so accessing unfilled keys in templates - won't raise any exceptions. - """ - return self.format(in_data, strict=False) - - -class RootItem(FormatObject): - """Represents one item or roots. - - Holds raw data of root item specification. Raw data contain value - for each platform, but current platform value is used when object - is used for formatting of template. - - Args: - root_raw_data (dict): Dictionary containing root values by platform - names. ["windows", "linux" and "darwin"] - name (str, optional): Root name which is representing. Used with - multi root setup otherwise None value is expected. - parent_keys (list, optional): All dictionary parent keys. Values of - `parent_keys` are used for get full key which RootItem is - representing. Used for replacing root value in path with - formattable key. e.g. parent_keys == ["work"] -> {root[work]} - parent (object, optional): It is expected to be `Roots` object. - Value of `parent` won't affect code logic much. + It will result in a warning being emitted when the function is used. 
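The removed `RootItem` docstring above is the clearest description of how multi-root templates resolve, so a short worked example is useful here; all values are hypothetical:

```python
# Raw root data as described above: one value per platform.
raw_root = {
    "windows": "P:/projects",
    "linux": "/mnt/projects",
    "darwin": "/Volumes/projects",
}

# For a root named "work", full_key() yields "root[work]", so a template
template = "{root[work]}/my_project/shots/sh010/file.ma"

# formats against the current platform's cleaned value, e.g. on Linux:
print(template.format(root={"work": "/mnt/projects"}))
# -> /mnt/projects/my_project/shots/sh010/file.ma
```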
""" - def __init__( - self, root_raw_data, name=None, parent_keys=None, parent=None - ): - lowered_platform_keys = {} - for key, value in root_raw_data.items(): - lowered_platform_keys[key.lower()] = value - self.raw_data = lowered_platform_keys - self.cleaned_data = self._clean_roots(lowered_platform_keys) - self.name = name - self.parent_keys = parent_keys or [] - self.parent = parent - - self.available_platforms = list(lowered_platform_keys.keys()) - self.value = lowered_platform_keys.get(platform.system().lower()) - self.clean_value = self.clean_root(self.value) - - def __format__(self, *args, **kwargs): - return self.value.__format__(*args, **kwargs) - - def __str__(self): - return str(self.value) - - def __repr__(self): - return self.__str__() - - def __getitem__(self, key): - if isinstance(key, numbers.Number): - return self.value[key] - - additional_info = "" - if self.parent and self.parent.project_name: - additional_info += " for project \"{}\"".format( - self.parent.project_name - ) - - raise AssertionError( - "Root key \"{}\" is missing{}.".format( - key, additional_info - ) - ) - - def full_key(self): - """Full key value for dictionary formatting in template. - - Returns: - str: Return full replacement key for formatting. This helps when - multiple roots are set. In that case e.g. `"root[work]"` is - returned. - """ - if not self.name: - return "root" - - joined_parent_keys = "".join( - ["[{}]".format(key) for key in self.parent_keys] - ) - return "root{}".format(joined_parent_keys) - - def clean_path(self, path): - """Just replace backslashes with forward slashes.""" - return str(path).replace("\\", "/") - - def clean_root(self, root): - """Makes sure root value does not end with slash.""" - if root: - root = self.clean_path(root) - while root.endswith("/"): - root = root[:-1] - return root - - def _clean_roots(self, raw_data): - """Clean all values of raw root item values.""" - cleaned = {} - for key, value in raw_data.items(): - cleaned[key] = self.clean_root(value) - return cleaned - - def path_remapper(self, path, dst_platform=None, src_platform=None): - """Remap path for specific platform. - - Args: - path (str): Source path which need to be remapped. - dst_platform (str, optional): Specify destination platform - for which remapping should happen. - src_platform (str, optional): Specify source platform. This is - recommended to not use and keep unset until you really want - to use specific platform. - roots (dict/RootItem/None, optional): It is possible to remap - path with different roots then instance where method was - called has. - - Returns: - str/None: When path does not contain known root then - None is returned else returns remapped path with "{root}" - or "{root[]}". 
- """ - cleaned_path = self.clean_path(path) - if dst_platform: - dst_root_clean = self.cleaned_data.get(dst_platform) - if not dst_root_clean: - key_part = "" - full_key = self.full_key() - if full_key != "root": - key_part += "\"{}\" ".format(full_key) - - log.warning( - "Root {}miss platform \"{}\" definition.".format( - key_part, dst_platform - ) - ) - return None - - if cleaned_path.startswith(dst_root_clean): - return cleaned_path - - if src_platform: - src_root_clean = self.cleaned_data.get(src_platform) - if src_root_clean is None: - log.warning( - "Root \"{}\" miss platform \"{}\" definition.".format( - self.full_key(), src_platform - ) - ) - return None - - if not cleaned_path.startswith(src_root_clean): - return None - - subpath = cleaned_path[len(src_root_clean):] - if dst_platform: - # `dst_root_clean` is used from upper condition - return dst_root_clean + subpath - return self.clean_value + subpath - - result, template = self.find_root_template_from_path(path) - if not result: - return None - - def parent_dict(keys, value): - if not keys: - return value - - key = keys.pop(0) - return {key: parent_dict(keys, value)} - - if dst_platform: - format_value = parent_dict(list(self.parent_keys), dst_root_clean) - else: - format_value = parent_dict(list(self.parent_keys), self.value) - - return template.format(**{"root": format_value}) - - def find_root_template_from_path(self, path): - """Replaces known root value with formattable key in path. - - All platform values are checked for this replacement. - - Args: - path (str): Path where root value should be found. - - Returns: - tuple: Tuple contain 2 values: `success` (bool) and `path` (str). - When success it True then path should contain replaced root - value with formattable key. - - Example: - When input path is:: - "C:/windows/path/root/projects/my_project/file.ext" - - And raw data of item looks like:: - { - "windows": "C:/windows/path/root", - "linux": "/mount/root" - } - - Output will be:: - (True, "{root}/projects/my_project/file.ext") - - If any of raw data value wouldn't match path's root output is:: - (False, "C:/windows/path/root/projects/my_project/file.ext") - """ - result = False - output = str(path) - - root_paths = list(self.cleaned_data.values()) - mod_path = self.clean_path(path) - for root_path in root_paths: - # Skip empty paths - if not root_path: - continue - - if mod_path.startswith(root_path): - result = True - replacement = "{" + self.full_key() + "}" - output = replacement + mod_path[len(root_path):] - break - - return (result, output) - - -class Roots: - """Object which should be used for formatting "root" key in templates. - - Args: - anatomy Anatomy: Anatomy object created for a specific project. - """ - - env_prefix = "OPENPYPE_PROJECT_ROOT" - roots_filename = "roots.json" - - def __init__(self, anatomy): - self.anatomy = anatomy - self.loaded_project = None - self._roots = None - - def __format__(self, *args, **kwargs): - return self.roots.__format__(*args, **kwargs) - - def __getitem__(self, key): - return self.roots[key] - - def reset(self): - """Reset current roots value.""" - self._roots = None - - def path_remapper( - self, path, dst_platform=None, src_platform=None, roots=None - ): - """Remap path for specific platform. - - Args: - path (str): Source path which need to be remapped. - dst_platform (str, optional): Specify destination platform - for which remapping should happen. - src_platform (str, optional): Specify source platform. 
This is - recommended to not use and keep unset until you really want - to use specific platform. - roots (dict/RootItem/None, optional): It is possible to remap - path with different roots then instance where method was - called has. - - Returns: - str/None: When path does not contain known root then - None is returned else returns remapped path with "{root}" - or "{root[]}". - """ - if roots is None: - roots = self.roots - - if roots is None: - raise ValueError("Roots are not set. Can't find path.") - - if "{root" in path: - path = path.format(**{"root": roots}) - # If `dst_platform` is not specified then return else continue. - if not dst_platform: - return path - - if isinstance(roots, RootItem): - return roots.path_remapper(path, dst_platform, src_platform) - - for _root in roots.values(): - result = self.path_remapper( - path, dst_platform, src_platform, _root - ) - if result is not None: - return result - - def find_root_template_from_path(self, path, roots=None): - """Find root value in entered path and replace it with formatting key. - - Args: - path (str): Source path where root will be searched. - roots (Roots/dict, optional): It is possible to use different - roots than instance where method was triggered has. - - Returns: - tuple: Output contains tuple with bool representing success as - first value and path with or without replaced root with - formatting key as second value. - - Raises: - ValueError: When roots are not entered and can't be loaded. - """ - if roots is None: - log.debug( - "Looking for matching root in path \"{}\".".format(path) - ) - roots = self.roots - - if roots is None: - raise ValueError("Roots are not set. Can't find path.") - - if isinstance(roots, RootItem): - return roots.find_root_template_from_path(path) - - for root_name, _root in roots.items(): - success, result = self.find_root_template_from_path(path, _root) - if success: - log.info("Found match in root \"{}\".".format(root_name)) - return success, result - - log.warning("No matching root was found in current setting.") - return (False, path) - - def set_root_environments(self): - """Set root environments for current project.""" - for key, value in self.root_environments().items(): - os.environ[key] = value - - def root_environments(self): - """Use root keys to create unique keys for environment variables. - - Concatenates prefix "OPENPYPE_ROOT" with root keys to create unique - keys. - - Returns: - dict: Result is `{(str): (str)}` dicitonary where key represents - unique key concatenated by keys and value is root value of - current platform root. 
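A standalone sketch of this flattening follows; the removed docstring's own Example below shows the same idea with concrete platform values. This version assumes the nested roots were already reduced to the current platform's strings. Note that the prefix here follows `Roots.env_prefix` ("OPENPYPE_PROJECT_ROOT"), while the docstring's Example uses the shorter "OPENPYPE_ROOT" spelling:

```python
def root_environments(roots, prefix="OPENPYPE_PROJECT_ROOT"):
    """Flatten nested root values into environment-style key/value pairs."""
    output = {}
    for key, value in roots.items():
        env_key = "{}_{}".format(prefix, key.upper())
        if isinstance(value, dict):
            # Multiroot setup: recurse and extend the key per level.
            output.update(root_environments(value, env_key))
        else:
            output[env_key] = value
    return output


print(root_environments({"work": "P:/projects/work"}))
# {'OPENPYPE_PROJECT_ROOT_WORK': 'P:/projects/work'}
```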
- - Example: - With raw root values:: - "work": { - "windows": "P:/projects/work", - "linux": "/mnt/share/projects/work", - "darwin": "/darwin/path/work" - }, - "publish": { - "windows": "P:/projects/publish", - "linux": "/mnt/share/projects/publish", - "darwin": "/darwin/path/publish" - } - - Result on windows platform:: - { - "OPENPYPE_ROOT_WORK": "P:/projects/work", - "OPENPYPE_ROOT_PUBLISH": "P:/projects/publish" - } - - Short example when multiroot is not used:: - { - "OPENPYPE_ROOT": "P:/projects" - } - """ - return self._root_environments() - - def all_root_paths(self, roots=None): - """Return all paths for all roots of all platforms.""" - if roots is None: - roots = self.roots - - output = [] - if isinstance(roots, RootItem): - for value in roots.raw_data.values(): - output.append(value) - return output - - for _roots in roots.values(): - output.extend(self.all_root_paths(_roots)) - return output - - def _root_environments(self, keys=None, roots=None): - if not keys: - keys = [] - if roots is None: - roots = self.roots - - if isinstance(roots, RootItem): - key_items = [self.env_prefix] - for _key in keys: - key_items.append(_key.upper()) - - key = "_".join(key_items) - # Make sure key and value does not contain unicode - # - can happen in Python 2 hosts - return {str(key): str(roots.value)} - - output = {} - for _key, _value in roots.items(): - _keys = list(keys) - _keys.append(_key) - output.update(self._root_environments(_keys, _value)) - return output - - def root_environmets_fill_data(self, template=None): - """Environment variable values in dictionary for rootless path. - - Args: - template (str): Template for environment variable key fill. - By default is set to `"${}"`. - """ - if template is None: - template = "${}" - return self._root_environmets_fill_data(template) - - def _root_environmets_fill_data(self, template, keys=None, roots=None): - if keys is None and roots is None: - return { - "root": self._root_environmets_fill_data( - template, [], self.roots - ) - } - - if isinstance(roots, RootItem): - key_items = [Roots.env_prefix] - for _key in keys: - key_items.append(_key.upper()) - key = "_".join(key_items) - return template.format(key) - - output = {} - for key, value in roots.items(): - _keys = list(keys) - _keys.append(key) - output[key] = self._root_environmets_fill_data( - template, _keys, value - ) - return output - - @property - def project_name(self): - """Return project name which will be used for loading root values.""" - return self.anatomy.project_name - - @property - def roots(self): - """Property for filling "root" key in templates. - - This property returns roots for current project or default root values. - Warning: - Default roots value may cause issues when project use different - roots settings. That may happen when project use multiroot - templates but default roots miss their keys. - """ - if self.project_name != self.loaded_project: - self._roots = None - - if self._roots is None: - self._roots = self._discover() - self.loaded_project = self.project_name - return self._roots - - def _discover(self): - """ Loads current project's roots or default. - - Default roots are loaded if project override's does not contain roots. - - Returns: - `RootItem` or `dict` with multiple `RootItem`s when multiroot - setting is used. - """ - - return self._parse_dict(self.anatomy["roots"], parent=self) - - @staticmethod - def _parse_dict(data, key=None, parent_keys=None, parent=None): - """Parse roots raw data into RootItem or dictionary with RootItems. 
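`root_environmets_fill_data` (name as in the source) exists so templates can be formatted into rootless paths: the root portion resolves to an environment-variable placeholder instead of a concrete value. In isolation the idea is just nested `str.format` data; the placeholder and path below are made up:

```python
fill_data = {"root": {"work": "$OPENPYPE_PROJECT_ROOT_WORK"}}
path_template = "{root[work]}/my_project/shots/sh010/file.ext"

print(path_template.format(**fill_data))
# $OPENPYPE_PROJECT_ROOT_WORK/my_project/shots/sh010/file.ext
```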
- - Converting raw roots data to `RootItem` helps to handle platform keys. - This method is recursive to be able handle multiroot setup and - is static to be able to load default roots without creating new object. - - Args: - data (dict): Should contain raw roots data to be parsed. - key (str, optional): Current root key. Set by recursion. - parent_keys (list): Parent dictionary keys. Set by recursion. - parent (Roots, optional): Parent object set in `RootItem` - helps to keep RootItem instance updated with `Roots` object. - - Returns: - `RootItem` or `dict` with multiple `RootItem`s when multiroot - setting is used. - """ - if not parent_keys: - parent_keys = [] - is_last = False - for value in data.values(): - if isinstance(value, StringType): - is_last = True - break - - if is_last: - return RootItem(data, key, parent_keys, parent=parent) - - output = {} - for _key, value in data.items(): - _parent_keys = list(parent_keys) - _parent_keys.append(_key) - output[_key] = Roots._parse_dict(value, _key, _parent_keys, parent) - return output + @functools.wraps(func) + def new_func(*args, **kwargs): + warnings.simplefilter("always", AnatomyDeprecatedWarning) + warnings.warn( + ( + "Deprecated import of 'Anatomy'." + " Class was moved to 'openpype.pipeline.anatomy'." + " Please change your imports of Anatomy in codebase." + ), + category=AnatomyDeprecatedWarning + ) + return func(*args, **kwargs) + return new_func + + +@anatomy_deprecated +def Anatomy(*args, **kwargs): + from openpype.pipeline.anatomy import Anatomy + return Anatomy(*args, **kwargs) diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py index a81bdeca0f2..d2298486458 100644 --- a/openpype/lib/applications.py +++ b/openpype/lib/applications.py @@ -20,10 +20,7 @@ METADATA_KEYS, M_DYNAMIC_KEY_LABEL ) -from . import ( - PypeLogger, - Anatomy -) +from . import PypeLogger from .profiles_filtering import filter_profiles from .local_settings import get_openpype_username from .avalon_context import ( @@ -1305,7 +1302,7 @@ def get_app_environments_for_context( dict: Environments for passed context and application. """ - from openpype.pipeline import AvalonMongoDB + from openpype.pipeline import AvalonMongoDB, Anatomy # Avalon database connection dbcon = AvalonMongoDB() diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py index a03f0663002..616460410ed 100644 --- a/openpype/lib/avalon_context.py +++ b/openpype/lib/avalon_context.py @@ -14,7 +14,6 @@ get_project_settings, get_system_settings ) -from .anatomy import Anatomy from .profiles_filtering import filter_profiles from .events import emit_event from .path_templates import StringTemplate @@ -593,6 +592,7 @@ def get_workdir_with_workdir_data( )) if not anatomy: + from openpype.pipeline import Anatomy anatomy = Anatomy(project_name) if not template_key: @@ -604,7 +604,10 @@ def get_workdir_with_workdir_data( anatomy_filled = anatomy.format(workdir_data) # Output is TemplateResult object which contain useful data - return anatomy_filled[template_key]["folder"] + path = anatomy_filled[template_key]["folder"] + if path: + path = os.path.normpath(path) + return path def get_workdir( @@ -635,6 +638,7 @@ def get_workdir( TemplateResult: Workdir path. 
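The `anatomy_deprecated` shim above keeps the old `openpype.lib` import location alive while steering callers to `openpype.pipeline.anatomy`. A quick way to confirm the warning fires, assuming `Anatomy` is still re-exported from `openpype.lib`, that `AnatomyDeprecatedWarning` derives from `DeprecationWarning` (as its editorial counterpart below does), and that a configured OpenPype environment with the hypothetical project exists:

```python
import warnings

from openpype.lib import Anatomy  # deprecated location, now a thin wrapper

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    anatomy = Anatomy("my_project")  # hypothetical project name

# The shim should have emitted a DeprecationWarning subclass.
assert any(
    issubclass(warning.category, DeprecationWarning)
    for warning in caught
)
```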
""" if not anatomy: + from openpype.pipeline import Anatomy anatomy = Anatomy(project_doc["name"]) workdir_data = get_workdir_data( @@ -747,6 +751,8 @@ def compute_session_changes( @with_pipeline_io def get_workdir_from_session(session=None, template_key=None): + from openpype.pipeline import Anatomy + if session is None: session = legacy_io.Session project_name = session["AVALON_PROJECT"] @@ -762,7 +768,10 @@ def get_workdir_from_session(session=None, template_key=None): host_name, project_name=project_name ) - return anatomy_filled[template_key]["folder"] + path = anatomy_filled[template_key]["folder"] + if path: + path = os.path.normpath(path) + return path @with_pipeline_io @@ -853,6 +862,8 @@ def create_workfile_doc(asset_doc, task_name, filename, workdir, dbcon=None): dbcon (AvalonMongoDB): Optionally enter avalon AvalonMongoDB object and `legacy_io` is used if not entered. """ + from openpype.pipeline import Anatomy + # Use legacy_io if dbcon is not entered if not dbcon: dbcon = legacy_io @@ -1673,6 +1684,7 @@ def _get_task_context_data_for_anatomy( """ if anatomy is None: + from openpype.pipeline import Anatomy anatomy = Anatomy(project_doc["name"]) asset_name = asset_doc["name"] @@ -1741,6 +1753,7 @@ def get_custom_workfile_template_by_context( """ if anatomy is None: + from openpype.pipeline import Anatomy anatomy = Anatomy(project_doc["name"]) # get project, asset, task anatomy context data diff --git a/openpype/lib/editorial.py b/openpype/lib/editorial.py index 32d6dc36887..49220b4f15d 100644 --- a/openpype/lib/editorial.py +++ b/openpype/lib/editorial.py @@ -1,289 +1,102 @@ -import os -import re -import clique -from .import_utils import discover_host_vendor_module - -try: - import opentimelineio as otio - from opentimelineio import opentime as _ot -except ImportError: - if not os.environ.get("AVALON_APP"): - raise - otio = discover_host_vendor_module("opentimelineio") - _ot = discover_host_vendor_module("opentimelineio.opentime") - - -def otio_range_to_frame_range(otio_range): - start = _ot.to_frames( - otio_range.start_time, otio_range.start_time.rate) - end = start + _ot.to_frames( - otio_range.duration, otio_range.duration.rate) - return start, end - - -def otio_range_with_handles(otio_range, instance): - handle_start = instance.data["handleStart"] - handle_end = instance.data["handleEnd"] - handles_duration = handle_start + handle_end - fps = float(otio_range.start_time.rate) - start = _ot.to_frames(otio_range.start_time, fps) - duration = _ot.to_frames(otio_range.duration, fps) - - return _ot.TimeRange( - start_time=_ot.RationalTime((start - handle_start), fps), - duration=_ot.RationalTime((duration + handles_duration), fps) - ) - - -def is_overlapping_otio_ranges(test_otio_range, main_otio_range, strict=False): - test_start, test_end = otio_range_to_frame_range(test_otio_range) - main_start, main_end = otio_range_to_frame_range(main_otio_range) - covering_exp = bool( - (test_start <= main_start) and (test_end >= main_end) - ) - inside_exp = bool( - (test_start >= main_start) and (test_end <= main_end) - ) - overlaying_right_exp = bool( - (test_start <= main_end) and (test_end >= main_end) - ) - overlaying_left_exp = bool( - (test_end >= main_start) and (test_start <= main_start) - ) - - if not strict: - return any(( - covering_exp, - inside_exp, - overlaying_right_exp, - overlaying_left_exp - )) - else: - return covering_exp - - -def convert_to_padded_path(path, padding): - """ - Return correct padding in sequence string +"""Code related to editorial utility functions 
was moved +to 'openpype.pipeline.editorial' please change your imports as soon as +possible. File will be probably removed in OpenPype 3.14.* +""" - Args: - path (str): path url or simple file name - padding (int): number of padding +import warnings +import functools - Returns: - type: string with reformated path - Example: - convert_to_padded_path("plate.%d.exr") > plate.%04d.exr +class EditorialDeprecatedWarning(DeprecationWarning): + pass - """ - if "%d" in path: - path = re.sub("%d", "%0{padding}d".format(padding=padding), path) - return path +def editorial_deprecated(func): + """Mark functions as deprecated. -def trim_media_range(media_range, source_range): + It will result in a warning being emitted when the function is used. """ - Trim input media range with clip source range. - Args: - media_range (otio._ot._ot.TimeRange): available range of media - source_range (otio._ot._ot.TimeRange): clip required range + @functools.wraps(func) + def new_func(*args, **kwargs): + warnings.simplefilter("always", EditorialDeprecatedWarning) + warnings.warn( + ( + "Call to deprecated function '{}'." + " Function was moved to 'openpype.pipeline.editorial'." + ).format(func.__name__), + category=EditorialDeprecatedWarning, + stacklevel=2 + ) + return func(*args, **kwargs) + return new_func - Returns: - otio._ot._ot.TimeRange: trimmed media range - """ - rw_media_start = _ot.RationalTime( - media_range.start_time.value + source_range.start_time.value, - media_range.start_time.rate - ) - rw_media_duration = _ot.RationalTime( - source_range.duration.value, - media_range.duration.rate - ) - return _ot.TimeRange( - rw_media_start, rw_media_duration) - - -def range_from_frames(start, duration, fps): - """ - Returns otio time range. +@editorial_deprecated +def otio_range_to_frame_range(*args, **kwargs): + from openpype.pipeline.editorial import otio_range_to_frame_range - Args: - start (int): frame start - duration (int): frame duration - fps (float): frame range + return otio_range_to_frame_range(*args, **kwargs) - Returns: - otio._ot._ot.TimeRange: created range - """ - return _ot.TimeRange( - _ot.RationalTime(start, fps), - _ot.RationalTime(duration, fps) - ) +@editorial_deprecated +def otio_range_with_handles(*args, **kwargs): + from openpype.pipeline.editorial import otio_range_with_handles + return otio_range_with_handles(*args, **kwargs) -def frames_to_secons(frames, framerate): - """ - Returning secons. - Args: - frames (int): frame - framerate (float): frame rate +@editorial_deprecated +def is_overlapping_otio_ranges(*args, **kwargs): + from openpype.pipeline.editorial import is_overlapping_otio_ranges - Returns: - float: second value + return is_overlapping_otio_ranges(*args, **kwargs) - """ - rt = _ot.from_frames(frames, framerate) - return _ot.to_seconds(rt) +@editorial_deprecated +def convert_to_padded_path(*args, **kwargs): + from openpype.pipeline.editorial import convert_to_padded_path -def frames_to_timecode(frames, framerate): - rt = _ot.from_frames(frames, framerate) - return _ot.to_timecode(rt) + return convert_to_padded_path(*args, **kwargs) -def make_sequence_collection(path, otio_range, metadata): - """ - Make collection from path otio range and otio metadata. 
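The deprecated editorial wrappers forward to `openpype.pipeline.editorial`, but the underlying frame math is plain `opentimelineio.opentime`. This sketch mirrors what the removed `otio_range_to_frame_range` computed (it assumes `opentimelineio` is importable):

```python
from opentimelineio import opentime as _ot

otio_range = _ot.TimeRange(
    start_time=_ot.RationalTime(24, 24.0),
    duration=_ot.RationalTime(48, 24.0),
)

# Start frame, then exclusive end frame, using the same to_frames
# arithmetic as the removed otio_range_to_frame_range.
start = _ot.to_frames(otio_range.start_time, otio_range.start_time.rate)
end = start + _ot.to_frames(otio_range.duration, otio_range.duration.rate)
print(start, end)  # 24 72
```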
+@editorial_deprecated +def trim_media_range(*args, **kwargs): + from openpype.pipeline.editorial import trim_media_range - Args: - path (str): path to image sequence with `%d` - otio_range (otio._ot._ot.TimeRange): range to be used - metadata (dict): data where padding value can be found + return trim_media_range(*args, **kwargs) - Returns: - list: dir_path (str): path to sequence, collection object - """ - if "%" not in path: - return None - file_name = os.path.basename(path) - dir_path = os.path.dirname(path) - head = file_name.split("%")[0] - tail = os.path.splitext(file_name)[-1] - first, last = otio_range_to_frame_range(otio_range) - collection = clique.Collection( - head=head, tail=tail, padding=metadata["padding"]) - collection.indexes.update([i for i in range(first, last)]) - return dir_path, collection - - -def _sequence_resize(source, length): - step = float(len(source) - 1) / (length - 1) - for i in range(length): - low, ratio = divmod(i * step, 1) - high = low + 1 if ratio > 0 else low - yield (1 - ratio) * source[int(low)] + ratio * source[int(high)] - - -def get_media_range_with_retimes(otio_clip, handle_start, handle_end): - source_range = otio_clip.source_range - available_range = otio_clip.available_range() - media_in = available_range.start_time.value - media_out = available_range.end_time_inclusive().value - - # modifiers - time_scalar = 1. - offset_in = 0 - offset_out = 0 - time_warp_nodes = [] - - # Check for speed effects and adjust playback speed accordingly - for effect in otio_clip.effects: - if isinstance(effect, otio.schema.LinearTimeWarp): - time_scalar = effect.time_scalar - - elif isinstance(effect, otio.schema.FreezeFrame): - # For freeze frame, playback speed must be set after range - time_scalar = 0. - - elif isinstance(effect, otio.schema.TimeEffect): - # For freeze frame, playback speed must be set after range - name = effect.name - effect_name = effect.effect_name - if "TimeWarp" not in effect_name: - continue - metadata = effect.metadata - lookup = metadata.get("lookup") - if not lookup: - continue - - # time warp node - tw_node = { - "Class": "TimeWarp", - "name": name - } - tw_node.update(metadata) - tw_node["lookup"] = list(lookup) - - # get first and last frame offsets - offset_in += lookup[0] - offset_out += lookup[-1] - - # add to timewarp nodes - time_warp_nodes.append(tw_node) - - # multiply by time scalar - offset_in *= time_scalar - offset_out *= time_scalar - - # filip offset if reversed speed - if time_scalar < 0: - _offset_in = offset_out - _offset_out = offset_in - offset_in = _offset_in - offset_out = _offset_out - - # scale handles - handle_start *= abs(time_scalar) - handle_end *= abs(time_scalar) - - # filip handles if reversed speed - if time_scalar < 0: - _handle_start = handle_end - _handle_end = handle_start - handle_start = _handle_start - handle_end = _handle_end - - source_in = source_range.start_time.value - - media_in_trimmed = ( - media_in + source_in + offset_in) - media_out_trimmed = ( - media_in + source_in + ( - ((source_range.duration.value - 1) * abs( - time_scalar)) + offset_out)) - - # calculate available handles - if (media_in_trimmed - media_in) < handle_start: - handle_start = (media_in_trimmed - media_in) - if (media_out - media_out_trimmed) < handle_end: - handle_end = (media_out - media_out_trimmed) - - # create version data - version_data = { - "versionData": { - "retime": True, - "speed": time_scalar, - "timewarps": time_warp_nodes, - "handleStart": round(handle_start), - "handleEnd": round(handle_end) - } - 
} - - returning_dict = { - "mediaIn": media_in_trimmed, - "mediaOut": media_out_trimmed, - "handleStart": round(handle_start), - "handleEnd": round(handle_end) - } - - # add version data only if retime - if time_warp_nodes or time_scalar != 1.: - returning_dict.update(version_data) - - return returning_dict +@editorial_deprecated +def range_from_frames(*args, **kwargs): + from openpype.pipeline.editorial import range_from_frames + + return range_from_frames(*args, **kwargs) + + +@editorial_deprecated +def frames_to_secons(*args, **kwargs): + from openpype.pipeline.editorial import frames_to_seconds + + return frames_to_seconds(*args, **kwargs) + + +@editorial_deprecated +def frames_to_timecode(*args, **kwargs): + from openpype.pipeline.editorial import frames_to_timecode + + return frames_to_timecode(*args, **kwargs) + + +@editorial_deprecated +def make_sequence_collection(*args, **kwargs): + from openpype.pipeline.editorial import make_sequence_collection + + return make_sequence_collection(*args, **kwargs) + + +@editorial_deprecated +def get_media_range_with_retimes(*args, **kwargs): + from openpype.pipeline.editorial import get_media_range_with_retimes + + return get_media_range_with_retimes(*args, **kwargs) diff --git a/openpype/lib/import_utils.py b/openpype/lib/import_utils.py deleted file mode 100644 index e88c07fca6c..00000000000 --- a/openpype/lib/import_utils.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -import sys -import importlib -from .log import PypeLogger as Logger - -log = Logger().get_logger(__name__) - - -def discover_host_vendor_module(module_name): - host = os.environ["AVALON_APP"] - pype_root = os.environ["OPENPYPE_REPOS_ROOT"] - main_module = module_name.split(".")[0] - module_path = os.path.join( - pype_root, "hosts", host, "vendor", main_module) - - log.debug( - "Importing module from host vendor path: `{}`".format(module_path)) - - if not os.path.exists(module_path): - log.warning( - "Path not existing: `{}`".format(module_path)) - return None - - sys.path.insert(1, module_path) - return importlib.import_module(module_name) diff --git a/openpype/lib/path_tools.py b/openpype/lib/path_tools.py index caad20f4d64..4f28be3302b 100644 --- a/openpype/lib/path_tools.py +++ b/openpype/lib/path_tools.py @@ -9,7 +9,6 @@ from openpype.client import get_project from openpype.settings import get_project_settings -from .anatomy import Anatomy from .profiles_filtering import filter_profiles log = logging.getLogger(__name__) @@ -227,6 +226,7 @@ def fill_paths(path_list, anatomy): def create_project_folders(basic_paths, project_name): + from openpype.pipeline import Anatomy anatomy = Anatomy(project_name) concat_paths = concatenate_splitted_paths(basic_paths, anatomy) diff --git a/openpype/modules/avalon_apps/rest_api.py b/openpype/modules/avalon_apps/rest_api.py index b35f5bf3579..a52ce1b6df9 100644 --- a/openpype/modules/avalon_apps/rest_api.py +++ b/openpype/modules/avalon_apps/rest_api.py @@ -5,7 +5,12 @@ from aiohttp.web_response import Response -from openpype.pipeline import AvalonMongoDB +from openpype.client import ( + get_projects, + get_project, + get_assets, + get_asset_by_name, +) from openpype_modules.webserver.base_routes import RestApiEndpoint @@ -14,19 +19,13 @@ def __init__(self, resource): self.resource = resource super(_RestApiEndpoint, self).__init__() - @property - def dbcon(self): - return self.resource.dbcon - class AvalonProjectsEndpoint(_RestApiEndpoint): async def get(self) -> Response: - output = [] - for project_name in 
self.dbcon.database.collection_names(): - project_doc = self.dbcon.database[project_name].find_one({ - "type": "project" - }) - output.append(project_doc) + output = [ + project_doc + for project_doc in get_projects() + ] return Response( status=200, body=self.resource.encode(output), @@ -36,9 +35,7 @@ async def get(self) -> Response: class AvalonProjectEndpoint(_RestApiEndpoint): async def get(self, project_name) -> Response: - project_doc = self.dbcon.database[project_name].find_one({ - "type": "project" - }) + project_doc = get_project(project_name) if project_doc: return Response( status=200, @@ -53,9 +50,7 @@ async def get(self, project_name) -> Response: class AvalonAssetsEndpoint(_RestApiEndpoint): async def get(self, project_name) -> Response: - asset_docs = list(self.dbcon.database[project_name].find({ - "type": "asset" - })) + asset_docs = list(get_assets(project_name)) return Response( status=200, body=self.resource.encode(asset_docs), @@ -65,10 +60,7 @@ async def get(self, project_name) -> Response: class AvalonAssetEndpoint(_RestApiEndpoint): async def get(self, project_name, asset_name) -> Response: - asset_doc = self.dbcon.database[project_name].find_one({ - "type": "asset", - "name": asset_name - }) + asset_doc = get_asset_by_name(project_name, asset_name) if asset_doc: return Response( status=200, @@ -88,9 +80,6 @@ def __init__(self, avalon_module, server_manager): self.module = avalon_module self.server_manager = server_manager - self.dbcon = AvalonMongoDB() - self.dbcon.install() - self.prefix = "/avalon" self.endpoint_defs = ( diff --git a/openpype/modules/clockify/launcher_actions/ClockifyStart.py b/openpype/modules/clockify/launcher_actions/ClockifyStart.py index 4669f98b011..7663aecc316 100644 --- a/openpype/modules/clockify/launcher_actions/ClockifyStart.py +++ b/openpype/modules/clockify/launcher_actions/ClockifyStart.py @@ -1,16 +1,9 @@ -from openpype.api import Logger -from openpype.pipeline import ( - legacy_io, - LauncherAction, -) +from openpype.client import get_asset_by_name +from openpype.pipeline import LauncherAction from openpype_modules.clockify.clockify_api import ClockifyAPI -log = Logger.get_logger(__name__) - - class ClockifyStart(LauncherAction): - name = "clockify_start_timer" label = "Clockify - Start Timer" icon = "clockify_icon" @@ -24,20 +17,19 @@ def is_compatible(self, session): return False def process(self, session, **kwargs): - project_name = session['AVALON_PROJECT'] - asset_name = session['AVALON_ASSET'] - task_name = session['AVALON_TASK'] + project_name = session["AVALON_PROJECT"] + asset_name = session["AVALON_ASSET"] + task_name = session["AVALON_TASK"] description = asset_name - asset = legacy_io.find_one({ - 'type': 'asset', - 'name': asset_name - }) - if asset is not None: - desc_items = asset.get('data', {}).get('parents', []) + asset_doc = get_asset_by_name( + project_name, asset_name, fields=["data.parents"] + ) + if asset_doc is not None: + desc_items = asset_doc.get("data", {}).get("parents", []) desc_items.append(asset_name) desc_items.append(task_name) - description = '/'.join(desc_items) + description = "/".join(desc_items) project_id = self.clockapi.get_project_id(project_name) tag_ids = [] diff --git a/openpype/modules/clockify/launcher_actions/ClockifySync.py b/openpype/modules/clockify/launcher_actions/ClockifySync.py index 356bbd03069..c346a1b4f62 100644 --- a/openpype/modules/clockify/launcher_actions/ClockifySync.py +++ b/openpype/modules/clockify/launcher_actions/ClockifySync.py @@ -1,11 +1,6 @@ +from 
openpype.client import get_projects, get_project from openpype_modules.clockify.clockify_api import ClockifyAPI -from openpype.api import Logger -from openpype.pipeline import ( - legacy_io, - LauncherAction, -) - -log = Logger.get_logger(__name__) +from openpype.pipeline import LauncherAction class ClockifySync(LauncherAction): @@ -22,39 +17,36 @@ def is_compatible(self, session): return self.have_permissions def process(self, session, **kwargs): - project_name = session.get('AVALON_PROJECT', None) + project_name = session.get("AVALON_PROJECT") or "" projects_to_sync = [] - if project_name.strip() == '' or project_name is None: - for project in legacy_io.projects(): - projects_to_sync.append(project) + if project_name.strip(): + projects_to_sync = [get_project(project_name)] else: - project = legacy_io.find_one({'type': 'project'}) - projects_to_sync.append(project) + projects_to_sync = get_projects() projects_info = {} for project in projects_to_sync: - task_types = project['config']['tasks'].keys() - projects_info[project['name']] = task_types + task_types = project["config"]["tasks"].keys() + projects_info[project["name"]] = task_types clockify_projects = self.clockapi.get_projects() for project_name, task_types in projects_info.items(): - if project_name not in clockify_projects: - response = self.clockapi.add_project(project_name) - if 'id' not in response: - self.log.error('Project {} can\'t be created'.format( - project_name - )) - continue - project_id = response['id'] - else: - project_id = clockify_projects[project_name] + if project_name in clockify_projects: + continue + + response = self.clockapi.add_project(project_name) + if "id" not in response: + self.log.error("Project {} can't be created".format( + project_name + )) + continue clockify_workspace_tags = self.clockapi.get_tags() for task_type in task_types: if task_type not in clockify_workspace_tags: response = self.clockapi.add_tag(task_type) - if 'id' not in response: + if "id" not in response: self.log.error('Task {} can\'t be created'.format( task_type )) diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py index 9964e3c6467..3707c5709fd 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py @@ -710,7 +710,9 @@ def process(self, instance): new_payload["JobInfo"].update(tiles_data["JobInfo"]) new_payload["PluginInfo"].update(tiles_data["PluginInfo"]) - job_hash = hashlib.sha256("{}_{}".format(file_index, file)) + self.log.info("hashing {} - {}".format(file_index, file)) + job_hash = hashlib.sha256( + ("{}_{}".format(file_index, file)).encode("utf-8")) frame_jobs[frame] = job_hash.hexdigest() new_payload["JobInfo"]["ExtraInfo0"] = job_hash.hexdigest() new_payload["JobInfo"]["ExtraInfo1"] = file diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index b54b00d0994..9dd1428a63e 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -1045,7 +1045,7 @@ def _get_publish_folder(self, anatomy, template_data, get publish_path Args: - anatomy (pype.lib.anatomy.Anatomy): + anatomy (openpype.pipeline.anatomy.Anatomy): template_data (dict): pre-calculated collected data for process asset (string): asset name subset (string): subset name (actually group name of 
subset) diff --git a/openpype/modules/ftrack/event_handlers_server/event_user_assigment.py b/openpype/modules/ftrack/event_handlers_server/event_user_assigment.py index 82b79e986b8..88d252e8cf8 100644 --- a/openpype/modules/ftrack/event_handlers_server/event_user_assigment.py +++ b/openpype/modules/ftrack/event_handlers_server/event_user_assigment.py @@ -2,11 +2,11 @@ import subprocess from openpype.client import get_asset_by_id, get_asset_by_name +from openpype.settings import get_project_settings +from openpype.pipeline import Anatomy from openpype_modules.ftrack.lib import BaseEvent from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY -from openpype.api import Anatomy, get_project_settings - class UserAssigmentEvent(BaseEvent): """ diff --git a/openpype/modules/ftrack/event_handlers_user/action_create_folders.py b/openpype/modules/ftrack/event_handlers_user/action_create_folders.py index 81f38e0c390..9806f83773d 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_create_folders.py +++ b/openpype/modules/ftrack/event_handlers_user/action_create_folders.py @@ -1,7 +1,7 @@ import os import collections import copy -from openpype.api import Anatomy +from openpype.pipeline import Anatomy from openpype_modules.ftrack.lib import BaseAction, statics_icon diff --git a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py index 3400c509ab0..79d04a78549 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py @@ -11,9 +11,8 @@ get_versions, get_representations ) -from openpype.api import Anatomy from openpype.lib import StringTemplate, TemplateUnsolved -from openpype.pipeline import AvalonMongoDB +from openpype.pipeline import AvalonMongoDB, Anatomy from openpype_modules.ftrack.lib import BaseAction, statics_icon diff --git a/openpype/modules/ftrack/event_handlers_user/action_delivery.py b/openpype/modules/ftrack/event_handlers_user/action_delivery.py index 4b799b092bc..ad82af39a3c 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delivery.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delivery.py @@ -10,12 +10,13 @@ get_versions, get_representations ) -from openpype.api import Anatomy, config +from openpype.pipeline import Anatomy from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY from openpype_modules.ftrack.lib.custom_attributes import ( query_custom_attributes ) +from openpype.lib import config from openpype.lib.delivery import ( path_from_representation, get_format_dict, diff --git a/openpype/modules/ftrack/event_handlers_user/action_fill_workfile_attr.py b/openpype/modules/ftrack/event_handlers_user/action_fill_workfile_attr.py index d30c41a7499..d91649d7baa 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_fill_workfile_attr.py +++ b/openpype/modules/ftrack/event_handlers_user/action_fill_workfile_attr.py @@ -11,13 +11,13 @@ get_project, get_assets, ) -from openpype.api import get_project_settings +from openpype.settings import get_project_settings from openpype.lib import ( get_workfile_template_key, get_workdir_data, - Anatomy, StringTemplate, ) +from openpype.pipeline import Anatomy from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib.avalon_sync import create_chunks diff --git 
a/openpype/modules/ftrack/event_handlers_user/action_rv.py b/openpype/modules/ftrack/event_handlers_user/action_rv.py index 2480ea7f95a..d05f0c47f6d 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_rv.py +++ b/openpype/modules/ftrack/event_handlers_user/action_rv.py @@ -11,10 +11,10 @@ get_version_by_name, get_representation_by_name ) -from openpype.api import Anatomy from openpype.pipeline import ( get_representation_path, AvalonMongoDB, + Anatomy, ) from openpype_modules.ftrack.lib import BaseAction, statics_icon diff --git a/openpype/modules/ftrack/event_handlers_user/action_store_thumbnails_to_avalon.py b/openpype/modules/ftrack/event_handlers_user/action_store_thumbnails_to_avalon.py index d655dddcaf3..8748f426bdc 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_store_thumbnails_to_avalon.py +++ b/openpype/modules/ftrack/event_handlers_user/action_store_thumbnails_to_avalon.py @@ -14,8 +14,7 @@ get_representations ) from openpype_modules.ftrack.lib import BaseAction, statics_icon -from openpype.api import Anatomy -from openpype.pipeline import AvalonMongoDB +from openpype.pipeline import AvalonMongoDB, Anatomy from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY diff --git a/openpype/modules/log_viewer/tray/widgets.py b/openpype/modules/log_viewer/tray/widgets.py index ed08e62109e..c7ac64ab70a 100644 --- a/openpype/modules/log_viewer/tray/widgets.py +++ b/openpype/modules/log_viewer/tray/widgets.py @@ -1,3 +1,4 @@ +import html from Qt import QtCore, QtWidgets import qtawesome from .models import LogModel, LogsFilterProxy @@ -286,7 +287,7 @@ def set_detail(self, logs=None): if level == "debug": line_f = ( " -" - " {{ {loggerName} }}: [" + " {{ {logger_name} }}: [" " {message}" " ]" ) @@ -299,7 +300,7 @@ def set_detail(self, logs=None): elif level == "warning": line_f = ( "*** WRN:" - " >>> {{ {loggerName} }}: [" + " >>> {{ {logger_name} }}: [" " {message}" " ]" ) @@ -307,16 +308,25 @@ def set_detail(self, logs=None): line_f = ( "!!! 
ERR:" " {timestamp}" - " >>> {{ {loggerName} }}: [" + " >>> {{ {logger_name} }}: [" " {message}" " ]" ) + logger_name = log["loggerName"] + timestamp = "" + if not show_timecode: + timestamp = log["timestamp"] + message = log["message"] exc = log.get("exception") if exc: - log["message"] = exc["message"] + message = exc["message"] - line = line_f.format(**log) + line = line_f.format( + message=html.escape(message), + logger_name=logger_name, + timestamp=timestamp + ) if show_timecode: timestamp = log["timestamp"] diff --git a/openpype/modules/sync_server/providers/local_drive.py b/openpype/modules/sync_server/providers/local_drive.py index 68f604b39c1..172cb338cf2 100644 --- a/openpype/modules/sync_server/providers/local_drive.py +++ b/openpype/modules/sync_server/providers/local_drive.py @@ -4,10 +4,11 @@ import threading import time -from openpype.api import Logger, Anatomy +from openpype.api import Logger +from openpype.pipeline import Anatomy from .abstract_provider import AbstractProvider -log = Logger().get_logger("SyncServer") +log = Logger.get_logger("SyncServer") class LocalDriveHandler(AbstractProvider): diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py index 698b296a52d..4027561d227 100644 --- a/openpype/modules/sync_server/sync_server_module.py +++ b/openpype/modules/sync_server/sync_server_module.py @@ -9,14 +9,12 @@ from openpype.modules import OpenPypeModule from openpype_interfaces import ITrayModule -from openpype.api import ( - Anatomy, +from openpype.settings import ( get_project_settings, get_system_settings, - get_local_site_id ) -from openpype.lib import PypeLogger -from openpype.pipeline import AvalonMongoDB +from openpype.lib import PypeLogger, get_local_site_id +from openpype.pipeline import AvalonMongoDB, Anatomy from openpype.settings.lib import ( get_default_anatomy_settings, get_anatomy_settings @@ -28,7 +26,7 @@ from .utils import time_function, SyncStatus, SiteAlreadyPresentError -log = PypeLogger().get_logger("SyncServer") +log = PypeLogger.get_logger("SyncServer") class SyncServerModule(OpenPypeModule, ITrayModule): diff --git a/openpype/modules/sync_server/tray/models.py b/openpype/modules/sync_server/tray/models.py index c49edeafb90..6d1e85c17a2 100644 --- a/openpype/modules/sync_server/tray/models.py +++ b/openpype/modules/sync_server/tray/models.py @@ -1,6 +1,7 @@ import os import attr from bson.objectid import ObjectId +import datetime from Qt import QtCore from Qt.QtCore import Qt @@ -413,6 +414,23 @@ def get_index(self, id): return index return None + def _convert_date(self, date_value, current_date): + """Converts 'date_value' to string. + + Value of date_value might contain date in the future, used for nicely + sort queued items next to last downloaded. 
+ """ + try: + converted_date = None + # ignore date in the future - for sorting only + if date_value and date_value < current_date: + converted_date = date_value.strftime("%Y%m%dT%H%M%SZ") + except (AttributeError, TypeError): + # ignore unparseable values + pass + + return converted_date + class SyncRepresentationSummaryModel(_SyncRepresentationModel): """ @@ -560,7 +578,7 @@ def add_page_records(self, local_site, remote_site, representations): remote_provider = lib.translate_provider_for_icon(self.sync_server, self.project, remote_site) - + current_date = datetime.datetime.now() for repre in result.get("paginatedResults"): files = repre.get("files", []) if isinstance(files, dict): # aggregate returns dictionary @@ -570,14 +588,10 @@ def add_page_records(self, local_site, remote_site, representations): if not files: continue - local_updated = remote_updated = None - if repre.get('updated_dt_local'): - local_updated = \ - repre.get('updated_dt_local').strftime("%Y%m%dT%H%M%SZ") - - if repre.get('updated_dt_remote'): - remote_updated = \ - repre.get('updated_dt_remote').strftime("%Y%m%dT%H%M%SZ") + local_updated = self._convert_date(repre.get('updated_dt_local'), + current_date) + remote_updated = self._convert_date(repre.get('updated_dt_remote'), + current_date) avg_progress_remote = lib.convert_progress( repre.get('avg_progress_remote', '0')) @@ -645,6 +659,8 @@ def get_query(self, limit=0): if limit == 0: limit = SyncRepresentationSummaryModel.PAGE_SIZE + # replace null with value in the future for better sorting + dummy_max_date = datetime.datetime(2099, 1, 1) aggr = [ {"$match": self.get_match_part()}, {'$unwind': '$files'}, @@ -687,7 +703,7 @@ def get_query(self, limit=0): {'$cond': [ {'$size': "$order_remote.last_failed_dt"}, "$order_remote.last_failed_dt", - [] + [dummy_max_date] ]} ]}}, 'updated_dt_local': {'$first': { @@ -696,7 +712,7 @@ def get_query(self, limit=0): {'$cond': [ {'$size': "$order_local.last_failed_dt"}, "$order_local.last_failed_dt", - [] + [dummy_max_date] ]} ]}}, 'files_size': {'$ifNull': ["$files.size", 0]}, @@ -1039,6 +1055,7 @@ def add_page_records(self, local_site, remote_site, representations): self.project, remote_site) + current_date = datetime.datetime.now() for repre in result.get("paginatedResults"): # log.info("!!! 
repre:: {}".format(repre)) files = repre.get("files", []) @@ -1046,16 +1063,12 @@ def add_page_records(self, local_site, remote_site, representations): files = [files] for file in files: - local_updated = remote_updated = None - if repre.get('updated_dt_local'): - local_updated = \ - repre.get('updated_dt_local').strftime( - "%Y%m%dT%H%M%SZ") - - if repre.get('updated_dt_remote'): - remote_updated = \ - repre.get('updated_dt_remote').strftime( - "%Y%m%dT%H%M%SZ") + local_updated = self._convert_date( + repre.get('updated_dt_local'), + current_date) + remote_updated = self._convert_date( + repre.get('updated_dt_remote'), + current_date) remote_progress = lib.convert_progress( repre.get('progress_remote', '0')) @@ -1104,6 +1117,7 @@ def get_query(self, limit=0): if limit == 0: limit = SyncRepresentationSummaryModel.PAGE_SIZE + dummy_max_date = datetime.datetime(2099, 1, 1) aggr = [ {"$match": self.get_match_part()}, {"$unwind": "$files"}, @@ -1147,7 +1161,7 @@ def get_query(self, limit=0): '$cond': [ {'$size': "$order_remote.last_failed_dt"}, "$order_remote.last_failed_dt", - [] + [dummy_max_date] ] } ] @@ -1160,7 +1174,7 @@ def get_query(self, limit=0): '$cond': [ {'$size': "$order_local.last_failed_dt"}, "$order_local.last_failed_dt", - [] + [dummy_max_date] ] } ] diff --git a/openpype/modules/webserver/cors_middleware.py b/openpype/modules/webserver/cors_middleware.py new file mode 100644 index 00000000000..f1cd7b04b39 --- /dev/null +++ b/openpype/modules/webserver/cors_middleware.py @@ -0,0 +1,283 @@ +r""" +=============== +CORS Middleware +=============== +.. versionadded:: 0.2.0 +Dealing with CORS headers for aiohttp applications. +**IMPORTANT:** There is a `aiohttp-cors +`_ library, which handles CORS +headers by attaching additional handlers to aiohttp application for +OPTIONS (preflight) requests. In same time this CORS middleware mimics the +logic of `django-cors-headers `_, +where all handling done in the middleware without any additional handlers. This +approach allows aiohttp application to respond with CORS headers for OPTIONS or +wildcard handlers, which is not possible with ``aiohttp-cors`` due to +https://github.com/aio-libs/aiohttp-cors/issues/241 issue. +For detailed information about CORS (Cross Origin Resource Sharing) please +visit: +- `Wikipedia `_ +- Or `MDN `_ +Configuration +============= +**IMPORTANT:** By default, CORS middleware do not allow any origins to access +content from your aiohttp appliction. Which means, you need carefully check +possible options and provide custom values for your needs. +Usage +===== +.. 
code-block:: python + import re + from aiohttp import web + from aiohttp_middlewares import cors_middleware + from aiohttp_middlewares.cors import DEFAULT_ALLOW_HEADERS + # Insecure configuration to allow all CORS requests + app = web.Application( + middlewares=[cors_middleware(allow_all=True)] + ) + # Allow CORS requests from URL http://localhost:3000 + app = web.Application( + middlewares=[ + cors_middleware(origins=["http://localhost:3000"]) + ] + ) + # Allow CORS requests from all localhost urls + app = web.Application( + middlewares=[ + cors_middleware( + origins=[re.compile(r"^https?\:\/\/localhost")] + ) + ] + ) + # Allow CORS requests from https://frontend.myapp.com as well + # as allow credentials + CORS_ALLOW_ORIGINS = ["https://frontend.myapp.com"] + app = web.Application( + middlewares=[ + cors_middleware( + origins=CORS_ALLOW_ORIGINS, + allow_credentials=True, + ) + ] + ) + # Allow CORS requests only for API urls + app = web.Application( + middlewares=[ + cors_middleware( + origins=CORS_ALLOW_ORIGINS, + urls=[re.compile(r"^\/api")], + ) + ] + ) + # Allow CORS requests for POST & PATCH methods, and for all + # default headers and `X-Client-UID` + app = web.Application( + middlewares=[ + cors_middleware( + origins=CORS_ALLOW_ORIGINS, + allow_methods=("POST", "PATCH"), + allow_headers=DEFAULT_ALLOW_HEADERS + + ("X-Client-UID",), + ) + ] + ) +""" + +import logging +import re +from typing import Pattern, Tuple + +from aiohttp import web + +from aiohttp_middlewares.annotations import ( + Handler, + Middleware, + StrCollection, + UrlCollection, +) +from aiohttp_middlewares.utils import match_path + + +ACCESS_CONTROL = "Access-Control" +ACCESS_CONTROL_ALLOW = f"{ACCESS_CONTROL}-Allow" +ACCESS_CONTROL_ALLOW_CREDENTIALS = f"{ACCESS_CONTROL_ALLOW}-Credentials" +ACCESS_CONTROL_ALLOW_HEADERS = f"{ACCESS_CONTROL_ALLOW}-Headers" +ACCESS_CONTROL_ALLOW_METHODS = f"{ACCESS_CONTROL_ALLOW}-Methods" +ACCESS_CONTROL_ALLOW_ORIGIN = f"{ACCESS_CONTROL_ALLOW}-Origin" +ACCESS_CONTROL_EXPOSE_HEADERS = f"{ACCESS_CONTROL}-Expose-Headers" +ACCESS_CONTROL_MAX_AGE = f"{ACCESS_CONTROL}-Max-Age" +ACCESS_CONTROL_REQUEST_METHOD = f"{ACCESS_CONTROL}-Request-Method" + +DEFAULT_ALLOW_HEADERS = ( + "accept", + "accept-encoding", + "authorization", + "content-type", + "dnt", + "origin", + "user-agent", + "x-csrftoken", + "x-requested-with", +) +DEFAULT_ALLOW_METHODS = ("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT") +DEFAULT_URLS: Tuple[Pattern[str]] = (re.compile(r".*"),) + +logger = logging.getLogger(__name__) + + +def cors_middleware( + *, + allow_all: bool = False, + origins: UrlCollection = None, + urls: UrlCollection = None, + expose_headers: StrCollection = None, + allow_headers: StrCollection = DEFAULT_ALLOW_HEADERS, + allow_methods: StrCollection = DEFAULT_ALLOW_METHODS, + allow_credentials: bool = False, + max_age: int = None, +) -> Middleware: + """Middleware to provide CORS headers for aiohttp applications. + :param allow_all: + When enabled, allow any Origin to access content from your aiohttp web + application. **Please be careful with enabling this option as it may + result in security issues for your application.** By default: ``False`` + :param origins: + Allow content access for given list of origins. Supports supplying + strings for exact origin match or regex instances. By default: ``None`` + :param urls: + Allow content access for given list of URLs in aiohttp application. 
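Both the `urls` and `origins` checks in this middleware come down to matching a value against a mix of exact strings and compiled patterns. Here is a toy version of that check (illustrative only, not the `aiohttp_middlewares` implementation the module imports):

```python
import re

origins = [re.compile(r"^https?://localhost"), "https://frontend.myapp.com"]


def origin_allowed(origin):
    """Match an Origin header against exact strings or regex patterns."""
    for item in origins:
        if hasattr(item, "match"):
            if item.match(origin):
                return True
        elif item == origin:
            return True
    return False


print(origin_allowed("http://localhost:3000"))       # True
print(origin_allowed("https://frontend.myapp.com"))  # True
print(origin_allowed("https://evil.example"))        # False
```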
+ By default: *apply CORS headers for all URLs* + :param expose_headers: + List of headers to be exposed with every CORS request. By default: + ``None`` + :param allow_headers: + List of allowed headers. By default: + .. code-block:: python + ( + "accept", + "accept-encoding", + "authorization", + "content-type", + "dnt", + "origin", + "user-agent", + "x-csrftoken", + "x-requested-with", + ) + :param allow_methods: + List of allowed methods. By default: + .. code-block:: python + ("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT") + :param allow_credentials: + When enabled, apply the allow-credentials header in the response, which results + in sharing cookies on shared resources. **Please be careful with + allowing credentials for CORS requests.** By default: ``False`` + :param max_age: Access control max age in seconds. By default: ``None`` + """ + check_urls: UrlCollection = DEFAULT_URLS if urls is None else urls + + @web.middleware + async def middleware( + request: web.Request, handler: Handler + ) -> web.StreamResponse: + # Initial vars + request_method = request.method + request_path = request.rel_url.path + + # Is this an OPTIONS request? + is_options_request = request_method == "OPTIONS" + + # Is this a preflight request? + is_preflight_request = ( + is_options_request + and ACCESS_CONTROL_REQUEST_METHOD in request.headers + ) + + # Log extra data + log_extra = { + "is_preflight_request": is_preflight_request, + "method": request_method.lower(), + "path": request_path, + } + + # Check whether CORS should be enabled for the given URL or not. By default + # CORS is enabled for all URLs + if not match_items(check_urls, request_path): + logger.debug( + "Request should not be processed via CORS middleware", + extra=log_extra, + ) + return await handler(request) + + # If this is a preflight request - generate an empty response + if is_preflight_request: + response = web.StreamResponse() + # Otherwise - call the actual handler + else: + response = await handler(request) + + # Now check the origin header + origin = request.headers.get("Origin") + # Empty origin - do nothing + if not origin: + logger.debug( + "Request does not have Origin header. 
CORS headers not " + "available for given requests", + extra=log_extra, + ) + return response + + # Set allow credentials header if necessary + if allow_credentials: + response.headers[ACCESS_CONTROL_ALLOW_CREDENTIALS] = "true" + + # Check whether current origin satisfies CORS policy + if not allow_all and not (origins and match_items(origins, origin)): + logger.debug( + "CORS headers not allowed for given Origin", extra=log_extra + ) + return response + + # Now start supplying CORS headers + # First one is Access-Control-Allow-Origin + if allow_all and not allow_credentials: + cors_origin = "*" + else: + cors_origin = origin + response.headers[ACCESS_CONTROL_ALLOW_ORIGIN] = cors_origin + + # Then Access-Control-Expose-Headers + if expose_headers: + response.headers[ACCESS_CONTROL_EXPOSE_HEADERS] = ", ".join( + expose_headers + ) + + # Now, if this is an options request, respond with extra Allow headers + if is_options_request: + response.headers[ACCESS_CONTROL_ALLOW_HEADERS] = ", ".join( + allow_headers + ) + response.headers[ACCESS_CONTROL_ALLOW_METHODS] = ", ".join( + allow_methods + ) + if max_age is not None: + response.headers[ACCESS_CONTROL_MAX_AGE] = str(max_age) + + # If this is preflight request - do not allow other middlewares to + # process this request + if is_preflight_request: + logger.debug( + "Provide CORS headers with empty response for preflight " + "request", + extra=log_extra, + ) + raise web.HTTPOk(text="", headers=response.headers) + + # Otherwise return normal response + logger.debug("Provide CORS headers for request", extra=log_extra) + return response + + return middleware + + +def match_items(items: UrlCollection, value: str) -> bool: + """Go through all items and try to match item with given value.""" + return any(match_path(item, value) for item in items) diff --git a/openpype/modules/webserver/server.py b/openpype/modules/webserver/server.py index 83a29e074e8..82b681f4066 100644 --- a/openpype/modules/webserver/server.py +++ b/openpype/modules/webserver/server.py @@ -1,15 +1,18 @@ +import re import threading import asyncio from aiohttp import web from openpype.lib import PypeLogger +from .cors_middleware import cors_middleware log = PypeLogger.get_logger("WebServer") class WebServerManager: """Manger that care about web server thread.""" + def __init__(self, port=None, host=None): self.port = port or 8079 self.host = host or "localhost" @@ -18,7 +21,13 @@ def __init__(self, port=None, host=None): self.handlers = {} self.on_stop_callbacks = [] - self.app = web.Application() + self.app = web.Application( + middlewares=[ + cors_middleware( + origins=[re.compile(r"^https?\:\/\/localhost")] + ) + ] + ) # add route with multiple methods for single "external app" diff --git a/openpype/pipeline/__init__.py b/openpype/pipeline/__init__.py index 2e441fbf270..2cf785d981d 100644 --- a/openpype/pipeline/__init__.py +++ b/openpype/pipeline/__init__.py @@ -6,6 +6,7 @@ from .mongodb import ( AvalonMongoDB, ) +from .anatomy import Anatomy from .create import ( BaseCreator, @@ -96,6 +97,9 @@ # --- MongoDB --- "AvalonMongoDB", + # --- Anatomy --- + "Anatomy", + # --- Create --- "BaseCreator", "Creator", diff --git a/openpype/pipeline/anatomy.py b/openpype/pipeline/anatomy.py new file mode 100644 index 00000000000..73081f18fbf --- /dev/null +++ b/openpype/pipeline/anatomy.py @@ -0,0 +1,1257 @@ +import os +import re +import copy +import platform +import collections +import numbers + +import six + +from openpype.settings.lib import get_anatomy_settings +from 
openpype.lib.path_templates import (
+    TemplateUnsolved,
+    TemplateResult,
+    TemplatesDict,
+    FormatObject,
+)
+from openpype.lib.log import PypeLogger
+
+log = PypeLogger.get_logger(__name__)
+
+
+class ProjectNotSet(Exception):
+    """Exception raised when Anatomy is created without a project name."""
+
+
+class RootCombinationError(Exception):
+    """This exception is raised when templates have combined root types."""
+
+    def __init__(self, roots):
+        joined_roots = ", ".join(
+            ["\"{}\"".format(_root) for _root in roots]
+        )
+        # TODO better error message
+        msg = (
+            "Combination of root with and"
+            " without root name in AnatomyTemplates. {}"
+        ).format(joined_roots)
+
+        super(RootCombinationError, self).__init__(msg)
+
+
+class Anatomy:
+    """Anatomy helps to keep project settings.
+
+    Wraps key project specifications, AnatomyTemplates and Roots.
+
+    Args:
+        project_name (str): Project name for which overrides are loaded.
+    """
+
+    root_key_regex = re.compile(r"{(root?[^}]+)}")
+    root_name_regex = re.compile(r"root\[([^]]+)\]")
+
+    def __init__(self, project_name=None, site_name=None):
+        if not project_name:
+            project_name = os.environ.get("AVALON_PROJECT")
+
+        if not project_name:
+            raise ProjectNotSet((
+                "Implementation bug: Project name is not set. Anatomy"
+                " requires a project name to load data for a specific"
+                " project."
+            ))
+
+        self.project_name = project_name
+
+        self._data = self._prepare_anatomy_data(
+            get_anatomy_settings(project_name, site_name)
+        )
+        self._site_name = site_name
+        self._templates_obj = AnatomyTemplates(self)
+        self._roots_obj = Roots(self)
+
+    # Anatomy used as dictionary
+    # - only getters are implemented and they return copies
+    def __getitem__(self, key):
+        return copy.deepcopy(self._data[key])
+
+    def get(self, key, default=None):
+        return copy.deepcopy(self._data).get(key, default)
+
+    def keys(self):
+        return copy.deepcopy(self._data).keys()
+
+    def values(self):
+        return copy.deepcopy(self._data).values()
+
+    def items(self):
+        return copy.deepcopy(self._data).items()
+
+    @staticmethod
+    def _prepare_anatomy_data(anatomy_data):
+        """Prepare anatomy data for further processing.
+
+        Method added to replace `{task}` with `{task[name]}` in templates.
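+
+        Example (illustrative, with a minimal templates dict)::
+
+            >>> Anatomy._prepare_anatomy_data(
+            ...     {"templates": {"work": {"file": "{task}_v{version}"}}}
+            ... )
+            {'templates': {'work': {'file': '{task[name]}_v{version}'}}}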
+ """ + templates_data = anatomy_data.get("templates") + if templates_data: + # Replace `{task}` with `{task[name]}` in templates + value_queue = collections.deque() + value_queue.append(templates_data) + while value_queue: + item = value_queue.popleft() + if not isinstance(item, dict): + continue + + for key in tuple(item.keys()): + value = item[key] + if isinstance(value, dict): + value_queue.append(value) + + elif isinstance(value, six.string_types): + item[key] = value.replace("{task}", "{task[name]}") + return anatomy_data + + def reset(self): + """Reset values of cached data in templates and roots objects.""" + self._data = self._prepare_anatomy_data( + get_anatomy_settings(self.project_name, self._site_name) + ) + self.templates_obj.reset() + self.roots_obj.reset() + + @property + def templates(self): + """Wrap property `templates` of Anatomy's AnatomyTemplates instance.""" + return self._templates_obj.templates + + @property + def templates_obj(self): + """Return `AnatomyTemplates` object of current Anatomy instance.""" + return self._templates_obj + + def format(self, *args, **kwargs): + """Wrap `format` method of Anatomy's `templates_obj`.""" + return self._templates_obj.format(*args, **kwargs) + + def format_all(self, *args, **kwargs): + """Wrap `format_all` method of Anatomy's `templates_obj`.""" + return self._templates_obj.format_all(*args, **kwargs) + + @property + def roots(self): + """Wrap `roots` property of Anatomy's `roots_obj`.""" + return self._roots_obj.roots + + @property + def roots_obj(self): + """Return `Roots` object of current Anatomy instance.""" + return self._roots_obj + + def root_environments(self): + """Return OPENPYPE_ROOT_* environments for current project in dict.""" + return self._roots_obj.root_environments() + + def root_environmets_fill_data(self, template=None): + """Environment variable values in dictionary for rootless path. + + Args: + template (str): Template for environment variable key fill. + By default is set to `"${}"`. + """ + return self.roots_obj.root_environmets_fill_data(template) + + def find_root_template_from_path(self, *args, **kwargs): + """Wrapper for Roots `find_root_template_from_path`.""" + return self.roots_obj.find_root_template_from_path(*args, **kwargs) + + def path_remapper(self, *args, **kwargs): + """Wrapper for Roots `path_remapper`.""" + return self.roots_obj.path_remapper(*args, **kwargs) + + def all_root_paths(self): + """Wrapper for Roots `all_root_paths`.""" + return self.roots_obj.all_root_paths() + + def set_root_environments(self): + """Set OPENPYPE_ROOT_* environments for current project.""" + self._roots_obj.set_root_environments() + + def root_names(self): + """Return root names for current project.""" + return self.root_names_from_templates(self.templates) + + def _root_keys_from_templates(self, data): + """Extract root key from templates in data. + + Args: + data (dict): Data that may contain templates as string. + + Return: + set: Set of all root names from templates as strings. 
+
+        Output example: `{"root[work]", "root[publish]"}`
+        """
+
+        output = set()
+        if isinstance(data, dict):
+            for value in data.values():
+                for root in self._root_keys_from_templates(value):
+                    output.add(root)
+
+        elif isinstance(data, str):
+            for group in re.findall(self.root_key_regex, data):
+                output.add(group)
+
+        return output
+
+    def root_value_for_template(self, template):
+        """Return value of root key from template."""
+        root_templates = []
+        for group in re.findall(self.root_key_regex, template):
+            root_templates.append("{" + group + "}")
+
+        if not root_templates:
+            return None
+
+        return root_templates[0].format(**{"root": self.roots})
+
+    def root_names_from_templates(self, templates):
+        """Extract root names from anatomy templates.
+
+        Returns None if values in templates contain only "{root}".
+        Empty list is returned if there is no "root" in templates.
+        Else returns all root names from templates in a list.
+
+        RootCombinationError is raised when templates contain both root types,
+        basic "{root}" and with root name specification "{root[work]}".
+
+        Args:
+            templates (dict): Anatomy templates where roots are not filled.
+
+        Return:
+            list/None: List of all root names from templates as strings when
+                multiroot setup is used, otherwise None is returned.
+        """
+        roots = list(self._root_keys_from_templates(templates))
+        # Return empty list if no roots found in templates
+        if not roots:
+            return roots
+
+        # Raise exception when root keys mix roots with and without root name
+        # Invalid output example: ["root", "root[project]", "root[render]"]
+        if len(roots) > 1 and "root" in roots:
+            raise RootCombinationError(roots)
+
+        # Return None if "root" without root name is in templates
+        if len(roots) == 1 and roots[0] == "root":
+            return None
+
+        names = set()
+        for root in roots:
+            for group in re.findall(self.root_name_regex, root):
+                names.add(group)
+        return list(names)
+
+    def fill_root(self, template_path):
+        """Fill template path where only the "root" key is unfilled.
+
+        Args:
+            template_path (str): Path with "root" key in it.
+                Example path: "{root}/projects/MyProject/Shot01/Lighting/..."
+
+        Return:
+            str: Formatted path.
+        """
+        # NOTE does not care if there are different keys than "root"
+        return template_path.format(**{"root": self.roots})
+
+    @classmethod
+    def fill_root_with_path(cls, rootless_path, root_path):
+        """Fill path without filled "root" key with passed path.
+
+        This is a helper to fill the root with a different directory path
+        than the anatomy has defined, no matter if the setup is single or
+        multiroot.
+
+        Output path is same as input path if `rootless_path` does not contain
+        unfilled root key.
+
+        Args:
+            rootless_path (str): Path without filled "root" key. Example:
+                "{root[work]}/MyProject/..."
+            root_path (str): What should replace root key in `rootless_path`.
+
+        Returns:
+            str: Path with filled root.
+        """
+        output = str(rootless_path)
+        for group in re.findall(cls.root_key_regex, rootless_path):
+            replacement = "{" + group + "}"
+            output = output.replace(replacement, root_path)
+
+        return output
+
+    def replace_root_with_env_key(self, filepath, template=None):
+        """Replace root of path with environment key.
+
+        # Example:
+        ## Project with roots:
+        ```
+        {
+            "nas": {
+                "windows": "P:/projects",
+                ...
+            }
+            ...
+        }
+        ```
+
+        ## Entered filepath
+        "P:/projects/project/asset/task/animation_v001.ma"
+
+        ## Entered template
+        "<{}>"
+
+        ## Output
+        "<OPENPYPE_PROJECT_ROOT_NAS>/project/asset/task/animation_v001.ma"
+
+        Args:
+            filepath (str): Full file path where root should be replaced.
+ template (str): Optional template for environment key. Must + have one index format key. + Default value if not entered: "${}" + + Returns: + str: Path where root is replaced with environment root key. + + Raise: + ValueError: When project's roots were not found in entered path. + """ + success, rootless_path = self.find_root_template_from_path(filepath) + if not success: + raise ValueError( + "{}: Project's roots were not found in path: {}".format( + self.project_name, filepath + ) + ) + + data = self.root_environmets_fill_data(template) + return rootless_path.format(**data) + + +class AnatomyTemplateUnsolved(TemplateUnsolved): + """Exception for unsolved template when strict is set to True.""" + + msg = "Anatomy template \"{0}\" is unsolved.{1}{2}" + + +class AnatomyTemplateResult(TemplateResult): + rootless = None + + def __new__(cls, result, rootless_path): + new_obj = super(AnatomyTemplateResult, cls).__new__( + cls, + str(result), + result.template, + result.solved, + result.used_values, + result.missing_keys, + result.invalid_types + ) + new_obj.rootless = rootless_path + return new_obj + + def validate(self): + if not self.solved: + raise AnatomyTemplateUnsolved( + self.template, + self.missing_keys, + self.invalid_types + ) + + def copy(self): + tmp = TemplateResult( + str(self), + self.template, + self.solved, + self.used_values, + self.missing_keys, + self.invalid_types + ) + return self.__class__(tmp, self.rootless) + + +class AnatomyTemplates(TemplatesDict): + inner_key_pattern = re.compile(r"(\{@.*?[^{}0]*\})") + inner_key_name_pattern = re.compile(r"\{@(.*?[^{}0]*)\}") + + def __init__(self, anatomy): + super(AnatomyTemplates, self).__init__() + self.anatomy = anatomy + self.loaded_project = None + + def __getitem__(self, key): + return self.templates[key] + + def get(self, key, default=None): + return self.templates.get(key, default) + + def reset(self): + self._raw_templates = None + self._templates = None + self._objected_templates = None + + @property + def project_name(self): + return self.anatomy.project_name + + @property + def roots(self): + return self.anatomy.roots + + @property + def templates(self): + self._validate_discovery() + return self._templates + + @property + def objected_templates(self): + self._validate_discovery() + return self._objected_templates + + def _validate_discovery(self): + if self.project_name != self.loaded_project: + self.reset() + + if self._templates is None: + self._discover() + self.loaded_project = self.project_name + + def _format_value(self, value, data): + if isinstance(value, RootItem): + return self._solve_dict(value, data) + + result = super(AnatomyTemplates, self)._format_value(value, data) + if isinstance(result, TemplateResult): + rootless_path = self._rootless_path(result, data) + result = AnatomyTemplateResult(result, rootless_path) + return result + + def set_templates(self, templates): + if not templates: + self.reset() + return + + self._raw_templates = copy.deepcopy(templates) + templates = copy.deepcopy(templates) + v_queue = collections.deque() + v_queue.append(templates) + while v_queue: + item = v_queue.popleft() + if not isinstance(item, dict): + continue + + for key in tuple(item.keys()): + value = item[key] + if isinstance(value, dict): + v_queue.append(value) + + elif ( + isinstance(value, six.string_types) + and "{task}" in value + ): + item[key] = value.replace("{task}", "{task[name]}") + + solved_templates = self.solve_template_inner_links(templates) + self._templates = solved_templates + 
self._objected_templates = self.create_ojected_templates(
+            solved_templates
+        )
+
+    def default_templates(self):
+        """Return default templates data with solved inner keys."""
+        return self.solve_template_inner_links(
+            self.anatomy["templates"]
+        )
+
+    def _discover(self):
+        """Load anatomy templates for the project.
+
+        Default templates are loaded if project is not set or project does
+        not have its own set.
+        TODO: create templates if they do not exist.
+
+        Returns:
+            TemplatesResultDict: Templates data for the current project or
+                the default templates.
+        """
+
+        if self.project_name is None:
+            # QUESTION create project specific if not found?
+            raise AssertionError((
+                "Project \"{0}\" does not have its own templates."
+                " Trying to use default."
+            ).format(self.project_name))
+
+        self.set_templates(self.anatomy["templates"])
+
+    @classmethod
+    def replace_inner_keys(cls, matches, value, key_values, key):
+        """Replace inner keys in template values."""
+        for match in matches:
+            anatomy_sub_keys = (
+                cls.inner_key_name_pattern.findall(match)
+            )
+            if key in anatomy_sub_keys:
+                raise ValueError((
+                    "Unsolvable recursion in inner keys, "
+                    "key: \"{}\" is in its own value."
+                    " Can't determine source, please check Anatomy templates."
+                ).format(key))
+
+            for anatomy_sub_key in anatomy_sub_keys:
+                replace_value = key_values.get(anatomy_sub_key)
+                if replace_value is None:
+                    raise KeyError((
+                        "Anatomy templates can't be filled."
+                        " Anatomy key `{0}` has"
+                        " invalid inner key `{1}`."
+                    ).format(key, anatomy_sub_key))
+
+                if not (
+                    isinstance(replace_value, numbers.Number)
+                    or isinstance(replace_value, six.string_types)
+                ):
+                    raise ValueError((
+                        "Anatomy templates can't be filled."
+                        " Anatomy key `{0}` has"
+                        " invalid inner key `{1}`"
+                        " with value `{2}`."
+                    ).format(key, anatomy_sub_key, str(replace_value)))
+
+                value = value.replace(match, str(replace_value))
+
+        return value
+
+    @classmethod
+    def prepare_inner_keys(cls, key_values):
+        """Check values of inner keys.
+
+        Check if inner key exists in the template group and has a valid
+        value. It is also required to avoid an infinite loop with unsolvable
+        recursion, when the first inner key's value refers to a second inner
+        key's value in which the first one is used.
+        """
+        keys_to_solve = set(key_values.keys())
+        while True:
+            found = False
+            for key in tuple(keys_to_solve):
+                value = key_values[key]
+
+                if isinstance(value, six.string_types):
+                    matches = cls.inner_key_pattern.findall(value)
+                    if not matches:
+                        keys_to_solve.remove(key)
+                        continue
+
+                    found = True
+                    key_values[key] = cls.replace_inner_keys(
+                        matches, value, key_values, key
+                    )
+                    continue
+
+                elif not isinstance(value, dict):
+                    keys_to_solve.remove(key)
+                    continue
+
+                subdict_found = False
+                for _key, _value in tuple(value.items()):
+                    matches = cls.inner_key_pattern.findall(_value)
+                    if not matches:
+                        continue
+
+                    subdict_found = True
+                    found = True
+                    key_values[key][_key] = cls.replace_inner_keys(
+                        matches, _value, key_values,
+                        "{}.{}".format(key, _key)
+                    )
+
+                if not subdict_found:
+                    keys_to_solve.remove(key)
+
+            if not found:
+                break
+
+        return key_values
+
+    @classmethod
+    def solve_template_inner_links(cls, templates):
+        """Solve template inner keys identified by "{@*}".
+
+        Process is split into 2 parts.
+        First is collecting all global keys (keys in top hierarchy where
+        value is not a dictionary). All global keys are set for all group
+        keys (keys in top hierarchy where value is a dictionary). A key's
+        value is not overridden in a group that already contains a value
+        for the key.
+
+        In the second part, all keys with the "at" symbol in their value are
+        replaced with the value of the key that follows the "at" symbol,
+        taken from the group.
+
+        Args:
+            templates (dict): Raw templates data.
+
+        Example:
+            templates::
+                key_1: "value_1"
+                key_2: "{@key_1}/{filling_key}"
+
+                group_1:
+                    key_3: "value_3/{@key_2}"
+
+                group_2:
+                    key_2: "value_2"
+                    key_4: "value_4/{@key_2}"
+
+            output::
+                key_1: "value_1"
+                key_2: "value_1/{filling_key}"
+
+                group_1:
+                    key_1: "value_1"
+                    key_2: "value_1/{filling_key}"
+                    key_3: "value_3/value_1/{filling_key}"
+
+                group_2:
+                    key_1: "value_1"
+                    key_2: "value_2"
+                    key_4: "value_4/value_2"
+        """
+        default_key_values = templates.pop("defaults", {})
+        for key, value in tuple(templates.items()):
+            if isinstance(value, dict):
+                continue
+            default_key_values[key] = templates.pop(key)
+
+        # Pop "others" key before expected keys are processed
+        other_templates = templates.pop("others") or {}
+
+        keys_by_subkey = {}
+        for sub_key, sub_value in templates.items():
+            key_values = {}
+            key_values.update(default_key_values)
+            key_values.update(sub_value)
+            keys_by_subkey[sub_key] = cls.prepare_inner_keys(key_values)
+
+        for sub_key, sub_value in other_templates.items():
+            if sub_key in keys_by_subkey:
+                log.warning((
+                    "Key \"{}\" is duplicated in others. Skipping."
+                ).format(sub_key))
+                continue
+
+            key_values = {}
+            key_values.update(default_key_values)
+            key_values.update(sub_value)
+            keys_by_subkey[sub_key] = cls.prepare_inner_keys(key_values)
+
+        default_keys_by_subkeys = cls.prepare_inner_keys(default_key_values)
+
+        for key, value in default_keys_by_subkeys.items():
+            keys_by_subkey[key] = value
+
+        return keys_by_subkey
+
+    def _dict_to_subkeys_list(self, subdict, pre_keys=None):
+        if pre_keys is None:
+            pre_keys = []
+        output = []
+        for key in subdict:
+            value = subdict[key]
+            result = list(pre_keys)
+            result.append(key)
+            if isinstance(value, dict):
+                for item in self._dict_to_subkeys_list(value, result):
+                    output.append(item)
+            else:
+                output.append(result)
+        return output
+
+    def _keys_to_dicts(self, key_list, value):
+        if not key_list:
+            return None
+        if len(key_list) == 1:
+            return {key_list[0]: value}
+        return {key_list[0]: self._keys_to_dicts(key_list[1:], value)}
+
+    def _rootless_path(self, result, final_data):
+        used_values = result.used_values
+        missing_keys = result.missing_keys
+        template = result.template
+        invalid_types = result.invalid_types
+        if (
+            "root" not in used_values
+            or "root" in missing_keys
+            or "{root" not in template
+        ):
+            return
+
+        for invalid_type in invalid_types:
+            if "root" in invalid_type:
+                return
+
+        root_keys = self._dict_to_subkeys_list({"root": used_values["root"]})
+        if not root_keys:
+            return
+
+        output = str(result)
+        for used_root_keys in root_keys:
+            if not used_root_keys:
+                continue
+
+            used_value = used_values
+            root_key = None
+            for key in used_root_keys:
+                used_value = used_value[key]
+                if root_key is None:
+                    root_key = key
+                else:
+                    root_key += "[{}]".format(key)
+
+            root_key = "{" + root_key + "}"
+            output = output.replace(str(used_value), root_key)
+
+        return output
+
+    def format(self, data, strict=True):
+        copy_data = copy.deepcopy(data)
+        roots = self.roots
+        if roots:
+            copy_data["root"] = roots
+        result = super(AnatomyTemplates, self).format(copy_data)
+        result.strict = strict
+        return result
+
+    def format_all(self, in_data, only_keys=True):
+        """Solve templates based on entered data.
+
+        Args:
+            in_data (dict): Data with keys to be filled into the templates.
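+            only_keys (bool, optional): Not used by this method; kept in
+                the signature for callers that still pass it.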
+ + Returns: + TemplatesResultDict: Output `TemplateResult` have `strict` + attribute set to False so accessing unfilled keys in templates + won't raise any exceptions. + """ + return self.format(in_data, strict=False) + + +class RootItem(FormatObject): + """Represents one item or roots. + + Holds raw data of root item specification. Raw data contain value + for each platform, but current platform value is used when object + is used for formatting of template. + + Args: + root_raw_data (dict): Dictionary containing root values by platform + names. ["windows", "linux" and "darwin"] + name (str, optional): Root name which is representing. Used with + multi root setup otherwise None value is expected. + parent_keys (list, optional): All dictionary parent keys. Values of + `parent_keys` are used for get full key which RootItem is + representing. Used for replacing root value in path with + formattable key. e.g. parent_keys == ["work"] -> {root[work]} + parent (object, optional): It is expected to be `Roots` object. + Value of `parent` won't affect code logic much. + """ + + def __init__( + self, root_raw_data, name=None, parent_keys=None, parent=None + ): + lowered_platform_keys = {} + for key, value in root_raw_data.items(): + lowered_platform_keys[key.lower()] = value + self.raw_data = lowered_platform_keys + self.cleaned_data = self._clean_roots(lowered_platform_keys) + self.name = name + self.parent_keys = parent_keys or [] + self.parent = parent + + self.available_platforms = list(lowered_platform_keys.keys()) + self.value = lowered_platform_keys.get(platform.system().lower()) + self.clean_value = self.clean_root(self.value) + + def __format__(self, *args, **kwargs): + return self.value.__format__(*args, **kwargs) + + def __str__(self): + return str(self.value) + + def __repr__(self): + return self.__str__() + + def __getitem__(self, key): + if isinstance(key, numbers.Number): + return self.value[key] + + additional_info = "" + if self.parent and self.parent.project_name: + additional_info += " for project \"{}\"".format( + self.parent.project_name + ) + + raise AssertionError( + "Root key \"{}\" is missing{}.".format( + key, additional_info + ) + ) + + def full_key(self): + """Full key value for dictionary formatting in template. + + Returns: + str: Return full replacement key for formatting. This helps when + multiple roots are set. In that case e.g. `"root[work]"` is + returned. + """ + if not self.name: + return "root" + + joined_parent_keys = "".join( + ["[{}]".format(key) for key in self.parent_keys] + ) + return "root{}".format(joined_parent_keys) + + def clean_path(self, path): + """Just replace backslashes with forward slashes.""" + return str(path).replace("\\", "/") + + def clean_root(self, root): + """Makes sure root value does not end with slash.""" + if root: + root = self.clean_path(root) + while root.endswith("/"): + root = root[:-1] + return root + + def _clean_roots(self, raw_data): + """Clean all values of raw root item values.""" + cleaned = {} + for key, value in raw_data.items(): + cleaned[key] = self.clean_root(value) + return cleaned + + def path_remapper(self, path, dst_platform=None, src_platform=None): + """Remap path for specific platform. + + Args: + path (str): Source path which need to be remapped. + dst_platform (str, optional): Specify destination platform + for which remapping should happen. + src_platform (str, optional): Specify source platform. This is + recommended to not use and keep unset until you really want + to use specific platform. 
+ roots (dict/RootItem/None, optional): It is possible to remap + path with different roots then instance where method was + called has. + + Returns: + str/None: When path does not contain known root then + None is returned else returns remapped path with "{root}" + or "{root[]}". + """ + cleaned_path = self.clean_path(path) + if dst_platform: + dst_root_clean = self.cleaned_data.get(dst_platform) + if not dst_root_clean: + key_part = "" + full_key = self.full_key() + if full_key != "root": + key_part += "\"{}\" ".format(full_key) + + log.warning( + "Root {}miss platform \"{}\" definition.".format( + key_part, dst_platform + ) + ) + return None + + if cleaned_path.startswith(dst_root_clean): + return cleaned_path + + if src_platform: + src_root_clean = self.cleaned_data.get(src_platform) + if src_root_clean is None: + log.warning( + "Root \"{}\" miss platform \"{}\" definition.".format( + self.full_key(), src_platform + ) + ) + return None + + if not cleaned_path.startswith(src_root_clean): + return None + + subpath = cleaned_path[len(src_root_clean):] + if dst_platform: + # `dst_root_clean` is used from upper condition + return dst_root_clean + subpath + return self.clean_value + subpath + + result, template = self.find_root_template_from_path(path) + if not result: + return None + + def parent_dict(keys, value): + if not keys: + return value + + key = keys.pop(0) + return {key: parent_dict(keys, value)} + + if dst_platform: + format_value = parent_dict(list(self.parent_keys), dst_root_clean) + else: + format_value = parent_dict(list(self.parent_keys), self.value) + + return template.format(**{"root": format_value}) + + def find_root_template_from_path(self, path): + """Replaces known root value with formattable key in path. + + All platform values are checked for this replacement. + + Args: + path (str): Path where root value should be found. + + Returns: + tuple: Tuple contain 2 values: `success` (bool) and `path` (str). + When success it True then path should contain replaced root + value with formattable key. + + Example: + When input path is:: + "C:/windows/path/root/projects/my_project/file.ext" + + And raw data of item looks like:: + { + "windows": "C:/windows/path/root", + "linux": "/mount/root" + } + + Output will be:: + (True, "{root}/projects/my_project/file.ext") + + If any of raw data value wouldn't match path's root output is:: + (False, "C:/windows/path/root/projects/my_project/file.ext") + """ + result = False + output = str(path) + + root_paths = list(self.cleaned_data.values()) + mod_path = self.clean_path(path) + for root_path in root_paths: + # Skip empty paths + if not root_path: + continue + + if mod_path.startswith(root_path): + result = True + replacement = "{" + self.full_key() + "}" + output = replacement + mod_path[len(root_path):] + break + + return (result, output) + + +class Roots: + """Object which should be used for formatting "root" key in templates. + + Args: + anatomy Anatomy: Anatomy object created for a specific project. + """ + + env_prefix = "OPENPYPE_PROJECT_ROOT" + roots_filename = "roots.json" + + def __init__(self, anatomy): + self.anatomy = anatomy + self.loaded_project = None + self._roots = None + + def __format__(self, *args, **kwargs): + return self.roots.__format__(*args, **kwargs) + + def __getitem__(self, key): + return self.roots[key] + + def reset(self): + """Reset current roots value.""" + self._roots = None + + def path_remapper( + self, path, dst_platform=None, src_platform=None, roots=None + ): + """Remap path for specific platform. 
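+
+        When `roots` is not passed, roots of the current project are used.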
+
+        Args:
+            path (str): Source path which needs to be remapped.
+            dst_platform (str, optional): Specify destination platform
+                for which remapping should happen.
+            src_platform (str, optional): Specify source platform. It is
+                recommended to keep this unset unless you really need a
+                specific platform.
+            roots (dict/RootItem/None, optional): It is possible to remap
+                the path with different roots than those of the instance
+                the method was called on.
+
+        Returns:
+            str/None: None when path does not contain a known root,
+                otherwise the remapped path with "{root}" or "{root[]}".
+        """
+        if roots is None:
+            roots = self.roots
+
+        if roots is None:
+            raise ValueError("Roots are not set. Can't find path.")
+
+        if "{root" in path:
+            path = path.format(**{"root": roots})
+            # If `dst_platform` is not specified then return else continue.
+            if not dst_platform:
+                return path
+
+        if isinstance(roots, RootItem):
+            return roots.path_remapper(path, dst_platform, src_platform)
+
+        for _root in roots.values():
+            result = self.path_remapper(
+                path, dst_platform, src_platform, _root
+            )
+            if result is not None:
+                return result
+
+    def find_root_template_from_path(self, path, roots=None):
+        """Find root value in entered path and replace it with formatting key.
+
+        Args:
+            path (str): Source path where root will be searched.
+            roots (Roots/dict, optional): It is possible to use different
+                roots than those of the instance the method was called on.
+
+        Returns:
+            tuple: Tuple where the first value is a boolean representing
+                success and the second is the path, with the root replaced
+                by a formatting key when a match was found.
+
+        Raises:
+            ValueError: When roots are not entered and can't be loaded.
+        """
+        if roots is None:
+            log.debug(
+                "Looking for matching root in path \"{}\".".format(path)
+            )
+            roots = self.roots
+
+        if roots is None:
+            raise ValueError("Roots are not set. Can't find path.")
+
+        if isinstance(roots, RootItem):
+            return roots.find_root_template_from_path(path)
+
+        for root_name, _root in roots.items():
+            success, result = self.find_root_template_from_path(path, _root)
+            if success:
+                log.info("Found match in root \"{}\".".format(root_name))
+                return success, result
+
+        log.warning("No matching root was found in current setting.")
+        return (False, path)
+
+    def set_root_environments(self):
+        """Set root environments for current project."""
+        for key, value in self.root_environments().items():
+            os.environ[key] = value
+
+    def root_environments(self):
+        """Use root keys to create unique keys for environment variables.
+
+        Concatenates prefix "OPENPYPE_ROOT" with root keys to create unique
+        keys.
+
+        Returns:
+            dict: Result is `{(str): (str)}` dictionary where the key is a
+                unique key concatenated from the root keys and the value is
+                the root value of the current platform root.
+ + Example: + With raw root values:: + "work": { + "windows": "P:/projects/work", + "linux": "/mnt/share/projects/work", + "darwin": "/darwin/path/work" + }, + "publish": { + "windows": "P:/projects/publish", + "linux": "/mnt/share/projects/publish", + "darwin": "/darwin/path/publish" + } + + Result on windows platform:: + { + "OPENPYPE_ROOT_WORK": "P:/projects/work", + "OPENPYPE_ROOT_PUBLISH": "P:/projects/publish" + } + + Short example when multiroot is not used:: + { + "OPENPYPE_ROOT": "P:/projects" + } + """ + return self._root_environments() + + def all_root_paths(self, roots=None): + """Return all paths for all roots of all platforms.""" + if roots is None: + roots = self.roots + + output = [] + if isinstance(roots, RootItem): + for value in roots.raw_data.values(): + output.append(value) + return output + + for _roots in roots.values(): + output.extend(self.all_root_paths(_roots)) + return output + + def _root_environments(self, keys=None, roots=None): + if not keys: + keys = [] + if roots is None: + roots = self.roots + + if isinstance(roots, RootItem): + key_items = [self.env_prefix] + for _key in keys: + key_items.append(_key.upper()) + + key = "_".join(key_items) + # Make sure key and value does not contain unicode + # - can happen in Python 2 hosts + return {str(key): str(roots.value)} + + output = {} + for _key, _value in roots.items(): + _keys = list(keys) + _keys.append(_key) + output.update(self._root_environments(_keys, _value)) + return output + + def root_environmets_fill_data(self, template=None): + """Environment variable values in dictionary for rootless path. + + Args: + template (str): Template for environment variable key fill. + By default is set to `"${}"`. + """ + if template is None: + template = "${}" + return self._root_environmets_fill_data(template) + + def _root_environmets_fill_data(self, template, keys=None, roots=None): + if keys is None and roots is None: + return { + "root": self._root_environmets_fill_data( + template, [], self.roots + ) + } + + if isinstance(roots, RootItem): + key_items = [Roots.env_prefix] + for _key in keys: + key_items.append(_key.upper()) + key = "_".join(key_items) + return template.format(key) + + output = {} + for key, value in roots.items(): + _keys = list(keys) + _keys.append(key) + output[key] = self._root_environmets_fill_data( + template, _keys, value + ) + return output + + @property + def project_name(self): + """Return project name which will be used for loading root values.""" + return self.anatomy.project_name + + @property + def roots(self): + """Property for filling "root" key in templates. + + This property returns roots for current project or default root values. + Warning: + Default roots value may cause issues when project use different + roots settings. That may happen when project use multiroot + templates but default roots miss their keys. + """ + if self.project_name != self.loaded_project: + self._roots = None + + if self._roots is None: + self._roots = self._discover() + self.loaded_project = self.project_name + return self._roots + + def _discover(self): + """ Loads current project's roots or default. + + Default roots are loaded if project override's does not contain roots. + + Returns: + `RootItem` or `dict` with multiple `RootItem`s when multiroot + setting is used. + """ + + return self._parse_dict(self.anatomy["roots"], parent=self) + + @staticmethod + def _parse_dict(data, key=None, parent_keys=None, parent=None): + """Parse roots raw data into RootItem or dictionary with RootItems. 
+ + Converting raw roots data to `RootItem` helps to handle platform keys. + This method is recursive to be able handle multiroot setup and + is static to be able to load default roots without creating new object. + + Args: + data (dict): Should contain raw roots data to be parsed. + key (str, optional): Current root key. Set by recursion. + parent_keys (list): Parent dictionary keys. Set by recursion. + parent (Roots, optional): Parent object set in `RootItem` + helps to keep RootItem instance updated with `Roots` object. + + Returns: + `RootItem` or `dict` with multiple `RootItem`s when multiroot + setting is used. + """ + if not parent_keys: + parent_keys = [] + is_last = False + for value in data.values(): + if isinstance(value, six.string_types): + is_last = True + break + + if is_last: + return RootItem(data, key, parent_keys, parent=parent) + + output = {} + for _key, value in data.items(): + _parent_keys = list(parent_keys) + _parent_keys.append(_key) + output[_key] = Roots._parse_dict(value, _key, _parent_keys, parent) + return output diff --git a/openpype/pipeline/context_tools.py b/openpype/pipeline/context_tools.py index 4a147c230b1..e719e465144 100644 --- a/openpype/pipeline/context_tools.py +++ b/openpype/pipeline/context_tools.py @@ -1,11 +1,9 @@ """Core pipeline functionality""" import os -import sys import json import types import logging -import inspect import platform import pyblish.api @@ -14,11 +12,8 @@ import openpype from openpype.modules import load_modules, ModulesManager from openpype.settings import get_project_settings -from openpype.lib import ( - Anatomy, - filter_pyblish_plugins, -) - +from openpype.lib import filter_pyblish_plugins +from .anatomy import Anatomy from . import ( legacy_io, register_loader_plugin_path, @@ -235,73 +230,10 @@ def register_host(host): required, or browse the source code. """ - signatures = { - "ls": [] - } - _validate_signature(host, signatures) _registered_host["_"] = host -def _validate_signature(module, signatures): - # Required signatures for each member - - missing = list() - invalid = list() - success = True - - for member in signatures: - if not hasattr(module, member): - missing.append(member) - success = False - - else: - attr = getattr(module, member) - if sys.version_info.major >= 3: - signature = inspect.getfullargspec(attr)[0] - else: - signature = inspect.getargspec(attr)[0] - required_signature = signatures[member] - - assert isinstance(signature, list) - assert isinstance(required_signature, list) - - if not all(member in signature - for member in required_signature): - invalid.append({ - "member": member, - "signature": ", ".join(signature), - "required": ", ".join(required_signature) - }) - success = False - - if not success: - report = list() - - if missing: - report.append( - "Incomplete interface for module: '%s'\n" - "Missing: %s" % (module, ", ".join( - "'%s'" % member for member in missing)) - ) - - if invalid: - report.append( - "'%s': One or more members were found, but didn't " - "have the right argument signature." 
% module.__name__ - ) - - for member in invalid: - report.append( - " Found: {member}({signature})".format(**member) - ) - report.append( - " Expected: {member}({required})".format(**member) - ) - - raise ValueError("\n".join(report)) - - def registered_host(): """Return currently registered host""" return _registered_host["_"] diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index 7931ea400ad..12cd9bbc68d 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -6,6 +6,7 @@ from uuid import uuid4 from contextlib import contextmanager +from openpype.host import INewPublisher from openpype.pipeline import legacy_io from openpype.pipeline.mongodb import ( AvalonMongoDB, @@ -651,12 +652,6 @@ class CreateContext: discover_publish_plugins(bool): Discover publish plugins during reset phase. """ - # Methods required in host implementaion to be able create instances - # or change context data. - required_methods = ( - "get_context_data", - "update_context_data" - ) def __init__( self, host, dbcon=None, headless=False, reset=True, @@ -738,10 +733,10 @@ def get_host_misssing_methods(cls, host): Args: host(ModuleType): Host implementaion. """ - missing = set() - for attr_name in cls.required_methods: - if not hasattr(host, attr_name): - missing.add(attr_name) + + missing = set( + INewPublisher.get_missing_publish_methods(host) + ) return missing @property diff --git a/openpype/pipeline/editorial.py b/openpype/pipeline/editorial.py new file mode 100644 index 00000000000..f62a1842e07 --- /dev/null +++ b/openpype/pipeline/editorial.py @@ -0,0 +1,282 @@ +import os +import re +import clique + +import opentimelineio as otio +from opentimelineio import opentime as _ot + + +def otio_range_to_frame_range(otio_range): + start = _ot.to_frames( + otio_range.start_time, otio_range.start_time.rate) + end = start + _ot.to_frames( + otio_range.duration, otio_range.duration.rate) + return start, end + + +def otio_range_with_handles(otio_range, instance): + handle_start = instance.data["handleStart"] + handle_end = instance.data["handleEnd"] + handles_duration = handle_start + handle_end + fps = float(otio_range.start_time.rate) + start = _ot.to_frames(otio_range.start_time, fps) + duration = _ot.to_frames(otio_range.duration, fps) + + return _ot.TimeRange( + start_time=_ot.RationalTime((start - handle_start), fps), + duration=_ot.RationalTime((duration + handles_duration), fps) + ) + + +def is_overlapping_otio_ranges(test_otio_range, main_otio_range, strict=False): + test_start, test_end = otio_range_to_frame_range(test_otio_range) + main_start, main_end = otio_range_to_frame_range(main_otio_range) + covering_exp = bool( + (test_start <= main_start) and (test_end >= main_end) + ) + inside_exp = bool( + (test_start >= main_start) and (test_end <= main_end) + ) + overlaying_right_exp = bool( + (test_start <= main_end) and (test_end >= main_end) + ) + overlaying_left_exp = bool( + (test_end >= main_start) and (test_start <= main_start) + ) + + if not strict: + return any(( + covering_exp, + inside_exp, + overlaying_right_exp, + overlaying_left_exp + )) + else: + return covering_exp + + +def convert_to_padded_path(path, padding): + """ + Return correct padding in sequence string + + Args: + path (str): path url or simple file name + padding (int): number of padding + + Returns: + type: string with reformated path + + Example: + convert_to_padded_path("plate.%d.exr") > plate.%04d.exr + + """ + if "%d" in path: + path = re.sub("%d", 
"%0{padding}d".format(padding=padding), path) + return path + + +def trim_media_range(media_range, source_range): + """ + Trim input media range with clip source range. + + Args: + media_range (otio._ot._ot.TimeRange): available range of media + source_range (otio._ot._ot.TimeRange): clip required range + + Returns: + otio._ot._ot.TimeRange: trimmed media range + + """ + rw_media_start = _ot.RationalTime( + media_range.start_time.value + source_range.start_time.value, + media_range.start_time.rate + ) + rw_media_duration = _ot.RationalTime( + source_range.duration.value, + media_range.duration.rate + ) + return _ot.TimeRange( + rw_media_start, rw_media_duration) + + +def range_from_frames(start, duration, fps): + """ + Returns otio time range. + + Args: + start (int): frame start + duration (int): frame duration + fps (float): frame range + + Returns: + otio._ot._ot.TimeRange: created range + + """ + return _ot.TimeRange( + _ot.RationalTime(start, fps), + _ot.RationalTime(duration, fps) + ) + + +def frames_to_seconds(frames, framerate): + """ + Returning seconds. + + Args: + frames (int): frame + framerate (float): frame rate + + Returns: + float: second value + """ + + rt = _ot.from_frames(frames, framerate) + return _ot.to_seconds(rt) + + +def frames_to_timecode(frames, framerate): + rt = _ot.from_frames(frames, framerate) + return _ot.to_timecode(rt) + + +def make_sequence_collection(path, otio_range, metadata): + """ + Make collection from path otio range and otio metadata. + + Args: + path (str): path to image sequence with `%d` + otio_range (otio._ot._ot.TimeRange): range to be used + metadata (dict): data where padding value can be found + + Returns: + list: dir_path (str): path to sequence, collection object + + """ + if "%" not in path: + return None + file_name = os.path.basename(path) + dir_path = os.path.dirname(path) + head = file_name.split("%")[0] + tail = os.path.splitext(file_name)[-1] + first, last = otio_range_to_frame_range(otio_range) + collection = clique.Collection( + head=head, tail=tail, padding=metadata["padding"]) + collection.indexes.update([i for i in range(first, last)]) + return dir_path, collection + + +def _sequence_resize(source, length): + step = float(len(source) - 1) / (length - 1) + for i in range(length): + low, ratio = divmod(i * step, 1) + high = low + 1 if ratio > 0 else low + yield (1 - ratio) * source[int(low)] + ratio * source[int(high)] + + +def get_media_range_with_retimes(otio_clip, handle_start, handle_end): + source_range = otio_clip.source_range + available_range = otio_clip.available_range() + media_in = available_range.start_time.value + media_out = available_range.end_time_inclusive().value + + # modifiers + time_scalar = 1. + offset_in = 0 + offset_out = 0 + time_warp_nodes = [] + + # Check for speed effects and adjust playback speed accordingly + for effect in otio_clip.effects: + if isinstance(effect, otio.schema.LinearTimeWarp): + time_scalar = effect.time_scalar + + elif isinstance(effect, otio.schema.FreezeFrame): + # For freeze frame, playback speed must be set after range + time_scalar = 0. 
+
+        elif isinstance(effect, otio.schema.TimeEffect):
+            # Handle other time effects, namely TimeWarp
+            name = effect.name
+            effect_name = effect.effect_name
+            if "TimeWarp" not in effect_name:
+                continue
+            metadata = effect.metadata
+            lookup = metadata.get("lookup")
+            if not lookup:
+                continue
+
+            # time warp node
+            tw_node = {
+                "Class": "TimeWarp",
+                "name": name
+            }
+            tw_node.update(metadata)
+            tw_node["lookup"] = list(lookup)
+
+            # get first and last frame offsets
+            offset_in += lookup[0]
+            offset_out += lookup[-1]
+
+            # add to timewarp nodes
+            time_warp_nodes.append(tw_node)
+
+    # multiply by time scalar
+    offset_in *= time_scalar
+    offset_out *= time_scalar
+
+    # flip offsets if speed is reversed
+    if time_scalar < 0:
+        _offset_in = offset_out
+        _offset_out = offset_in
+        offset_in = _offset_in
+        offset_out = _offset_out
+
+    # scale handles
+    handle_start *= abs(time_scalar)
+    handle_end *= abs(time_scalar)
+
+    # flip handles if speed is reversed
+    if time_scalar < 0:
+        _handle_start = handle_end
+        _handle_end = handle_start
+        handle_start = _handle_start
+        handle_end = _handle_end
+
+    source_in = source_range.start_time.value
+
+    media_in_trimmed = (
+        media_in + source_in + offset_in)
+    media_out_trimmed = (
+        media_in + source_in + (
+            ((source_range.duration.value - 1) * abs(
+                time_scalar)) + offset_out))
+
+    # calculate available handles
+    if (media_in_trimmed - media_in) < handle_start:
+        handle_start = (media_in_trimmed - media_in)
+    if (media_out - media_out_trimmed) < handle_end:
+        handle_end = (media_out - media_out_trimmed)
+
+    # create version data
+    version_data = {
+        "versionData": {
+            "retime": True,
+            "speed": time_scalar,
+            "timewarps": time_warp_nodes,
+            "handleStart": round(handle_start),
+            "handleEnd": round(handle_end)
+        }
+    }
+
+    returning_dict = {
+        "mediaIn": media_in_trimmed,
+        "mediaOut": media_out_trimmed,
+        "handleStart": round(handle_start),
+        "handleEnd": round(handle_end)
+    }
+
+    # add version data only if retimed
+    if time_warp_nodes or time_scalar != 1.:
+        returning_dict.update(version_data)
+
+    return returning_dict
diff --git a/openpype/pipeline/legacy_io.py b/openpype/pipeline/legacy_io.py
index 9359e3057b0..bde2b24c2a3 100644
--- a/openpype/pipeline/legacy_io.py
+++ b/openpype/pipeline/legacy_io.py
@@ -18,9 +18,13 @@
 log = logging.getLogger(__name__)
 
 
+def is_installed():
+    return module._is_installed
+
+
 def install():
     """Establish a persistent connection to the database"""
-    if module._is_installed:
+    if is_installed():
         return
 
     session = session_data_from_environment(context_keys=True)
@@ -55,7 +59,7 @@ def uninstall():
 def requires_install(func):
     @functools.wraps(func)
     def decorated(*args, **kwargs):
-        if not module._is_installed:
+        if not is_installed():
             install()
         return func(*args, **kwargs)
     return decorated
diff --git a/openpype/pipeline/load/utils.py b/openpype/pipeline/load/utils.py
index 99e5d11f82f..eca581ba374 100644
--- a/openpype/pipeline/load/utils.py
+++ b/openpype/pipeline/load/utils.py
@@ -6,13 +6,24 @@
 import inspect
 import numbers
 
-import six
-from bson.objectid import ObjectId
-
-from openpype.lib import Anatomy
+from openpype.client import (
+    get_project,
+    get_assets,
+    get_subsets,
+    get_versions,
+    get_version_by_id,
+    get_last_version_by_subset_id,
+    get_hero_version_by_subset_id,
+    get_version_by_name,
+    get_representations,
+    get_representation_by_id,
+    get_representation_by_name,
+    get_representation_parents
+)
 from openpype.pipeline import (
     schema,
     legacy_io,
+    Anatomy,
 )
 
 log = logging.getLogger(__name__)
@@ 
-52,13 +63,10 @@ def get_repres_contexts(representation_ids, dbcon=None): Returns: dict: The full representation context by representation id. - keys are repre_id, value is dictionary with full: - asset_doc - version_doc - subset_doc - repre_doc - + keys are repre_id, value is dictionary with full documents of + asset, subset, version and representation. """ + if not dbcon: dbcon = legacy_io @@ -66,26 +74,18 @@ def get_repres_contexts(representation_ids, dbcon=None): if not representation_ids: return contexts - _representation_ids = [] - for repre_id in representation_ids: - if isinstance(repre_id, six.string_types): - repre_id = ObjectId(repre_id) - _representation_ids.append(repre_id) + project_name = dbcon.active_project() + repre_docs = get_representations(project_name, representation_ids) - repre_docs = dbcon.find({ - "type": "representation", - "_id": {"$in": _representation_ids} - }) repre_docs_by_id = {} version_ids = set() for repre_doc in repre_docs: version_ids.add(repre_doc["parent"]) repre_docs_by_id[repre_doc["_id"]] = repre_doc - version_docs = dbcon.find({ - "type": {"$in": ["version", "hero_version"]}, - "_id": {"$in": list(version_ids)} - }) + version_docs = get_versions( + project_name, version_ids, hero=True + ) version_docs_by_id = {} hero_version_docs = [] @@ -99,10 +99,7 @@ def get_repres_contexts(representation_ids, dbcon=None): subset_ids.add(version_doc["parent"]) if versions_for_hero: - _version_docs = dbcon.find({ - "type": "version", - "_id": {"$in": list(versions_for_hero)} - }) + _version_docs = get_versions(project_name, versions_for_hero) _version_data_by_id = { version_doc["_id"]: version_doc["data"] for version_doc in _version_docs @@ -114,26 +111,20 @@ def get_repres_contexts(representation_ids, dbcon=None): version_data = copy.deepcopy(_version_data_by_id[version_id]) version_docs_by_id[hero_version_id]["data"] = version_data - subset_docs = dbcon.find({ - "type": "subset", - "_id": {"$in": list(subset_ids)} - }) + subset_docs = get_subsets(project_name, subset_ids) subset_docs_by_id = {} asset_ids = set() for subset_doc in subset_docs: subset_docs_by_id[subset_doc["_id"]] = subset_doc asset_ids.add(subset_doc["parent"]) - asset_docs = dbcon.find({ - "type": "asset", - "_id": {"$in": list(asset_ids)} - }) + asset_docs = get_assets(project_name, asset_ids) asset_docs_by_id = { asset_doc["_id"]: asset_doc for asset_doc in asset_docs } - project_doc = dbcon.find_one({"type": "project"}) + project_doc = get_project(project_name) for repre_id, repre_doc in repre_docs_by_id.items(): version_doc = version_docs_by_id[repre_doc["parent"]] @@ -173,32 +164,21 @@ def get_subset_contexts(subset_ids, dbcon=None): if not subset_ids: return contexts - _subset_ids = set() - for subset_id in subset_ids: - if isinstance(subset_id, six.string_types): - subset_id = ObjectId(subset_id) - _subset_ids.add(subset_id) - - subset_docs = dbcon.find({ - "type": "subset", - "_id": {"$in": list(_subset_ids)} - }) + project_name = dbcon.active_project() + subset_docs = get_subsets(project_name, subset_ids) subset_docs_by_id = {} asset_ids = set() for subset_doc in subset_docs: subset_docs_by_id[subset_doc["_id"]] = subset_doc asset_ids.add(subset_doc["parent"]) - asset_docs = dbcon.find({ - "type": "asset", - "_id": {"$in": list(asset_ids)} - }) + asset_docs = get_assets(project_name, asset_ids) asset_docs_by_id = { asset_doc["_id"]: asset_doc for asset_doc in asset_docs } - project_doc = dbcon.find_one({"type": "project"}) + project_doc = get_project(project_name) for subset_id, 
subset_doc in subset_docs_by_id.items(): asset_doc = asset_docs_by_id[subset_doc["parent"]] @@ -224,16 +204,17 @@ def get_representation_context(representation): Returns: dict: The full representation context. - """ assert representation is not None, "This is a bug" - if isinstance(representation, (six.string_types, ObjectId)): - representation = legacy_io.find_one( - {"_id": ObjectId(str(representation))}) + if not isinstance(representation, dict): + representation = get_representation_by_id(representation) - version, subset, asset, project = legacy_io.parenthood(representation) + project_name = legacy_io.active_project() + version, subset, asset, project = get_representation_parents( + project_name, representation + ) assert all([representation, version, subset, asset, project]), ( "This is a bug" @@ -405,42 +386,36 @@ def update_container(container, version=-1): """Update a container""" # Compute the different version from 'representation' - current_representation = legacy_io.find_one({ - "_id": ObjectId(container["representation"]) - }) + project_name = legacy_io.active_project() + current_representation = get_representation_by_id( + project_name, container["representation"] + ) assert current_representation is not None, "This is a bug" - current_version, subset, asset, project = legacy_io.parenthood( - current_representation) - + current_version = get_version_by_id( + project_name, current_representation["_id"], fields=["parent"] + ) if version == -1: - new_version = legacy_io.find_one({ - "type": "version", - "parent": subset["_id"] - }, sort=[("name", -1)]) + new_version = get_last_version_by_subset_id( + project_name, current_version["parent"], fields=["_id"] + ) + + elif isinstance(version, HeroVersionType): + new_version = get_hero_version_by_subset_id( + project_name, current_version["parent"], fields=["_id"] + ) + else: - if isinstance(version, HeroVersionType): - version_query = { - "parent": subset["_id"], - "type": "hero_version" - } - else: - version_query = { - "parent": subset["_id"], - "type": "version", - "name": version - } - new_version = legacy_io.find_one(version_query) + new_version = get_version_by_name( + project_name, version, current_version["parent"], fields=["_id"] + ) assert new_version is not None, "This is a bug" - new_representation = legacy_io.find_one({ - "type": "representation", - "parent": new_version["_id"], - "name": current_representation["name"] - }) - + new_representation = get_representation_by_name( + project_name, current_representation["name"], new_version["_id"] + ) assert new_representation is not None, "Representation wasn't found" path = get_representation_path(new_representation) @@ -482,10 +457,10 @@ def switch_container(container, representation, loader_plugin=None): )) # Get the new representation to switch to - new_representation = legacy_io.find_one({ - "type": "representation", - "_id": representation["_id"], - }) + project_name = legacy_io.active_project() + new_representation = get_representation_by_id( + project_name, representation["_id"] + ) new_context = get_representation_context(new_representation) if not is_compatible_loader(loader_plugin, new_context): diff --git a/openpype/plugins/load/delete_old_versions.py b/openpype/plugins/load/delete_old_versions.py index c3e9e9fa0a3..7465f538555 100644 --- a/openpype/plugins/load/delete_old_versions.py +++ b/openpype/plugins/load/delete_old_versions.py @@ -9,9 +9,8 @@ from Qt import QtWidgets, QtCore from openpype import style -from openpype.pipeline import load, AvalonMongoDB 
+from openpype.pipeline import load, AvalonMongoDB, Anatomy from openpype.lib import StringTemplate -from openpype.api import Anatomy class DeleteOldVersions(load.SubsetLoaderPlugin): diff --git a/openpype/plugins/load/delivery.py b/openpype/plugins/load/delivery.py index 7df07e3f640..0361ab2be5b 100644 --- a/openpype/plugins/load/delivery.py +++ b/openpype/plugins/load/delivery.py @@ -3,8 +3,8 @@ from Qt import QtWidgets, QtCore, QtGui -from openpype.pipeline import load, AvalonMongoDB -from openpype.api import Anatomy, config +from openpype.lib import config +from openpype.pipeline import load, AvalonMongoDB, Anatomy from openpype import resources, style from openpype.lib.delivery import ( diff --git a/openpype/plugins/publish/collect_anatomy_object.py b/openpype/plugins/publish/collect_anatomy_object.py index 2c87918728d..b1415098b61 100644 --- a/openpype/plugins/publish/collect_anatomy_object.py +++ b/openpype/plugins/publish/collect_anatomy_object.py @@ -4,11 +4,11 @@ os.environ -> AVALON_PROJECT Provides: - context -> anatomy (pype.api.Anatomy) + context -> anatomy (openpype.pipeline.anatomy.Anatomy) """ import os -from openpype.api import Anatomy import pyblish.api +from openpype.pipeline import Anatomy class CollectAnatomyObject(pyblish.api.ContextPlugin): diff --git a/openpype/plugins/publish/collect_otio_frame_ranges.py b/openpype/plugins/publish/collect_otio_frame_ranges.py index 8eaf9d6f293..c86e7778500 100644 --- a/openpype/plugins/publish/collect_otio_frame_ranges.py +++ b/openpype/plugins/publish/collect_otio_frame_ranges.py @@ -8,8 +8,11 @@ # import os import opentimelineio as otio import pyblish.api -import openpype.lib from pprint import pformat +from openpype.pipeline.editorial import ( + otio_range_to_frame_range, + otio_range_with_handles +) class CollectOtioFrameRanges(pyblish.api.InstancePlugin): @@ -31,9 +34,9 @@ def process(self, instance): otio_tl_range = otio_clip.range_in_parent() otio_src_range = otio_clip.source_range otio_avalable_range = otio_clip.available_range() - otio_tl_range_handles = openpype.lib.otio_range_with_handles( + otio_tl_range_handles = otio_range_with_handles( otio_tl_range, instance) - otio_src_range_handles = openpype.lib.otio_range_with_handles( + otio_src_range_handles = otio_range_with_handles( otio_src_range, instance) # get source avalable start frame @@ -42,7 +45,7 @@ def process(self, instance): otio_avalable_range.start_time.rate) # convert to frames - range_convert = openpype.lib.otio_range_to_frame_range + range_convert = otio_range_to_frame_range tl_start, tl_end = range_convert(otio_tl_range) tl_start_h, tl_end_h = range_convert(otio_tl_range_handles) src_start, src_end = range_convert(otio_src_range) diff --git a/openpype/plugins/publish/collect_otio_subset_resources.py b/openpype/plugins/publish/collect_otio_subset_resources.py index 40d4f35bdc7..fc6a9b50f27 100644 --- a/openpype/plugins/publish/collect_otio_subset_resources.py +++ b/openpype/plugins/publish/collect_otio_subset_resources.py @@ -10,7 +10,11 @@ import clique import opentimelineio as otio import pyblish.api -import openpype.lib as oplib +from openpype.pipeline.editorial import ( + get_media_range_with_retimes, + range_from_frames, + make_sequence_collection +) class CollectOtioSubsetResources(pyblish.api.InstancePlugin): @@ -42,7 +46,7 @@ def process(self, instance): available_duration = otio_avalable_range.duration.value # get available range trimmed with processed retimes - retimed_attributes = oplib.get_media_range_with_retimes( + retimed_attributes = 
get_media_range_with_retimes( otio_clip, handle_start, handle_end) self.log.debug( ">> retimed_attributes: {}".format(retimed_attributes)) @@ -64,7 +68,7 @@ def process(self, instance): a_frame_end_h = media_out + handle_end # create trimmed otio time range - trimmed_media_range_h = oplib.range_from_frames( + trimmed_media_range_h = range_from_frames( a_frame_start_h, (a_frame_end_h - a_frame_start_h) + 1, media_fps ) @@ -144,7 +148,7 @@ def process(self, instance): # in case it is file sequence but not new OTIO schema # `ImageSequenceReference` path = media_ref.target_url - collection_data = oplib.make_sequence_collection( + collection_data = make_sequence_collection( path, trimmed_media_range_h, metadata) self.staging_dir, collection = collection_data diff --git a/openpype/plugins/publish/extract_jpeg_exr.py b/openpype/plugins/publish/extract_jpeg_exr.py index a467728e770..42c4cbe0620 100644 --- a/openpype/plugins/publish/extract_jpeg_exr.py +++ b/openpype/plugins/publish/extract_jpeg_exr.py @@ -19,7 +19,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): label = "Extract Thumbnail" order = pyblish.api.ExtractorOrder families = [ - "imagesequence", "render", "render2d", + "imagesequence", "render", "render2d", "prerender", "source", "plate", "take" ] hosts = ["shell", "fusion", "resolve"] diff --git a/openpype/plugins/publish/extract_otio_review.py b/openpype/plugins/publish/extract_otio_review.py index 35adc974429..2ce53234682 100644 --- a/openpype/plugins/publish/extract_otio_review.py +++ b/openpype/plugins/publish/extract_otio_review.py @@ -19,6 +19,13 @@ import opentimelineio as otio from pyblish import api import openpype +from openpype.pipeline.editorial import ( + otio_range_to_frame_range, + trim_media_range, + range_from_frames, + frames_to_seconds, + make_sequence_collection +) class ExtractOTIOReview(openpype.api.Extractor): @@ -161,7 +168,7 @@ def process(self, instance): dirname = media_ref.target_url_base head = media_ref.name_prefix tail = media_ref.name_suffix - first, last = openpype.lib.otio_range_to_frame_range( + first, last = otio_range_to_frame_range( available_range) collection = clique.Collection( head=head, @@ -180,7 +187,7 @@ def process(self, instance): # in case it is file sequence but not new OTIO schema # `ImageSequenceReference` path = media_ref.target_url - collection_data = openpype.lib.make_sequence_collection( + collection_data = make_sequence_collection( path, available_range, metadata) dir_path, collection = collection_data @@ -305,8 +312,8 @@ def _trim_available_range(self, avl_range, start, duration, fps): duration = avl_durtation # return correct trimmed range - return openpype.lib.trim_media_range( - avl_range, openpype.lib.range_from_frames(start, duration, fps) + return trim_media_range( + avl_range, range_from_frames(start, duration, fps) ) def _render_seqment(self, sequence=None, @@ -357,8 +364,8 @@ def _render_seqment(self, sequence=None, frame_start = otio_range.start_time.value input_fps = otio_range.start_time.rate frame_duration = otio_range.duration.value - sec_start = openpype.lib.frames_to_secons(frame_start, input_fps) - sec_duration = openpype.lib.frames_to_secons( + sec_start = frames_to_seconds(frame_start, input_fps) + sec_duration = frames_to_seconds( frame_duration, input_fps ) @@ -370,8 +377,7 @@ def _render_seqment(self, sequence=None, ]) elif gap: - sec_duration = openpype.lib.frames_to_secons( - gap, self.actual_fps) + sec_duration = frames_to_seconds(gap, self.actual_fps) # form command for rendering gap files 
command.extend([ diff --git a/openpype/plugins/publish/extract_otio_trimming_video.py b/openpype/plugins/publish/extract_otio_trimming_video.py index e8e2994f36a..19625fa5685 100644 --- a/openpype/plugins/publish/extract_otio_trimming_video.py +++ b/openpype/plugins/publish/extract_otio_trimming_video.py @@ -9,6 +9,7 @@ from pyblish import api import openpype from copy import deepcopy +from openpype.pipeline.editorial import frames_to_seconds class ExtractOTIOTrimmingVideo(openpype.api.Extractor): @@ -81,8 +82,8 @@ def _ffmpeg_trim_seqment(self, input_file_path, otio_range): frame_start = otio_range.start_time.value input_fps = otio_range.start_time.rate frame_duration = otio_range.duration.value - 1 - sec_start = openpype.lib.frames_to_secons(frame_start, input_fps) - sec_duration = openpype.lib.frames_to_secons(frame_duration, input_fps) + sec_start = frames_to_seconds(frame_start, input_fps) + sec_duration = frames_to_seconds(frame_duration, input_fps) # form command for rendering gap files command.extend([ diff --git a/openpype/settings/defaults/project_settings/harmony.json b/openpype/settings/defaults/project_settings/harmony.json index 1508b02e1b7..d843bc8e709 100644 --- a/openpype/settings/defaults/project_settings/harmony.json +++ b/openpype/settings/defaults/project_settings/harmony.json @@ -1,4 +1,20 @@ { + "load": { + "ImageSequenceLoader": { + "family": [ + "shot", + "render", + "image", + "plate", + "reference" + ], + "representations": [ + "jpeg", + "png", + "jpg" + ] + } + }, "publish": { "CollectPalettes": { "allowed_tasks": [ diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index cdd3a62d004..9bebd92cb9b 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -204,7 +204,8 @@ "ValidateFrameRange": { "enabled": true, "optional": true, - "active": true + "active": true, + "exclude_families": ["model", "rig", "staticMesh"] }, "ValidateShaderName": { "enabled": false, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json b/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json index c049ce3084c..311f742f812 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json @@ -5,6 +5,34 @@ "label": "Harmony", "is_file": true, "children": [ + { + "type": "dict", + "collapsible": true, + "key": "load", + "label": "Loader plugins", + "children": [ + { + "type": "dict", + "collapsible": true, + "key": "ImageSequenceLoader", + "label": "Load Image Sequence", + "children": [ + { + "type": "list", + "key": "family", + "label": "Families", + "object_type": "text" + }, + { + "type": "list", + "key": "representations", + "label": "Representations", + "object_type": "text" + } + ] + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 41b681d893f..84182973a1a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -62,13 +62,36 @@ } ] }, - { - "type": "schema_template", - "name": "template_publish_plugin", - "template_data": [ + { + "type": "dict", + 
"collapsible": true, + "key": "ValidateFrameRange", + "label": "Validate Frame Range", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "splitter" + }, { - "key": "ValidateFrameRange", - "label": "Validate Frame Range" + "key": "exclude_families", + "label": "Families", + "type": "list", + "object_type": "text" } ] }, diff --git a/openpype/style/style.css b/openpype/style/style.css index d76d833be14..72d12a92309 100644 --- a/openpype/style/style.css +++ b/openpype/style/style.css @@ -1418,3 +1418,6 @@ InViewButton, InViewButton:disabled { InViewButton:hover { background: rgba(255, 255, 255, 37); } +SupportLabel { + color: {color:font-disabled}; +} diff --git a/openpype/tools/loader/widgets.py b/openpype/tools/loader/widgets.py index 1f6d8b9fa22..13e18b37579 100644 --- a/openpype/tools/loader/widgets.py +++ b/openpype/tools/loader/widgets.py @@ -17,8 +17,7 @@ get_thumbnail_id_from_source, get_thumbnail, ) -from openpype.api import Anatomy -from openpype.pipeline import HeroVersionType +from openpype.pipeline import HeroVersionType, Anatomy from openpype.pipeline.thumbnail import get_thumbnail_binary from openpype.pipeline.load import ( discover_loader_plugins, diff --git a/openpype/tools/publisher/widgets/create_dialog.py b/openpype/tools/publisher/widgets/create_dialog.py index 53bbef8b755..3a68835dc73 100644 --- a/openpype/tools/publisher/widgets/create_dialog.py +++ b/openpype/tools/publisher/widgets/create_dialog.py @@ -977,7 +977,12 @@ def _set_creator(self, creator): elif variant: self.variant_hints_menu.addAction(variant) - self.variant_input.setText(default_variant or "Main") + variant_text = default_variant or "Main" + # Make sure subset name is updated to new plugin + if variant_text == self.variant_input.text(): + self._on_variant_change() + else: + self.variant_input.setText(variant_text) def _on_variant_widget_resize(self): self.variant_hints_btn.setFixedHeight(self.variant_input.height()) diff --git a/openpype/tools/sceneinventory/model.py b/openpype/tools/sceneinventory/model.py index 0bb9c4a6581..63fbe04c5cf 100644 --- a/openpype/tools/sceneinventory/model.py +++ b/openpype/tools/sceneinventory/model.py @@ -6,6 +6,7 @@ from Qt import QtCore, QtGui import qtawesome +from openpype.host import ILoadHost from openpype.client import ( get_asset_by_id, get_subset_by_id, @@ -193,7 +194,10 @@ def refresh(self, selected=None, items=None): host = registered_host() if not items: # for debugging or testing, injecting items from outside - items = host.ls() + if isinstance(host, ILoadHost): + items = host.get_containers() + else: + items = host.ls() self.clear() diff --git a/openpype/tools/texture_copy/app.py b/openpype/tools/texture_copy/app.py index 746a72b3ecc..a695bb8c4d5 100644 --- a/openpype/tools/texture_copy/app.py +++ b/openpype/tools/texture_copy/app.py @@ -6,8 +6,7 @@ from openpype.client import get_project, get_asset_by_name from openpype.lib import Terminal -from openpype.api import Anatomy -from openpype.pipeline import legacy_io +from openpype.pipeline import legacy_io, Anatomy t = Terminal() diff --git a/openpype/tools/utils/host_tools.py b/openpype/tools/utils/host_tools.py index 9dbbe25fdac..ae23e4d089c 100644 --- a/openpype/tools/utils/host_tools.py +++ b/openpype/tools/utils/host_tools.py @@ -6,7 +6,7 @@ import os import pyblish.api - +from 
openpype.host import IWorkfileHost, ILoadHost from openpype.pipeline import ( registered_host, legacy_io, @@ -49,12 +49,11 @@ def log(self): def get_workfiles_tool(self, parent): """Create, cache and return workfiles tool window.""" if self._workfiles_tool is None: - from openpype.tools.workfiles.app import ( - Window, validate_host_requirements - ) + from openpype.tools.workfiles.app import Window + # Host validation host = registered_host() - validate_host_requirements(host) + IWorkfileHost.validate_workfile_methods(host) workfiles_window = Window(parent=parent) self._workfiles_tool = workfiles_window @@ -92,6 +91,9 @@ def get_loader_tool(self, parent): if self._loader_tool is None: from openpype.tools.loader import LoaderWindow + host = registered_host() + ILoadHost.validate_load_methods(host) + loader_window = LoaderWindow(parent=parent or self._parent) self._loader_tool = loader_window @@ -164,6 +166,9 @@ def get_scene_inventory_tool(self, parent): if self._scene_inventory_tool is None: from openpype.tools.sceneinventory import SceneInventoryWindow + host = registered_host() + ILoadHost.validate_load_methods(host) + scene_inventory_window = SceneInventoryWindow( parent=parent or self._parent ) diff --git a/openpype/tools/workfiles/__init__.py b/openpype/tools/workfiles/__init__.py index 5fbc71797dc..205fd448384 100644 --- a/openpype/tools/workfiles/__init__.py +++ b/openpype/tools/workfiles/__init__.py @@ -1,12 +1,10 @@ from .window import Window from .app import ( show, - validate_host_requirements, ) __all__ = [ "Window", "show", - "validate_host_requirements", ] diff --git a/openpype/tools/workfiles/app.py b/openpype/tools/workfiles/app.py index 352847ede81..2d0d551faf3 100644 --- a/openpype/tools/workfiles/app.py +++ b/openpype/tools/workfiles/app.py @@ -1,6 +1,7 @@ import sys import logging +from openpype.host import IWorkfileHost from openpype.pipeline import ( registered_host, legacy_io, @@ -14,31 +15,6 @@ module.window = None -def validate_host_requirements(host): - if host is None: - raise RuntimeError("No registered host.") - - # Verify the host has implemented the api for Work Files - required = [ - "open_file", - "save_file", - "current_file", - "has_unsaved_changes", - "work_root", - "file_extensions", - ] - missing = [] - for name in required: - if not hasattr(host, name): - missing.append(name) - if missing: - raise RuntimeError( - "Host is missing required Work Files interfaces: " - "%s (host: %s)" % (", ".join(missing), host) - ) - return True - - def show(root=None, debug=False, parent=None, use_context=True, save=True): """Show Work Files GUI""" # todo: remove `root` argument to show() @@ -50,7 +26,7 @@ def show(root=None, debug=False, parent=None, use_context=True, save=True): pass host = registered_host() - validate_host_requirements(host) + IWorkfileHost.validate_workfile_methods(host) if debug: legacy_io.Session["AVALON_ASSET"] = "Mock" diff --git a/openpype/tools/workfiles/files_widget.py b/openpype/tools/workfiles/files_widget.py index a7e54471dc3..34692b7102e 100644 --- a/openpype/tools/workfiles/files_widget.py +++ b/openpype/tools/workfiles/files_widget.py @@ -6,12 +6,12 @@ import Qt from Qt import QtWidgets, QtCore +from openpype.host import IWorkfileHost from openpype.client import get_asset_by_id from openpype.tools.utils import PlaceholderLineEdit from openpype.tools.utils.delegates import PrettyTimeDelegate from openpype.lib import ( emit_event, - Anatomy, get_workfile_template_key, create_workdir_extra_folders, ) @@ -22,6 +22,7 @@ from 
openpype.pipeline import ( registered_host, legacy_io, + Anatomy, ) from .model import ( WorkAreaFilesModel, @@ -125,7 +126,7 @@ def __init__(self, parent): filter_layout.addWidget(published_checkbox, 0) # Create the Files models - extensions = set(self.host.file_extensions()) + extensions = set(self._get_host_extensions()) views_widget = QtWidgets.QWidget(self) # --- Workarea view --- @@ -452,7 +453,12 @@ def _get_event_context_data(self): def open_file(self, filepath): host = self.host - if host.has_unsaved_changes(): + if isinstance(host, IWorkfileHost): + has_unsaved_changes = host.workfile_has_unsaved_changes() + else: + has_unsaved_changes = host.has_unsaved_changes() + + if has_unsaved_changes: result = self.save_changes_prompt() if result is None: # Cancel operation @@ -460,7 +466,10 @@ def open_file(self, filepath): # Save first if has changes if result: - current_file = host.current_file() + if isinstance(host, IWorkfileHost): + current_file = host.get_current_workfile() + else: + current_file = host.current_file() if not current_file: # If the user requested to save the current scene # we can't actually automatically do so if the current @@ -471,7 +480,10 @@ def open_file(self, filepath): return # Save current scene, continue to open file - host.save_file(current_file) + if isinstance(host, IWorkfileHost): + host.save_workfile(current_file) + else: + host.save_file(current_file) event_data_before = self._get_event_context_data() event_data_before["filepath"] = filepath @@ -482,7 +494,10 @@ def open_file(self, filepath): source="workfiles.tool" ) self._enter_session() - host.open_file(filepath) + if isinstance(host, IWorkfileHost): + host.open_workfile(filepath) + else: + host.open_file(filepath) emit_event( "workfile.open.after", event_data_after, @@ -524,7 +539,7 @@ def get_filename(self): filepath = self._get_selected_filepath() extensions = [os.path.splitext(filepath)[1]] else: - extensions = self.host.file_extensions() + extensions = self._get_host_extensions() window = SaveAsDialog( parent=self, @@ -572,9 +587,14 @@ def _on_workarea_open_pressed(self): self.open_file(path) + def _get_host_extensions(self): + if isinstance(self.host, IWorkfileHost): + return self.host.get_workfile_extensions() + return self.host.file_extensions() + def on_browse_pressed(self): ext_filter = "Work File (*{0})".format( - " *".join(self.host.file_extensions()) + " *".join(self._get_host_extensions()) ) kwargs = { "caption": "Work Files", @@ -632,10 +652,16 @@ def _save_as_with_dialog(self): self._enter_session() if not self.published_enabled: - self.host.save_file(filepath) + if isinstance(self.host, IWorkfileHost): + self.host.save_workfile(filepath) + else: + self.host.save_file(filepath) else: shutil.copy(src_path, filepath) - self.host.open_file(filepath) + if isinstance(self.host, IWorkfileHost): + self.host.open_workfile(filepath) + else: + self.host.open_file(filepath) # Create extra folders create_workdir_extra_folders( diff --git a/openpype/version.py b/openpype/version.py index 02f928d83c8..92cdcf9fddb 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.12.0-nightly.2" +__version__ = "3.12.1-nightly.2" diff --git a/openpype/widgets/attribute_defs/files_widget.py b/openpype/widgets/attribute_defs/files_widget.py index 23cf8342b16..698a91a1a57 100644 --- a/openpype/widgets/attribute_defs/files_widget.py +++ b/openpype/widgets/attribute_defs/files_widget.py @@ -26,26 +26,110 @@ EXT_ROLE 
= QtCore.Qt.UserRole + 8 +class SupportLabel(QtWidgets.QLabel): + pass + + class DropEmpty(QtWidgets.QWidget): - _drop_enabled_text = "Drag & Drop\n(drop files here)" + _empty_extensions = "Any file" - def __init__(self, parent): + def __init__(self, single_item, allow_sequences, parent): super(DropEmpty, self).__init__(parent) - label_widget = QtWidgets.QLabel(self._drop_enabled_text, self) - label_widget.setAlignment(QtCore.Qt.AlignCenter) - label_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + drop_label_widget = QtWidgets.QLabel("Drag & Drop files here", self) - layout = QtWidgets.QHBoxLayout(self) + items_label_widget = SupportLabel(self) + items_label_widget.setWordWrap(True) + + layout = QtWidgets.QVBoxLayout(self) layout.setContentsMargins(0, 0, 0, 0) - layout.addSpacing(10) + layout.addSpacing(20) + layout.addWidget( + drop_label_widget, 0, alignment=QtCore.Qt.AlignCenter + ) + layout.addSpacing(30) + layout.addStretch(1) layout.addWidget( - label_widget, - alignment=QtCore.Qt.AlignCenter + items_label_widget, 0, alignment=QtCore.Qt.AlignCenter ) layout.addSpacing(10) - self._label_widget = label_widget + for widget in ( + drop_label_widget, + items_label_widget, + ): + widget.setAlignment(QtCore.Qt.AlignCenter) + widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + self._single_item = single_item + self._allow_sequences = allow_sequences + self._allowed_extensions = set() + self._allow_folders = None + + self._drop_label_widget = drop_label_widget + self._items_label_widget = items_label_widget + + self.set_allow_folders(False) + + def set_extensions(self, extensions): + if extensions: + extensions = { + ext.replace(".", "") + for ext in extensions + } + if extensions == self._allowed_extensions: + return + self._allowed_extensions = extensions + + self._update_items_label() + + def set_allow_folders(self, allowed): + if self._allow_folders == allowed: + return + + self._allow_folders = allowed + self._update_items_label() + + def _update_items_label(self): + allowed_items = [] + if self._allow_folders: + allowed_items.append("folder") + + if self._allowed_extensions: + allowed_items.append("file") + if self._allow_sequences: + allowed_items.append("sequence") + + if not self._single_item: + allowed_items = [item + "s" for item in allowed_items] + + if not allowed_items: + self._items_label_widget.setText( + "It is not allowed to add anything here!" 
+            )
+            return
+
+        items_label = "Multiple "
+        if self._single_item:
+            items_label = "Single "
+
+        if len(allowed_items) == 1:
+            allowed_items_label = allowed_items[0]
+        elif len(allowed_items) == 2:
+            allowed_items_label = " or ".join(allowed_items)
+        else:
+            last_item = allowed_items.pop(-1)
+            new_last_item = " or ".join([allowed_items.pop(-1), last_item])
+            allowed_items.append(new_last_item)
+            allowed_items_label = ", ".join(allowed_items)
+
+        items_label += allowed_items_label
+        if self._allowed_extensions:
+            items_label += " of\n{}".format(
+                ", ".join(sorted(self._allowed_extensions))
+            )
+
+        self._items_label_widget.setText(items_label)

     def paintEvent(self, event):
         super(DropEmpty, self).paintEvent(event)
@@ -188,7 +272,12 @@ def set_allow_folders(self, allow=None):

     def set_allowed_extensions(self, extensions=None):
         if extensions is not None:
-            extensions = set(extensions)
+            _extensions = set()
+            for ext in set(extensions):
+                if not ext.startswith("."):
+                    ext = ".{}".format(ext)
+                _extensions.add(ext.lower())
+            extensions = _extensions

         if self._allowed_extensions != extensions:
             self._allowed_extensions = extensions
@@ -444,7 +533,7 @@ def __init__(self, single_item, allow_sequences, parent):
         super(FilesWidget, self).__init__(parent)
         self.setAcceptDrops(True)

-        empty_widget = DropEmpty(self)
+        empty_widget = DropEmpty(single_item, allow_sequences, self)

         files_model = FilesModel(single_item, allow_sequences)
         files_proxy_model = FilesProxyModel()
@@ -519,6 +608,8 @@ def current_value(self):
     def set_filters(self, folders_allowed, exts_filter):
         self._files_proxy_model.set_allow_folders(folders_allowed)
         self._files_proxy_model.set_allowed_extensions(exts_filter)
+        self._empty_widget.set_extensions(exts_filter)
+        self._empty_widget.set_allow_folders(folders_allowed)

     def _on_rows_inserted(self, parent_index, start_row, end_row):
         for row in range(start_row, end_row + 1):
diff --git a/poetry.lock b/poetry.lock
index 47509f334e0..7221e191ff2 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -46,6 +46,19 @@ python-versions = ">=3.5"

 [package.dependencies]
 aiohttp = ">=3,<4"

+[[package]]
+name = "aiohttp-middlewares"
+version = "2.0.0"
+description = "Collection of useful middlewares for aiohttp applications."
+category = "main"
+optional = false
+python-versions = ">=3.7,<4.0"
+
+[package.dependencies]
+aiohttp = ">=3.8.1,<4.0.0"
+async-timeout = ">=4.0.2,<5.0.0"
+yarl = ">=1.5.1,<2.0.0"
+
 [[package]]
 name = "aiosignal"
 version = "1.2.0"
@@ -1783,6 +1796,10 @@ aiohttp-json-rpc = [
     {file = "aiohttp-json-rpc-0.13.3.tar.gz", hash = "sha256:6237a104478c22c6ef96c7227a01d6832597b414e4b79a52d85593356a169e99"},
     {file = "aiohttp_json_rpc_0.13.3-py3-none-any.whl", hash = "sha256:4fbd197aced61bd2df7ae3237ead7d3e08833c2ccf48b8581e1828c95ebee680"},
 ]
+aiohttp-middlewares = [
+    {file = "aiohttp-middlewares-2.0.0.tar.gz", hash = "sha256:e08ba04dc0e8fe379aa5e9444a68485c275677ee1e18c55cbb855de0c3629502"},
+    {file = "aiohttp_middlewares-2.0.0-py3-none-any.whl", hash = "sha256:29cf1513176b4013844711975ff520e26a8a5d8f9fefbbddb5e91224a86b043e"},
+]
 aiosignal = [
     {file = "aiosignal-1.2.0-py3-none-any.whl", hash = "sha256:26e62109036cd181df6e6ad646f91f0dcfd05fe16d0cb924138ff2ab75d64e3a"},
     {file = "aiosignal-1.2.0.tar.gz", hash = "sha256:78ed67db6c7b7ced4f98e495e572106d5c432a93e1ddd1bf475e1dc05f5b7df2"},
diff --git a/pyproject.toml b/pyproject.toml
index a159559763d..401b24243bf 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "OpenPype"
-version = "3.12.0-nightly.2" # OpenPype
+version = "3.12.1-nightly.2" # OpenPype
 description = "Open VFX and Animation pipeline with support."
 authors = ["OpenPype Team "]
 license = "MIT License"
@@ -68,6 +68,7 @@ slack-sdk = "^3.6.0"
 requests = "^2.25.1"
 pysftp = "^0.2.9"
 dropbox = "^11.20.0"
+aiohttp-middlewares = "^2.0.0"

 [tool.poetry.dev-dependencies]
@@ -154,4 +155,4 @@ exclude = [
 ignore = ["website", "docs", ".git"]
 reportMissingImports = true
-reportMissingTypeStubs = false
\ No newline at end of file
+reportMissingTypeStubs = false
diff --git a/website/docs/admin_use.md b/website/docs/admin_use.md
index f84905c4861..1c4ae9e01c8 100644
--- a/website/docs/admin_use.md
+++ b/website/docs/admin_use.md
@@ -105,6 +105,11 @@ save it in secure way to your systems keyring - on Windows it is **Credential Ma
 This can be also set beforehand with environment variable `OPENPYPE_MONGO`. If set it takes precedence over the one set in keyring.

+:::tip Minimal permissions for DB user
+- `readWrite` role to `openpype` and `avalon` databases
+- `find` permission on `openpype`, `avalon` and `local`
+:::
+
 #### Check for OpenPype version path
 When connection to MongoDB is made, OpenPype will get various settings from there - one among them is directory location where OpenPype versions are stored. If this directory exists OpenPype tries to
diff --git a/website/docs/dev_host_implementation.md b/website/docs/dev_host_implementation.md
new file mode 100644
index 00000000000..3702483ad10
--- /dev/null
+++ b/website/docs/dev_host_implementation.md
@@ -0,0 +1,106 @@
+---
+id: dev_host_implementation
+title: Host implementation
+sidebar_label: Host implementation
+toc_max_heading_level: 4
+---
+
+A host is an integration of a DCC, but in most cases it also has logic that needs to be handled before the DCC is launched. Based on the abilities (or purpose) of the DCC, the integration can then support different pipeline workflows.
+
+## Pipeline workflows
+The workflows available in OpenPype are Workfiles, Load and Create-Publish. Each of them may require some functionality from the integration (e.g. calling the host API to achieve certain behavior). We'll go through them later; a short sketch of checking which workflows a host supports is shown below.
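+
+As a quick, hedged illustration (a sketch, not the canonical API), a tool can check which workflows a host supports using the `ILoadHost` and `IWorkfileHost` interfaces and the `validate_*` classmethods that appear elsewhere in this changeset:
+
+```python
+from openpype.host import ILoadHost, IWorkfileHost
+from openpype.pipeline import registered_host
+
+host = registered_host()
+
+# The load workflow (Loader and Scene Inventory tools) requires ILoadHost.
+if isinstance(host, ILoadHost):
+    ILoadHost.validate_load_methods(host)
+
+# The Workfiles workflow requires IWorkfileHost.
+if isinstance(host, IWorkfileHost):
+    IWorkfileHost.validate_workfile_methods(host)
+```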
+
+## How to implement and manage host
+At this moment there is no fully unified way a host should be implemented, but we're working on it. A host should have "public face" code that can be used outside of the DCC, and in-DCC integration code. The main reason is that in-DCC code can have specific dependencies on python modules that are not available outside of its process. Hosts are located in the `openpype/hosts/{host name}` folder. Current code (in many places) expects that the host name has an equivalent folder there, so each subfolder should be named after the host it represents.
+
+### Recommended folder structure
+```python
+openpype/hosts/{host name}
+│
+│ # Content of DCC integration - with in-DCC imports
+├─ api
+│   ├─ __init__.py
+│   └─ [DCC integration files]
+│
+│ # Plugins related to host - dynamically imported (can contain in-DCC imports)
+├─ plugins
+│   ├─ create
+│   │   └─ [create plugin files]
+│   ├─ load
+│   │   └─ [load plugin files]
+│   └─ publish
+│       └─ [publish plugin files]
+│
+│ # Launch hooks - used to modify how application is launched
+├─ hooks
+│   └─ [some pre/post launch hooks]
+│
+│ # Code initializing host integration in-DCC (DCC specific - example from Maya)
+├─ startup
+│   └─ userSetup.py
+│
+│ # Public interface
+├─ __init__.py
+└─ [other public code]
+```
+
+### Launch Hooks
+Launch hooks are not directly connected to host implementation, but they can be used to modify the launch of a process, which may be crucial for the implementation. Launch hooks are plugins called when a DCC is launched. They are processed in sequence before and after launch. Pre-launch hooks can change how the DCC process is launched, e.g. change subprocess flags, modify environments or modify launch arguments. If a pre-launch hook crashes, the application is not launched at all. Post-launch hooks are triggered after the subprocess is launched. They can be used to change statuses in your project tracker, start a timer, etc. Crashed post-launch hooks have no effect on the remaining post-launch hooks or on the launched process. Hooks can be filtered by platform, host and application, and their order is defined by an integer value. Hooks inside a host are loaded automatically (one reason why the folder name should match the host name) or can be defined in modules. Hook execution shares the same launch context, where data used across multiple hooks can be stored (please be very specific with stored keys, e.g. 'project' vs. 'project_name'). For more detailed information look into `openpype/lib/applications.py`.
+
+### Public interface
+The public face is at this moment related to launching of the DCC. Currently the only option is to modify environment variables before launch by implementing the function `add_implementation_envs` (it must be available in `openpype/hosts/{host name}/__init__.py`). The function is called after pre-launch hooks, as the last step before the subprocess launch, to be able to set environment variables crucial for proper integration. It is also a good place for functions that are used both in pre-launch hooks and in the in-DCC integration. Future plans are to be able to get workfile extensions from here. Right now workfile extensions are hardcoded in `openpype/pipeline/constants.py` under `HOST_WORKFILE_EXTENSIONS`; we would like to handle hosts as addons, similar to OpenPype modules, and to improve more things that are now hardcoded.
+
+### Integration
+We've prepared a base class `HostBase` in `openpype/host/host.py` to define minimum requirements and provide some default method implementations.
+The minimum requirement for a host is the `name` attribute; such a host would not be able to do much, but it is valid. To extend functionality we've prepared interfaces that help to identify what a host is capable of and whether certain tools can be used with it. For those cases we defined an interface for each workflow. The `IWorkfileHost` interface adds the requirement to implement workfile related methods, which makes the host usable in combination with the Workfiles tool. The `ILoadHost` interface adds the requirements to be able to load, update, switch or remove referenced representations, which adds support for the Loader and Scene Inventory tools. The `INewPublisher` interface is required to be able to use the host with the new OpenPype publish workflow. This is what must or can be implemented to allow certain functionality. `HostBase` will take on more responsibility that will be moved out of global variables in the future. This process won't happen at once, but will be gradual, to keep backwards compatibility for some time.
+
+#### Example
+```python
+from openpype.host import HostBase, IWorkfileHost, ILoadHost
+
+
+class MayaHost(HostBase, IWorkfileHost, ILoadHost):
+    def open_workfile(self, filepath):
+        ...
+
+    def save_workfile(self, filepath=None):
+        ...
+
+    def get_current_workfile(self):
+        ...
+    ...
+```
+
+### Install integration
+We have prepared a host class; now where and how do we initialize its object? This part is DCC specific. In DCCs like Maya, with embedded python and Qt, we take advantage of being able to initialize an object of the class directly in the DCC process on start; the same happens in Nuke, Hiero and Houdini. For DCCs like Photoshop or Harmony, an OpenPype (python) process is launched next to the DCC which handles host initialization and communication with the DCC process (e.g. using sockets). The created host object must be installed and registered to the global scope of OpenPype, which means that at this moment one process can handle only one host at a time.
+
+#### Install example (Maya startup file)
+```python
+from openpype.pipeline import install_host
+from openpype.hosts.maya.api import MayaHost
+
+
+host = MayaHost()
+install_host(host)
+```
+
+The function `install_host` takes care of installing global plugins and callbacks and registering the host. Host registration means that the object is kept in memory and is accessible using `get_registered_host()`.
+
+### Using UI tools
+Most functionality in DCCs is provided to artists through UI tools. We're trying to keep UIs consistent, so we use the same set of tools in each host; all or most of them are Qt based. There is a `HostToolsHelper` in `openpype/tools/utils/host_tools.py` which unifies showing of the default tools; they can be shown at almost any point. Some of them validate whether the host is capable of using them (Workfiles, Loader and Scene Inventory), which is related to [pipeline workflows](#pipeline-workflows). `HostToolsHelper` provides an API to show the tools, but the host integration must take care of giving artists the ability to show them. Most DCCs have an extendable menu bar where custom actions can be added, which is the preferred approach for exposing the tools (a hedged wiring sketch is appended after the diff below).
diff --git a/website/sidebars.js b/website/sidebars.js
index d4fec9fba2c..0e578bd0852 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -155,6 +155,7 @@ module.exports = {
           type: "category",
           label: "Hosts integrations",
          items: [
+            "dev_host_implementation",
            "dev_publishing"
          ]
        }
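As a usage illustration for the `HostToolsHelper` described in the new host implementation docs above, here is a minimal, hypothetical sketch. It assumes only the `get_*_tool` methods visible in the `host_tools.py` hunk, that `HostToolsHelper` can be created without arguments, and that the returned tool objects are Qt windows with a `show()` method; the menu and action names are made up, and the wiring itself is DCC specific:

```python
from openpype.tools.utils.host_tools import HostToolsHelper

# Keep one helper alive for the whole session so tool windows stay cached.
tools_helper = HostToolsHelper()


def build_openpype_menu(menu_bar):
    """Hypothetical wiring of OpenPype tools into a DCC's Qt menu bar."""
    menu = menu_bar.addMenu("OpenPype")

    workfiles_action = menu.addAction("Workfiles...")
    workfiles_action.triggered.connect(
        lambda: tools_helper.get_workfiles_tool(None).show()
    )

    loader_action = menu.addAction("Load...")
    loader_action.triggered.connect(
        lambda: tools_helper.get_loader_tool(None).show()
    )

    inventory_action = menu.addAction("Manage...")
    inventory_action.triggered.connect(
        lambda: tools_helper.get_scene_inventory_tool(None).show()
    )
```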