Pre-load #52
Getting away with murder

We'll need some way of guaranteeing that the child process dies when exiting a host. Otherwise, we run the risk of running subprocess after subprocess and leaving them hanging with no one to close them, thus consuming more and more memory (i.e. leaking memory). On Windows, this seems an adequate solution.
Waking up on-time

At the moment, there's no way to communicate with the GUI externally, as there is nothing to receive messages from within it. We could:
Number 1 adds another layer of complexity, much like the current layer provided by Endpoint. Number 2, on the other hand, requires no set-up and is without consequence; other than that it blocks Pyblish QML until a response is returned, which in our case is exactly what we want.

Awaiting instructions

Wake-up will be handled by first pre-loading the interpreter, and then sending a request to a host. The request will however not be expected to be answered right away, but will instead block until a host is ready to respond. The response, in this case, is the wake-up call. Upon receiving a wake-up call, we are free to instantiate the GUI from our pre-loaded Python interpreter. Bam!

Responding

Ok, once the pre-loaded process has sent the request, how do we postpone replying until the user hits "Publish"? Endpoint will start running as soon as the host launches and will reply instantly to any requests. We'll need a way to delay a response until a given time.

Solution 1 - Polling Host
This could work, but headaches arise when dealing with multiple instances. (How would that work? Brain spinning..) Instead, we could send many requests at a given interval, such as twice a second.

Solution 2 - Polling Client
This has the disadvantage of adding an additional delay to the start-up time of the GUI (0.5 seconds, at most) and would thus counteract what we are here to do in the first place; which is to minimise start-up time. It would also cause a host to needlessly respond to messages in cases where publishing only happens occasionally; e.g. <1/hour, which is what we'd expect.

Solution 3 - Nobody home

We could choose to only start listening once a user is interested in publishing. That is, at the press of File|Publish we'll launch Endpoint and thus reply to the awaiting client. This however has the disadvantage of adding to start-up time, as launching the server may take 50-100 ms or so, which would also counteract the purpose of this task.
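As a side note, Solution 2 could be sketched as a small helper that checks a condition at a fixed interval from a daemon thread; its worst-case wake-up latency is one interval. A minimal Python 3 sketch with hypothetical names (`poll`, `check`, `on_ready` are not part of any Pyblish API):

```python
import threading
import time


def poll(check, interval=0.5, on_ready=None):
    """Call `check` every `interval` seconds until it returns True,
    then invoke `on_ready`. Runs in a daemon thread, so it never
    keeps the host process alive on its own."""
    def worker():
        while not check():
            time.sleep(interval)
        if on_ready is not None:
            on_ready()

    thread = threading.Thread(target=worker)
    thread.daemon = True
    thread.start()
    return thread
```

With `interval=0.5`, the GUI would appear at most half a second after the user asks for it, which is exactly the added latency discussed above.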
Multiple Instances

If we were to maintain the ability to launch multiple instances of the Pyblish QML GUI, a few new issues arise. Once we launch a host, it would pre-load the first instance of our GUI. Once the GUI has been launched, we would need the host to start pre-loading the next instance, such that when the user hits "Publish" again, another is ready to go. Once the first instance is closed, the pre-loaded interpreter and all its related resources go away.

Do we need multiple instances?

Personally, I'm not a fan of any application forcing you into using only one instance; Photoshop is an example of this. I'd much prefer a behaviour similar to Maya or Nuke, or even Chrome and Windows Explorer. Each instance contains a current state, and the state is important to maintain. However, our GUI is currently not sophisticated enough to include much state. Not much separates one instance from the next, and so not much would be lost when closing it or saved when maintaining it.
Killing and Detaching

As mentioned in #1, we may want our GUI to "detach" from a host, in case of the GUI visualising the progress of a long-running publish, such as a render. In this case, we'd like the kill-child-on-parent-exit to not take effect. This isn't relevant now, but it would ideally be compatible with whichever approach is used to kill the child process.
Getting away with murder, part 2

As it turns out, this works.

    import threading
    import subprocess

    import win32api
    import win32con
    import win32job


    def worker():
        # Create a job object configured to kill its processes
        # when the last handle to the job is closed
        hJob = win32job.CreateJobObject(None, "")
        extended_info = win32job.QueryInformationJobObject(
            hJob, win32job.JobObjectExtendedLimitInformation)
        extended_info['BasicLimitInformation']['LimitFlags'] = (
            win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE)
        win32job.SetInformationJobObject(
            hJob, win32job.JobObjectExtendedLimitInformation, extended_info)

        child = subprocess.Popen(["python"],
                                 creationflags=subprocess.CREATE_NEW_CONSOLE)

        # Convert process id to process handle:
        perms = win32con.PROCESS_TERMINATE | win32con.PROCESS_SET_QUOTA
        hProcess = win32api.OpenProcess(perms, False, child.pid)

        win32job.AssignProcessToJobObject(hJob, hProcess)
        child.communicate()


    def main():
        thread = threading.Thread(target=worker)
        thread.daemon = True
        thread.start()

But it relies on the third-party, OS-dependent library pywin32.
This produces a grand total of 3 (Win, OSX, Unix) x 2 (2.6, 2.7) x 2 = 12 distributions. Luckily, we won't have to worry about MSC1500 (standard) versus MSC1600 (maya, nuke) versions of the Python interpreter, as we are running standalone.

Standalone

This might be a good opportunity to discard any dependency on having Python installed on the client. If we bundle Python along with Pyblish QML, we can slim down the above differences into 1/platform.
On second thought, turns out … Looking for alternatives.
Here's a pure ctypes solution that did not go all the way; here's the progress so far.

    import sys
    import time
    import ctypes

    SYNCHRONIZE = 0x00100000
    PROCESS_QUERY_INFORMATION = 0x0400
    STILL_ALIVE = 259  # a.k.a. STILL_ACTIVE; returned while a process runs

    if __name__ == '__main__':
        print "Launching process.."

        pid = int(sys.argv[1])
        assert isinstance(pid, int)

        print "Checking if %s is still alive.." % pid

        while True:
            handle = ctypes.windll.kernel32.OpenProcess(
                SYNCHRONIZE | PROCESS_QUERY_INFORMATION, False, pid)

            try:
                lp_exit_code = ctypes.c_int(0)
                if ctypes.windll.kernel32.GetExitCodeProcess(
                        handle, ctypes.byref(lp_exit_code)) == 0:
                    print "GetLastError: %s" % (
                        ctypes.windll.kernel32.GetLastError())
                    break

            except WindowsError:
                err = ctypes.windll.kernel32.GetLastError()
                print "GetLastError: %s" % err
                break

            except Exception as e:
                print e

            print "It's still alive"

            ctypes.windll.kernel32.CloseHandle(handle)
            time.sleep(1)

        print "It's dead, Jim, self-destruct in 5 seconds.."
        time.sleep(5)
        print "Dying.."
        sys.exit()

On another note, this seems a likely solution. Utilising termination signals and only the standard library.
The above did not seem to work. However, the third-party library psutil does.

    # Kill children of process id 1234
    import psutil

    for child in psutil.Process(1234).children():
        child.kill()

Here's polling.

    proc = psutil.Process(1234)
    proc.wait()  # Block until the process exits
    sys.exit()
Waking up on-time, part 2

Ok, the next challenge is telling our pre-loaded interpreter to show the GUI. What we've got at the moment is a Flask server - Pyblish Endpoint - listening for requests coming from the GUI. So, what we'll have to do is to make a request upon having finished pre-loading, and respond once the user chooses to launch the GUI.

    >>> import threading
    >>> from Queue import Queue
    >>> q = Queue()
    >>> def waiter():
    ...     cmd = q.get()  # Block until a "command" is passed
    ...     if cmd == "show":
    ...         print "Showing.."
    ...
    >>> t = threading.Thread(target=waiter)
    >>> t.daemon = True
    >>> t.start()
    >>> # The above is what our pre-loaded Pyblish QML initiates
    >>> # And the following is what the user then triggers, in
    >>> # order to show the GUI.
    >>> q.put("show")
    Showing..

Here, the pre-loaded Pyblish QML will make a request to Endpoint, which will hold off on responding until the user triggers it. In effect, this inverts the usual request-response pattern.

Down the line

This system could potentially be used for any communication going from a host to the GUI.

    >>> # A modified waiter from above
    >>> def waiter():
    ...     while True:
    ...         cmd = q.get()
    ...         if cmd == "show":
    ...             print "Showing.."
    ...         else:
    ...             print "Unrecognised command"
    ...

As it is outside of what is required currently, we'll put that on ice. But it's comforting to know that all this work might be useful in other areas as well!
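Down the line, the modified waiter would most likely grow into a dispatch table mapping commands to handlers. A minimal Python 3 sketch; the command names and helper functions are purely illustrative, not part of any Pyblish API:

```python
import queue

# Hypothetical command registry; "show"/"hide" are illustrative only.
HANDLERS = {
    "show": lambda: "Showing..",
    "hide": lambda: "Hiding..",
}


def dispatch(cmd):
    """Resolve a single command, like one pass of the waiter loop."""
    handler = HANDLERS.get(cmd)
    if handler is None:
        return "Unrecognised command: %s" % cmd
    return handler()


def waiter(q):
    """Consume commands from `q` until a `None` sentinel arrives."""
    results = []
    while True:
        cmd = q.get()
        if cmd is None:
            break
        results.append(dispatch(cmd))
    return results
```

Adding a new capability then means adding one entry to `HANDLERS`, rather than another `elif` branch in the waiter.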
Stumbled upon the answer to this, which is "Yes", and works as expected. Win!
Request-response inversion works well. Here's a working implementation of what is to go into Pyblish Endpoint.

server.py

    """Request-response inversion

    Inverse the typical request-response pattern so as to allow a host to make
    requests to the client.

    Usage:
        # Terminal 1
        # The client
        >>> import server
        >>> import threading
        >>> t = threading.Thread(target=server.app.run, kwargs={"threaded": True})
        >>> t.start()

        # Terminal 2
        # The host
        $ curl -X POST http://127.0.0.1:5000/dispatch
        ... blocking

        # Terminal 1
        # Client sending "request"
        >>> server.queue.put("show")

        # Terminal 2
        # Host receiving "request"
        {"status": "ok", "result": "Showing.."}

    Requests may be made *before* having been responded to. With that, we've
    got a true bi-directional communication link going.

    Usage:
        # Terminal 1
        # The client
        >>> import server
        >>> import threading
        >>> t = threading.Thread(target=server.app.run, kwargs={"threaded": True})
        >>> t.start()

        # Terminal 2
        # First request from host
        $ curl -X POST http://127.0.0.1:5000/dispatch
        ... blocking

        # Terminal 3
        # Second request from host
        $ curl -X GET http://127.0.0.1:5000/dispatch
        {"status": "ok", "queue": []}

        # Terminal 1
        # Client sending "request"
        >>> server.queue.put("show")

        # Terminal 2
        # Host receiving "request".
        {"status": "ok", "result": "Showing.."}

    """

    # Standard library
    import Queue

    # Third-party dependencies
    import flask

    app = flask.Flask(__name__)

    # This queue is used for communication between threads.
    # It'll be queried (and may be empty) by the client,
    # and filled by the host. When filled, the blocking query
    # is released and processed.
    queue = Queue.Queue()


    @app.route("/dispatch", methods=["GET", "POST"])
    def dispatch():
        if flask.request.method == "GET":
            return flask.jsonify(status="ok", queue=list(queue.queue))

        else:
            cmd = queue.get()

            if cmd == "show":
                return flask.jsonify(status="ok", result="Showing..")

            return flask.jsonify(status="fail",
                                 result="Command not recognised: %s" % cmd)
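The same inversion can be exercised without Flask, using only the standard library. Here is a Python 3 sketch of the `/dispatch` behaviour above (the port is chosen automatically; this is a demo of the pattern, not the actual Endpoint implementation):

```python
import json
import queue
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Commands travel from the client thread to whichever host
# request is currently blocking in do_POST().
commands = queue.Queue()


class DispatchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Non-blocking peek at pending commands
        self._reply({"status": "ok", "queue": list(commands.queue)})

    def do_POST(self):
        cmd = commands.get()  # Block until the client "requests" something
        if cmd == "show":
            self._reply({"status": "ok", "result": "Showing.."})
        else:
            self._reply({"status": "fail",
                         "result": "Command not recognised: %s" % cmd})

    def _reply(self, payload):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # Keep the demo quiet


server = HTTPServer(("127.0.0.1", 0), DispatchHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/dispatch" % server.server_port
```

A blocking POST to `url` plays the host's part, and `commands.put("show")` plays the client's, releasing the POST with the wake-up response.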
Usage Instructions

Here's some usage code for the pre-loading mechanism.

    $ python -m pyblish_qml --port=1000 --preload
    $ python -m pyblish_qml --port=1000 --pid=2000 --preload

The pre-loaded instance of Pyblish QML will then send a request to Pyblish Endpoint at the given port. If made into a child, it will die upon the parent dying, using psutil as above.

Interface

    def preload(port, pid=None):
        """Asynchronously launch process and load relevant libraries.

        Once loaded, a request is made to the host, the response of which causes
        the GUI to appear.

        Arguments:
            port (int): Port at which to communicate with host
            pid (int, optional): Id to parent process

        """

Waking up the GUI

Here's how the corresponding interface would look from the host.

    # Send request to client
    from pyblish_endpoint import client
    client.request("show")
First version implemented in 0.2.5
Working well, here's the run-down.

To launch a pre-loaded copy

    $ python -m pyblish_qml --preload

The copy will then lie dormant, until given a wake-up call.

To wake a pre-loaded copy

    >>> import pyblish_endpoint.client
    >>> pyblish_endpoint.client.request("show")

Known Quirks

Headless

As the process is not headless, it is extra important that it be destroyed upon exiting a host. A parent/child relationship is established upon launching the pre-loaded GUI and is maintained via a third-party Python library called psutil. However, as we are maintaining a link between client and server, killing the process involves killing the link prematurely (via …).

Cosmetics

On Windows, we're currently relying on the start-up animations of standard windows, which fade and scale into place. Upon hiding and showing the GUI, this animation is no longer present and is instead replaced by an instant flicker. We'll resolve this once we return to a border-less GUI and handle animations ourselves.
Implemented in 0.2.6
Motivation
Running a separate process means re-doing some of the work already done by some hosts, such as Autodesk Maya and The Foundry Nuke.
In more constrained environments where Pyblish QML and its associated libraries may reside on a network drive, this means an added ~100 MB of traffic per instantiation of Pyblish QML. As the target audience is in high-end visual effects and games - where traffic typically peaks in the 500-2000 MB/sec range - the restriction is assumed not to matter. But there are still environments where a slow network is in place, causing Pyblish QML to take an unacceptable amount of time (>2 seconds) to launch.
Goal
To eliminate the time taken to start up a separate process and load libraries.
Implementation
At the moment, launching Pyblish QML as a separate process is a matter of:
To remedy the time taken for these processes to finish, we'll pre-load a Python interpreter and import the required libraries upon start of a host, such as Autodesk Maya.
The pre-loaded Python interpreter would lie quietly and listen for an incoming call to wake up. The wake-up call would take the form of an HTTP request, which is already used for general communication internally within the process, between Python and QML.

Solutions
Let's have a look at high-level solutions.
1 - Bundle
We bundle all of Pyblish QML into a single executable using PyInstaller and distribute it. It'll run wherever anyone chooses to store it, without dependencies, pip or git.
2 - Pre-Load File Copy
Upon loading a host:
3 - Pre-Load Memory
Upon loading a host:
Killing
Currently, when a host quits, the GUI is to be killed. This is currently implemented in the integrations, such as Pyblish Nuke, and works well for most. For some, however, the GUI remains open and must be closed manually, thus permanently terminating the process.
If the process is window-less, as it would be in this case, we'll need to find a more reliable method of killing the child process upon termination of the parent (host) process.