
[SYSTEMML-1325] Harmonize Compilation Execution Pipelines and Add GPU Support to JMLC #836

Closed
wants to merge 10 commits

Conversation

thomas9t
Contributor

This PR adds support for compiling and executing GPU-enabled scripts in JMLC and harmonizes the pipeline used to compile and execute DML programs across the JMLC, MLContext, and DMLScript APIs. Specifically, the following changes were made:

  1. All three APIs now call ScriptExecutorUtils.compileRuntimeProgram to compile DML scripts. The original logic in MLContext and JMLC for pinning inputs and persisting outputs has been preserved.
  2. All three APIs now use ScriptExecutorUtils.executeRuntimeProgram to execute the compiled program. Previously, JMLC called the Script.execute method directly.
  3. jmlc.Connection.prepareScript now supports compiling a script to use the GPU (a minimal usage sketch follows this list). Note that following #832 (Cleanup static variables in DMLScript), the issue noted in #830 has been resolved.
  4. A PreparedScript is now statically assigned a GPU context when it is compiled and instantiated. This has potential performance implications because it means that a PreparedScript must be executed on a specific GPU. However, it reduces the overhead of creating a GPU context each time a script is executed and ensures that a user cannot compile a script to use the GPU and then forget to assign a GPU context when the script is run.
  5. Per (3), I have added a unit test which compiles and executes a GPU-enabled script in JMLC, both with and without pinned data, and asserts that no errors occur.
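For context, here is a minimal sketch of the JMLC call flow these changes touch. It assumes the standard Connection/PreparedScript API; the class name, example DML, and variable names are illustrative, and the GPU-enabling prepareScript overload is shown only as a commented placeholder rather than a concrete signature.

```java
import org.apache.sysml.api.jmlc.Connection;
import org.apache.sysml.api.jmlc.PreparedScript;
import org.apache.sysml.api.jmlc.ResultVariables;

public class JmlcGpuSketch {
  public static void main(String[] args) throws Exception {
    Connection conn = new Connection();
    try {
      // The read/write statements are placeholders; JMLC binds the
      // registered variables ("X", "Y") to in-memory data instead.
      String dml =
          "X = read(\"./tmp/X\", rows=-1, cols=-1)\n"
        + "Y = X %*% t(X)\n"
        + "write(Y, \"./tmp/Y\", format=\"text\")";

      // Compilation now goes through ScriptExecutorUtils.compileRuntimeProgram
      // for JMLC, MLContext, and DMLScript alike (item 1).
      // The trailing boolean is parsePyDML: false means the script is DML.
      PreparedScript ps = conn.prepareScript(
          dml, new String[] {"X"}, new String[] {"Y"}, false);

      // Hypothetical GPU-enabled variant (items 3 and 4): a prepareScript
      // overload compiles the script for GPU and binds a GPU context to the
      // PreparedScript at compile time; its exact parameter list is not
      // reproduced here.
      // PreparedScript psGpu = conn.prepareScript(dml, inputs, outputs, /* useGpu */ ...);

      // Bind a pinned input and execute; execution is routed through
      // ScriptExecutorUtils.executeRuntimeProgram rather than calling the
      // script's execute method directly (item 2).
      double[][] x = {{1, 2}, {3, 4}};
      ps.setMatrix("X", x);
      ResultVariables res = ps.executeScript();
      double[][] y = res.getMatrix("Y");
      System.out.println("Y[0][0] = " + y[0][0]);
    } finally {
      conn.close();
    }
  }
}
```

In the GPU-enabled case the same setMatrix/executeScript flow applies; only compilation differs, since the GPU context is bound to the PreparedScript when it is compiled (item 4).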

I ran mvn clean verify on a GPU-enabled machine and encountered no exceptions.

@niketanpansare can you please take a look and let me know what you think? @mboehm7 you may also want to look at this.

@niketanpansare
Contributor

Thanks @thomas9t ... the PR looks good to me 👍

Let's wait until tomorrow before merging in case @mboehm7 has additional comments.

niketanpansare pushed a commit to niketanpansare/systemml that referenced this pull request Oct 14, 2018
[SYSTEMML-1325] Harmonize Compilation Execution Pipelines and Add GPU Support to JMLC

Closes apache#836.
@asfgit asfgit closed this in 7907c0e Oct 14, 2018
@j143 j143 mentioned this pull request Apr 10, 2020