Update README.md #222
Conversation
Thanks for doing this. Looks much better now.
README.md (Outdated)

#### Zero Script Change

You can use your own training script while using [AWS Deep Learning Containers (DLC)](https://aws.amazon.com/machine-learning/containers/) with the TensorFlow, PyTorch, MXNet, and XGBoost frameworks. The AWS DLCs enable you to use Debugger with no changes to your training script by automatically adding SageMaker Debugger's `Hook`.
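The zero-script-change flow described above might look like the following sketch. This assumes the `sagemaker` Python SDK and valid AWS credentials; the entry point, role ARN, and instance type are placeholders, not values from this repository.

```python
# Illustrative sketch only (requires the `sagemaker` SDK and an AWS account).
# The training script itself needs no modification when run in a DLC image.
from sagemaker.debugger import Rule, rule_configs
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",             # your unmodified training script (placeholder name)
    role="<your-sagemaker-role-arn>",   # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",      # placeholder
    framework_version="2.1",
    py_version="py3",
    # The DLC image adds the Debugger hook automatically; you only attach rules.
    rules=[Rule.sagemaker(rule_configs.vanishing_gradient())],
)
estimator.fit()
```

The point of the sketch is that Debugger is configured entirely on the estimator; nothing inside `train.py` changes.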
Should we link to an example of zero code change here, or to a page that describes the zero-change experience better?
Key points are:
- There are some collections that SageMaker saves for you by default.
- For more configuration of which tensors to save and how frequently to save them, you can pass a config to the estimator `fit` API; no change in training code is required.
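The second point above could be illustrated with a config fragment like this one, assuming the `sagemaker` SDK; the S3 bucket name is a placeholder.

```python
# Config-only sketch: choose which collections to save and how often,
# without touching the training script (requires the `sagemaker` SDK).
from sagemaker.debugger import CollectionConfig, DebuggerHookConfig

hook_config = DebuggerHookConfig(
    s3_output_path="s3://<your-bucket>/debugger-output",  # placeholder
    collection_configs=[
        CollectionConfig(name="gradients", parameters={"save_interval": "100"}),
        CollectionConfig(name="losses", parameters={"save_interval": "10"}),
    ],
)
# Pass `debugger_hook_config=hook_config` when constructing the estimator.
```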
The example code is in "How It Works" section below. This section is only to introduce what features and frameworks are available.
README.md (Outdated)
Amazon SageMaker Debugger can be used inside or outside of SageMaker. However, the built-in rules that AWS provides are only available for SageMaker training. Scenarios of usage can be classified into the following:
- **SageMaker Zero Script Change**: Here you specify which rules to use when setting up the estimator and run your existing script with no change. For an example, see [Running a Rule with Zero Script Change on SageMaker](#running-a-rule-with-zero-script-change-on-sagemaker).
- **SageMaker Bring Your Own Container**: Here you specify the rules to use and modify your training script minimally to enable SageMaker Debugger.
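The "minimal modification" in the bring-your-own-container case might look like the following sketch, assuming the `smdebug` package is installed in the container; PyTorch is shown here, and `model` and `criterion` stand in for objects your script already defines.

```python
# Hedged sketch of a minimal BYOC change (requires the `smdebug` package).
import smdebug.pytorch as smd

hook = smd.Hook(
    out_dir="/opt/ml/output/tensors",  # where saved tensors are written
    save_config=smd.SaveConfig(save_interval=100),
    include_collections=["gradients", "losses"],
)
hook.register_module(model)    # `model` is your existing torch.nn.Module
hook.register_loss(criterion)  # `criterion` is your existing loss function
# ...the rest of the training loop stays as it was.
```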
This requires a dedicated page explaining, step by step, how I can bring my own container.
As discussed above, let me know if you can provide a piece of actual code.
Current solution: linking to the markdown doc files (tensorflow.md, pytorch.md, mxnet.md, xgboost.md) for the four frameworks, brought over from sagemaker.md.
Going to review those files with Amol, who is on call this week.
Added one of the new BYOC code snippets, using TF 2.x `GradientTape`, from tensorflow.md.
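The `GradientTape` pattern referenced above might look roughly like this sketch, assuming `smdebug` and TensorFlow 2.x are installed; `model`, `data`, `labels`, `loss_fn`, and `optimizer` stand in for objects your script already defines, and the exact `wrap_tape` usage should be checked against tensorflow.md.

```python
# Hedged sketch of the TF 2.x GradientTape hook (requires `smdebug`).
import tensorflow as tf
import smdebug.tensorflow as smd

hook = smd.KerasHook(out_dir="/opt/ml/output/tensors")

# Wrap the tape so Debugger can capture gradients computed through it.
tape = hook.wrap_tape(tf.GradientTape())
with tape:
    logits = model(data, training=True)   # `model`, `data` from your script
    loss_value = loss_fn(labels, logits)  # `loss_fn`, `labels` likewise
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```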
Co-Authored-By: Aaron Markham <markhama@amazon.com>
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #222      +/-   ##
==========================================
- Coverage   85.34%   85.33%   -0.02%
==========================================
  Files          85       85
  Lines        6136     6136
==========================================
- Hits         5237     5236       -1
- Misses        899      900       +1
```
Continue to review full report at Codecov.
Description of changes:
Updating README.md
Added "Support" section that includes the newest release notes, currently supported frameworks, and known limitations.
Style and formatting: I have run `pre-commit install` to ensure that auto-formatting happens with every commit.

Issue number, if available:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.