Cloudmlmerge (#239)
* Adding evaluationanalysis API to generate evaluation stats from eval … (#99)

* Adding evaluationanalysis API to generate evaluation stats from eval source CSV file and eval results CSV file.

The resulting stats file will be fed to a visualization component which will come in a separate change.

* Follow up CR comments.

* Feature slicing view visualization component. (#109)

* Datalab Inception (image classification) solution. (#117)

* Datalab Inception (image classification) solution.

* Fix dataflow URL.

* Datalab "ml" magics for running a solution package. Update Inception Package. (#121)

* Datalab Inception (image classification) solution.

* Fix dataflow URL.

* Datalab "ml" magics for running a solution package.
 - Dump function args and docstrings
 - Run functions
Update Inception Package.
 - Added docstring on face functions.
 - Added batch prediction.
 - Use datalab's lib for talking to cloud training and prediction service.
 - More minor fixes and changes.

* Follow up code review comments.

* Fix a PackageRunner issue where the temp installation is done multiple times unnecessarily.

* Update feature-slice-view supporting file, which fixes some stability UI issues. (#126)

* Remove old feature-slicing pipeline implementation (replaced by BigQuery). Add confusion matrix magic. (#129)

* Remove old feature-slicing pipeline implementation (replaced by BigQuery).
Add confusion matrix magic.

* Follow up on code review comments. Also fix an inception issue where eval loss is NaN when the eval size is smaller than the batch size.

* Fix set union.

* Mergemaster/cloudml (#134)

* Add gcs_copy_file() that is missing but is referenced in a couple of places. (#110)

* Add gcs_copy_file() that is missing but is referenced in a couple of places.

* Add DataFlow to pydatalab dependency list.

* Fix travis test errors by reimplementing gcs copy.

* Remove unnecessary shutil import.

* Flake8 configuration. Set max line length to 100. Ignore E111, E114 (#102)

* Add datalab user agent to CloudML trainer and predictor requests. (#112)

* Update oauth2client to 2.2.0 to satisfy cloudml in Cloud Datalab (#111)

* Update README.md (#114)

Added docs link.

* Generate reST documentation for magic commands (#113)

Auto-generate docs for any added magics by searching the source files for lines with register_line_cell_magic, capturing the names of those magics, calling them inside an IPython kernel with the -h argument, and storing the output in a generated datalab.magics.rst file.
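
A rough sketch of that generation step (the file glob, regex, and output layout here are illustrative; the actual script also captures each magic's -h output from an IPython kernel, which is elided):

```py
import glob
import re

# Matches "def <name>" directly below a @register_line_cell_magic decorator.
MAGIC_RE = re.compile(r'@register_line_cell_magic\s*\ndef\s+(\w+)', re.MULTILINE)

def find_magics(pattern='datalab/**/*.py'):
  """Return the names of functions registered as line/cell magics."""
  names = set()
  for path in glob.glob(pattern, recursive=True):
    with open(path) as f:
      names.update(MAGIC_RE.findall(f.read()))
  return sorted(names)

if __name__ == '__main__':
  # The real generator stores each magic's -h help text; here we only write the
  # reST section headers into the generated file.
  with open('datalab.magics.rst', 'w') as rst:
    for name in find_magics():
      rst.write(name + '\n' + '=' * len(name) + '\n\n')
```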

* Fix an issue that %%chart failed with UDF query. (#116)

* Fix an issue that %%chart failed with UDF query.

The problem is that the query is submitted to BQ without replacing variable values from user namespace.

* Fix chart tests by adding ip.user_ns mock.

* Fix charting test.

* Add missing import "mock".

* Fix chart tests.

* Fix "%%bigquery schema" issue --  the command generates nothing in output. (#119)

* Add some missing dependencies, remove some unused ones (#122)

* Remove scikit-learn and scipy as dependencies
* add more required packages
* Add psutil as dependency
* Update packages versions

* Cleanup (#123)

* Remove unnecessary semicolons

* remove unused imports

* remove unnecessarily defined variable

* Fix query_metadata tests (#128)

Fix query_metadata tests

* Make the library pip-installable (#125)

This PR adds tensorflow and cloudml to setup.py to make the lib pip-installable. I had to install them explicitly using pip from inside the setup.py script (as sketched below); even though it's not a clean way to do it, it gets around the two issues we have at the moment with these two packages:
- PyPI has TensorFlow version 0.12, while we need 0.11 for the current version of pydatalab. According to the Cloud ML docs, that version exists as a pip package for three supported platforms.
- The Cloud ML SDK exists as a pip package, but also not on PyPI, and while we could add it as a dependency link, there exists another package on PyPI called cloudml, and pip ends up installing that instead (see #124). I cannot find a way to force pip to install the package from the link I included.
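
A minimal sketch of that workaround, assuming pip is invoked from inside setup.py (the URLs and version pins are placeholders, not the exact ones used):

```py
import subprocess
import sys

from setuptools import setup

def _pip_install(spec):
  """Install a requirement with the same interpreter that is running setup.py."""
  subprocess.check_call([sys.executable, '-m', 'pip', 'install', spec])

# Installed explicitly because the required versions are not resolvable from PyPI by name.
_pip_install('https://storage.googleapis.com/.../tensorflow-0.11.0-py2-none-any.whl')  # placeholder URL
_pip_install('https://storage.googleapis.com/.../cloudml-sdk.tar.gz')                  # placeholder URL

setup(
    name='datalab',
    packages=['datalab'],
)
```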

* Set command description so it is displayed in --help. argparser's format_help() prints description but not help. (#131)

* Fix an issue that prediction right after preprocessing fails in inception package local run. (#135)

* add structured data preprocessing and training (#132)

merging the preprocessing and training parts.

* first full-feature version of structured data is done (#139)

* added the preprocessing/training files.

Preprocessing is connected with datalab. Training is not fully connected
with datalab.

* added training interface.

* local/cloud training ready for review

* saving work

* saving work

* cloud online prediction is done.

* split config file into two (schema/transforms) and updated the
unittests.

* local preprocess/train working

* 1) merged --model_type and --problem_type
2) online/local prediction is done

* added batch prediction

* all prediction is done. Going to make a merge request next

* Update _package.py

removed some white space + add a print statement to  local_predict

* --preprocessing puts a copy of the schema in the output dir.
--no need to pass the schema to train in datalab.

* tests can be run from any folder above the test folder by

python -m unittest discover

Also, the training test will parse the output of training and check that
the loss is small.

* Inception Package Improvements (#138)

* Fix an issue that prediction right after preprocessing fails in inception package local run.

* Remove the "labels_file" parameter from inception preprocess/train/predict. Instead it will get labels from training data. Prediction graph will return labels.
Make online prediction works with GCS images.
"%%ml alpha deploy" now also check for "/model" subdir if needed.
Other minor improvements.

* Make local batch prediction really batched.
Batch prediction input no longer has to include the target column.
Sort labels so they are consistent between preprocessing and training.
Follow up on other code review comments.

* Follow up code review comments.

* Cloudmlm (#152)

* Fix an issue that setting project id from datalab does not set gcloud default project. (#136)

* Add future==0.16.0 as a dependency since it's required by CloudML SDK (#143)

As of the latest release of CloudML Python SDK, that package seems to require future==0.16.0, so until it's fixed, we'll take it as a dependency.

* Remove tensorflow and CloudML SDK from setup.py (#144)

* Install TensorFlow 0.12.1.

* Remove TensorFlow and CloudML SDK from setup.py.

* Add comments why we ignore errors when importing mlalpha.

* Remove old DataSet implementation. Create new DataSets. (#151)

* Remove old DataSet implementation.

The new DataSet will be used as the data source for packages. All DataSets will be capable of sampling to a DataFrame, so feature exploration can be done with other libraries (see the usage sketch below).

* Raise error when sample is larger than number of rows.
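
Hypothetical usage of the new DataSet classes, assuming a sample() method that returns a pandas DataFrame (the constructor arguments and paths are illustrative, not the exact signatures):

```py
import datalab.ml as ml

# Wrap training CSVs in a DataSet and pull a sample into pandas for exploration.
csv_data = ml.CsvDataSet(file_pattern='gs://my-bucket/train-*.csv',   # placeholder path
                         schema_file='schema.json')                   # placeholder schema
df = csv_data.sample(1000)   # raises if the sample is larger than the number of rows
print(df.describe())
```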

* Inception package improvements (#155)

* Inception package improvements.

- It takes DataSets as input instead of CSV files. It also supports a BigQuery source now.
- Changes to make the latest DataFlow and TensorFlow happy.
- Changes in preprocessing to remove partial support for multiple labels.
- Other minor improvements.

* Add a comment.

* Update feature slice view UI. Added Slices Overview. (#161)

* Move TensorBoard and TensorFlow Events UI rendering to Python function to deprecate magic. (#163)

* Update feature slice view UI. Added Slices Overview.

* Move TensorBoard and TensorFlow Events UI rendering to Python function to deprecate magic.

Use matplotlib for tf events plotting so it can display well in static HTML pages (such as github).

Improve TensorFlow Events list/get APIs.

* Follow up on CR comments.

* new preprocessing and training for structured data (#160)

* new preprocessing is done

next: work on training, and then update the tests

* saving work

* sw

* seems to be working, going to do tests next

* got preprocessing test working

* training test pass!!!

* added exported graph back in

* dl preprocessing for local, cloud/csv, cloud/bigquery DONE :)

* gcloud cloud training works

* cloud dl training working

* oops, these files should not be saved

* removed junk function

* sw

* review comments

* removed cloudml sdk usage + lint

* review comments

* Move job, models, and feature_slice_view plotting to API. (#167)

* Move job, models, and feature_slice_view plotting to API.

* Follow up on CR comments.

* A util function to repackage and copy the package to staging location. (#169)

* A util function to repackage the package and copy it to a staging location, so packages can use the staging URL as the package URL in cloud training (see the sketch below).

* Follow up CR comments.

* Follow up CR comments.
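
A rough sketch of that repackage-and-stage step (the helper name, paths, and the use of gsutil are assumptions; the real helper lives in the package's _util module):

```py
import glob
import os
import subprocess

def repackage_to_staging(package_dir, staging_dir):
  """Build an sdist of the local package and copy it to a GCS staging location."""
  subprocess.check_call(['python', 'setup.py', 'sdist', '--dist-dir', 'dist'], cwd=package_dir)
  tarball = sorted(glob.glob(os.path.join(package_dir, 'dist', '*.tar.gz')))[-1]
  staged_url = staging_dir.rstrip('/') + '/' + os.path.basename(tarball)
  subprocess.check_call(['gsutil', 'cp', tarball, staged_url])
  return staged_url  # pass this URL as the package URI for cloud training
```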

* Move confusion matrix from %%ml to library. (#159)

* Move confusion matrix from %%ml to library.

This is part of efforts to move %%ml magic stuff to library to provide a consistent experience (python only).

* Add a comment.

* Improve inception package so there is no need to have a GCS copy of the package. Instead cloud training and preprocessing will repackage it from the local installation and upload it to staging. (#175)

* Cloudmlsdp (#177)

* added the ',' graph hack

* sw

* batch prediction done

* sw

* review comments

* Add CloudTrainingConfig namedtuple to wrap cloud training configurations (#178)

* Add CloudTrainingConfig namedtuple to wrap cloud training configurations.

* Follow up code review comments.

* prediction update (#183)

* added the ',' graph hack

* sw

* batch prediction done

* sw

* review comments

* updated the prediction graph keys, and made the csvcoder not need
any other file.

* sw

* sw

* added newline

* review comments

* review comments

* trying to fix the Contributor License Agreement error.

* Inception Package Improvements (#186)

* Implement inception cloud batch prediction. Support explicit eval data in preprocessing.

* Follow up on CR comments. Also address changes from latest DataFlow.

* Cloudmlmerge (#188)

* CsvDataSet no longer globs files in init. (#187)

* CsvDataSet no longer globs files in init.

* removed file_io, that fix will be done later

* removed junk lines

* sample uses .file

* fixed csv dataset def files()

* Update _dataset.py

* Move cloud trainer and predictor from their own classes to Job and Model respectively. (#192)

* Move cloud trainer and predictor from their own classes to Job and Model respectively.

Cloud trainer and predictor will be cleaned up in a separate change.

* Rename CloudModels to Models, CloudModelVersions to ModelVersions. Move their iterator from self to get_iterator() method.

* Switch to cloudml v1 endpoint.

* Remove one comment.

* Follow up on CR comments. Fix a bug in datalab iterator that count keeps incrementing incorrectly.

* removed the feature type file  (#199)

* sw

* removed feature types file from preprocessing

* training: no longer needs the input types file
prediction: cloud batch works now

* updated the tests

* added amazing comment to local_train
check that target column is the first column

* transforms file is not optional on the DL side.

* comments

* comments

* Make inception work with TF 1.0. (#204)

* Workaround a TF summary issue. Force online prediction to use TF 1.0. (#209)

* sd package. Local everything is working.  (#211)

* sw

* sw

* Remove tf dependency from structured data setup.py. (#212)

* Workaround a TF summary issue. Force online prediction to use TF 1.0.

* Remove tf dependency from structured data setup.py.

* Cloudmld (#213)

* sw

* sw

* cloud uses 0.12.0rc? and local uses whatever is in datalab

* for local testing

* master_setup is copy of ../../setup.py

* Add a resize option for inception package to avoid sending large data to online prediction (#215)

* Add a resize option for inception package to avoid sending large data to online prediction.
Update Lantern browser.

* Follow up on code review comments and fix a bug for inception.

* Cleanup mlalpha APIs that are not needed. (#218)

* Inception package updates. (#219)

- Instead of hard-coding the setup.py path, duplicate it along with all py files, just like the structured data package.
- Use Pip installable TensorFlow 1.0 for packages.
- Fix some TF warnings.

* Cloudml Branch Merge From Master (#222)

* Fix project_id from `gcloud config` in py3 (#194)

- `Popen.stdout` is `bytes` in py3 and needs `.decode()` (see the sketch below, after the before/after examples)

- Before:
```py
>>> %%sql -d standard
... select 3
Your active configuration is: [test]

HTTP request failed: Invalid project ID 'b'foo-bar''. Project IDs must contain 6-63 lowercase letters, digits, or dashes. IDs must start with a letter and may not end with a dash.
```
```sh
$ for p in python2 python3; do $p -c 'from datalab.context._utils import get_project_id; print(get_project_id())'; done
Your active configuration is: [test]

foo-bar
Your active configuration is: [test]

b'foo-bar'
```

- After:
```py
>>> %%sql -d standard
... select 3
Your active configuration is: [test]

QueryResultsTable job_1_bZNbAUtk8QzlK7bqWD5fz7S5o
```
```sh
$ for p in python2 python3; do $p -c 'from datalab.context._utils import get_project_id; print(get_project_id())'; done
Your active configuration is: [test]

foo-bar
Your active configuration is: [test]

foo-bar
```
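
A minimal sketch of the fix (the gcloud invocation shown is one way to query the project; the real code lives in datalab.context._utils.get_project_id):

```py
import subprocess

def get_project_id():
  proc = subprocess.Popen(['gcloud', 'config', 'list', '--format', 'value(core.project)'],
                          stdout=subprocess.PIPE)
  out, _ = proc.communicate()
  if isinstance(out, bytes):   # py3: Popen.stdout yields bytes, so decode before use
    out = out.decode('utf-8')
  return out.strip()
```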

* Use http Keep-Alive, else BigQuery queries are ~seconds slower than necessary (#195)

- Before (without Keep-Alive): ~3-7s for BigQuery `select 3` with an already cached result
- After (with Keep-Alive): ~1.5-3s
- Query sends these 6 http requests and runtime appears to be dominated by network RTT
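
The gist of the change, as a sketch: reuse a single httplib2.Http object across requests so the TCP connection stays alive, rather than opening a new connection per request (the helper shown is illustrative, not datalab's actual HTTP wrapper):

```py
import httplib2

_shared_http = httplib2.Http()   # module-level instance; keeps connections alive between calls

def request(url, method='GET', body=None, headers=None):
  response, content = _shared_http.request(url, method=method, body=body, headers=headers)
  return response, content
```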

* cast string to int (#217)

`table.insert_data(df)` inserts data correctly but raises TypeError: unorderable types: str() > int()
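
An illustration of the kind of cast involved, assuming the mismatch is between string-typed DataFrame columns and integer schema fields (the column and data are hypothetical):

```py
import pandas as pd

df = pd.DataFrame({'id': ['1', '2', '3'], 'value': [0.5, 1.2, 3.4]})
df['id'] = df['id'].astype(int)   # cast string to int so the insert path never compares str to int
# table.insert_data(df)           # as in the commit message above
```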

* Remove CloudML SDK as dependency for PyDatalab. (#227)

* Remove CloudML dependency from Inception. (#225)

* TensorFlow's save_model no longer creates export.meta, so disable the  check in deploying models. (#228)

* TensorFlow's save_model no longer creates export.meta, so disable the check in deploying models.

* Also check for saved_model.pb for deployment.

* Cloudmlsm (#229)

* csv prediction graph done

* csv works, but not json!!!

* sw, train working

* cloud training working

* finished census sample, cleaned up interface

* review comments

* small fixes to sd (#231)

* small fixes

* more small fixes

* Rename from mlalpha to ml. (#232)

* fixed prediction (#235)

* small fixes (#236)

* 1) prediction 'key_from_input' is now the true key name
2) DF prediction now makes a csv_schema.json file
3) removed a function that was not used.

* update csv_schema.json in _package too

* Cloudmlmerge (#238)

* bigquery.Api: Remove unused _DEFAULT_PAGE_SIZE (#221)

Test plan:
- Unit tests still pass
qimingj authored Feb 27, 2017
1 parent 52939ee commit 2295860
Showing 59 changed files with 12,742 additions and 2,634 deletions.
20 changes: 7 additions & 13 deletions datalab/mlalpha/__init__.py → datalab/ml/__init__.py
@@ -14,20 +14,14 @@

 from __future__ import absolute_import
 
-from ._local_runner import LocalRunner
-from ._cloud_runner import CloudRunner
-from ._metadata import Metadata
-from ._local_predictor import LocalPredictor
-from ._cloud_predictor import CloudPredictor
-from ._job import Jobs
+from ._job import Jobs, Job
 from ._summary import Summary
-from ._tensorboard import TensorBoardManager
-from ._dataset import DataSet
-from ._package import Packager
-from ._cloud_models import CloudModels, CloudModelVersions
+from ._tensorboard import TensorBoard
+from ._dataset import CsvDataSet, BigQueryDataSet
+from ._cloud_models import Models, ModelVersions
 from ._confusion_matrix import ConfusionMatrix
 from ._feature_slice_view import FeatureSliceView
+from ._cloud_training_config import CloudTrainingConfig
 from ._util import *
 
 from plotly.offline import init_notebook_mode
 
 init_notebook_mode()

274 changes: 274 additions & 0 deletions datalab/ml/_cloud_models.py
@@ -0,0 +1,274 @@
# Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.

"""Implements Cloud ML Model Operations"""

from googleapiclient import discovery
import os
import yaml

import datalab.context
import datalab.storage
import datalab.utils

from . import _util

class Models(object):
  """Represents a list of Cloud ML models for a project."""

  def __init__(self, project_id=None):
    """
    Args:
      project_id: project_id of the models. If not provided, default project_id will be used.
    """
    if project_id is None:
      project_id = datalab.context.Context.default().project_id
    self._project_id = project_id
    self._credentials = datalab.context.Context.default().credentials
    self._api = discovery.build('ml', 'v1', credentials=self._credentials)

  def _retrieve_models(self, page_token, page_size):
    list_info = self._api.projects().models().list(
        parent='projects/' + self._project_id, pageToken=page_token, pageSize=page_size).execute()
    models = list_info.get('models', [])
    page_token = list_info.get('nextPageToken', None)
    return models, page_token

  def get_iterator(self):
    """Get iterator of models so it can be used as "for model in Models().get_iterator()".
    """
    return iter(datalab.utils.Iterator(self._retrieve_models))

  def get_model_details(self, model_name):
    """Get details of the specified model from CloudML Service.
    Args:
      model_name: the name of the model. It can be a model full name
          ("projects/[project_id]/models/[model_name]") or just [model_name].
    Returns: a dictionary of the model details.
    """
    full_name = model_name
    if not model_name.startswith('projects/'):
      full_name = ('projects/%s/models/%s' % (self._project_id, model_name))
    return self._api.projects().models().get(name=full_name).execute()

  def create(self, model_name):
    """Create a model.
    Args:
      model_name: the short name of the model, such as "iris".
    Returns:
      If successful, returns information of the model, such as
      {u'regions': [u'us-central1'], u'name': u'projects/myproject/models/mymodel'}
    Raises:
      If the model creation failed.
    """
    body = {'name': model_name}
    parent = 'projects/' + self._project_id
    # Model creation is instant. If anything goes wrong, Exception will be thrown.
    return self._api.projects().models().create(body=body, parent=parent).execute()

  def delete(self, model_name):
    """Delete a model.
    Args:
      model_name: the name of the model. It can be a model full name
          ("projects/[project_id]/models/[model_name]") or just [model_name].
    """
    full_name = model_name
    if not model_name.startswith('projects/'):
      full_name = ('projects/%s/models/%s' % (self._project_id, model_name))
    response = self._api.projects().models().delete(name=full_name).execute()
    if 'name' not in response:
      raise Exception('Invalid response from service. "name" is not found.')
    _util.wait_for_long_running_operation(response['name'])

  def list(self, count=10):
    """List models under the current project in a table view.
    Args:
      count: upper limit of the number of models to list.
    Raises:
      Exception if it is called in a non-IPython environment.
    """
    import IPython
    data = []
    # Add range(count) to loop so it will stop either it reaches count, or iteration
    # on self is exhausted. "self" is iterable (see __iter__() method).
    for _, model in zip(range(count), self):
      element = {'name': model['name']}
      if 'defaultVersion' in model:
        version_short_name = model['defaultVersion']['name'].split('/')[-1]
        element['defaultVersion'] = version_short_name
      data.append(element)

    IPython.display.display(
        datalab.utils.commands.render_dictionary(data, ['name', 'defaultVersion']))

  def describe(self, model_name):
    """Print information of a specified model.
    Args:
      model_name: the name of the model to print details on.
    """
    model_yaml = yaml.safe_dump(self.get_model_details(model_name), default_flow_style=False)
    print model_yaml


class ModelVersions(object):
  """Represents a list of versions for a Cloud ML model."""

  def __init__(self, model_name, project_id=None):
    """
    Args:
      model_name: the name of the model. It can be a model full name
          ("projects/[project_id]/models/[model_name]") or just [model_name].
      project_id: project_id of the models. If not provided and model_name is not a full name
          (not including project_id), default project_id will be used.
    """
    if project_id is None:
      self._project_id = datalab.context.Context.default().project_id
    self._credentials = datalab.context.Context.default().credentials
    self._api = discovery.build('ml', 'v1', credentials=self._credentials)
    if not model_name.startswith('projects/'):
      model_name = ('projects/%s/models/%s' % (self._project_id, model_name))
    self._full_model_name = model_name
    self._model_name = self._full_model_name.split('/')[-1]

  def _retrieve_versions(self, page_token, page_size):
    parent = self._full_model_name
    list_info = self._api.projects().models().versions().list(parent=parent,
        pageToken=page_token, pageSize=page_size).execute()
    versions = list_info.get('versions', [])
    page_token = list_info.get('nextPageToken', None)
    return versions, page_token

  def get_iterator(self):
    """Get iterator of versions so it can be used as
    "for v in ModelVersions(model_name).get_iterator()".
    """
    return iter(datalab.utils.Iterator(self._retrieve_versions))

  def get_version_details(self, version_name):
    """Get details of a version.
    Args:
      version: the name of the version in short form, such as "v1".
    Returns: a dictionary containing the version details.
    """
    name = ('%s/versions/%s' % (self._full_model_name, version_name))
    return self._api.projects().models().versions().get(name=name).execute()

  def deploy(self, version_name, path):
    """Deploy a model version to the cloud.
    Args:
      version_name: the name of the version in short form, such as "v1".
      path: the Google Cloud Storage path (gs://...) which contains the model files.
    Raises: Exception if the path is invalid or does not contain expected files.
            Exception if the service returns invalid response.
    """
    if not path.startswith('gs://'):
      raise Exception('Invalid path. Only Google Cloud Storage path (gs://...) is accepted.')

    # If there is no "export.meta" or "saved_model.pb" under path but there is
    # path/model/export.meta or path/model/saved_model.pb, then append /model to the path.
    if (not datalab.storage.Item.from_url(os.path.join(path, 'export.meta')).exists() and
        not datalab.storage.Item.from_url(os.path.join(path, 'saved_model.pb')).exists()):
      if (datalab.storage.Item.from_url(os.path.join(path, 'model', 'export.meta')).exists() or
          datalab.storage.Item.from_url(os.path.join(path, 'model', 'saved_model.pb')).exists()):
        path = os.path.join(path, 'model')
      else:
        print('Cannot find export.meta or saved_model.pb, but continue with deployment anyway.')

    body = {'name': self._model_name}
    parent = 'projects/' + self._project_id
    try:
      self._api.projects().models().create(body=body, parent=parent).execute()
    except:
      # Trying to create an already existing model gets an error. Ignore it.
      pass
    body = {
        'name': version_name,
        'deployment_uri': path,
        'runtime_version': '1.0',
    }
    response = self._api.projects().models().versions().create(body=body,
        parent=self._full_model_name).execute()
    if 'name' not in response:
      raise Exception('Invalid response from service. "name" is not found.')
    _util.wait_for_long_running_operation(response['name'])

  def delete(self, version_name):
    """Delete a version of model.
    Args:
      version_name: the name of the version in short form, such as "v1".
    """
    name = ('%s/versions/%s' % (self._full_model_name, version_name))
    response = self._api.projects().models().versions().delete(name=name).execute()
    if 'name' not in response:
      raise Exception('Invalid response from service. "name" is not found.')
    _util.wait_for_long_running_operation(response['name'])

  def predict(self, version_name, data):
    """Get prediction results from features instances.
    Args:
      version_name: the name of the version used for prediction.
      data: typically a list of instance to be submitted for prediction. The format of the
          instance depends on the model. For example, structured data model may require
          a csv line for each instance.
          Note that online prediction only works on models that take one placeholder value,
          such as a string encoding a csv line.
    Returns:
      A list of prediction results for given instances. Each element is a dictionary representing
          output mapping from the graph.
      An example:
        [{"predictions": 1, "score": [0.00078, 0.71406, 0.28515]},
         {"predictions": 1, "score": [0.00244, 0.99634, 0.00121]}]
    """
    full_version_name = ('%s/versions/%s' % (self._full_model_name, version_name))
    request = self._api.projects().predict(body={'instances': data},
        name=full_version_name)
    request.headers['user-agent'] = 'GoogleCloudDataLab/1.0'
    result = request.execute()
    if 'predictions' not in result:
      raise Exception('Invalid response from service. Cannot find "predictions" in response.')

    return result['predictions']

  def describe(self, version_name):
    """Print information of a specified model.
    Args:
      version: the name of the version in short form, such as "v1".
    """
    version_yaml = yaml.safe_dump(self.get_version_details(version_name),
        default_flow_style=False)
    print version_yaml

  def list(self):
    """List versions under the current model in a table view.
    Raises:
      Exception if it is called in a non-IPython environment.
    """
    import IPython

    # "self" is iterable (see __iter__() method).
    data = [{'name': version['name'].split()[-1],
             'deploymentUri': version['deploymentUri'], 'createTime': version['createTime']}
            for version in self]
    IPython.display.display(
        datalab.utils.commands.render_dictionary(data, ['name', 'deploymentUri', 'createTime']))
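
A short usage sketch for the API added above (the model name, version, and GCS path are placeholders):

```py
import datalab.ml as ml

# List models in the default project and inspect one.
models = ml.Models()
for model in models.get_iterator():
  print(model['name'])
models.describe('census')   # placeholder model name

# Deploy a trained model directory from GCS, then run online prediction.
versions = ml.ModelVersions('census')
versions.deploy('v1', 'gs://my-bucket/census/model')                       # placeholder path
print(versions.predict('v1', ['25,Private,226802,11th,7,Never-married']))  # placeholder csv instance
```
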
47 changes: 47 additions & 0 deletions datalab/ml/_cloud_training_config.py
@@ -0,0 +1,47 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from collections import namedtuple

_CloudTrainingConfig = namedtuple("CloudConfig",
                                  ['region', 'scale_tier', 'master_type', 'worker_type',
                                   'parameter_server_type', 'worker_count', 'parameter_server_count'])
_CloudTrainingConfig.__new__.__defaults__ = ('BASIC', None, None, None, None, None)


class CloudTrainingConfig(_CloudTrainingConfig):
  """A config namedtuple containing cloud specific configurations for CloudML training.
  Fields:
    region: the region of the training job to be submitted. For example, "us-central1".
        Run "gcloud compute regions list" to get a list of regions.
    scale_tier: Specifies the machine types, the number of replicas for workers and
        parameter servers. For example, "STANDARD_1". See
        https://cloud.google.com/ml/reference/rest/v1beta1/projects.jobs#scaletier
        for list of accepted values.
    master_type: specifies the type of virtual machine to use for your training
        job's master worker. Must set this value when scale_tier is set to CUSTOM.
        See the link in "scale_tier".
    worker_type: specifies the type of virtual machine to use for your training
        job's worker nodes. Must set this value when scale_tier is set to CUSTOM.
    parameter_server_type: specifies the type of virtual machine to use for your training
        job's parameter server. Must set this value when scale_tier is set to CUSTOM.
    worker_count: the number of worker replicas to use for the training job. Each
        replica in the cluster will be of the type specified in "worker_type".
        Must set this value when scale_tier is set to CUSTOM.
    parameter_server_count: the number of parameter server replicas to use. Each
        replica in the cluster will be of the type specified in "parameter_server_type".
        Must set this value when scale_tier is set to CUSTOM.
  """
  pass
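
For example, constructing the config added above (the region and machine types are placeholders; fields left out fall back to the namedtuple defaults):

```py
import datalab.ml as ml

# BASIC tier: only the region needs to be specified.
basic = ml.CloudTrainingConfig(region='us-central1')

# CUSTOM tier: machine types and replica counts must be spelled out.
custom = ml.CloudTrainingConfig(
    region='us-central1',
    scale_tier='CUSTOM',
    master_type='complex_model_m',
    worker_type='complex_model_m',
    parameter_server_type='large_model',
    worker_count=4,
    parameter_server_count=2)
```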
