This repository has been archived by the owner on Sep 3, 2022. It is now read-only.

new preprocessing and training for structured data #160

Merged
merged 16 commits into cloudml from cloudmlsd on Feb 7, 2017

Conversation

brandondutra
Contributor

Prediction is not done yet.

preprocessing:
--local uses pandas
--cloud: if given a BigQuery table, runs the analysis on that table. If given CSV files, uses federated tables.

training:
--starts from raw CSV data
--all transformations are done in training.
--transform types depend on whether a linear or DNN model is used.

@brandondutra
Contributor Author

schema file example
[
  {"mode": "REQUIRED", "name": "key", "type": "STRING"},
  {"mode": "NULLABLE", "name": "target", "type": "INTEGER"},
  {"mode": "NULLABLE", "name": "num1", "type": "FLOAT"},
  {"mode": "NULLABLE", "name": "num2", "type": "INTEGER"},
  {"mode": "NULLABLE", "name": "num3", "type": "FLOAT"},
  {"mode": "NULLABLE", "name": "str1", "type": "STRING"},
  {"mode": "NULLABLE", "name": "str2", "type": "STRING"},
  {"mode": "NULLABLE", "name": "str3", "type": "STRING"}
]

@brandondutra
Contributor Author

feature types example (consumed by preprocessing); "default" and "type" are required for each column
{
  "key": {"default": -1, "type": "key"},
  "target": {"default": "unknown", "type": "categorical"},
  "num1": {"default": 0.0, "type": "numerical"},
  "num2": {"default": 0, "type": "numerical"},
  "num3": {"default": 0.0, "type": "numerical"},
  "str1": {"default": "black", "type": "categorical"},
  "str2": {"default": "abc", "type": "categorical"},
  "str3": {"default": "car", "type": "categorical"}
}

@brandondutra
Contributor Author

example transforms file
{
  "num1": {"transform": "scale"},
  "num2": {"transform": "max_abs_scale", "value": 4},
  "str1": {"transform": "one_hot"},
  "str2": {"transform": "embedding", "embedding_dim": 3},
  "target": {"transform": "target"}
}

supported numerical transforms: scale, max_abs_scale, identity (default)
supported categorical transforms for dnn models: hash_embedding, embedding, hash_one_hot, one_hot (default)
supported categorical transforms for linear models: hash_sparse, sparse (default)
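
As a rough illustration of how those defaults and restrictions fit together, here is a minimal validation sketch (a hypothetical helper, not the package's actual code):

# Hypothetical sketch: pick the default transform for a column and check that a
# requested transform is allowed for the chosen model family.
NUMERICAL = {'scale', 'max_abs_scale', 'identity'}
CATEGORICAL_DNN = {'hash_embedding', 'embedding', 'hash_one_hot', 'one_hot'}
CATEGORICAL_LINEAR = {'hash_sparse', 'sparse'}


def resolve_transform(column_type, transform=None, model_is_dnn=True):
  if column_type == 'numerical':
    allowed, default = NUMERICAL, 'identity'
  elif model_is_dnn:
    allowed, default = CATEGORICAL_DNN, 'one_hot'
  else:
    allowed, default = CATEGORICAL_LINEAR, 'sparse'
  transform = transform or default
  if transform not in allowed:
    raise ValueError('Unsupported transform %r for %s column' % (transform, column_type))
  return transform


print(resolve_transform('numerical'))                        # identity
print(resolve_transform('categorical', 'embedding'))         # embedding
print(resolve_transform('categorical', model_is_dnn=False))  # sparse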

@qimingj qimingj left a comment
Contributor

Very nice! A few observations, and some of them can be done in separate changes if needed.

import argparse
import os
import sys
import json
Contributor

Sort import.

Contributor Author

done.


if args.bigquery_table:
if args.schema_file or args.input_file_pattern :
raise ValueError('If using --bigquery_table, then --schema_file and '
Contributor

We should also support a BigQuery query (DataFlow supports that) in addition to a table, so users can create ad-hoc training data. We can add that in a separate change.

Contributor Author

added a todo


id_name = bigquery_table.split(':')
if len(id_name) != 2:
raise ValueError('Bigquery table name should be in the form '
Contributor

project_id should be optional. It defaults to Datalab's default project.

Contributor Author

It is; cloud_preprocess in _package pulls in the default project if it is missing.
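
For reference, a minimal sketch of that behavior (the helper below is hypothetical; the real logic lives in cloud_preprocess):

# Hypothetical sketch: make project_id optional in "project_id:dataset.table",
# falling back to the default project when it is omitted.
def parse_bigquery_table(bigquery_table, default_project_id):
  parts = bigquery_table.split(':')
  if len(parts) == 2:
    project_id, dataset_table = parts
  elif len(parts) == 1:
    project_id, dataset_table = default_project_id, parts[0]
  else:
    raise ValueError('BigQuery table name should be in the form '
                     '[project_id:]dataset.table')
  return project_id, dataset_table


print(parse_bigquery_table('my-project:my_dataset.my_table', 'default-project'))
print(parse_bigquery_table('my_dataset.my_table', 'default-project'))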

type=str,
required=True,
help='Google Cloud Storage which to place outputs.')
parser.add_argument('--input_feature_types',
Contributor

input_feature_file?

help='Google Cloud Storage which to place outputs.')
parser.add_argument('--input_feature_types',
type=str,
required=True,
Contributor

I hope this file can be optional or be replaced by a single in-memory arg.

-- Based on the schema file, we can infer types: integer/float/bool --> numerical, string --> categorical. If integers need to be treated as categorical, there are several options:

  1. Of course, supply the feature types file.
  2. For BigQuery, use "SELECT STRING(int_col) FROM my_table" to convert the integer to a string.
  3. For CSV, use string instead of int for that column in the schema file.

-- We can take an optional parameter "skip_analysis" whose value is a list of columns to skip, such as the key column.

But we would still output the feature types file since training will use it.

Thoughts?

Contributor Author

I'll hold off on making things optional for now. Feature types also encode the default values for training, and picking default values automatically is not something I want to do, especially for categorical columns. I want to revisit this later.
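
For reference, a minimal sketch of the schema-based inference suggested above (not adopted in this change; the function name and skip_analysis handling are assumptions):

# Hypothetical sketch: derive feature types from a BigQuery-style schema list,
# treating numbers/booleans as numerical and everything else as categorical.
def infer_feature_types(schema, skip_analysis=()):
  numeric_bq_types = {'INTEGER', 'FLOAT', 'BOOLEAN'}
  feature_types = {}
  for column in schema:
    name = column['name']
    if name in skip_analysis:
      feature_types[name] = {'type': 'key'}
    elif column['type'] in numeric_bq_types:
      feature_types[name] = {'type': 'numerical'}
    else:
      feature_types[name] = {'type': 'categorical'}
  return feature_types


schema = [{'name': 'key', 'type': 'STRING'},
          {'name': 'num1', 'type': 'FLOAT'},
          {'name': 'str1', 'type': 'STRING'}]
print(infer_feature_types(schema, skip_analysis=['key']))
# {'key': {'type': 'key'}, 'num1': {'type': 'numerical'}, 'str1': {'type': 'categorical'}}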


# record_defaults are used by tf.decode_csv to insert defaults, and to infer
# the datatype.
record_defaults = [[train_config['csv_defaults'][name]]
Contributor

Can csv_defaults be optional too? We could use default values based on the schema, or require that there are no missing values when defaults are not provided.

Contributor Author

I have to have a default value when reading data from CSV in TF, so there is no way of knowing at training time whether data is missing. Picking a bad default value would make a model useless, so I would rather just ask the user for these values.
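
To illustrate the point, a minimal sketch of how record_defaults drives tf.decode_csv (TF 1.x-era API as used at the time; the row and defaults are made up to match the example schema above):

import tensorflow as tf

# record_defaults both fills in empty CSV fields and fixes each column's dtype,
# so a "missing" value is indistinguishable from a real one during training.
# Columns: key, target, num1, num2, num3, str1, str2, str3.
record_defaults = [['-1'], [0], [0.0], [0], [0.0], ['black'], ['abc'], ['car']]
csv_rows = tf.constant(['k1,1,0.5,2,,red,xyz,truck'])  # num3 is empty
columns = tf.decode_csv(csv_rows, record_defaults=record_defaults)

with tf.Session() as sess:
  print(sess.run(columns))  # the empty num3 field comes back as its default, 0.0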

# compression_type=tf.python_io.TFRecordCompressionType.GZIP)
#example_id, encoded_example = tf.TFRecordReader(options=options).read_up_to(
# filename_queue, batch_size)

Contributor

remove?

Contributor Author

done

range_max=numerical_anlysis[name]['max'],
scale_min=-1,
scale_max=1)
elif transform_name == 'max_abs_scale':
Contributor

I wonder if we can combine scale and max_abs_scale, and make scale value default to 1?

Contributor Author

good idea.
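
For comparison, a minimal sketch of the two transforms (formulas assumed from the snippet above, not taken from the package):

import numpy as np

# 'col_min', 'col_max', and 'max_abs' would come from the numerical analysis files.
def scale(x, col_min, col_max, scale_min=-1.0, scale_max=1.0):
  # Linearly map [col_min, col_max] onto [scale_min, scale_max].
  return (x - col_min) / (col_max - col_min) * (scale_max - scale_min) + scale_min


def max_abs_scale(x, max_abs, value=1.0):
  # Divide by the largest absolute value seen in analysis, then stretch by 'value'.
  return x / max_abs * value


x = np.array([0.0, 5.0, 10.0])
print(scale(x, col_min=0.0, col_max=10.0))        # [-1.  0.  1.]
print(max_abs_scale(x, max_abs=10.0, value=4.0))  # [ 0.  2.  4.]

Combining them as suggested would mean a single scale transform with an optional value (defaulting to 1) that stretches the output range to [-value, value].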

raise ValueError('File %s not found in %s' % (CATEGORICAL_ANALYSIS % name, preprocess_output_dir))

df = pd.read_csv(StringIO(ml.util._file.load_file(vocab_file)),
header=0)
Contributor

header=0 means using the first line as header. It seems in local_preprocess, you have "df.to_csv(None, header=False)" so no header?

Contributor Author

Updated so that the vocab files contain one label per line, with no header and no trailing newline:
label1
label2
label3<EOF>
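
A small sketch of reading that layout (hedged; this is not the package's actual loader), where the row position serves as the label's integer id:

import pandas as pd
from io import StringIO

# Hypothetical loader for the header-less, index-less vocab layout above.
vocab_text = 'label1\nlabel2\nlabel3'
labels = pd.read_csv(StringIO(vocab_text), header=None, names=['label'])['label'].tolist()
label_to_id = {label: i for i, label in enumerate(labels)}
print(label_to_id)  # {'label1': 0, 'label2': 1, 'label3': 2}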


df = pd.read_csv(StringIO(ml.util._file.load_file(vocab_file)),
header=0)
index_values = df['index'].values.tolist()
Contributor

Perhaps we can just sort the labels so we don't need the index.

Contributor Author

removed the index.

@qimingj qimingj left a comment
Contributor

Great!



def run_numerical_categorical_analysis(args, feature_types, schema_list):
"""Makes the nuermical and categorical analysis files.
Contributor

typo

(name, str(transform_type)))
sys.exit(1)
# Do target transform
with tf.name_scope('categorical_feature_preprocess') as scope:
Contributor

Should be target_feature_preprocess?

@brandondutra brandondutra merged commit 1d0629d into cloudml Feb 7, 2017
@brandondutra brandondutra deleted the cloudmlsd branch February 7, 2017 17:55
qimingj pushed a commit that referenced this pull request Feb 13, 2017
* new preprocessing is done

next: work on training, and then update the tests

* saving work

* sw

* seems to be working, going to do tests next

* got preprocessing test working

* training test pass!!!

* added exported graph back in

* dl preprocessing for local, cloud/csv, cloud/bigquery DONE :)

* gcloud cloud training works

* cloud dl training working

* ops, this files should not be saved

* removed junk function

* sw

* review comments

* removed cloudml sdk usage + lint

* review comments
qimingj added a commit that referenced this pull request Feb 13, 2017
* Add gcs_copy_file() that is missing but is referenced in a couple of places. (#110)

* Add gcs_copy_file() that is missing but is referenced in a couple of places.

* Add DataFlow to pydatalab dependency list.

* Fix travis test errors by reimplementing gcs copy.

* Remove unnecessary shutil import.

* Flake8 configuration. Set max line length to 100. Ignore E111, E114 (#102)

* Add datalab user agent to CloudML trainer and predictor requests. (#112)

* Update oauth2client to 2.2.0 to satisfy cloudml in Cloud Datalab (#111)

* Update README.md (#114)

Added docs link.

* Generate reST documentation for magic commands (#113)

Auto generate docs for any added magics by searching through the source files for lines with register_line_cell_magic, capturing the names for those magics, and calling them inside an ipython kernel with the -h argument, then storing that output into a generated datalab.magics.rst file.

* Fix an issue that %%chart failed with UDF query. (#116)

* Fix an issue that %%chart failed with UDF query.

The problem is that the query is submitted to BQ without replacing variable values from user namespace.

* Fix chart tests by adding ip.user_ns mock.

* Fix charting test.

* Add missing import "mock".

* Fix chart tests.

* Fix "%%bigquery schema" issue --  the command generates nothing in output. (#119)

* Add some missing dependencies, remove some unused ones (#122)

* Remove scikit-learn and scipy as dependencies
* add more required packages
* Add psutil as dependency
* Update packages versions

* Cleanup (#123)

* Remove unnecessary semicolons

* remove unused imports

* remove unncessary defined variable

* Fix query_metadata tests (#128)

Fix query_metadata tests

* Make the library pip-installable (#125)

This PR adds tensorflow and cloudml in setup.py to make the lib pip-installable. I had to install them explicitly using pip from inside the setup.py script, even though it's not a clean way to do it, it gets around the two issues we have at the moment with these two packags:
- Pypi has Tensorflow version 0.12, while we need 0.11 for the current version of pydatalab. According to the Cloud ML docs, that version exists as a pip package for three supported platforms.
- Cloud ML SDK exists as a pip package, but also not on Pypi, and while we could add it as a dependency link, there exists another package on Pypi called cloudml, and pip ends up installing that instead (see #124). I cannot find a way to force pip to install the package from the link I included.

* Set command description so it is displayed in --help. argparser's format_help() prints description but not help. (#131)

* Fix an issue that setting project id from datalab does not set gcloud default project. (#136)

* Add future==0.16.0 as a dependency since it's required by CloudML SDK (#143)

As of the latest release of CloudML Python SDK, that package seems to require future==0.16.0, so until it's fixed, we'll take it as a dependency.

* Remove tensorflow and CloudML SDK from setup.py (#144)

* Install TensorFlow 0.12.1.

* Remove TensorFlow and CloudML SDK from setup.py.

* Add comments why we ignore errors when importing mlalpha.

* Adding evaluationanalysis API to generate evaluation stats from eval … (#99)

* Adding evaluationanalysis API to generate evaluation stats from eval source CSV file and eval results CSV file.

The resulting stats file will be fed to a visualization component which will come in a separate change.

* Follow up CR comments.

* Feature slicing view visualization component. (#109)

* Datalab Inception (image classification) solution. (#117)

* Datalab Inception (image classification) solution.

* Fix dataflow URL.

* Datalab "ml" magics for running a solution package. Update Inception Package. (#121)

* Datalab Inception (image classification) solution.

* Fix dataflow URL.

* Datalab "ml" magics for running a solution package.
 - Dump function args and docstrings
 - Run functions
Update Inception Package.
 - Added docstring on face functions.
 - Added batch prediction.
 - Use datalab's lib for talking to cloud training and prediction service.
 - More minor fixes and changes.

* Follow up code review comments.

* Fix an PackageRunner issue that temp installation is done multiple times unnecessarily.

* Update feature-slice-view supporting file, which fixes some stability UI issues. (#126)

* Remove old feature-slicing pipeline implementation (is replaced by BigQuery)  Add Confusion matrix magic. (#129)

* Remove old feature-slicing pipeline implementation (is replaced by BigQuery).
Add Confusion matrix magic.

* Follow up on code review comments. Also fix an inception issue that eval loss is nan when eval size is smaller than batch size.

* Fix set union.

* Mergemaster/cloudml (#134)

* Fix an issue that prediction right after preprocessing fails in inception package local run. (#135)

* add structure data preprocessing and training  (#132)

merging the preprocessing and training parts.

* first full-feature version of structured data is done (#139)

* added the preprocessing/training files.

Preprocessing is connected with datalab. Training is not fully connected
with datalab.

* added training interface.

* local/cloud training ready for review

* saving work

* saving work

* cloud online prediction is done.

* split config file into two (schema/transforms) and updated the
unittests.

* local preprocess/train working

* 1) merged --model_type and --problem_type
2) online/local prediction is done

* added batch prediction

* all prediction is done. Going to make a merge request next

* Update _package.py

removed some white space + add a print statement to  local_predict

* --preprocessing puts a copy of schema in the outut dir.
--no need to pass schema to train in datalab.

* tests can be run from any folder above the test folder by

python -m unittest discover

Also, the training test will parse the output of training and check that
the loss is small.

* Inception Package Improvements (#138)

* Fix an issue that prediction right after preprocessing fails in inception package local run.

* Remove the "labels_file" parameter from inception preprocess/train/predict. Instead it will get labels from training data. Prediction graph will return labels.
Make online prediction works with GCS images.
"%%ml alpha deploy" now also check for "/model" subdir if needed.
Other minor improvements.

* Make local batch prediction really batched.
Batch prediction input may not have to include target column.
Sort labels, so it is consistent between preprocessing and training.
Follow up other core review comments.

* Follow up code review comments.

* Remove old DataSet implementation. Create new DataSets. (#151)

* Remove old DataSet implementation.

The new Dataset will be used as data source for packages. All DataSets will be capable of sampling to DataFrame, so feature exploration can be done with other libraries.

* Raise error when sample is larger than number of rows.

* Inception package improvements (#155)

* Inception package improvements.

- It takes DataSets as input instead of CSV files. It also supports BigQuery source now.
- Changes to make latest DataFlow and TensorFlow happy.
- Changes in preprocessing to remove partial support for multiple labels.
- Other minor improments.

* Add a comment.

* Update feature slice view UI. Added Slices Overview. (#161)

* Move TensorBoard and TensorFlow Events UI rendering to Python function to deprecate magic. (#163)

* Update feature slice view UI. Added Slices Overview.

* Move TensorBoard and TensorFlow Events UI rendering to Python function to deprecate magic.

Use matplotlib for tf events plotting so it can display well in static HTML pages (such as github).

Improve TensorFlow Events list/get APIs.

* Follow up on CR comments.

* new preprocessing and training for structured data (#160)

* Move job, models, and feature_slice_view plotting to API. (#167)

* Move job, models, and feature_slice_view plotting to API.

* Follow up on CR comments.

* A util function to repackage and copy the package to staging location. (#169)

* A util function to repackage and copy the package to staging location, so in packages we can use the staging URL as package URL in cloud training.

* Follow up CR comments.

* Follow up CR comments.

* Move confusion matrix from %%ml to library. (#159)

* Move confusion matrix from %%ml to library.

This is part of efforts to move %%ml magic stuff to library to provide a consistent experience (python only).

* Add a comment.

* Improve inception package so there is no need to have an GCS copy of the package. Instead cloud training and preprocessing will repackage it from local installation and upload it to staging. (#175)

* Cloudmlsdp (#177)

* added the ',' graph hack

* sw

* batch prediction done

* sw

* review comments

* Add CloudTrainingConfig namedtuple to wrap cloud training configurations (#178)

* Add CloudTrainingConfig namedtuple to wrap cloud training configurations.

* Follow up code review comments.

* prediction update (#183)

* added the ',' graph hack

* sw

* batch prediction done

* sw

* review comments

* updated the the prediction graph keys, and makde the csvcoder not need
any other file.

* sw

* sw

* added newline

* review comments

* review comments

* trying to fix the Contributor License Agreement error.

* Inception Package Improvements (#186)

* Implement inception cloud batch prediction. Support explicit eval data in preprocessing.

* Follow up on CR comments. Also address changes from latest DataFlow.
qimingj pushed a commit that referenced this pull request Feb 22, 2017
qimingj added a commit that referenced this pull request Feb 27, 2017
* Cloudmlm (#152)

* Cloudmlmerge (#188)

* CsvDataSet no longer globs files in init. (#187)

* CsvDataSet no longer globs files in init.

* removed file_io, that fix will be done later

* removed junk lines

* sample uses .file

* fixed csv dataset def files()

* Update _dataset.py

* Move cloud trainer and predictor from their own classes to Job and Model respectively. (#192)

* Move cloud trainer and predictor from their own classes to Job and Model respectively.

Cloud trainer and predictor will be cleaned up in a seperate change.

* Rename CloudModels to Models, CloudModelVersions to ModelVersions. Move their iterator from self to get_iterator() method.

* Switch to cloudml v1 endpoint.

* Remove one comment.

* Follow up on CR comments. Fix a bug in datalab iterator that count keeps incrementing incorrectly.

* removed the feature type file  (#199)

* sw

* removed feature types file from preprocessing

* training: no longer needs the input types file
prediction: cloud batch works now

* updated the tests

* added amazing comment to local_train
check that target column is the first column

* transforms file is not optional on the DL side.

* comments

* comments

* Make inception to work with tf1.0. (#204)

* Workaround a TF summary issue. Force online prediction to use TF 1.0. (#209)

* sd package. Local everything is working.  (#211)

* sw

* sw

* Remove tf dependency from structured data setup.py. (#212)

* Workaround a TF summary issue. Force online prediction to use TF 1.0.

* Remove tf dependency from structured data setup.py.

* Cloudmld (#213)

* sw

* sw

* cloud uses 0.12.0rc? and local uses whatever is in datalab

* for local testing

* master_setup is copy of ../../setup.py

* Add a resize option for inception package to avoid sending large data to online prediction (#215)

* Add a resize option for inception package to avoid sending large data to online prediction.
Update Lantern browser.
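Illustration only (Pillow and the target size are assumptions, not the package's actual implementation): shrinking the image client-side keeps the online prediction request payload small.

```py
from PIL import Image


def resize_for_prediction(image_path, max_size=(256, 256)):
  """Downscale an image before sending it to online prediction."""
  img = Image.open(image_path)
  img.thumbnail(max_size)  # resizes in place, preserving aspect ratio
  resized_path = image_path + '.resized.jpg'
  img.convert('RGB').save(resized_path, 'JPEG')
  return resized_path
```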

* Follow up on code review comments and fix a bug for inception.

* Cleanup mlalpha APIs that are not needed. (#218)

* Inception package updates. (#219)

- Instead of hard-coding the setup.py path, duplicate it along with all py files, just like the structured data package.
- Use pip-installable TensorFlow 1.0 for packages.
- Fix some TF warnings.

* Cloudml Branch Merge From Master (#222)

* Add gcs_copy_file() that is missing but is referenced in a couple of places. (#110)

* Add gcs_copy_file() that is missing but is referenced in a couple of places.

* Add DataFlow to pydatalab dependency list.

* Fix travis test errors by reimplementing gcs copy.

* Remove unnecessary shutil import.

* Flake8 configuration. Set max line length to 100. Ignore E111, E114 (#102)

* Add datalab user agent to CloudML trainer and predictor requests. (#112)

* Update oauth2client to 2.2.0 to satisfy cloudml in Cloud Datalab (#111)

* Update README.md (#114)

Added docs link.

* Generate reST documentation for magic commands (#113)

Auto-generate docs for any added magics by searching the source files for lines with register_line_cell_magic, capturing the names of those magics, calling them inside an IPython kernel with the -h argument, and storing that output in a generated datalab.magics.rst file.
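A minimal sketch of the scanning step (the regex and the source path are illustrative assumptions):

```py
import re
from pathlib import Path

# Match function names decorated with @register_line_cell_magic.
MAGIC_RE = re.compile(r'@register_line_cell_magic\s*\ndef\s+(\w+)')


def find_magics(source_root):
  names = set()
  for path in Path(source_root).rglob('*.py'):
    names.update(MAGIC_RE.findall(path.read_text()))
  return sorted(names)


# Each discovered magic is then invoked with -h inside an IPython kernel and
# the captured help text is written to the generated datalab.magics.rst file.
print(find_magics('datalab'))
```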

* Fix an issue that %%chart failed with UDF query. (#116)

* Fix an issue that %%chart failed with UDF query.

The problem is that the query is submitted to BQ without replacing variable values from the user namespace.

* Fix chart tests by adding ip.user_ns mock.

* Fix charting test.

* Add missing import "mock".

* Fix chart tests.

* Fix "%%bigquery schema" issue --  the command generates nothing in output. (#119)

* Add some missing dependencies, remove some unused ones (#122)

* Remove scikit-learn and scipy as dependencies
* add more required packages
* Add psutil as dependency
* Update packages versions

* Cleanup (#123)

* Remove unnecessary semicolons

* remove unused imports

* remove an unnecessarily defined variable

* Fix query_metadata tests (#128)

Fix query_metadata tests

* Make the library pip-installable (#125)

This PR adds tensorflow and cloudml in setup.py to make the lib pip-installable. I had to install them explicitly using pip from inside the setup.py script; even though it's not a clean way to do it, it gets around the two issues we have at the moment with these two packages (a rough sketch of the workaround follows the list below):
- PyPI has TensorFlow version 0.12, while we need 0.11 for the current version of pydatalab. According to the Cloud ML docs, that version exists as a pip package for three supported platforms.
- The Cloud ML SDK exists as a pip package, but it is also not on PyPI, and while we could add it as a dependency link, there is another package on PyPI called cloudml, and pip ends up installing that instead (see #124). I cannot find a way to force pip to install the package from the link I included.
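A rough sketch of the workaround; the package specs and wheel URLs are placeholders, not the ones actually used:

```py
import subprocess
import sys

from setuptools import setup


def pip_install(spec):
  # Shell out to pip so we can install packages that are either not on PyPI
  # or whose PyPI name collides with an unrelated project.
  subprocess.check_call([sys.executable, '-m', 'pip', 'install', spec])


# Placeholder specs; the real ones point at the TensorFlow 0.11 wheel for the
# supported platforms and at the Cloud ML SDK package.
pip_install('https://example.com/tensorflow-0.11.0-py2-none-any.whl')
pip_install('https://example.com/cloudml-sdk.tar.gz')

setup(name='datalab', version='0.1.0', packages=['datalab'])
```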

* Set the command description so it is displayed in --help. argparse's format_help() prints the description but not the help. (#131)

* Fix an issue that setting project id from datalab does not set gcloud default project. (#136)

* Add future==0.16.0 as a dependency since it's required by CloudML SDK (#143)

As of the latest release of CloudML Python SDK, that package seems to require future==0.16.0, so until it's fixed, we'll take it as a dependency.

* Remove tensorflow and CloudML SDK from setup.py (#144)

* Install TensorFlow 0.12.1.

* Remove TensorFlow and CloudML SDK from setup.py.

* Add comments why we ignore errors when importing mlalpha.

* Fix project_id from `gcloud config` in py3 (#194)

- `Popen.stdout` is a `bytes` in py3, needs `.decode()` (a minimal sketch of the fix follows the before/after examples below)

- Before:
```py
>>> %%sql -d standard
... select 3
Your active configuration is: [test]

HTTP request failed: Invalid project ID 'b'foo-bar''. Project IDs must contain 6-63 lowercase letters, digits, or dashes. IDs must start with a letter and may not end with a dash.
```
```sh
$ for p in python2 python3; do $p -c 'from datalab.context._utils import get_project_id; print(get_project_id())'; done
Your active configuration is: [test]

foo-bar
Your active configuration is: [test]

b'foo-bar'
```

- After:
```py
>>> %%sql -d standard
... select 3
Your active configuration is: [test]

QueryResultsTable job_1_bZNbAUtk8QzlK7bqWD5fz7S5o
```
```sh
$ for p in python2 python3; do $p -c 'from datalab.context._utils import get_project_id; print(get_project_id())'; done
Your active configuration is: [test]

foo-bar
Your active configuration is: [test]

foo-bar
```
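A minimal sketch of the fix; the exact gcloud invocation is an assumption:

```py
import subprocess


def get_project_id():
  proc = subprocess.Popen(
      ['gcloud', 'config', 'list', '--format=value(core.project)'],
      stdout=subprocess.PIPE)
  stdout, _ = proc.communicate()
  # Popen.stdout yields str on py2 but bytes on py3; decode before use so the
  # project id is not rendered as b'...'.
  if isinstance(stdout, bytes):
    stdout = stdout.decode('utf-8')
  return stdout.strip()
```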

* Use http Keep-Alive, else BigQuery queries are ~seconds slower than necessary (#195)

- Before (without Keep-Alive): ~3-7s for BigQuery `select 3` with an already cached result
- After (with Keep-Alive): ~1.5-3s
- Query sends these 6 HTTP requests and runtime appears to be dominated by network RTT (see the sketch below)
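An illustration of the idea using requests.Session (an assumption; not necessarily the HTTP client pydatalab uses): reusing one session keeps the connection open, so repeated API calls skip TCP/TLS setup.

```py
import requests

# One session per client: the underlying connection pool sends
# 'Connection: keep-alive' and reuses sockets across requests.
_session = requests.Session()


def api_get(url, **kwargs):
  return _session.get(url, **kwargs)
```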

* cast string to int (#217)

`table.insert_data(df)` inserts data correctly but raises TypeError: unorderable types: str() > int()
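A hypothetical illustration of the cast: values read as strings need to be coerced to the numeric type of the target BigQuery column before insertion.

```py
import pandas as pd

df = pd.DataFrame({'count': ['1', '2', '3']})  # values parsed as strings
df['count'] = pd.to_numeric(df['count'])       # cast so they match an INTEGER column
```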

* Remove CloudML SDK as dependency for PyDatalab. (#227)

* Remove CloudML dependency from Inception. (#225)

* TensorFlow's save_model no longer creates export.meta, so disable the check in deploying models. (#228)

* TensorFlow's save_model no longer creates export.meta, so disable the check in deploying models.

* Also check for saved_model.pb for deployment.
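A sketch of the relaxed check (the helper name is hypothetical): accept a model directory that contains either the legacy export.meta or the newer saved_model.pb.

```py
import os


def looks_like_exported_model(model_dir):
  return any(os.path.exists(os.path.join(model_dir, name))
             for name in ('export.meta', 'saved_model.pb'))
```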

* Cloudmlsm (#229)

* csv prediction graph done

* csv works, but not json!!!

* sw, train working

* cloud training working

* finished census sample, cleaned up interface

* review comments

* small fixes to sd (#231)

* small fixes

* more small fixes

* Rename from mlalpha to ml. (#232)

* fixed prediction (#235)

* small fixes (#236)

* 1) prediction 'key_from_input' is now the true key name
2) DF prediction now makes a csv_schema.json file
3) removed a function that was not used.

* update csv_schema.json in _package too

* Cloudmlmerge (#238)

* Add gcs_copy_file() that is missing but is referenced in a couple of places. (#110)

* Add gcs_copy_file() that is missing but is referenced in a couple of places.

* Add DataFlow to pydatalab dependency list.

* Fix travis test errors by reimplementing gcs copy.

* Remove unnecessary shutil import.

* Flake8 configuration. Set max line length to 100. Ignore E111, E114 (#102)

* Add datalab user agent to CloudML trainer and predictor requests. (#112)

* Update oauth2client to 2.2.0 to satisfy cloudml in Cloud Datalab (#111)

* Update README.md (#114)

Added docs link.

* Generate reST documentation for magic commands (#113)

Auto-generate docs for any added magics by searching the source files for lines with register_line_cell_magic, capturing the names of those magics, calling them inside an IPython kernel with the -h argument, and storing that output in a generated datalab.magics.rst file.

* Fix an issue that %%chart failed with UDF query. (#116)

* Fix an issue that %%chart failed with UDF query.

The problem is that the query is submitted to BQ without replacing variable values from the user namespace.

* Fix chart tests by adding ip.user_ns mock.

* Fix charting test.

* Add missing import "mock".

* Fix chart tests.

* Fix "%%bigquery schema" issue --  the command generates nothing in output. (#119)

* Add some missing dependencies, remove some unused ones (#122)

* Remove scikit-learn and scipy as dependencies
* add more required packages
* Add psutil as dependency
* Update packages versions

* Cleanup (#123)

* Remove unnecessary semicolons

* remove unused imports

* remove an unnecessarily defined variable

* Fix query_metadata tests (#128)

Fix query_metadata tests

* Make the library pip-installable (#125)

This PR adds tensorflow and cloudml in setup.py to make the lib pip-installable. I had to install them explicitly using pip from inside the setup.py script; even though it's not a clean way to do it, it gets around the two issues we have at the moment with these two packages:
- PyPI has TensorFlow version 0.12, while we need 0.11 for the current version of pydatalab. According to the Cloud ML docs, that version exists as a pip package for three supported platforms.
- The Cloud ML SDK exists as a pip package, but it is also not on PyPI, and while we could add it as a dependency link, there is another package on PyPI called cloudml, and pip ends up installing that instead (see #124). I cannot find a way to force pip to install the package from the link I included.

* Set the command description so it is displayed in --help. argparse's format_help() prints the description but not the help. (#131)

* Fix an issue that setting project id from datalab does not set gcloud default project. (#136)

* Add future==0.16.0 as a dependency since it's required by CloudML SDK (#143)

As of the latest release of CloudML Python SDK, that package seems to require future==0.16.0, so until it's fixed, we'll take it as a dependency.

* Remove tensorflow and CloudML SDK from setup.py (#144)

* Install TensorFlow 0.12.1.

* Remove TensorFlow and CloudML SDK from setup.py.

* Add comments why we ignore errors when importing mlalpha.

* Fix project_id from `gcloud config` in py3 (#194)

- `Popen.stdout` is a `bytes` in py3, needs `.decode()`

- Before:
```py
>>> %%sql -d standard
... select 3
Your active configuration is: [test]

HTTP request failed: Invalid project ID 'b'foo-bar''. Project IDs must contain 6-63 lowercase letters, digits, or dashes. IDs must start with a letter and may not end with a dash.
```
```sh
$ for p in python2 python3; do $p -c 'from datalab.context._utils import get_project_id; print(get_project_id())'; done
Your active configuration is: [test]

foo-bar
Your active configuration is: [test]

b'foo-bar'
```

- After:
```py
>>> %%sql -d standard
... select 3
Your active configuration is: [test]

QueryResultsTable job_1_bZNbAUtk8QzlK7bqWD5fz7S5o
```
```sh
$ for p in python2 python3; do $p -c 'from datalab.context._utils import get_project_id; print(get_project_id())'; done
Your active configuration is: [test]

foo-bar
Your active configuration is: [test]

foo-bar
```

* Use http Keep-Alive, else BigQuery queries are ~seconds slower than necessary (#195)

- Before (without Keep-Alive): ~3-7s for BigQuery `select 3` with an already cached result
- After (with Keep-Alive): ~1.5-3s
- Query sends these 6 HTTP requests and runtime appears to be dominated by network RTT

* cast string to int (#217)

`table.insert_data(df)` inserts data correctly but raises TypeError: unorderable types: str() > int()

* bigquery.Api: Remove unused _DEFAULT_PAGE_SIZE (#221)

Test plan:
- Unit tests still pass