This repository has been archived by the owner on Sep 3, 2022. It is now read-only.

Fix project_id from gcloud config in py3 #194

Merged
merged 1 commit into googledatalab:master from jdanbrown:pr-fix-project-id-py3 on Feb 14, 2017

Conversation

jdanbrown
Contributor

- `Popen.stdout` yields `bytes` in py3 and needs `.decode()`

- Before:
```py
>>> %%sql -d standard
... select 3
Your active configuration is: [test]

HTTP request failed: Invalid project ID 'b'foo-bar''. Project IDs must contain 6-63 lowercase letters, digits, or dashes. IDs must start with a letter and may not end with a dash.
```
```sh
$ for p in python2 python3; do $p -c 'from datalab.context._utils import get_project_id; print(get_project_id())'; done
Your active configuration is: [test]

foo-bar
Your active configuration is: [test]

b'foo-bar'
```

- After:
```py
>>> %%sql -d standard
... select 3
Your active configuration is: [test]

QueryResultsTable job_1_bZNbAUtk8QzlK7bqWD5fz7S5o
```
```sh
$ for p in python2 python3; do $p -c 'from datalab.context._utils import get_project_id; print(get_project_id())'; done
Your active configuration is: [test]

foo-bar
Your active configuration is: [test]

foo-bar
```
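
For context, the py2/py3-compatible pattern is to decode the captured output before using it. A minimal sketch of the idea, assuming a plain `gcloud config` invocation (the exact command and parsing in `datalab/context/_utils.py` may differ):

```py
import subprocess

def get_project_id():
    # Ask gcloud for the default project. In py3, reading stdout yields
    # bytes; in py2 it yields str. (gcloud's "Your active configuration"
    # notice goes to stderr, so it is not captured here.)
    proc = subprocess.Popen(
        ['gcloud', 'config', 'list', '--format', 'value(core.project)'],
        stdout=subprocess.PIPE)
    stdout, _ = proc.communicate()
    # .decode() turns py3 bytes into str (and py2 str into unicode),
    # avoiding project ids that render as "b'foo-bar'".
    return stdout.decode('utf-8').strip()
```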
Contributor

@qimingj qimingj left a comment

Thanks!

@qimingj qimingj merged commit 62716e9 into googledatalab:master Feb 14, 2017
@jdanbrown jdanbrown deleted the pr-fix-project-id-py3 branch February 14, 2017 18:29
yebrahim pushed a commit to yebrahim/pydatalab that referenced this pull request Feb 15, 2017
qimingj added a commit that referenced this pull request Feb 22, 2017
* Add gcs_copy_file() that is missing but is referenced in a couple of places. (#110)

* Add gcs_copy_file() that is missing but is referenced in a couple of places.

* Add DataFlow to pydatalab dependency list.

* Fix travis test errors by reimplementing gcs copy.

* Remove unnecessary shutil import.

* Flake8 configuration. Set max line length to 100. Ignore E111, E114 (#102)

* Add datalab user agent to CloudML trainer and predictor requests. (#112)

* Update oauth2client to 2.2.0 to satisfy cloudml in Cloud Datalab (#111)

* Update README.md (#114)

Added docs link.

* Generate reST documentation for magic commands (#113)

Auto-generate docs for any added magics by searching the source files for lines with register_line_cell_magic, capturing the names of those magics, calling them inside an ipython kernel with the -h argument, and storing that output in a generated datalab.magics.rst file.
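
The name-collection half of this is easy to picture. A toy sketch, with illustrative paths and regex (the real generator also runs each magic with -h inside an ipython kernel, which is omitted here):

```py
import glob
import re

# Scan source files for register_line_cell_magic usage and collect the
# decorated function names. The 'datalab' path is illustrative.
magic_names = []
for path in glob.glob('datalab/**/*.py', recursive=True):
    with open(path) as f:
        src = f.read()
    # Matches "@register_line_cell_magic" directly followed by "def <name>".
    magic_names += re.findall(r'register_line_cell_magic\s*\ndef\s+(\w+)', src)

print(sorted(set(magic_names)))  # the magics to document in datalab.magics.rst
```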

* Fix an issue that %%chart failed with UDF query. (#116)

* Fix an issue that %%chart failed with UDF query.

The problem is that the query is submitted to BQ without replacing variable values from user namespace.

* Fix chart tests by adding ip.user_ns mock.

* Fix charting test.

* Add missing import "mock".

* Fix chart tests.

* Fix "%%bigquery schema" issue --  the command generates nothing in output. (#119)

* Add some missing dependencies, remove some unused ones (#122)

* Remove scikit-learn and scipy as dependencies
* add more required packages
* Add psutil as dependency
* Update packages versions

* Cleanup (#123)

* Remove unnecessary semicolons

* remove unused imports

* remove unnecessarily defined variable

* Fix query_metadata tests (#128)

Fix query_metadata tests

* Make the library pip-installable (#125)

This PR adds tensorflow and cloudml to setup.py to make the lib pip-installable. I had to install them explicitly using pip from inside the setup.py script; it's not a clean way to do it, but it gets around the two issues we have at the moment with these two packages:
- Pypi has Tensorflow version 0.12, while we need 0.11 for the current version of pydatalab. According to the Cloud ML docs, that version exists as a pip package for three supported platforms.
- Cloud ML SDK exists as a pip package, but also not on Pypi, and while we could add it as a dependency link, there exists another package on Pypi called cloudml, and pip ends up installing that instead (see #124). I cannot find a way to force pip to install the package from the link I included.

* Set command description so it is displayed in --help. argparser's format_help() prints description but not help. (#131)

* Fix an issue that setting project id from datalab does not set gcloud default project. (#136)

* Add future==0.16.0 as a dependency since it's required by CloudML SDK (#143)

As of the latest release of CloudML Python SDK, that package seems to require future==0.16.0, so until it's fixed, we'll take it as a dependency.

* Remove tensorflow and CloudML SDK from setup.py (#144)

* Install TensorFlow 0.12.1.

* Remove TensorFlow and CloudML SDK from setup.py.

* Add comments why we ignore errors when importing mlalpha.

* Fix project_id from `gcloud config` in py3 (#194)

* Use http Keep-Alive, else BigQuery queries are ~seconds slower than necessary (#195)

- Before (without Keep-Alive): ~3-7s for BigQuery `select 3` with an already cached result
- After (with Keep-Alive): ~1.5-3s
- A query sends ~6 HTTP requests, and runtime appears to be dominated by network RTT
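
The mechanism is plain connection reuse: with Keep-Alive, the handful of small API calls behind one query share a single TCP/TLS connection instead of paying a fresh handshake per request. A rough illustration with `requests.Session` (illustrative only, not necessarily the HTTP stack pydatalab uses):

```py
import requests

# A Session keeps a connection pool, so consecutive requests to the same
# host reuse one TCP/TLS connection (Keep-Alive) instead of reconnecting.
session = requests.Session()
for _ in range(6):  # roughly the number of API calls behind one query
    # Returns 401 without credentials, but still exercises connection reuse.
    session.get('https://www.googleapis.com/bigquery/v2/projects')
```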

* cast string to int (#217)

`table.insert_data(df)` inserts data correctly but raises TypeError: unorderable types: str() > int()
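
The TypeError reflects a py3 semantics change: py2 ordered mismatched types arbitrarily but consistently, while py3 refuses to compare them (the "unorderable types" wording is from py3 <= 3.5; 3.6+ phrases it differently), so values that may be compared against ints have to be cast first. For example:

```py
>>> 'foo' > 1   # py2: True; py3 raises
Traceback (most recent call last):
  ...
TypeError: unorderable types: str() > int()
```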
qimingj added a commit that referenced this pull request Feb 25, 2017
* bigquery.Api: Remove unused _DEFAULT_PAGE_SIZE (#221)

Test plan:
- Unit tests still pass
qimingj added a commit that referenced this pull request Feb 27, 2017
* Adding evaluationanalysis API to generate evaluation stats from eval … (#99)

* Adding evaluationanalysis API to generate evaluation stats from eval source CSV file and eval results CSV file.

The resulting stats file will be fed to a visualization component which will come in a separate change.

* Follow up CR comments.

* Feature slicing view visualization component. (#109)

* Datalab Inception (image classification) solution. (#117)

* Datalab Inception (image classification) solution.

* Fix dataflow URL.

* Datalab "ml" magics for running a solution package. Update Inception Package. (#121)

* Datalab Inception (image classification) solution.

* Fix dataflow URL.

* Datalab "ml" magics for running a solution package.
 - Dump function args and docstrings
 - Run functions
Update Inception Package.
 - Added docstring on face functions.
 - Added batch prediction.
 - Use datalab's lib for talking to cloud training and prediction service.
 - More minor fixes and changes.

* Follow up code review comments.

* Fix a PackageRunner issue where temp installation is done multiple times unnecessarily.

* Update feature-slice-view supporting file, which fixes some stability UI issues. (#126)

* Remove old feature-slicing pipeline implementation (is replaced by BigQuery)  Add Confusion matrix magic. (#129)

* Remove old feature-slicing pipeline implementation (is replaced by BigQuery).
Add Confusion matrix magic.

* Follow up on code review comments. Also fix an inception issue that eval loss is nan when eval size is smaller than batch size.

* Fix set union.

* Mergemaster/cloudml (#134)

* Fix an issue that prediction right after preprocessing fails in inception package local run. (#135)

* add structure data preprocessing and training  (#132)

merging the preprocessing and training parts.

* first full-feature version of structured data is done (#139)

* added the preprocessing/training files.

Preprocessing is connected with datalab. Training is not fully connected
with datalab.

* added training interface.

* local/cloud training ready for review

* saving work

* saving work

* cloud online prediction is done.

* split config file into two (schema/transforms) and updated the
unittests.

* local preprocess/train working

* 1) merged --model_type and --problem_type
2) online/local prediction is done

* added batch prediction

* all prediction is done. Going to make a merge request next

* Update _package.py

removed some white space + add a print statement to  local_predict

* --preprocessing puts a copy of schema in the output dir.
--no need to pass schema to train in datalab.

* tests can be run from any folder above the test folder by

python -m unittest discover

Also, the training test will parse the output of training and check that
the loss is small.

* Inception Package Improvements (#138)

* Fix an issue that prediction right after preprocessing fails in inception package local run.

* Remove the "labels_file" parameter from inception preprocess/train/predict. Instead it will get labels from training data. Prediction graph will return labels.
Make online prediction works with GCS images.
"%%ml alpha deploy" now also check for "/model" subdir if needed.
Other minor improvements.

* Make local batch prediction really batched.
Batch prediction input may not have to include target column.
Sort labels, so it is consistent between preprocessing and training.
Follow up other core review comments.

* Follow up code review comments.

* Cloudmlm (#152)

* Remove old DataSet implementation. Create new DataSets. (#151)

* Remove old DataSet implementation.

The new Dataset will be used as data source for packages. All DataSets will be capable of sampling to DataFrame, so feature exploration can be done with other libraries.

* Raise error when sample is larger than number of rows.

* Inception package improvements (#155)

* Inception package improvements.

- It takes DataSets as input instead of CSV files. It also supports BigQuery source now.
- Changes to make latest DataFlow and TensorFlow happy.
- Changes in preprocessing to remove partial support for multiple labels.
- Other minor improvements.

* Add a comment.

* Update feature slice view UI. Added Slices Overview. (#161)

* Move TensorBoard and TensorFlow Events UI rendering to Python function to deprecate magic. (#163)

* Update feature slice view UI. Added Slices Overview.

* Move TensorBoard and TensorFlow Events UI rendering to Python function to deprecate magic.

Use matplotlib for tf events plotting so it can display well in static HTML pages (such as github).

Improve TensorFlow Events list/get APIs.

* Follow up on CR comments.

* new preprocessing and training for structured data (#160)

* new preprocessing is done

next: work on training, and then update the tests

* saving work

* sw

* seems to be working, going to do tests next

* got preprocessing test working

* training test pass!!!

* added exported graph back in

* dl preprocessing for local, cloud/csv, cloud/bigquery DONE :)

* gcloud cloud training works

* cloud dl training working

* oops, these files should not be saved

* removed junk function

* sw

* review comments

* removed cloudml sdk usage + lint

* review comments

* Move job, models, and feature_slice_view plotting to API. (#167)

* Move job, models, and feature_slice_view plotting to API.

* Follow up on CR comments.

* A util function to repackage and copy the package to staging location. (#169)

* A util function to repackage and copy the package to staging location, so in packages we can use the staging URL as package URL in cloud training.

* Follow up CR comments.

* Follow up CR comments.

* Move confusion matrix from %%ml to library. (#159)

* Move confusion matrix from %%ml to library.

This is part of efforts to move %%ml magic stuff to library to provide a consistent experience (python only).

* Add a comment.

* Improve inception package so there is no need to have an GCS copy of the package. Instead cloud training and preprocessing will repackage it from local installation and upload it to staging. (#175)

* Cloudmlsdp (#177)

* added the ',' graph hack

* sw

* batch prediction done

* sw

* review comments

* Add CloudTrainingConfig namedtuple to wrap cloud training configurations (#178)

* Add CloudTrainingConfig namedtuple to wrap cloud training configurations.

* Follow up code review comments.

* prediction update (#183)

* added the ',' graph hack

* sw

* batch prediction done

* sw

* review comments

* updated the prediction graph keys, and made the csvcoder not need
any other file.

* sw

* sw

* added newline

* review comments

* review comments

* trying to fix the Contributor License Agreement error.

* Inception Package Improvements (#186)

* Implement inception cloud batch prediction. Support explicit eval data in preprocessing.

* Follow up on CR comments. Also address changes from latest DataFlow.

* Cloudmlmerge (#188)

* CsvDataSet no longer globs files in init. (#187)

* CsvDataSet no longer globs files in init.

* removed file_io, that fix will be done later

* removed junk lines

* sample uses .file

* fixed csv dataset def files()

* Update _dataset.py

* Move cloud trainer and predictor from their own classes to Job and Model respectively. (#192)

* Move cloud trainer and predictor from their own classes to Job and Model respectively.

Cloud trainer and predictor will be cleaned up in a separate change.

* Rename CloudModels to Models, CloudModelVersions to ModelVersions. Move their iterator from self to get_iterator() method.

* Switch to cloudml v1 endpoint.

* Remove one comment.

* Follow up on CR comments. Fix a bug in the datalab iterator where count keeps incrementing incorrectly.

* removed the feature type file  (#199)

* sw

* removed feature types file from preprocessing

* training: no longer needs the input types file
prediction: cloud batch works now

* updated the tests

* added amazing comment to local_train
check that target column is the first column

* transforms file is not optional on the DL side.

* comments

* comments

* Make inception work with tf1.0. (#204)

* Workaround a TF summary issue. Force online prediction to use TF 1.0. (#209)

* sd package. Local everything is working.  (#211)

* sw

* sw

* Remove tf dependency from structured data setup.py. (#212)

* Workaround a TF summary issue. Force online prediction to use TF 1.0.

* Remove tf dependency from structured data setup.py.

* Cloudmld (#213)

* sw

* sw

* cloud uses 0.12.0rc? and local uses whatever is in datalab

* for local testing

* master_setup is copy of ../../setup.py

* Add a resize option for inception package to avoid sending large data to online prediction (#215)

* Add a resize option for inception package to avoid sending large data to online prediction.
Update Lantern browser.

* Follow up on code review comments and fix a bug for inception.

* Cleanup mlalpha APIs that are not needed. (#218)

* Inception package updates. (#219)

- Instead of hard-coding the setup.py path, duplicate it along with all py files, just like the structured data package.
- Use Pip installable TensorFlow 1.0 for packages.
- Fix some TF warnings.

* Cloudml Branch Merge From Master (#222)

* Remove CloudML SDK as dependency for PyDatalab. (#227)

* Remove CloudML dependency from Inception. (#225)

* TensorFlow's save_model no longer creates export.meta, so disable the  check in deploying models. (#228)

* TensorFlow's save_model no longer creates export.meta, so disable the check in deploying models.

* Also check for saved_model.pb for deployment.

* Cloudmlsm (#229)

* csv prediction graph done

* csv works, but not json!!!

* sw, train working

* cloud training working

* finished census sample, cleaned up interface

* review comments

* small fixes to sd (#231)

* small fixes

* more small fixes

* Rename from mlalpha to ml. (#232)

* fixed prediction (#235)

* small fixes (#236)

* 1) prediction 'key_from_input' now the true key name
2) DF prediction now make csv_schema.json file
3) removed function that was not used.

* update csv_schema.json in _package too

* Cloudmlmerge (#238)
