Set up github actions + remove databricks-specific test infra #3
Conversation
@@ -1 +0,0 @@
FROM python:3.7-slim-buster
This file is not used anywhere in this repository. Rather it is referenced from outside the repository.
@@ -1,3 +0,0 @@
RUN pip install -r dev_requirements.txt
Ditto above.
@@ -1,5 +0,0 @@
#!/bin/bash
Ditto above.
@@ -39,6 +40,7 @@ def make_dummy_result_set_from_initial_results(arrow_table):
        for col_id in range(arrow_table.num_columns)]
    return rs

@pytest.mark.skip(reason="Test has not been updated for latest connector API (June 2022)")
This test fails because this test file was not updated as the connector API changed. Unclear if we should just remove this test completely or update it. I didn't find any notes on this subject.
@susodapop were there any breaking changes in the public API of PySQL?
Did we introduce the breaking change as part of open sourcing, or was it introduced before that?
From what I recall the test is broken because we changed the API and didn't update this test accordingly. Since I'm not familiar with what this test is meant to do or why it wasn't updated I don't have further context.
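For reference, a `@pytest.mark.skip` marker like the one in this diff keeps the stale test in the tree while excluding it from every run — a minimal sketch (the test bodies are hypothetical, not from this repo):

```python
import pytest

@pytest.mark.skip(reason="Test has not been updated for latest connector API (June 2022)")
def test_stale_connector_behavior():
    # Never executed while the skip marker is present; pytest reports it as "skipped".
    raise AssertionError("this would fail if it ever ran")

def test_current_behavior():
    # Unmarked tests run as usual.
    assert 1 + 1 == 2
```

Running `pytest -rs` lists the skip reasons in the summary, which makes deferred tests like this one easy to find later.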
@@ -1,20 +0,0 @@
import databricks.sql
These test files are intended to be called in a bazel test run which is not defined in this repository.
- name: Install library
  run: poetry install --no-interaction
#----------------------------------------------
# run test suite
@susodapop
Which of the following is true:
1. This installs the connector package (the same way an end user would install it) and runs the tests against the installed package.
2. This runs the tests against the source folder of the connector package without installing it as a package.

Say we have a bug in the packaging of the connector; that bug would be discovered if we do 1) but not if we do 2). I wonder which approach we are using?
Actually it's a combination of both. Since we are now using poetry, it guarantees that when we run `poetry install`, the local code is packaged and then installed, equivalent to a user installing the published `.tar.gz` file with `pip` or similar.
I believe we're safe on this front.
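A quick way to check which of the two cases applies is to look at where the test process resolves the package from — a hedged sketch using only the standard library (`json` is used here as a stand-in; any importable top-level package works the same way):

```python
import importlib.util

def describe_import(package: str) -> str:
    """Report where a top-level package would be imported from, without importing it."""
    spec = importlib.util.find_spec(package)
    if spec is None:
        return f"{package} is not importable"
    # An origin inside site-packages means tests exercise the installed copy (case 1);
    # an origin inside the working tree means they run against the source (case 2).
    return f"{package} resolves to {spec.origin}"

print(describe_import("json"))
```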
id: setup-python
uses: actions/setup-python@v2
with:
  python-version: 3.7
Do we need to extend the test matrix to cover other supported versions, e.g. 3.8 or 3.9?
Which versions do we officially support?
It's easy enough to duplicate this workflow for Python 3.8 through 3.10.
I think to minimise duplication we can have one workflow that uses the latest version of Python and another that uses the oldest version we support.
I'm merging these changes now. Will file a follow-up PR with the cross-version Python unit tests.
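A build matrix is one way to cover both ends of the support window without duplicating the workflow — a sketch, assuming 3.7 is the oldest and 3.10 the newest supported version (the actual support window is not confirmed by this PR):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Oldest and newest supported interpreters; adjust once the support window is confirmed.
        python-version: ["3.7", "3.10"]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install library
        run: poetry install --no-interaction
      - name: Run tests
        run: poetry run pytest tests
```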
catalog in get_table_names adjusted with backquotes
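Backquoting guards catalog names that contain special characters — a hypothetical sketch of the idea (the helper name is illustrative, not this connector's API):

```python
def quote_identifier(name: str) -> str:
    """Wrap an identifier in backquotes, doubling any embedded backquote to escape it."""
    return "`" + name.replace("`", "``") + "`"

# A catalog name with a dash would otherwise break the generated SQL statement.
assert quote_identifier("my-catalog") == "`my-catalog`"
assert quote_identifier("odd`name") == "`odd``name`"
```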
* move py.typed to correct places. https://peps.python.org/pep-0561/ says 'For namespace packages (see PEP 420), the py.typed file should be in the submodules of the namespace, to avoid conflicts and for clarity.' Previously, when I added the py.typed file to this project (#382), I was unaware this was a namespace package (although, curiously, it seems I had done it right initially and then changed to the wrong way). As PEP 561 warns us, this does create conflicts; other libraries in the databricks namespace package (such as, in my case, databricks-vectorsearch) are then treated as though they are typed, which they are not. This commit moves the py.typed file to the correct places, the submodule folders, fixing that problem. Signed-off-by: wyattscarpenter <wyattscarpenter@gmail.com>
* change target of mypy to src/databricks instead of src. I think this might fix the CI code-quality checks failure, but unfortunately I can't replicate that failure locally and the error message is unhelpful. Signed-off-by: wyattscarpenter <wyattscarpenter@gmail.com>
* Possible workaround for the bad error message 'error: --install-types failed (no mypy cache directory)'; see python/mypy#10768 (comment). Signed-off-by: wyattscarpenter <wyattscarpenter@gmail.com>
* fix invalid yaml syntax. Signed-off-by: wyattscarpenter <wyattscarpenter@gmail.com>
* Best fix (#3). Fixes the problem by cd-ing and supplying a flag to mypy (that mypy needs this flag is seemingly fixed/changed in later versions of mypy, but that's another PR altogether...). Also fixes a type error that was somehow in the arguments of the program (?!) (I guess this is because you guys are still using implicit optional). Signed-off-by: wyattscarpenter <wyattscarpenter@gmail.com>
* return the old result_links default (#5). Return the old result_links default, make the type optional, and I'm pretty sure the original problem is that add_file_links can't take a None, so these statements should be in the body of the if-statement that ensures it is not None. Signed-off-by: wyattscarpenter <wyattscarpenter@gmail.com>
* Update src/databricks/sql/utils.py. "self.download_manager is unconditionally used later, so must be created. Looks like this part of the code is totally not covered by tests 🤔" Co-authored-by: Levko Kravets <levko.ne@gmail.com> Signed-off-by: wyattscarpenter <wyattscarpenter@gmail.com>

Co-authored-by: Levko Kravets <levko.ne@gmail.com>
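Per PEP 561, the marker belongs inside each typed submodule of a namespace package, not at the namespace root — a sketch of the layout the commit above describes (directory names assumed from this repo's structure):

```
src/
└── databricks/          # namespace package: no __init__.py and no py.typed here
    └── sql/             # regular submodule package
        ├── __init__.py
        └── py.typed     # PEP 561 marker goes in each typed submodule
```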
Description
Workflows
This PR introduces GitHub workflows that run unit tests, linting, and static type analysis for every pull request on this repository. You can run these checks locally with the following commands:
poetry run pytest tests
poetry run black --check src
poetry run mypy src
The workflows are copied directly from the https://github.com/databricks/databricks-sql-cli repository.
Cleanup Databricks specific test infra
This repository contains resources that Databricks uses for integration tests, but the tests themselves were never housed here. I've cleaned out these files and references from this repository, because without the actual integration tests and infrastructure they serve no purpose.
Will need to follow up with the team about how and where to house the integration tests going forward.
How is this tested?
The checks will automatically appear at the bottom of this PR. If they pass, then `black`, `mypy`, and `pytest` all exited with code 0.