
[PECO-1803] Splitting the PySql connector into the core and the non core part #417

Conversation

jprakash-db (Contributor) commented Jul 24, 2024

PECO-1803

Related Links

databricks_sqlalchemy split is present in this PR: https://github.com/databricks/databricks-sqlalchemy/pull/1

Description

The databricks-sql-python library is split so that the package size can be reduced for end users based on their requirements.
pyarrow in particular is the heavy dependency that is planned to be kept optional (a sketch of the usual import-guard pattern follows the list below).
The existing library is split into:

  • databricks-sql-connector (kept so that the existing users' import flow does not change)
  • databricks-sql-connector-core (the lightweight library that separates out the core part)
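Heavy optional dependencies like pyarrow are typically guarded at import time. Below is a minimal illustrative sketch of that pattern, not the PR's actual code; the `require_pyarrow` helper name is hypothetical:

```python
# Illustrative optional-dependency guard (hypothetical helper, not the PR's code):
# the core package imports cleanly even when pyarrow is absent.
try:
    import pyarrow
except ImportError:
    pyarrow = None

def require_pyarrow() -> None:
    """Raise a clear error when an Arrow-only feature is requested without pyarrow."""
    if pyarrow is None:
        raise ImportError(
            "pyarrow is not installed; install the full databricks-sql-connector "
            "package (or pyarrow itself) to enable Arrow-based features"
        )
```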

Tasks Completed

  • Refactored the code into its respective folders based on the proposed design doc
  • pyproject.toml has been changed to reflect the proper dependencies for the split
  • Made sure that all the existing e2e and unit tests pass both pre- and post-split, ensuring parity
  • Added benchmarking queries to test pre- and post-split performance, and a dashboard has been created for visualization
  • Dependency tests are also added to check how the library behaves when certain libraries are not available and the user requests their functionality (a sketch of such a test follows this list)
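A hedged sketch of what such a dependency test can look like with pytest; the test name is illustrative, not the PR's actual code:

```python
# Illustrative dependency test: simulate an environment without pyarrow.
import sys
import pytest

def test_import_fails_without_pyarrow(monkeypatch):
    # Setting a sys.modules entry to None makes `import pyarrow` raise
    # ImportError, which is how the connector sees a missing optional dependency.
    monkeypatch.setitem(sys.modules, "pyarrow", None)
    with pytest.raises(ImportError):
        import pyarrow  # noqa: F401
```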

How to Test

The testing pipeline remains the same as before the split.
pytest can be used to run both the integration and the unit tests directly: `pytest [directory_name or file_name]`

Addition of dist folder in this repo

GitHub Actions have been set up in the databricks_sqlalchemy repo to run its tests using databricks_sql_connector_core. Those tests currently need the .whl file to be present in the dist folder, so it has been added to this PR for temporary testing.

Once the library gets published to a public repository such as PyPI, databricks_sqlalchemy will automatically download it from there and run the tests using GitHub Actions.

Performance Comparison - Benchmarking

A pre-split vs. post-split performance comparison has been made using both large and small queries to make sure there is no performance regression.
A dashboard has been created so that every time the benchmarking is run, the results are stored in benchfood and comparisons can be made easily (a rough sketch of the timing harness is shown below).
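As a rough illustration, a timing harness along these lines could drive the comparison; the environment variable names and the query are placeholders, while the connect() call follows the connector's documented signature:

```python
# Illustrative benchmark harness: time one query end-to-end, then run it
# against both the pre-split and post-split builds and compare the numbers.
import os
import time

from databricks import sql

def run_benchmark(query: str) -> float:
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],  # placeholder env vars
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as connection:
        with connection.cursor() as cursor:
            start = time.perf_counter()
            cursor.execute(query)
            cursor.fetchall()  # include fetch time, where pyarrow matters most
            return time.perf_counter() - start
```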
(Screenshot: benchmarking dashboard, 2024-09-03)

@jprakash-db jprakash-db self-assigned this Jul 24, 2024
@jprakash-db jprakash-db changed the base branch from main to PECO-1803/connector-split August 14, 2024 09:34
@@ -0,0 +1,23 @@
[tool.poetry]
name = "databricks-sql-connector"
version = "3.3.0"
Collaborator commented:

As we change the package structure in this release, I would suggest bumping the major version to 4.0.0, or at least making it greater than the current release 3.4.0, i.e. starting with 3.5.0. @kravets-levko wdyt?

README.md Outdated
@@ -60,7 +60,7 @@ In the above example:
- `server-hostname` is the Databricks instance host name.
- `http-path` is the HTTP Path either to a Databricks SQL endpoint (e.g. /sql/1.0/endpoints/1234567890abcdef),
or to a Databricks Runtime interactive cluster (e.g. /sql/protocolv1/o/1234567890123456/1234-123456-slid123)
- `personal-access-token` is the Databricks Personal Access Token for the account that will execute commands and queries
`personal-access-token` is the Databricks Personal Access Token for the account that will execute commands and queries
Contributor commented:

I think this is an accidental change - please revert it

@@ -40,6 +44,7 @@ def fetch_rows(self, cursor, row_count, fetchmany_size):
+ "assuming 10K fetch size."
)

@skipUnless(pysql_supports_arrow(), "Without pyarrow lz4 compression is not supported")
Contributor commented:

Let's stick with @pytest.mark.skipif everywhere (please check other test files as well)
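A minimal sketch of the suggested equivalent; the stub here stands in for the real pysql_supports_arrow helper from the shared test utilities:

```python
# Reviewer's suggestion: use pytest.mark.skipif instead of unittest's skipUnless.
import pytest

def pysql_supports_arrow() -> bool:
    # Stub for this sketch; the real helper lives in the shared test utilities.
    try:
        import pyarrow  # noqa: F401
        return True
    except ImportError:
        return False

@pytest.mark.skipif(
    not pysql_supports_arrow(),
    reason="Without pyarrow lz4 compression is not supported",
)
def test_lz4_compression():
    ...
```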

poetry.lock Outdated
Contributor commented:

Now that we have two projects, each of them needs its own lockfile. Keeping the old one at the old path is useless, because Poetry won't use it.

Basically, a new lockfile should be created for databricks_sql_connector. For databricks_sql_connector_core the old one should be reused, with unnecessary dependencies removed.

@jprakash-db jprakash-db changed the base branch from PECO-1803/connector-split to main September 23, 2024 09:01
@jprakash-db jprakash-db changed the base branch from main to PECO-1803/connector-split September 23, 2024 16:01
@jprakash-db jprakash-db merged commit 4099939 into databricks:PECO-1803/connector-split Sep 24, 2024
1 check failed