
Release v0.1.1 #261

Merged — 5 commits merged into main on Sep 21, 2023

Conversation

@nfx (Collaborator) commented Sep 21, 2023

* Added batched iteration for `INSERT INTO` queries in `StatementExecutionBackend` with default `max_records_per_batch=1000` (#237).
* Added crawler for mount points (#209).
* Added crawlers for compatibility of jobs and clusters, along with basic recommendations for external locations (#244).
* Added safe return on grants (#246).
* Added ability to specify an empty group filter in the installer script (#216) (#217).
* Added ability for multiple different users to install the application on the same workspace (#235).
* Added dashboard creation on installation and a requirement for `warehouse_id` in config, so that the assessment dashboards are refreshed automatically after job runs (#214).
* Added reliance on rate limiting from the Databricks SDK for listing workspace (#258).
* Fixed errors in corner cases where Azure Service Principal credentials were not available in the Spark context (#254).
* Fixed `DESCRIBE TABLE` throwing errors when listing Legacy Table ACLs (#238).
* Fixed `file already exists` error in the installer script (#219) (#222).
* Fixed `guess_external_locations` failure with `AttributeError: as_dict` and added an integration test (#259).
* Fixed error-handling edge cases in the `crawl_tables` task (#243) (#251).
* Fixed `crawl_permissions` task failure on folder names containing a forward slash (#234).
* Improved `README` notebook documentation (#260, #228, #252, #223, #225).
* Removed redundant `.python-version` file (#221).
* Removed discovery of account groups from the `crawl_permissions` task (#240).
* Updated `databricks-sdk` requirement from `~=0.8.0` to `~=0.9.0` (#245).
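The batched-iteration change (#237) can be illustrated with a minimal sketch. The helper names below (`batched`, `insert_rows`) and the value formatting are hypothetical and not taken from the actual `StatementExecutionBackend` implementation; the sketch only shows the idea of grouping rows into chunks of at most `max_records_per_batch` and emitting one `INSERT INTO` statement per chunk instead of one per row:

```python
from itertools import islice


def batched(rows, max_records_per_batch=1000):
    """Yield lists of at most max_records_per_batch rows (hypothetical helper)."""
    it = iter(rows)
    while chunk := list(islice(it, max_records_per_batch)):
        yield chunk


def insert_rows(execute_sql, table, rows, max_records_per_batch=1000):
    """Emit one INSERT INTO statement per batch instead of one per row."""
    for chunk in batched(rows, max_records_per_batch):
        # Naive value rendering for (int, str) rows; real code would escape properly.
        values = ", ".join(f"({r[0]}, '{r[1]}')" for r in chunk)
        execute_sql(f"INSERT INTO {table} VALUES {values}")


# Usage: 2500 rows produce three statements (1000 + 1000 + 500 rows).
statements = []
insert_rows(statements.append, "ucx.tables", [(i, f"t{i}") for i in range(2500)])
# len(statements) == 3
```

Batching keeps each statement well under warehouse query-size limits while still cutting the number of round trips by orders of magnitude compared with row-by-row inserts.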

@codecov codecov bot commented Sep 21, 2023

Codecov Report

Merging #261 (def197f) into main (1a07212) will increase coverage by 0.01%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main     #261      +/-   ##
==========================================
+ Coverage   83.33%   83.35%   +0.01%     
==========================================
  Files          29       29              
  Lines        1974     1976       +2     
  Branches      337      337              
==========================================
+ Hits         1645     1647       +2     
  Misses        261      261              
  Partials       68       68              
Files Changed

| File | Coverage | Δ |
|------|----------|---|
| src/databricks/labs/ucx/assessment/crawlers.py | 67.82% <ø> | (ø) |
| src/databricks/labs/ucx/__about__.py | 100.00% <100.00%> | (ø) |
| src/databricks/labs/ucx/framework/crawlers.py | 85.18% <100.00%> | (ø) |
| ...databricks/labs/ucx/hive_metastore/data_objects.py | 80.85% <100.00%> | (ø) |
| src/databricks/labs/ucx/hive_metastore/grants.py | 100.00% <100.00%> | (ø) |
| src/databricks/labs/ucx/hive_metastore/tables.py | 94.66% <100.00%> | (ø) |
| src/databricks/labs/ucx/runtime.py | 56.33% <100.00%> | (+1.26%) ⬆️ |

@nfx nfx merged commit c3173eb into main Sep 21, 2023
4 checks passed
@nfx nfx deleted the prepare/0.1.1 branch September 21, 2023 20:34
larsgeorge-db pushed a commit that referenced this pull request Sep 23, 2023