Create dashboards on installation and require warehouse_id on installation #214
Conversation
lgtm
This raises a good point... should the dashboard operate on the summaries the assessment step creates, or on the inventory table(s)? I am against the latter right now, as it would mean implementing the detection logic twice: once in the assessment code and again in the dashboard code. I would rather have the assessment step create the aggregates and store them in a report table, so that the dashboard simply visualizes them. Thoughts @nfx @FastLee?
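A minimal sketch of the proposed split, assuming hypothetical `$inventory.objects` and `$inventory.report` table names and an illustrative schema; none of these are the project's actual names.

```python
# Sketch only: the assessment step computes the aggregates once and
# persists them; the dashboard reads the pre-computed report table and
# re-implements no detection logic. Table and column names are hypothetical.

ASSESSMENT_AGGREGATE = """
CREATE OR REPLACE TABLE $inventory.report AS
SELECT object_type,
       failure,                 -- detection result produced by the crawler
       COUNT(*) AS object_count
FROM $inventory.objects
GROUP BY object_type, failure
"""

DASHBOARD_QUERY = """
SELECT object_type, failure, object_count
FROM $inventory.report          -- visualizes aggregates, recomputes nothing
ORDER BY object_count DESC
"""
```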
Codecov Report

|          | main   | #214   | +/-    |
|----------|--------|--------|--------|
| Coverage | 82.91% | 82.47% | -0.44% |
| Files    | 28     | 29     | +1     |
| Lines    | 1598   | 1855   | +257   |
| Branches | 259    | 299    | +40    |
| Hits     | 1325   | 1530   | +205   |
| Misses   | 232    | 271    | +39    |
| Partials | 41     | 54     | +13    |
Looks good to me.
Create dashboards on installation and require `warehouse_id` on installation (#214): This PR adds a framework to automatically add Redash queries / visualizations / widgets / dashboards from simple file definitions.
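A minimal sketch of the file-definition idea, assuming a hypothetical layout where each widget is described by a `.sql` file plus a small JSON sidecar; the file names, fields, and loader are illustrative, not the PR's actual implementation.

```python
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class WidgetDefinition:
    """One dashboard widget: a query plus its visualization settings."""
    name: str
    query: str  # SQL text for the Redash query
    viz: dict   # visualization options (chart type, axes, ...)


def load_widgets(folder: Path) -> list[WidgetDefinition]:
    """Pair every `<name>.sql` file with its `<name>.json` sidecar."""
    widgets = []
    for sql_file in sorted(folder.glob("*.sql")):
        sidecar = sql_file.with_suffix(".json")
        viz = json.loads(sidecar.read_text()) if sidecar.exists() else {}
        widgets.append(WidgetDefinition(
            name=sql_file.stem,
            query=sql_file.read_text(),
            viz=viz,
        ))
    return widgets
```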
* Added batched iteration for `INSERT INTO` queries in `StatementExecutionBackend` with default `max_records_per_batch=1000` ([#237](#237)); see the sketch after this list.
* Added crawler for mount points ([#209](#209)).
* Added crawlers for compatibility of jobs and clusters, along with basic recommendations for external locations ([#244](#244)).
* Added safe return on grants ([#246](#246)).
* Added ability to specify empty group filter in the installer script ([#216](#216)) ([#217](#217)).
* Added ability to install application by multiple different users on the same workspace ([#235](#235)).
* Added dashboard creation on installation and a requirement for `warehouse_id` in config, so that the assessment dashboards are refreshed automatically after job runs ([#214](#214)).
* Added reliance on rate limiting from Databricks SDK for listing workspace ([#258](#258)).
* Fixed errors in corner cases where Azure Service Principal Credentials were not available in Spark context ([#254](#254)).
* Fixed `DESCRIBE TABLE` throwing errors when listing Legacy Table ACLs ([#238](#238)).
* Fixed `file already exists` error in the installer script ([#219](#219)) ([#222](#222)).
* Fixed `guess_external_locations` failure with `AttributeError: as_dict` and added an integration test ([#259](#259)).
* Fixed error handling edge cases in `crawl_tables` task ([#243](#243)) ([#251](#251)).
* Fixed `crawl_permissions` task failure on folder names containing a forward slash ([#234](#234)).
* Improved `README` notebook documentation ([#260](#260), [#228](#228), [#252](#252), [#223](#223), [#225](#225)).
* Removed redundant `.python-version` file ([#221](#221)).
* Removed discovery of account groups from `crawl_permissions` task ([#240](#240)).
* Updated databricks-sdk requirement from ~=0.8.0 to ~=0.9.0 ([#245](#245)).
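A minimal sketch of the batched-iteration idea from the first item above, assuming rows are buffered and flushed once the batch is full so that each batch becomes a single multi-row `INSERT INTO` statement; apart from `max_records_per_batch`, the function names are illustrative, not the backend's actual API.

```python
from collections.abc import Iterable, Iterator


def batch_rows(rows: Iterable[tuple], max_records_per_batch: int = 1000) -> Iterator[list[tuple]]:
    """Group rows into batches, one batch per INSERT INTO statement."""
    batch: list[tuple] = []
    for row in rows:
        batch.append(row)
        if len(batch) >= max_records_per_batch:
            yield batch
            batch = []
    if batch:  # flush the final, partially filled batch
        yield batch


def render_insert(table: str, batch: list[tuple]) -> str:
    """Render one batch as a single multi-row INSERT (naive, no escaping)."""
    values = ", ".join(str(row) for row in batch)
    return f"INSERT INTO {table} VALUES {values}"
```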