fix(ingestion/spark): Platform instance and column level lineage fix #10843
Conversation
- Add incremental column-level lineage fix
- Add option to set platform instance and env per platform
- Add option to lowercase dataset URNs
Walkthrough

The recent updates add the capability to convert dataset URNs to lowercase, enhance configuration handling for environments and platform instances, update downstream processing logic, and modify default configuration settings for improved flexibility and usability.
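Since the walkthrough centers on new configuration options, a minimal, hedged sketch of how they might be set on a Spark job follows. The `lowerCaseUrns` key appears in this PR's README changes; the per-platform env and platform-instance keys below are illustrative assumptions only, so consult the updated README for the exact property names.

```java
import org.apache.spark.SparkConf;

public class DatahubSparkConfigExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("lineage-example")
        // Key documented in this PR's README changes:
        .set("spark.datahub.metadata.dataset.lowerCaseUrns", "true")
        // The per-platform keys below are assumptions for illustration;
        // check the README in this PR for the real property names:
        .set("spark.datahub.platform.redshift.env", "PROD")
        .set("spark.datahub.platform.redshift.platform_instance", "my_instance");
    System.out.println(conf.toDebugString());
  }
}
```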
Actionable comments posted: 8
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (8)
- metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/config/DatahubOpenlineageConfig.java (1 hunks)
- metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java (4 hunks)
- metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/DatahubJob.java (1 hunks)
- metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/PathSpec.java (1 hunks)
- metadata-integration/java/spark-lineage-beta/README.md (2 hunks)
- metadata-integration/java/spark-lineage-beta/src/main/java/datahub/spark/conf/SparkConfigParser.java (5 hunks)
- metadata-integration/java/spark-lineage-beta/src/test/java/datahub/spark/OpenLineageEventToDatahubTest.java (2 hunks)
- metadata-integration/java/spark-lineage-beta/src/test/resources/ol_events/redshift_mixed_case_lineage_spark.json (1 hunks)
Additional context used

LanguageTool — metadata-integration/java/spark-lineage-beta/README.md

- [grammar] ~184: After 'it', use the third-person verb form "coalesces". Context: "… Normally it coalesce and send metadata at the onApplicationE…" (IT_VBZ)
- [uncategorized] ~185: Did you mean "By default,"? Context: "…rwrites existing Dataset lineage edges. By default it is disabled. …" (BY_DEFAULT_COMMA)
- [uncategorized] ~186: Did you mean "By default,"? Context: "…this to true to lowercase dataset urns. By default it is disabled. …" (BY_DEFAULT_COMMA)
- [style] ~187: 'prefer to have' might be wordy; consider a shorter alternative. Context: "…use dataset symlink (for example if you prefer to have the s3 location instead of the Hive tab…" (EN_WORDINESS_PREMIUM_PREFER_TO_HAVE)
- [uncategorized] ~187: Did you mean "By default,"? Context: "…s3 location instead of the Hive table). By default it is disabled. …" (BY_DEFAULT_COMMA)
Markdownlint — metadata-integration/java/spark-lineage-beta/README.md

- 189: MD012 (no-multiple-blanks) — multiple consecutive blank lines (expected 1, actual 2)
- 346: MD022 (blanks-around-headings) — heading should be surrounded by blank lines (missing above)
- 348: MD022 (blanks-around-headings) — heading should be surrounded by blank lines (missing below)
- 349: MD022 (blanks-around-headings) — heading should be surrounded by blank lines (missing above and below)
- 345: MD031 (blanks-around-fences) — fenced code blocks should be surrounded by blank lines
- 350: MD032 (blanks-around-lists) — lists should be surrounded by blank lines
Additional comments not posted (22)
metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/PathSpec.java (1)

`17-17`: Good use of `Optional` for the `env` field. Using `Optional` for the `env` field improves the handling of potentially absent values and makes the code more robust. (A minimal sketch of the pattern follows.)
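To illustrate the pattern the review praises, here is a minimal sketch: storing `env` as an `Optional` pushes absence handling into one place instead of null checks at every call site. The class and method names are stand-ins, not the real `PathSpec` API.

```java
import java.util.Optional;

// Sketch only: mirrors the review's description of PathSpec's Optional env
// field; the defaulting logic is illustrative, not the actual implementation.
class PathSpecEnvSketch {
  private final Optional<String> env;

  PathSpecEnvSketch(String env) {
    this.env = Optional.ofNullable(env);
  }

  String resolveEnv(String fallback) {
    // Callers always receive a value; absence is handled in one place.
    return env.orElse(fallback);
  }
}
```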
metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/config/DatahubOpenlineageConfig.java (1)

`36-36`: New configuration option for lowercasing dataset URNs. The addition of the `lowerCaseDatasetUrns` field adds flexibility to the configuration, allowing users to specify whether dataset URNs should be lowercased.
metadata-integration/java/spark-lineage-beta/src/test/resources/ol_events/redshift_mixed_case_lineage_spark.json (1)

`1-147`: Well-formed JSON for Spark job lineage. The JSON representation of the Spark job, with detailed lineage information for inputs and outputs, is well-formed and comprehensive.
metadata-integration/java/spark-lineage-beta/src/main/java/datahub/spark/conf/SparkConfigParser.java (1)

`45-46`: Consistent addition of lowercasing functionality for dataset URNs. The new constant `DATASET_LOWERCASE_URNS` and the corresponding method `isLowerCaseDatasetUrns` are well integrated and consistent with the configuration changes. (A sketch of the parsing pattern follows.)

Also applies to: 157-157, 252-262, 273-274, 351-354
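A sketch of what boolean-flag parsing with a safe default can look like, assuming `SparkConf` as the config source; the real `SparkConfigParser` may read from a different config abstraction. The key suffix matches the README's `spark.datahub.metadata.dataset.lowerCaseUrns`.

```java
import org.apache.spark.SparkConf;

// Sketch only: illustrates the flag-parsing pattern, not the actual parser.
class SparkConfigParserSketch {
  static final String DATASET_LOWERCASE_URNS = "metadata.dataset.lowerCaseUrns";

  static boolean isLowerCaseDatasetUrns(SparkConf conf) {
    // Default to false so URNs keep their original casing unless opted in.
    return conf.getBoolean("spark.datahub." + DATASET_LOWERCASE_URNS, false);
  }
}
```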
metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/DatahubJob.java (1)

`283-287`: Bug fix: corrected handling of downstream datasets. Swapping the use of the `downstream` and `upstream` variables in the `processDownstreams` method ensures that downstream datasets are handled correctly. (An illustrative sketch of this class of bug follows.)
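An illustrative-only sketch of the bug class the review describes: a lineage edge must point from upstream to downstream, so swapping the two variables silently inverts every edge. The types here are simplified stand-ins, not the real `DatahubJob` API.

```java
// Sketch only: simplified stand-in types, not the real DatahubJob API.
class LineageEdgeSketch {
  record Edge(String upstreamUrn, String downstreamUrn) {}

  Edge buildEdge(String upstream, String downstream) {
    // Pre-fix (conceptually): new Edge(downstream, upstream) inverted the edge.
    // Post-fix: arguments flow in the declared direction.
    return new Edge(upstream, downstream);
  }
}
```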
metadata-integration/java/spark-lineage-beta/README.md (3)

`351-351`: LGTM! This line looks good to me.

`352-352`: LGTM! This line looks good to me.

`353-353`: LGTM! This line looks good to me.
metadata-integration/java/spark-lineage-beta/src/test/java/datahub/spark/OpenLineageEventToDatahubTest.java (6)

`16-16`: LGTM! The import statement for `PathSpec` looks good to me.

`23-23`: LGTM! The import statement for `Optional` looks good to me.

`603-636`: LGTM! The test method `testProcessRedshiftOutputWithPlatformInstance` looks good to me.

`638-681`: LGTM! The test method `testProcessRedshiftOutputWithPlatformSpecificPlatformInstance` looks good to me.

`683-722`: LGTM! The test method `testProcessRedshiftOutputWithPlatformSpecificEnv` looks good to me.

`724-756`: LGTM! The test method `testProcessRedshiftOutputLowercasedUrns` looks good to me. (A hedged sketch of the lowercase assertion appears below.)
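A hedged sketch of the assertion behind `testProcessRedshiftOutputLowercasedUrns`: the real test drives the full OpenLineage conversion, while this only illustrates the expected normalization of a mixed-case namespace and name. Values are invented for illustration.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Sketch only: shows the expected lowercase normalization, not the real test.
public class LowercaseUrnSketchTest {
  @Test
  public void lowercasesNamespaceAndNameWhenFlagIsSet() {
    String namespace = "Redshift://My-Cluster";
    String name = "Dev.Public.MyTable";
    // With lowerCaseDatasetUrns enabled, both parts should be lowercased.
    assertEquals("redshift://my-cluster", namespace.toLowerCase());
    assertEquals("dev.public.mytable", name.toLowerCase());
  }
}
```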
metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java (8)

`7-7`: LGTM! Import added for `FabricType`. This import is necessary for handling environment configurations.

`61-61`: LGTM! Import added for `PathSpec`. This import is necessary for managing platform instances and environments.

`62-62`: LGTM! Import added for `DatahubUtils`. This import is necessary for utility functions related to DataHub.
Line range hint `115-115`: LGTM! Method signature updated for `convertOpenlineageDatasetToDatasetUrn`. The additional parameter allows the method to use configuration settings for namespace and dataset name transformations.
`199-216`: LGTM! Added handling for environment configurations using `PathSpec`. The code iterates through the `PathSpec` entries to get the environment configuration for the specified platform and handles invalid values with a warning log. However, ensure that the `PathSpec` configuration is correctly set in the codebase. (A sketch of this resolution loop follows.)
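A sketch of the env-resolution loop the review describes: walk the entries configured for a platform, take the first env value present, and fall back to a default with a warning on invalid values. The enum below is a stand-in for DataHub's `FabricType`, and the flattened `Optional` list is a simplification of the real `PathSpec` objects.

```java
import java.util.List;
import java.util.Optional;

// Sketch only: illustrates the lookup-with-warning pattern, not the real code.
class EnvResolverSketch {
  enum FabricType { PROD, DEV, TEST } // stand-in for DataHub's FabricType

  static FabricType resolveEnv(List<Optional<String>> envsForPlatform, FabricType dflt) {
    for (Optional<String> env : envsForPlatform) {
      if (env.isPresent()) {
        try {
          return FabricType.valueOf(env.get().toUpperCase());
        } catch (IllegalArgumentException e) {
          // Invalid value: warn and keep the default, as the review notes.
          System.err.println("Invalid env '" + env.get() + "', falling back to " + dflt);
        }
      }
    }
    return dflt;
  }
}
```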
`194-197`: LGTM! Added code to create dataset URNs using platform instance and environment. The code uses `DatahubUtils.createDatasetUrn` to create dataset URNs with the specified platform, platform instance, dataset name, and environment. However, ensure that the `DatahubUtils.createDatasetUrn` function is correctly implemented in the codebase.

Verification successful: the `DatahubUtils.createDatasetUrn` function is correctly implemented. The `createDatasetUrn` method constructs a `DatasetUrn` using the provided platform, platform instance, dataset name, and fabric type, which ensures that URNs are correctly formatted based on the configuration.

- Location: metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/utils/DatahubUtils.java

Scripts executed for the analysis:

```bash
#!/bin/bash
# Description: Verify the implementation of `DatahubUtils.createDatasetUrn`.
# Test: Search for the function implementation.
# Expect: occurrences of `createDatasetUrn` in the codebase.
rg --type java $'createDatasetUrn'
```

Length of output: 1476

```bash
#!/bin/bash
# Extract the implementation of `createDatasetUrn` from `DatahubUtils`.
ast-grep --lang java --pattern $'class DatahubUtils {
  $$$
  public static DatasetUrn createDatasetUrn($_, $_, $_, $_) {
    $$$
  }
  $$$
}'
```

Length of output: 143

```bash
#!/bin/bash
# Extract the implementation of `createDatasetUrn` from `DatahubUtils`,
# using rg to get more context.
rg --context 10 'public static DatasetUrn createDatasetUrn' metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/utils/DatahubUtils.java
```

Length of output: 969
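A hedged usage sketch based on the four-argument signature the verification scripts located; the exact parameter types, order, and nullability are assumptions, and the values are purely illustrative.

```java
import com.linkedin.common.FabricType;
import com.linkedin.common.urn.DatasetUrn;
import io.datahubproject.openlineage.utils.DatahubUtils;

// Sketch only: parameter types/order assumed from the verification output.
class CreateUrnSketch {
  static DatasetUrn example() {
    return DatahubUtils.createDatasetUrn(
        "redshift",           // platform
        "my_instance",        // platform instance
        "dev.public.orders",  // dataset name
        FabricType.PROD);     // environment
  }
}
```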
`157-161`: LGTM! Added handling for lowercase dataset URNs. The code checks the configuration and converts the namespace and dataset name to lowercase if enabled. However, ensure that the `lowerCaseDatasetUrns` configuration is correctly set in the codebase.

Verification successful: the `lowerCaseDatasetUrns` configuration is correctly set and used in the codebase.

Scripts executed for the analysis:

```bash
#!/bin/bash
# Description: Verify the usage of `lowerCaseDatasetUrns` configuration.
# Test: Search for the configuration usage.
# Expect: occurrences of `lowerCaseDatasetUrns` in the codebase.
rg --type java $'lowerCaseDatasetUrns'
```

Length of output: 575

```bash
#!/bin/bash
# Description: Locate and display the implementation of
# `SparkConfigParser.isLowerCaseDatasetUrns`.

# Find the file containing the isLowerCaseDatasetUrns method.
file_path=$(rg --files-with-matches --type java 'isLowerCaseDatasetUrns' | head -n 1)

# Display the implementation of the method.
if [[ -n "$file_path" ]]; then
  rg --context 10 'isLowerCaseDatasetUrns' "$file_path"
else
  echo "Method isLowerCaseDatasetUrns not found."
fi
```

Length of output: 1154

```bash
#!/bin/bash
# Description: Locate and display the implementation of
# `SparkConfigParser.isLowerCaseDatasetUrns` via the class definition.

# Find the file containing the SparkConfigParser class.
file_path=$(rg --files-with-matches --type java 'class SparkConfigParser' | head -n 1)

# Display the implementation of the isLowerCaseDatasetUrns method in the class.
if [[ -n "$file_path" ]]; then
  rg --context 10 'isLowerCaseDatasetUrns' "$file_path"
else
  echo "SparkConfigParser class not found."
fi
```

Length of output: 2164
`218-232`: LGTM! Added handling for platform instance configurations using `PathSpec`. The code iterates through the `PathSpec` entries to get the platform instance configuration for the specified platform, and uses the common platform instance if none is present. However, ensure that the `PathSpec` configuration is correctly set in the codebase. (A sketch of the fallback logic follows the scripts below.)

Verification successful: the `PathSpec` configuration is correctly set and utilized within the codebase. The `DatahubOpenlineageConfig` class contains the necessary configuration for `PathSpec`, and the `getPlatformInstance` method in `OpenLineageToDataHub.java` correctly uses this configuration to fetch the platform instance.

- `DatahubOpenlineageConfig.java`: contains `Map<String, List<PathSpec>> pathSpecs` and the method `getPathSpecsForPlatform(String platform)`.
- `PathSpec.java`: defines fields for alias, platform, environment, path specification list, and platform instance.

Scripts executed for the analysis:

```bash
#!/bin/bash
# Description: Verify the usage of `PathSpec` configuration.
# Test: Search for the configuration usage.
# Expect: occurrences of `PathSpec` in the codebase.
rg --type java $'PathSpec'
```

Length of output: 47867

```bash
#!/bin/bash
# Description: Verify the configuration and usage of `PathSpec` within
# `DatahubOpenlineageConfig`.

# Search for the implementation details of `DatahubOpenlineageConfig`.
ast-grep --lang java --pattern 'class DatahubOpenlineageConfig { $$$ }'

# Search for the implementation details of `PathSpec`.
ast-grep --lang java --pattern 'class PathSpec { $$$ }'
```

Length of output: 131

```bash
#!/bin/bash
# Description: Locate the class definitions directly.

# Search for the definition of `DatahubOpenlineageConfig`.
rg --type java 'class DatahubOpenlineageConfig'

# Search for the definition of `PathSpec`.
rg --type java 'class PathSpec'
```

Length of output: 390

```bash
#!/bin/bash
# Description: Display the contents of `DatahubOpenlineageConfig` and
# `PathSpec` to verify their configurations and usages.

cat metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/config/DatahubOpenlineageConfig.java
cat metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/PathSpec.java
```

Length of output: 2707
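A sketch of the fallback the review verifies in `getPlatformInstance`: prefer a platform-specific instance from the `PathSpec` list, otherwise use the common (global) platform instance. The flattened `Optional` list is a simplification of the real `PathSpec` objects.

```java
import java.util.List;
import java.util.Optional;

// Sketch only: illustrates the prefer-specific-else-common fallback pattern.
class PlatformInstanceResolverSketch {
  static Optional<String> getPlatformInstance(
      List<Optional<String>> instancesForPlatform, Optional<String> commonInstance) {
    return instancesForPlatform.stream()
        .filter(Optional::isPresent)
        .map(Optional::get)
        .findFirst()
        .or(() -> commonInstance); // fall back to the common platform instance
  }
}
```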
Actionable comments posted: 2
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (1)
- metadata-integration/java/spark-lineage-beta/README.md (7 hunks)
Additional comments not posted (6)

metadata-integration/java/spark-lineage-beta/README.md (6)

`27-27`: LGTM! The configuration instruction for `spark.jars.packages` is correct.

`35-35`: LGTM! The configuration instruction for `spark-submit` is correct.

`44-44`: LGTM! The configuration instruction for Amazon EMR is correct.

`59-59`: LGTM! The configuration instruction for Notebooks is correct.

`82-82`: LGTM! The configuration instruction for Standalone Java Applications is correct.

`162-162`: LGTM! The configuration options for the Spark agent are correct.
| spark.datahub.patch.enabled | | false | Set this to true to send lineage as a patch, which appends rather than overwrites existing Dataset lineage edges. By default it is disabled. |
| spark.datahub.metadata.dataset.lowerCaseUrns | | false | Set this to true to lowercase dataset urns. By default it is disabled. |
| spark.datahub.disableSymlinkResolution | | false | Set this to true if you prefer not use dataset symlink (for example if you prefer to have the s3 location instead of the Hive table). By default it is disabled. |
Fix grammatical and punctuation errors.
There are minor grammatical and punctuation issues in the descriptions.
```diff
- By default it is disabled.
+ By default, it is disabled.

- Set this to true to lowercase dataset urns. By default it is disabled.
+ Set this to true to lowercase dataset URNs. By default, it is disabled.

- Set this to true if you prefer to have the s3 location instead of the Hive table. By default it is disabled.
+ Set this to true if you prefer using the S3 location instead of the Hive table. By default, it is disabled.
```
Committable suggestion
| spark.datahub.patch.enabled | | false | Set this to true to send lineage as a patch, which appends rather than overwrites existing Dataset lineage edges. By default, it is disabled. |
| spark.datahub.metadata.dataset.lowerCaseUrns | | false | Set this to true to lowercase dataset URNs. By default, it is disabled. |
| spark.datahub.disableSymlinkResolution | | false | Set this to true if you prefer using the S3 location instead of the Hive table. By default, it is disabled. |
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 0
Outside diff range and nitpick comments (3)
metadata-ingestion/docs/sources/databricks/README.md (1)

`11-11`: Remove the extra blank line to maintain proper formatting.

```diff
-
```

Tools: Markdownlint — 11: MD012 (no-multiple-blanks) — multiple consecutive blank lines (expected 1, actual 2)
metadata-integration/java/spark-lineage-beta/README.md (2)

`186-187`: Fix grammatical and punctuation errors. There are minor grammatical and punctuation issues in the descriptions.

```diff
- Set this to true to lowercase dataset urns. By default, it is disabled.
+ Set this to true to lowercase dataset URNs. By default, it is disabled.

- Set this to true if you prefer to have the s3 location instead of the Hive table. By default, it is disabled.
+ Set this to true if you prefer using the S3 location instead of the Hive table. By default, it is disabled.
```
`347-358`: Fix formatting issues. There are minor formatting issues in the changelog.

```diff
- ## Changelog
+
+ ## Changelog
- ### Version 0.2.12
+
+ ### Version 0.2.12
- - Add option to lowercase dataset URNs
+
+ - Add option to lowercase dataset URNs
```

Tools: Markdownlint — metadata-integration/java/spark-lineage-beta/README.md

- 347: MD004 (ul-style) — unordered list style (expected dash, actual plus)
- 347: MD009 (no-trailing-spaces) — trailing spaces (expected 0 or 2, actual 1)
- 348: MD022 (blanks-around-headings) — heading should be surrounded by blank lines (missing above)
- 350: MD022 (blanks-around-headings) — heading should be surrounded by blank lines (missing below)
- 353: MD024 (no-duplicate-heading) — multiple headings with the same content
- 347, 351: MD032 (blanks-around-lists) — lists should be surrounded by blank lines

Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Commits: files that changed from the base of the PR and between ec55767086cd61c692461505e6744e4b524c7a82 and 4db4ff8ba2e6005995a594c3b5c72526c17de3a5.

Files selected for processing (3)

- metadata-ingestion/docs/sources/databricks/README.md (1 hunks)
- metadata-integration/java/spark-lineage-beta/README.md (7 hunks)
- metadata-integration/java/spark-lineage-beta/src/main/java/io/openlineage/spark/agent/util/RddPathUtils.java (1 hunks)

Additional comments not posted (11)

metadata-ingestion/docs/sources/databricks/README.md (1)

`14-14`: Approved: the addition of the recommendation to use the Spark agent for real-time activity and lineage tracking is helpful and aligns with the document's purpose.

metadata-integration/java/spark-lineage-beta/src/main/java/io/openlineage/spark/agent/util/RddPathUtils.java (7)

`29-40`: Approved: the `findRDDPaths` method is well-structured and uses streams effectively to find paths from RDDs. (An illustrative dispatch sketch follows this review.)

`42-53`: Approved: the `UnknownRDDExtractor` method correctly logs unknown RDD classes at the debug level and returns an empty stream.

`56-67`: Approved: the `HadoopRDDExtractor` method correctly extracts paths from HadoopRDDs and maps them using `PlanUtils.getDirectoryPath`.

`70-80`: Approved: the `MapPartitionsRDDExtractor` method is efficient and leverages recursion to extract paths from MapPartitionsRDDs.

`83-107`: Approved: the `FileScanRDDExtractor` method correctly handles different Spark versions and extracts paths from FileScanRDDs.

`110-146`: Approved: the `ParallelCollectionRDDExtractor` method correctly extracts paths from ParallelCollectionRDDs and logs relevant debug information.

`149-154`: Approved: the `parentOf` method is straightforward and correctly handles exceptions by returning null.

metadata-integration/java/spark-lineage-beta/README.md (3)

`27-27`: Approved: the configuration instructions for spark-submit are clear and provide the necessary details for configuring the Spark agent.

`44-44`: Approved: the configuration instructions for Amazon EMR are clear and provide the necessary details for configuring the Spark agent on Amazon EMR.

`59-59`: Approved: the configuration instructions for Notebooks are clear and provide the necessary details for configuring the Spark agent in Notebooks.
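An illustrative dispatch sketch matching the review's description of `RddPathUtils.findRDDPaths`: try a chain of extractors, each handling one RDD flavor, and fall back to an empty stream (the `UnknownRDDExtractor` behavior) when no extractor matches. All types here are simplified stand-ins, not the real classes.

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;

// Sketch only: illustrates the extractor-chain pattern the review describes.
class RddPathsSketch {
  interface Extractor {
    boolean handles(Object rdd);
    Stream<String> extract(Object rdd);
  }

  static Stream<String> findRDDPaths(Object rdd, List<Extractor> extractors) {
    Optional<Extractor> match =
        extractors.stream().filter(e -> e.handles(rdd)).findFirst();
    // Unknown RDD classes would be logged at debug level and yield no paths.
    return match.map(e -> e.extract(rdd)).orElseGet(Stream::empty);
  }
}
```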
…10843) Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…atahub-project#10843) Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary by CodeRabbit

- New Features: option to convert dataset URNs to lowercase; per-platform environment and platform instance configuration.
- Bug Fixes: corrected downstream dataset handling in column-level lineage.
- Documentation: updated the Spark lineage README with the new configuration options.
- Tests: added OpenLineage-to-DataHub tests covering platform instances, per-platform env, and lowercased URNs.