
feat(ingest/spark): Promote beta plugin #10881

Merged
merged 27 commits into datahub-project:master from spark_kafka_emitter on Jul 25, 2024

Conversation

treff7es
Contributor

@treff7es treff7es commented Jul 10, 2024

  • Add Kafka emitter to emit lineage to Kafka (see the configuration sketch below the checklist)

  • Add File emitter to emit lineage to a file

  • Add S3 emitter to save MCPs to S3

  • Upgrading OpenLineage to 1.19.0

  • Renaming project to acryl-datahub-spark-lineage

  • Supporting OpenLineage 1.17+ glue identifier changes

  • Fix handling of OpenLineage inputs/outputs where no facet was attached

  • Bumping OpenLineage version to 1.19.0

  • The PR conforms to DataHub's Contributing Guideline (particularly Commit Message Format)

  • Links to related issues (if applicable)

  • Tests for the changes have been added/updated (if applicable)

  • Docs related to the changes have been added/updated (if applicable). If a new feature has been added a Usage Guide has been added for the same.

  • For any breaking change/potential downtime/deprecation/big changes an entry has been made in Updating DataHub
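
For context on the emitter options listed above, here is a hedged sketch of what selecting the new Kafka emitter could look like from Spark configuration. Only spark.jars.packages, spark.extraListeners, spark.datahub.emitter, and the spark.datahub.kafka.producer_config.* prefix appear in the README excerpts later in this thread; the bootstrap key name below is an assumption for illustration.

```java
import org.apache.spark.SparkConf;

// Hedged sketch: wiring the acryl-spark-lineage listener and choosing an emitter.
// Valid spark.datahub.emitter values per the README are rest, kafka, or file.
public class SparkLineageConfigExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("lineage-demo")
        .set("spark.jars.packages", "io.acryl:acryl-spark-lineage:0.2.13")
        .set("spark.extraListeners", "datahub.spark.DatahubSparkListener")
        .set("spark.datahub.emitter", "kafka")                      // default is rest
        .set("spark.datahub.kafka.bootstrap", "localhost:9092")     // assumed key name
        .set("spark.datahub.kafka.producer_config.client.id", "my_client_id");
    // Pass `conf` to SparkSession.builder().config(conf) as usual.
    System.out.println(conf.toDebugString());
  }
}
```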

Summary by CodeRabbit

  • New Features

    • Enhanced AWS services functionality with the latest dependency update.
    • Introduced new configuration options for metadata emission methods, including REST, Kafka, and file.
  • Updates

    • Upgraded openLineageVersion to 1.19.0.
    • Revised documentation paths and references to reflect the rebranded acryl-spark-lineage.
  • Documentation

    • Updated links in various documentation files for clarity and consistency with new paths, improving user access to resources.
    • Enhanced configuration table structure in the acryl-spark-lineage README for better readability.
  • Chores

    • Adjusted identifiers and paths in documentation to align with the new structure.
  • Improvements

    • Enhanced emitter initialization process to support multiple emitter types, improving configurability and robustness.

@github-actions github-actions bot added the ingestion label (PR or Issue related to the ingestion of metadata) on Jul 10, 2024
@treff7es treff7es marked this pull request as draft July 10, 2024 06:31
Contributor

coderabbitai bot commented Jul 10, 2024

Warning

Rate limit exceeded

@treff7es has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 11 minutes and 21 seconds before requesting another review.

How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

Commits

Files that changed from the base of the PR and between 344563f and 20c4efb.

Walkthrough

This update enhances the acryl-spark-lineage integration, focusing on version upgrades and improved documentation paths. Key changes include updating the openLineageVersion and adding a new AWS S3 dependency. Documentation links have been revised for clarity, ensuring users can easily access updated resources for Spark event listeners and configuration instructions. Additionally, various references have been restructured to reflect the rebranding, promoting a more coherent user experience.

Changes

| File Path | Change Summary |
|---|---|
| build.gradle | Updated openLineageVersion from '1.16.0' to '1.19.0' and added AWS S3 dependency software.amazon.awssdk:s3:2.26.21. |
| metadata-ingestion/docs/sources/databricks/README.md | Updated link reference for Spark agent configuration instructions to the new documentation path. |
| docs-website/filterTagIndexes.json, docs-website/sidebars.js | Changed references from spark-lineage and spark-lineage-beta to acryl-spark-lineage, indicating documentation rebranding. |
| docs/lineage/openlineage.md | Updated links for Spark Event Listener Plugin and PathSpec support to reflect new paths under acryl-spark-lineage. |
| metadata-ingestion/README.md | Updated reference for Spark in push-based integrations to the new documentation path. |
| metadata-integration/java/acryl-spark-lineage/README.md, metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java, metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/conf/SparkConfigParser.java | Introduced significant configuration modifications and enhanced emitter initialization to support multiple types, improving clarity and robustness. |

Poem

In the garden of code, new blossoms arise,
With AWS magic, we reach for the skies.
Links have been woven, paths freshly spun,
Through acryl-spark-lineage, our journey's begun.
Hooray for the changes, let joy take its flight,
A rabbit's delight in the code's shining light! 🐇🌷✨


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between cd932c3 and 5fb85b7.

Files ignored due to path filters (7)
  • metadata-integration/java/spark-lineage-legacy/spark-smoke-test/resources/data/in1.csv/part1.csv is excluded by !**/*.csv
  • metadata-integration/java/spark-lineage-legacy/spark-smoke-test/resources/data/in2.csv/part1.csv is excluded by !**/*.csv
  • metadata-integration/java/spark-lineage-legacy/spark-smoke-test/test-spark-lineage/gradle/wrapper/gradle-wrapper.jar is excluded by !**/*.jar
  • metadata-integration/java/spark-lineage-legacy/src/test/resources/data/in1.csv/part1.csv is excluded by !**/*.csv
  • metadata-integration/java/spark-lineage-legacy/src/test/resources/data/in2.csv/part1.csv is excluded by !**/*.csv
  • metadata-integration/java/spark-lineage-legacy/src/test/resources/data/in3.csv/part1.csv is excluded by !**/*.csv
  • metadata-integration/java/spark-lineage-legacy/src/test/resources/data/in4.csv/part1.csv is excluded by !**/*.csv
Files selected for processing (15)
  • build.gradle (1 hunks)
  • metadata-ingestion/docs/sources/databricks/README.md (1 hunks)
  • metadata-integration/java/acryl-spark-lineage/README.md (7 hunks)
  • metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java (1 hunks)
  • metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/conf/SparkConfigParser.java (5 hunks)
  • metadata-integration/java/acryl-spark-lineage/src/main/java/io/openlineage/spark/agent/lifecycle/plan/InsertIntoHadoopFsRelationVisitor.java (2 hunks)
  • metadata-integration/java/acryl-spark-lineage/src/main/java/io/openlineage/spark/agent/util/RddPathUtils.java (1 hunks)
  • metadata-integration/java/acryl-spark-lineage/src/test/java/datahub/spark/OpenLineageEventToDatahubTest.java (2 hunks)
  • metadata-integration/java/acryl-spark-lineage/src/test/resources/ol_events/redshift_mixed_case_lineage_spark.json (1 hunks)
  • metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/config/DatahubOpenlineageConfig.java (1 hunks)
  • metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java (4 hunks)
  • metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/DatahubJob.java (1 hunks)
  • metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/PathSpec.java (1 hunks)
  • metadata-integration/java/spark-lineage-legacy/src/main/java/datahub/spark/consumer/impl/McpEmitter.java (5 hunks)
  • settings.gradle (1 hunks)
Files skipped from review due to trivial changes (4)
  • build.gradle
  • metadata-ingestion/docs/sources/databricks/README.md
  • metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java
  • metadata-integration/java/acryl-spark-lineage/src/test/resources/ol_events/redshift_mixed_case_lineage_spark.json
Additional context used
Markdownlint
metadata-integration/java/acryl-spark-lineage/README.md

347-347: Expected: dash; Actual: plus
Unordered list style

(MD004, ul-style)

Additional comments not posted (39)
metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/PathSpec.java (1)

17-17: Good use of Optional for null safety.

The change to use Optional<String> for the env field improves null safety and aligns with best practices.
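
As a small illustration of the pattern being praised here (a hedged sketch; field names and the PROD default are assumptions drawn from the README defaults, not the actual PathSpec class):

```java
import java.util.Optional;

// Hedged sketch: an Optional<String> env field that is absent rather than null,
// with the fallback applied where the value is consumed.
public class PathSpecSketch {
  private final Optional<String> env;

  public PathSpecSketch(String env) {
    this.env = Optional.ofNullable(env);
  }

  public String resolveEnv() {
    return env.orElse("PROD");   // documented default environment
  }

  public static void main(String[] args) {
    System.out.println(new PathSpecSketch(null).resolveEnv());  // PROD
    System.out.println(new PathSpecSketch("DEV").resolveEnv()); // DEV
  }
}
```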

metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/config/DatahubOpenlineageConfig.java (1)

36-36: New field addition for dataset URNs case handling.

The addition of the lowerCaseDatasetUrns field with a default value of false is a backward-compatible change for additional functionality.
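
For illustration, a hedged sketch of a builder-style boolean option that defaults to false so existing configurations are unaffected (shape assumed; the real field lives in DatahubOpenlineageConfig):

```java
// Hedged sketch: the new flag defaults to false, so behaviour only changes on opt-in.
public class OpenlineageConfigSketch {
  private final boolean lowerCaseDatasetUrns;

  private OpenlineageConfigSketch(Builder builder) {
    this.lowerCaseDatasetUrns = builder.lowerCaseDatasetUrns;
  }

  public boolean isLowerCaseDatasetUrns() {
    return lowerCaseDatasetUrns;
  }

  public static class Builder {
    private boolean lowerCaseDatasetUrns = false;   // backward-compatible default

    public Builder lowerCaseDatasetUrns(boolean value) {
      this.lowerCaseDatasetUrns = value;
      return this;
    }

    public OpenlineageConfigSketch build() {
      return new OpenlineageConfigSketch(this);
    }
  }
}
```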

settings.gradle (1)

55-55: Approved inclusion of new modules.

The addition of spark-lineage-legacy and acryl-spark-lineage modules aligns with the functionalities introduced in the PR.

Also applies to: 61-61

metadata-integration/java/acryl-spark-lineage/src/main/java/io/openlineage/spark/agent/lifecycle/plan/InsertIntoHadoopFsRelationVisitor.java (1)

41-41: Refactor ensures robustness and correctness.

The refactored method includes necessary checks for SparkSession presence and improves path trimming logic, enhancing the robustness and correctness of the code.

Also applies to: 54-58, 63-63, 70-71, 79-79, 82-86, 88-91

metadata-integration/java/acryl-spark-lineage/src/main/java/io/openlineage/spark/agent/util/RddPathUtils.java (8)

29-39: LGTM!

The findRDDPaths method is well-implemented and correctly uses the Stream API to find and extract paths from RDDs.


42-53: LGTM!

The UnknownRDDExtractor class correctly implements the RddPathExtractor interface and logs debug information for unknown RDDs.


56-67: LGTM!

The HadoopRDDExtractor class correctly implements the RddPathExtractor interface and extracts paths from HadoopRDD.


70-80: LGTM!

The MapPartitionsRDDExtractor class correctly implements the RddPathExtractor interface and extracts paths from MapPartitionsRDD.


83-106: LGTM!

The FileScanRDDExtractor class correctly implements the RddPathExtractor interface and extracts paths from FileScanRDD.


110-145: LGTM!

The ParallelCollectionRDDExtractor class correctly implements the RddPathExtractor interface and extracts paths from ParallelCollectionRDD.


149-154: LGTM!

The parentOf method correctly returns the parent path of a given path string.


157-160: LGTM!

The RddPathExtractor interface correctly defines methods for extracting paths from RDDs.
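
To make the extractor pattern these comments walk through concrete, here is a simplified sketch with plain Java types (the real OpenLineage RddPathUtils works on Spark RDD and Hadoop Path objects, and the stand-in extractor below is hypothetical):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

// Hedged sketch: each extractor declares which RDD types it handles and how to
// pull file paths out of them; findRDDPaths picks the first matching extractor.
public class RddPathSketch {

  interface RddPathExtractor {
    boolean isDefinedAt(Object rdd);
    Stream<String> extract(Object rdd);
  }

  // Stand-in for HadoopRDDExtractor, MapPartitionsRDDExtractor, FileScanRDDExtractor, etc.
  static class ListBackedExtractor implements RddPathExtractor {
    public boolean isDefinedAt(Object rdd) { return rdd instanceof List; }
    @SuppressWarnings("unchecked")
    public Stream<String> extract(Object rdd) { return ((List<String>) rdd).stream(); }
  }

  static final List<RddPathExtractor> EXTRACTORS = Arrays.asList(new ListBackedExtractor());

  static Stream<String> findRDDPaths(Object rdd) {
    return EXTRACTORS.stream()
        .filter(e -> e.isDefinedAt(rdd))
        .findFirst()
        .map(e -> e.extract(rdd))
        .orElseGet(Stream::empty);   // unknown RDD types yield no paths
  }

  public static void main(String[] args) {
    findRDDPaths(Arrays.asList("s3://bucket/data/part-0001")).forEach(System.out::println);
  }
}
```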

metadata-integration/java/spark-lineage-legacy/src/main/java/datahub/spark/consumer/impl/McpEmitter.java (3)

26-35: LGTM!

The added fields for KafkaEmitter configuration are necessary and appropriately defined.


45-53: LGTM!

The getEmitter method correctly handles the creation of KafkaEmitter and handles exceptions properly.


129-163: LGTM!

The McpEmitter constructor correctly handles the configuration for KafkaEmitter and logs relevant information.

metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/conf/SparkConfigParser.java (4)

45-46: LGTM!

The added constant DATASET_LOWERCASE_URNS is necessary for the new configuration.


157-157: LGTM!

The sparkConfigToDatahubOpenlineageConf method correctly includes the builder for lowerCaseDatasetUrns.


252-258: LGTM!

The getPathSpecListMap method correctly includes handling of optional env and platformInstance.


351-354: LGTM!

The isLowerCaseDatasetUrns method correctly checks for the new configuration.

metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/dataset/DatahubJob.java (1)

283-287: LGTM!

The processDownstreams method correctly handles new configurations and materializes datasets.

metadata-integration/java/acryl-spark-lineage/README.md (6)

27-27: Approve the version update in spark-submit section.

The version for io.acryl:acryl-spark-lineage has been updated to 0.2.13. Ensure this is consistent across all sections.


35-35: Approve the version update in spark-submit command line section.

The version for io.acryl:acryl-spark-lineage has been updated to 0.2.13.


44-44: Approve the version update in Amazon EMR section.

The version for io.acryl:acryl-spark-lineage has been updated to 0.2.13.


59-59: Approve the version update in Notebooks section.

The version for io.acryl:acryl-spark-lineage has been updated to 0.2.13.


82-82: Approve the version update in Standalone Java Applications section.

The version for io.acryl:acryl-spark-lineage has been updated to 0.2.13.


348-361: Approve the changelog update.

The changelog has been updated with Version 0.2.13, listing the new features and fixes.

metadata-integration/java/acryl-spark-lineage/src/test/java/datahub/spark/OpenLineageEventToDatahubTest.java (7)

604-636: Approve the new test method testProcessRedshiftOutputWithPlatformInstance.

The test method correctly verifies processing Redshift output with a specific platform instance.


638-681: Approve the new test method testProcessRedshiftOutputWithPlatformSpecificPlatformInstance.

The test method correctly verifies processing Redshift output with a platform-specific platform instance.


683-721: Approve the new test method testProcessRedshiftOutputWithPlatformSpecificEnv.

The test method correctly verifies processing Redshift output with platform-specific environment settings.


724-755: Approve the new test method testProcessRedshiftOutputLowercasedUrns.

The test method correctly verifies processing Redshift output with lower-cased URNs.


Line range hint 570-603: Approve the new test method testProcessGlueOlEvent.

The test method correctly verifies processing Glue events.


604-636: Approve the new test method testProcessGlueOlEventSymlinkDisabled.

The test method correctly verifies processing Glue events with symlink resolution disabled.


637-681: Approve the new test method testProcessGlueOlEventWithHiveAlias.

The test method correctly verifies processing Glue events with a Hive alias.

metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java (6)

7-7: New Imports Added

The new imports FabricType, PathSpec, and DatahubUtils appear necessary for the new feature implementations. Ensure these classes are correctly utilized in the code.

Also applies to: 61-62


199-216: New Method: getEnv

The method retrieves the environment based on the platform from PathSpec. It falls back to a default value if not found. The logic appears correct.


218-232: New Method: getPlatformInstance

The method retrieves the platform instance similarly to getEnv, by checking PathSpec first. It has a fallback to a common instance if not found. The logic appears correct.
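
As a rough illustration of the PathSpec-first, default-second resolution these two comments describe (a hedged sketch; the real methods take OpenLineage datasets and the DatahubOpenlineageConfig, and the names here are assumptions):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Hedged sketch: resolve the environment from a matching PathSpec, else fall back.
public class EnvResolutionSketch {
  static class PathSpec {
    final String platform;
    final Optional<String> env;
    PathSpec(String platform, String env) {
      this.platform = platform;
      this.env = Optional.ofNullable(env);
    }
  }

  static String getEnv(List<PathSpec> pathSpecs, String platform, String defaultEnv) {
    return pathSpecs.stream()
        .filter(ps -> ps.platform.equals(platform) && ps.env.isPresent())
        .map(ps -> ps.env.get())
        .findFirst()
        .orElse(defaultEnv);   // e.g. "PROD" when no PathSpec overrides the env
  }

  public static void main(String[] args) {
    List<PathSpec> specs = Arrays.asList(new PathSpec("s3", "DEV"), new PathSpec("redshift", null));
    System.out.println(getEnv(specs, "s3", "PROD"));        // DEV
    System.out.println(getEnv(specs, "redshift", "PROD"));  // PROD
  }
}
```

getPlatformInstance would follow the same shape, swapping the env field for a platform instance and falling back to a common instance.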


Line range hint 263-263:
Utility Method Review

Verify all utility methods introduced are complete and logically sound.


Line range hint 157-197:
Check the logic for handling lowercase URNs

The logic for converting namespace and dataset names to lowercase is correctly implemented based on the isLowerCaseDatasetUrns config flag.

However, ensure that this feature is thoroughly tested to avoid any unintended effects on existing datasets.
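
For illustration, a minimal sketch of the lowercasing behaviour this comment refers to (assumed shape; the actual converter builds full DataHub dataset URNs rather than plain strings):

```java
import java.util.Locale;

// Hedged sketch: lowercase namespace and name only when lowerCaseDatasetUrns is enabled.
public class UrnCaseSketch {
  static String datasetKey(String namespace, String name, boolean lowerCaseDatasetUrns) {
    String key = namespace + "." + name;
    return lowerCaseDatasetUrns ? key.toLowerCase(Locale.ROOT) : key;
  }

  public static void main(String[] args) {
    System.out.println(datasetKey("Redshift", "Public.MyTable", true));   // redshift.public.mytable
    System.out.println(datasetKey("Redshift", "Public.MyTable", false));  // Redshift.Public.MyTable
  }
}
```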


Line range hint 263-263:
Validate Integration of New Methods

Ensure the correct integration of newly added getEnv and getPlatformInstance methods within convertRunEventToJob, processJobInputs, and processJobOutputs.

Verification successful

Validate Integration of New Methods

The methods getEnv and getPlatformInstance are correctly integrated within the OpenLineageToDataHub.java file. Their usage in generating dataset URNs and retrieving platform instance information is consistent with the intended functionality.

  • File: metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java
  • Methods:
    • convertRunEventToJob
    • processJobInputs
    • processJobOutputs
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `getEnv` and `getPlatformInstance` methods within the code.
ast-grep --lang java --pattern $'getEnv($_, $_);'
ast-grep --lang java --pattern $'getPlatformInstance($_, $_);'

Length of output: 114


Script:

#!/bin/bash
# Searching for the usage of `getEnv` method in the codebase
rg 'getEnv'

# Searching for the usage of `getPlatformInstance` method in the codebase
rg 'getPlatformInstance'

Length of output: 5752


Script:

#!/bin/bash
# Searching for the definitions and usages of `convertRunEventToJob` in the `OpenLineageToDataHub.java` file.
rg 'convertRunEventToJob' metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java

# Searching for the definitions and usages of `processJobInputs` in the `OpenLineageToDataHub.java` file.
rg 'processJobInputs' metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java

# Searching for the definitions and usages of `processJobOutputs` in the `OpenLineageToDataHub.java` file.
rg 'processJobOutputs' metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java

Length of output: 693


Script:

#!/bin/bash
# Extracting context around the definition and usage of `convertRunEventToJob` to verify integration.
rg 'convertRunEventToJob' -A 10 metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java

# Extracting context around the definition and usage of `processJobInputs` to verify integration.
rg 'processJobInputs' -A 10 metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java

# Extracting context around the definition and usage of `processJobOutputs` to verify integration.
rg 'processJobOutputs' -A 10 metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java

Length of output: 2668


Script:

#!/bin/bash
# Searching for the usage of `getEnv` within the OpenLineageToDataHub.java file
rg 'getEnv' -A 10 metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java

# Searching for the usage of `getPlatformInstance` within the OpenLineageToDataHub.java file
rg 'getPlatformInstance' -A 10 metadata-integration/java/openlineage-converter/src/main/java/io/datahubproject/openlineage/converter/OpenLineageToDataHub.java

Length of output: 4183

- Upgrading OpenLineage to 1.18.0
- Renaming project to acryl-datahub-spark-lineage
- Supporting OpenLineage 1.17+ glue identifier changes
- Removing custom
Collaborator

?

treff7es and others added 7 commits July 22, 2024 10:14
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Renaming old spark plugin to legacy
Adding kafka emitter
Bumping OpenLineage to 0.17.1
Bumping OpenLineage version to 1.18.0
Adding s3 emitter
- Fixing openlineage inputs where there were no facets included
@treff7es treff7es marked this pull request as ready for review July 23, 2024 20:51
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Outside diff range, codebase verification and nitpick comments (1)
docs/lineage/openlineage.md (1)

86-86: LGTM!

The guide link for the Spark Lineage plugin has been correctly updated to point to the new documentation path.

Consider revising the preposition.

The preposition “on” seems more appropriate in this position.

- Follow the guides of the Spark Lineage plugin page
+ Follow the guides on the Spark Lineage plugin page
Tools
LanguageTool

[uncategorized] ~86-~86: The preposition “on” seems more likely in this position.
Context: ...re. #### How to Use Follow the guides of the Spark Lineage plugin page for more ...

(AI_EN_LECTOR_REPLACEMENT_PREPOSITION)

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 5fb85b7 and 5c2f478.

Files selected for processing (5)
  • build.gradle (2 hunks)
  • docs-website/filterTagIndexes.json (1 hunks)
  • docs-website/sidebars.js (2 hunks)
  • docs/lineage/openlineage.md (3 hunks)
  • metadata-ingestion/docs/sources/databricks/README.md (1 hunks)
Files skipped from review due to trivial changes (3)
  • docs-website/filterTagIndexes.json
  • docs-website/sidebars.js
  • metadata-ingestion/docs/sources/databricks/README.md
Additional context used
LanguageTool
docs/lineage/openlineage.md

[uncategorized] ~86-~86: The preposition “on” seems more likely in this position.
Context: ...re. #### How to Use Follow the guides of the Spark Lineage plugin page for more ...

(AI_EN_LECTOR_REPLACEMENT_PREPOSITION)

Additional comments not posted (5)
docs/lineage/openlineage.md (3)

9-9: LGTM!

The link to the Spark Event Listener Plugin has been correctly updated to point to the new documentation path.


76-76: LGTM!

The link to PathSpec Support has been correctly updated to point to the new documentation path.


92-92: LGTM!

The link to the DataHub Spark Lineage Plugin has been correctly updated to point to the new documentation path.

build.gradle (2)

57-57: LGTM!

The openLineageVersion has been correctly updated to 1.19.0, reflecting the latest version.


114-114: LGTM!

The awsS3 dependency has been correctly added with version 2.26.21, expanding the project's capabilities to interact with Amazon S3 services.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between c53fdb6 and 772c593.

Files selected for processing (1)
  • docs-website/sidebars.js (2 hunks)
Files skipped from review as they are similar to previous changes (1)
  • docs-website/sidebars.js

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 772c593 and ad6321d.

Files selected for processing (2)
  • docs-website/sidebars.js (2 hunks)
  • metadata-ingestion/README.md (1 hunks)
Files skipped from review due to trivial changes (1)
  • metadata-ingestion/README.md
Files skipped from review as they are similar to previous changes (1)
  • docs-website/sidebars.js

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between ad6321d and 0b59cd4.

Files selected for processing (1)
  • docs/lineage/openlineage.md (3 hunks)
Files skipped from review due to trivial changes (1)
  • docs/lineage/openlineage.md

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 0b59cd4 and 57ccba8.

Files selected for processing (1)
  • docs-website/sidebars.js (2 hunks)
Files skipped from review as they are similar to previous changes (1)
  • docs-website/sidebars.js

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Outside diff range, codebase verification and nitpick comments (2)
metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/conf/SparkConfigParser.java (1)

28-28: Renamed constant: Update required in McpEmitter.java.

The constant TRANSPORT_KEY should be renamed to EMITTER_TYPE in the following locations:

  • metadata-integration/java/spark-lineage-legacy/src/main/java/datahub/spark/consumer/impl/McpEmitter.java:
    • Line 5: private static final String TRANSPORT_KEY = "transport";
    • Line 15: datahubConf.hasPath(TRANSPORT_KEY) ? datahubConf.getString(TRANSPORT_KEY) : "rest";
Analysis chain

Renamed constant: Verify usage across the codebase.

The constant TRANSPORT_KEY has been renamed to EMITTER_TYPE. Ensure that all references to TRANSPORT_KEY are updated accordingly.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all references to `TRANSPORT_KEY` have been updated to `EMITTER_TYPE`.

# Test: Search for the old constant name. Expect: No occurrences of `TRANSPORT_KEY`.
rg --type java $'TRANSPORT_KEY'

Length of output: 391


Script:

#!/bin/bash
# Description: Check the context of `TRANSPORT_KEY` usage in `McpEmitter.java` to determine if it should be renamed to `EMITTER_TYPE`.

# Extract lines around the occurrences of `TRANSPORT_KEY` to understand its context
rg --type java -A 5 -B 5 $'TRANSPORT_KEY' metadata-integration/java/spark-lineage-legacy/src/main/java/datahub/spark/consumer/impl/McpEmitter.java

Length of output: 1074

metadata-integration/java/acryl-spark-lineage/README.md (1)

160-164: New configuration options: Ensure clarity and correctness.

New configuration options have been added. Ensure that the descriptions are clear and the default values are correct.

- | spark.datahub.emitter                                  |          | rest                  | Specify the ways to emit metadata. By default it sends to DataHub using REST emitter. Valid options are rest, kafka or file                                                               |
+ | spark.datahub.emitter                                  |          | rest                  | Specify the ways to emit metadata. By default, it sends to DataHub using REST emitter. Valid options are rest, kafka, or file.                                                             |
Tools
LanguageTool

[uncategorized] ~164-~164: Did you mean: “By default,”?
Context: ... | Specify the ways to emit metadata. By default it sends to DataHub using REST emitter....

(BY_DEFAULT_COMMA)

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 57ccba8 and d307a5c.

Files selected for processing (4)
  • metadata-integration/java/acryl-spark-lineage/README.md (4 hunks)
  • metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java (4 hunks)
  • metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/conf/SparkConfigParser.java (3 hunks)
  • metadata-integration/java/datahub-client/src/main/java/datahub/client/kafka/KafkaEmitter.java (1 hunks)
Files skipped from review due to trivial changes (1)
  • metadata-integration/java/datahub-client/src/main/java/datahub/client/kafka/KafkaEmitter.java
Additional context used
LanguageTool
metadata-integration/java/acryl-spark-lineage/README.md

[uncategorized] ~164-~164: Did you mean: “By default,”?
Context: ... | Specify the ways to emit metadata. By default it sends to DataHub using REST emitter....

(BY_DEFAULT_COMMA)


[grammar] ~177-~177: The word “fallback” is a noun. The verb is spelled with a space.
Context: ...s#fabrictype). In all other cases, will fallback to PROD ...

(NOUN_VERB_CONFUSION)


[grammar] ~180-~180: After ‘It’, use the third-person verb form “changes”.
Context: ...ition pattern. (e.g. /partition=\d+) It change database/table/partition=123 to databas...

(IT_VBZ)


[typographical] ~186-~186: The word “otherwise” is an adverb that can’t be used like a conjunction, and therefore needs to be separated from the sentence.
Context: ... is set it will be used as the DataFlow name otherwise it uses spark app name as flow_name ...

(THUS_SENTENCE)

Markdownlint
metadata-integration/java/acryl-spark-lineage/README.md

165-165: null
Bare URL used

(MD034, no-bare-urls)


356-356: Expected: dash; Actual: plus
Unordered list style

(MD004, ul-style)

Additional comments not posted (11)
metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/conf/SparkConfigParser.java (5)

312-318: Deprecation strategy in method: Ensure backward compatibility.

The method isIncludeSchemaMetadata now checks the new constant first and falls back to the deprecated alias. Verify that this logic maintains backward compatibility.
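
A minimal sketch of that fallback, using the Typesafe Config calls (hasPath/getBoolean) visible elsewhere in this review; the key strings below are illustrative assumptions, not the actual constants:

```java
import com.typesafe.config.Config;

// Hedged sketch: prefer the new key, fall back to the deprecated alias so that
// existing configurations keep working.
public class SchemaMetadataFlagSketch {
  static final String DATASET_INCLUDE_SCHEMA_METADATA = "metadata.dataset.include_schema_metadata";
  static final String DEPRECATED_ALIAS = "metadata.include_schema_metadata";

  static boolean isIncludeSchemaMetadata(Config datahubConf) {
    if (datahubConf.hasPath(DATASET_INCLUDE_SCHEMA_METADATA)) {
      return datahubConf.getBoolean(DATASET_INCLUDE_SCHEMA_METADATA);
    }
    // Backward compatibility: honour the old key when the new one is absent.
    return datahubConf.hasPath(DEPRECATED_ALIAS) && datahubConf.getBoolean(DEPRECATED_ALIAS);
  }
}
```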


35-39: New Kafka emitter constants: Ensure correct usage.

New constants related to Kafka emitter configuration have been added. Verify their usage in the codebase to ensure they are correctly implemented.

Verification successful

Kafka emitter constants are correctly implemented.

The newly added Kafka emitter constants are used appropriately in the codebase, specifically in the DatahubSparkListener class for accessing configuration properties.

  • KAFKA_EMITTER_BOOTSTRAP: Used in DatahubSparkListener.java lines 1 and 2.
  • KAFKA_EMITTER_SCHEMA_REGISTRY_URL: Used in DatahubSparkListener.java lines 1 and 2.
  • KAFKA_EMITTER_SCHEMA_REGISTRY_CONFIG: Used in DatahubSparkListener.java lines 1 and 2.
  • KAFKA_EMITTER_PRODUCER_CONFIG: Used in DatahubSparkListener.java lines 1 and 2.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of the new Kafka emitter constants.

# Test: Search for the new constants. Expect: Occurrences of each constant in relevant contexts.
for constant in KAFKA_EMITTER_BOOTSTRAP KAFKA_EMITTER_SCHEMA_REGISTRY_URL KAFKA_EMITTER_SCHEMA_REGISTRY_CONFIG KAFKA_EMITTER_PRODUCER_CONFIG; do
  rg --type java $constant
done

Length of output: 2910


63-66: Deprecation strategy: Ensure backward compatibility.

The constant DATASET_INCLUDE_SCHEMA_METADATA has been renamed to DATASET_INCLUDE_SCHEMA_METADATA_DEPRECATED_ALIAS, and a new constant DATASET_INCLUDE_SCHEMA_METADATA has been introduced. Ensure that the deprecation strategy maintains backward compatibility.

Verification successful

Deprecation strategy verified: Backward compatibility maintained.

The code ensures backward compatibility by retaining the usage of the deprecated constant DATASET_INCLUDE_SCHEMA_METADATA_DEPRECATED_ALIAS while introducing the new constant DATASET_INCLUDE_SCHEMA_METADATA.

  • SparkConfigParser.java:
    • Both constants are used to check for their respective configurations and retrieve their values.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of the deprecated and new constants.

# Test: Search for the old and new constants. Expect: Occurrences of both constants in relevant contexts.
for constant in DATASET_INCLUDE_SCHEMA_METADATA_DEPRECATED_ALIAS DATASET_INCLUDE_SCHEMA_METADATA; do
  rg --type java $constant
done

Length of output: 1939


40-47: New S3 emitter constants: Ensure correct usage.

New constants related to S3 emitter configuration have been added. Verify their usage in the codebase to ensure they are correctly implemented.

Verification successful

New S3 emitter constants: Correct usage verified.

The new constants related to S3 emitter configuration have been correctly implemented and used in the DatahubSparkListener class.

  • S3_EMITTER_BUCKET
  • S3_EMITTER_REGION
  • S3_EMITTER_ENDPOINT
  • S3_EMITTER_PREFIX
  • S3_EMITTER_ACCESS_KEY
  • S3_EMITTER_SECRET_KEY
  • S3_EMITTER_PROFILE
  • S3_EMITTER_FILE_NAME

Each constant is checked for existence in the configuration and then used to set a property in the s3EmitterConfig object.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of the new S3 emitter constants.

# Test: Search for the new constants. Expect: Occurrences of each constant in relevant contexts.
for constant in S3_EMITTER_BUCKET S3_EMITTER_REGION S3_EMITTER_ENDPOINT S3_EMITTER_PREFIX S3_EMITTER_ACCESS_KEY S3_EMITTER_SECRET_KEY S3_EMITTER_PROFILE S3_EMITTER_FILE_NAME; do
  rg --type java $constant
done

Length of output: 5807


31-31: New constant: Ensure correct usage.

The new constant FILE_EMITTER_FILE_NAME has been added. Verify its usage in the codebase to ensure it is correctly implemented.

Verification successful

New constant: Correct usage verified.

The new constant FILE_EMITTER_FILE_NAME is correctly used in the DatahubSparkListener.java file to set the file name in the fileEmitterConfig.

  • DatahubSparkListener.java: The constant is used within the initializeEmitter method to fetch the configuration value from sparkConf.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of the new constant `FILE_EMITTER_FILE_NAME`.

# Test: Search for the new constant. Expect: Occurrences of `FILE_EMITTER_FILE_NAME` in relevant contexts.
rg --type java $'FILE_EMITTER_FILE_NAME'

Length of output: 411


Script:

#!/bin/bash
# Description: Verify the context of `fileEmitterConfig` and `sparkConf` in DatahubSparkListener.java.

# Search for the definition and usage of `fileEmitterConfig` in DatahubSparkListener.java
rg --type java 'fileEmitterConfig' -A 5 -B 5

# Search for the definition and usage of `sparkConf` in DatahubSparkListener.java
rg --type java 'sparkConf' -A 5 -B 5

Length of output: 72916

metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java (5)

107-108: Emitter type configuration: Ensure default value.

The emitter type is determined from the configuration, defaulting to "rest" if not specified. Verify that this logic is correctly implemented and the default value is appropriate.


261-263: Emitter configuration: Ensure correct integration.

The emitter configuration is integrated into the SparkLineageConf. Verify that this integration is correctly implemented.


272-272: Initialization: Ensure context factory initialization.

The context factory is initialized if not already done. Verify that this logic is correctly implemented.


8-16: New imports: Ensure necessary dependencies.

New imports have been added for different emitter configurations. Verify that these imports are necessary and correctly used in the code.

Verification successful

Imports are necessary and correctly used.

The new imports for different emitter configurations are used in multiple relevant files within the codebase, indicating that they are necessary and correctly utilized.

  • datahub.client.file.FileEmitterConfig: Used in DatahubSparkListener.java, FileDatahubEmitterConfig.java, Proto2DataHub.java, PatchTest.java, and S3Emitter.java.
  • datahub.client.kafka.KafkaEmitterConfig: Used in McpEmitter.java, KafkaDatahubEmitterConfig.java, and DatahubSparkListener.java.
  • datahub.client.s3.S3EmitterConfig: Used in DatahubSparkListener.java and S3DatahubEmitterConfig.java.
  • datahub.spark.conf.FileDatahubEmitterConfig: Used in DatahubSparkListener.java and DatahubEventEmitter.java.
  • datahub.spark.conf.KafkaDatahubEmitterConfig: Used in DatahubSparkListener.java and DatahubEventEmitter.java.
  • datahub.spark.conf.S3DatahubEmitterConfig: Used in DatahubSparkListener.java and DatahubEventEmitter.java.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of the new imports.

# Test: Search for the usage of each new import. Expect: Occurrences of each import in relevant contexts.
for import in datahub.client.file.FileEmitterConfig datahub.client.kafka.KafkaEmitterConfig datahub.client.s3.S3EmitterConfig datahub.spark.conf.FileDatahubEmitterConfig datahub.spark.conf.KafkaDatahubEmitterConfig datahub.spark.conf.S3DatahubEmitterConfig; do
  rg --type java $import
done

Length of output: 4266


110-238: Switch-case for emitter types: Ensure correct initialization.

The switch-case structure initializes different emitter configurations based on the emitter type. Verify that each case correctly initializes the respective emitter configuration.

Verification successful

Switch-case for emitter types: Ensure correct initialization.

The initialization of different emitter configurations appears to be correctly handled based on the search results. Each emitter type ("rest", "kafka", "file", "s3") is referenced and initialized in the codebase as expected.

  • rest: Found in metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java
  • kafka: Found in metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java
  • file: Found in metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java
  • s3: Found in metadata-integration/java/acryl-spark-lineage/src/main/java/datahub/spark/DatahubSparkListener.java
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the initialization of different emitter configurations.

# Test: Search for the usage of each emitter type. Expect: Occurrences of each emitter type in relevant contexts.
for emitter in rest kafka file s3; do
  rg --type java $emitter
done

Length of output: 428424
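
To summarize the dispatch the review verified, here is a self-contained sketch; the stand-in config classes are placeholders rather than the real datahub.spark.conf types, and the actual initializeEmitter reads SparkConf rather than a plain string:

```java
import java.util.Optional;

// Hedged sketch of the emitter-type switch described above; "rest" is the documented default.
public class EmitterDispatchSketch {
  interface EmitterConfig {}
  static class RestEmitterConfig implements EmitterConfig {}
  static class KafkaEmitterConfig implements EmitterConfig {}
  static class FileEmitterConfig implements EmitterConfig {}
  static class S3EmitterConfig implements EmitterConfig {}

  static Optional<EmitterConfig> initializeEmitter(String emitterType) {
    switch (emitterType == null ? "rest" : emitterType) {
      case "rest":
        return Optional.of(new RestEmitterConfig());
      case "kafka":
        return Optional.of(new KafkaEmitterConfig());
      case "file":
        return Optional.of(new FileEmitterConfig());
      case "s3":
        return Optional.of(new S3EmitterConfig());
      default:
        // Unknown value: emit nothing rather than fail the Spark job.
        return Optional.empty();
    }
  }

  public static void main(String[] args) {
    System.out.println(initializeEmitter("kafka").isPresent());  // true
    System.out.println(initializeEmitter("bogus").isPresent());  // false
  }
}
```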

metadata-integration/java/acryl-spark-lineage/README.md (1)

358-367: Changelog update: Ensure accuracy.

The changelog has been updated to reflect new features and changes. Ensure that the descriptions are accurate and complete.

@@ -340,20 +349,32 @@ log4j.logger.datahub.client.rest=DEBUG
Use Java 8 to build the project. The project uses Gradle as the build tool. To build the project, run the following command:

```shell
./gradlew -PjavaClassVersionDefault=8 :metadata-integration:java:spark-lineage-beta:shadowJar
./gradlew -PjavaClassVersionDefault=8 :metadata-integration:java:acryl-spark-lineage:shadowJar
```
## Known limitations

+
Contributor

Unordered list style: Use dash instead of plus.

Correct the unordered list style.

- + 
+ -

Committable suggestion was skipped due to low confidence.

Tools
Markdownlint

356-356: Expected: dash; Actual: plus
Unordered list style

(MD004, ul-style)

| spark.datahub.metadata.dataset.materialize | | false | Materialize Datasets in DataHub |
| spark.datahub.platform.s3.path_spec_list | | | List of pathspec per platform |
| spark.datahub.metadata.dataset.include_schema_metadata | false | | Emit dataset schema metadata based on the spark execution. It is recommended to get schema information from platform specific DataHub sources as this is less reliable |
| spark.datahub.flow_name | | | If it is set it will be used as the DataFlow name otherwise it uses spark app name as flow_name |
Contributor

Grammar correction: Use a comma before "otherwise".

Correct the conjunction usage.

- | spark.datahub.flow_name                                |          |                       | If it is set it will be used as the DataFlow name otherwise it uses spark app name as flow_name                                                                                           |
+ | spark.datahub.flow_name                                |          |                       | If it is set, it will be used as the DataFlow name, otherwise it uses spark app name as flow_name                                                                                           |
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
| spark.datahub.flow_name | | | If it is set it will be used as the DataFlow name otherwise it uses spark app name as flow_name |
| spark.datahub.flow_name | | | If it is set, it will be used as the DataFlow name, otherwise it uses spark app name as flow_name |
Tools
LanguageTool

[typographical] ~186-~186: The word “otherwise” is an adverb that can’t be used like a conjunction, and therefore needs to be separated from the sentence.
Context: ... is set it will be used as the DataFlow name otherwise it uses spark app name as flow_name ...

(THUS_SENTENCE)

| spark.datahub.metadata.dataset.env | | PROD | [Supported values](https://datahubproject.io/docs/graphql/enums#fabrictype). In all other cases, will fallback to PROD |
| spark.datahub.metadata.dataset.hivePlatformAlias | | hive | By default, datahub assigns Hive-like tables to the Hive platform. If you are using Glue as your Hive metastore, set this config flag to `glue` |
| spark.datahub.metadata.include_scheme | | true | Include scheme from the path URI (e.g. hdfs://, s3://) in the dataset URN. We recommend setting this value to false, it is set to true for backwards compatibility with previous versions |
| spark.datahub.metadata.remove_partition_pattern | | | Remove partition pattern. (e.g. /partition=\d+) It change database/table/partition=123 to database/table |
Contributor

Grammar correction: Use "changes".

Correct the verb form for "changes".

- | spark.datahub.metadata.remove_partition_pattern        |          |                       | Remove partition pattern. (e.g. /partition=\d+) It change database/table/partition=123 to database/table                                                                                  |
+ | spark.datahub.metadata.remove_partition_pattern        |          |                       | Remove partition pattern. (e.g. /partition=\d+) It changes database/table/partition=123 to database/table                                                                                  |
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
| spark.datahub.metadata.remove_partition_pattern | | | Remove partition pattern. (e.g. /partition=\d+) It change database/table/partition=123 to database/table |
| spark.datahub.metadata.remove_partition_pattern | | | Remove partition pattern. (e.g. /partition=\d+) It changes database/table/partition=123 to database/table |
Tools
LanguageTool

[grammar] ~180-~180: After ‘It’, use the third-person verb form “changes”.
Context: ...ition pattern. (e.g. /partition=\d+) It change database/table/partition=123 to databas...

(IT_VBZ)

| spark.datahub.kafka.producer_config. | | | Additional config to pass in to the Kafka producer. For example: `--conf "spark.datahub.kafka.producer_config.client.id=my_client_id"` |
| spark.datahub.metadata.pipeline.platformInstance | | | Pipeline level platform instance |
| spark.datahub.metadata.dataset.platformInstance | | | dataset level platform instance (it is usefult to set if you have it in your glue ingestion) |
| spark.datahub.metadata.dataset.env | | PROD | [Supported values](https://datahubproject.io/docs/graphql/enums#fabrictype). In all other cases, will fallback to PROD |
Contributor

Grammar correction: Use "fall back".

Correct the verb form for "fall back".

- | spark.datahub.metadata.dataset.env                     |          | PROD                  | [Supported values](https://datahubproject.io/docs/graphql/enums#fabrictype). In all other cases, will fallback to PROD                                                                    |
+ | spark.datahub.metadata.dataset.env                     |          | PROD                  | [Supported values](https://datahubproject.io/docs/graphql/enums#fabrictype). In all other cases, will fall back to PROD                                                                    |
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
| spark.datahub.metadata.dataset.env | | PROD | [Supported values](https://datahubproject.io/docs/graphql/enums#fabrictype). In all other cases, will fallback to PROD |
| spark.datahub.metadata.dataset.env | | PROD | [Supported values](https://datahubproject.io/docs/graphql/enums#fabrictype). In all other cases, will fall back to PROD |
Tools
LanguageTool

[grammar] ~177-~177: The word “fallback” is a noun. The verb is spelled with a space.
Context: ...s#fabrictype). In all other cases, will fallback to PROD ...

(NOUN_VERB_CONFUSION)

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Outside diff range, codebase verification and nitpick comments (2)
metadata-integration/java/acryl-spark-lineage/README.md (2)

165-165: Avoid using bare URLs.

Consider using markdown syntax for URLs.

- Datahub server url  eg:<http://localhost:8080>
+ Datahub server URL, e.g., [http://localhost:8080](http://localhost:8080)
Tools
Markdownlint

165-165: null
Bare URL used

(MD034, no-bare-urls)


170-170: Clarify the filename usage.

Specify that the filename is required when using the file emitter.

- The file where metadata will be written if file emitter is set
+ The filename where metadata will be written when the file emitter is set
Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between d307a5c and 344563f.

Files selected for processing (1)
  • metadata-integration/java/acryl-spark-lineage/README.md (4 hunks)
Additional context used
LanguageTool
metadata-integration/java/acryl-spark-lineage/README.md

[uncategorized] ~164-~164: Did you mean: “By default,”?
Context: ... | Specify the ways to emit metadata. By default it sends to DataHub using REST emitter....

(BY_DEFAULT_COMMA)


[grammar] ~177-~177: The word “fallback” is a noun. The verb is spelled with a space.
Context: ...s#fabrictype). In all other cases, will fallback to PROD ...

(NOUN_VERB_CONFUSION)


[grammar] ~180-~180: After ‘It’, use the third-person verb form “changes”.
Context: ...ition pattern. (e.g. /partition=\d+) It change database/table/partition=123 to databas...

(IT_VBZ)


[typographical] ~186-~186: The word “otherwise” is an adverb that can’t be used like a conjunction, and therefore needs to be separated from the sentence.
Context: ... is set it will be used as the DataFlow name otherwise it uses spark app name as flow_name ...

(THUS_SENTENCE)

Markdownlint
metadata-integration/java/acryl-spark-lineage/README.md

165-165: null
Bare URL used

(MD034, no-bare-urls)


356-356: Expected: dash; Actual: plus
Unordered list style

(MD004, ul-style)

Additional comments not posted (4)
metadata-integration/java/acryl-spark-lineage/README.md (4)

177-177: Grammar correction: Use "fall back".

Correct the verb form for "fall back".

- will fallback to PROD
+ will fall back to PROD
Tools
LanguageTool

[grammar] ~177-~177: The word “fallback” is a noun. The verb is spelled with a space.
Context: ...s#fabrictype). In all other cases, will fallback to PROD ...

(NOUN_VERB_CONFUSION)


180-180: Grammar correction: Use "changes".

Correct the verb form for "changes".

- It change database/table/partition=123 to database/table
+ It changes database/table/partition=123 to database/table
Tools
LanguageTool

[grammar] ~180-~180: After ‘It’, use the third-person verb form “changes”.
Context: ...ition pattern. (e.g. /partition=\d+) It change database/table/partition=123 to databas...

(IT_VBZ)


186-186: Grammar correction: Use a comma before "otherwise".

Correct the conjunction usage.

- If it is set it will be used as the DataFlow name otherwise it uses spark app name as flow_name
+ If it is set, it will be used as the DataFlow name, otherwise it uses spark app name as flow_name
Tools
LanguageTool

[typographical] ~186-~186: The word “otherwise” is an adverb that can’t be used like a conjunction, and therefore needs to be separated from the sentence.
Context: ... is set it will be used as the DataFlow name otherwise it uses spark app name as flow_name ...

(THUS_SENTENCE)


356-356: Unordered list style: Use dash instead of plus.

Correct the unordered list style.

- + 
+ -
Tools
Markdownlint

356-356: Expected: dash; Actual: plus
Unordered list style

(MD004, ul-style)

Comment on lines 360 to 367
- Add Kafka emitter to emit lineage to kafka
- Add File emitter to emit lineage to file
- Add S3 emitter to save mcps to s3
- Upgrading OpenLineage to 1.19.0
- Renaming project to acryl-datahub-spark-lineage
- Supporting OpenLineage 1.17+ glue identifier changes
- Fix handling OpenLineage input/output where wasn't any facet attached
- Bumping OpenLineage version to 1.19.0
Contributor

Remove redundant entry for OpenLineage version upgrade.

The entry for upgrading OpenLineage to 1.19.0 is duplicated.

- Upgrading OpenLineage to 1.19.0
- Bumping OpenLineage version to 1.19.0
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- Add Kafka emitter to emit lineage to kafka
- Add File emitter to emit lineage to file
- Add S3 emitter to save mcps to s3
- Upgrading OpenLineage to 1.19.0
- Renaming project to acryl-datahub-spark-lineage
- Supporting OpenLineage 1.17+ glue identifier changes
- Fix handling OpenLineage input/output where wasn't any facet attached
- Bumping OpenLineage version to 1.19.0
- Add Kafka emitter to emit lineage to kafka
- Add File emitter to emit lineage to file
- Add S3 emitter to save mcps to s3
- Renaming project to acryl-datahub-spark-lineage
- Supporting OpenLineage 1.17+ glue identifier changes
- Fix handling OpenLineage input/output where wasn't any facet attached
- Bumping OpenLineage version to 1.19.0

|--------------------------------------------------------|----------|-----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| spark.jars.packages | ✅ | | Set with latest/required version io.acryl:acryl-spark-lineage:0.2.13 |
| spark.extraListeners | ✅ | | datahub.spark.DatahubSparkListener |
| spark.datahub.emitter | | rest | Specify the ways to emit metadata. By default it sends to DataHub using REST emitter. Valid options are rest, kafka or file |
Contributor

Grammar correction: Use a comma after "By default".

Correct the sentence structure for clarity.

- Specify the ways to emit metadata. By default it sends to DataHub using REST emitter.
+ Specify the ways to emit metadata. By default, it sends to DataHub using REST emitter.
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
| spark.datahub.emitter | | rest | Specify the ways to emit metadata. By default it sends to DataHub using REST emitter. Valid options are rest, kafka or file |
| spark.datahub.emitter | | rest | Specify the ways to emit metadata. By default, it sends to DataHub using REST emitter. Valid options are rest, kafka or file |
Tools
LanguageTool

[uncategorized] ~164-~164: Did you mean: “By default,”?
Context: ... | Specify the ways to emit metadata. By default it sends to DataHub using REST emitter....

(BY_DEFAULT_COMMA)
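
Tying the three documented table rows together, a minimal sketch of the default REST setup might look like the following; the spark.datahub.rest.server key is an assumption for where the DataHub endpoint would go and is not quoted in this PR:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lineage-rest-sketch")
    .config("spark.jars.packages", "io.acryl:acryl-spark-lineage:0.2.13")
    .config("spark.extraListeners", "datahub.spark.DatahubSparkListener")
    .config("spark.datahub.emitter", "rest")  # "rest" is already the default; set here only for clarity
    .config("spark.datahub.rest.server", "http://localhost:8080")  # assumed key name for the GMS endpoint
    .getOrCreate()
)
```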

@treff7es treff7es merged commit f4fb89e into datahub-project:master Jul 25, 2024
61 of 62 checks passed
@treff7es treff7es deleted the spark_kafka_emitter branch July 25, 2024 12:46
arosanda added a commit to infobip/datahub that referenced this pull request Sep 23, 2024
* feat(forms) Handle deleting forms references when hard deleting forms (datahub-project#10820)

* refactor(ui): Misc improvements to the setup ingestion flow (ingest uplift 1/2)  (datahub-project#10764)

Co-authored-by: John Joyce <john@Johns-MBP.lan>
Co-authored-by: John Joyce <john@ip-192-168-1-200.us-west-2.compute.internal>

* fix(ingestion/airflow-plugin): pipeline tasks discoverable in search (datahub-project#10819)

* feat(ingest/transformer): tags to terms transformer (datahub-project#10758)

Co-authored-by: Aseem Bansal <asmbansal2@gmail.com>

* fix(ingestion/unity-catalog): fixed issue with profiling with GE turned on (datahub-project#10752)

Co-authored-by: Aseem Bansal <asmbansal2@gmail.com>

* feat(forms) Add java SDK for form entity PATCH + CRUD examples (datahub-project#10822)

* feat(SDK) Add java SDK for structuredProperty entity PATCH + CRUD examples (datahub-project#10823)

* feat(SDK) Add StructuredPropertyPatchBuilder in python sdk and provide sample CRUD files (datahub-project#10824)

* feat(forms) Add CRUD endpoints to GraphQL for Form entities (datahub-project#10825)

* add flag for includeSoftDeleted in scroll entities API (datahub-project#10831)

* feat(deprecation) Return actor entity with deprecation aspect (datahub-project#10832)

* feat(structuredProperties) Add CRUD graphql APIs for structured property entities (datahub-project#10826)

* add scroll parameters to openapi v3 spec (datahub-project#10833)

* fix(ingest): correct profile_day_of_week implementation (datahub-project#10818)

* feat(ingest/glue): allow ingestion of empty databases from Glue (datahub-project#10666)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* feat(cli): add more details to get cli (datahub-project#10815)

* fix(ingestion/glue): ensure date formatting works on all platforms for aws glue (datahub-project#10836)

* fix(ingestion): fix datajob patcher (datahub-project#10827)

* fix(smoke-test): add suffix in temp file creation (datahub-project#10841)

* feat(ingest/glue): add helper method to permit user or group ownership (datahub-project#10784)

* feat(): Show data platform instances in policy modal if they are set on the policy (datahub-project#10645)

Co-authored-by: Hendrik Richert <hendrik.richert@swisscom.com>

* docs(patch): add patch documentation for how implementation works (datahub-project#10010)

Co-authored-by: John Joyce <john@acryl.io>

* fix(jar): add missing custom-plugin-jar task (datahub-project#10847)

* fix(): also check exceptions/stack trace when filtering log messages (datahub-project#10391)

Co-authored-by: John Joyce <john@acryl.io>

* docs(): Update posts.md (datahub-project#9893)

Co-authored-by: Hyejin Yoon <0327jane@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore(ingest): update acryl-datahub-classify version (datahub-project#10844)

* refactor(ingest): Refactor structured logging to support infos, warnings, and failures structured reporting to UI (datahub-project#10828)

Co-authored-by: John Joyce <john@Johns-MBP.lan>
Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* fix(restli): log aspect-not-found as a warning rather than as an error (datahub-project#10834)

* fix(ingest/nifi): remove duplicate upstream jobs (datahub-project#10849)

* fix(smoke-test): test access to create/revoke personal access tokens (datahub-project#10848)

* fix(smoke-test): missing test for move domain (datahub-project#10837)

* ci: update usernames to not considered for community (datahub-project#10851)

* env: change defaults for data contract visibility (datahub-project#10854)

* fix(ingest/tableau): quote special characters in external URL (datahub-project#10842)

* fix(smoke-test): fix flakiness of auto complete test

* ci(ingest): pin dask dependency for feast (datahub-project#10865)

* fix(ingestion/lookml): liquid template resolution and view-to-view cll (datahub-project#10542)

* feat(ingest/audit): add client id and version in system metadata props (datahub-project#10829)

* chore(ingest): Mypy 1.10.1 pin (datahub-project#10867)

* docs: use acryl-datahub-actions as expected python package to install (datahub-project#10852)

* docs: add new js snippet (datahub-project#10846)

* refactor(ingestion): remove company domain for security reason (datahub-project#10839)

* fix(ingestion/spark): Platform instance and column level lineage fix (datahub-project#10843)

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* feat(ingestion/tableau): optionally ingest multiple sites and create site containers (datahub-project#10498)

Co-authored-by: Yanik Häni <Yanik.Haeni1@swisscom.com>

* fix(ingestion/looker): Add sqlglot dependency and remove unused sqlparser (datahub-project#10874)

* fix(manage-tokens): fix manage access token policy (datahub-project#10853)

* Batch get entity endpoints (datahub-project#10880)

* feat(system): support conditional write semantics (datahub-project#10868)

* fix(build): upgrade vercel builds to Node 20.x (datahub-project#10890)

* feat(ingest/lookml): shallow clone repos (datahub-project#10888)

* fix(ingest/looker): add missing dependency (datahub-project#10876)

* fix(ingest): only populate audit stamps where accurate (datahub-project#10604)

* fix(ingest/dbt): always encode tag urns (datahub-project#10799)

* fix(ingest/redshift): handle multiline alter table commands (datahub-project#10727)

* fix(ingestion/looker): column name missing in explore (datahub-project#10892)

* fix(lineage) Fix lineage source/dest filtering with explored per hop limit (datahub-project#10879)

* feat(conditional-writes): misc updates and fixes (datahub-project#10901)

* feat(ci): update outdated action (datahub-project#10899)

* feat(rest-emitter): adding async flag to rest emitter (datahub-project#10902)

Co-authored-by: Gabe Lyons <gabe.lyons@acryl.io>

* feat(ingest): add snowflake-queries source (datahub-project#10835)

* fix(ingest): improve `auto_materialize_referenced_tags_terms` error handling (datahub-project#10906)

* docs: add new company to adoption list (datahub-project#10909)

* refactor(redshift): Improve redshift error handling with new structured reporting system (datahub-project#10870)

Co-authored-by: John Joyce <john@Johns-MBP.lan>
Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* feat(ui) Finalize support for all entity types on forms (datahub-project#10915)

* Index ExecutionRequestResults status field (datahub-project#10811)

* feat(ingest): grafana connector (datahub-project#10891)

Co-authored-by: Shirshanka Das <shirshanka@apache.org>
Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* fix(gms) Add Form entity type to EntityTypeMapper (datahub-project#10916)

* feat(dataset): add support for external url in Dataset (datahub-project#10877)

* docs(saas-overview) added missing features to observe section (datahub-project#10913)

Co-authored-by: John Joyce <john@acryl.io>

* fix(ingest/spark): Fixing Micrometer warning (datahub-project#10882)

* fix(structured properties): allow application of structured properties without schema file (datahub-project#10918)

* fix(data-contracts-web) handle other schedule types (datahub-project#10919)

* fix(ingestion/tableau): human-readable message for PERMISSIONS_MODE_SWITCHED error (datahub-project#10866)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* Add feature flag for view defintions (datahub-project#10914)

Co-authored-by: Ethan Cartwright <ethan.cartwright@acryl.io>

* feat(ingest/BigQuery): refactor+parallelize dataset metadata extraction (datahub-project#10884)

* fix(airflow): add error handling around render_template() (datahub-project#10907)

* feat(ingestion/sqlglot): add optional `default_dialect` parameter to sqlglot lineage (datahub-project#10830)

* feat(mcp-mutator): new mcp mutator plugin (datahub-project#10904)

* fix(ingest/bigquery): changes helper function to decode unicode scape sequences (datahub-project#10845)

* feat(ingest/postgres): fetch table sizes for profile (datahub-project#10864)

* feat(ingest/abs): Adding azure blob storage ingestion source (datahub-project#10813)

* fix(ingest/redshift): reduce severity of SQL parsing issues (datahub-project#10924)

* fix(build): fix lint fix web react (datahub-project#10896)

* fix(ingest/bigquery): handle quota exceeded for project.list requests (datahub-project#10912)

* feat(ingest): report extractor failures more loudly (datahub-project#10908)

* feat(ingest/snowflake): integrate snowflake-queries into main source (datahub-project#10905)

* fix(ingest): fix docs build (datahub-project#10926)

* fix(ingest/snowflake): fix test connection (datahub-project#10927)

* fix(ingest/lookml): add view load failures to cache (datahub-project#10923)

* docs(slack) overhauled setup instructions and screenshots (datahub-project#10922)

Co-authored-by: John Joyce <john@acryl.io>

* fix(airflow): Add comma parsing of owners to DataJobs (datahub-project#10903)

* fix(entityservice): fix merging sideeffects (datahub-project#10937)

* feat(ingest): Support System Ingestion Sources, Show and hide system ingestion sources with Command-S (datahub-project#10938)

Co-authored-by: John Joyce <john@Johns-MBP.lan>

* chore() Set a default lineage filtering end time on backend when a start time is present (datahub-project#10925)

Co-authored-by: John Joyce <john@ip-192-168-1-200.us-west-2.compute.internal>
Co-authored-by: John Joyce <john@Johns-MBP.lan>

* Added relationships APIs to V3. Added these generic APIs to V3 swagger doc. (datahub-project#10939)

* docs: add learning center to docs (datahub-project#10921)

* doc: Update hubspot form id (datahub-project#10943)

* chore(airflow): add python 3.11 w/ Airflow 2.9 to CI (datahub-project#10941)

* fix(ingest/Glue): column upstream lineage between S3 and Glue (datahub-project#10895)

* fix(ingest/abs): split abs utils into multiple files (datahub-project#10945)

* doc(ingest/looker): fix doc for sql parsing documentation (datahub-project#10883)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* fix(ingest/bigquery): Adding missing BigQuery types (datahub-project#10950)

* fix(ingest/setup): feast and abs source setup (datahub-project#10951)

* fix(connections) Harden adding /gms to connections in backend (datahub-project#10942)

* feat(siblings) Add flag to prevent combining siblings in the UI (datahub-project#10952)

* fix(docs): make graphql doc gen more automated (datahub-project#10953)

* feat(ingest/athena): Add option for Athena partitioned profiling (datahub-project#10723)

* fix(spark-lineage): default timeout for future responses (datahub-project#10947)

* feat(datajob/flow): add environment filter using info aspects (datahub-project#10814)

* fix(ui/ingest): correct privilege used to show tab (datahub-project#10483)

Co-authored-by: Kunal-kankriya <127090035+Kunal-kankriya@users.noreply.github.com>

* feat(ingest/looker): include dashboard urns in browse v2 (datahub-project#10955)

* add a structured type to batchGet in OpenAPI V3 spec (datahub-project#10956)

* fix(ui): scroll on the domain sidebar to show all domains (datahub-project#10966)

* fix(ingest/sagemaker): resolve incorrect variable assignment for SageMaker API call (datahub-project#10965)

* fix(airflow/build): Pinning mypy (datahub-project#10972)

* Fixed a bug where the OpenAPI V3 spec was incorrect. The bug was introduced in datahub-project#10939. (datahub-project#10974)

* fix(ingest/test): Fix for mssql integration tests (datahub-project#10978)

* fix(entity-service) exist check correctly extracts status (datahub-project#10973)

* fix(structuredProps) casing bug in StructuredPropertiesValidator (datahub-project#10982)

* bugfix: use anyOf instead of allOf when creating references in openapi v3 spec (datahub-project#10986)

* fix(ui): Remove ant less imports (datahub-project#10988)

* feat(ingest/graph): Add get_results_by_filter to DataHubGraph (datahub-project#10987)

* feat(ingest/cli): init does not actually support environment variables (datahub-project#10989)

* fix(ingest/graph): Update get_results_by_filter graphql query (datahub-project#10991)

* feat(ingest/spark): Promote beta plugin (datahub-project#10881)

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* feat(ingest): support domains in meta -> "datahub" section (datahub-project#10967)

* feat(ingest): add `check server-config` command (datahub-project#10990)

* feat(cli): Make consistent use of DataHubGraphClientConfig (datahub-project#10466)

Deprecates get_url_and_token() in favor of a more complete option: load_graph_config() that returns a full DatahubClientConfig.
This change was then propagated across previous usages of get_url_and_token so that connections to DataHub server from the client respect the full breadth of configuration specified by DatahubClientConfig.

I.e: You can now specify disable_ssl_verification: true in your ~/.datahubenv file so that all cli functions to the server work when ssl certification is disabled.

Fixes datahub-project#9705

* fix(ingest/s3): Fixing container creation when there is no folder in path (datahub-project#10993)

* fix(ingest/looker): support platform instance for dashboards & charts (datahub-project#10771)

* feat(ingest/bigquery): improve handling of information schema in sql parser (datahub-project#10985)

* feat(ingest): improve `ingest deploy` command (datahub-project#10944)

* fix(backend): allow excluding soft-deleted entities in relationship-queries; exclude soft-deleted members of groups (datahub-project#10920)

- allow excluding soft-deleted entities in relationship-queries
- exclude soft-deleted members of groups

* fix(ingest/looker): downgrade missing chart type log level (datahub-project#10996)

* doc(acryl-cloud): release docs for 0.3.4.x (datahub-project#10984)

Co-authored-by: John Joyce <john@acryl.io>
Co-authored-by: RyanHolstien <RyanHolstien@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Pedro Silva <pedro@acryl.io>

* fix(protobuf/build): Fix protobuf check jar script (datahub-project#11006)

* fix(ui/ingest): Support invalid cron jobs (datahub-project#10998)

* fix(ingest): fix graph config loading (datahub-project#11002)

Co-authored-by: Pedro Silva <pedro@acryl.io>

* feat(docs): Document __DATAHUB_TO_FILE_ directive (datahub-project#10968)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* fix(graphql/upsertIngestionSource): Validate cron schedule; parse error in CLI (datahub-project#11011)

* feat(ece): support custom ownership type urns in ECE generation (datahub-project#10999)

* feat(assertion-v2): changed Validation tab to Quality and created new Governance tab (datahub-project#10935)

* fix(ingestion/glue): Add support for missing config options for profiling in Glue (datahub-project#10858)

* feat(propagation): Add models for schema field docs, tags, terms (datahub-project#2959) (datahub-project#11016)

Co-authored-by: Chris Collins <chriscollins3456@gmail.com>

* docs: standardize terminology to DataHub Cloud (datahub-project#11003)

* fix(ingestion/transformer): replace the externalUrl container (datahub-project#11013)

* docs(slack) troubleshoot docs (datahub-project#11014)

* feat(propagation): Add graphql API (datahub-project#11030)

Co-authored-by: Chris Collins <chriscollins3456@gmail.com>

* feat(propagation):  Add models for Action feature settings (datahub-project#11029)

* docs(custom properties): Remove duplicate from sidebar (datahub-project#11033)

* feat(models): Introducing Dataset Partitions Aspect (datahub-project#10997)

Co-authored-by: John Joyce <john@Johns-MBP.lan>
Co-authored-by: John Joyce <john@ip-192-168-1-200.us-west-2.compute.internal>

* feat(propagation): Add Documentation Propagation Settings (datahub-project#11038)

* fix(models): chart schema fields mapping, add dataHubAction entity, t… (datahub-project#11040)

* fix(ci): smoke test lint failures (datahub-project#11044)

* docs: fix learning center color scheme & typo (datahub-project#11043)

* feat: add cloud main page (datahub-project#11017)

Co-authored-by: Jay <159848059+jayacryl@users.noreply.github.com>

* feat(restore-indices): add additional step to also clear system metadata service (datahub-project#10662)

Co-authored-by: John Joyce <john@acryl.io>

* docs: fix typo (datahub-project#11046)

* fix(lint): apply spotless (datahub-project#11050)

* docs(airflow): example query to get datajobs for a dataflow (datahub-project#11034)

* feat(cli): Add run-id option to put sub-command (datahub-project#11023)

Adds an option to assign run-id to a given put command execution. 
This is useful when transformers do not exist for a given ingestion payload, we can follow up with custom metadata and assign it to an ingestion pipeline.

* fix(ingest): improve sql error reporting calls (datahub-project#11025)

* fix(airflow): fix CI setup (datahub-project#11031)

* feat(ingest/dbt): add experimental `prefer_sql_parser_lineage` flag (datahub-project#11039)

* fix(ingestion/lookml): enable stack-trace in lookml logs (datahub-project#10971)

* (chore): Linting fix (datahub-project#11015)

* chore(ci): update deprecated github actions (datahub-project#10977)

* Fix ALB configuration example (datahub-project#10981)

* chore(ingestion-base): bump base image packages (datahub-project#11053)

* feat(cli): Trim report of dataHubExecutionRequestResult to max GMS size (datahub-project#11051)

* fix(ingestion/lookml): emit dummy sql condition for lookml custom condition tag (datahub-project#11008)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* fix(ingestion/powerbi): fix issue with broken report lineage (datahub-project#10910)

* feat(ingest/tableau): add retry on timeout (datahub-project#10995)

* change generate kafka connect properties from env (datahub-project#10545)

Co-authored-by: david-leifker <114954101+david-leifker@users.noreply.github.com>

* fix(ingest): fix oracle cronjob ingestion (datahub-project#11001)

Co-authored-by: david-leifker <114954101+david-leifker@users.noreply.github.com>

* chore(ci): revert update deprecated github actions (datahub-project#10977) (datahub-project#11062)

* feat(ingest/dbt-cloud): update metadata_endpoint inference (datahub-project#11041)

* build: Reduce size of datahub-frontend-react image by 50-ish% (datahub-project#10878)

Co-authored-by: david-leifker <114954101+david-leifker@users.noreply.github.com>

* fix(ci): Fix lint issue in datahub_ingestion_run_summary_provider.py (datahub-project#11063)

* docs(ingest): update developing-a-transformer.md (datahub-project#11019)

* feat(search-test): update search tests from datahub-project#10408 (datahub-project#11056)

* feat(cli): add aspects parameter to DataHubGraph.get_entity_semityped (datahub-project#11009)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* docs(airflow): update min version for plugin v2 (datahub-project#11065)

* doc(ingestion/tableau): doc update for derived permission (datahub-project#11054)

Co-authored-by: Pedro Silva <pedro.cls93@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* fix(py): remove dep on types-pkg_resources (datahub-project#11076)

* feat(ingest/mode): add option to exclude restricted (datahub-project#11081)

* fix(ingest): set lastObserved in sdk when unset (datahub-project#11071)

* doc(ingest): Update capabilities (datahub-project#11072)

* chore(vulnerability): Log Injection (datahub-project#11090)

* chore(vulnerability): Information exposure through a stack trace (datahub-project#11091)

* chore(vulnerability): Comparison of narrow type with wide type in loop condition (datahub-project#11089)

* chore(vulnerability): Insertion of sensitive information into log files (datahub-project#11088)

* chore(vulnerability): Risky Cryptographic Algorithm (datahub-project#11059)

* chore(vulnerability): Overly permissive regex range (datahub-project#11061)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* fix: update customer data (datahub-project#11075)

* fix(models): fixing the datasetPartition models (datahub-project#11085)

Co-authored-by: John Joyce <john@ip-192-168-1-200.us-west-2.compute.internal>

* fix(ui): Adding view, forms GraphQL query, remove showing a fallback error message on unhandled GraphQL error (datahub-project#11084)

Co-authored-by: John Joyce <john@ip-192-168-1-200.us-west-2.compute.internal>

* feat(docs-site): hiding learn more from cloud page (datahub-project#11097)

* fix(docs): Add correct usage of orFilters in search API docs (datahub-project#11082)

Co-authored-by: Jay <159848059+jayacryl@users.noreply.github.com>

* fix(ingest/mode): Regexp in mode name matcher didn't allow underscore (datahub-project#11098)

* docs: Refactor customer stories section (datahub-project#10869)

Co-authored-by: Jeff Merrick <jeff@wireform.io>

* fix(release): fix full/slim suffix on tag (datahub-project#11087)

* feat(config): support alternate hashing algorithm for doc id (datahub-project#10423)

Co-authored-by: david-leifker <114954101+david-leifker@users.noreply.github.com>
Co-authored-by: John Joyce <john@acryl.io>

* fix(emitter): fix typo in get method of java kafka emitter (datahub-project#11007)

* fix(ingest): use correct native data type in all SQLAlchemy sources by compiling data type using dialect (datahub-project#10898)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* chore: Update contributors list in PR labeler (datahub-project#11105)

* feat(ingest): tweak stale entity removal messaging (datahub-project#11064)

* fix(ingestion): enforce lastObserved timestamps in SystemMetadata (datahub-project#11104)

* fix(ingest/powerbi): fix broken lineage between chart and dataset (datahub-project#11080)

* feat(ingest/lookml): CLL support for sql set in sql_table_name attribute of lookml view (datahub-project#11069)

* docs: update graphql docs on forms & structured properties (datahub-project#11100)

* test(search): search openAPI v3 test (datahub-project#11049)

* fix(ingest/tableau): prevent empty site content urls (datahub-project#11057)

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* feat(entity-client): implement client batch interface (datahub-project#11106)

* fix(snowflake): avoid reporting warnings/info for sys tables (datahub-project#11114)

* fix(ingest): downgrade column type mapping warning to info (datahub-project#11115)

* feat(api): add AuditStamp to the V3 API entity/aspect response (datahub-project#11118)

* fix(ingest/redshift): replace r'\n' with '\n' to avoid token error redshift serverless… (datahub-project#11111)

* fix(entiy-client): handle null entityUrn case for restli (datahub-project#11122)

* fix(sql-parser): prevent bad urns from alter table lineage (datahub-project#11092)

* fix(ingest/bigquery): use small batch size if use_tables_list_query_v2 is set (datahub-project#11121)

* fix(graphql): add missing entities to EntityTypeMapper and EntityTypeUrnMapper (datahub-project#10366)

* feat(ui): Changes to allow editable dataset name (datahub-project#10608)

Co-authored-by: Jay Kadambi <jayasimhan_venkatadri@optum.com>

* fix: remove saxo (datahub-project#11127)

* feat(mcl-processor): Update mcl processor hooks (datahub-project#11134)

* fix(openapi): fix openapi v2 endpoints & v3 documentation update

* Revert "fix(openapi): fix openapi v2 endpoints & v3 documentation update"

This reverts commit 573c1cb.

* docs(policies): updates to policies documentation (datahub-project#11073)

* fix(openapi): fix openapi v2 and v3 docs update (datahub-project#11139)

* feat(auth): grant type and acr values custom oidc parameters support (datahub-project#11116)

* fix(mutator): mutator hook fixes (datahub-project#11140)

* feat(search): support sorting on multiple fields (datahub-project#10775)

* feat(ingest): various logging improvements (datahub-project#11126)

* fix(ingestion/lookml): fix for sql parsing error (datahub-project#11079)

Co-authored-by: Harshal Sheth <hsheth2@gmail.com>

* feat(docs-site) cloud page spacing and content polishes (datahub-project#11141)

* feat(ui) Enable editing structured props on fields (datahub-project#11042)

* feat(tests): add md5 and last computed to testResult model (datahub-project#11117)

* test(openapi): openapi regression smoke tests (datahub-project#11143)

* fix(airflow): fix tox tests + update docs (datahub-project#11125)

* docs: add chime to adoption stories (datahub-project#11142)

* fix(ingest/databricks): Updating code to work with Databricks sdk 0.30 (datahub-project#11158)

* fix(kafka-setup): add missing script to image (datahub-project#11190)

* fix(config): fix hash algo config (datahub-project#11191)

* test(smoke-test): updates to smoke-tests (datahub-project#11152)

* fix(elasticsearch): refactor idHashAlgo setting (datahub-project#11193)

* chore(kafka): kafka version bump (datahub-project#11211)

* readd UsageStatsWorkUnit

* fix merge problems

* change logo

---------

Co-authored-by: Chris Collins <chriscollins3456@gmail.com>
Co-authored-by: John Joyce <john@acryl.io>
Co-authored-by: John Joyce <john@Johns-MBP.lan>
Co-authored-by: John Joyce <john@ip-192-168-1-200.us-west-2.compute.internal>
Co-authored-by: dushayntAW <158567391+dushayntAW@users.noreply.github.com>
Co-authored-by: sagar-salvi-apptware <159135491+sagar-salvi-apptware@users.noreply.github.com>
Co-authored-by: Aseem Bansal <asmbansal2@gmail.com>
Co-authored-by: Kevin Chun <kevin1chun@gmail.com>
Co-authored-by: jordanjeremy <72943478+jordanjeremy@users.noreply.github.com>
Co-authored-by: skrydal <piotr.skrydalewicz@gmail.com>
Co-authored-by: Harshal Sheth <hsheth2@gmail.com>
Co-authored-by: david-leifker <114954101+david-leifker@users.noreply.github.com>
Co-authored-by: sid-acryl <155424659+sid-acryl@users.noreply.github.com>
Co-authored-by: Julien Jehannet <80408664+aviv-julienjehannet@users.noreply.github.com>
Co-authored-by: Hendrik Richert <github@richert.li>
Co-authored-by: Hendrik Richert <hendrik.richert@swisscom.com>
Co-authored-by: RyanHolstien <RyanHolstien@users.noreply.github.com>
Co-authored-by: Felix Lüdin <13187726+Masterchen09@users.noreply.github.com>
Co-authored-by: Pirry <158024088+chardaway@users.noreply.github.com>
Co-authored-by: Hyejin Yoon <0327jane@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: cburroughs <chris.burroughs@gmail.com>
Co-authored-by: ksrinath <ksrinath@users.noreply.github.com>
Co-authored-by: Mayuri Nehate <33225191+mayurinehate@users.noreply.github.com>
Co-authored-by: Kunal-kankriya <127090035+Kunal-kankriya@users.noreply.github.com>
Co-authored-by: Shirshanka Das <shirshanka@apache.org>
Co-authored-by: ipolding-cais <155455744+ipolding-cais@users.noreply.github.com>
Co-authored-by: Tamas Nemeth <treff7es@gmail.com>
Co-authored-by: Shubham Jagtap <132359390+shubhamjagtap639@users.noreply.github.com>
Co-authored-by: haeniya <yanik.haeni@gmail.com>
Co-authored-by: Yanik Häni <Yanik.Haeni1@swisscom.com>
Co-authored-by: Gabe Lyons <itsgabelyons@gmail.com>
Co-authored-by: Gabe Lyons <gabe.lyons@acryl.io>
Co-authored-by: 808OVADOZE <52988741+shtephlee@users.noreply.github.com>
Co-authored-by: noggi <anton.kuraev@acryl.io>
Co-authored-by: Nicholas Pena <npena@foursquare.com>
Co-authored-by: Jay <159848059+jayacryl@users.noreply.github.com>
Co-authored-by: ethan-cartwright <ethan.cartwright.m@gmail.com>
Co-authored-by: Ethan Cartwright <ethan.cartwright@acryl.io>
Co-authored-by: Nadav Gross <33874964+nadavgross@users.noreply.github.com>
Co-authored-by: Patrick Franco Braz <patrickfbraz@poli.ufrj.br>
Co-authored-by: pie1nthesky <39328908+pie1nthesky@users.noreply.github.com>
Co-authored-by: Joel Pinto Mata (KPN-DSH-DEX team) <130968841+joelmataKPN@users.noreply.github.com>
Co-authored-by: Ellie O'Neil <110510035+eboneil@users.noreply.github.com>
Co-authored-by: Ajoy Majumdar <ajoymajumdar@hotmail.com>
Co-authored-by: deepgarg-visa <149145061+deepgarg-visa@users.noreply.github.com>
Co-authored-by: Tristan Heisler <tristankheisler@gmail.com>
Co-authored-by: Andrew Sikowitz <andrew.sikowitz@acryl.io>
Co-authored-by: Davi Arnaut <davi.arnaut@acryl.io>
Co-authored-by: Pedro Silva <pedro@acryl.io>
Co-authored-by: amit-apptware <132869468+amit-apptware@users.noreply.github.com>
Co-authored-by: Sam Black <sam.black@acryl.io>
Co-authored-by: Raj Tekal <varadaraj_tekal@optum.com>
Co-authored-by: Steffen Grohsschmiedt <gitbhub@steffeng.eu>
Co-authored-by: jaegwon.seo <162448493+wornjs@users.noreply.github.com>
Co-authored-by: Renan F. Lima <51028757+lima-renan@users.noreply.github.com>
Co-authored-by: Matt Exchange <xkollar@users.noreply.github.com>
Co-authored-by: Jonny Dixon <45681293+acrylJonny@users.noreply.github.com>
Co-authored-by: Pedro Silva <pedro.cls93@gmail.com>
Co-authored-by: Pinaki Bhattacharjee <pinakipb2@gmail.com>
Co-authored-by: Jeff Merrick <jeff@wireform.io>
Co-authored-by: skrydal <piotr.skrydalewicz@acryl.io>
Co-authored-by: AndreasHegerNuritas <163423418+AndreasHegerNuritas@users.noreply.github.com>
Co-authored-by: jayasimhankv <145704974+jayasimhankv@users.noreply.github.com>
Co-authored-by: Jay Kadambi <jayasimhan_venkatadri@optum.com>
Co-authored-by: David Leifker <david.leifker@acryl.io>
Labels
ingestion PR or Issue related to the ingestion of metadata
Projects
None yet
2 participants