0.1.1 Added source class for GCP BigQuery #56
Conversation
...-data-platform/real_world_use_cases/near_data_lake/config/sample_queries/near_transaction.py (outdated review thread, resolved)
This looks great, @mrutunjay-kinagi!
@tusharchou, @redpheonixx: time for you guys to review again!
@@ -11,7 +11,7 @@ class Pipeline(Flow):

    def __init__(self, config: Config, *args, **kwargs):
        self.config = config
        self.source = Source(**config.metadata['source'])
        self.target = Target(**config.metadata['target'])
        # self.source = Source(**config.metadata['source'])
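For context, here is a minimal sketch of how the constructor could read once the stale commented-out line is dropped; the import paths, the `super().__init__()` call, and forwarding `*args, **kwargs` to `Flow` are assumptions not shown in the diff:

```python
# Sketch only: the import paths below are assumptions; the diff above does not show them.
from local_data_platform import Config, Flow, Source, Target


class Pipeline(Flow):
    def __init__(self, config: Config, *args, **kwargs):
        # Forwarding the extra args to the base Flow is an assumption.
        super().__init__(*args, **kwargs)
        self.config = config
        # Build the source and target from the 'source' and 'target'
        # sections of the pipeline config metadata, as in the diff above.
        self.source = Source(**config.metadata['source'])
        self.target = Target(**config.metadata['target'])
```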
@mrutunjay-kinagi what went wrong here?
This looks great, though let's try to reduce the number of files and increase the number of commits next time. Cheers!
from pathlib import Path
import json
from local_data_platform import Credentials
from local_data_platform import logger
This should be:
from local_data_platform.logger import log
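Applying that suggestion, the import block above would presumably become the following sketch (it assumes `local_data_platform.logger` exposes a `log` object, as the comment indicates):

```python
from pathlib import Path
import json
from local_data_platform import Credentials
from local_data_platform.logger import log  # per the review: import log, not the logger module
```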
Name: Added source class for GCP BigQuery
About: Implemented BigQuery integration with local-data-platform
Description: This implementation enables local-data-platform to pull data from BigQuery, store it as CSV, and upload the CSV data to Iceberg.
Is your pull request related to an issue? Please tag. #19
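As an illustration of the described flow (not the project's actual implementation), the sketch below pulls a BigQuery query result, stores it as CSV, and appends the CSV data to an Iceberg table using the google-cloud-bigquery and pyiceberg clients; the query, file path, catalog name, and table identifier are placeholder assumptions:

```python
# Illustrative sketch of the BigQuery -> CSV -> Iceberg flow described above.
# The query, file path, catalog name, and table identifier are placeholders.
import pyarrow.csv as pa_csv
from google.cloud import bigquery
from pyiceberg.catalog import load_catalog

# 1. Pull data from BigQuery into an Arrow table.
client = bigquery.Client()
arrow_table = client.query(
    "SELECT * FROM `my_project.my_dataset.near_transaction`"
).to_arrow()

# 2. Store it locally as CSV.
pa_csv.write_csv(arrow_table, "near_transaction.csv")

# 3. Upload the CSV data to an Iceberg table through a configured catalog.
#    The catalog must be defined (e.g. in ~/.pyiceberg.yaml) and the CSV's
#    inferred schema must match the Iceberg table's schema.
catalog = load_catalog("local")
table = catalog.load_table("near_data_lake.near_transaction")
table.append(pa_csv.read_csv("near_transaction.csv"))
```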