
[spike] Investigate suitability of Kedro for EL pipelines and incremental data loading #3578

astrojuanlu opened this issue Jan 30, 2024 · 23 comments


@astrojuanlu
Member

astrojuanlu commented Jan 30, 2024

Intro and context

Kedro describes itself in its README as a tool for data science and data engineering pipelines (emphasis mine):

Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.

As per kedro-org/kedro-devrel#94, these "data engineering and data science pipelines" reflect the two broad categories people have in mind when they talk about "pipelines": data pipelines and machine learning pipelines.

The focus of this issue is on data pipelines.

Data pipelines

Data pipelines are important because they are the beginning of any data project: you need to get your data from somewhere, to then start doing analysis, machine learning, and the like.

Data pipelines are tricky. For ETL architectures, the transformation step needs to be executed carefully, and it's coupled to both the source (extraction) and the target (loading). ELT is touted as the "modern" approach, but it creates a big overhead of often denormalised tables in the data warehouse.

According to the industry surveys collected in kedro-org/kedro-devrel#94, most teams use in-house tools, or no recognizable tools at all (a mess of Python scripts, Jupyter notebooks, and the like), which suggests that most teams are doing ETL rather than ELT. The most recognizable tools and vendors focus on ELT and are commercial (Fivetran, Azure Data Factory), whereas the existing open source tools have mixed reviews (Airbyte, Meltano).

Kedro for data pipelines

We have evidence of users authoring data pipelines with Kedro: https://linen-slack.kedro.org/t/16312377/hi-everyone-here-luca-ds-from-italy-happy-kedro-user-for-3-y#2d666fee-5385-45d2-b2f8-4282ef22c2f9

However, there are also signs that there's room for improvement before Kedro is fully suitable for creating data pipelines:

The fact that Kedro is not mentioned in any industry survey we have found (kedro-org/kedro-devrel#94) is probably a symptom, rather than a cause, of all the above.

There are two sides to this problem:

  • There might be some friction because of technical difficulties, and/or
  • There might be a lack of educational material or guidance on how to use Kedro for these tasks.

Next steps

Part of this intersects with #1778, #1936 cc @merelcht

From a product perspective, it's worth asking whether we want to pursue making Kedro a suitable tool for ETL/ELT pipelines at all. Regarding ELT, Kedro will probably never be as convenient as the Singer ecosystem and derivatives could theoretically be - however, the practical application of Meltano and Airbyte leaves some gaps, and maybe Kedro could be a satisfying tool for some users. Regarding ETL, I think Kedro could be a perfect framework for this, provided that the datasets, the crucial bits that perform the I/O, are up to the task, or at least that we provide clear guidance on the "Kedronic" way of achieving idempotent data ingestion pipelines that can support cold starts, behave well under changes to the source schema, and exhibit any other desirable properties of data pipelines.

From a technical perspective, on the other hand, we need to develop an understanding of how Kedro can be used nowadays for ELT/ETL pipelines following modern data architecture patterns, and evaluate to what extent the pains described above are real or are just a matter of having better docs.

On a related note, discussion in kedro-org/kedro-plugins#471 surfaced that we might have to make some of the Kedro principles more explicit.

Finally, we should execute on messaging/value proposition updates based on the conclusions of our investigation, and probably generate appropriate educational material in the form of documentation, blog posts, and videos.

@datajoely
Contributor

Ibis has to be central to this

@astrojuanlu
Member Author

To give a specific example of how this is posing a problem to users: https://linen-slack.kedro.org/t/16366189/tldr-is-there-a-suggested-pattern-for-converting-vanilla-par#23c36a9d-7bea-40f9-a21f-cc6def7e9ccf

A user tries to convert a Parquet file to a Delta table with a Kedro pipeline, only to be met with `DatasetError: DeltaTableDataset is a read only dataset type`. Supposedly there's a rationale for this in the original PR from 3 years ago (#964), but (1) the conversation is extremely long and I can't pinpoint the exact moment it was decided to remove the `_save()` functionality, and (2) this was never documented anywhere, so users are left in the dark.

Going through the PR again, I found a comment that spells out the problem in detail: #964 (comment)

Update, Upsert/Merge, Delete

These are not directly consistent with the Kedro Principles & DAG, as

  1. The filepath is intrinsic to the DeltaTable
  2. The update, merge and delete methods are methods on the DeltaTable and are immediately materialised (on call or on subsequent execute call on a merge builder)
  3. We still need to inform the Kedro pipeline and DAG that this node has succeeded in a meaningful way

This is the problem we're addressing.

@deepyaman
Member

It's unclear how to write Kedro datasets that are amenable to UPSERT (aka MERGE aka "INSERT or UPDATE") operations #964

Upsert is mostly supported by database backends. You could simulate it on data frames using concat with indices, e.g. in pandas or Spark, but it's not very clean.
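A minimal sketch of that concat-based simulation in pandas (illustrative data; the key point is deciding which side wins on index collisions):

```python
import pandas as pd

existing = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]}).set_index("id")
incoming = pd.DataFrame({"id": [2, 3], "value": ["B", "c"]}).set_index("id")

# Simulated upsert: stack both frames, then keep only the last occurrence of
# each index value, so incoming rows overwrite existing rows on key collision.
upserted = pd.concat([existing, incoming])
upserted = upserted[~upserted.index.duplicated(keep="last")].sort_index()
```

This works, but as noted above it's not very clean: it materialises both sides in memory and relies on index discipline, whereas a database `MERGE` pushes that logic to the engine.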

For database backends, it is on the radar for Ibis support.

Regarding ELT, Kedro will probably never be as convenient as the Singer ecosystem and derivatives could theoretically be - however, the practical application of Meltano and Airbyte leaves some gaps, and maybe Kedro could be a satisfying tool for some users. Regarding ETL, I think Kedro could be a perfect framework for this

Why can Kedro not be at least as good at ELT as it is at ETL? As long as you can interact with databases natively using SQL under the hood, I think it can be a great option for the people who are going to use Python anyway (or prefer to).

Also, I'm not recent enough on this perhaps, but has there been any momentum back towards people wanting to do ETL? If ELT is still where "modern data engineering" is at, doing ETL well isn't that exciting.

Finally, we should execute on messaging/value proposition updates based on the conclusions of our investigation, and probably generate appropriate educational material in the form of documentation, blog posts, and videos.

💯

If Kedro is a tool that supports both data pipelines and ML pipelines, it makes sense that people are educated on how to write each, and don't apply the same approach to these disparate problems.

@astrojuanlu
Member Author

Also, I'm not recent enough on this perhaps, but has there been any momentum back towards people wanting to do ETL? If ELT is still where "modern data engineering" is at, doing ETL well isn't that exciting.

We could discuss whether the Modern Data Stack was a real industry trend or only happened on Data Twitter - but I'll only do so over beer 😄

@astrojuanlu
Member Author

astrojuanlu commented Feb 7, 2024

Why can Kedro not be at least as good at ELT as it is at ETL?

I'm not denying this. What I'm saying is that, in theory¹,

$ pip install meltano
$ meltano add extractor tap-postgres
$ meltano add loader target-snowflake
$ meltano run tap-postgres target-snowflake

is the optimal open-source, CLI-based, EL experience, and I don't think Kedro can match this at the moment or in the near future (very happy to be proven wrong).

Edit: Meltano would be EL, then for example dbt would be T, or as Lauren Balik jokingly says, TTTTTTT

Footnotes

  1. In theory there is no difference between theory and practice, while in practice there is.

@inigohidalgo
Contributor

We've implemented in-house upsert functionality in one of our Arrow datasets, using a method @deepyaman alludes to:

You could simulate it in data frames using concat with indices, e.g. in pandas or spark, but it's not very clean.

The write_mode is just a save_arg for us. This definitely breaks "reproducibility", though, and goes towards idempotency, as @astrojuanlu pointed out.
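A toy sketch of that kind of dataset, with `write_mode` as a plain save argument (hypothetical names and a CSV backend for illustration only; the actual in-house implementation is Arrow-based and surely differs):

```python
import os

import pandas as pd


class UpsertCSVDataset:
    """Toy file-backed dataset where the write mode is just a save argument."""

    def __init__(self, filepath: str, key: str, write_mode: str = "overwrite"):
        self._filepath = filepath
        self._key = key  # column used to detect key collisions
        self._write_mode = write_mode  # assumed values: "overwrite" | "upsert"

    def load(self) -> pd.DataFrame:
        return pd.read_csv(self._filepath)

    def save(self, data: pd.DataFrame) -> None:
        if self._write_mode == "upsert" and os.path.exists(self._filepath):
            # Concat-based upsert: incoming rows win on key collisions.
            merged = pd.concat([self.load(), data])
            data = merged.drop_duplicates(subset=[self._key], keep="last")
        data.to_csv(self._filepath, index=False)
```

Note how the upsert branch has to read the existing data back before writing, which is exactly where reproducibility gives way to idempotency: the result of `save` now depends on prior state.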

@inigohidalgo
Contributor

Why can Kedro not be at least as good at ELT as it is at ETL?

I'm not denying this. What I'm saying is that, in theory

$ pip install meltano
$ meltano add extractor tap-postgres
$ meltano add loader target-snowflake
$ meltano run tap-postgres target-snowflake

is the optimal open-source, CLI-based, ELT experience, and I don't think Kedro can match this at the moment or in the near future

I've never used meltano, but this covers only EL in ELT, right? Kedro+ibis could slot in very nicely into the T, and also provide more-than-good-enough performance for the EL side, though it does seem hard to beat specialized tools like meltano.

@astrojuanlu
Member Author

astrojuanlu commented Feb 28, 2024

Oh, correct. I meant "the optimal [...] EL experience".

@takikadiri

I would love to see Kedro fully support the "T", standing as an alternative to dbt for engine-based transformation but with a Python API. This could bring huge value to data teams that need to juggle two (or more) different technologies/frameworks and throw their work over the wall to other teams, depending on the stage of their data pipelines (DE, DS/ML).

This could significantly enlarge the Kedro user base, as there is much more volume of work in data & analytics engineering than in data science & ML.

As for the "E" and "L" part Kedro could be just good enough.

@datajoely
Contributor

To achieve this, I really believe we should go all in on Ibis as a first-class citizen / preferred approach in Kedro: one syntax for broadly all the backends we care about, enabling the interdisciplinary collaboration @takikadiri mentions.

@astrojuanlu
Member Author

(From phone) To clarify, I don't think T is the problem, but rather E & L. I suspect some changes in philosophy or even API might be required that go beyond adopting Ibis; the task here is to investigate.

@astrojuanlu
Member Author

Although T might also require some improvements in how we approach upserts.

@astrojuanlu
Member Author

At PyData London I spoke to 2 different users about how they were using Kedro for their ETL pipelines, and both have challenges:

  • One of them has a special arrangement and performs the I/O outside of Kedro because of the current limitations, so they don't really leverage the full power of Kedro, just use it as a micro-orchestration engine.
  • Another one had to write their own Delta Table dataset with upserts (#542), plus a layer of state management for checkpointing and keeping track of which parts of the data had already been ingested.

Also, while discussing this in person with @deepyaman, I realised that both EL and T data pipelines need upserts anyway, so probably my comments above were somewhat misguided.

@astrojuanlu
Member Author

astrojuanlu commented Jun 15, 2024

Inspiration: "Incremental loads should be replayable by design" (source)

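That replayable-by-design idea can be sketched as a delete-then-insert load per window (plain sqlite3 for illustration; the `events` table and `day` column are made up, not from the linked article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day TEXT, amount INTEGER)")


def load_window(conn: sqlite3.Connection, day: str, rows: list) -> None:
    """Replayable incremental load: wipe the target window, then insert.

    Running this twice for the same day yields the same table state
    (idempotent), unlike a plain append, which would duplicate rows.
    """
    with conn:  # one transaction per window
        conn.execute("DELETE FROM events WHERE day = ?", (day,))
        conn.executemany(
            "INSERT INTO events VALUES (?, ?)", [(day, amount) for amount in rows]
        )


load_window(conn, "2024-06-15", [1, 2, 3])
load_window(conn, "2024-06-15", [1, 2, 3])  # replay: same state, no duplicates
```

Wrapping the delete and the insert in one transaction is what makes the replay safe: a failed run leaves the previous window intact instead of a half-loaded one.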

@datajoely
Contributor

datajoely commented Jun 17, 2024

I wonder if we could make start_date and end_date first class CLI arguments and part of the session constructor?

@astrojuanlu
Member Author

Sometimes it would be a date, sometimes it would be an id... I don't think we can anticipate all possible pagination options. But regardless, I think this is more or less achievable already thanks to runtime parameters, right? The difficult thing is the upsert logic, not from a technical perspective but from a product philosophy perspective: shifting from a focus on reproducibility to a focus on idempotency (data pipelines and machine learning pipelines might require different approaches).
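For the date case, the runtime-parameter idea could be as simple as `kedro run --params start_date=2024-01-01,end_date=2024-01-04` feeding a helper that derives the windows to (re)load (a generic sketch with an assumed helper name, not an existing Kedro API):

```python
from datetime import date, timedelta


def replay_windows(start_date: date, end_date: date, days: int = 1):
    """Split [start_date, end_date) into fixed-size, replayable windows.

    Each window can then be ingested idempotently and replayed in isolation;
    an id-paginated source would need a different helper.
    """
    windows = []
    current = start_date
    while current < end_date:
        window_end = min(current + timedelta(days=days), end_date)
        windows.append((current, window_end))
        current = window_end
    return windows


windows = replay_windows(date(2024, 1, 1), date(2024, 1, 4))
```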

@datajoely
Contributor

I think you can generalise upserts into the need for a conditional node...

@astrojuanlu
Member Author

BTW about Ibis and upserts ibis-project/ibis#5391 cc @deepyaman

@astrojuanlu
Member Author

Kept thinking and thinking about the reproducibility idea. From #3979 (comment) (by @datajoely) and some conversations I had during EuroPython (and the fact that I've been mulling over this almost since I joined the project):

We should acknowledge that the reproducibility principle has never been explicit. In fact, it's mentioned zero times in https://github.com/kedro-org/kedro/wiki/Kedro-Principles (established roughly 3 years ago, #824).

It was said in kedro-org/kedro-plugins#471 (comment) that "pure functions + DAG + static catalog = reproducible", but as I already hinted in that thread, that only holds if you assume the catalog points to files tracked by version control alongside the code itself. The moment you refer to a remote location like s3://bucket/my-file.csv, that principle breaks, because Kedro doesn't do any sort of hashing of the inputs and, by definition, whatever comes from that URL is outside Kedro's jurisdiction.

LLMs aren't hugely different from any other REST API in that regard. Even under my "1. Frozen inputs" scenario there, models and APIs aren't versioned, there's randomness without the possibility of setting a seed, etc.

Hence remote data locations, database connection strings, REST APIs, LLMs... all of them can break reproducibility.

We should of course continue trying to keep functions as pure as possible (always a good thing to do) and push the I/O to the datasets, but users are demanding a better answer for data pipelines and dynamic catalogs, so I think it's time to break free from the illusion that Kedro, in and of itself, can guarantee the reproducibility of pipelines.
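To make the "Kedro doesn't hash inputs" point concrete: detecting that a remote input changed between runs would require something like content fingerprinting at load time (a generic sketch of the idea, not a proposed Kedro API):

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Content hash of an input. If it changes between runs, the 'same'
    catalog entry no longer refers to the same data, so the run is not
    reproducible even though code and catalog are unchanged."""
    return hashlib.sha256(data).hexdigest()


run_1 = fingerprint(b"id,value\n1,a\n")
run_2 = fingerprint(b"id,value\n1,a\n2,b\n")  # upstream silently added a row
reproducible = run_1 == run_2
```

Even this only detects the break; it cannot prevent it, which is part of why remote sources put reproducibility out of Kedro's hands.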

@datajoely
Contributor

Hence remote data locations, database connection strings, REST APIs, LLMs... all of them can break reproducibility.

This is true of any upstream data outside of your control; a frequently changing SQL table/view is another example. In my mind, all of these roads lead back to some sort of conditional construct: you need it to do UPSERTs and a bunch of other things.

@astrojuanlu
Member Author

FYI, we're considering giving dlt a try to see how it works alongside Kedro.

@astrojuanlu astrojuanlu changed the title [spike] Investigate suitability of Kedro for ETL/ELT data pipelines [spike] Investigate suitability of Kedro for EL pipelines and incremental data loading Jul 30, 2024
@astrojuanlu
Member Author

Also renamed this issue to hopefully make it less confusing.
