Support callback dependencies #2378
This is probably one particular instantiation of #1577.
Hi @mdekstrand! We currently have so-called "callback stages" that are run every time. With those, you could do something like the sketch below.
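A minimal sketch of such a callback stage, with invented script and file names (in the single-stage `.dvc` format of the time, a stage with no `deps` section is treated as always changed):

```yaml
# hypothetical .dvc stage file: there is no `deps` section, so dvc
# considers the stage always changed and re-runs it on every `dvc repro`
cmd: ./import_to_db.sh
outs:
- path: import.marker
```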
This way, dvc would always run that stage.
Almost, but not quite. The problem is forcing ordering between generating the marker file and whatever step came before to populate the database with the content the marker describes. Without this, running a repro could regenerate the marker without actually re-running the import.
@mdekstrand Thank you for the explanation! Indeed, our current callback stage is not very useful in that scenario. How about an explicit flag, something like a per-stage "always reproduce" option? As to your idea with command dependencies: it seems like we don't really need the md5 of the output; we could simply use the return code of that command as an indicator. E.g. <0 - error, 0 - didn't change, >0 - changed.
I don't think I like the return code option, though, because then the status script must know how to compare the current state against the previous state. If DVC checksums the script's output, then all the script needs to do is emit a stable summary or description of the current state, and DVC's existing logic can take care of determining whether that represents a change or not.
@mdekstrand great point! In that case, what if we make the command return the checksum itself through stdout, instead of us computing the md5 of its output? That has the potential of being used not only for dependencies but also for outputs, as a cheap alternative-checksum plugin. There are a lot of things to consider with it though.
Just a temporary workaround that comes to my mind. To make an intermediate stage effectively a "callback" stage, we can make it depend (along with other things, like the DB upload pipeline) on an artificial callback stage that, for example, just dumps the current timestamp. We can even reuse this dummy callback stage everywhere to make any number of stages always reproducible. I love the idea of having cmd dependencies. It's simple to implement and solves a very good use case in a neat way.
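A sketch of that workaround, with invented names; the helper stage has no deps, so it always re-runs, and its output changes on every run because it is a timestamp:

```yaml
# always.dvc -- hypothetical dummy callback stage: no `deps`,
# so dvc re-runs it on every repro, and timestamp.txt changes each time
cmd: date > timestamp.txt
outs:
- path: timestamp.txt
```

Any stage that additionally lists `timestamp.txt` among its deps is then re-checked on every repro.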
Seems like a good feature request. I don't see too many problems with the implementation at first sight. Probably some graceful exception handling will be required (when the status check is performed on non-existing data). Also, I think some kind of communication with the user might be a good idea.
I am in favor of this. It would handle `cmd: psql -U user -c 'select id from some_table order by id desc limit 1'` and many other cases alike. We need to come up with a command line interface though. How should this look in the stage file?
How about a bare `cmd` entry in the deps list, something like the sketch below?
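Presumably something along these lines (a hypothetical rendering; the point is a `cmd` entry with no path attached):

```yaml
deps:
- path: data/input.csv
# hypothetical syntax: the cmd entry stands on its own, with no path
- cmd: psql -U user -c 'select id from some_table order by id desc limit 1'
```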
@Suor, the only problem I'm seeing with this one is when using multiple dependencies: how would you know which `cmd` corresponds to which dependency?
Cmd is a separate dependency; it doesn't have a path and doesn't correspond to anything.
@Suor I'm actually not sure about that. We need the path to build the DAG properly. So what we need is something like the sketch below.
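A hypothetical rendering of that shape, pairing the command with the path of the dependency it is checking:

```yaml
deps:
- path: path/to/dep           # the data this stage depends on
  cmd: ./check_dep_status.sh  # hypothetical: run to judge whether it changed
```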
But there is no path; the DAG should not include the cmd dep, or this should be special-cased somehow.
Shouldn't command dependency scripts be handled by scm?
@Suor My thinking was that the path is still needed to place the dependency in the DAG. @pared They should. I didn't mean that `path/to/dep` should be a path to the script that we are running, but rather to the dep that it is analyzing.
@efiop path has no meaning here, so it should be neither in the stage file nor in the command line UI. I don't understand what "analyzing a dep" means here.
@Suor The command mentioned by the OP is analyzing the db, so this stage should depend on it; the command is only helping us to judge whether that dependency has changed or not.
@efiop tbh, I also don't understand where the path comes from. Can you give an example of when we would need it, and what file that "command dependency" would depend on?
@shcheklein In this use case, you could, for example, have a stage run the status query and write its result to a file, and make that file the dependency of the downstream stage. I guess we can say that we are already doing that with our current file dependencies.
@pared thanks for the example! (Btw, I would say it depends on the database itself.) I still don't quite understand what exactly you are suggesting though. Could you please describe the logic, CLI, and high-level implementation you have in mind? Like: we run the command, and then what happens?
@pared if you need a dependency of a dependency, then you should create a separate stage for that command and set that script as a dependency. In general, path has no meaning here; this could easily be a one-liner.
@Suor you don't even need a separate stage. You can make a second regular file dependency in that same stage, as far as I understand. But maybe I'm still missing something.
This is an extension of our callback stages (no deps, so always considered as changed) to stages with dependencies, so that these stages can be used properly in the middle of the DAG. See the attached issue for more info. Related to iterative#2378
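For reference, a sketch of a stage opting into that behavior, shown in the later dvc.yaml syntax with invented names:

```yaml
stages:
  import:
    cmd: ./import_to_db.sh raw-data.csv
    deps:
      - raw-data.csv
    outs:
      - import.marker
    always_changed: true  # re-run even when raw-data.csv is unchanged
```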
@mdekstrand please give it a try and let us know if it works for your scenario.
@efiop Will do as soon as I can! It's been a busy 2-3 weeks. |
@efiop I have started making our book data tools use DVC with the new always-changed support. However, there is a semantic hole in that approach: if the database is out of date or changed somehow, DVC will not detect it.

The only way I see to fix this is to introduce a mechanism where an output is computed by a user-defined process that gets invoked and re-run every time DVC needs to check the output status. This computation should, of course, be very cheap. This is exactly the scenario allowed with e.g. S3 remote outputs/dependencies: if my code were storing data on S3 instead of PostgreSQL, I could make the import job have an S3 output and the process job have an S3 input, and everything would be fine.

As discussed upthread, there are serious problems with trying to wire together callback dependencies and outputs. An alternate approach would be to make external dependencies pluggable: if I can register a handler for a custom URL scheme, I can wire database state in as external dependencies and outputs myself.
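A sketch of what that could look like if custom URL schemes were registrable, by analogy with the existing s3:// support; the pgstat:// scheme and all names here are hypothetical:

```yaml
# hypothetical: a processing stage whose dependency is the state of a
# database table, addressed through a custom pgstat:// URL scheme
cmd: python run_processing.py
deps:
- path: pgstat://bookdata/loc_mds/book_ids
outs:
- path: processed.parquet
```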
@mdekstrand Sorry for the delay. Why can't you just combine your import and the downstream processing into a single stage?
There are a couple of reasons why I'm not fond of that option:
The import itself is actually in the middle of 3-4 steps. The general flow is roughly: import the raw data, integrate it with the other data sets, then index it.
There is some back-and-forth; the indexing of some data sets depends on the integration of some others. This repository is importing 6 different data sets (in our current active set; we have plans to add at least 2 more) in different formats (CSV, JSON, MARC XML, soon RDF). DVC is giving me 95% of what I need to wire it all together, avoid unnecessary reruns, and debug the dependency graph. I'm currently making it work by ensuring that I have values that will propagate through the state files.
To reply to @Suor in #2531, for a piece of the discussion that feels more relevant here:
This is reasonable. However, DVC has already extended some of this responsibility with external dependencies / outputs; it just limits them to S3, Google Cloud, Azure, and HTTPS. That was actually my inspiration for even thinking about this: I saw that feature as doing exactly what I needed, except limited to a closed set of sources.

So in my mind, that is what this issue is about: allowing users to extend the set of external dependency & output sources. I thought callbacks might be an easier way to do that than plugins, but am no longer so convinced.

I could implement a small web server that exposes status information over HTTP, and use URLs pointing to it as external dependencies and outputs. I thought about that. It does, however, increase the infrastructure requirements for users of the code in which I would use this feature (and that code is intended for external consumption).
@mdekstrand Would it be possible to check db state without actually importing it? I actually had that scenario in mind when we were talking about this issue and assumed that you have a way of knowing the db version without a long-running import. You've mentioned an http server, so I assume it would be able to tell the db version without importing, right? Why can't you have a local script that does that?
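A hypothetical example of such a local status script: it runs a cheap summary query and prints the result, so the caller (e.g. dvc) only needs to checksum stdout; the table, column, and connection details are invented:

```python
#!/usr/bin/env python3
"""Print a stable, cheap summary of the current database state.

If the summary changes, the data changed. Names are illustrative.
"""
import subprocess
import sys

QUERY = "SELECT count(*), max(updated_at) FROM book_records;"

result = subprocess.run(
    ["psql", "-U", "user", "-d", "bookdata", "-t", "-A", "-c", QUERY],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    # e.g. the table does not exist yet: emit a sentinel instead of
    # failing, so the state reads as "output missing / changed"
    print("MISSING")
    sys.exit(0)
print(result.stdout.strip())
```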
S3 and friends are files in all the senses we need. We can checksum them, commit to cache and checkout, push and pull, which is the cycle of reproducibility dvc provides. This is not so with some external db.
@efiop wrote: "Would it be possible to check db state without actually importing it?"
Yes. If I implemented the HTTP server, it would either return a 404 when asked for the status of an import operation that has not run, or it would return an empty result that would not match the result of any successful import. That would be enough to let DVC know that the output does not exist and should be recreated.

My ideal end state actually depends on this: if external outputs are pluggable, DVC needs to be able to attempt to retrieve a stage's output/status blob when the stage has not yet run, to detect that it is missing or contains unexpected content. I am currently checking initial state, and then arranging for that to propagate through the state files in the rest of my operations.
To @Suor's point about S3: yes. However, as I understand it, the individual scripts are responsible for pushing blobs to S3, and DVC simply checks those blobs. I don't see a substantial, meaningful difference between "python script pushes $BLOB to S3" and "python script pushes a bunch of data to PostgreSQL that results in stable $BLOB". A few wrap-up comments:
I have hacked up an ugly version of this feature as a monkey-patch to DVC in our book data tools: https://github.com/BoiseState/bookdata-tools

What I do is define a custom remote, with corresponding dependency and output classes, that keys off a custom pgstat URL scheme.

I would rather not have to monkey-patch, although the patch is not difficult. Pluggable remotes would help a lot. There is the problem that the patch has to be applied before DVC runs, which is why I currently invoke DVC through a wrapper.
@mdekstrand Whoa, that is neat! Do you think pgstat could be useful outside of your project? Would it make sense to include it in core dvc?
The details of pgstat are fairly specific to our project. What I think would be trivially generalizable to include in DVC is the idea of custom remotes & external deps/outs for this kind of thing. If I could put in

```ini
[custom-external-remotes]
pgstat = bookdata.pgstat
```

and then write a bookdata.pgstat module providing the remote, dependency, and output classes, that would cover this cleanly.
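A sketch of what such a bookdata.pgstat module might expose. DVC has no such plugin API, so every class and method name below is an assumption about what the interface could look like:

```python
# bookdata/pgstat.py -- hypothetical plugin; nothing here is a real dvc
# interface, it only illustrates the shape such a plugin could take
import subprocess


class PgStatDependency:
    """An external dependency addressed as pgstat://<schema>/<table>,
    whose "checksum" is derived from a cheap status query."""

    scheme = "pgstat"

    def __init__(self, url):
        self.url = url

    def current_state(self):
        # map pgstat://schema/table to a qualified table name and run a
        # summary query; the returned blob is what would get checksummed
        _, _, path = self.url.partition("://")
        table = path.replace("/", ".")
        result = subprocess.run(
            ["psql", "-t", "-A", "-c", f"SELECT count(*) FROM {table};"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    def changed(self, saved_state):
        return self.current_state() != saved_state
```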
@mdekstrand @efiop I wonder if this discussion here is relevant? (Please read to the very end; it started from Python function dependencies and is now going around custom functional filters.)
@shcheklein It is relevant. It is also relevant to #1577. @mdekstrand As to pgstat: perhaps a general plugin interface would serve better than including it in the core.
@efiop A general plugin interface would be quite helpful, and I'm fine with that. Doing this will require well-defined extension points with documented, stable, and hopefully minimal interfaces. It will probably also require the DVC config schema to be extensible, so plugins can be configured. A first version that loads and runs plugins that then monkey-patch DVC would allow things to get started (and I could remove my DVC wrapper). There is still the security issue, though (DVC shouldn't run plugins until a repository is trusted).
Just FYI: DVC 2 broke my monkey-patch, and it wasn't clear how to add it back, so I have reverted to the two-step mechanism with explicit status files. I would still very much like to see support for custom URL schemes for external outputs and dependencies. It would solve this problem rather cleanly.
Just to add another use case. I started with several stages depending on the entire source directory. This is cumbersome because when unrelated files change, I still need to rerun the stage (or explicitly commit what I know hasn't changed to avoid this). I hacked together something based on pydeps that can analyze the dependencies of the scripts I use for my stages and update my stage files accordingly.

However, this now requires that I run this script occasionally to make sure my dependencies are up-to-date. With a callback dependency, I could just run this script to check only the necessary dependencies. Since the dependency graph is somewhat complicated to compute, I could even have another stage which depends on the whole source directory and calculates and caches the dependency graph, then just check the cached dependency graph in other stages with a callback dependency; see the sketch below.
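A sketch of that caching stage, with hypothetical names; stages that could declare a callback dependency would then consult dep-graph.json instead of watching all of src/:

```yaml
# depgraph.dvc -- hypothetical stage: re-computes the module import
# graph whenever anything under src/ changes, and caches the result
cmd: python compute_dep_graph.py src/ > dep-graph.json
deps:
- path: src
outs:
- path: dep-graph.json
```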
Supporting 'callback' dependencies and outputs (for lack of a better term) would enable a number of interesting possibilities for using DVC to control processes that store data outside of DVC's file access, without requiring an explosion of remote access possibilities.
This would generalize what is possible with HTTP outputs and dependencies.
An example of what this could look like in a DVC file:
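(A hypothetical sketch; the `cmd:` dependency key and the status script are illustrative, not a settled syntax:)

```yaml
cmd: python process_books.py
deps:
- cmd: ./pg-status.sh      # run this and checksum its stdout
- path: process_books.py
outs:
- path: processed.marker
```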
Instead of consulting a file, as with `path:`, DVC would run the specified command (which should be relatively quick) and compute the MD5 hash of its output. That command could do whatever it needs to in order to get the data status.

My specific use case is using DVC to control a large data import process that processes raw data files, loads them into PostgreSQL, and performs some additional computations involving intermediate files. I would implement a script that extracts data status from the PostgreSQL database so that DVC can check whether a step is up-to-date with the actual data currently in the database.

I could implement this with HTTP dependencies and something like PostgREST, but that would introduce additional infrastructure requirements for people using the code (namely, a running PostgREST server).