When uploading a new repo state, datalad-annex essentially does this:
annex drop --force -f origin --all
annex copy --fast --to origin --all
If this happens against a dataverse dataset that is already published, the drop will effectively not remove anything, given the current implementation and the way removal is conceptualized on the dataverse side.
There needs to be an explicit test for this scenario. I suspect that the current implementation does not behave well in this case.
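A minimal sketch of what such a test could look like, assuming a pytest-style fixture `published_dataverse_repo` (hypothetical) that yields the path of a local annex repository whose `origin` special remote is backed by an already published dataverse dataset:

```python
# Hypothetical test sketch; `published_dataverse_repo` is an assumed
# fixture, not an existing one in datalad-dataverse.
import subprocess

def run(*cmd, cwd):
    # Run a git-annex command in the repository, failing loudly on error.
    return subprocess.run(cmd, cwd=cwd, check=True,
                          capture_output=True, text=True)

def test_update_published_dataset(published_dataverse_repo):
    repo = published_dataverse_repo
    # Replay what datalad-annex does when uploading a new repo state.
    run("git", "annex", "drop", "--force", "-f", "origin", "--all", cwd=repo)
    run("git", "annex", "copy", "--fast", "--to", "origin", "--all", cwd=repo)
    # The published version cannot lose files on the dataverse side, so
    # the essential property is that all keys are (still) available from
    # the remote after the round trip; a loose check via `whereis`:
    out = run("git", "annex", "whereis", "--all", cwd=repo)
    assert "origin" in out.stdout
```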
mih added a commit to mih/datalad-dataverse that referenced this issue on Mar 6, 2023:
This fix is needed after ripping out the special casing for XDLRA
keys. It intentionally only addresses the non-export case, to make
clear what is important for which mode.
This fix was developed by @christian-monch as part of
#1
It fixes the situation where a git-clone from dataverse cannot know
the fileId of a repository export (created by the datalad-annex git
remote helper): by definition, such an export needs to be packed up and
uploaded _before_ a fileId can be known, so the fileId cannot be
registered in the repository that is already finalized and uploaded.
In order to break this chicken-and-egg problem, `_remove_file()` now
uniformly falls back on determining the fileId via path matching.
But see datalad#189 and datalad#188 for related aspects of this general issue.
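A minimal sketch of that fallback, not the actual datalad-dataverse code; the mapping arguments and the deletion call are hypothetical stand-ins for the real bookkeeping and API machinery:

```python
# Hypothetical sketch of the _remove_file() fallback described above.
class RemoveFileSketch:
    def __init__(self, fileid_db, remote_listing):
        self.db = fileid_db            # recorded path -> fileId mapping
        self.listing = remote_listing  # path -> fileId as reported by dataverse

    def _delete_fileid(self, fileid):
        print(f"would delete fileId {fileid}")  # stand-in for the API call

    def _remove_file(self, path):
        # A repository export is uploaded before its fileId can be
        # recorded, so the db lookup may come up empty; fall back to
        # matching the path against the dataverse file listing.
        fileid = self.db.get(path) or self.listing.get(path)
        if fileid is not None:
            self._delete_fileid(fileid)
```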
Note that, regarding the datalad-annex use case, this was one of the reasons not to fail on REMOVE: having annex-copy trigger a replacement would still work, since what matters is the new draft.
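For illustration, a hedged sketch of how a REMOVE handler can tolerate this case; `FileNotRemovableError` and `_delete_on_dataverse` are hypothetical stand-ins, not the actual remote implementation:

```python
class FileNotRemovableError(Exception):
    """Hypothetical: dataverse refuses to delete from a published version."""

class RemoveHandlerSketch:
    def _delete_on_dataverse(self, key):
        raise FileNotRemovableError  # stand-in for the real API call

    def remove(self, key):
        try:
            self._delete_on_dataverse(key)
        except FileNotRemovableError:
            # Do not fail REMOVE: the published version keeps the file,
            # but the subsequent `annex copy` replaces the content in the
            # new draft, and the draft is what matters for datalad-annex.
            pass
```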