replication from compound using python #103
Comments
I am not sure about the functionality/quality of https://docs.irods.org/4.2.11/doxygen/reDataObjOpr_8cpp.html#a957a06d93d1100dceb5a497bb9d1253f. It's possible you've found an issue similar to #54 - but this sounds a bit different.
I've just tried it. The linked bug arises only when more threads are used. However, this one indeed has a different cause, as it occurs even with
Upon further reading/consultation... we think this is definitely the same as #54. #54 was reported against 4.2.6 and 4.2.7, before we introduced logical locking in 4.2.9, which makes all data movement create a placeholder in the catalog first, like only parallel transfer did in 4.2.8 and before. This matches the scenario you're seeing above. Pretty sure this is the reason.
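The placeholder-first behavior of logical locking can be illustrated with a small, purely hypothetical sketch (this is not iRODS code; the `Catalog` class and its methods are invented for illustration): a replica row is registered in an intermediate state before any data moves, so a second writer to the same object must wait until the first finalizes.

```python
import threading
import time

# Hypothetical in-memory "catalog" illustrating logical locking:
# a replica row is created in an intermediate (locked) state BEFORE
# any data movement, so concurrent writers see and respect it.
class Catalog:
    def __init__(self):
        self._cond = threading.Condition()
        self.replicas = {}  # path -> "intermediate" or "good"

    def lock_replica(self, path):
        with self._cond:
            # Wait while another writer holds an intermediate replica.
            while self.replicas.get(path) == "intermediate":
                self._cond.wait()
            self.replicas[path] = "intermediate"  # placeholder first

    def finalize_replica(self, path):
        with self._cond:
            self.replicas[path] = "good"
            self._cond.notify_all()

def write_object(catalog, path, events):
    catalog.lock_replica(path)
    events.append("start")
    time.sleep(0.05)  # simulated data movement
    events.append("end")
    catalog.finalize_replica(path)

catalog = Catalog()
events = []
t1 = threading.Thread(target=write_object, args=(catalog, "/zone/file.zip", events))
t2 = threading.Thread(target=write_object, args=(catalog, "/zone/file.zip", events))
t1.start(); t2.start()
t1.join(); t2.join()
print(events)  # writes are serialized: ['start', 'end', 'start', 'end']
```

The point of the sketch is only the ordering guarantee: because the placeholder exists in the catalog before data movement starts, the two writes can never interleave.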
This also explains why it works fine without going through the Python rule engine plugin. We'll test whether that lock is still required/essential.
Thank you for the investigation. In that case this ticket is a duplicate.
@trel @ciklysta This may be a duplicate of irods/irods#6622 instead of #54: the problem occurs when
Oh, interesting... Any chance that irods/irods#6622 is actually, itself, the same as #54? In other words, should we now re-test #54 to see if it still deadlocks with the new irods/irods#6622 codefix in place?
Both issues deadlock on the same lock, and both require a Python rule on the call stack, but they are distinct. #54 deadlocks without using
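The "deadlock on the same lock with a rule on the call stack" shape both issues share is the classic re-entrant acquisition problem: code holding a non-recursive mutex calls back into a layer that tries to take that same mutex again. A minimal generic Python sketch (not iRODS code; the function names are invented) demonstrates the shape, using an acquire timeout so the example reports the deadlock instead of hanging:

```python
import threading

def outer(callback, lock):
    """Acquire the lock, then invoke a callback (think: a rule engine
    re-entering the server) that tries to take the same lock again."""
    with lock:
        return callback(lock)

def reentrant_step(lock):
    # A timeout stands in for "blocked forever" so the sketch terminates.
    acquired = lock.acquire(timeout=0.1)
    if acquired:
        lock.release()
    return acquired

print(outer(reentrant_step, threading.Lock()))   # False: self-deadlock on a plain Lock
print(outer(reentrant_step, threading.RLock()))  # True: a re-entrant lock permits it
```

Whether the real fix is a re-entrant lock, or simply not holding the lock across the callback (as a codefix like irods/irods#6622 might do), depends on what the lock actually protects; this sketch only shows why a Python rule on the call stack is a necessary ingredient.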
Got it - right.
Bug Report
iRODS version 4.2.11, CentOS 7
I have the following resource hierarchy (OldResource being an old resource that I want to migrate data from):
Data is only in the archive, the cache is empty.
I've written a custom rule that manages replication, since it is a lengthy operation that I need to manage myself.
migrateOneObj.r content:
core.re content:
When I run `irule -F migrateOneObj.r` under the user that owns `file.zip`, it works correctly. However, if I move the function `testReplicationFromCompound` to Python (`core.py`) and run `irule -F migrateOneObj.r` (under the user that owns `file.zip`), the following happens:

- `irodsServer` processes that started when `irule` was started remain.
- `strace` shows a `read` from a pipe and `futex(0x7f85e89d8ec8, FUTEX_WAIT_PRIVATE, 2, NULL`.
- `migration-interface.sh` is never run (otherwise it would create an entry in a custom log file).