(All links and line refs are to the 1.3.1 tag.)
There are two areas in the riak_repl code that could result in the fscoordinator not knowing about a node with an rt_dirty count > 0.
The first is in the rt_dirty function of riak_repl_stats [0]. Once a node is marked as dirty, it is possible that communicating the event to the fscoordinator fails. Currently, this failure is logged at the debug level:
lager:debug("Failed to notify coordinator of rt_dirty status")
However, logging an event like this at the warning level would make it visible at lager's info level, which would make it easier to diagnose the issue and instruct clients to manually clear the rt_dirty flag using riak_repl_stats:clear_rt_dirty().
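As a rough sketch of that change, the notification could be wrapped so a failure is reported at warning rather than debug. The surrounding function body and the coordinator call below are illustrative assumptions, not the actual riak_repl_stats code:
rt_dirty() ->
    %% ...existing rt_dirty counter update...
    %% Hypothetical coordinator notification; the real call in riak_repl_stats
    %% may differ in module, name and arity.
    try notify_fscoordinator_rt_dirty(node())
    catch
        _Class:Reason ->
            %% warning rather than debug, so the failure is visible at lager's
            %% info level without enabling debug logging
            lager:warning("Failed to notify coordinator of rt_dirty status: ~p",
                          [Reason])
    end.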
The second is in the notify_rt_dirty_nodes function of riak_repl2_fscoordinator [1]. After a successful fullsync, the results of the multicall on line 723:
rpc:multicall(NodesToNotify, riak_repl_stats, clear_rt_dirty, []),
are not checked. Immediately after the multicall, the list of dirty nodes is cleared:
State#state{dirty_nodes=ordsets:new()};
If there is an error during the multicall, or a node is unreachable for some reason, the rt_dirty flag is not cleared on the remote node, yet the coordinator no longer considers it dirty.
Tracking these errors and keeping the affected nodes in the list of dirty nodes would allow another attempt to clear their rt_dirty counters on the next fullsync.
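One possible shape for that fix keeps the unreachable nodes in the dirty set so the clear is retried after the next fullsync. This is a sketch only; the function head and surrounding clause do not match the actual 1.3.1 notify_rt_dirty_nodes code:
notify_rt_dirty_nodes(State = #state{dirty_nodes = DirtyNodes}) ->
    NodesToNotify = ordsets:to_list(DirtyNodes),
    {_Results, BadNodes} =
        rpc:multicall(NodesToNotify, riak_repl_stats, clear_rt_dirty, []),
    case BadNodes of
        [] -> ok;
        _  -> lager:warning("Could not clear rt_dirty on nodes: ~p", [BadNodes])
    end,
    %% Only forget nodes that were reachable; unreachable nodes stay dirty
    %% and will be retried after the next fullsync.
    State#state{dirty_nodes = ordsets:from_list(BadNodes)}.
Note that rpc:multicall/4 only separates unreachable nodes into BadNodes; errors raised inside clear_rt_dirty itself come back in the results list without node identities, so per-node rpc:call invocations might be needed to track those as well.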
Both of these scenarios create situations where nodes have an rt_dirty count that remains constant through fullsyncs (until the rt_dirty count is incremented again, at which time rt_dirty() will attempt to update the fscoordinator).
While reviewing the fullsync coordinator code as a group, we noticed this problem with the node dirty notifications. It does seem we can do better than our current approach in the case of temporary partitions, and it should definitely be at least a warning in the logs.