This is a summary of the findings from investigating #3367. The new ticket exists because the initial description of the issue doesn't match real-world behaviour; here is a summary of the real-world impact.
Issue
Protobuf serialization is non-deterministic. There are many sources of non-determinism, with the biggest being the order of entries in maps: wire-format ordering and map iteration ordering of map values are undefined, so you cannot rely on map items being in a particular order.
This is a problem because Bisq currently uses the protobuf-serialized bytes as the basis for hashing to uniquely identify payloads.
Non-deterministic serialization means the produced hashes are not stable and could change over time. Since hashes are used for referencing payloads from other messages (e.g. in RefreshOfferMessage or DaoStateHash), broken hashes could have a detrimental impact on the entire Bisq project.
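To make the failure mode concrete, here is a minimal sketch (plain Java, not Bisq code): it hashes map entries in iteration order, as a simplified stand-in for hashing protobuf-serialized bytes that contain a map field. Two maps with identical content but different insertion order produce different hashes.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.LinkedHashMap;
import java.util.Map;

public class UnstableHash {
    // Hash a map by feeding its entries to SHA-256 in iteration order --
    // a simplified stand-in for hashing serialized bytes of a map field.
    static String hash(Map<String, String> map) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            for (Map.Entry<String, String> e : map.entrySet()) {
                sha.update(e.getKey().getBytes(StandardCharsets.UTF_8));
                sha.update(e.getValue().getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : sha.digest()) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        // Same logical content, inserted in different orders.
        Map<String, String> a = new LinkedHashMap<>();
        a.put("x", "1"); a.put("y", "2");
        Map<String, String> b = new LinkedHashMap<>();
        b.put("y", "2"); b.put("x", "1");
        System.out.println(a.equals(b));             // true: equal content
        System.out.println(hash(a).equals(hash(b))); // false: different byte order
    }
}
```

The maps compare equal as values, yet the order-dependent hashes disagree — exactly the situation that breaks payload references built on serialized bytes.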
Real World
Monitoring mainnet data has shown only a very small number of issues that could be the result of unstable hashes, and even those could be explained by other behaviour.
The small number of issues implies that, despite the non-determinism, hashes in Bisq are in fact largely stable.
This is thanks to an implementation detail of the Java protobuf library we are using: internally, the library instantiates LinkedHashMaps when deserialising map fields. LinkedHashMap preserves the insertion order of its entries, which means the byte order (and the hashes) will be stable after re-serializing.
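The insertion-order guarantee is easy to demonstrate in isolation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        // LinkedHashMap iterates entries in insertion order, so re-serializing
        // a deserialized map reproduces the original byte order. This is the
        // incidental behaviour the Java protobuf library currently relies on.
        Map<Integer, String> map = new LinkedHashMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");
        System.out.println(map.keySet()); // prints [3, 1, 2]: insertion order, not key order
    }
}
```

Note this is a property of the concrete class, not of the Map interface: a HashMap holding the same entries makes no ordering promise at all.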
Risks
While we seem to be clear for the time being, there is no guarantee this will remain so. Protobuf implementations in other languages (including the reference implementation in Go) don't use order-preserving maps internally and would not be able to reliably reproduce the hashes across nodes.
If the implementation details of the Java protobuf library change, or other sources of non-determinism come into play, Bisq may be completely broken.
Possible workaround
Moving to a deterministic form of hashing is the only long-term mitigation. However, this initially appears to be quite difficult to do in a backwards-compatible way (more investigation required).
One possible workaround, identified in this article, is to change the map fields in the proto schema definition to repeated fields.
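As a sketch (the message and field names here are illustrative, not Bisq's actual schema), the change relies on the documented wire format of maps: a `map<K, V>` field is encoded exactly like a `repeated` message field whose entry message has `key = 1` and `value = 2`.

```proto
// Before: map field -- iteration order of entries is undefined.
message PayloadStore {
  map<string, bytes> items = 1;
}

// After: a wire-compatible repeated field. Entries are serialized in
// the order they appear in the list, so round-trips are deterministic.
message PayloadStore {
  message Entry {
    string key = 1;
    bytes value = 2;
  }
  repeated Entry items = 1;
}
```

(The two definitions are shown side by side for comparison; only one would exist in the real schema file.)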
The two schema forms are bit-compatible on serialization, and the repeated form is guaranteed to preserve order, which would remove the most critical source of non-determinism.
Editing the schemas to implement this workaround would involve changing all references from maps to arrays of two-element tuples. Whether or not this is worth the effort as a short-term mitigation also requires more investigation.