feat: mnauth always use basic scheme #6467
Conversation
Looks good overall, but it is incomplete: the mnauth RPC still uses the V19 deployment in rpc/misc.cpp to decide whether the key is a legacy or basic public key.
This PR is a subset of my local branch that fires up v19 on RegTest from block 1, but since this PR is almost ready and my local branch with the v19 changes is still blocked by #6325, let's get this one merged first.
Though, consider including df1d707
@@ -106,9 +103,7 @@ PeerMsgRet CMNAuth::ProcessMessage(CNode& peer, ServiceFlags node_services, CCon
    if (Params().NetworkIDString() != CBaseChainParams::MAIN && gArgs.IsArgSet("-pushversion")) {
        nOurNodeVersion = gArgs.GetArg("-pushversion", PROTOCOL_VERSION);
    }
    const CBlockIndex* tip = active_chain.Tip();
    const bool is_basic_scheme_active{DeploymentActiveAfter(tip, Params().GetConsensus(), Consensus::DEPLOYMENT_V19)};
    ConstCBLSPublicKeyVersionWrapper pubKey(dmn->pdmnState->pubKeyOperator.Get(), !is_basic_scheme_active);
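The lines dropped by this hunk are the tip lookup and the v19 deployment check (and the reviewer's point above is that rpc/misc.cpp still carries the equivalent branch). As a generic, self-contained illustration only, not the actual Dash Core implementation, the removed check boils down to asking whether the v19 rules are active for the block built on top of the current tip; `BlockIndexLike` and `v19_activation_height` below are invented for the sketch:

```cpp
// Generic illustration of what the removed check asked for a height-based deployment:
// "will the v19 rules be active for the block that builds on `tip`?"
struct BlockIndexLike { int nHeight = 0; };

bool DeploymentActiveAfterLike(const BlockIndexLike* tip, int v19_activation_height)
{
    const int next_height = (tip ? tip->nHeight : -1) + 1; // genesis builds on "height -1"
    return next_height >= v19_activation_height;
}

int main()
{
    BlockIndexLike tip{1000};
    const bool is_basic_scheme_active = DeploymentActiveAfterLike(&tip, /*v19_activation_height=*/900);
    return is_basic_scheme_active ? 0 : 1; // after this PR the answer is treated as "yes" unconditionally
}
```

After this PR the answer is assumed to always be "yes", so the branch, and the legacy wrapper it fed, can go away.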
nit: also drop the unused ConstCBLSPublicKeyVersionWrapper - see 68ee12d
fb9fdf9 to 844f242
Code reviews applied, squashed into top commit.
/**
 * @pre CMasternodeMetaMan's database must be successfully loaded before
 * attempting to call this function regardless of sync state
 */
static PeerMsgRet ProcessMessage(CNode& peer, ServiceFlags node_services, CConnman& connman, CMasternodeMetaMan& mn_metaman, const CActiveMasternodeManager* const mn_activeman,
                                 const CChain& active_chain, const CMasternodeSync& mn_sync, const CDeterministicMNList& tip_mn_list,
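A hypothetical illustration of how that precondition could be enforced at the top of the handler; IsValid() is an assumed accessor name here, not necessarily part of the real CMasternodeMetaMan interface:

```cpp
// Hypothetical sketch only: guard the documented precondition before doing any work.
// mn_metaman.IsValid() is an assumption about the metadata manager's API.
assert(mn_metaman.IsValid()); // the metadata DB must already be loaded, regardless of sync state
```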
nit: also drop the now-unused forward declaration:
-class CChain;
@@ -3784,7 +3784,7 @@ void PeerManagerImpl::ProcessMessage(
    }

    if (is_masternode && !pfrom.m_masternode_probe_connection) {
        CMNAuth::PushMNAUTH(pfrom, m_connman, *m_mn_activeman, m_chainman.ActiveChain().Tip());
Haven't you forgotten to update net_processing.cpp? CMNAuth::ProcessMessage changed its signature.
probably 🙈
fixup: drop CChain from mnauth ProcessMessage
feat: let RPC mnauth to generate only BASIC bls messages and fixes for rpc_mnauth.py and p2p_quorum_data.py
refactor: drop unused ConstCBLSPublicKeyVersionWrapper
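Given the "drop CChain from mnauth ProcessMessage" fixup, the declaration quoted earlier presumably loses its active_chain parameter. A sketch of the resulting signature, assuming nothing else changes (the parameters elided in the quote stay elided here):

```cpp
// Sketch only: the declaration from mnauth.h minus the CChain& parameter.
static PeerMsgRet ProcessMessage(CNode& peer, ServiceFlags node_services, CConnman& connman,
                                 CMasternodeMetaMan& mn_metaman,
                                 const CActiveMasternodeManager* const mn_activeman,
                                 const CMasternodeSync& mn_sync,
                                 const CDeterministicMNList& tip_mn_list,
                                 /* ... remaining parameters as before ... */);
```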
844f242 to 09058e0
utACK 09058e0
Thread sanitizer failed with an error that is addressed in my PR; logs for reference.
…ustom changes

Assertion failed: detected inconsistent lock order for 'cs_main' in node/blockstorage.cpp:775 (in thread 'httpworker.2'), details in debug log.
Previous lock order was:
 (2) 'cs_main' in rpc/net.cpp:666 (in thread 'httpworker.3')
 (1) 'm_nodes_mutex' in net.cpp:4754 (in thread 'httpworker.3')
Current lock order is:
 (1) 'm_nodes_mutex' in net.cpp:5102 (in thread 'httpworker.2')
 (2) 'cs_main' in node/blockstorage.cpp:775 (in thread 'httpworker.2')
node0 2024-12-09T07:46:49.907123Z (mocktime: 2014-12-04T18:34:20Z) [httpworker.2] [stacktraces.cpp:528] [PrintCrashInfo] Posix Signal: Aborted
...
15#: (0x5764EB18DE42) sync.cpp:123 - potential_deadlock_detected
16#: (0x5764EB194A7E) sync.cpp:190 - push_lock<std::recursive_mutex>
17#: (0x5764EB194A7E) sync.cpp:214 - void EnterCritical<std::recursive_mutex>(char const*, char const*, int, std::recursive_mutex*, bool)
18#: (0x5764EA928642) unique_lock.h:150 - std::unique_lock<std::recursive_mutex>::try_lock()
19#: (0x5764EA928642) sync.h:168 - UniqueLock<AnnotatedMixin<std::recursive_mutex>, std::unique_lock<std::recursive_mutex> >::Enter(char const*, char const*, int)
20#: (0x5764EA928642) sync.h:190 - UniqueLock<AnnotatedMixin<std::recursive_mutex>, std::unique_lock<std::recursive_mutex> >::UniqueLock(AnnotatedMixin<std::recursive_mutex>&, char const*, char const*, int, bool)
21#: (0x5764EAAD9C3D) chain.h:214 - CBlockIndex::GetBlockPos() const
22#: (0x5764EAAD9C3D) blockstorage.cpp:775 - operator()
23#: (0x5764EAAD9C3D) blockstorage.cpp:775 - ReadBlockFromDisk(CBlock&, CBlockIndex const*, Consensus::Params const&)
24#: (0x5764EADB117A) stl_vector.h:1126 - std::vector<std::shared_ptr<CTransaction const>, std::allocator<std::shared_ptr<CTransaction const> > >::operator[](unsigned long)
25#: (0x5764EADB117A) cbtx.cpp:467 - GetNonNullCoinbaseChainlock(CBlockIndex const*)
26#: (0x5764EA9FE4F8) utils.cpp:84 - GetHashModifier
27#: (0x5764EAA05291) utils.cpp:189 - ComputeQuorumMembers
28#: (0x5764EAA05291) utils.cpp:167 - llmq::utils::GetAllQuorumMembers(Consensus::LLMQType, CDeterministicMNManager&, gsl::not_null<CBlockIndex const*>, bool)
29#: (0x5764EA9906FC) stl_vector.h:1258 - std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > >::data()
30#: (0x5764EA9906FC) span.h:164 - Span<std::shared_ptr<CDeterministicMN const> >::Span<std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > > >(std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > >&, std::enable_if<((!Span<std::shared_ptr<CDeterministicMN const> >::is_Span<std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > > >::value)&&std::is_convertible<std::remove_pointer<decltype ((((declval<std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > >&>)()).data)())>::type (*) [], std::shared_ptr<CDeterministicMN const> (*) []>::value)&&std::is_convertible<decltype ((((declval<std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > >&>)()).size)()), unsigned long>::value, decltype(nullptr)>::type)
31#: (0x5764EA9906FC) quorums.cpp:412 - llmq::CQuorumManager::BuildQuorumFromCommitment(Consensus::LLMQType, gsl::not_null<CBlockIndex const*>, bool) const
32#: (0x5764EA99198B) shared_ptr_base.h:1540 - std::__shared_ptr<llmq::CQuorum const, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<llmq::CQuorum, void>(std::__shared_ptr<llmq::CQuorum, (__gnu_cxx::_Lock_policy)2>&&)
33#: (0x5764EA99198B) shared_ptr.h:369 - std::shared_ptr<llmq::CQuorum const>::shared_ptr<llmq::CQuorum, void>(std::shared_ptr<llmq::CQuorum>&&)
34#: (0x5764EA99198B) quorums.cpp:672 - llmq::CQuorumManager::GetQuorum(Consensus::LLMQType, gsl::not_null<CBlockIndex const*>, bool) const
35#: (0x5764EA991BD6) shared_ptr_base.h:1070 - std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count()
36#: (0x5764EA991BD6) shared_ptr_base.h:1524 - std::__shared_ptr<llmq::CQuorum const, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr()
37#: (0x5764EA991BD6) shared_ptr.h:175 - std::shared_ptr<llmq::CQuorum const>::~shared_ptr()
38#: (0x5764EA991BD6) quorums.cpp:494 - llmq::CQuorumManager::RequestQuorumData(CNode*, Consensus::LLMQType, CBlockIndex const*, unsigned short, uint256 const&) const
...
1b0af99 fix: wrong lock order, observed locally on top of #6467 with custom changes (Konstantin Akimov)

Pull request description:

## Issue being fixed or feature implemented

Assertion failed: detected inconsistent lock order for 'cs_main' in node/blockstorage.cpp:775 (in thread 'httpworker.2'), details in debug log.
Previous lock order was:
 (2) 'cs_main' in rpc/net.cpp:666 (in thread 'httpworker.3')
 (1) 'm_nodes_mutex' in net.cpp:4754 (in thread 'httpworker.3')
Current lock order is:
 (1) 'm_nodes_mutex' in net.cpp:5102 (in thread 'httpworker.2')
 (2) 'cs_main' in node/blockstorage.cpp:775 (in thread 'httpworker.2')
node0 2024-12-09T07:46:49.907123Z (mocktime: 2014-12-04T18:34:20Z) [httpworker.2] [stacktraces.cpp:528] [PrintCrashInfo] Posix Signal: Aborted
...
15#: (0x5764EB18DE42) sync.cpp:123 - potential_deadlock_detected
16#: (0x5764EB194A7E) sync.cpp:190 - push_lock<std::recursive_mutex>
17#: (0x5764EB194A7E) sync.cpp:214 - void EnterCritical<std::recursive_mutex>(char const*, char const*, int, std::recursive_mutex*, bool)
18#: (0x5764EA928642) unique_lock.h:150 - std::unique_lock<std::recursive_mutex>::try_lock()
19#: (0x5764EA928642) sync.h:168 - UniqueLock<AnnotatedMixin<std::recursive_mutex>, std::unique_lock<std::recursive_mutex> >::Enter(char const*, char const*, int)
20#: (0x5764EA928642) sync.h:190 - UniqueLock<AnnotatedMixin<std::recursive_mutex>, std::unique_lock<std::recursive_mutex> >::UniqueLock(AnnotatedMixin<std::recursive_mutex>&, char const*, char const*, int, bool)
21#: (0x5764EAAD9C3D) chain.h:214 - CBlockIndex::GetBlockPos() const
22#: (0x5764EAAD9C3D) blockstorage.cpp:775 - operator()
23#: (0x5764EAAD9C3D) blockstorage.cpp:775 - ReadBlockFromDisk(CBlock&, CBlockIndex const*, Consensus::Params const&)
24#: (0x5764EADB117A) stl_vector.h:1126 - std::vector<std::shared_ptr<CTransaction const>, std::allocator<std::shared_ptr<CTransaction const> > >::operator[](unsigned long)
25#: (0x5764EADB117A) cbtx.cpp:467 - GetNonNullCoinbaseChainlock(CBlockIndex const*)
26#: (0x5764EA9FE4F8) utils.cpp:84 - GetHashModifier
27#: (0x5764EAA05291) utils.cpp:189 - ComputeQuorumMembers
28#: (0x5764EAA05291) utils.cpp:167 - llmq::utils::GetAllQuorumMembers(Consensus::LLMQType, CDeterministicMNManager&, gsl::not_null<CBlockIndex const*>, bool)
29#: (0x5764EA9906FC) stl_vector.h:1258 - std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > >::data()
30#: (0x5764EA9906FC) span.h:164 - Span<std::shared_ptr<CDeterministicMN const> >::Span<std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > > >(std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > >&, std::enable_if<((!Span<std::shared_ptr<CDeterministicMN const> >::is_Span<std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > > >::value)&&std::is_convertible<std::remove_pointer<decltype ((((declval<std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > >&>)()).data)())>::type (*) [], std::shared_ptr<CDeterministicMN const> (*) []>::value)&&std::is_convertible<decltype ((((declval<std::vector<std::shared_ptr<CDeterministicMN const>, std::allocator<std::shared_ptr<CDeterministicMN const> > >&>)()).size)()), unsigned long>::value, decltype(nullptr)>::type)
31#: (0x5764EA9906FC) quorums.cpp:412 - llmq::CQuorumManager::BuildQuorumFromCommitment(Consensus::LLMQType, gsl::not_null<CBlockIndex const*>, bool) const
32#: (0x5764EA99198B) shared_ptr_base.h:1540 - std::__shared_ptr<llmq::CQuorum const, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<llmq::CQuorum, void>(std::__shared_ptr<llmq::CQuorum, (__gnu_cxx::_Lock_policy)2>&&)
33#: (0x5764EA99198B) shared_ptr.h:369 - std::shared_ptr<llmq::CQuorum const>::shared_ptr<llmq::CQuorum, void>(std::shared_ptr<llmq::CQuorum>&&)
34#: (0x5764EA99198B) quorums.cpp:672 - llmq::CQuorumManager::GetQuorum(Consensus::LLMQType, gsl::not_null<CBlockIndex const*>, bool) const
35#: (0x5764EA991BD6) shared_ptr_base.h:1070 - std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count()
36#: (0x5764EA991BD6) shared_ptr_base.h:1524 - std::__shared_ptr<llmq::CQuorum const, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr()
37#: (0x5764EA991BD6) shared_ptr.h:175 - std::shared_ptr<llmq::CQuorum const>::~shared_ptr()
38#: (0x5764EA991BD6) quorums.cpp:494 - llmq::CQuorumManager::RequestQuorumData(CNode*, Consensus::LLMQType, CBlockIndex const*, unsigned short, uint256 const&) const
...

## What was done?

Refactored the call of `GetQuorum` out of `llmq::CQuorumManager::RequestQuorumData`. It also optimizes `CQuorumManager::StartQuorumDataRecoveryThread` because it avoids a double call of `GetQuorum`.

## How Has This Been Tested?

Run with and without these changes.

## Breaking Changes

You may observe the error `RPC_INVALID_PARAMETER, "quorum not found"` while calling RPC `quorum getdata` instead of it returning `false` with log output only, in case of a request for a non-existing quorum. The output is not documented at the moment anyway.

## Checklist:

- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone

ACKs for top commit:
  UdjinM6: utACK 1b0af99
  PastaPastaPasta: utACK 1b0af99

Tree-SHA512: b1955c7a2caad81f6c68299df513b4b83ff43f1d829d91769ac7d2a7b88985b5e7a86b765cfe90739ced9bac97da8fea23cd5c87cde1fca039d8b3f1a9c91084
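The assertion quoted above is the debug lock-order checker complaining that one thread takes cs_main before m_nodes_mutex while another takes them in the opposite order. A self-contained sketch (plain std::mutex, not the Dash Core types) of that inversion and of the shape of the fix described in the quoted PR, namely doing the cs_main-dependent quorum lookup before touching the node list:

```cpp
#include <mutex>

std::mutex cs_main_like;      // stands in for cs_main
std::mutex nodes_mutex_like;  // stands in for m_nodes_mutex

void lock_in_documented_order() {
    std::lock_guard<std::mutex> a(cs_main_like);      // (1) cs_main
    std::lock_guard<std::mutex> b(nodes_mutex_like);  // (2) m_nodes_mutex
}

void lock_in_reversed_order() {
    std::lock_guard<std::mutex> b(nodes_mutex_like);  // (1) m_nodes_mutex
    std::lock_guard<std::mutex> a(cs_main_like);      // (2) cs_main -- reversed order: potential deadlock
}

void fixed_variant() {
    {
        std::lock_guard<std::mutex> a(cs_main_like);
        // do the cs_main-dependent work here (e.g. look up / build the quorum)
    }
    std::lock_guard<std::mutex> b(nodes_mutex_like);
    // then touch the node list / send the request
}

int main() {
    lock_in_documented_order();
    fixed_variant();
    return 0;
}
```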
ec00c37 test: fix off-by-one in dynamically_add_masternode (UdjinM6)
6519856 test: don't add legacy bls mn on start (UdjinM6)
3db20e3 test: actually use masternode with basic bls pubkey in mnauth test (UdjinM6)

Pull request description:

## Issue being fixed or feature implemented

`rpc_mnauth` is failing in ~50% of cases locally because we still use legacy pubkeys (not in 100% of cases because sometimes they look like basic ones). In CI it fails too, but we retry failed tests a few times so it's less noticeable. Example of "unlucky" tests: https://gitlab.com/dashpay/dash/-/jobs/8613271300#L1867. #6467 follow-up

## What was done?

Add another masternode after v19 activation to actually use a basic bls pubkey

## How Has This Been Tested?

run tests

## Breaking Changes

n/a

## Checklist:

- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [ ] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_

ACKs for top commit:
  knst: utACK ec00c37
  PastaPastaPasta: utACK ec00c37

Tree-SHA512: 850a02ea1bd943762cdb0be706f3703742944c294ee9603b1f9ab95a6b10fb827bc9376e03333177d956b2c13df7384cfe0eb2ffef4d05ff3ec239caa8318d24
Issue being fixed or feature implemented
Currently, mnauth has to check the status of the v19 hard fork. While this isn't so terrible, it's not needed anymore. On mainnet or testnet, any masternode you could possibly connect to will have its tip past the v19 hard fork, meaning in practice it will only ever send you the basic scheme anyhow. Let's just harden it. I initially guarded this behind a new protocol version, but I do not think that is needed.
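For illustration only, the two options weighed above, with `MNAUTH_BASIC_ONLY_PROTO_VERSION` and its value invented for the sketch; the PR picks the unconditional variant:

```cpp
// Self-contained sketch, not Dash Core code: negotiated vs. unconditional basic scheme.
constexpr int MNAUTH_BASIC_ONLY_PROTO_VERSION = 70233; // hypothetical threshold

// Rejected variant: gate the scheme behind a new protocol version per peer.
bool UseBasicSchemeNegotiated(int peer_proto_version)
{
    return peer_proto_version >= MNAUTH_BASIC_ONLY_PROTO_VERSION;
}

// Chosen variant: always use the basic scheme for MNAUTH.
bool UseBasicSchemeAlways()
{
    return true;
}

int main()
{
    return (UseBasicSchemeNegotiated(70233) && UseBasicSchemeAlways()) ? 0 : 1;
}
```

Gating on a protocol version would only matter if a peer could still legitimately expect a legacy-scheme MNAUTH, which, per the reasoning above, no reachable masternode does.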
What was done?
How Has This Been Tested?
Breaking Changes
This is potentially a breaking change for devnets that are moving past the v19 hard fork, but on develop v19 activates at block 2 for devnets, so this shouldn't be noticed.
Checklist:
Go over all the following points, and put an x in all the boxes that apply.