Fixup mkfunding #5677
Conversation
It was crashing when allocating wally memory, because it was missing the wally_init() call that is wrapped up in common_setup().

Changelog-Fixed: devtools: `mkfunding` command no longer crashes (abort)
Fixes ElementsProject#5363
Assisted-By: @TKChattoraj
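The bug is the classic init-before-use pattern: the library's allocator aborts unless a one-time global init (here, libwally's wally_init(), which c-lightning wraps in common_setup()) has run first. A minimal stand-alone sketch of that failure mode, with stub names (wally_init_stub, wally_alloc_stub, common_setup_stub are illustrative, not the real c-lightning/libwally APIs):

```python
class WallyStub:
    """Stand-in for libwally's global state: allocation is only
    legal after the one-time init call."""

    def __init__(self):
        self.initialised = False

    def wally_init_stub(self):
        # Real code: wally_init(), called from c-lightning's common_setup().
        self.initialised = True

    def wally_alloc_stub(self, n):
        # Real code aborts the process; we raise instead so this is testable.
        if not self.initialised:
            raise RuntimeError("abort: wally memory allocated before wally_init")
        return bytearray(n)


def common_setup_stub(wally):
    """Wraps the one-time init, like common_setup(); mkfunding was
    missing this call before the fix."""
    wally.wally_init_stub()


# Buggy path: allocate without setup -> abort.
w = WallyStub()
try:
    w.wally_alloc_stub(32)
    crashed = False
except RuntimeError:
    crashed = True

# Fixed path: call common_setup_stub() first, then allocate.
w2 = WallyStub()
common_setup_stub(w2)
buf = w2.wally_alloc_stub(32)
```

The fix in the PR is exactly the second path: add the setup call before any wally allocation happens.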
To run the script in
This is not the first time I have noticed this lnprototest crash:
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
[gw5] node down: Not properly terminated
[gw5] [ 92%] FAILED tests/test_bolt2-20-open_channel_accepter.py::test_df_opener_accepter_underpays_fees
replacing crashed worker gw5
[gw7] linux Python 3.10.6 cwd: /work/external/lnprototest
[gw7] Python 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0]
tests/test_bolt2-20-open_channel_accepter.py::test_rbf_not_valid_rbf
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Captured stdout ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ADD STARTUP FLAG 'experimental-dual-fund']
# running Sequence:runner.py:96:
# running Block:test_bolt2-20-open_channel_accepter.py:646:
# running Connect:test_bolt2-20-open_channel_accepter.py:647:
# running ExpectMsg:test_bolt2-20-open_channel_accepter.py:648:
# running Msg:test_bolt2-20-open_channel_accepter.py:651:
# running FundChannel:test_bolt2-20-open_channel_accepter.py:652:
# running ExpectMsg:test_bolt2-20-open_channel_accepter.py:653:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Captured stderr ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DEBUG:lnprototest.runner:[START]
DEBUG:lnprototest.runner:RUN Bitcoind
DEBUG:lnprototest.runner:RUN c-lightning
ERROR:concurrent.futures:exception calling callback for <Future at 0x7fa07d9348e0 state=finished raised RpcError>
Traceback (most recent call last):
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 342, in _invoke_callbacks
    callback(self)
  File "/work/external/lnprototest/lnprototest/clightning/clightning.py", line 304, in _done
    raise exception
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/work/external/lnprototest/lnprototest/clightning/clightning.py", line 297, in _fundchannel
    return runner.rpc.fundchannel(
  File "/work/contrib/pyln-client/pyln/client/lightning.py", line 771, in fundchannel
    return self.call("fundchannel", payload)
  File "/work/contrib/pyln-client/pyln/client/lightning.py", line 408, in call
    raise RpcError(method, payload, resp['error'])
pyln.client.lightning.RpcError: RPC call failed: method: fundchannel, payload: {'id': '02c6047f9441ed7d6d3045406e95c07cd85c778e4b8cef3ca7abac09b95c709ee5', 'amount': 999877, 'feerate': '253perkw', 'announce': True}, error: {'code': 400, 'message': 'Unable to connect, no address known for peer', 'data': {'id': '02c6047f9441ed7d6d3045406e95c07cd85c778e4b8cef3ca7abac09b95c709ee5', 'method': 'connect'}}
~~~~~~~~~~~~~~ Stack of ThreadPoolExecutor-7_1 (140327050540608) ~~~~~~~~~~~~~~~
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 83, in _worker
    work_item.run()
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/work/contrib/pyln-proto/pyln/proto/wire.py", line 229, in read_message
    lc = self.connection.recv(18)
~~~~~~~~~~~~~~ Stack of ThreadPoolExecutor-7_0 (140327278073408) ~~~~~~~~~~~~~~~
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 81, in _worker
    work_item = work_queue.get(block=True)
~~~~~~~~~~~~~~ Stack of ThreadPoolExecutor-0_0 (140327295903296) ~~~~~~~~~~~~~~~
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 81, in _worker
    work_item = work_queue.get(block=True)
~~~~~~~~~~~~~~~~~~~~~ Stack of <unknown> (140327427958336) ~~~~~~~~~~~~~~~~~~~~~
  File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 285, in _perform_spawn
    reply.run()
  File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 220, in run
    self._result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 967, in _thread_receiver
    msg = Message.from_io(io)
  File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 432, in from_io
    header = io.read(9)  # type 1, channel 4, payload 4
  File "/usr/local/lib/python3.10/dist-packages/execnet/gateway_base.py", line 400, in read
    data = self._read(numbytes - len(buf))
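For context on the "exception calling callback for Future" line in the log: lnprototest submits the fundchannel RPC to a thread pool and re-raises the worker's exception from a Future done-callback (the `_done` method in clightning.py). A minimal stdlib-only sketch of that propagation pattern, with hypothetical names (`fundchannel_task`, `RpcError` here is a stand-in for pyln.client.lightning.RpcError):

```python
from concurrent.futures import ThreadPoolExecutor


class RpcError(Exception):
    """Stand-in for pyln.client.lightning.RpcError."""


def fundchannel_task():
    # Simulates the RPC failing as in the log:
    # "Unable to connect, no address known for peer".
    raise RpcError("Unable to connect, no address known for peer")


captured = []


def _done(future):
    # lnprototest-style done callback: surface the worker's exception
    # (the real code re-raises it, which produces the log line above).
    exc = future.exception()
    if exc is not None:
        captured.append(exc)


with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(fundchannel_task)
    fut.add_done_callback(_done)
```

This is why the failure surfaces as "exception calling callback" rather than at the original call site: the exception is stored in the Future and only re-raised when the callback inspects it.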
It is unrelated, but there is some crash around there that causes a node crash! Any idea?
That seems to be unrelated to this PR though?
ACK 3d5dc0d
It works now! Walked through with @TKChattoraj 👍