
[bug]: The problem with processing IP & Alias from a gossip protocol? #7223

Closed
LNBIG-COM opened this issue Dec 1, 2022 · 39 comments · Fixed by #7239
Labels: bug (Unintended code behaviour), gossip, p2p (Code related to the peer-to-peer behaviour)

@LNBIG-COM

Background

The problem: announcements of the Alias + IP addresses of LND servers do not propagate after the IP & Alias change.
Most of the network does not see changes made in LND 0.15.4-beta after a server was moved to a new IP and its Alias was changed (even after a week). Most of my own other nodes do not see these changes either (roughly 3/4 of them or more). There is clearly some kind of bug.

Your environment

  • version of lnd: 0.15.4-beta

Steps to reproduce

Change the IP and Alias, and do this at least 3-4 times with different nodes. On at least one of them you will likely see that other nodes on the network do not pick up its changes in the graph, even after a week.

Expected behaviour

Changes to the IP and Alias should propagate fairly quickly to other nodes on the network. Otherwise, it doesn't make sense to use Lightning.

Actual behaviour

Example: more than a week ago I moved lnd-02 to another server; the IP address changed and I changed the Alias. At first I thought the problem was that I had switched from the externalip option to externalhosts for simplicity (with a matching entry in /etc/hosts). But then I tried going back to externalip (with the explicit IP address) and restarted the lnd server many times. That didn't help either, so I returned to externalhosts. I now assume it doesn't matter which of the two options is used.

For example, lnd-02 was moved to another server; now it's lnd-25. But even after a week, my other nodes still see it as (lncli getnodeinfo):

lnd-05, lnd-26:

        "pub_key": "03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618",
        "alias": "LNBIG.com [lnd-25/old-lnd-02]",
        "addresses": [
            {
                "network": "tcp",
                "addr": "213.174.156.69:9735"
            }
        ],

lnd-20, lnd-21, lnd-27, lnd-28, lnd-39, lnd-40 (old IP & Alias!):

        "pub_key": "03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618",
        "alias": "LNBIG.com [lnd-02]",
        "addresses": [
            {
                "network": "tcp",
                "addr": "46.229.165.138:9735"
            }
        ],

lnd-25 (old lnd-02) itself now shows:

l getinfo
{
    "version": "0.15.4-beta commit=v0.15.4-beta",
    "commit_hash": "96fe51e2e5c2ee0c97909499e0e96a3d3755757e",
    "identity_pubkey": "03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618",
    "alias": "LNBIG.com [lnd-25/old-lnd-02]",
    "color": "#3399ff",
    "num_pending_channels": 0,
    "num_active_channels": 84,
    "num_inactive_channels": 200,
    "num_peers": 85,
    "block_height": 765487,
    "block_hash": "0000000000000000000023bd2e094e70822bd9dccd2e63ed4ae253e810af94c3",
    "best_header_timestamp": "1669910883",
    "synced_to_chain": true,
    "synced_to_graph": true,
    "testnet": false,
    "chains": [
        {
            "chain": "bitcoin",
            "network": "mainnet"
        }
    ],
    "uris": [
        "03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618@213.174.156.69:9735"
    ],
...
LNBIG-COM added the bug (Unintended code behaviour) and needs triage labels Dec 1, 2022
LNBIG-COM changed the title to "[bug]: The problem with announcing IP & Alias to the graph network" Dec 1, 2022
@LNBIG-COM (Author) commented Dec 1, 2022

And I have many similar problems with other servers I have moved. Among the latest:

lnd-06 -> lnd-05
lnd-07 -> lnd-26
lnd-01 -> lnd-17
lnd-31 -> lnd-51
lnd-32 -> lnd-52
lnd-33 -> lnd-15

I think these issues may be related to this problem:

#6924
#6531
#6128

@LNBIG-COM (Author) commented Dec 1, 2022

I then decided to look at the gossip protocol spec for node updates, and it defines this address descriptor type:

The following address descriptor types are defined:
...
5: DNS hostname; data = [1:hostname_len][hostname_len:hostname][2:port] (length up to 258) hostname bytes MUST be ASCII characters.

What happens when we set the externalhosts option in lnd.conf? Could it be that lnd broadcasts the hostname string from externalhosts verbatim? Judging by the lnd --help documentation, lnd resolves the name and then uses the IP address. If the latter is true, then everything is OK in my configuration, since I use a name from /etc/hosts. But if it announces the string as a domain name and port, as set in the externalhosts option, into the graph, then this could be my issue.
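As a sanity check on what a type-5 address would look like on the wire, here is a small Go sketch (not lnd's actual code; the function name is invented) that serializes the descriptor quoted above:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodeHostnameAddr serializes a BOLT #7 type-5 address descriptor:
// [1:type][1:hostname_len][hostname_len:hostname][2:port].
// This is an illustrative sketch, not lnd's implementation.
func encodeHostnameAddr(hostname string, port uint16) ([]byte, error) {
	if len(hostname) > 255 {
		return nil, fmt.Errorf("hostname too long: %d bytes", len(hostname))
	}
	for i := 0; i < len(hostname); i++ {
		if hostname[i] > 127 {
			return nil, fmt.Errorf("hostname bytes MUST be ASCII")
		}
	}
	var buf bytes.Buffer
	buf.WriteByte(5)                   // address descriptor type 5
	buf.WriteByte(byte(len(hostname))) // hostname_len
	buf.WriteString(hostname)          // hostname
	// The port is encoded big-endian, like the rest of the protocol.
	if err := binary.Write(&buf, binary.BigEndian, port); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	b, _ := encodeHostnameAddr("example.com", 9735)
	fmt.Printf("%x\n", b)
}
```

If lnd announced the raw hostname this way, receiving nodes would see the string rather than a resolved IP, which is exactly the distinction being asked about here.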

@LNBIG-COM (Author)

I'm looking at the sources now:

https://github.com/lightningnetwork/lnd/blob/master/netann/host_ann.go#L24

// AdvertisedIPs is the set of IPs that we've already announced with
// our current NodeAnnouncement. This set will be constructed to avoid
// unnecessary node NodeAnnouncement updates.
AdvertisedIPs map[string]struct{}

As I understand it, when the server starts, the AdvertisedIPs set is filled with the current IP addresses, and an announcement to the network happens only if they subsequently change. Is that true?

That is, if I run lnd and the hostname still resolves to the same IP, no gossip announcement is made?

I suspect the same simple logic applies to the externalip option - in which cases is a new IP announced?
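My reading of that comment, sketched as hypothetical Go (refreshAddrs, announce, and the surrounding wiring are invented for illustration; only the AdvertisedIPs set idea comes from the quoted source):

```go
package main

import "fmt"

// refreshAddrs compares freshly resolved external addresses against the
// previously advertised set and only triggers a new NodeAnnouncement
// when the sets differ, mirroring the AdvertisedIPs comment above.
func refreshAddrs(advertised map[string]struct{}, resolved []string,
	announce func([]string)) map[string]struct{} {

	changed := len(resolved) != len(advertised)
	for _, ip := range resolved {
		if _, ok := advertised[ip]; !ok {
			changed = true
		}
	}
	if !changed {
		// Same IPs as before: no gossip update is sent at all.
		return advertised
	}
	announce(resolved)
	next := make(map[string]struct{}, len(resolved))
	for _, ip := range resolved {
		next[ip] = struct{}{}
	}
	return next
}

func main() {
	adv := map[string]struct{}{"46.229.165.138": {}}
	adv = refreshAddrs(adv, []string{"213.174.156.69"}, func(a []string) {
		fmt.Println("announcing:", a) // fires only on a change
	})
	refreshAddrs(adv, []string{"213.174.156.69"}, func(a []string) {
		fmt.Println("announcing:", a) // not reached: set unchanged
	})
}
```

Under this model, restarting lnd with an unchanged IP would indeed produce no fresh announcement.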

@bitromortac (Collaborator)

My node sees the same as your lnd-05 and lnd-26 nodes (the IP starting with 213), so there was some partial gossip propagation. On amboss and 1ml it's still lnd-02 (the IP starting with 46). I'm still investigating, but I wanted to make you aware of this RPC, https://api.lightning.community/#updatenodeannouncement; perhaps it can help you as a workaround.

@LNBIG-COM (Author)

@bitromortac Thanks!
I just ran the l peers updatenodeannouncement --address_add NEW_IP --address_remove OLD_IP --alias 'alias' command. It works: it prints JSON showing what changed (I even ran it several times with different data to force an update) and what was broadcast. But even on my own nodes, minutes later there is no update for the lnd-25/old-lnd-02 node. It feels like node updates are not propagating. :(

Another problem is that some wallets (SBW, for example) stop working with the channels of a node that has been moved, because for some reason the Lightning network persistently promotes the idea that nodes are tied to an IP address. It would be better if they were tied to the DNS name saved for the node rather than the IP. This is the Achilles' heel: the IP address and its replacement. :(

@LNBIG-COM (Author) commented Dec 2, 2022

Maybe @guggero or @Roasbeef can help me?

The problem is very serious. I'll try to describe it again in simple words: when I move a node to another server, its IP changes. The port (9735) is open, and there are no connection problems. But node_announcement updates don't seem to work, even in manual mode via the lncli peers updatenodeannouncement ... command (peersrpc is compiled into this lnd). I haven't been able to get the new IP and Alias into the graph on many other nodes (even my own!) for more than a week! For example, lnd-02 was moved to another IP, and for more than a week the world has been seeing the old IP & Alias!

On lnd-02 there is now (the same result as a week ago, and this after I restarted the node many times on the new server and even manually submitted the update command lncli peers updatenodeannouncement ...):

    "num_active_channels": 87,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    "num_inactive_channels": 196,

I do not know how to make the world learn about the new IP! I assumed the gossip protocol existed exactly for this. But your buggy LND doesn't want to do it!

@devastgh commented Dec 2, 2022

For example, lnd-02 was moved to another server now it's lnd-25. But even after a week, even my other nodes see it as (lncli getnodeinfo):

Your cln peers can see your node announcements properly, so I guess the issue is with lnd not processing your announcement, while it is being generated and gossiped properly by your node.

This is what I see of lnd-02:

lightning@devzorln:~$ lightning-cli listnodes 03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618
{
   "nodes": [
      {
         "nodeid": "03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618",
         "alias": "LNBiG.com🇺🇦[lnd-25/lnd-02]",
         "color": "3399ff",
         "last_timestamp": 1669923084,
         "features": "800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000888a52a1",
         "addresses": [
            {
               "type": "ipv4",
               "address": "213.174.156.69",
               "port": 9735
            }
         ]
      }
   ]
}

@LNBIG-COM (Author)

I agree that the problem may not be in sending node update messages, but in how LND processes them when received. Is your abbreviation cln the c-lightning software? All my other nodes are LND, and 3/4 of them have the old data for the former lnd-02. Same with the other nodes I moved three days ago.

@guggero (Collaborator) commented Dec 2, 2022

Unfortunately I'm not very familiar with the p2p/announcement code of lnd. But it does look like there is something wrong with processing the announcements (apparently sending them out isn't the issue).

We did fix something in that area recently: #7186
If you're willing to try this out, I can create a rebased branch of v0.15.5 with just that PR on top.

LNBIG-COM changed the title from "[bug]: The problem with announcing IP & Alias to the graph network" to "[bug]: The problem with processing IP & Alias to the graph network" Dec 2, 2022
LNBIG-COM changed the title from "[bug]: The problem with processing IP & Alias to the graph network" to "[bug]: The problem with processing IP & Alias from a gossip protocol?" Dec 2, 2022
@LNBIG-COM (Author)

We did fix something in that area recently: #7186

OK, or I can try running a few git commands to pull and merge your patches.

@guggero (Collaborator) commented Dec 2, 2022

The PR is on top of master which contains some DB migrations and would make it impossible for you to revert back to v0.15.5. So let me just do this quickly, one moment.

@devastgh commented Dec 2, 2022

I agree that the problem may not be in sending node update messages, but in how LND processes them when received. Is your abbreviation cln the c-lightning software? All my other nodes are LND, and 3/4 of them have the old data for the former lnd-02. Same with the other nodes I moved three days ago.

Yes, I'm running Core Lightning. I just checked all the nodes you mentioned; I can see the new alias/IP address on all of them. Comparing amboss (as they run lnd) with my data, lnd-33 is the only other one I could find that's wrong there but okay in my gossip data.

@guggero (Collaborator) commented Dec 2, 2022

Okay, you can check out the branch pr-7186-v0-15-5-branch which is v0.15.5-beta with a rebased version of #7186 on top.
So git fetch && git checkout pr-7186-v0-15-5-branch.

@LNBIG-COM (Author)

Okay, you can check out the branch pr-7186-v0-15-5-branch which is v0.15.5-beta with a rebased version of #7186 on top. So git fetch && git checkout pr-7186-v0-15-5-branch.

Thank you, I'll go do it now!

@guggero (Collaborator) commented Dec 2, 2022

That PR also adds some additional logging. If you could set DISC=debug that would also help figure out what the problem is (if this doesn't fix it).
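For anyone following along, the DISC subsystem can be raised to debug either persistently in lnd.conf or at runtime. The subsystem=level form below follows lnd's btcd-style debuglevel parser; double-check it against your version's lnd --help:

```ini
; lnd.conf - raise only the gossip/discovery subsystem to debug
[Application Options]
debuglevel=DISC=debug
```

At runtime the equivalent is lncli debuglevel --level DISC=debug, which avoids a restart.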

@LNBIG-COM (Author)

Okay, you can check out the branch pr-7186-v0-15-5-branch which is v0.15.5-beta with a rebased version of #7186 on top. So git fetch && git checkout pr-7186-v0-15-5-branch.

Question: I am currently doing this on lnd-25 (old lnd-02), but if the problem is on the nodes receiving the gossip, does that mean I should instead run this version on all the other nodes except lnd-25?

@bitromortac (Collaborator)

You would need to run it on a node that doesn't have an accurate node announcement from the updated node.

@LNBIG-COM (Author) commented Dec 2, 2022

You would need to run it on a node that doesn't have an accurate node announcement from the updated node.

Ok, I will do it now... But I have already updated lnd-25. I hope this won't spoil the purity of the experiment?

UPDATE: I stopped lnd-25 before it finished starting... Can I downgrade it for a clean experiment, @guggero?

UPDATE2: I have now updated lnd-20, which saw lnd-02 with the old IP & Alias. lnd-25 (old lnd-02) is now stopped. I'm waiting on lnd-20 for gossip updates from other nodes...

UPDATE3: The first run of the patched lnd-20 ended like this:

2022-12-02 12:04:51.026 [INF] LTND: Channel backup proxy channel notifier starting
2022-12-02 12:04:51.027 [INF] ATPL: Instantiating autopilot with active=false, max_channels=4096, allocation=0.950000, min_chan_size=20000, max_chan_size=5000000, private=false, min_confs=1, conf_target=3
2022-12-02 12:04:51.027 [INF] LTND: We're not running within systemd or the service type is not 'notify'
2022-12-02 12:04:51.031 [INF] LTND: Waiting for chain backend to finish sync, start_height=765589
2022-12-02 12:08:13.647 [ERR] LTND: Shutting down because error in main method: unable to determine if wallet is synced: invalid http POST response (nil), method: getblockchaininfo, id: 379, last error=Post "http://bitcoind-backend:8332": dial tcp: lookup bitcoind-backend: too many open files
2022-12-02 12:08:13.647 [INF] RPCS: Stopping RPC Server
2022-12-02 12:08:13.647 [INF] RPCS: Stopping WalletKitRPC Sub-RPC Server

But I think that one is my own problem. I increased ulimit -n to 4096 (it was 1024). Now I'm wondering: wasn't my problem partly related to the open-file limit? I'm continuing my research...
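Since a gossip-heavy node can hold hundreds of sockets open, it may be worth raising the limit persistently rather than per shell. A sketch for a systemd-managed lnd (the drop-in path and unit name are assumptions about this setup):

```ini
# /etc/systemd/system/lnd.service.d/limits.conf (assumed unit name)
[Service]
LimitNOFILE=65536
```

After systemctl daemon-reload && systemctl restart lnd, the "Max open files" row of /proc/$(pidof lnd)/limits should show the new value; a shell-level ulimit -n only affects processes started from that shell.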

@guggero (Collaborator) commented Dec 2, 2022

Can I downgrade it for a clean experiment, @guggero?

Yes, if you use the branch I created you will be able to downgrade to v0.15.5-beta normally.

@LNBIG-COM (Author) commented Dec 2, 2022

lnd-20 (patched) still has the old info about lnd-02; after setting DISC=debug there, I see only the following lines on lnd-20:

2022-12-02 12:55:32.559 [DBG] DISC: Processing NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 12:53:33 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618
2022-12-02 12:55:33.073 [DBG] DISC: Processed NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 12:53:33 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618

I don't see any packet contents there. Meanwhile, on the patched lnd-25 (old lnd-02), I ran lncli peers updatenodeannouncement commands. I have now restarted lnd-25 with the DISC=debug option and am waiting for it to start; then I'll check the logs there...

@LNBIG-COM (Author) commented Dec 2, 2022

lnd-25 patched:

date; l peers updatenodeannouncement --address_remove '213.174.156.69:9735' --alias 'LnBiG.com [lnd-25/lnd-02]'
Fri Dec  2 13:09:00 CET 2022
{
    "ops": [
        {
            "entity": "alias",
            "actions": [
                "changed to LnBiG.com [lnd-25/lnd-02]"
            ]
        },
        {
            "entity": "addresses",
            "actions": [
                "213.174.156.69:9735 removed"
            ]
        }
    ]
}

lnd-20 patched:

2022-12-02 13:10:33.334 [DBG] DISC: Processing NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:09:00 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618
2022-12-02 13:10:33.868 [DBG] DISC: Processed NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:09:00 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618

lnd-25 patched:

date; l peers updatenodeannouncement --address_add $(dig +short a ip-external) --alias 'LNBiG.com🇺🇦[lnd-25/lnd-02]'
Fri Dec  2 13:09:20 CET 2022
{
    "ops": [
        {
            "entity": "alias",
            "actions": [
                "changed to LNBiG.com🇺🇦[lnd-25/lnd-02]"
            ]
        },
        {
            "entity": "addresses",
            "actions": [
                "213.174.156.69:9735 added"
            ]
        }
    ]
}

lnd-20 patched:

2022-12-02 13:12:02.805 [DBG] DISC: Processing NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:09:20 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618
2022-12-02 13:12:03.329 [DBG] DISC: Processed NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:09:20 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618

That's all...

So lnd-20 receives announcements from only one node: deezy.io ⚡✨
But lnd-20 has "num_active_channels": 206, i.e. 206 channels online.

@LNBIG-COM (Author) commented Dec 2, 2022

And right now lnd-20 has the correct IP & Alias from the last update of lnd-25 (patched)!

l getnodeinfo 03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618|grep -E 'alias|addr'
        "alias": "LNBiG.com🇺🇦[lnd-25/lnd-02]",
        "addresses": [
                "addr": "213.174.156.69:9735"
                "name": "payment-addr",

And the last DISC log lines on lnd-20 for the node id of lnd-25 are:

2022-12-02 13:10:33.334 [DBG] DISC: Processing NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:09:00 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618
2022-12-02 13:10:33.868 [DBG] DISC: Processed NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:09:00 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618
2022-12-02 13:12:02.805 [DBG] DISC: Processing NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:09:20 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618
2022-12-02 13:12:03.329 [DBG] DISC: Processed NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:09:20 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618

@LNBIG-COM (Author) commented Dec 2, 2022

lnd-21, not patched, running 0.15.5, right now shows:

l getnodeinfo 03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618|grep -E 'alias|addr'
        "alias": "LNBIG.com [lnd-02]",
        "addresses": [
                "addr": "46.229.165.138:9735"
                "name": "payment-addr",

Now I will do the following:

  • watch how getnodeinfo for lnd-25 changes on lnd-20 again when I send a new NodeAnnouncement

  • then patch lnd-21 and check the result there

@spyhuntergenral
I was wondering why my node's IP was not updating on 1ml (when everything looked fine on my end); this all makes sense now.
This kind of issue should really be a P0, and I suspect the majority of node operators who have had an IP change on the affected versions aren't even aware that most of the network cannot see their node anymore.

Good troubleshooting above!

guggero added the p2p (Code related to the peer-to-peer behaviour) label and removed needs triage Dec 5, 2022
@LNBIG-COM (Author)

Yes, I understand that. What I would like to know is whether on a node you patched, do you see messages where peer != source? Meaning: Is the patch definitely working at fixing the propagation bug?

I wrote to you about this earlier, and it covers exactly what you are asking:

#7223 (comment)

Your patch works on lnd-20 (removing/adding the IP and changing the alias)!

2022-12-02 13:29:57.732 [DBG] DISC: Processing NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:27:23 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618
2022-12-02 13:29:58.254 [DBG] DISC: Processed NodeAnnouncement: peer=024bfaf0cabe7f874fd33ebf7c6f4e5385971fc504ef3f492432e9e3ec77e1b5cf@52.1.72.207:9735, timestamp=2022-12-02 13:27:23 +0100 CET, node=03d37fca0656558de4fd86bbe490a38d84a46228e7ec1361801f54f9437a18d618

So peer != source

@LNBIG-COM (Author)

operators that have had an ip change on the affected versions

As far as I understand, the root of the problem lies in the receiving nodes. That is, it doesn't matter which version you run when you change the IP or Alias; what matters is which version the other node runs, and a lot can depend on that node (1ml, amboss, or maybe a routing node).

@LNBIG-COM (Author)

Sure, if you think the patch is working it might make sense to roll it out more widely.

Now, after a little thought, it seems to me that the important thing is not applying the patch on my side, but getting it into a new release as soon as possible (@Roasbeef ?)! By fixing only my own nodes, I would solve the problem of updating IP addresses in the graph only between my nodes and some that are connected to me. But that would not solve the problem for the main part of the network: they would still have the old IP addresses, simply because they run the current release of LND (1ml, amboss, and other big nodes)!

I still believe that the problem may be due to what I described here:

#7223 (comment)

@yyforyongyu (Member)

It is striking that all the strings have the nodeid of the peer equal to source's nodeid. If I understand correctly, source is the node that made the NodeAnnouncement, and peer is who sent it.

No, they are the same node; both refer to the peer who relayed the message. The log needs to be updated a bit to make that clearer, and this is the code.

A few months ago I started using the "star" scheme with three nodes ("hub"): 'lnd-19', 'lnd-22', 'lnd-49'. That is, these three nodes are connected to each other by fat channels, and the remaining 22 nodes are each connected to these three.

I'm trying to understand the topology here. So each node connects to the three star nodes, right? And these star nodes are also connected to each other? What about the non-star nodes?

saubyk added this to the v0.16.0 milestone Dec 6, 2022
saubyk moved this to 🆕 New in lnd v0.16.0 Dec 6, 2022
saubyk moved this from 🆕 New to 🏗 In progress in lnd v0.16.0 Dec 6, 2022
@Roasbeef (Member) commented Dec 7, 2022

Just created #7239 which aims to make sure we prioritize broadcasting our local announcements.

Roasbeef added a commit to Roasbeef/lnd that referenced this issue Dec 7, 2022
In this commit, we modify our gossip broadcast logic to ensure that we
always will send out our own gossip messages regardless of the
filtering/feature policies of the peer.

Before this commit, it was possible that when we went to broadcast an
announcement, none of our peers actually had us as a syncer peer (lnd
terminology). In this case, the FilterGossipMsg function wouldn't do
anything, as they don't have an active timestamp filter set. When we
then merged the syncer map, we'd add all these peers we didn't send to,
meaning we would skip them when it came time to broadcast.

In this commit, we now split things into two phases: we'll broadcast
_our_ own announcements to all our peers, but then do the normal
filtering and chunking for the announcements we got from a remote peer.

Fixes lightningnetwork#6531
Fixes lightningnetwork#7223
Fixes lightningnetwork#7073
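In outline, the two-phase split described in this commit message could look like the following Go sketch (the types and function names are invented for illustration, not lnd's actual gossiper code):

```go
package main

import "fmt"

type msg struct {
	local bool // true if this announcement originated from our own node
	data  string
}

// broadcast sends local announcements to every peer unconditionally,
// and applies the usual timestamp-filter check only to announcements
// relayed from remote peers.
func broadcast(msgs []msg, peers []string,
	passesFilter func(peer string, m msg) bool,
	send func(peer string, m msg)) {

	for _, m := range msgs {
		for _, p := range peers {
			// Phase 1: our own announcements bypass the filter, so a
			// peer with no active timestamp filter still receives them.
			if m.local {
				send(p, m)
				continue
			}
			// Phase 2: remote announcements keep the normal filtering.
			if passesFilter(p, m) {
				send(p, m)
			}
		}
	}
}

func main() {
	peers := []string{"peer-a", "peer-b"}
	msgs := []msg{
		{local: true, data: "our node_announcement"},
		{local: false, data: "relayed node_announcement"},
	}
	// A filter that rejects everything models peers that never set a
	// gossip timestamp filter: local messages still get through.
	broadcast(msgs, peers,
		func(p string, m msg) bool { return false },
		func(p string, m msg) { fmt.Println("send", m.data, "->", p) })
}
```

This matches the reported symptom: before the fix, a node whose peers all lacked an active timestamp filter would never get its own announcement out.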
Roasbeef added a commit to Roasbeef/lnd that referenced this issue Dec 15, 2022 (same commit message as above).
Repository owner moved this from 🏗 In progress to ✅ Done in lnd v0.16.0 Dec 16, 2022