topology: allow peering links between core ASes #4605

Merged
jiceatscion merged 7 commits into scionproto:master from coreaspeering on Sep 9, 2024

Conversation

jiceatscion
Contributor

Fixes #4484

@jiceatscion
Contributor Author

This change is Reviewable

@oncilla changed the title from "topology: allow peering links between core asses" to "topology: allow peering links between core ASes" on Aug 30, 2024
Contributor

@oncilla oncilla left a comment


Reviewable status: 0 of 1 files reviewed, 1 unresolved discussion (waiting on @FR4NK-W and @jiceatscion)

a discussion (no related file):
Could we get a test that ensures it actually works end to end too?

We need to ensure beaconing actually includes these links, and that the path combinator finds them.
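To make the request concrete, here is a minimal sketch of the kind of topology the change is meant to accept, written in the .topo format used by the topology generator; the AS numbers and interface IDs below are purely illustrative and not taken from this PR.

```yaml
# Hypothetical example: two core ASes connected by a regular core link
# and, additionally, a peering link (the case this PR enables).
ASes:
  "1-ff00:0:110":
    core: true
    voting: true
    authoritative: true
    issuing: true
  "1-ff00:0:120":
    core: true
    voting: true
    authoritative: true
    issuing: true
links:
  - {a: "1-ff00:0:110#1", b: "1-ff00:0:120#1", linkAtoB: CORE}
  - {a: "1-ff00:0:110#2", b: "1-ff00:0:120#2", linkAtoB: PEER}
```

An end-to-end test would then check that beacons exchanged over the CORE link carry a peer entry for the PEER interfaces, and that the path combinator can turn that entry into a usable path.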


Contributor Author

@jiceatscion jiceatscion left a comment


Reviewable status: 0 of 1 files reviewed, 1 unresolved discussion (waiting on @FR4NK-W and @oncilla)

a discussion (no related file):

Previously, oncilla (Dominik Roos) wrote…

Could we get a test that ensures it actually works end to end too?

We need to ensure beaconing actually includes these links, and that the path combinator finds them.

Fair request. While I do that, would you happen to have an idea why the end-to-end tests have become so flaky? It's getting nearly impossible to get them to pass. What I've found so far is that right after "awaitconnectivity" it still takes up to 20 seconds for all the up segments to get turned around and registered as down segments (something that awaitconnectivity does not check). I still do not know what's taking so long. In the meantime, I'm adding a 20s sleep at the end of awaitconnectivity so we can continue making PRs.


Contributor

@oncilla oncilla left a comment


Reviewable status: 0 of 1 files reviewed, 1 unresolved discussion (waiting on @FR4NK-W and @jiceatscion)

a discussion (no related file):

Previously, jiceatscion wrote…

Fair request. While I do that, would you happen to have an idea why the end-to-end tests have become so flaky? It's getting nearly impossible to get them to pass. What I've found so far is that right after "awaitconnectivity" it still takes up to 20 seconds for all the up segments to get turned around and registered as down segments (something that awaitconnectivity does not check). I still do not know what's taking so long. In the meantime, I'm adding a 20s sleep at the end of awaitconnectivity so we can continue making PRs.

I do not know. What you could try is to look at the distributed traces.
There should be a Jaeger instance running, and all RPCs and control internal processes are tracked there.


Contributor Author

@jiceatscion jiceatscion left a comment


Reviewable status: 0 of 1 files reviewed, 1 unresolved discussion (waiting on @FR4NK-W and @oncilla)

a discussion (no related file):

Previously, oncilla (Dominik Roos) wrote…

I do not know. What you could try is to look at the distributed traces.
There should be a Jaeger instance running, and all RPCs and control internal processes are tracked there.

Yup... I'm trying not to resort to that :-)


Contributor

@oncilla oncilla left a comment


Reviewable status: 0 of 1 files reviewed, 1 unresolved discussion (waiting on @FR4NK-W and @jiceatscion)

a discussion (no related file):

Previously, jiceatscion wrote…

Yup... I'm trying not to resort to that :-)

It is the first place I reach for if things don't work. Really powerful tool.


Contributor Author

@jiceatscion jiceatscion left a comment


Reviewable status: 0 of 9 files reviewed, 1 unresolved discussion (waiting on @FR4NK-W and @oncilla)

a discussion (no related file):

Previously, oncilla (Dominik Roos) wrote…

It is the first place I reach for if things don't work. Really powerful tool.

Back to the main topic: your suspicion was right. Adding a beaconing test revealed that core ASes assumed they had no peering links to include. That's now fixed, and the tests are added. The path combinator had no issue.


Contributor

@oncilla oncilla left a comment


:lgtm:

Reviewed 1 of 1 files at r1, 8 of 8 files at r2, all commit messages.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @FR4NK-W)

@jiceatscion jiceatscion merged commit a864a8c into scionproto:master Sep 9, 2024
5 checks passed
@jiceatscion jiceatscion deleted the coreaspeering branch September 9, 2024 10:18
oncilla added a commit to oncilla/scion that referenced this pull request Sep 16, 2024