I cannot get a dual-stack configuration to work on k3s with servicelb (Klipper). Here is the values file I used:

This apparently creates two services, one for ipv4 and one for ipv6.

As you can see above, only one service gets an external IP address. In my case it is always the ipv4 service, though I think I have occasionally seen the ipv6 service get the external IP address instead.
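The two rendered services can be listed with something along these lines (the pihole name prefix is an assumption based on the svclb pod names shown below):

$ sudo kubectl get svc --all-namespaces | grep pihole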
Klipper creates corresponding pods for these services.
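Those pods live in kube-system, so something like this shows them and their status (one of them stays Pending):

$ sudo kubectl get pods -n kube-system | grep svclb-pihole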
Drilling further into one of these pending pods, I can see this problem.
$ sudo kubectl describe pod -n kube-system svclb-pihole-web-ipv6-9b288549-4dkgq
[snip]
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  20m                  default-scheduler  0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 3 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  4m21s (x3 over 14m)  default-scheduler  0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 3 Preemption is not helpful for scheduling.
I think that if we could have a single service configured for dual stack, there would only be one set of svclb pods requesting those host ports, and we would not have this problem.
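For illustration, this is roughly what a single dual-stack Service looks like in plain Kubernetes terms; the name, namespace, selector, and port are placeholders, and whether the chart can render something like this depends on its templates:

apiVersion: v1
kind: Service
metadata:
  name: pihole-web        # placeholder name, not necessarily what the chart renders
  namespace: pihole       # placeholder namespace
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack   # request both address families on one Service
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: pihole           # placeholder selector
  ports:
    - name: http
      port: 80
      targetPort: 80

With a single Service like this, servicelb should only need one set of svclb pods per node, so nothing would be left Pending waiting for host ports that are already taken.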
I found a related pull request, #202. After briefly testing it, it appears to fix my problem. It would be nice if you could merge it. In the meantime I shall use my forked repository.