
Integration of policy-based-routing example with nse-composition #9865

Open
isZams opened this issue Sep 19, 2023 · 8 comments
Assignees
Labels
question Further information is requested

Comments

@isZams

isZams commented Sep 19, 2023

Question

I would like to know whether it is possible to integrate the policy-based-routing example with the nse-composition one, so that routing policies can be specified.
If it is possible, I would also like to understand whether this can be applied to individual containers and not just to the whole pod.

@denis-tingaikin denis-tingaikin added the question Further information is requested label Sep 19, 2023
@denis-tingaikin
Member

@NikitaSkrynnik Could you have a look?

@NikitaSkrynnik
Collaborator

@isZams Hello, Policy Based Routing can be used only with kernel interfaces. The nse-composition example uses cmd-nse-firewall-vpp, which uses VPP interfaces. So it looks like it's impossible to integrate these two examples right now.

@isZams
Author

isZams commented Sep 20, 2023

@NikitaSkrynnik hi, thanks for your reply. Actually, I managed to create a composition of endpoints where the pod in the middle of the chain has 2 kernel interfaces (one to handle the incoming traffic, passing it to another container that performs some security function, and one for the outgoing traffic to the next endpoint of the chain).
So at the moment I have the pod with 2 kernel interfaces and I would like to handle the traffic inside the pod: can the policy-based-routing example be used to handle the routing between the 2 containers? Or, in general, can it be integrated into this example?

@d-uzlov d-uzlov moved this to In Progress in Release v1.11.0 Sep 21, 2023
@NikitaSkrynnik
Collaborator

@isZams it's theoretically possible to integrate PBR with kernel interfaces, but we haven't considered this scenario yet. If you could contribute this case, it would be great.

@denis-tingaikin denis-tingaikin changed the title Integration of policy-based-routing example withi nse-composition Integration of policy-based-routing example with nse-composition Sep 25, 2023
@denis-tingaikin denis-tingaikin moved this from In Progress to Blocked in Release v1.11.0 Sep 28, 2023
@denis-tingaikin denis-tingaikin moved this from Blocked to Moved to next release in Release v1.11.0 Oct 8, 2023
@isZams
Author

isZams commented Feb 18, 2024

Hello, @denis-tingaikin, @NikitaSkrynnik, @glazychev-art. I was trying to implement this example as described above. Basically, I added a third pod in the middle, so that it creates two kernel interfaces. I then configured the ConfigMaps to route incoming traffic from the client to the final endpoint (gateway).
So the chain would be as follows:

nsc --> firewall-endpoint --> endpoint

Inside the pod in the middle (firewall-endpoint), I added a third container running tcpdump to monitor the traffic between the interfaces, so that I could see whether or not the routing actually worked.
In doing so, I noticed an odd behaviour: the interface to the NSC is continuously created and destroyed.
Do you have any idea what this might be due to?
Also, could you tell me whether the configuration I use in the "config-file-nse" file is actually correct?

Here are all the files I'm using.
policy-based-routing.zip

Thank you very much for your help and availability!

@NikitaSkrynnik
Collaborator

Hello, @isZams!
There is an error in your config. It should be like this:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: policies-config-file
data:
  config.yaml: |
    - from: 172.16.0.31/31
      routes:
        - prefix: 172.16.1.100/31
          nexthop: 172.16.1.101/32

You should also use this config only in nse-kernel. It will set the rules for nse-firewall.
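As I understand it, each `from` entry in this config becomes a kernel policy rule pointing at a dedicated routing table, and each route under it becomes an entry in that table. The following Python sketch shows that mapping for the corrected config above; the table numbering and exact command shapes are my assumptions for illustration, not the exact NSM implementation:

```python
# Hypothetical sketch: how PBR config entries could map to ip rule / ip route
# commands. Table numbers and command shapes are assumptions, not NSM's code.

def pbr_to_commands(policies, first_table=1):
    """Turn parsed config.yaml entries into ip rule / ip route command strings."""
    cmds = []
    for table, policy in enumerate(policies, start=first_table):
        # One policy rule: packets from this source are looked up in a dedicated table.
        cmds.append(f"ip rule add from {policy['from']} lookup {table}")
        for route in policy["routes"]:
            cmd = f"ip route add {route['prefix']}"
            if "nexthop" in route:
                # nexthop is written with a prefix length; strip it for the command.
                cmd += " via " + route["nexthop"].split("/")[0]
            cmd += f" table {table}"
            cmds.append(cmd)
    return cmds

# The corrected config from above, as parsed YAML:
policies = [
    {"from": "172.16.0.31/31",
     "routes": [{"prefix": "172.16.1.100/31", "nexthop": "172.16.1.101/32"}]},
]
for c in pbr_to_commands(policies):
    print(c)
```

Inspecting `ip rule list` and `ip route show table <N>` inside the target container should show whether rules of this shape were actually applied.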

There are more issues here:

  1. Alpine doesn't know anything about the 172.16.1.100/31 subnet, but for the rules to work it needs to send packets to the 172.16.1.100 dst addr.
  2. The addresses of nse-firewall and nse-kernel may change, so you need to change the rules dynamically too.
  3. Alpine's cmd-nsc continuously creates and destroys the interfaces because of datapath healing: cmd-nsc tries to ping nse-firewall, but the pings can't reach it because of the rules. So the ping fails, and datapath healing closes and reopens the connection and starts pinging again.

To make it work it's required to set routes on alpine and nse-kernel too and change them dynamically if the addresses change.

I think in your case it would be better to use iptables or a bridge interface on nse-firewall. I found the discussion about iptables here:

  1. How can i make composition of endpoints? #9676 (comment)
  2. Routing between 2 containers  #9864

You can try to use a bridge interface. We'd appreciate it if you tried to investigate this. We will be able to consider the feature with bridges in a few weeks.
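For the bridge alternative, the idea would be to enslave both kernel interfaces inside nse-firewall to a Linux bridge so traffic crosses the pod without per-address rules. A minimal sketch of the commands involved, with a dry-run mode so they can be reviewed first; the bridge and interface names (`br0`, `nsm-1`, `nsm-2`) are assumptions, and applying them for real requires root inside the container:

```python
# Hypothetical sketch: bridging the two NSM kernel interfaces inside
# nse-firewall. Interface names are assumptions for illustration.
import subprocess

def bridge_interfaces(members, bridge="br0", dry_run=True):
    """Build (and optionally run) the ip commands to bridge the given interfaces."""
    cmds = [
        ["ip", "link", "add", "name", bridge, "type", "bridge"],
        ["ip", "link", "set", bridge, "up"],
    ]
    for iface in members:
        cmds.append(["ip", "link", "set", iface, "master", bridge])
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))  # review first; set dry_run=False to apply (needs root)
        else:
            subprocess.run(cmd, check=True)
    return cmds

bridge_interfaces(["nsm-1", "nsm-2"])
```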

@isZams
Author

isZams commented Mar 28, 2024

Hi @NikitaSkrynnik, first of all, thanks for your reply and your time.
I didn't properly understand this sentence:
"You should also use this config only in nse-kernel. It will set the rules for nse-firewall."
My goal is to try to use the PBR to manage the traffic between the 2 interfaces in the pod in between.
In the code implementing the cmd-nse-icmp-responder image, I saw that there is a function that takes care of getting the path from which the rules are loaded.
This image is also used within the firewall-endpoint, in one of the three containers that make up the pod.

Furthermore, since this annotation is used, for example,

- name: NSM_CIDR_PREFIX
  value: 172.16.0.30/31

the available IP addresses turn out to be only 2, so why do I need to change them dynamically?

Finally, since from the alpine pod I would ping the first element in the chain (which in this case is the firewall-endpoint), why do I need to set the rules in the alpine pod as well?

Forgive me for asking all these questions, but I would like to clarify a few things.

@denis-tingaikin denis-tingaikin moved this from In Progress to Moved to next release in Release v1.13.0 Apr 9, 2024
@isZams
Author

isZams commented May 6, 2024

Hi @NikitaSkrynnik
I was reviewing this issue and came up with this config file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: policies-config-file
data:
  config.yaml: |
    - from: 172.16.0.31/31
      routes:
        - prefix: 172.16.0.30/31
    - from: 172.16.0.30/31
      routes:
        - prefix: 172.16.1.101/31
    - from: 172.16.0.101/31
      routes:
        - prefix: 172.16.1.100/31
    - from: 172.16.1.100/31
      routes:
        - prefix: 172.16.0.31/31
          nexthop: 172.16.0.30/32

In this way, the interfaces are no longer destroyed and recreated because of the healing issue.
However, I wanted to ask your opinion. Thank you very much in advance!
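Independently of NSM, a config like the one above can be sanity-checked offline: every `from` and `prefix` should parse as a network, and every `nexthop` should be a single host address. A small sketch with Python's standard `ipaddress` module; the policy list mirrors the config in the comment above, and this only checks that entries parse, not that they match the addresses NSM actually assigns:

```python
# Sketch: offline sanity check for a PBR config's addresses.
import ipaddress

def check_policies(policies):
    """Return a list of problems found in parsed PBR config entries."""
    problems = []
    for i, policy in enumerate(policies):
        try:
            # strict=False: 'from' may be a host address with a prefix length.
            ipaddress.ip_network(policy["from"], strict=False)
        except ValueError as e:
            problems.append(f"policy {i}: bad 'from': {e}")
        for route in policy.get("routes", []):
            try:
                ipaddress.ip_network(route["prefix"], strict=False)
            except ValueError as e:
                problems.append(f"policy {i}: bad prefix: {e}")
            if "nexthop" in route:
                try:
                    net = ipaddress.ip_network(route["nexthop"], strict=False)
                    if net.prefixlen != net.max_prefixlen:
                        problems.append(
                            f"policy {i}: nexthop {route['nexthop']} is not a host address")
                except ValueError as e:
                    problems.append(f"policy {i}: bad nexthop: {e}")
    return problems

# The config from the comment above, as parsed YAML:
policies = [
    {"from": "172.16.0.31/31", "routes": [{"prefix": "172.16.0.30/31"}]},
    {"from": "172.16.0.30/31", "routes": [{"prefix": "172.16.1.101/31"}]},
    {"from": "172.16.0.101/31", "routes": [{"prefix": "172.16.1.100/31"}]},
    {"from": "172.16.1.100/31",
     "routes": [{"prefix": "172.16.0.31/31", "nexthop": "172.16.0.30/32"}]},
]
print(check_policies(policies))
```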

@denis-tingaikin denis-tingaikin moved this to Moved to next release in Release v1.14.0 Sep 24, 2024

3 participants