Flow Allocation
The experiment in this section illustrates the mechanics of, and the benefits gained from, the explicit flow allocation procedure available in the RINA architecture. Flow allocation (the procedure that abstracts away all the details of establishing a communication instance between applications) is not exclusive to RINA; it also exists in the Internet, but many of its steps are performed implicitly and by ad-hoc means. In this experiment we will show how the application can trigger the instantiation of flows configured with different policies based on the application's requirements. It is important to stress that the application uses the same API in both cases; it just specifies different requirements for in-order delivery of SDUs and the maximum gap allowable between SDUs at the receiver.
The Figure above depicts the scenario of this experiment: a client application requesting a flow to a server application via a DIF called "normal.DIF" with three IPC Processes, supported by two shim DIFs over two VLANs. The Figure below illustrates all the steps of the flow allocation procedure at System 1, showing summaries of logs of the IPC Process Daemon and the kernel components of the IPC Process.
1. The application "rina.apps.echotime.client:1::" requests a flow to the "rina.apps.echotime.server:1::" application, without any particular requirements (e.g. it does not care whether data arrives in order at the other side, or whether some data can be lost). Unlike in the Internet today, the application does not have to specify addresses or port-ids, or be aware of the protocols of the layer; it does not even need to be aware of which layer makes the destination application available.
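The two allocation requests used in this tutorial can be pictured with a toy API. This is not the IRATI librina API; the names (`FlowSpec`, `allocate_flow`) are invented for illustration, and the point is only that the caller names the peer application and states requirements, never addresses, port numbers or protocols:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowSpec:
    """Illustrative subset of a flow specification."""
    in_order_delivery: bool = False          # must SDUs arrive in order?
    max_allowable_gap: Optional[int] = None  # None = losses tolerated; 0 = no loss

def allocate_flow(local_app: str, remote_app: str, spec: FlowSpec) -> int:
    """Stand-in for the flow allocation call: returns a port-id handle.
    The real stack would run the whole procedure described below."""
    return 1  # the port-id the stack hands back to the application

# Same API in both experiments; only the requirements differ:
port_best_effort = allocate_flow("rina.apps.echotime.client:1::",
                                 "rina.apps.echotime.server:1::",
                                 FlowSpec())
port_reliable = allocate_flow("rina.apps.echotime.client:1::",
                              "rina.apps.echotime.server:1::",
                              FlowSpec(in_order_delivery=True,
                                       max_allowable_gap=0))
```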
2. The flow allocation request is directed to the IPC Manager Daemon, which locates the DIF through which the destination application is available (in this case the search is simple, because "rina.apps.echotime.server:1::" is registered in one of the DIFs available at the "System 1" machine; for more complicated scenarios the IPC Manager would require the assistance of the DIF Allocator). The IPC Manager Daemon forwards the flow allocation request to the IPC Process Daemon.
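The IPC Manager's search in step 2 amounts to a lookup over the local registration tables. A minimal sketch, assuming one set of registered applications per DIF (the table contents and the shim DIF name here are hypothetical, and the real IPC Manager keeps richer state and may consult the DIF Allocator):

```python
# Hypothetical registration tables: DIF name -> applications registered in it.
registrations = {
    "normal.DIF": {"rina.apps.echotime.server:1::"},
    "shim.DIF.vlan100": set(),  # shim DIFs have no app registered in this example
}

def find_dif_for_app(app_name: str):
    """Return a local DIF where the destination application is registered,
    or None (the real IPC Manager would then ask the DIF Allocator)."""
    for dif, apps in registrations.items():
        if app_name in apps:
            return dif
    return None
```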
3. The IPC Process Daemon requests the dynamic creation of a port-id from the kernel. This port-id is the local handle to the flow that will later be returned to the application and, unlike in the Internet, it is not used as the connection-endpoint-id of the connection supporting the flow.
4 and 5. The kernel uses an algorithm to dynamically compute an available port-id and returns it to the IPC Process Daemon.
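Steps 4 and 5 only require that the kernel hand out a port-id that is not currently in use. The algorithm the IRATI kernel actually uses is an implementation detail; a lowest-free-id allocator is one simple possibility, sketched here:

```python
class PortIdAllocator:
    """Toy port-id allocator: hands out the lowest free id. Only the contract
    matters here: every live flow gets a unique, dynamically chosen port-id,
    which is released when the flow is deallocated."""

    def __init__(self) -> None:
        self.in_use: set[int] = set()

    def allocate(self) -> int:
        port_id = 1
        while port_id in self.in_use:  # linear scan: fine for a sketch
            port_id += 1
        self.in_use.add(port_id)
        return port_id

    def release(self, port_id: int) -> None:
        self.in_use.discard(port_id)
```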
6. The IPC Process Daemon creates a Flow Allocator Instance (FAI), which will be responsible for managing the flow during its lifetime. The FAI then compares the requirements specified by the application with the QoS cubes that the IPC Process can support. QoS cubes define a region in the "performance" space that a specific set of policies can cover; each IPC Process supports one or more QoS cubes. In this experiment, the IPC Processes in the "normal.DIF" support two QoS cubes, called "unreliable with flow control" and "reliable with flow control". The policies in the former don't provide any guarantees except that the sender will not exceed the pace of the receiver (flow control enforces this), while the policies in the latter guarantee reliable and in-order delivery of data (via retransmission control policies). The IPC Process Daemon chooses the first one, since the application didn't specify any requirements, and is now ready to create and configure the EFCP connection that will support the flow.
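The FAI's matching of application requirements against QoS cubes can be sketched as a first-fit search. The two cubes below mirror the ones supported in "normal.DIF"; the field names and the matching rule are illustrative assumptions, not the IRATI policy code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QoSCube:
    name: str
    in_order_delivery: bool
    max_allowable_gap: Optional[int]  # None: any amount of loss is acceptable

# The two cubes supported by the IPC Processes in "normal.DIF":
cubes = [
    QoSCube("unreliable with flow control", False, None),
    QoSCube("reliable with flow control", True, 0),
]

def select_cube(requires_order: bool, max_gap: Optional[int]) -> Optional[QoSCube]:
    """First-fit search for a cube whose guarantees cover the request."""
    for cube in cubes:
        if requires_order and not cube.in_order_delivery:
            continue  # cube cannot guarantee in-order delivery
        if max_gap is not None and (cube.max_allowable_gap is None
                                    or cube.max_allowable_gap > max_gap):
            continue  # cube cannot bound the gap tightly enough
        return cube
    return None
```

With no requirements the first experiment gets the unreliable cube; asking for in-order delivery and zero gap selects the reliable one.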
7, 8 and 9. The IPC Process Daemon sends a message to the kernel in order to create and configure an EFCP connection for the flow. Note that the binding between flow and connection is temporary: the same flow can be supported by multiple sequential EFCP connections without the application noticing (the application keeps using the same port-id). The IPC Process Daemon provides the information on all the policies required to configure the EFCP instance: in this case mostly related to flow control, since retransmission control is not active. The IRATI implementation only provides a window-based flow-control policy with a constant credit, which is configured with a value of 50. The connection data structures are also populated with the source and destination addresses, and the EFCP implementation in the kernel generates the source connection-endpoint-id (CEP-id), which identifies the EFCP instance within the IPC Process. Note that, unlike in the Internet, the length of the fields in the EFCP PCI is not fixed by the EFCP protocol definition but is a DIF constant (it can change from DIF to DIF). Once the EFCP instance is properly configured, the source CEP-id is passed back to the IPC Process Daemon.
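Steps 7 to 9 can be pictured as filling in a connection record. This is a sketch, not the kernel's data structure; it only reflects details stated above (constant credit of 50, retransmission control off for this cube, destination CEP-id still unknown), and the CEP-id generator is a stand-in for the kernel's own algorithm:

```python
import itertools
from dataclasses import dataclass, field

_cep_ids = itertools.count(1)  # toy CEP-id generator; the kernel's differs

@dataclass
class EFCPConnection:
    src_address: int
    dst_address: int
    qos_id: int
    src_cep_id: int = field(default_factory=lambda: next(_cep_ids))
    dst_cep_id: int = 0        # unknown until the CREATE response arrives
    flow_control: bool = True
    rtx_control: bool = False  # off for "unreliable with flow control"
    sndr_credit: int = 50      # window-based flow control, constant credit

conn = EFCPConnection(src_address=16, dst_address=18, qos_id=1)
```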
10. The IPC Process Daemon checks its directory and sees that, in order to reach the destination application, it has to forward the flow request to the IPC Process with address 18. Therefore it sends a CDAP CREATE message directed to IPC Process 18, with all the information of the flow (source and destination application names, source and destination addresses, source CEP-id, qos-id, policies) encoded as a flow object. This action is necessary to i) make sure the destination application is still available on the DIF; ii) make sure the source application is allowed to communicate with the destination application; and iii) dynamically negotiate some of the flow characteristics (EFCP policies and CEP-ids). In the Internet all of this is static: CEP-ids are fixed, and the choice of protocol implies the choice of static policies.
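The flow object carried inside the CDAP CREATE message can be sketched as a plain record. The real object is serialized (GPB in IRATI) and its exact field set differs; the names and the example source CEP-id below are illustrative:

```python
# Flow object with the information listed in step 10 (illustrative field names).
flow_object = {
    "source_app": "rina.apps.echotime.client:1::",
    "dest_app": "rina.apps.echotime.server:1::",
    "source_address": 16,
    "dest_address": 18,
    "source_cep_id": 1,  # example value, chosen by the source kernel
    "qos_id": 1,
    "policies": {"flow_control": True, "rtx_control": False, "credit": 50},
}

cdap_create = {
    "opcode": "M_CREATE",  # CDAP CREATE operation
    "obj_class": "Flow",
    "obj_value": flow_object,
    "dest_address": 18,    # directed to the IPC Process with address 18
}
```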
11, 12, 13 and 14. The IPC Process Daemon receives the CDAP message back (CREATE response), accepting the flow and containing the value of the destination CEP-id. The first thing it does is inform the kernel of the destination CEP-id for the EFCP connection, so that packets belonging to that connection can be properly identified.
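The binding performed in these steps is a single state update: the connection created in steps 7 to 9 is completed with the peer's CEP-id. The message shape and the CEP-id value below are example values, not taken from the logs:

```python
# The connection exists since step 7, but the peer's CEP-id is not yet known.
connection = {"src_cep_id": 1, "dst_cep_id": None}

def handle_create_response(conn: dict, response: dict) -> None:
    """Bind the destination CEP-id so outgoing PDUs of this connection
    carry the identifier the peer's EFCP instance expects."""
    conn["dst_cep_id"] = response["dest_cep_id"]

handle_create_response(connection, {"result": "ok", "dest_cep_id": 7})
```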
15, 16. The IPC Process Daemon replies to the IPC Manager Daemon about the successful flow allocation, communicating back the port-id to be used for the flow. The IPC Manager Daemon carries out the equivalent procedure with the source application, which can now start using the flow.
The Figure above illustrates all the steps of the flow allocation procedure at System 3, showing summaries of logs of the IPC Process Daemon and the kernel components of the IPC Process.
1, 2, 3 and 4. The IPC Process Daemon receives a CDAP message from the IPC Process with address 16 requesting the creation of a flow. It checks its directory and realizes that the destination application is accessible via itself; the application has therefore been found. A first access control decision is taken on whether the incoming flow should be accepted or not (note that this function makes firewalls unnecessary, since their features are already embedded in the normal operation of the IPC Process). The IPC Process Daemon requests that the kernel allocate a port-id for the flow, and gets back port-id number 2.
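The access control decision taken here can be sketched as a policy hook consulted before the destination application is even notified, which is why a separate firewall is not needed. The allowlist-of-pairs policy below is one simple assumption; the real decision is a pluggable DIF policy:

```python
# Hypothetical access-control policy: which (source, destination) application
# pairs may establish flows through this IPC Process.
allowed_pairs = {
    ("rina.apps.echotime.client:1::", "rina.apps.echotime.server:1::"),
}

def accept_incoming_flow(src_app: str, dst_app: str) -> bool:
    """Return True if the incoming flow request passes access control."""
    return (src_app, dst_app) in allowed_pairs
```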
5, 6, 7 and 8. The IPC Process Daemon inspects all the information available in the received "Flow" object, and uses it to create and configure the EFCP instance that will support the requested flow. To do so it sends a message to the EFCP implementation in the kernel, which instantiates a new EFCP protocol machine, configures it with the right policies (flow-control only, constant credit of 50) and header values (source address, destination address, qos-id, source CEP-id and destination CEP-id). The source CEP-id (which identifies the local EFCP instance) is communicated back to the IPC Process Daemon.
9 and 10. The IPC Process Daemon communicates the incoming flow request to the IPC Manager Daemon, which in turn notifies the application of the request, including the port-id for the flow. The application has the chance to accept or reject the incoming flow request, and can also instruct the IPC Manager Daemon whether a response should be sent to the application that requested the flow or the request should be silently ignored (since replying already gives hints about the existence and location of the application).
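The three possible outcomes described above (accept, reject, or stay silent) can be sketched as follows; the names are illustrative, not the librina API:

```python
from enum import Enum
from typing import Optional

class FlowDecision(Enum):
    ACCEPT = "accept"
    REJECT = "reject"  # a negative response is sent back to the requester
    IGNORE = "ignore"  # nothing is sent: don't reveal the app exists

def build_response(decision: FlowDecision, port_id: int) -> Optional[dict]:
    """Return the message to send back to the requester, or None for silence."""
    if decision is FlowDecision.IGNORE:
        return None  # silently ignored: no hint of the app's existence/location
    return {
        "result": decision.value,
        "port_id": port_id if decision is FlowDecision.ACCEPT else None,
    }
```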
11, 12 and 13. The application replies to the IPC Manager Daemon accepting the flow. The IPC Manager Daemon forwards the response to the IPC Process Daemon, which replies to the CREATE "Flow" CDAP message.
Now we repeat the experiment with a small variation; the application requests two features for the flow: data must arrive in the same order it was sent, and no data can be lost. The procedure is the same as in the former experiment; the only difference is that the IPC Process Daemon will now select a different QoS cube, "reliable with flow control", so that the appropriate retransmission control policies are configured in the EFCP instances. The following snippet from the EFCP kernel component log shows that the EFCP instance now has some retransmission-control related parameters configured (such as TR, the maximum time to retransmit, or the maximum number of retransmissions).
[ 1273.438656] rina-dtcp(DBG): DTCP SV initialized with dtcp_conf:
[ 1273.442394] rina-dtcp(DBG): data_retransmit_max: 5
[ 1273.447639] rina-dtcp(DBG): sndr_credit: 50
[ 1273.450642] rina-dtcp(DBG): snd_rt_wind_edge: 50
[ 1273.453871] rina-dtcp(DBG): rcvr_credit: 50
[ 1273.457415] rina-dtcp(DBG): rcvr_rt_wind_edge: 50
[ 1273.471015] rina-efcp(DBG): DT SV initialized with:
[ 1273.474512] rina-efcp(DBG): MFPS: 10000, MFSS: 10000
[ 1273.478197] rina-efcp(DBG): A: 300, R: 5000, TR: 1000
[ 1273.481823] rina-efcp(DBG): Connection created (Source address 16, Destination address 18, Destination cep-id 0, Source cep-id 825)
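The retransmission-related values from the log above can be collected into one configuration sketch for the "reliable with flow control" cube. The field names are illustrative and the timer values are in the units printed by the kernel; only the meaning of TR and the retransmission maximum are stated in the text, so the other timers are simply reproduced as reported:

```python
# DTCP configuration for the "reliable with flow control" QoS cube,
# mirroring the kernel log values (illustrative field names).
dtcp_conf = {
    "flow_control": True,
    "rtx_control": True,        # retransmission control enabled for this cube
    "data_retransmit_max": 5,   # maximum number of retransmission attempts
    "sndr_credit": 50,          # same constant-credit flow control as before
    "rcvr_credit": 50,
    "A": 300,                   # as reported in the log
    "R": 5000,                  # as reported in the log
    "TR": 1000,                 # maximum time to retransmit
}
```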
Summarizing, the flow allocation procedure allowed for: i) locating the destination application; ii) dynamically selecting the port-ids; iii) dynamically selecting the most convenient policies for the connection; iv) dynamically selecting the connection CEP-ids; and v) making sure the source application is allowed to communicate with the destination application. In the Internet, by contrast: i) the application has to find out the address of the destination application; ii) port-id selection is static; iii) CEP-ids and port-ids are conflated into the same identifier; iv) policies have to be statically chosen by the application (when creating a socket); and v) access control has to be performed via external systems such as firewalls.