
[POC]: netif rework #12741

Closed · wants to merge 7 commits into from

Conversation

@jia200x (Member) commented Nov 18, 2019

Contribution description

This POC shows a (very early) IEEE802.15.4 stack-independent network interface that sends and receives data from GNRC.
This POC uses concepts and ideas discussed over several months with several contributors (@PeterKietzmann, @miri64, @haukepetersen, @bergzand, @kaspar030, @leandrolanzieri, @cgundogan). I wouldn't call it more than pseudocode that compiles, but at least it shows the role of the different actors and what the lower network architecture could look like.

Please note that implementation details shouldn't go into this discussion, since the idea is to show what the whole architecture could look like. There are still some (straightforward to remove) dependencies on GNRC netif that I didn't consider in the scope of this POC.

This POC shows:

  • What a stack-independent network interface looks like.
  • The role of the HAL, PHY and MAC layers.
  • How packet data can be allocated without stack-dependent functions.
  • What a network-stack-independent MAC layer looks like.

This POC doesn't show:

  • How events are handled by the OS (ISRs, etc.) (hint: sys: add shared event threads #12474 would probably be the way to go).
  • Where the interfaces should sit (this assumes the IEEE802.15.4 HAL will always call the PHY layer on top of it, and the PHY layer will always call the MAC layer).
  • Where the code should reside (the code is still scattered between netdev and pure ieee802154).
  • What the structures will look like; this only shows how to interact with them.

Please feel free to provide all kinds of feedback (e.g. whether the approach makes sense, whether there's something we haven't considered, etc.).

Architecture

(Architecture diagram: netif_arch)

This is a generic representation of a network interface, based on the current approach of gnrc_netif.
The key elements are:

  • Link-layer dependent code is abstracted via the netif_ops interface (same as gnrc_netif_t, but with support for other stacks). netif_ieee802154, netif_ethernet and netif_lorawan are implementations of the netif_ops interface (a rough sketch follows after this list).
  • The upper layer sends data with netif_send and receives via netif_recv. It also configures and reads link-layer or netif parameters via netif_set and netif_get. Note that this API is independent of the link layer underneath.
  • Note that each network stack that implements netif_recv, netif_get and netif_set (and netbuf for the packet abstraction, not included in the picture) is able to use the link layers.
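
As a rough illustration of this abstraction, a hedged sketch of what the netif_ops plumbing could look like (field names are illustrative and not necessarily the POC's actual definitions):

#include <stddef.h>
#include "net/netopt.h"

typedef struct netif netif_t;
typedef struct netif_pkt netif_pkt_t;   /* packet abstraction, quoted later in the review */

/* link-layer specific operations, implemented by netif_ieee802154,
 * netif_ethernet, netif_lorawan, ... */
typedef struct {
    int (*send)(netif_t *netif, netif_pkt_t *pkt);
    int (*get)(netif_t *netif, netopt_t opt, void *value, size_t max_len);
    int (*set)(netif_t *netif, netopt_t opt, const void *value, size_t value_len);
} netif_ops_t;

struct netif {
    const netif_ops_t *ops;   /* dispatch table for the link layer underneath */
};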

What's new

Stack independent packet allocation

To allocate stack-independent packets, the netbuf interface (#12691) is proposed. It's a wrapper around the internal packet representation (gnrc_pktsnip, static input buffers, etc.).

The interface is defined by 2 functions that must be implemented by the stack:

  • void *netbuf_alloc(size_t size, netbuf_ctx_t **ctx): Allocate a buffer with the given size. Returns a pointer to the buffer and stores the allocation context in ctx.
  • void netbuf_free(netbuf_ctx_t *netbuf): Free the resources associated with the netbuf.

This POC provides the GNRC implementation of the netbuf API.
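
For illustration, a hedged sketch of what such a GNRC backend could look like, following the prototypes quoted later in this thread (the actual commits may differ):

#include "net/gnrc/pktbuf.h"
#include "net/netbuf.h"   /* POC header declaring netbuf_ctx_t and the two functions */

/* GNRC backend: the allocation context is simply the backing pktsnip */
void *netbuf_alloc(size_t size, netbuf_ctx_t **ctx)
{
    gnrc_pktsnip_t *pkt = gnrc_pktbuf_add(NULL, NULL, size, GNRC_NETTYPE_UNDEF);

    if (pkt == NULL) {
        return NULL;        /* packet buffer is full */
    }
    *ctx = (netbuf_ctx_t *)pkt;
    return pkt->data;       /* the PHY/MAC writes the frame here */
}

void netbuf_free(netbuf_ctx_t *netbuf)
{
    gnrc_pktbuf_release((gnrc_pktsnip_t *)netbuf);
}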

Stack independent receive

Let's see how GNRC can receive a packet from a GNRC independent netif.

  1. The transceiver HAL calls a callback function (transceiver indication) to indicate there's data in the framebuffer (analogous to NETDEV_EVENT_RX_DONE). In this POC, the event is called from the recv function of GNRC (as seen here), but this will eventually live in the transceiver HAL.
  2. The PHY layer implements the transceiver indication. It uses the netbuf + transceiver HAL API (netdev in the context of the POC) to allocate and fetch the received packet. It then generates a PHY indication with the PSDU + PHY-specific metadata (RSSI, LQI, SNR, etc.) (see it here). A minimal sketch of steps 1 and 2 follows after this list.
  3. The MAC layer implements the PHY indication. It processes the PSDU and generates the MSDU + link-layer metadata (source address, destination address, flags, etc.). It then generates a MAC indication with the MSDU + link-layer metadata (see it here and here).
  4. The netif component implements the MAC indication function. It fills in the netif_pkt_t structure with the output of the MAC layer and calls the netif_recv function (see it here).
  5. The netif_recv function (stack dependent) converts the netif_pkt_t representation into the internal network stack representation and passes the packet to the stack. Check the GNRC implementation of the netif_recv function here.
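
A minimal sketch of steps 1 and 2 (ieee802154_phy_ind_t and ieee802154_phy_ind are taken from the POC snippets in this description; everything else is illustrative):

/* RX-done handler of the transceiver HAL (netdev stands in for the HAL here):
 * fetch the PSDU into a netbuf and hand it to the PHY indication. */
static void _hal_rx_done(netdev_t *dev)
{
    netbuf_ctx_t *ctx;
    ieee802154_phy_ind_t ind;
    int len = dev->driver->recv(dev, NULL, 0, NULL);   /* query frame length */

    if (len <= 0) {
        return;
    }
    void *psdu = netbuf_alloc(len, &ctx);
    if (psdu == NULL) {
        dev->driver->recv(dev, NULL, len, NULL);       /* no memory: drop the frame */
        return;
    }
    /* the last recv() parameter could carry RSSI/LQI metadata; omitted for brevity */
    dev->driver->recv(dev, psdu, len, NULL);
    ind.psdu = psdu;
    ind.psdu_len = len;
    ind.ctx = ctx;
    ieee802154_phy_ind(dev, &ind);   /* steps 3-5 continue in MAC and netif */
}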

Stack independent send

Let's see how GNRC can send a packet via a GNRC independent netif.

  1. The gnrc_netif_send function converts the internal representation of a packet (gnrc_pktbuf + gnrc_netif_hdr) into a netif_pkt_t structure and calls netif_send(netif, netif_pkt) (see it here). A rough sketch of this conversion follows after this list.
  2. The netif_send function calls the L2-specific send function (in this case ieee802154_mac_send) via the netif_ops interface (netif->ops->send(netif, netif_pkt)). This is not implemented with an interface yet, but you can see the call here.
  3. The netif-specific send function operates directly on the MAC layer underneath. E.g. an IEEE802.15.4 netif will perform an IEEE802.15.4 MCPS-DATA request, an Ethernet netif will just call the Ethernet send function, and a RAW ("no MAC") netif will directly interact with the PHY layer. In this case, the netif_ieee802154 implementation calls ieee802154-specific functions to send (as seen here).
  4. The MAC layer will eventually call the PHY layer send function (and handle retransmissions, device busy, etc.). As explained above, it's assumed netdev represents the PHY layer (see it here).
  5. The PHY layer uses the transceiver HAL to set the transceiver state, write data to the framebuffer and trigger the send. Since we don't have a standalone IEEE802.15.4 PHY layer yet, the PHY layer is implemented in the driver. You can see this process e.g. in the send function of the netdev implementation of the AT86RF2XX radios.
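
A rough sketch of the conversion in step 1 (hedged: it assumes the POC embeds a netif_t in gnrc_netif_t and uses the netif_pkt_t fields quoted later in this review; error handling is omitted):

#include "net/gnrc/netif.h"
#include "net/gnrc/netif/hdr.h"

static int _gnrc_netif_send(gnrc_netif_t *gnrc_netif, gnrc_pktsnip_t *pkt)
{
    gnrc_netif_hdr_t *hdr = pkt->data;   /* first snip is the gnrc_netif_hdr */
    netif_pkt_t netif_pkt = {
        /* gnrc_pktsnip_t is layout-compatible with iolist_t, so the payload
         * snips can be handed over directly */
        .msdu = *((iolist_t *)pkt->next),
        .src_l2addr = gnrc_netif_hdr_get_src_addr(hdr),
        .src_l2addr_len = hdr->src_l2addr_len,
        .dst_l2addr = gnrc_netif_hdr_get_dst_addr(hdr),
        .dst_l2addr_len = hdr->dst_l2addr_len,
        .flags = hdr->flags,
    };

    return netif_send(&gnrc_netif->netif, &netif_pkt);   /* assumed embedded netif_t */
}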

Receive from static framebuffer

Some device drivers and external implementations already implement a PHY layer (a callback with buffer, size and metadata). An example of this is the ESP WIFI driver.
With this POC, it's possible to avoid an extra copy: the MAC layer can be processed directly from the static buffer before the packet is converted to the internal network stack representation.

E.g, we could think of:

void external_driver_phy_indication(void *buf, size_t size, int16_t rssi, uint8_t lqi)
{
    ieee802154_phy_ind_t ind; 
    ind.lqi = lqi;
    ind.rssi = rssi;
    ind.psdu = buf;
    ind.psdu_len = size;
    ind.ctx = NULL; /* Intentionally set to NULL to indicate the packet was not allocated by the stack */

    ieee802154_phy_ind(dev, &ind); 
}

And then in the netif_recv function:

if (pkt->ctx == NULL) {
    /* We know the packet was not allocated in the stack. We copy only the MSDU (L2 payload) */
    gnrc_pktsnip_t *gnrc_pkt = gnrc_pktbuf_add(NULL, pkt->msdu.iol_base,
                                               pkt->msdu.iol_len, GNRC_NETTYPE_UNDEF);
    gnrc_netapi_dispatch_receive(gnrc_pkt, ...);
}

Devices with multiple PHY layers

Some devices have multiple PHY layers (LoRa/FSK, IEEE802.15.4 2.4 GHz and Sub-GHz, etc.).
Since in this POC the PHY layer is a user of the transceiver (via the transceiver HAL), it's possible to map two PHY layers to the same device.

E.g. two PHYs (LoRa and FSK) can use the same HAL API to implement the LoRa PHY and the FSK PHY.
The driver then implements the transceiver HAL, which hides the dual operation so each PHY layer only sees a single device.
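
A purely hypothetical sketch of that mapping (none of these PHY types exist in the POC; the point is only that both PHY objects reference the same HAL descriptor):

#include "net/netdev.h"

typedef struct { netdev_t *hal; } lora_phy_t;   /* hypothetical LoRa PHY object */
typedef struct { netdev_t *hal; } fsk_phy_t;    /* hypothetical FSK PHY object */

static lora_phy_t _lora_phy;
static fsk_phy_t _fsk_phy;

void dual_radio_init(netdev_t *dev)
{
    _lora_phy.hal = dev;   /* the LoRa PHY talks to the transceiver HAL... */
    _fsk_phy.hal = dev;    /* ...and so does the FSK PHY, on the same device */
}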

Testing procedure

It's possible to test this using e.g. a samr21-xpro with examples/default, although the intention
is only to show what the code would look like.

Issues/PRs references

#12688
#11483

@jia200x added the "State: WIP" label on Nov 18, 2019
@leandrolanzieri added the "State: demonstrator" label on Nov 19, 2019
@jia200x changed the title from "WIP! [POC]: netif rework" to "[POC]: netif rework" on Nov 19, 2019
@jia200x removed the "State: WIP" label on Nov 19, 2019
@jia200x (Member Author) commented Nov 19, 2019

@RIOT-OS/maintainers please have a look at the proposed netif architecture :)
EDIT: I will add a small diagram to make it easier to understand.

@kaspar030 (Contributor):

* How packet data can be allocated without stack dependent functions.

can this be used without dynamic allocation?

@jia200x (Member Author) commented Nov 19, 2019

can this be used without dynamic allocation?

Yes! The "alloc" function can be static or dynamic. It's also not mandatory to use netbuf_alloc (a PHY layer can have an internal framebuffer).
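
For instance, a hedged sketch of a netbuf backend without any dynamic allocation (illustrative only, not part of the POC):

#include <stdint.h>
#include <stddef.h>
#include "net/netbuf.h"   /* POC header declaring netbuf_ctx_t */

static uint8_t _framebuf[127];   /* room for one IEEE 802.15.4 PSDU */

void *netbuf_alloc(size_t size, netbuf_ctx_t **ctx)
{
    if (size > sizeof(_framebuf)) {
        return NULL;
    }
    *ctx = (netbuf_ctx_t *)_framebuf;   /* the context just tags the static buffer */
    return _framebuf;
}

void netbuf_free(netbuf_ctx_t *netbuf)
{
    (void)netbuf;   /* nothing to release for a static buffer */
}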

@miri64 (Member) commented Nov 19, 2019

* How packet data can be allocated without stack dependent functions.

can this be used without dynamic allocation?

I think the word allocated is a bit misleading here. netbuf_alloc() can also just return the pointer to a static buffer.

@miri64 (Member) commented Nov 19, 2019

netbuf_alloc() is a way for the MAC layer to say "hey network stack, tell me where to put my packet data" [edit]and if the network stack uses dynamic memory allocation or a memory arena it has a chance to allocate it there[/edit], right @jia200x?

@jia200x (Member Author) commented Nov 19, 2019

I think the word allocated is a bit misleading here. netbuf_alloc() can also just return the pointer to a static buffer.

You are right. It was designed to be used with static buffers too.

@miri64 (Member) left a comment

Some comments on the state of netbuf in this PR (didn't check if this is already defined like this in #12315)

Review comments on sys/include/net/netbuf.h (outdated, resolved)
@jia200x (Member Author) commented Nov 19, 2019

netbuf_alloc() is a way for the MAC layer to say "hey network stack, tell me where to put my packet data", right @jia200x?

Yes, but from the PHY layer. This way, it's possible to do L2 processing from either a packet allocated by the network stack or by the device driver (ESP-WIFI)

@jia200x (Member Author) commented Nov 19, 2019

netbuf_alloc() is a way for the MAC layer to say "hey network stack, tell me where to put my packet data" [edit]and if the network stack uses dynamic memory allocation or a memory arena it has a chance to allocate it there[/edit], right @jia200x?

Exactly.

* @return packet context in @p ctx.
* @return NULL if there's not enough memory for allocation.
*/
void *netbuf_alloc(size_t size, netbuf_ctx_t **ctx);
Member:

A pointer should be enough... a pointer to a pointer doesn't make much sense to me... (yes, it comes from the fact that gnrc_pktbuf_t is a pointer, but that does not need to be the case).

Member Author:

I think the confusion comes from the netbuf_ctx_t type.
The context IS a pointer, so I need a pointer to the pointer in order to write the ctx there. I think I will remove the netbuf_ctx_t type to make it more clear.

Member Author:

I removed the netbuf_ctx_t struct in f21f8e3. I also fixed the psdu check.

@miri64 (Member), Nov 20, 2019:

But what if the context is e.g. just some numerical value like an index in an array or a file descriptor? Then the stack needs to go through a lot of hassle to make that a pointer of a pointer to that numerical value. On the other hand, with a pointer you can just do something like the following (based on your GNRC implementation):

if (pkt) {
    *((void **)ctx) = pkt;
    return pkt->data;
}

the only thing you need to adapt in your implementation for GNRC would be that netbuf_free() needs to cast the pkt appropriately:

/* GNRC implementation of the netbuf_free function */
void netbuf_free(void *netbuf)
{
    gnrc_pktbuf_release(*((gnrc_pktsnip_t **)netbuf));
}

@jia200x (Member Author), Nov 20, 2019:

after discussing this offline with @leandrolanzieri, it might make sense to add a netbuf_ctx_t typedef that should be defined by the network stack.

The caller of netbuf_alloc has no idea about the type of the ctx. This snippet:

if (pkt) {
    *((void **)ctx) = pkt;
    return pkt->data;
}

will work in this case because the ctx variable is a void pointer. However, for uint16_t or complex data structures, we would need to map the netbuf_ctx_t to the specific type.

Then we can do something like:

netbuf_ctx_t ctx; /* This could be a uint16_t, pointer or struct depending on the network stack */
void *psdu = netbuf_alloc(size, &ctx);

ind.ctx = &ctx;

For GNRC the typedef would be:

typedef gnrc_pktsnip_t* netbuf_ctx_t;

For an array-based allocator:

typedef uint16_t netbuf_ctx_t;
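
A hedged sketch of the array-based case with that typedef (illustrative only, not part of the POC):

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

typedef uint16_t netbuf_ctx_t;   /* as proposed above for an array-based allocator */

#define SLOT_SIZE   127
#define SLOT_NUMOF  4

static uint8_t _pool[SLOT_NUMOF][SLOT_SIZE];
static bool _used[SLOT_NUMOF];

void *netbuf_alloc(size_t size, netbuf_ctx_t *ctx)
{
    if (size > SLOT_SIZE) {
        return NULL;
    }
    for (uint16_t i = 0; i < SLOT_NUMOF; i++) {
        if (!_used[i]) {
            _used[i] = true;
            *ctx = i;          /* the allocation context is the slot index */
            return _pool[i];
        }
    }
    return NULL;               /* pool exhausted */
}

void netbuf_free(netbuf_ctx_t *ctx)
{
    _used[*ctx] = false;
}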

Member Author:

Huh, but currently ctx is a void pointer. The fact that a stack uses uint16_t doesn't change that. It then would do something like

The problem here is that the snippet assumes that someone allocated the proper datatype before calling netbuf_alloc().

So:

*((uint16_t *)ctx) = foobar;

Assumes that you did:

uint16_t ctx;
netbuf_alloc(size, &ctx);

But you don't know if ctx is a uint16_t, a struct, etc.

How would you do it then without typedefs or pointers?

Member Author:

This won't work, as &ctx points to somewhere in the stack, which, if ind is used with IPC might not be the thing you want to point to anymore ;-).

I was assuming there were no IPC calls between the PHY indication and the netif_recv function. This restriction is lifted if the ctx is always a pointer (i.e. something allocated on the heap).

Member:

So:

*((uint16_t *)ctx) = foobar;

Assumes that you did:

uint16_t ctx;
netbuf_alloc(size, &ctx);

But you don't know if ctx is a uint16_t, a struct, etc.

How would you do it then without typedefs or pointers?

I was more thinking of something along the lines of

void *ctx;
netbuf_alloc(size, ctx);

but I now see that this would mean that the callee needs to store the context somewhere outside of its stack context...

Member:

So yeah, having a stack-specific netbuf_ctx_t would probably be cleaner. I propose to make it like we did it with sock, so netbuf-types.h can use the types specified in netbuf.h if need be:

/* […] */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct netbuf_ctx netbuf_ctx_t;

/* […] (netbuf function definitions) */

#include "netbuf-types.h"

#ifdef __cplusplus
}
#endif

#endif /* NET_NETBUF_H */
/** @} */
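
(For illustration, a hypothetical GNRC-side netbuf-types.h following that sock-style pattern might then look like this; it is not part of the PR:)

#ifndef NETBUF_TYPES_H
#define NETBUF_TYPES_H

#include "net/gnrc/pkt.h"

#ifdef __cplusplus
extern "C" {
#endif

/* for GNRC, the allocation context is simply the backing pktsnip */
struct netbuf_ctx {
    gnrc_pktsnip_t *pkt;
};

#ifdef __cplusplus
}
#endif

#endif /* NETBUF_TYPES_H */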

Member:

Also maybe make it optional via a sub-module or something, in case a stack doesn't need to use netbuf_ctx_t.

@jia200x (Member Author) commented Nov 20, 2019

I think the word allocated is a bit misleading here. netbuf_alloc() can also just return the pointer to a static buffer.

I agree. Any proposal for a name here? netbuf_get? netbuf_get_buffer?

@miri64 (Member) commented Nov 20, 2019

I think the word allocated is a bit misleading here. netbuf_alloc() can also just return the pointer to a static buffer.

I agree. Any proposal for a name here? netbuf_get? netbuf_get_buffer?

netbuf_get() seems sufficient; netbuf_get_buffer() might be too redundant ;-).

* @brief Get a network buffer with a given size.
*
* @note Supposed to be implemented by the networking module.
* @note @p ctx must be NOT NULL if the function returns a valid pointer.
Member:

You can use @post instead of @note when declaring a postcondition.

Member:

Isn't this more of a pre-condition: ctx != NULL? Making the validity of ctx dependent of the return value makes the function somewhat of a crystal ball otherwise ;-).

Member Author:

What I was trying to say here is: there should always be allocation context if this function returns a pointer to a buffer.
If the ctx is set to NULL, the netif_recv function later has no way to know whether the packet was allocated by the stack or lives in a framebuffer.
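
(For illustration only, a hedged sketch of how that doc comment could combine the @pre/@post suggestions; the wording is not from the PR:)

/**
 * @brief   Get a network buffer with a given size.
 *
 * @note    Supposed to be implemented by the networking module.
 *
 * @pre     @p ctx != NULL
 * @post    If a valid pointer is returned, @p ctx holds the allocation
 *          context, so netif_recv can tell a stack-allocated packet from
 *          one that lives in a device framebuffer.
 */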

Member Author:

It would be possible to avoid this if we set an extra variable in the netif_pkt_t that indicates the source of the pkt (framebuffer or stack), but I'm not sure that's the best way to do it.

Member:

What I was trying to say here is: there should always be allocation context if this function returns a pointer to a buffer.

Is there? What about stacks that don't need a context? Do they then need to set some bogus context just to make netbuf happy?

Member:

But then how do you tell netif_recv that the packet was not allocated by the stack, so there's no need to free? (and that's not required to remove the MAC header)

Just call netif_free always. The stack knows how to handle the context (or it is just a NOP).

(and that's not required to remove the MAC header)

If the MAC header is not parsed / marked, there is no need to remove it, right?

Member:

Then we would need something like #12741 (comment)

I don't see the relation between freeing memory regions and a source address....

Member Author:

Just call netif_free always. The stack knows how to handle the context (or it is just a NOP).

The problem is, it's not ensured that netbuf_get is called before calling netif_recv.

E.g for the ESP WIFI:

void esp_wifi_phy_indication(void *psdu, size_t psdu_len)
{
    /* No need to call netbuf_get since the psdu is already allocated somewhere... */
    ieee80211_phy_t phy;
    phy.psdu = psdu;
    phy.psdu_len = psdu_len;
    ...
    ieee80211_phy_ind(&phy);
}

and then in netif_recv:

void netif_recv(...)
{
    if (packet_was_allocated_by_stack()) {
        pkt = mark_and_remove_gnrc_pktsnip_mac_header();
    }
    else {
        /* the packet is in "some" buffer that has no relation with the stack */
        /* for GNRC we need to copy it to a gnrc_pktbuf anyway... */
        pkt = gnrc_pktsnip_add(netif_pkt->msdu.iol_base, netif_pkt->msdu.iol_len, GNRC_NETTYPE_UNDEF);
    }
    build_and_append_gnrc_netif_hdr(pkt);
    dispatch_packet(pkt);
}

How would you implement this packet_was_allocated_by_stack() function?

Member Author:

I don't see the relation between freeing memory regions and a source address....

This is not a source address; it's the source of the allocation (stack, framebuffer, etc.).

Member:

How would you implement this packet_was_allocated_by_stack() function?.

There are two approaches:

  1. The stack just knows if a packet is from its own buffer.
    • Example 1: gnrc_pktbuf_static has a currently internal function _pktbuf_contains() to determine if a packet is within the packet buffer:

      static inline bool _pktbuf_contains(void *ptr)
      {
          return (unsigned)((uint8_t *)ptr - _pktbuf) < GNRC_PKTBUF_SIZE;
      }

    • Example 2: The packet is stored in a global buffer, and the boundaries of that buffer can be determined.
  2. The indication provides this information (e.g. in a flag) independent of ctx.

(1) has the disadvantage that it is not easy to implement in some cases (e.g. gnrc_pktbuf_malloc); (2) has the disadvantage that a driver implementer can easily forget to set this flag (but the same is possible in the stack's implementation of netbuf_alloc() with the ctx).

The disadvantage of (1) is pretty huge, so we should disregard it. (2) could be mitigated by having an ieee80211_phy_init() function, e.g.:

bool ieee80211_phy_init(ieee80211_phy_t *phy, void *psdu, size_t psdu_len)
{
    memset(phy, 0, sizeof(ieee80211_phy_t));
    if (DEVICE_PROVIDES_NETBUF && (psdu != NULL)) {
        phy->psdu = psdu;
        phy->psdu_len = psdu_len;
        phy->flags |= IEEE80211_PHY_BUF_PROVIDED_BY_DEV;
    }
    else {
        phy->psdu = netbuf_alloc(psdu_len, &phy->ctx);
    }
    return (phy->psdu != NULL);
}

In any case I would make such functionality compile-time optimized (packet_was_allocated_by_stack() always returns true if there is no device that behaves in that way). I drafted this above by using the macro DEVICE_PROVIDES_NETBUF in ieee80211_phy_init() as well.

@jia200x (Member Author) commented Nov 22, 2019

Anyway, I propose to focus on the netif first. Basically I propose to:

  1. Validate the netbuf API
  2. Netif interface (netif_send, netif_recv, netif_pkt_t).

The rest is related to MAC and PHY rework, which will eventually come in the second/third phase of #12688

@miri64 (Member) commented Nov 22, 2019

Agreed, but so far our discussion was about netbuf, right?

@jia200x (Member Author) commented Nov 22, 2019

There's another approach we could consider for netif_pkt_t:
Instead of defining a structure that holds the packet information, we can also define a stack dependent netif_pkt_t and functions to work with it.

E.g for GNRC:

typedef struct {
    gnrc_pktsnip_t *netif;
    gnrc_pktsnip_t *msdu;
} netif_pkt_t;

And then define a series of functions:

void netif_pkt_init(netif_pkt_t *pkt, ...)
{
    /* Init gnrc_netif_hdr_t in pkt->netif_hdr */
    /* Init packet snip or point to the allocated by the netbuf */
}
void netif_pkt_set_msdu(...)
{
    /* Mark GNRC pkt snip header here */
}

void netif_pkt_set_rssi(netif_pkt_t *pkt, uint8_t *rssi)
{
    /* Set RSSI in gnrc_netif_hdr */
}

or at least:

void netif_set(int option_enum, void *data, size_t size)
{
    switch(option_enum) {
        case NETIF_PKT_RSSI:
            /* Set RSSI in gnrc_netif_hdr */
            ...
    }
}

The advantage is that it's possible to optimize as much as possible for a given network stack (e.g. no need to duplicate the netif_pkt_t and gnrc_netif_hdr structs). On the other hand, it increases the complexity of adding a new network stack.

@jia200x (Member Author) commented Nov 22, 2019

Agreed, but so far our discussion was about netbuf, right?

Yes, I just wanted to say that we focus only on netbuf specific components (the snippet in #12741 (comment) is a PHY layer that uses netbuf, so we could have the discussion again when the netif component is ready).

However, that discussion shows that the ctx should always be maintained by the stack. I will update the documentation and behavior.

@miri64 (Member) commented Nov 22, 2019

it increases the complexity of adding a new network stack.

And that is why I don't like it ;-P. I think the common approach here is enough and in the end most likely more efficient (memory and porting-wise ;-)).

@miri64 (Member) commented Nov 22, 2019

Yes, I just wanted to say that we focus only on netbuf specific components (the snippet in #12741 (comment) is a PHY layer that uses netbuf, so we could have the discussion again when the netif component is ready).

It was just an example of how to mitigate a drawback of the design for netbuf / netif I proposed, following the example you gave 🙂. I did not want to open a discussion about IEEE 802.11 PHY, just to emphasize my point.

@jia200x (Member Author) commented Nov 22, 2019

@miri64 done (ee1b95f).

@jia200x (Member Author) commented Nov 28, 2019

I added a picture of the netif architecture to the PR description.

@miri64 (Member) left a comment

Some more in-depth comments

@@ -37,6 +37,7 @@

#include "list.h"
#include "net/netopt.h"
#include "net/netbuf.h"
Member:

This include shouldn't be needed.

Comment on lines +62 to +75

typedef struct {
    iolist_t msdu;
    void *ctx;
    uint8_t *src_l2addr;
    uint8_t *dst_l2addr;
    int16_t rssi;
    uint8_t lqi;
    uint8_t src_l2addr_len;
    uint8_t dst_l2addr_len;
    uint8_t flags;
} netif_pkt_t;

int netif_recv(netif_t *netif, netif_pkt_t *pkt);
Member:

It's a POC, I know, but I'm missing some doc.

* `gnrc_pktbuf_add(pkt->msdu.iol_base, pkt->msdu.iol_len, GNRC_NETTYPE_UNDEF)`
* instead of removing the snip and reallocating data.
*/
gnrc_pktsnip_t *ieee802154_hdr = gnrc_pktbuf_mark(gnrc_pkt, ((uint8_t*) pkt->msdu.iol_base) - ((uint8_t*) gnrc_pkt->data), GNRC_NETTYPE_UNDEF);
Member:

Not sure what is happening here... Why the pointer arithmetic? Please add at least a comment about that.

Member Author:

The indication contains a pointer to the MSDU (and length).
In most network stacks it's possible to do:

the_network_stack_input(pkt->msdu.iol_base, pkt->msdu.iol_len);

However, in GNRC it's required to mark the pkt before passing it to the network stack.
So what I'm doing there is removing the IEEE802.15.4 header (all bytes between the beginning of the GNRC packet and the MSDU) plus footer.

Member:

Ok, this seems like a misuse of gnrc_pktbuf_mark then. It is purely meant to mark a header, nothing else, so the function should just receive the MAC header length as an input value.

Member Author:

so the function should just receive the MAC header length as an input value.

Which function do you mean?

gnrc_netif_hdr_set_netif(hdr, (gnrc_netif_t*) netif);
dev->driver->get(dev, NETOPT_PROTO, &gnrc_pkt->type, sizeof(gnrc_pkt->type));

hdr->flags |= pkt->flags;
Member:

Are the flags guaranteed to be transparent? If yes, what's the reasoning for that?

gnrc_pktsnip_t *ieee802154_hdr = gnrc_pktbuf_mark(gnrc_pkt, ((uint8_t*) pkt->msdu.iol_base) - ((uint8_t*) gnrc_pkt->data), GNRC_NETTYPE_UNDEF);

gnrc_pktbuf_remove_snip(gnrc_pkt, ieee802154_hdr);
gnrc_pktbuf_realloc_data(gnrc_pkt, pkt->msdu.iol_len);
Member:

Why is a reallocation necessary here?

Member Author:

Some radios don't process the FCS, so it's necessary to remove it before passing the packet to the stack.
There might be better ways to do it though, I agree.

Member Author:

does it make sense to add functions like gnrc_pktsnip_remove_hdr() or gnrc_pktsnip_remove_footer()?

@miri64 (Member), Nov 28, 2019:

some radios don't process FCS. Thus, it's necessary to remove it before passing it to the stack.

Ok. If it is just removing the trailing FCS, this is the correct usage. No need to add extra functions for that.

@miri64 (Member), Nov 28, 2019:

(But maybe a comment would be nice ;-)).

{
/* Of course, this will be:
* netif->ops->send(netif, pkt); */
netif_ieee802154_send(netif, pkt); /* We use this one, since there's no netif->ops yet :( */
Member:

since there's no netif->ops yet

Why not?

/* prepare destination address */
if (netif_hdr->flags & /* If any of these flags is set assume broadcast */
if (pkt->flags & /* If any of these flags is set assume broadcast */
(GNRC_NETIF_HDR_FLAGS_BROADCAST | GNRC_NETIF_HDR_FLAGS_MULTICAST)) {
Member:

So these flags only exist now to make one stack (GNRC) happy? IMHO these flags shouldn't be known to netif, and the stack should be responsible for setting the proper broadcast address beforehand in some glue code, as I believe GNRC is the odd one out here ;-)

Member:

Alternatively (not sure this is already the case here) there could be a stack-specific region in the flags field.

}
#endif
#ifdef MODULE_GNRC_NETIF_DEDUP
/* NOTE!!: This should be implemented by the IEEE802.15.4 layer! */
Member:

Isn't this the IEEE802.15.4 layer?

Member Author:

The comment is misleading. What I meant there is that it shouldn't be GNRC-specific (it's using gnrc_netif_hdr stuff).

netif->stats.rx_bytes += nread;
#endif

if (((gnrc_netif_t*) netdev->context)->flags & GNRC_NETIF_FLAGS_RAWMODE) {
Member:

Not good that there is GNRC code in here.

Member Author:

It's not intended. I will add a comment there. To remove that I would need to add the flags to the netif structure.

@@ -0,0 +1,186 @@
#include "net/netdev/ieee802154.h"
Member:

Does it make sense to make the following submodules?

  • ieee802154_mac
  • ieee802154_phy
  • ieee802154_netif

At least the latter would make sense IMHO if someone wants to use PHY/MAC without netif.

Member Author:

I think so. In fact, this translates to

# Use raw `at86rf2xx` transceiver (with or without HAL)
USEMODULE += at86rf2xx

# Enable IEEE802.15.4 PHY layer support
USEMODULE += ieee802154_phy

# Enable IEEE802.15.4 MAC layer (automatically enables the IEEE802.15.4 PHY)
USEMODULE += ieee802154_mac

# Enable IEEE802.15.4 network interface (automatically enables PHY and MAC).
USEMODULE += ieee802154_netif

@kaspar030 (Contributor):

This POC shows a (very early) IEEE802.15.4 stack independent network interface that sends and receives data from GNRC.

What is a network stack?
To me, this POC implements part of a network stack. This part is integrated into our driver interface.
I wouldn't go that route. netdev should be the minimal interface needed to implement the higher levels without having to write device dependent code, with the least amount of design choices imposed onto users of the interface. That way, different network stacks can choose different design trade-offs.

IMO, this proposal waters down what minimal means by integrating mandatory code and design.
While that makes this new code re-usable, it makes it more difficult to not use it.

I'm all for improving the MAC layer story, but please, don't mess with the stack independence of netdev.

I'm happy to discuss improving netdev so it can fulfill its role as minimal interface (alongside #12469). But netif, as proposed, needs to sit on top of that.

@miri64 (Member) commented Nov 30, 2019

Not sure what you are getting at @kaspar030. As proposed, it makes netdev more minimal than before, as it takes out all the PHY and MAC stuff, currently required to be implemented within the implementation of netdev or the network stack on top. Likewise, it uses a standardized API to implement (optional) PHY and MAC on top of netdev. This API is, as I realized during this review, already very boiled down to the minimum necessary. Moreover, taking MAC implementations out of a network stack implementation (with network stack I mean any network abstraction layer above the link-layer here) makes MAC protocols (for which implementations can be quite involved, see BLE or 6TSCH) re-usable for any network stack and the standardized API gives MAC implementors a comfortable, well-known playing field to implement the MAC protocol.

@jia200x (Member Author) commented Dec 2, 2019

Hi @kaspar030

What is a network stack?

Strictly speaking it's the software implementation of the communication protocols. E.g GNRC, OpenThread, LWIP, emb6, etc.

To me, this POC implements part of a network stack.

The POC yes (IEEE802.15.4 HAL, PHY, and MAC). The concept of "netif" no. But I added the whole minimal IEEE802.15.4 implementation just to show how to interface them.
In fact, this allows us to have external MAC or PHY layers, something that's not easy in our current architecture.

This part is integrated into our driver interface.

Well, I didn't touch any driver files in this POC, so I don't understand what you mean by that.

I wouldn't go that route. netdev should be the minimal interface needed to implement the higher levels without having to write device dependent code,

Please define higher layers here so we are in the same scope.
By definition the lowest point where we can write hardware independent code is the Hardware Abstraction Layer. Everything on top is device independent by definition (PHY, MAC, netif).

with the least amount of design choices imposed onto users of the interface. That way, different network stacks can choose different design trade-offs.

If you want different trade-offs you will need direct access to each layer. There's nothing slimmer than a raw component without an interface on top (e.g. a raw IEEE802.15.4 PHY vs. an interface that encapsulates the IEEE802.15.4 PHY).

IMO, this proposal waters down what minimal means by integrating mandatory code and design.
While that makes this new code re-usable, it makes it more difficult to not use it.

Can you point out what you consider as mandatory? With this approach you can use the following layers (all, one or a combination of them):

  • Raw transceiver (e.g raw AT86RF2xx with at86rf2xx_xxx functions)
  • Transceiver HAL
  • PHY
  • MAC
  • Netif

We cannot do that in our current architecture.
In fact, our current architecture makes a lot of assumptions that are not needed in other MAC layers. E.g, in a TSCH layer the radio shouldn't make decisions about state changes (send, recv function). So as @miri64 pointed out, this leaves everything optional.

I'm all for improving the MAC layer story, but please, don't mess with the stack independence of netdev.

I didn't touch netdev here. If you see some files, it's because I had header dependency problems (as described above).

But netif, as proposed needs to sit on top of that.

That's an independent discussion, as netif is independent of the usage of netdev

@jia200x (Member Author) commented Feb 13, 2020

After some thought, it probably makes more sense to have a dedicated IEEE802.15.4 SubMAC layer (instead of a pure IEEE802.15.4 PHY layer).
The SubMAC is the lowest common layer required to send raw packets and to use an IEEE802.15.4 MAC. It includes all services of the PHY layer (send raw data, set channels, CCA, etc.) plus:

  • Retransmissions with CSMA-CA
  • ACK handling

The closest representation of the SubMAC right now is netdev on top of the at86rf2xx or mrf24j40 radios.
We can fill the gaps where hardware acceleration is missing with software CSMA-CA and software retransmissions, but the SubMAC layer should be the one that decides.
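
(A purely illustrative sketch of the services such a SubMAC could expose; these names are hypothetical and not the actual proposal:)

#include <stdint.h>
#include "iolist.h"

typedef struct {
    /* transmit one PSDU, handling CSMA-CA, retransmissions and ACKs in
     * hardware where available and in software otherwise */
    int (*send)(const iolist_t *psdu);
    /* plain PHY services remain accessible */
    int (*set_channel)(uint16_t channel, uint8_t page);
    int (*cca)(void);
} ieee802154_submac_ops_t;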

I will open an issue with the proposal.

@jia200x (Member Author) commented Mar 31, 2020

This concept has been refactored into the concepts of #13771, so it's outdated now.
I will close it.

@jia200x jia200x closed this Mar 31, 2020