orchagent/portsorch: Keeping sched_group_ids in port #2520
base: master
Conversation
See inline comments. Also, I suggest adding a mock test that simulates port removal and verifies no errors occur.
m_scheduler_group_port_info[port.m_port_id] = {
-    .groups = std::move(groups),
+    .groups = port.m_scheduler_group_ids,
This avoids the memory issue, but there is still a memory leak in DPB. For example, if an old port is removed and then a set of new ports is created, the SAI OIDs of the ports change, but m_scheduler_group_port_info[<SAI OID of old port>] is never removed. Eventually, m_scheduler_group_port_info will accumulate entries for many ports that have already been removed.
I would like to suggest either:
- moving the whole m_scheduler_group_port_info into the Port struct and initializing it during port initialization or in this function, or
- keeping m_scheduler_group_port_info as it is and removing items from it on port removal.
@neethajohn, what do you think?
I think the second solution will be a lot cleaner.
Hi Stephen/Neetha, I am not familiar with DPB. How is the port removal scenario currently handled for scheduler groups? Is it already covered by existing code before this PR, or is it new functionality that needs to be added? Please clarify, thanks.
@stephenxs, can you comment on the DPB scenario?
When it comes to DPB, I think what we care about is the port removal flow.
In short, the QoS orchagent should listen to the config_db PORT table and handle the port removal event. On receiving a port removal event, it should map the port name to the OID and then erase the item indexed by that OID from m_scheduler_group_port_info.
I think we can use the function bool PortsOrch::getPort(string alias, Port &p) to map the port name to the OID.
But what I'm not sure about is whether the mapping is still available in portsorch in the port removal scenario, because there is no guarantee about the order in which the orchagents (ports orch, QoS orch) handle the port removal notification.
This is why I proposed option 1. But if we want to use option 2, we may also have a way. I will check with the DPB expert on my team (if there is one) and get back to you. Thanks.
What I did
Based on comments from PR #2174, keeping the scheduler groups in Port and reusing them in qosorch.
Why I did it
Avoiding querying twice for scheduler groups.
How I verified it
sudo config warm_restart enable swss
sudo service swss restart
and
warm-boot