Streaming events URL support "not to use cache" #45
Conversation
Update gnmi_cli (plus a small "syntax" fix commit): when configured via args, 1) write responses only to an output file instead of stdout; 2) for on-change events, filter for a specific event; 3) exit upon receiving N responses; 4) exit upon timeout. The above would help use gnmi_cli as a tool in a scripting environment that does testing; a rough sketch of such a loop is shown below.
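As a hypothetical illustration only (this is not the actual gnmi_cli code; the function and parameter names are invented for the sketch), a bounded receive loop that writes filtered responses to a file and stops on a count or a timeout could look like this:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// receiveLoop is an illustrative sketch (names invented here): it writes
// responses that match eventFilter to outPath instead of stdout, and returns
// after maxCount matching responses or when the timeout elapses.
func receiveLoop(responses <-chan string, outPath, eventFilter string,
	maxCount int, timeout time.Duration) error {

	f, err := os.Create(outPath)
	if err != nil {
		return err
	}
	defer f.Close()

	deadline := time.After(timeout)
	received := 0
	for received < maxCount {
		select {
		case resp, ok := <-responses:
			if !ok {
				return nil // stream closed by the peer
			}
			if eventFilter != "" && !strings.Contains(resp, eventFilter) {
				continue // drop non-matching on-change events
			}
			fmt.Fprintln(f, resp)
			received++
		case <-deadline:
			return fmt.Errorf("timed out after %v with %d/%d responses",
				timeout, received, maxCount)
		}
	}
	return nil
}

func main() {
	// Feed a couple of fake responses to exercise the loop.
	ch := make(chan string, 2)
	ch <- `{"event": "example-a"}`
	ch <- `{"event": "example-b"}`
	close(ch)
	if err := receiveLoop(ch, "responses.txt", "example-b", 1, 5*time.Second); err != nil {
		fmt.Println("error:", err)
	}
}
```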
…e" in Query URL. This is useful as only one events client can use cahe. The only client that uses cache today is gNMI events-client. So when we use gnmi_cli, which also uses events-client, have the ability to disable cache. This way, while GWS service is connected, we could still use gnmi_cli w/o any impact to GWS service.
The SONiC switch does offline caching when the gNMI client is down.
The cached events are delivered upon the next gNMI connection.
Only one client should use the cache service.
How to guarantee this limitation?
If two clients use the cache service, what will happen?
Limitation by design.
It is a trade-off between the code complexity of supporting multiple clients with cache and the practical use case.
In practice, there is only one client streaming out.
This is added to enable test tools to function w/o disturbing the main service.
To answer your question: if multiple clients use the cache, it is first come, first served; the cache will go to one client or the other.
Why I did it
Only one gNMI client can use the cache at any time.
External clients that need to stream events reliably must use the cache.
While an external client is connected, a test client (e.g. gnmi_cli) can still be used, provided it does not use the cache.
How I did it
Added support for a new URL parameter, [usecache=false], which disables the use of the cache.
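A minimal sketch, assuming the parameter arrives as a key on the subscribed path, of how the server side could read it; the helper name and defaulting behaviour below are hypothetical and not copied from the sonic-gnmi events client:

```go
package main

import (
	"fmt"
	"strings"

	gnmipb "github.com/openconfig/gnmi/proto/gnmi"
)

// useCacheFromPaths is a hypothetical helper: it scans the subscribed paths
// for a "usecache" key and returns false only when the key is explicitly
// "false"; otherwise the cache stays enabled, matching the described default.
func useCacheFromPaths(paths []*gnmipb.Path) bool {
	for _, p := range paths {
		for _, elem := range p.GetElem() {
			if v, ok := elem.GetKey()["usecache"]; ok && strings.EqualFold(v, "false") {
				return false
			}
		}
	}
	return true
}

func main() {
	p := &gnmipb.Path{Elem: []*gnmipb.PathElem{
		{Name: "all", Key: map[string]string{"usecache": "false"}},
	}}
	fmt.Println(useCacheFromPaths([]*gnmipb.Path{p})) // prints: false
}
```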
How to verify it
Which release branch to backport (provide reason below if selected)
Description for the changelog
Link to config_db schema for YANG module changes
A picture of a cute animal (not mandatory but encouraged)