Group sending of datapoints #3
base: main
Conversation
LGTM, some minor hints for potential improvement, but nothing critical.
```
Group: Frame 1 | Cycle time(ms): 10
API: KuksaValV2
Elapsed time: 9.99 s
Rate limit: 10 ms between iterations
Sent: 1000 iterations * 1 signals = 1000 updates
```
`--help` shows that "iterations" has been removed and replaced by "seconds"?
By default, the results output will be shown like this:
```
[00:00:08] Group: Frame A | Cycle(ms): 10 | Current latency: 0.725 ms [==============================================================================================================] 8/8 seconds
```
Splitting the output results into groups as well seems counter-intuitive, harder to understand.
Now, the user has to do the mental work to combine and aggregate all the results of the individual groups into a single result. How would he do that? Calculating the average of all the averages?
Imho, sending signals is grouped (as seen by the provider, sort of CAN-Bus point of view), but measuring the latency is done on the application side (subscriber, which is on VSS level) and the subscriber doesn't really know about groups of CAN signals and cycle times.
Maybe a future feature request to also show a "Total summary over all groups".
> Splitting the output results into groups as well seems counter-intuitive, harder to understand.

I agree.

> Now, the user has to do the mental work to combine and aggregate all the results of the individual groups into a single result. How would he do that? Calculating the average of all the averages?

Exactly, and I don't think it's even possible to correctly calculate the average after the fact (without having access to the underlying data).
This should be done by the application.
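To illustrate why the overall average cannot be recovered from per-group averages alone, here is a small self-contained sketch with made-up latency numbers: the naive "average of the averages" disagrees with the true average whenever the groups have different sample counts.

```rust
// Mean of a slice of latency samples (all numbers here are hypothetical).
fn avg(samples: &[f64]) -> f64 {
    samples.iter().sum::<f64>() / samples.len() as f64
}

fn main() {
    let group_a = [1.0, 1.0, 1.0, 1.0]; // 4 samples, avg 1.0 ms
    let group_b = [5.0, 5.0];           // 2 samples, avg 5.0 ms

    // Naive "average of the averages": (1.0 + 5.0) / 2 = 3.0 ms
    let naive = (avg(&group_a) + avg(&group_b)) / 2.0;

    // True average over all 6 samples: 14.0 / 6 ≈ 2.33 ms
    let all: Vec<f64> = group_a.iter().chain(group_b.iter()).copied().collect();
    let true_avg = avg(&all);

    // Without the per-group sample counts (the "underlying data"),
    // the naive aggregation is simply wrong.
    println!("naive = {naive:.2} ms, true = {true_avg:.2} ms");
}
```

Only the application, which holds the raw samples (or at least the per-group counts), can aggregate correctly.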
```diff
@@ -0,0 +1,13 @@
+{
```
Not sure why the change in config file format was necessary, probably due to giving them a name.
Without the need for a name, the grouping could have been derived from previous file format. There was already an object for the definition of a signal, and it would have been easier to add it there.
Example (based on config3.json):
```json
{
  "signals": [
    {
      "path": "Vehicle.Speed",
      "cycle_time": "10ms"
    },
    {
      "path": "Vehicle.TraveledDistance",
      "cycle_time": "1000ms"
    },
    {
      "path": "Vehicle.TraveledDistanceSinceStart",
      "cycle_time": "1000ms"
    }
  ]
}
```
This is just a hint, because the input comes from DBC, which is defined signal-by-signal. When you want to convert from that input source into the new databroker-perf config file format, you have to implement the "grouping" logic again.
I understand the theoretical idea of having group names though, but there is no such thing as a "group name" for all signals of the same cycle time in DBC. A user would have to come up with an artificial name like `All signals with 10ms cycle time`, which I doubt has any benefit.
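The suggested derivation — one group per distinct cycle time — fits in a few lines. The `Signal` struct below is a simplified stand-in for the real config type, not the actual databroker-perf code:

```rust
use std::collections::BTreeMap;

// Simplified stand-in for a per-signal config entry (path + cycle time).
struct Signal {
    path: String,
    cycle_time_ms: u64,
}

// Derive groups from a flat signal list: one group per distinct cycle time,
// so no artificial group name is needed in the config file.
fn group_by_cycle_time(signals: &[Signal]) -> BTreeMap<u64, Vec<String>> {
    let mut groups: BTreeMap<u64, Vec<String>> = BTreeMap::new();
    for s in signals {
        groups.entry(s.cycle_time_ms).or_default().push(s.path.clone());
    }
    groups
}

fn main() {
    let signals = vec![
        Signal { path: "Vehicle.Speed".into(), cycle_time_ms: 10 },
        Signal { path: "Vehicle.TraveledDistance".into(), cycle_time_ms: 1000 },
        Signal { path: "Vehicle.TraveledDistanceSinceStart".into(), cycle_time_ms: 1000 },
    ];
    for (cycle, paths) in group_by_cycle_time(&signals) {
        println!("{cycle} ms: {paths:?}");
    }
}
```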
```diff
@@ -54,23 +51,23 @@ struct Args {
     #[clap(long, display_order = 4, default_value_t = 55555)]
     port: u64,

-    /// Number of iterations to run (skip) before measuring the latency.
+    /// Seconds to run (skip) before measuring the latency.
     #[clap(
         long,
         display_order = 5,
         value_name = "ITERATIONS",
```
Why is this still called ITERATIONS if everything is switched to Seconds? Overlooked?
```rust
let endpoint = tonic::transport::Channel::from_shared(databroker_address.clone())
    .with_context(|| "Failed to parse server url")?;
let endpoint = endpoint
    .initial_stream_window_size(1000 * 3 * 128 * 1024) // 20 MB stream window size
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Wouldn't it be useful to have these endpoint settings configurable by the user with additional command line options?
We already know that you might want to fine-tune these settings depending on the available memory on the device and the expected bus load.
Could be separate feature/story, just a hint for the future.
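One possible shape for this, sketched std-only (the flag names are hypothetical, and a real version would go through clap like the existing options): read the window sizes from the command line, falling back to HTTP/2's default initial window size of 65,535 bytes when the flag is absent.

```rust
// Std-only sketch: look up a "--flag value" pair in the argument list.
// Flag names are hypothetical; databroker-perf would wire this through clap.
fn parse_flag(args: &[String], name: &str, default: u32) -> u32 {
    args.windows(2)
        .find(|pair| pair[0] == name)
        .and_then(|pair| pair[1].parse().ok())
        .unwrap_or(default)
}

fn main() {
    let args: Vec<String> = std::env::args().collect();
    // Fall back to HTTP/2's default initial window size (65,535 bytes).
    let stream_window = parse_flag(&args, "--initial-stream-window-size", 65_535);
    let connection_window = parse_flag(&args, "--initial-connection-window-size", 65_535);
    println!("stream={stream_window} connection={connection_window}");
}
```

That way the hard-coded tuning becomes an experiment knob instead of a baked-in assumption about the target device's memory.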
```rust
let endpoint = tonic::transport::Channel::from_shared(databroker_address.clone())
    .with_context(|| "Failed to parse server url")?;
let endpoint = endpoint
    .initial_stream_window_size(1000 * 3 * 128 * 1024) // 20 MB stream window size
```
The comments mismatch the actual values (actually ~393 MB, but the comment says 20 MB).
Imho, this should be changed.
It's very unrealistic to have so much memory on a target embedded device.
```rust
let endpoint = endpoint
    .initial_stream_window_size(1000 * 3 * 128 * 1024) // 20 MB stream window size
    .initial_connection_window_size(1000 * 3 * 128 * 1024) // 20 MB connection window size
    .keep_alive_timeout(Duration::from_secs(1)) // 60 seconds keepalive time
```
Duplicate call to `keep_alive_timeout()`. The comment mismatches the actual value.
Are these settings different than the defaults?
Just out of curiosity, if they differ and are more suitable, it would make sense to document it for real clients (not only for databroker-perf).
I'm not sure we want these customizations here at all... And if we do, it should be clearly documented why they differ from the defaults. Or add them as command line arguments in a future version, so they can be experimented with?
Just doing a quick run, the latency is 20% better if I remove these non-default settings...
```rust
print!("\n\nSummary:");
```
I think it's better if all output goes through `Term::stdout()`, as done in `fn write_output(...)`, instead of using `print!`.
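The suggestion can be sketched without the `console` crate: route every line through one injected writer (`std::io::Write` stands in here for `Term::stdout()`; the function name mirrors the existing `write_output`, but this is not the actual code). A side benefit is that the output becomes testable.

```rust
use std::io::Write;

// All output funnels through one writer, as the real fn write_output(...)
// does with console::Term::stdout(); std::io::Write stands in for it here.
fn write_output(out: &mut impl Write, line: &str) -> std::io::Result<()> {
    writeln!(out, "{line}")
}

fn main() -> std::io::Result<()> {
    let mut out = std::io::stdout();
    write_output(&mut out, "")?; // blank separator instead of print!("\n\n...")
    write_output(&mut out, "Summary:")?;
    Ok(())
}
```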
```diff
 #[clap(
     long,
     short,
     display_order = 1,
-    default_value_t = 1000,
+    default_value_t = 8,
```
I think a better default would be to run forever (i.e. have no default value for this option).
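In clap terms that would mean making the field an `Option<u64>` with no `default_value_t`. A std-only sketch of the resulting loop condition (the option name is hypothetical):

```rust
use std::time::{Duration, Instant};

// None = option not given on the command line = run until interrupted;
// Some(n) = run for n seconds. In clap this would be an Option<u64> field
// with no default_value_t.
fn should_continue(start: Instant, run_seconds: Option<u64>) -> bool {
    match run_seconds {
        Some(secs) => start.elapsed() < Duration::from_secs(secs),
        None => true, // no default: run forever
    }
}

fn main() {
    let start = Instant::now();
    let run_seconds: Option<u64> = None; // would come from clap
    while should_continue(start, run_seconds) {
        // ... one measurement iteration ...
        break; // just a sketch; exit immediately here
    }
}
```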
```rust
};

pub fn read_config(config_file: Option<&String>) -> Result<Vec<Signal>> {
    const MAX_NUMBER_OF_GROUPS: usize = 10;
```
Why do we want a limit here?
This PR implements group sending of datapoints based on a newly defined JSON file.
databroker-perf creates two new gRPC channels for each group: one for the provider and one for the subscriber.
Each provider updates its group's signal values in the databroker at the cycle time (in milliseconds) specified in the provided JSON configuration file.
config_group.json file:
A run `seconds` test parameter is implemented instead of the `iterations` parameter, since all groups should end at exactly the same time.
A `--detail-output` flag has been added to extend the test result view.
The maximum number of groups is set to 10.
Check the docs for more info: https://github.com/eclipse-kuksa/kuksa-perf/blob/539d48951f5860bf34f87e87da66d450e0e6c46f/README.md