Make MPU9250 use the FIFO #192

Merged: 5 commits merged into main, Nov 22, 2022

Conversation

@kitlith (Member) commented on Sep 4, 2022

The idea is that using the FIFO will give us more accurate information, provided we're able to process all of the samples.

One thing that still needs doing is a way to visibly signal somewhere that the buffer has overrun, so that we can tell that's actually happening when testing this.

I'm going to leave a review highlighting all the TODOs I left in the code, so that other people can remark and decide whether they need to be handled now or later.

@kitlith requested a review from deiteris on September 4, 2022 03:25
Comment on lines +126 to +132
// TODO: set a rate we prefer instead of getting the current rate from the device.
deltat = 1.0 / 1000.0 * (1 + imu.getRate());
kitlith (Member Author):

I could be wrong, but we don't appear to have ever set a specific rate anywhere, so I'm querying the IMU for whatever rate is currently set and computing timings based on that. We could change this to set a rate of our choosing instead.
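
As a point of reference, here's a minimal sketch of how the divider relates to deltat, assuming the usual configuration where the MPU9250's internal sample rate is 1 kHz (DLPF enabled) and that getRate()/setRate() map to the SMPLRT_DIV register; the helper name and the 250 Hz figure are illustrative, not necessarily the driver's API:

#include <stdint.h>

// Output rate = 1000 Hz / (1 + SMPLRT_DIV), so seconds per FIFO sample:
static inline float deltatFromDivider(uint8_t smplrtDiv) {
    return (1.0f + smplrtDiv) / 1000.0f;
}

// The direction the TODO suggests: pick a rate and set it ourselves.
// For a 250 Hz output rate: SMPLRT_DIV = (1000 / 250) - 1 = 3
// imu.setRate(3);                  // assuming the driver exposes a setter
// deltat = deltatFromDivider(3);   // 0.004 s per sample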

Comment on lines +188 to +213
// TODO: would it be faster to read multiple samples at once
while (getNextSample (&buf, &remaining_samples)) {
kitlith (Member Author):

I started with a relatively complicated design that involved the ability to read multiple samples into a buffer at once, and then maybe focused too much on avoiding issues with strict aliasing, but settled on this simpler design in the end.

In essence, this TODO is asking: how much time is wasted doing individual I2C transactions instead of trying to batch them? If the time is negligible, then it's not important. If it's significant, then maybe we should try adding batching.

We might want to write a microbenchmark to figure this out, if we care enough to do so.
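
Something along these lines could answer it, assuming Arduino-style micros() timing; readFifoBytes() is a hypothetical wrapper over whatever burst-read call the driver exposes, not an existing function:

#include <Arduino.h>
#include <stdint.h>
#include <stddef.h>

// Hypothetical wrapper around the driver's FIFO read (e.g. a burst read of
// FIFO_R_W over I2C); swap in the real call.
void readFifoBytes(uint8_t *dst, size_t len);

constexpr size_t SAMPLE_SIZE = 19;  // bytes per FIFO sample in this PR
constexpr size_t BATCH = 8;         // samples per burst read, just for comparison

void benchmarkFifoReads() {
    uint8_t buf[SAMPLE_SIZE * BATCH];

    uint32_t t0 = micros();
    for (size_t i = 0; i < BATCH; i++) {
        readFifoBytes(buf + i * SAMPLE_SIZE, SAMPLE_SIZE);  // one transaction per sample
    }
    uint32_t individual = micros() - t0;

    t0 = micros();
    readFifoBytes(buf, SAMPLE_SIZE * BATCH);                // one burst transaction
    uint32_t batched = micros() - t0;

    Serial.print("individual (us): "); Serial.println(individual);
    Serial.print("batched (us):    "); Serial.println(batched);
}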

Comment on lines +194 to +223
// TODO: monitor magnetometer status
// buf.sample.mag_status;
// TODO: monitor interrupts
// imu.getIntStatus();
// TODO: monitor remaining_samples to ensure that the number is going down, not up.
// remaining_samples
kitlith (Member Author):

  • Actually looking at the magnetometer status would allow us to detect when the magnetometer reading is incorrect.
  • Looking at the interrupt status would allow us to handle/detect FIFO overflow.
  • Monitoring remaining_samples would allow us to avoid an infinite-loop scenario in the case that our sample-processing loop is slower than the IMU outputs data (a sketch of this and the overflow check follows below).
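
A hedged sketch of the last two points (on the MPU9250, bit 4 of INT_STATUS, mask 0x10, is FIFO_OVERFLOW_INT per the register map; logOverflow() and the exact placement of the checks are illustrative, not code from this PR):

// INT_STATUS bit 4 (0x10) is FIFO_OVERFLOW_INT on the MPU9250.
constexpr uint8_t FIFO_OVERFLOW_BIT = 0x10;

// Illustrative guard around the existing sample loop:
if (imu.getIntStatus() & FIFO_OVERFLOW_BIT) {
    logOverflow();  // hypothetical: blink an LED, bump a counter, print over serial...
}

uint16_t last_remaining = UINT16_MAX;
while (getNextSample(&buf, &remaining_samples)) {
    // ... existing per-sample processing ...

    // If remaining_samples ever stops decreasing, the IMU is producing data
    // faster than we consume it; bail out rather than loop forever.
    if (remaining_samples >= last_remaining) {
        break;
    }
    last_remaining = remaining_samples;
}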

Comment on lines +411 to +436
// TODO: refactor so that calibration/conversion to float is only done in one place.
void MPU9250Sensor::parseGyroData(int16_t data[3]) {
kitlith (Member Author):

This TODO might be out of date, whoops.

Comment on lines +419 to +444
// really just an implementation detail of getNextSample...
void MPU9250Sensor::swapFifoData(union fifo_sample_raw* sample) {
kitlith (Member Author):

I should consider moving the internals of swapFifoData directly into getNextSample.
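
If that happens, the fold-in could be as simple as converting fields as they're read, assuming the accel/gyro words in the FIFO arrive high byte first (as the register map describes) and the target MCU is little-endian; be16() and the field names here are illustrative:

#include <stdint.h>

// Illustrative helper: big-endian byte pair -> int16_t.
static inline int16_t be16(const uint8_t *p) {
    return (int16_t)(((uint16_t)p[0] << 8) | (uint16_t)p[1]);
}

// Inside getNextSample(), after reading the raw sample bytes into `raw`,
// convert fields directly instead of copying and calling swapFifoData after:
//   buffer->sample.accel[0] = be16(&raw[0]);
//   buffer->sample.accel[1] = be16(&raw[2]);
//   buffer->sample.accel[2] = be16(&raw[4]);
//   ...and so on for the gyro words (the magnetometer's byte order may
//   differ and would need checking).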

Comment on lines +449 to +475
// TODO: strict aliasing might not be violated if we just read directly into a fifo_sample*.
// which means the union approach may be overcomplicated. *shrug*
bool MPU9250Sensor::getNextSample(union fifo_sample_raw *buffer, uint16_t *remaining_count) {
kitlith (Member Author):

If it is overcomplicated, it should be in a way that doesn't affect performance.
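
For what it's worth, the aliasing-safe alternative the TODO alludes to is a memcpy into a plain struct, which is always well-defined. A sketch under the assumptions that fifo_sample is packed to match the wire layout and that a readFifoBytes()-style burst read exists (remaining-count handling omitted):

#include <stdint.h>
#include <string.h>

constexpr size_t SAMPLE_SIZE = 19;          // bytes per FIFO sample in this PR

bool getNextSampleSketch(struct fifo_sample *out) {
    uint8_t raw[SAMPLE_SIZE];
    if (!readFifoBytes(raw, sizeof(raw))) { // hypothetical burst-read wrapper
        return false;
    }
    // memcpy from a byte buffer into the struct never violates strict aliasing,
    // so no union is required for correctness.
    memcpy(out, raw, sizeof(raw));
    // ...then byte-swap the big-endian fields in *out, as swapFifoData does today.
    return true;
}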

Comment on lines +443 to +472
// thought experiments:
// is a single burst i2c transaction faster than multiple? by how much?
// how does that compare to the performance of a memcpy?
// how does that compare to the performance of data fusion?
// if we read an extra byte from the magnetometer (or otherwise did something funky)
// we could read into a properly aligned array of fifo_samples (and not require a memcpy?)
kitlith (Member Author):

This thought experiment is asking the same question as "would it be faster to read multiple samples at once" up above, just in more detail, in case this gets merged and someone is poking around for something to try.

The comment about an extra byte from the magnetometer is referring to the fact that a single sample is currently 19 bytes because of the magnetometer status byte. If we had an array of fifo_sample, there would likely be a single byte of padding between each sample for 16-bit alignment, because these structures mostly consist of 16-bit integers. If we read an extra byte from the magnetometer, then each sample would be 20 bytes and would require no additional padding.
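
To make the layout point concrete, a sketch with illustrative field names (the real struct in the PR may differ):

#include <stdint.h>

// Six 16-bit accel/gyro words + three 16-bit mag words + 1 status byte
// = 19 meaningful bytes per sample.
struct fifo_sample_sketch {
    int16_t accel[3];
    int16_t gyro[3];
    int16_t mag[3];
    uint8_t mag_status;
};

// Without packing, each array element is padded out to a multiple of
// alignof(int16_t) == 2, so consecutive elements start 20 bytes apart even
// though only 19 of those bytes carry data (on typical ABIs):
static_assert(sizeof(fifo_sample_sketch) == 20, "one padding byte per element");

// Reading one extra byte from the magnetometer would make the on-wire record
// 20 bytes too, matching the padded element size, so a burst read could land
// directly into an aligned array of samples with no repacking.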

@Eirenliel (Member):

Please rebase to main.

This includes moving some member variables that were only used within a single function into the function in which they were used.
@Eirenliel merged commit fd3e463 into main on Nov 22, 2022
@Eirenliel deleted the mpu9250_fifo branch on November 22, 2022 01:37
kitlith added a commit that referenced this pull request Jan 19, 2023
This restores the behaviour from before #192, which I previously missed.