
[QUESTION][IMX] Timer domain with Zephyr native drivers on i.MX93 #8447

Closed
LaurentiuM1234 opened this issue Nov 6, 2023 · 5 comments

@LaurentiuM1234
Contributor

LaurentiuM1234 commented Nov 6, 2023

While working on transitioning to Zephyr native drivers (and the host-zephyr/dai-zephyr combo)
on i.MX93 I've noticed a few interesting issues. I've been wondering how said issues are managed
on Intel's side of things:

  1. The DAI trigger commands seem to use the SOF encoding for the direction instead of the Zephyr one. What I mean by this is that the Zephyr driver is passed the values corresponding to SOF_IPC_STREAM_PLAYBACK (0) and SOF_IPC_STREAM_CAPTURE (1) instead of DAI_DIR_TX (1) and DAI_DIR_RX (2). Of course, if we try to validate the direction on the Zephyr side we're going to get an error for playback (0 is not a valid Zephyr direction), and capture (1) is going to be mistaken for playback, since 1 is DAI_DIR_TX (see the sketch below).
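To make the mismatch concrete, here is a minimal sketch of the kind of translation that would be needed before handing the direction to the native driver (the header path for the SOF constants and the helper name are illustrative assumptions, not how SOF actually resolves this):

```c
#include <zephyr/drivers/dai.h>   /* DAI_DIR_TX (1), DAI_DIR_RX (2) */
#include <ipc/stream.h>           /* SOF_IPC_STREAM_PLAYBACK (0), SOF_IPC_STREAM_CAPTURE (1); path assumed */

/* Sketch only: translate the SOF stream direction into the Zephyr DAI
 * direction encoding before calling into the native Zephyr DAI driver.
 */
static inline int sof_dir_to_zephyr_dai_dir(int sof_dir)
{
	return (sof_dir == SOF_IPC_STREAM_PLAYBACK) ? DAI_DIR_TX : DAI_DIR_RX;
}
```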

I've also been playing around with the timer domain with the native Zephyr drivers on i.MX93. Here's what I found (+some questions):

  1. The IPC4_MOD_ID() macro seems to be returning values != 0 even on IPC3, which leads to issues such as host_reset() being called twice (once after the trigger stop operation and once during the trigger reset operation). According to the comments I've found in the code this is supposed to be IPC4-only behavior, right? Also, is the encoding of the component IDs supposed to be different in the case of IPC4? They seem to be encoded in the first 16 bits, which also seems to be the case for IPC3 (since IPC4_MOD_ID() returns something != 0). EDIT: Upon doing a rebase, this issue seems to have been addressed via a44ddbe.

  2. On NXP's side, the timer domain seems very "delicate" with respect to the operations you're allowed to perform. For instance, we can't perform any kind of busy waiting as it may lead to a DAI underrun* (I've tried with 100, 50 and 10 microseconds and they're all troublesome). This busy waiting (although very bad) is required to make sure that our transmitter/receiver has been disabled after a trigger stop operation. I'm curious to know if Intel has faced this kind of issue before. My main concern is that we may face even more problems while employing more sophisticated topologies (note: I've been using the imx93 upstream topology). Can Intel provide some feedback regarding the timer domain? I think that would be very useful. Thanks!

  • context: we have a capture pipeline scheduled. After some time we schedule a playback pipeline, let it run for a bit and then we cancel the capture pipeline. The busy waiting is performed during the trigger stop operation for the capture pipeline. The above statement is not entirely accurate: the underrun only happens if the busy waiting is performed while the DAI is waiting for dma_reload() to be called via dai_common_copy() (since this call is what allows the DAI to trigger more transfers). We're doing this as a way to somewhat synchronize with the timer, as it guarantees that after copying 2 periods' worth of data the DAI will wait for the timer to kick in.
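For reference, the busy wait in question looks roughly like the following (the register accessor and bit name are placeholders for illustration, not the actual i.MX93 SAI definitions):

```c
#include <errno.h>
#include <zephyr/kernel.h>    /* k_busy_wait() */
#include <zephyr/sys/util.h>  /* BIT() */

#define XFER_ACTIVE_BIT BIT(0)

/* Hypothetical status read; the real code polls the transmitter/receiver
 * enable status after requesting a stop.
 */
extern uint32_t read_xfer_status(void);

static int wait_for_xfer_disable(uint32_t timeout_us)
{
	uint32_t waited = 0;

	while (read_xfer_status() & XFER_ACTIVE_BIT) {
		if (waited >= timeout_us)
			return -ETIMEDOUT;
		/* Even short waits here (10-100 us total) were enough to
		 * starve the LL scheduler and trigger the DAI underrun
		 * described above.
		 */
		k_busy_wait(10);
		waited += 10;
	}

	return 0;
}
```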

CC: @dbaluta @iuliana-prodan @lgirdwood @kv2019i

@lgirdwood
Member

@LaurentiuM1234 re 2) on the Intel side we use the pipeline state machine to sequence some of the events we need when triggering certain configurations. I.e. for delayed BCLK on I2S we have some early pipeline trigger ops that start the BCLK and return (this then allows other LL pipelines to run). Subsequent pipeline iterations will then call the regular pipeline trigger start (after the BCLK has been running for the desired time) and start the DMA. I think the same can be applied in reverse order to your trigger stop.
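A rough sketch of that idea applied in reverse to the trigger stop (all identifiers here are hypothetical, not SOF's actual pipeline API): request the hardware disable in one LL iteration and complete the stop on a later iteration once the hardware reports idle, so no busy wait is needed.

```c
/* Sketch: split the stop into a request phase and a completion phase that
 * run on successive LL timer ticks. All identifiers are hypothetical.
 */
enum stop_phase { STOP_IDLE, STOP_REQUESTED, STOP_DONE };

struct dai_stop_ctx {
	enum stop_phase phase;
};

/* Called from the trigger-stop path: kick off the disable and return
 * immediately instead of busy waiting for the status bit to clear.
 */
static void dai_stop_request(struct dai_stop_ctx *ctx)
{
	request_xfer_disable();          /* hypothetical HW write */
	ctx->phase = STOP_REQUESTED;
}

/* Called from the LL pipeline task on the following ticks until the stop
 * completes; it never blocks the LL scheduler.
 */
static void dai_stop_poll(struct dai_stop_ctx *ctx)
{
	if (ctx->phase != STOP_REQUESTED)
		return;

	if (!xfer_still_active()) {      /* hypothetical status read */
		finalize_stop();         /* e.g. stop DMA, reset pointers */
		ctx->phase = STOP_DONE;
	}
}
```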

I agree though, it's bad to busy wait for longer than 10-50 us since this will eat into the headroom left for other low-latency pipelines and cause glitches.

@LaurentiuM1234
Contributor Author

> @LaurentiuM1234 re 2) on the Intel side we use the pipeline state machine to sequence some of the events we need when triggering certain configurations. I.e. for delayed BCLK on I2S we have some early pipeline trigger ops that start the BCLK and return (this then allows other LL pipelines to run). Subsequent pipeline iterations will then call the regular pipeline trigger start (after the BCLK has been running for the desired time) and start the DMA. I think the same can be applied in reverse order to your trigger stop.
>
> I agree though, it's bad to busy wait for longer than 10-50 us since this will eat into the headroom left for other low-latency pipelines and cause glitches.

ACK. What about synchronization with the timer? Does Intel perform any kind of operation to make sure the DMA transfers are somewhat synchronized with the timer, or are you letting the DAI do its own thing w.r.t. the DMA transfers and counting on the fact that after each period a known amount of bytes will have been transferred?

@lgirdwood
Member

@LaurentiuM1234 all we do is

  1. make sure we have enough overhead and space in the DMA buffers.
  2. DMA runs in cyclic mode (circular buffer mode), i.e. we do not need to race to program the DMA IP on each timer tick; we program the DMA once during stream init() (see the sketch after this list).
  3. only update DMA IP R/W pointers during active stream playback and capture.
  4. I think most of our topologies use 3 periods for the DMA buffers (to satisfy points 1 & 2 above).
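To make points 2-4 concrete, here is roughly what a cyclic, three-period DMA setup looks like with Zephyr's generic DMA API (a sketch only: exact struct fields vary between Zephyr versions, the i.MX93 EDMA driver may need different settings, and the buffer/channel parameters are made up):

```c
#include <zephyr/device.h>
#include <zephyr/drivers/dma.h>

#define NUM_PERIODS  3
#define PERIOD_BYTES 384 /* e.g. 48 frames * 2 ch * 4 bytes, illustrative */

static uint8_t dma_buf[NUM_PERIODS * PERIOD_BYTES] __aligned(32);
static struct dma_block_config blocks[NUM_PERIODS];

/* Program the channel once at stream init: the blocks are chained into a
 * ring so the DMA keeps wrapping without per-tick reprogramming. During
 * active playback/capture only the read/write pointers are advanced
 * (e.g. through dma_reload() from the host/DAI copy path).
 */
static int setup_cyclic_tx(const struct device *dma_dev, uint32_t chan,
			   uint32_t fifo_addr)
{
	struct dma_config cfg = { 0 };
	int i, ret;

	for (i = 0; i < NUM_PERIODS; i++) {
		blocks[i].source_address = (uintptr_t)&dma_buf[i * PERIOD_BYTES];
		blocks[i].dest_address = fifo_addr;
		blocks[i].block_size = PERIOD_BYTES;
		/* link the last block back to the first to form the ring */
		blocks[i].next_block = &blocks[(i + 1) % NUM_PERIODS];
	}

	cfg.channel_direction = MEMORY_TO_PERIPHERAL;
	cfg.source_data_size = 4;
	cfg.dest_data_size = 4;
	cfg.block_count = NUM_PERIODS;
	cfg.head_block = &blocks[0];
	cfg.cyclic = 1; /* field present in recent Zephyr; older trees rely on
			 * the chained blocks / driver defaults instead
			 */

	ret = dma_config(dma_dev, chan, &cfg);
	return ret ? ret : dma_start(dma_dev, chan);
}
```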

@LaurentiuM1234
Contributor Author

@lgirdwood TYVM for your answers, they were extremely helpful! I was able to get the timer domain working on i.MX93 with the native Zephyr drivers. For now I've only managed to test the cases supported by the current upstream i.MX93 topology, so I'm not 100% sure it'll work for fancier topologies (e.g. mixer), but at least we've got a decent start.

@LaurentiuM1234
Contributor Author

Closing this as issue 2) has been answered and issue 1) should be discussed in #8520.
