Ensure only one PowerCommand and ActivePowerCommand is queued #1093

Open · wants to merge 1 commit into master
Conversation

schlimmchen
Contributor

Neither a PowerCommand nor an ActivePowerCommand shall be enqueued if another one (of the same type) is already pending.

Adding multiple PowerCommands or ActivePowerCommands is not inherently harmful, but consider the following situation:

  • An ActivePowerCommand is queued and the command state is set to CMD_PENDING.
  • The radio is busy with the queue; perhaps communication is unreliable and processing takes some time.
  • Another ActivePowerCommand is queued and the command state is set to CMD_PENDING again.
  • The radio finishes working on the first ActivePowerCommand. The "last command state" is changed to CMD_OK or CMD_NOK.
  • The state is now inconsistent: an ActivePowerCommand is still pending, but the "last command state" is no longer CMD_PENDING.

This check restricts the queue to one command of each type per inverter, which seems to me a reasonable limitation.

Another possible solution would be to check the radio queue for other commands of the same type and keep the CMD_PENDING state. This is computationally more expensive, but it would ensure that racing commands are all processed. The current implementation rejects a second command of the same type.
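For illustration, a minimal sketch of such a duplicate check; all names here are illustrative stand-ins, not the actual OpenDTU classes:

```cpp
#include <deque>
#include <memory>
#include <typeinfo>

// Illustrative stand-ins for the real command classes.
struct CommandAbstract { virtual ~CommandAbstract() = default; };
struct PowerCommand : CommandAbstract {};
struct ActivePowerCommand : CommandAbstract {};

using CommandQueue = std::deque<std::shared_ptr<CommandAbstract>>;

// Reject the new command if one of the same dynamic type is already queued.
bool enqueueIfAbsent(CommandQueue& queue, std::shared_ptr<CommandAbstract> cmd) {
    for (const auto& pending : queue) {
        if (typeid(*pending) == typeid(*cmd)) {
            return false; // same command type already pending, drop the new one
        }
    }
    queue.push_back(std::move(cmd));
    return true;
}
```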

neither a PowerCommand nor an ActivePowerCommand shall be enqueued if
another one is already pending.
@stefan123t

Reading up on this in relation to #1201.
Since this describes your reasoning on the core issue with queued commands quite well, I would explicitly link it to that PR once more.

While I understand your restriction of the queue to one command per type and inverter, I think there are two possible solutions:

a) FIFO: as you described, allow only one command per type and inverter and do not accept new commands until that one has executed.
This is easy to implement, but maybe not what is intended:
it might send an outdated limit and prevents sending a new one until the old one has been acknowledged by the inverter.

b) LIFO: allow only one command per type and inverter, but remove the old command from the queue and add the new one instead (see the sketch after the next paragraph). This may be more complex, as we would need to discard the inverter's response from the radio, eventually discarding received data and preventing the radio loop from requesting potentially missing frames / packets.
From an end-user perspective this may be better, as it would always use the latest limit in the command. On the other hand, it may try to send a limit and then retry from scratch with a new limit before the inverter has acknowledged the old one.

Also note that the radio may have to send multiple packets or frames and/or may receive multiple response packets / frames. For multi-frame responses, the radio may have to re-request the missing packets.
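To illustrate option b), a minimal sketch of replacing a pending command in place, reusing the illustrative types from the sketch above; the real implementation would additionally have to handle a command that is already in flight at the queue front:

```cpp
// Option b) sketch: instead of rejecting the new command, replace the
// pending one of the same type so the latest limit always wins.
void enqueueReplacing(CommandQueue& queue, std::shared_ptr<CommandAbstract> cmd) {
    for (auto& pending : queue) {
        if (typeid(*pending) == typeid(*cmd)) {
            pending = std::move(cmd); // swap in the newer command
            return;                   // keep the original queue position
        }
    }
    queue.push_back(std::move(cmd));
}
```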

@stefan123t

stefan123t commented Sep 30, 2024

@schlimmchen I had another thought when reviewing this with #2237 in mind.

We could add a target (and actual) value to each of our inverters.
When we read the actual value from the inverter, we could also compare it with the target and choose to send a new ActivePowerLimit (APL) in case the difference is greater than 30W / 2-5% of the temporary limit.
That way we only need to update the target value, and the rest will be done in due time by the normal processing.
I believe the Hoymiles DTU works along these lines as well.
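A minimal sketch of that reconciliation check, using the 30 W / 5 % thresholds from above; names and structure are purely illustrative, not existing OpenDTU code:

```cpp
#include <algorithm>
#include <cmath>

struct InverterLimits {
    float targetLimitW = 0.0f; // the limit the DTU wants applied
    float actualLimitW = 0.0f; // the limit the inverter last reported
};

// Called from the normal polling loop after the actual value was read;
// returns true if a new ActivePowerLimit command should be sent.
bool needsNewActivePowerLimit(const InverterLimits& inv) {
    const float absThresholdW = 30.0f;                    // 30 W floor
    const float relThresholdW = 0.05f * inv.targetLimitW; // 5 % of target
    const float diffW = std::fabs(inv.targetLimitW - inv.actualLimitW);
    return diffW > std::max(absThresholdW, relThresholdW);
}
```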

We may consider keeping the current blocking behaviour in case a permanent limit is set, as we want to make sure it is followed.

What will the inverter (e.g. an HM600) actually do when we first set a permanent limit of 400W, then a temporary limit of 300W, and finally a permanent limit of 600W?
Will it forget about the temporary limit, or do we have to reset the temporary limit to 600W as well in order to "unlimit" the inverter again?

@schlimmchen
Contributor Author

That way we only need to update the target value, and the rest will be done in due time by the normal processing.

You mean something like updateInverter() in OpenDTU-OnBattery?

What will the inverter (e.g. an HM600) actually do when we first set a permanent limit of 400W, then a temporary limit of 300W, and finally a permanent limit of 600W?

It will produce 400W once the 400W limit is received, then 300W when the 300W limit is received, then 600W when the 600W limit is received. Frankly, I don't understand the question 😉 Oh, you send a persistent limit in the last step, okay. Well, it's like I said: the limit sent is the limit applied. The only difference is whether the limit is persisted or not.

I am not sure what your proposal changes with regard to this PR. This change is implemented in OpenDTU-OnBattery -- but maybe it could be reverted to the upstream state? I am not sure... I did not sufficiently annotate the change, so I can't say why this was or maybe still is important for the DPL. I'll look into it again some time.

@stefan123t

@schlimmchen the reason I was asking about temporary vs. persistent limits is that with the above approach we just update a "soft" target limit in OpenDTU and do not need to guarantee that it is actually sent to the inverter in due time; e.g. in case a newer temporary limit is given, we then try to send the new limit instead.
In case of a persistent limit we have to guarantee that it is persisted, hence we have to wait for a response from the inverter confirming persistence, and only then can we accept a new "soft" limit in OpenDTU.
I hope that explains my train of thought.
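A minimal sketch of that policy, purely illustrative (none of these names exist in OpenDTU): temporary limits simply overwrite the soft target, while a persistent limit blocks further limits until the inverter acknowledges it:

```cpp
enum class LimitType { Temporary, Persistent };

struct LimitState {
    float pendingTargetW = 0.0f;       // latest "soft" target limit
    bool persistentAckPending = false; // a persistent limit awaits its ack
};

// Returns false while a persistent limit still awaits the inverter's ack.
bool acceptNewLimit(LimitState& st, LimitType type, float watts) {
    if (st.persistentAckPending) {
        return false; // must wait until persistence is confirmed
    }
    st.pendingTargetW = watts; // a newer temporary limit simply wins
    if (type == LimitType::Persistent) {
        st.persistentAckPending = true; // cleared on the inverter's ack
    }
    return true;
}
```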
