This repository has been archived by the owner on May 7, 2020. It is now read-only.

Requirements for audio capabilities #2333

Open

Marty56 opened this issue Oct 17, 2016 · 11 comments

Comments

@Marty56

Marty56 commented Oct 17, 2016

I have noted that in issue #2306 an enhancement of the audio capability in OH2 is actively being developed.
I am not sure whether the following requirements are covered.

  1. Enable asynchronous handling of the "say" instructions.
    The user should be able to issue many "say" instructions in rules without having to worry about the timing. OH2 should manage those "say" instructions and process the output sequentially, without losing or overriding any previous "say" commands.
    This should also apply if the output device is e.g. a SONOS speaker.
  2. Enable audio output on a "remote" WebUI.
    The WebUI is not necessarily running on the same machine as the OH2 server. It would be desirable to have an item that plays a sound locally on the device where the WebUI is displayed.
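The sequential behaviour requested in point 1 could be sketched roughly as follows. This is an illustrative sketch only, not the openHAB API: a single worker thread drains a FIFO queue of pending utterances, so later "say" calls are queued rather than interrupting earlier ones.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of a serialized "say" queue: utterances are queued and
// played one at a time by a single worker thread, so no call is lost or
// overridden. All names here are illustrative, not the actual openHAB API.
public class SayQueue {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final StringBuilder played = new StringBuilder(); // stands in for the audio sink

    public SayQueue() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String text = pending.take(); // blocks until work arrives
                    play(text);                   // one utterance at a time
                }
            } catch (InterruptedException ignored) {
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public void say(String text) {
        pending.add(text); // returns immediately; playback happens in order
    }

    private synchronized void play(String text) {
        played.append(text).append('\n'); // placeholder for real TTS playback
    }

    public synchronized String transcript() {
        return played.toString();
    }
}
```

With this shape, a rule can issue several say() calls back to back and they are played in submission order, without any Thread::sleep between them.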
@lolodomo
Contributor

I agree with your first point.
@kaikreuzer : can you comment please ?

@kaikreuzer
Contributor

  1. I don't agree with this. I think it is good to have rule actions behave synchronously, so that the caller knows when they are done. It should rather be easy within rules to start certain code asynchronously (in a separate thread), which is already possible through createTimer(now,....) - so the rule implementor has full flexibility, and this is not related only to "say" but to all actions that are available in rules.
  2. This is implemented with the initial contribution of a web audio sink (#2313). Paper UI and HABPanel both support it.
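In the rules DSL the opt-in asynchrony kaikreuzer mentions is done with createTimer(now, ....). The same pattern in plain Java, sketched here for illustration (the method names are hypothetical stand-ins, not openHAB code), is to keep the action itself blocking and hand it to another thread only when fire-and-forget behaviour is wanted:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncFromRule {
    // Blocking "say" stand-in: synchronous, only returns when playback is
    // done - the behaviour kaikreuzer argues rule actions should have.
    static String sayBlocking(String text) {
        try {
            TimeUnit.MILLISECONDS.sleep(50); // simulated playback time
        } catch (InterruptedException ignored) {
        }
        return "played: " + text;
    }

    // If a rule wants fire-and-forget behaviour anyway, it can opt in by
    // running the blocking action on another thread - the plain-Java
    // analogue of createTimer(now, [ | say("...") ]) in the rules DSL.
    static CompletableFuture<String> sayAsync(String text) {
        return CompletableFuture.supplyAsync(() -> sayBlocking(text));
    }
}
```

The point of the design is that synchrony is the default and asynchrony is an explicit choice made by the rule author, not by the action.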

@kaikreuzer kaikreuzer changed the title Requirements for audio capabilities of OH2 Requirements for audio capabilities Nov 3, 2016
@Marty56
Author

Marty56 commented Nov 13, 2016

@kaikreuzer
I do not understand your answer, but maybe you did not understand my request either.
Example:
say("first sentence")
Thread::sleep(some first sentence dependent delay)
say("second sentence")

If the user does not add the Thread::sleep(some first sentence dependent delay), the second "say" will overwrite the first, and the first "say" will be interrupted before it has played completely.
It would be great if this heuristic delay were not needed, because it also varies depending on what the machine is doing elsewhere.
So either the user uses a very long delay, which adds undesirable pauses, or the user experiences truncated output.

@kaikreuzer
Contributor

I agree, this sleep should not be required - and that's what I meant by saying they should be "synchronous", i.e. only returning once the say command is fully executed.
It might not always be easy to implement it that way, especially if streams are played on some client, like through the webaudio sink.

@lolodomo
Contributor

lolodomo commented Feb 6, 2017

It is something to be checked but my feeling when looking at the code is that it is correctly handled by the Sonos audio sink.

@Marty56
Author

Marty56 commented Feb 6, 2017

I can confirm that the above "synchronous" operation is not working.
I had to insert message specific Thread::sleeps again.

@ghost

ghost commented Jun 9, 2017

Hello @kaikreuzer

Not having seen this thread, I also opened a question on the forum: https://community.openhab.org/t/how-to-prevent-overlap-of-say-commands/29644/10

I think that is actually in line with what you say: the say command should be synchronous, not asynchronous (as it seems to be right now).

@lolodomo
Contributor

There are two different cases.
If you have a rule with several TTS calls in sequence, we should have this correctly handled. Considering the code of the Sonos audio sink, my feeling is that the process method called by "say" only returns when the TTS has finished.

Another case is when you have several rules triggering "say" commands that overlap. This case is clearly not handled by ESH.

@kaikreuzer : if the first case is not working, could it be because there is a kind of timeout in the rule engine when a function is called? Or could it be because all calls in a rule are made asynchronously?

@ghost

ghost commented Jun 11, 2017

@lolodomo, @kaikreuzer

Technically, as you can see in the code I posted in the above forum thread, I am using a String item that I monitor for change with a rule. If the string command changes, the rule is invoked and calls say with the content. So I would say that, from @lolodomo's last post, the first case applies here.

My hope was that the say command will stop execution of the rule until it is finished. To prevent speech overlap through multiple executions of the rule (i.e., when the String item changes while the rule is still producing speech), I introduced a boolean variable that was set true right before the say command and reset to false after the say command.

The rule itself starts with a loop checking whether the variable is true, and if so, it will Thread.sleep and then check again until the variable is false. This should IMHO be a rudimentary way to prevent overlaps. However, since the "say" command apparently does not stop the rule execution, the variable gets set back to false before say finishes, and so all say commands are executed in parallel instead of waiting for the variable.
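The flag workaround described above can be sketched roughly as follows (illustrative names, not the forum code or the openHAB API). It shows why the guard fails when say() behaves asynchronously: the flag is cleared as soon as the call returns, long before playback has finished, so a second rule execution sails straight past the busy-wait loop.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class FlagWorkaround {
    static final AtomicBoolean speaking = new AtomicBoolean(false);
    static final StringBuilder log = new StringBuilder();

    // Asynchronous stand-in for say(): returns immediately, while playback
    // finishes later on another thread - the behaviour reported above.
    static void sayAsync(String text, CountDownLatch done) {
        new Thread(() -> {
            try {
                Thread.sleep(100); // simulated playback time
            } catch (InterruptedException ignored) {
            }
            synchronized (log) {
                log.append(text).append('\n');
            }
            done.countDown();
        }).start();
    }

    // The rule body from the forum post, roughly: busy-wait on the flag,
    // set it, call say, clear it. Because sayAsync() returns at once, the
    // flag is false again while playback is still running.
    static void rule(String text, CountDownLatch done) throws InterruptedException {
        while (speaking.get()) {
            Thread.sleep(10);
        }
        speaking.set(true);
        sayAsync(text, done);
        speaking.set(false); // cleared too early: playback has not finished
    }
}
```

A truly synchronous say would make the flag unnecessary, which is the point of this issue.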

@lolodomo
Contributor

My feeling is that the problem is inside the rule engine, not the audio sink.
By the way, we could probably add a "synchronized" on the process method of the audio sink to be sure that it is not executed a second time while the first execution is not yet finished. But I am not at all sure that the problem is in the audio sink itself.

@lolodomo
Contributor

lolodomo commented Jul 29, 2017

Considering the code and my tests, for Sonos I see no problem when several calls to say are made in sequence in the same rule, because the Sonos binding implements a mechanism that only returns when the notification is finished and the previous context is restored. I am not sure that all audio sinks implement such a mechanism, so it is possible that it is not working for bindings other than Sonos.
What is clearly not working, even with Sonos, is when you have several rules triggered, each one calling say or playSound. In this case, the notifications are not played in sequence. A lock mechanism is necessary. I will fix this case.
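A lock of the kind described here could look roughly like this. It is a sketch under assumed names (not the actual ESH fix or AudioSink interface): the sink takes a shared, fair lock around playback, so say calls arriving from concurrently triggered rules are forced into arrival order instead of talking over each other.

```java
import java.util.concurrent.locks.ReentrantLock;

public class SerializedSink {
    // One shared, fair lock per sink: concurrent rules queue up in
    // arrival order instead of overlapping. Names are illustrative.
    private final ReentrantLock playbackLock = new ReentrantLock(true);
    private final StringBuilder played = new StringBuilder();

    // Stand-in for the sink's process() method called by "say": it blocks
    // until the notification has fully played and never runs concurrently
    // with itself, even when called from several rule threads.
    public void process(String text) {
        playbackLock.lock();
        try {
            played.append(text).append('\n'); // placeholder for real playback
        } finally {
            playbackLock.unlock();
        }
    }

    public String transcript() {
        playbackLock.lock();
        try {
            return played.toString();
        } finally {
            playbackLock.unlock();
        }
    }
}
```

The try/finally around unlock() matters: if playback throws, the lock is still released and the next notification is not blocked forever.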

3 participants