Adapt Sonos binding to serve as an AudioSink #1200
@kdavis-mozilla As you wanted to use Sonos for first prototypes, is this an issue that your team will be working on?
I know the SqueezeBox binding isn't part of ESH, but you also have to serve the audio through an HTTP URL there, so maybe a system-wide feature would be useful for that.
I have written a Chromecast binding. The Chromecast device plays media from an HTTP URL, similar to Sonos. I would like to use that for my Sonos and Chromecast devices.
@kaikreuzer We have no immediate plans to make a "SonosAudioSink", but I think it's something we eventually want to do.
@kaikreuzer @kdavis-mozilla @maggu2810 @tavalin In combination with https://github.com/eclipse/smarthome/pull/1196/files#diff-90f3581e099705007b249d04224ad701R1798 this will be rather trivial for the Sonos binding. In fact, one can already find a base (Sonos-centric) implementation of the servlet at kgoderis/openhab-addons@69ad519. Who takes the lead?
I'm interested in such a feature and would like to help. However, (so far) my knowledge of Java is very limited and I've never worked in such an open-source environment.
@JueBag Thanks for your offer, but I have the feeling that this issue is probably a rather deep dive into different parts of the architecture, so probably not a good candidate to start with contributions... @kgoderis #1196 is now merged, so the Sonos code is now again up-to-date.
@maggu2810 Any plans to contribute this? I'm personally very interested as well, and it will of course also be a perfect candidate for an audio sink implementation.
@kaikreuzer I used a third-party library and made some PRs there. But at some point I needed some further changes. I forked the lib and then got no answer anymore from the author (see vitalidze/chromecast-java-api-v2#27 (comment)). It uses the Apache License, but it needs to be checked whether we can use a fork and add further modifications for ESH. Perhaps we / someone should write a Chromecast API library ourselves, but this would need some time...
@kaikreuzer How should we proceed with the pull requests? We have a lot of them. The earliest commit we have is here[1], starting around the middle of the page, and there are many more in the subsequent "newer" pages. How would you like them to be divided up and submitted?
@kdavis-mozilla This is pretty much up to what you think is best. If you can break it down into independent parts that make sense on their own, that would be nice. If the changes depend heavily on one another, rather do fewer PRs, and maybe split them into stuff where there is likely no discussion and stuff where you are not so sure whether it fits and should be contributed as-is.
@kdavis-mozilla When can I expect them to hit me?
@kaikreuzer Unfortunately this week we have a "work" week where we're spending the entire week not coding. So this week is basically out for any real work. I might be able to squeeze some actual work in over the coming weekend and make a pull request. But, realistically, early next week.
@kdavis-mozilla Any progress made on this?
See #584 (comment).
Ok - I have this on my todo list.
@kaikreuzer I just studied org.eclipse.smarthome.core.audio, and before I embark on this task, just a question: are mp3 files (or the like) served by a web server to be considered AudioSources or not? You know this is the workaround for the Sonos players, but it might be generalised. On a separate note, is cascading of AudioSources and AudioSinks something that should be avoided? E.g. think along the lines of: TTS -> mp3 via http -> Sonos; with the middle piece of code implementing both the AudioSink and AudioSource interfaces.
No. AudioSource is for audio sources "which can provide a continuous live stream of audio. Its main use is for microphones and other "line-in" sources".
Right. So the Sonos AudioSink should actually take the AudioStream, create a temporary url where it is served once and then destroyed. The servlet that provides this functionality could indeed be built into the framework, so that different bindings could make use of it without having to reinvent it.
This is exactly the use case that was the blueprint for the whole APIs: The TTS service produces an AudioStream, which is passed to the AudioSink, which provides it on HTTP to the Sonos speaker.
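The "serve the AudioStream once at a temporary URL" idea can be sketched with nothing but the JDK's built-in `com.sun.net.httpserver` server. This is a hypothetical illustration, not the ESH servlet discussed here: the class name, the `/audio/` path scheme, and the one-time-URL policy are all assumptions.

```java
import com.sun.net.httpserver.HttpServer;

import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.UUID;

// Hypothetical sketch: serve an audio stream once at a random one-time URL,
// the way a Sonos AudioSink could hand a TTS result to the speaker.
public class OneTimeAudioServlet {
    private final HttpServer server;

    // Pass port 0 to let the OS pick a free port.
    public OneTimeAudioServlet(int port) throws Exception {
        server = HttpServer.create(new InetSocketAddress(port), 0);
        server.start();
    }

    /** Registers the stream under a random path and returns its URL. */
    public String serveOnce(InputStream audio, String mimeType) {
        String path = "/audio/" + UUID.randomUUID();
        server.createContext(path, exchange -> {
            byte[] data = audio.readAllBytes();
            exchange.getResponseHeaders().add("Content-Type", mimeType);
            exchange.sendResponseHeaders(200, data.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(data);
            }
            server.removeContext(path);  // one-shot: the URL cannot be replayed
        });
        return "http://localhost:" + server.getAddress().getPort() + path;
    }

    public void stop() {
        server.stop(0);
    }
}
```

A speaker would then simply be told to play the returned URL, and the context is destroyed after the first request, which matches the "served once and then destroyed" behaviour described above.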
Let us refactor the Sonos part after I get it working?
This is not what I meant, in fact. I was just wondering if one can daisy-chain various Sinks and Sources, i.e. have audio processing in there, mixing of AudioStreams, and so forth, to get to functionalities like a Public Address system or the like. In the example above, the "mp3 via http" step would be a Sink and a Source, connected to the TTS Source on one side and the Sonos Sink on the other end. Just thinking out loud.
You mean you first implement this servlet in the sonos binding and then we refactor it out? Ok, but I would do it before it is merged then.
You cannot daisy chain them, because a sink does not have any output that could be fed into a source. What you can do is to chain processing of AudioStreams and at the end you feed the result to a sink.
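The "chain processing of AudioStreams, then feed the result to a sink" point can be illustrated with plain `InputStream` wrappers: each processing step wraps the previous stream, and only the outermost stream reaches the sink. A minimal sketch, where the `HalvingAudioStream` name and the naive 8-bit "gain" operation are made up purely for illustration:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of chained AudioStream processing: a decorator that
// halves each unsigned 8-bit sample before it reaches the sink.
public class HalvingAudioStream extends FilterInputStream {

    public HalvingAudioStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        return b < 0 ? b : b / 2;  // crude "gain" reduction, illustration only
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        for (int i = off; i < off + n; i++) {
            buf[i] = (byte) ((buf[i] & 0xFF) / 2);
        }
        return n;
    }
}
```

A sink would then receive e.g. `new HalvingAudioStream(ttsStream)` and never know any processing happened, which is exactly why the chain needs no "source" at its end.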
No, as explained above, this would only be a sink, but not a source. You could only define a service that takes the HTTP URL and creates an AudioStream from it, but then the whole step of putting it up on HTTP sounds a bit obsolete...
mmh... and what if you implement both the AudioSink and AudioSource interfaces on one class... Anyway, it was above all a theoretical question.
What is the way to trigger this (TTS, selection of TTS engine, playback of output by Sonos, ...)?
Quote: [Will the Sonos resume playback after playing a TTS message?]
@lolodomo You would use the say() command in rules, but as discussed with Kai elsewhere, I will also provide an OH2 implementation of the playSound() action so that you can play mp3 files in rules. So, in essence, the only thing that will change from a Sonos point of view is that you will be able to play an mp3 on any Player without having to go through the line-in of one of the Players. You could still group Players before or after the action.
@kgoderis I would actually expect this action directly in ESH, not in OH2. As it will be decoupled from the jl library, this should be fine.
ESH is fine indeed...
@kaikreuzer I started putting the code together for this feature, and I just noticed that there are now 2 bundles that implement the "say" AbstractConsoleCommandExtension, e.g. o.e.s.io.voice.internal.extensions and o.e.s.core.voice.internal. I presume that io.voice is deprecated?
@kaikreuzer mm... can you elaborate on that? I have been studying the old and new code, also javax.sound, and the only way to support mp3 is through an external lib like jl. For playSound() there are two approaches: 1. be similar to the VoiceManager class, e.g. pick up an mp3 file (through jl), put it in an AudioStream, and then pick/choose the AudioSink to play it on. K
This does not exist anymore (only on your local filesystem...?) - so yes, you are right: This one is deprecated!
With "decoupled" I mean that playSound can live within ESH: it creates an AudioStream from a file (and also sets the appropriate AudioFormat, be it WAV or mp3 or whatever). In OH2, I started implementing an AudioSink similar to the io.javasound one, but which comes with jl and thus has mp3 support. In the OH2 distro, I would then ONLY include the OH2 io.jlsound AudioSink, but not the ESH javasound AudioSink.
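That split of responsibilities (read the file and tag it with a format in ESH, leave all decoding to the sink) can be sketched with plain JDK I/O. This is a hypothetical helper: the class name and the string format labels are assumptions for illustration, not the ESH `AudioFormat` API.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of a "playSound" helper: open a sound file as a plain
// byte stream and derive a simple container label from the file extension,
// without doing any decoding on the framework side.
public class SoundFiles {

    /** Derives a coarse format label from the file extension. */
    public static String formatOf(Path file) {
        String name = file.getFileName().toString().toLowerCase();
        if (name.endsWith(".mp3")) return "MP3";
        if (name.endsWith(".wav")) return "WAV";
        return "UNKNOWN";
    }

    /** Opens the file as raw bytes; decoding is left entirely to the sink. */
    public static InputStream open(Path file) throws IOException {
        return Files.newInputStream(file);
    }
}
```

Under this design, only a sink that bundles an mp3 decoder (like the jl-based one mentioned above) needs to understand the "MP3" label; the framework just pipes bytes.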
Does it have to be through jl? I would have hoped that the file content can be used as a byte stream without requiring a library?
What I had in mind is the reading part, i.e. rather a "playSoundFromFile": it reads a file (in my tests I have put the files in {openhab.home}/conf/sounds, as opposed to {openhab.home}/sounds in OH1) and passes it to the default AudioSink.
Ok - that's the route I have taken. I think we have done a bit of double work here; I just happened to read this post after a night of coding away. I have a jl-based sink, but had to use some PipedInputStream/PipedOutputStream to get things done.
From what I understood afterwards, I assume that it is up to the Sink to deal with the format-specific stuff; everything before that is just byte piping.
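The piped-stream plumbing mentioned above can be sketched as follows. `AudioPipe` is a hypothetical name; in a real jl-based sink, the consumer of the `PipedInputStream` would be the mp3 decoder rather than a test reader.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Hypothetical sketch of the PipedInputStream/PipedOutputStream plumbing a
// jl-based sink could use: a producer thread writes the incoming audio bytes
// into one end of the pipe while the decoder reads from the other end.
public class AudioPipe {

    /** Starts a producer thread feeding the bytes into a pipe and returns the read end. */
    public static PipedInputStream pump(byte[] audio) throws IOException {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);
        Thread producer = new Thread(() -> {
            try {
                out.write(audio);
                out.close();  // signals end-of-stream to the reader
            } catch (IOException ignored) {
                // reader side was closed first; nothing to do in a sketch
            }
        });
        producer.start();
        return in;
    }
}
```

The separate producer thread matters: `PipedOutputStream.write` blocks once the pipe's buffer fills, so writing and decoding on the same thread would deadlock for anything longer than a short clip.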
Done - I have this covered
Probably not, because I actually didn't get the jl-sound bundle to work yet. If you did, I am very much looking forward to your PR :-) |
In order to play audio sources (e.g. mp3 sounds or TTS output) on a Sonos speaker, the binding should implement an AudioSink. A few questions have to be answered for this: