Implementation of the panning #665
I think there is no optimal curve for this "pan law". Some DAWs just provide a parameter to let the user choose: https://www.soundonsound.com/sound-advice/q-what-pan-law-setting-should-use
FWIW, I've got no particular point of view on this.
Yes, I like the idea and could collaborate on it. From what I've read the optimal parameter depends on the room in which you are listening; that's why some manufacturers use -4.5 dB compensation at the center and others -3 dB. @theGreatWhiteShark Are you working on it?
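For illustration, here is a minimal C++ sketch (not Hydrogen's actual code) of a pan law where the center attenuation is exposed as such a user-selectable parameter:

```cpp
#include <cmath>

// Minimal sketch (not Hydrogen code) of a pan law with a selectable
// attenuation at the center position, e.g. -3.0, -4.5 or -6.0 dB.
// `pan` runs from -1.0 (hard left) to +1.0 (hard right).
struct PanGains { float left; float right; };

PanGains panLaw( float pan, float centerAttenuationDb = -3.0f )
{
	// Pick the exponent p so that cos(pi/4)^p equals the requested center
	// gain: p = 1 gives the classic -3 dB constant-power law, p ~ 1.5
	// gives -4.5 dB, p ~ 2 gives -6 dB. The extremes stay at unity gain.
	const float p     = -centerAttenuationDb / ( 20.0f * std::log10( std::sqrt( 2.0f ) ) );
	const float kPi   = 3.14159265f;
	const float theta = ( pan + 1.0f ) * kPi / 4.0f;   // 0 .. pi/2
	return { std::pow( std::cos( theta ), p ),
	         std::pow( std::sin( theta ), p ) };
}
```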
Although I like the idea of giving the user control over internal algorithms, adding a parameter for the pan law is probably a little bit over the top. For proper audio production people will most likely use the JACK per-track outputs and dedicated software tools for mixing, mastering, and ambisonics incorporating the characteristics of the room. (At least I think so, #1027.) But nevertheless some attenuation would seem reasonable. @oddtime you can of course implement it yourself if you want.
I am doing personal studies on the matter and I really would like to do some experiments! Good point, #1027. What do you think if we deprecate pan_R and pan_L in …
Especially if we want to use the JACK per-track outputs, one only …
The panning law could be a static member function of maybe the … I haven't looked into the implementation of the panning yet and do not really understand right now why there need to be both pan properties in the …
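Sketching what such a static member might look like (the class name `Sampler` below is only a placeholder for whichever class ends up hosting it, and the concrete law is interchangeable):

```cpp
#include <cmath>

// Rough sketch of the "static member function" idea; the class name is a
// placeholder, the constant-power law is just one possible choice.
class Sampler
{
public:
	// Maps a single pan value in [-1, 1] to the two per-channel gains.
	static void panToGains( float pan, float& panL, float& panR )
	{
		const float kPi   = 3.14159265f;
		const float theta = ( pan + 1.0f ) * kPi / 4.0f;
		panL = std::cos( theta );
		panR = std::sin( theta );
	}
};
```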
That will require some extended rewriting in the code base, plus it will affect all drumkit files. As this will only affect working, internal code with no perceived changes for the users, I'm not quite sure whether rewriting the pan implementation is required here.
It's because you can use note pan like an automation for the notes and an instrument pan to determine the "stereo extension" of the instrument, but I think that the pan automation should bypass the instrument pan.
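A tiny sketch of that "bypass" behavior, with purely illustrative type and field names: a note that carries its own pan overrides the instrument pan instead of being combined with it.

```cpp
// Illustrative only: type and field names are hypothetical.
struct Note       { bool hasOwnPan; float pan; };
struct Instrument { float pan; };

// If the note carries pan automation, it bypasses the instrument pan
// entirely; otherwise the instrument's "stereo extension" pan is used.
float resolvePan( const Note& note, const Instrument& instrument )
{
	return note.hasOwnPan ? note.pan : instrument.pan;
}
```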
Yes, I see the problem. Clearly from pan_R and pan_L you can easily get the single scalar parameter …
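For example, assuming pan_L and pan_R each lie in [0, 0.5] as described in this issue, one possible (illustrative) mapping to a single scalar in [-1, 1] would be:

```cpp
// Center (0.5, 0.5) -> 0, hard left (0.5, 0.0) -> -1, hard right (0.0, 0.5) -> +1.
// Assumes pan_L and pan_R each lie in [0, 0.5]; illustrative only.
float toSingleScalarPan( float panL, float panR )
{
	return 2.0f * ( panR - panL );
}
```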
Yeah. I sometimes wonder why Hydrogen offers such a general behavior. After all, I would consider a drumkit something static and e.g. the snare not moving positions between individual beats. Anyway. I think it's fine the way it's implemented right now and I added some lines to the doc.
Sure. But deprecating existing parameters stored in the drumkit file is a little more invasive than adding new ones. Instead of using a default parameter when one is not present - as in your pitch offset PR - one needs to regard the drumkits as legacy ones, load them with a different function, and ensure the old pan information is mapped to the new one. The chances that something breaks in there without anyone noticing are not negligible. But in general you are of course right: it adds unnecessary complexity to have a redundant parameter.
It probably depends on whether you use Hydrogen only as a drum machine or as a general sequencer.
Maybe the new …
Another point that should be considered is that the current implementation of panning acts like a "balance" if the sample is dual-channel, i.e. it just sets the gains of the L/R channels (right?). I mean: dual-channel tracks in DAWs generally have the option to adjust the pan either with a single pan parameter (which I think is what Hydrogen does, a "balance") or with a pan for each separate channel. In the second way you can really "move" the sample in the stereo space (it may cause a loss of volume in a channel if the L/R signals are out of phase). In a dual-channel track, sometimes most of the dry sound is in one channel, and changing the balance to the opposite side causes a different effect than actual panning. Instead, if the sample is mono, the current implementation of pan "moves" the signal in the virtual stereo space without side effects.
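To make the distinction concrete, here is an illustrative sketch (not Hydrogen code): a "balance" only attenuates one of the two existing channels, while one simple form of "true" panning first collapses the channels and then places the result in the stereo field.

```cpp
#include <cmath>

// "Balance": only attenuates the channel opposite to the pan direction,
// so a signal living mostly in one channel can almost disappear.
void applyBalance( float& left, float& right, float pan )  // pan in [-1, 1]
{
	if ( pan > 0.0f ) left  *= 1.0f - pan;   // panning right attenuates the left channel
	if ( pan < 0.0f ) right *= 1.0f + pan;   // panning left attenuates the right channel
}

// One simple "true pan": collapse to mono first, then place the result in
// the stereo field with a constant-power law. This discards the original
// stereo image, so it is only a sketch of the idea, not a recommendation.
void applyTruePan( float& left, float& right, float pan )
{
	const float kPi   = 3.14159265f;
	const float mono  = 0.5f * ( left + right );
	const float theta = ( pan + 1.0f ) * kPi / 4.0f;
	left  = mono * std::cos( theta );
	right = mono * std::sin( theta );
}
```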
About the multi-parameter pan:
Actually I don't know if this is a good way to deal with "multi-parameter" panning, because it causes very low (if not entirely killed) volumes when the two parameters are set to opposite sides. Furthermore, to bypass the …
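As a worked example of that volume loss, assuming (purely hypothetically) that the two pans are combined by multiplying their per-channel gains, each stored as (pan_L, pan_R) in [0, 0.5]:

```cpp
#include <cassert>

int main()
{
	const float instrL = 0.0f, instrR = 0.5f;   // instrument panned hard right
	const float noteL  = 0.5f, noteR  = 0.0f;   // note panned hard left

	// Multiplying the per-channel gains silences both channels.
	const float combinedL = instrL * noteL;     // 0.0
	const float combinedR = instrR * noteR;     // 0.0
	assert( combinedL == 0.0f && combinedR == 0.0f );
	return 0;
}
```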
It's still there. You can use this button at the top of the pattern editor to switch to the piano roll. In addition, via the menu you can switch the input mode between drumkit and instrument, allowing you to either play the different components of your drumkit using the keyboard or pitch-shifted versions of the currently selected one.
We don't have to worry here. Since there is stuff like pitch-shifting an audio sample going on, applying an analytic function to two numbers is negligible.
That's an excellent point! I hadn't thought about it. Viewed from this angle we have to keep the current panning law at least as the default. But then again, this more or less defeats the purpose of updating the panning implementation. Hydrogen's mixer is not comparable to the ones provided by DAWs, since essential things like a position-dependent effect/plugin chain, CV support, aux channels etc. are missing. Having an advanced option that makes all drumkits sound worse - under the assumption they are perfectly mixed - does not sound like an improvement at all.
That's also a good point. It's a potential bug. We should check …
Thanks for the suggestion, I had missed the input mode via the menu.
For this I have a few ideas: …
I would vote for …
since this way it behaves like all the other controls found in the …
@theGreatWhiteShark could you have a look at pull request #1061? Especially for the …
Closed with #1061
I have a quite general question about the implementation of the panning.
In Hydrogen the panning is realized by introducing one volume for the right channel pan_R and one for the left pan_L, with both of them ranging from 0 to 0.5. If the sound is positioned in the center, both of them are set to 0.5 and if the source moves in either direction, the volume of the opposite channel is reduced.
But in this way the loudness of a sample depends on its location. It is the loudest in the middle and the quietest at either far right or far left. Wouldn't it make more sense to have it at the same level all the time? After all, we are adjusting the panning and not the distance. This could be done by keeping the total level of a sample constant for all pannings and expressing it as a combination of the volumes of the two channels. Depending on how they are added, like 1 = pan_R + pan_L or 1 = sqrt(pan_R^2 + pan_L^2), the scaling between perceived location and panning value can be altered. I'm not a hundred percent sure about which one to use, but from intuition I would guess the second one will lead to a linear scaling of the perceived location since ears and loudspeakers do form a plane. But textbooks will tell us.
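To make the comparison concrete, here is an illustrative side-by-side sketch of the current law as described above and the two proposed alternatives, with a single pan value in [-1, 1] and 0 = center (not actual Hydrogen code):

```cpp
#include <cmath>

struct Gains { float left; float right; };

// Current behavior as described above: both channels sit at 0.5 in the
// center and only the opposite channel is reduced off-center, so the
// overall level drops towards the extremes.
Gains currentLaw( float pan )
{
	return { pan > 0.0f ? 0.5f * ( 1.0f - pan ) : 0.5f,
	         pan < 0.0f ? 0.5f * ( 1.0f + pan ) : 0.5f };
}

// Constant total amplitude: pan_L + pan_R = 1.
Gains constantSumLaw( float pan )
{
	return { 0.5f * ( 1.0f - pan ), 0.5f * ( 1.0f + pan ) };
}

// Constant power: pan_L^2 + pan_R^2 = 1, i.e. -3 dB at the center.
Gains constantPowerLaw( float pan )
{
	const float kPi   = 3.14159265f;
	const float theta = ( pan + 1.0f ) * kPi / 4.0f;
	return { std::cos( theta ), std::sin( theta ) };
}
```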