Hi. The logic internally works like this: on each platform there is a raw PCM audio output (Web Audio Context, NAudio WaveProvider32, etc.). When this audio output requests samples to be played, alphaTab grabs them from an internal audio buffer. If this buffer is filled to less than 50%, another 50% worth of audio samples is requested; internally a loop starts which sends the MIDI events to TinySoundFont and generates the audio samples. To connect an appropriate MIDI output, however, you would need a real-time sequencer which plays the MIDI events at the exact speed. Today you only receive, at regular intervals based on the internal audio buffers, the list of buffers which have been synthesized.
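To make the refill logic above concrete, here is a minimal TypeScript sketch. This is not alphaTab's actual code: `synthesizeSamples`, the buffer layout, and the 50% threshold handling are simplified assumptions standing in for the internal TinySoundFont synthesis step.

```typescript
// Hypothetical illustration of the buffer-refill loop described above.
const BUFFER_CAPACITY = 8192; // total samples the internal buffer can hold

let buffer: Float32Array[] = []; // queue of synthesized sample chunks
let buffered = 0;                // number of samples currently queued

// Placeholder: in alphaTab this is where MIDI events are sent to TinySoundFont.
declare function synthesizeSamples(count: number): Float32Array;

// Called by the platform audio output (Web Audio, NAudio, ...) when it needs samples.
function onAudioOutputRequest(requested: number): Float32Array {
    // Refill whenever the buffer drops below 50% of its capacity.
    if (buffered < BUFFER_CAPACITY / 2) {
        const chunk = synthesizeSamples(BUFFER_CAPACITY / 2); // MIDI events -> PCM
        buffer.push(chunk);
        buffered += chunk.length;
    }
    return dequeue(requested);
}

function dequeue(count: number): Float32Array {
    const out = new Float32Array(count);
    let written = 0;
    while (written < count && buffer.length > 0) {
        const head = buffer[0];
        const n = Math.min(count - written, head.length);
        out.set(head.subarray(0, n), written);
        written += n;
        buffered -= n;
        if (n === head.length) buffer.shift();
        else buffer[0] = head.subarray(n);
    }
    return out;
}
```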
I don't know your exact use case yet, but depending on what you want to achieve, you might already have the features you need. In general there are two options related to MIDI which might interest you:

- You can use any component that understands MIDI and forward the generated MIDI events to it. Some conversion between the event classes is obviously needed, or you go via binary writing/reading. This can be any synth or MIDI library, depending on your preference.
- Or you use the original synth of alphaTab but don't load any SoundFont, and just use the internals to get a real-time playback of "silence". Based on your needs you can forward the MIDI events or react to them manually to do something.

What I do not plan to support is a separate real-time sequencer which plays back the events in real time. There are enough libs for that.
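For the first option, a rough sketch of forwarding generated MIDI events to a real device via the Web MIDI API could look like the following. The `GeneratedMidiEvent` shape and the `forwardMidiEvents` hook are assumptions, not an alphaTab API; you would wire them to wherever you receive the generated events, and on .NET the same idea applies with a MIDI library of your choice instead of the Web MIDI API.

```typescript
// Hypothetical forwarding of generated MIDI events to a MIDI output via the Web MIDI API.
// Convert from whatever event classes your MIDI source provides into this simple shape.
interface GeneratedMidiEvent {
    status: number; // e.g. 0x90 = note on, channel 0
    data1: number;  // e.g. note number
    data2: number;  // e.g. velocity
}

let midiOut: MIDIOutput | undefined;

async function initMidiOutput(): Promise<void> {
    const access = await navigator.requestMIDIAccess();
    // Pick the first available output; a real app would let the user choose.
    midiOut = access.outputs.values().next().value;
}

// Call this whenever your player reports that MIDI events were generated/played.
function forwardMidiEvents(events: GeneratedMidiEvent[]): void {
    if (!midiOut) return;
    for (const e of events) {
        midiOut.send([e.status, e.data1, e.data2]);
    }
}
```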
Hello,
I would like to know if you plan to add the possibility to send MIDI information to a MIDI output, or if you want to keep alphaTab locked to using only TinySoundFont?
I made a new class in AlphaTab.Windows and replaced _synthesizer in alphaTab with this new class, and MIDI works.