Move Audio Mixing to appendix
nigelmegitt committed Mar 28, 2023
1 parent 88a376b commit f2d0194
Showing 1 changed file with 42 additions and 41 deletions.
83 changes: 42 additions & 41 deletions index.html
@@ -1227,49 +1227,10 @@ <h3>Mixing Instruction</h3>
</ul>
<p>The TTML representation of animated <a>Mixing Instructions</a> is
illustrated by <a href="#example-4"></a>.</p>
<p>See also <a href="#audio-mixing"></a>.</p>
</section>
</section> <!-- #data-model -->

<section id="audio-mixing" class="informative">
<h2>Audio Mixing</h2>
<p class="ednote">Keep here or move to appendix?</p>
<p>The <a>Mixing Instructions</a> can be applied using [[webaudio]].
<a href="#audio-mixing-figure"></a> shows the flow of programme audio
when audio-generating elements are active:
the pan and gain (if set) on the <a>Script Event</a> are applied first,
and the output is passed to the <a>Text</a>,
which mixes in the audio from any active <a>Audio Recording</a>,
itself subject to its own <a>Mixing Instructions</a>;
the <a>Text</a>'s <a>Mixing Instructions</a> are then applied to the result
before the output is mixed onto the master bus.
</p>

<figure id="audio-mixing-figure">
<div data-include="figures/audio-mixing.svg"></div>
<figcaption>Example simple audio routing between objects</figcaption>
</figure>

<p>This example is shown as [[webaudio]] nodes in <a href="#webaudio-nodes-figure"></a>.</p>

<figure id="webaudio-nodes-figure">
<div data-include="figures/webaudio-nodes.svg"></div>
<figcaption>Web audio nodes representing the audio processing needed.</figcaption>
</figure>

<p>The above examples are simplified in at least two ways:</p>
<ul>
<li>if a <a>Text</a> contains
<code>&lt;span&gt;</code> elements that themselves have <a>Mixing Instructions</a>
applied, then additional nodes would be needed;</li>
<li>the application of <em>animated</em> <a>Mixing Instructions</a> is not
shown explicitly. [[webaudio]] supports the timed variation of
input parameters to its nodes: it is possible to translate the
TTML <code>&lt;animate&gt;</code> semantics directly into
[[webaudio]] API calls to achieve the equivalent effect.</li>
</ul>

</section> <!-- #audio-mixing -->

<section id="profile-constraints">
<h2>Constraints</h2>
<section id="Document Encoding">
@@ -1691,7 +1652,47 @@ <h3>Conformance of DAPT Processors</h3>
<!-- All the terms will magically appear here -->
</section>

<section id="profiles-section" class="appendix">
<section id="audio-mixing" class="informative appendix">
<h2>Audio Mixing</h2>

<p>The <a>Mixing Instructions</a> can be applied using [[webaudio]].
<a href="#audio-mixing-figure"></a> shows the flow of programme audio
when audio-generating elements are active:
the pan and gain (if set) on the <a>Script Event</a> are applied first,
and the output is passed to the <a>Text</a>,
which mixes in the audio from any active <a>Audio Recording</a>,
itself subject to its own <a>Mixing Instructions</a>;
the <a>Text</a>'s <a>Mixing Instructions</a> are then applied to the result
before the output is mixed onto the master bus.
</p>
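<p>The chain can also be sketched numerically, independently of any API.
The following is a hypothetical model, not part of this specification:
all function names are illustrative, and for simplicity each mono source
is panned once on entry (using the equal-power law that [[webaudio]]'s
<code>StereoPannerNode</code> applies to mono input), with later stages
applying gain only.</p>

```javascript
// Illustrative numeric model of the mixing chain (a sketch, not the
// Web Audio API; all names here are hypothetical).

// Equal-power pan of a mono sample into a stereo [left, right] pair;
// pan is in [-1, 1], where -1 is hard left and +1 is hard right.
function panMono(sample, pan) {
  const x = ((pan + 1) / 2) * (Math.PI / 2);
  return [sample * Math.cos(x), sample * Math.sin(x)];
}

// Apply a linear gain to a stereo pair.
function applyGain([l, r], gain) {
  return [l * gain, r * gain];
}

// Sum two stereo pairs, as mixing two buses does.
function mixBuses(a, b) {
  return [a[0] + b[0], a[1] + b[1]];
}

// Script Event pan/gain applied to the programme audio, the Audio
// Recording's own Mixing Instructions applied to the recording, the two
// summed at the Text, then the Text's gain applied before the master bus.
function mixScriptEvent(programme, recording, mi) {
  const eventOut = applyGain(panMono(programme, mi.event.pan), mi.event.gain);
  const recOut = applyGain(panMono(recording, mi.recording.pan), mi.recording.gain);
  return applyGain(mixBuses(eventOut, recOut), mi.text.gain);
}

// Example: centred programme at unity gain, recording panned hard left
// at half gain, Text gain 0.5.
const out = mixScriptEvent(1.0, 1.0, {
  event: { pan: 0, gain: 1 },
  recording: { pan: -1, gain: 0.5 },
  text: { gain: 0.5 },
});
```

<p>Real implementations would of course process buffers of samples rather
than single values; the structure of the calculation is the point here.</p>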

<figure id="audio-mixing-figure">
<div data-include="figures/audio-mixing.svg"></div>
<figcaption>Example simple audio routing between objects</figcaption>
</figure>

<p>This example is shown as [[webaudio]] nodes in <a href="#webaudio-nodes-figure"></a>.</p>

<figure id="webaudio-nodes-figure">
<div data-include="figures/webaudio-nodes.svg"></div>
<figcaption>Web audio nodes representing the audio processing needed.</figcaption>
</figure>
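<p>A node graph like the one in the figure above could be constructed as
follows. This is a sketch only: <code>buildMixingGraph</code> and its
parameter shape are illustrative assumptions, not defined by this
specification, though <code>createStereoPanner</code>,
<code>createGain</code> and <code>connect</code> are the standard
[[webaudio]] methods.</p>

```javascript
// Create a StereoPannerNode feeding a GainNode for one set of
// Mixing Instructions (pan and gain defaulting to neutral values).
function panGain(ctx, { pan = 0, gain = 1 } = {}) {
  const p = ctx.createStereoPanner();
  p.pan.value = pan;
  const g = ctx.createGain();
  g.gain.value = gain;
  p.connect(g);
  return { input: p, output: g };
}

// Wire up the routing described above: the Script Event's pan/gain on the
// programme audio and the Audio Recording's own pan/gain both feed the
// Text, whose pan/gain is applied before the context's destination
// (the master bus).
function buildMixingGraph(ctx, mi) {
  const event = panGain(ctx, mi.event);         // Script Event instructions
  const recording = panGain(ctx, mi.recording); // Audio Recording's own
  const text = panGain(ctx, mi.text);           // Text instructions

  event.output.connect(text.input);     // programme path into the Text
  recording.output.connect(text.input); // recording summed in at the Text
  text.output.connect(ctx.destination); // on to the master bus

  return { programmeInput: event.input, recordingInput: recording.input };
}
```

<p>In a browser, <code>ctx</code> would be an <code>AudioContext</code>,
and source nodes (for example a
<code>MediaElementAudioSourceNode</code> for the programme audio) would
connect to the returned inputs.</p>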

<p>The above examples are simplified in at least two ways:</p>
<ul>
<li>if a <a>Text</a> contains
<code>&lt;span&gt;</code> elements that themselves have <a>Mixing Instructions</a>
applied, then additional nodes would be needed;</li>
<li>the application of <em>animated</em> <a>Mixing Instructions</a> is not
shown explicitly. [[webaudio]] supports the timed variation of
input parameters to its nodes: it is possible to translate the
TTML <code>&lt;animate&gt;</code> semantics directly into
[[webaudio]] API calls to achieve the equivalent effect.</li>
</ul>
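<p>The second simplification could be addressed with [[webaudio]]'s
parameter automation. The sketch below assumes a TTML
<code>&lt;animate&gt;</code> with an evenly spaced list of values and
linear interpolation between them; <code>scheduleAnimate</code> is a
hypothetical name, but <code>setValueAtTime</code> and
<code>linearRampToValueAtTime</code> are the standard
<code>AudioParam</code> automation methods.</p>

```javascript
// Schedule a TTML <animate>-style value list on an AudioParam
// (e.g. a GainNode's gain or a StereoPannerNode's pan).
// beginTime and dur are in seconds on the AudioContext timeline;
// values are distributed evenly over the duration, with a linear
// ramp between successive values.
function scheduleAnimate(param, beginTime, dur, values) {
  param.setValueAtTime(values[0], beginTime);
  if (values.length < 2) return; // a single value: no ramps needed
  const step = dur / (values.length - 1);
  for (let i = 1; i < values.length; i++) {
    param.linearRampToValueAtTime(values[i], beginTime + i * step);
  }
}
```

<p>For example, a fade from unity gain to silence over two seconds could
be expressed as <code>scheduleAnimate(gainNode.gain, t0, 2, [1, 0])</code>.
Other TTML timing and interpolation options would need corresponding
automation calls.</p>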

</section> <!-- #audio-mixing -->

<section id="profiles-section" class="appendix">
<h3>Profiles</h3>
<p>This section defines
a [[ttml2]] <a>content profile</a>
