diff --git a/docs/404.html b/docs/404.html index b5aa5ee02..399591071 100644 --- a/docs/404.html +++ b/docs/404.html @@ -396,7 +396,7 @@ - diff --git a/docs/about/index.html b/docs/about/index.html index 044fe49ec..c66ff2265 100644 --- a/docs/about/index.html +++ b/docs/about/index.html @@ -449,7 +449,7 @@ - diff --git a/docs/examples/SAM/index.html b/docs/examples/SAM/index.html index 6c862752f..e865226cf 100644 --- a/docs/examples/SAM/index.html +++ b/docs/examples/SAM/index.html @@ -407,7 +407,7 @@ - diff --git a/docs/examples/ambisonics/index.html b/docs/examples/ambisonics/index.html index e6bac5fef..bcd8b829d 100644 --- a/docs/examples/ambisonics/index.html +++ b/docs/examples/ambisonics/index.html @@ -468,7 +468,7 @@ - diff --git a/docs/examples/analysis/index.html b/docs/examples/analysis/index.html index d639dcee3..bec4c5df8 100644 --- a/docs/examples/analysis/index.html +++ b/docs/examples/analysis/index.html @@ -572,7 +572,7 @@ - diff --git a/docs/examples/autodiff/index.html b/docs/examples/autodiff/index.html index 13a52e3d9..34e09b5d0 100644 --- a/docs/examples/autodiff/index.html +++ b/docs/examples/autodiff/index.html @@ -456,7 +456,7 @@ - diff --git a/docs/examples/bela/index.html b/docs/examples/bela/index.html index 9fd2887e4..af74cc813 100644 --- a/docs/examples/bela/index.html +++ b/docs/examples/bela/index.html @@ -2829,7 +2829,7 @@ - diff --git a/docs/examples/delayEcho/index.html b/docs/examples/delayEcho/index.html index d5584770d..8134a027d 100644 --- a/docs/examples/delayEcho/index.html +++ b/docs/examples/delayEcho/index.html @@ -603,7 +603,7 @@ - diff --git a/docs/examples/dynamic/index.html b/docs/examples/dynamic/index.html index dbf340fbe..39b722cf5 100644 --- a/docs/examples/dynamic/index.html +++ b/docs/examples/dynamic/index.html @@ -530,7 +530,7 @@ - diff --git a/docs/examples/filtering/index.html b/docs/examples/filtering/index.html index 800a296aa..a62ca3b69 100644 --- a/docs/examples/filtering/index.html +++ b/docs/examples/filtering/index.html @@ -1616,7 +1616,7 @@ - diff --git a/docs/examples/gameaudio/index.html b/docs/examples/gameaudio/index.html index fef6cde10..167687856 100644 --- a/docs/examples/gameaudio/index.html +++ b/docs/examples/gameaudio/index.html @@ -1210,7 +1210,7 @@ - diff --git a/docs/examples/generator/index.html b/docs/examples/generator/index.html index e6ca30827..b2c6cb99a 100644 --- a/docs/examples/generator/index.html +++ b/docs/examples/generator/index.html @@ -778,7 +778,7 @@ - diff --git a/docs/examples/misc/index.html b/docs/examples/misc/index.html index 1c5e26645..e3305b890 100644 --- a/docs/examples/misc/index.html +++ b/docs/examples/misc/index.html @@ -1007,7 +1007,7 @@ - diff --git a/docs/examples/phasing/index.html b/docs/examples/phasing/index.html index 69d9b2319..339cdc27f 100644 --- a/docs/examples/phasing/index.html +++ b/docs/examples/phasing/index.html @@ -479,7 +479,7 @@ - diff --git a/docs/examples/physicalModeling/index.html b/docs/examples/physicalModeling/index.html index bccc87e77..01daa664d 100644 --- a/docs/examples/physicalModeling/index.html +++ b/docs/examples/physicalModeling/index.html @@ -985,7 +985,7 @@ - diff --git a/docs/examples/pitchShifting/index.html b/docs/examples/pitchShifting/index.html index 778cbab3a..57962053d 100644 --- a/docs/examples/pitchShifting/index.html +++ b/docs/examples/pitchShifting/index.html @@ -441,7 +441,7 @@ - diff --git a/docs/examples/psychoacoustic/index.html b/docs/examples/psychoacoustic/index.html index 182ca1f33..7ff6c389b 100644 --- 
a/docs/examples/psychoacoustic/index.html +++ b/docs/examples/psychoacoustic/index.html @@ -431,7 +431,7 @@ - diff --git a/docs/examples/quantizing/index.html b/docs/examples/quantizing/index.html index 20466861b..a41d904df 100644 --- a/docs/examples/quantizing/index.html +++ b/docs/examples/quantizing/index.html @@ -470,7 +470,7 @@ - diff --git a/docs/examples/reverb/index.html b/docs/examples/reverb/index.html index 7a63942d9..6f312685f 100644 --- a/docs/examples/reverb/index.html +++ b/docs/examples/reverb/index.html @@ -640,7 +640,7 @@ - diff --git a/docs/examples/smartKeyboard/index.html b/docs/examples/smartKeyboard/index.html index 3016cc9d0..7f3533f34 100644 --- a/docs/examples/smartKeyboard/index.html +++ b/docs/examples/smartKeyboard/index.html @@ -2326,7 +2326,7 @@ - diff --git a/docs/examples/spat/index.html b/docs/examples/spat/index.html index 36216794b..1c52ccc1b 100644 --- a/docs/examples/spat/index.html +++ b/docs/examples/spat/index.html @@ -478,7 +478,7 @@ - diff --git a/docs/index.html b/docs/index.html index 880445096..e218743e5 100644 --- a/docs/index.html +++ b/docs/index.html @@ -469,7 +469,7 @@ - @@ -477,5 +477,5 @@ diff --git a/docs/manual/architectures/index.html b/docs/manual/architectures/index.html index 64a92a035..a59874f99 100644 --- a/docs/manual/architectures/index.html +++ b/docs/manual/architectures/index.html @@ -2820,7 +2820,7 @@ - diff --git a/docs/manual/community/index.html b/docs/manual/community/index.html index 4071602a4..2e8507aff 100644 --- a/docs/manual/community/index.html +++ b/docs/manual/community/index.html @@ -587,7 +587,7 @@ - diff --git a/docs/manual/compiler/index.html b/docs/manual/compiler/index.html index b5f8f2ba7..c5410ac72 100644 --- a/docs/manual/compiler/index.html +++ b/docs/manual/compiler/index.html @@ -984,7 +984,7 @@ - diff --git a/docs/manual/debugging/index.html b/docs/manual/debugging/index.html index 5bc511784..671254a3f 100644 --- a/docs/manual/debugging/index.html +++ b/docs/manual/debugging/index.html @@ -537,7 +537,7 @@ - diff --git a/docs/manual/deploying/index.html b/docs/manual/deploying/index.html index cf03f8158..78bfbc196 100644 --- a/docs/manual/deploying/index.html +++ b/docs/manual/deploying/index.html @@ -477,7 +477,7 @@ - diff --git a/docs/manual/embedding/index.html b/docs/manual/embedding/index.html index 8b0b3936b..c5187f264 100644 --- a/docs/manual/embedding/index.html +++ b/docs/manual/embedding/index.html @@ -686,7 +686,7 @@ - diff --git a/docs/manual/errors/index.html b/docs/manual/errors/index.html index ce6142b00..af64d6b02 100644 --- a/docs/manual/errors/index.html +++ b/docs/manual/errors/index.html @@ -734,7 +734,7 @@ - diff --git a/docs/manual/faq/index.html b/docs/manual/faq/index.html index a61682f1f..de4585eda 100644 --- a/docs/manual/faq/index.html +++ b/docs/manual/faq/index.html @@ -611,7 +611,7 @@ - diff --git a/docs/manual/http/index.html b/docs/manual/http/index.html index 7db74f081..987b4540b 100644 --- a/docs/manual/http/index.html +++ b/docs/manual/http/index.html @@ -733,7 +733,7 @@ - diff --git a/docs/manual/introduction/index.html b/docs/manual/introduction/index.html index 7bb73cece..97e4753dd 100644 --- a/docs/manual/introduction/index.html +++ b/docs/manual/introduction/index.html @@ -506,7 +506,7 @@ - diff --git a/docs/manual/mathdoc/index.html b/docs/manual/mathdoc/index.html index 4da5f5b37..2afabb3c5 100644 --- a/docs/manual/mathdoc/index.html +++ b/docs/manual/mathdoc/index.html @@ -641,7 +641,7 @@ - diff --git a/docs/manual/midi/index.html 
b/docs/manual/midi/index.html index 9101fcd43..56ee79617 100644 --- a/docs/manual/midi/index.html +++ b/docs/manual/midi/index.html @@ -977,7 +977,7 @@ - diff --git a/docs/manual/optimizing/index.html b/docs/manual/optimizing/index.html index 8e36db1dd..e4de0795f 100644 --- a/docs/manual/optimizing/index.html +++ b/docs/manual/optimizing/index.html @@ -1048,7 +1048,7 @@ - diff --git a/docs/manual/options/index.html b/docs/manual/options/index.html index a722b7629..3a87952a1 100644 --- a/docs/manual/options/index.html +++ b/docs/manual/options/index.html @@ -586,7 +586,7 @@ - diff --git a/docs/manual/osc/index.html b/docs/manual/osc/index.html index 4714dbcb3..274374b6f 100644 --- a/docs/manual/osc/index.html +++ b/docs/manual/osc/index.html @@ -1069,7 +1069,7 @@ - diff --git a/docs/manual/overview/index.html b/docs/manual/overview/index.html index dca232f71..b167f38d1 100644 --- a/docs/manual/overview/index.html +++ b/docs/manual/overview/index.html @@ -459,7 +459,7 @@ - diff --git a/docs/manual/quick-start/index.html b/docs/manual/quick-start/index.html index be8267efd..7d21e5c0b 100644 --- a/docs/manual/quick-start/index.html +++ b/docs/manual/quick-start/index.html @@ -733,7 +733,7 @@ - diff --git a/docs/manual/remote/index.html b/docs/manual/remote/index.html index e1c25cd8c..ebadf9408 100644 --- a/docs/manual/remote/index.html +++ b/docs/manual/remote/index.html @@ -420,7 +420,7 @@ - diff --git a/docs/manual/soundfiles/index.html b/docs/manual/soundfiles/index.html index 64a8b43a5..59a6d1672 100644 --- a/docs/manual/soundfiles/index.html +++ b/docs/manual/soundfiles/index.html @@ -430,7 +430,7 @@ - diff --git a/docs/manual/syntax/index.html b/docs/manual/syntax/index.html index 652ce98bc..f971e4b58 100644 --- a/docs/manual/syntax/index.html +++ b/docs/manual/syntax/index.html @@ -4504,7 +4504,7 @@ - diff --git a/docs/manual/tools/index.html b/docs/manual/tools/index.html index 7222be0d8..c3744a0e3 100644 --- a/docs/manual/tools/index.html +++ b/docs/manual/tools/index.html @@ -1529,7 +1529,7 @@ - diff --git a/docs/qreference/1-introduction/index.html b/docs/qreference/1-introduction/index.html index ffc7da0fd..8b93f5a0a 100644 --- a/docs/qreference/1-introduction/index.html +++ b/docs/qreference/1-introduction/index.html @@ -471,7 +471,7 @@ - diff --git a/docs/qreference/10-poly/index.html b/docs/qreference/10-poly/index.html index df7b03b52..a0f281878 100644 --- a/docs/qreference/10-poly/index.html +++ b/docs/qreference/10-poly/index.html @@ -518,7 +518,7 @@ - diff --git a/docs/qreference/11-codegeneration/index.html b/docs/qreference/11-codegeneration/index.html index 09edda87c..d90f75c93 100644 --- a/docs/qreference/11-codegeneration/index.html +++ b/docs/qreference/11-codegeneration/index.html @@ -786,7 +786,7 @@ - diff --git a/docs/qreference/12-mdoc/index.html b/docs/qreference/12-mdoc/index.html index cd17a33c1..6152abc8f 100644 --- a/docs/qreference/12-mdoc/index.html +++ b/docs/qreference/12-mdoc/index.html @@ -652,7 +652,7 @@ - diff --git a/docs/qreference/13-acknowledgments/index.html b/docs/qreference/13-acknowledgments/index.html index 7f1ffd864..4a4725895 100644 --- a/docs/qreference/13-acknowledgments/index.html +++ b/docs/qreference/13-acknowledgments/index.html @@ -464,7 +464,7 @@ - diff --git a/docs/qreference/2-install/index.html b/docs/qreference/2-install/index.html index fd755ed79..f5dfdd93f 100644 --- a/docs/qreference/2-install/index.html +++ b/docs/qreference/2-install/index.html @@ -529,7 +529,7 @@ - diff --git a/docs/qreference/3-syntax/index.html 
b/docs/qreference/3-syntax/index.html index 65864e550..78ff777ce 100644 --- a/docs/qreference/3-syntax/index.html +++ b/docs/qreference/3-syntax/index.html @@ -641,7 +641,7 @@

Imports

For example, import("maths.lib"); imports the definitions of the maths.lib library, a set of additional mathematical functions provided as foreign functions.
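
A minimal sketch of how such an import is used in practice (an illustration, not part of the original page; it assumes the standard maths.lib shipped with Faust, which defines constants such as PI and SR):

import("maths.lib");
// After the import, the library's definitions are directly in scope:
// PI and SR (the sample rate) are used here to build a phase increment.
process = _ * (2*PI/SR);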

Documentation

-

Documentation statements are optional and typically used to control the generation of the mathematical documentation of a Faust program. This documentation system is detailed chapter \ref{chapter-mdoc}. In this section we will essentially describe the documentation statements syntax.

+

Documentation statements are optional and typically used to control the generation of the mathematical documentation of a Faust program. This documentation system is detailed in the mdoc chapter. In this section, we essentially describe the syntax of documentation statements.

A documentation statement starts with an opening <mdoc> tag and ends with a closing </mdoc> tag. Free text content, typically in LaTeX format, can be placed in between these two tags.
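
As an illustration (not part of the original page), here is a minimal sketch of a complete program carrying one documentation statement; it assumes the <equation> sub-tag presented in the mdoc chapter, which typesets the equation of the given expression when the program is processed with the -mdoc option:

<mdoc>
\section{Equation of process}
This program implements a simple one-pole smoother:
<equation>process</equation>
</mdoc>
process = *(0.1) : + ~ *(0.9);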

documentation @@ -2741,7 +2741,7 @@
- diff --git a/docs/qreference/4-compiler/index.html b/docs/qreference/4-compiler/index.html index 8a2bafe8e..f84476745 100644 --- a/docs/qreference/4-compiler/index.html +++ b/docs/qreference/4-compiler/index.html @@ -597,7 +597,7 @@ - diff --git a/docs/qreference/5-libfaust/index.html b/docs/qreference/5-libfaust/index.html index 804f6f52d..3562ae363 100644 --- a/docs/qreference/5-libfaust/index.html +++ b/docs/qreference/5-libfaust/index.html @@ -531,7 +531,7 @@ - diff --git a/docs/qreference/6-audio/index.html b/docs/qreference/6-audio/index.html index 61cba5fba..f7405ffc5 100644 --- a/docs/qreference/6-audio/index.html +++ b/docs/qreference/6-audio/index.html @@ -1094,7 +1094,7 @@ - diff --git a/docs/qreference/7-osc/index.html b/docs/qreference/7-osc/index.html index 6a585956b..b7ca3250e 100644 --- a/docs/qreference/7-osc/index.html +++ b/docs/qreference/7-osc/index.html @@ -1025,7 +1025,7 @@ - diff --git a/docs/qreference/8-http/index.html b/docs/qreference/8-http/index.html index 23c562b37..aca91ceb5 100644 --- a/docs/qreference/8-http/index.html +++ b/docs/qreference/8-http/index.html @@ -731,7 +731,7 @@ - diff --git a/docs/qreference/9-midi/index.html b/docs/qreference/9-midi/index.html index e773e626e..899f33d8f 100644 --- a/docs/qreference/9-midi/index.html +++ b/docs/qreference/9-midi/index.html @@ -561,7 +561,7 @@ - diff --git a/docs/search/search_index.json b/docs/search/search_index.json index 854e1ffbc..d475aea23 100644 --- a/docs/search/search_index.json +++ b/docs/search/search_index.json @@ -1 +1 @@ -{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Faust Language Documentation This website centralizes all the documentation of the Faust programming language . It contains tutorials, the Faust manual, various examples, etc. It is meant to be used in tandem with the Faust Web IDE . The main Faust website can be found at the following URL: https://faust.grame.fr What is Faust? Faust (Functional Audio Stream) is a functional programming language for sound synthesis and audio processing with a strong focus on the design of synthesizers, musical instruments, audio effects, etc. Faust targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards. It is used on stage for concerts and artistic productions, in education and research, in open source projects as well as in commercial applications. The core component of Faust is its compiler. It allows us to \"translate\" any Faust digital signal processing (DSP) specification to a wide range of non-domain specific languages such as C++, C, JAVA, LLVM IR, WebAssembly, etc. In this regard, Faust can be seen as an alternative to C++ but is much simpler and intuitive to learn. Thanks to a wrapping system called \"architectures,\" codes generated by Faust can be easily compiled into a wide variety of objects ranging from audio plug-ins to standalone applications or smartphone and web apps, etc. Getting Started If You're In a Hurry If you\u2019re in a hurry and just wanna have a look at how Faust programs look like, you can simply check the Faust Examples . If You Wanna Get Started With Faust If you\u2019re looking for a step by step tutorial of approximately 2 hours that will walk you through writing simple Faust programs and give you an overview of what Faust can do, have a look at our Quick Start Tutorial . 
If You\u2019re Looking For the \"Manual\" Faust\u2019s syntax and features are thoroughly documented in the Faust Manual . This resource contains hundreds of code examples and many short tutorials. If You\u2019re Looking For the Documentation of a Function In the Faust Libraries The documentation of Faust's standard libraries is automatically generated directly from the libraries' source code. If You Prefer Video Tutorials Check out the Faust Kadenze course . If You're Looking For Something in Particular You can use the Search function of this website if you're looking for something specific. Other Resources to learn Faust","title":"Home"},{"location":"#faust-language-documentation","text":"This website centralizes all the documentation of the Faust programming language . It contains tutorials, the Faust manual, various examples, etc. It is meant to be used in tandem with the Faust Web IDE . The main Faust website can be found at the following URL: https://faust.grame.fr","title":"Faust Language Documentation"},{"location":"#what-is-faust","text":"Faust (Functional Audio Stream) is a functional programming language for sound synthesis and audio processing with a strong focus on the design of synthesizers, musical instruments, audio effects, etc. Faust targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards. It is used on stage for concerts and artistic productions, in education and research, in open source projects as well as in commercial applications. The core component of Faust is its compiler. It allows us to \"translate\" any Faust digital signal processing (DSP) specification to a wide range of non-domain specific languages such as C++, C, JAVA, LLVM IR, WebAssembly, etc. In this regard, Faust can be seen as an alternative to C++ but is much simpler and intuitive to learn. Thanks to a wrapping system called \"architectures,\" codes generated by Faust can be easily compiled into a wide variety of objects ranging from audio plug-ins to standalone applications or smartphone and web apps, etc.","title":"What is Faust?"},{"location":"#getting-started","text":"","title":"Getting Started"},{"location":"#if-youre-in-a-hurry","text":"If you\u2019re in a hurry and just wanna have a look at how Faust programs look like, you can simply check the Faust Examples .","title":"If You're In a Hurry"},{"location":"#if-you-wanna-get-started-with-faust","text":"If you\u2019re looking for a step by step tutorial of approximately 2 hours that will walk you through writing simple Faust programs and give you an overview of what Faust can do, have a look at our Quick Start Tutorial .","title":"If You Wanna Get Started With Faust"},{"location":"#if-youre-looking-for-the-manual","text":"Faust\u2019s syntax and features are thoroughly documented in the Faust Manual . 
This resource contains hundreds of code examples and many short tutorials.","title":"If You\u2019re Looking For the \"Manual\""},{"location":"#if-youre-looking-for-the-documentation-of-a-function-in-the-faust-libraries","text":"The documentation of Faust's standard libraries is automatically generated directly from the libraries' source code.","title":"If You\u2019re Looking For the Documentation of a Function In the Faust Libraries"},{"location":"#if-you-prefer-video-tutorials","text":"Check out the Faust Kadenze course .","title":"If You Prefer Video Tutorials"},{"location":"#if-youre-looking-for-something-in-particular","text":"You can use the Search function of this website if you're looking for something specific.","title":"If You're Looking For Something in Particular"},{"location":"#other-resources-to-learn-faust","text":"","title":"Other Resources to learn Faust"},{"location":"about/","text":"The Faust Project The Faust Project has started in 2002. It is actively developed by the GRAME-CNCM Research Department . Many persons are contributing to the Faust project, by providing code for the compiler, architecture files, libraries, examples, documentation, scripts, bug reports, ideas, etc. We would like in particular to thank: Fons Adriaensen, Karim Barkati, J\u00e9r\u00f4me Barth\u00e9lemy, Tim Blechmann, Tiziano Bole, Alain Bonardi, Bart Brouns, Thomas Charbonnel, Raffaele Ciavarella, Ian Clester, Julien Colafrancesco, Damien Cramet, Sarah Denoux, \u00c9tienne Gaudrin, Olivier Guillerminet, Pierre Guillot, Albert Gr\u00e4f, Christoph Hart, Agathe Herrou, Pierre Jouvelot, Stefan Kersten, Victor Lazzarini, Matthieu Leberre, Mathieu Leroi, Fernando Lopez-Lezcano, Kjetil Matheussen, Hermann Meyer, R\u00e9my Muller, Raphael Panis, Eliott Paris, Reza Payami, Laurent Pottier, Dirk Roosenburg, Sampo Savolainen, Nicolas Scaringella, Anne Sedes, Priyanka Shekar, Stephen Sinclair, Travis Skare, Julius Smith, Mike Solomon, Roman Sommer Michael Wilson. as well as our colleagues at GRAME : Dominique Fober Christophe Lebreton St\u00e9phane Letz Romain Michon Yann Orlarey We would like also to thank for their financial support: the French Ministry of Culture , the Auvergne-Rh\u00f4ne-Alpes Region , the City of Lyon , the French National Research Agency (ANR) .","title":"About"},{"location":"about/#the-faust-project","text":"The Faust Project has started in 2002. It is actively developed by the GRAME-CNCM Research Department . Many persons are contributing to the Faust project, by providing code for the compiler, architecture files, libraries, examples, documentation, scripts, bug reports, ideas, etc. We would like in particular to thank: Fons Adriaensen, Karim Barkati, J\u00e9r\u00f4me Barth\u00e9lemy, Tim Blechmann, Tiziano Bole, Alain Bonardi, Bart Brouns, Thomas Charbonnel, Raffaele Ciavarella, Ian Clester, Julien Colafrancesco, Damien Cramet, Sarah Denoux, \u00c9tienne Gaudrin, Olivier Guillerminet, Pierre Guillot, Albert Gr\u00e4f, Christoph Hart, Agathe Herrou, Pierre Jouvelot, Stefan Kersten, Victor Lazzarini, Matthieu Leberre, Mathieu Leroi, Fernando Lopez-Lezcano, Kjetil Matheussen, Hermann Meyer, R\u00e9my Muller, Raphael Panis, Eliott Paris, Reza Payami, Laurent Pottier, Dirk Roosenburg, Sampo Savolainen, Nicolas Scaringella, Anne Sedes, Priyanka Shekar, Stephen Sinclair, Travis Skare, Julius Smith, Mike Solomon, Roman Sommer Michael Wilson. 
as well as our colleagues at GRAME : Dominique Fober Christophe Lebreton St\u00e9phane Letz Romain Michon Yann Orlarey We would like also to thank for their financial support: the French Ministry of Culture , the Auvergne-Rh\u00f4ne-Alpes Region , the City of Lyon , the French National Research Agency (ANR) .","title":"The Faust Project"},{"location":"examples/SAM/","text":"SAM","title":" SAM "},{"location":"examples/SAM/#sam","text":"","title":"SAM"},{"location":"examples/ambisonics/","text":"ambisonics fourSourcesToOcto oneSourceToStereo","title":" ambisonics "},{"location":"examples/ambisonics/#ambisonics","text":"","title":"ambisonics"},{"location":"examples/ambisonics/#foursourcestoocto","text":"","title":"fourSourcesToOcto"},{"location":"examples/ambisonics/#onesourcetostereo","text":"","title":"oneSourceToStereo"},{"location":"examples/analysis/","text":"analysis FFT dbmeter spectralLevel spectralTiltLab vumeter","title":" analysis "},{"location":"examples/analysis/#analysis","text":"","title":"analysis"},{"location":"examples/analysis/#fft","text":"","title":"FFT"},{"location":"examples/analysis/#dbmeter","text":"","title":"dbmeter"},{"location":"examples/analysis/#spectrallevel","text":"","title":"spectralLevel"},{"location":"examples/analysis/#spectraltiltlab","text":"","title":"spectralTiltLab"},{"location":"examples/analysis/#vumeter","text":"","title":"vumeter"},{"location":"examples/autodiff/","text":"autodiff noise noop ramp","title":"autodiff"},{"location":"examples/autodiff/#autodiff","text":"","title":"autodiff"},{"location":"examples/autodiff/#noise","text":"","title":"noise"},{"location":"examples/autodiff/#noop","text":"","title":"noop"},{"location":"examples/autodiff/#ramp","text":"","title":"ramp"},{"location":"examples/bela/","text":"bela AdditiveSynth AdditiveSynth_Analog FMSynth2 FMSynth2_Analog FMSynth2_FX FMSynth2_FX_Analog FXChaine2 GrainGenerator WaveSynth WaveSynth_Analog WaveSynth_FX WaveSynth_FX_Analog crossDelay2 granulator repeater simpleFX simpleFX_Analog simpleSynth simpleSynth_Analog simpleSynth_FX simpleSynth_FX_Analog trill_simple_monophonic_keyboard trill_simple_polyphonic_keyboard","title":" bela 
"},{"location":"examples/bela/#bela","text":"","title":"bela"},{"location":"examples/bela/#additivesynth","text":"","title":"AdditiveSynth"},{"location":"examples/bela/#additivesynth_analog","text":"","title":"AdditiveSynth_Analog"},{"location":"examples/bela/#fmsynth2","text":"","title":"FMSynth2"},{"location":"examples/bela/#fmsynth2_analog","text":"","title":"FMSynth2_Analog"},{"location":"examples/bela/#fmsynth2_fx","text":"","title":"FMSynth2_FX"},{"location":"examples/bela/#fmsynth2_fx_analog","text":"","title":"FMSynth2_FX_Analog"},{"location":"examples/bela/#fxchaine2","text":"","title":"FXChaine2"},{"location":"examples/bela/#graingenerator","text":"","title":"GrainGenerator"},{"location":"examples/bela/#wavesynth","text":"","title":"WaveSynth"},{"location":"examples/bela/#wavesynth_analog","text":"","title":"WaveSynth_Analog"},{"location":"examples/bela/#wavesynth_fx","text":"","title":"WaveSynth_FX"},{"location":"examples/bela/#wavesynth_fx_analog","text":"","title":"WaveSynth_FX_Analog"},{"location":"examples/bela/#crossdelay2","text":"","title":"crossDelay2"},{"location":"examples/bela/#granulator","text":"","title":"granulator"},{"location":"examples/bela/#repeater","text":"","title":"repeater"},{"location":"examples/bela/#simplefx","text":"","title":"simpleFX"},{"location":"examples/bela/#simplefx_analog","text":"","title":"simpleFX_Analog"},{"location":"examples/bela/#simplesynth","text":"","title":"simpleSynth"},{"location":"examples/bela/#simplesynth_analog","text":"","title":"simpleSynth_Analog"},{"location":"examples/bela/#simplesynth_fx","text":"","title":"simpleSynth_FX"},{"location":"examples/bela/#simplesynth_fx_analog","text":"","title":"simpleSynth_FX_Analog"},{"location":"examples/bela/#trill_simple_monophonic_keyboard","text":"","title":"trill_simple_monophonic_keyboard"},{"location":"examples/bela/#trill_simple_polyphonic_keyboard","text":"","title":"trill_simple_polyphonic_keyboard"},{"location":"examples/delayEcho/","text":"delayEcho echo quadEcho smoothDelay stereoEcho tapiir","title":" delayEcho "},{"location":"examples/delayEcho/#delayecho","text":"","title":"delayEcho"},{"location":"examples/delayEcho/#echo","text":"","title":"echo"},{"location":"examples/delayEcho/#quadecho","text":"","title":"quadEcho"},{"location":"examples/delayEcho/#smoothdelay","text":"","title":"smoothDelay"},{"location":"examples/delayEcho/#stereoecho","text":"","title":"stereoEcho"},{"location":"examples/delayEcho/#tapiir","text":"","title":"tapiir"},{"location":"examples/dynamic/","text":"dynamic compressor distortion gateCompressor noiseGate volume","title":" dynamic "},{"location":"examples/dynamic/#dynamic","text":"","title":"dynamic"},{"location":"examples/dynamic/#compressor","text":"","title":"compressor"},{"location":"examples/dynamic/#distortion","text":"","title":"distortion"},{"location":"examples/dynamic/#gatecompressor","text":"","title":"gateCompressor"},{"location":"examples/dynamic/#noisegate","text":"","title":"noiseGate"},{"location":"examples/dynamic/#volume","text":"","title":"volume"},{"location":"examples/filtering/","text":"filtering APF BPF DNN HPF LPF bandFilter cryBaby diodeLadder filterBank graphicEqLab highShelf korg35HPF korg35LPF lfBoost lowBoost lowCut lowShelf moogHalfLadder moogLadder moogVCF notch oberheim oberheimBPF oberheimBSF oberheimHPF oberheimLPF parametricEqLab parametricEqualizer peakNotch peakingEQ sallenKey2ndOrder sallenKey2ndOrderBPF sallenKey2ndOrderHPF sallenKey2ndOrderLPF sallenKeyOnePole sallenKeyOnePoleHPF sallenKeyOnePoleLPF 
spectralTilt vcfWahLab vocoder wahPedal","title":" filtering "},{"location":"examples/filtering/#filtering","text":"","title":"filtering"},{"location":"examples/filtering/#apf","text":"","title":"APF"},{"location":"examples/filtering/#bpf","text":"","title":"BPF"},{"location":"examples/filtering/#dnn","text":"","title":"DNN"},{"location":"examples/filtering/#hpf","text":"","title":"HPF"},{"location":"examples/filtering/#lpf","text":"","title":"LPF"},{"location":"examples/filtering/#bandfilter","text":"","title":"bandFilter"},{"location":"examples/filtering/#crybaby","text":"","title":"cryBaby"},{"location":"examples/filtering/#diodeladder","text":"","title":"diodeLadder"},{"location":"examples/filtering/#filterbank","text":"","title":"filterBank"},{"location":"examples/filtering/#graphiceqlab","text":"","title":"graphicEqLab"},{"location":"examples/filtering/#highshelf","text":"","title":"highShelf"},{"location":"examples/filtering/#korg35hpf","text":"","title":"korg35HPF"},{"location":"examples/filtering/#korg35lpf","text":"","title":"korg35LPF"},{"location":"examples/filtering/#lfboost","text":"","title":"lfBoost"},{"location":"examples/filtering/#lowboost","text":"","title":"lowBoost"},{"location":"examples/filtering/#lowcut","text":"","title":"lowCut"},{"location":"examples/filtering/#lowshelf","text":"","title":"lowShelf"},{"location":"examples/filtering/#mooghalfladder","text":"","title":"moogHalfLadder"},{"location":"examples/filtering/#moogladder","text":"","title":"moogLadder"},{"location":"examples/filtering/#moogvcf","text":"","title":"moogVCF"},{"location":"examples/filtering/#notch","text":"","title":"notch"},{"location":"examples/filtering/#oberheim","text":"","title":"oberheim"},{"location":"examples/filtering/#oberheimbpf","text":"","title":"oberheimBPF"},{"location":"examples/filtering/#oberheimbsf","text":"","title":"oberheimBSF"},{"location":"examples/filtering/#oberheimhpf","text":"","title":"oberheimHPF"},{"location":"examples/filtering/#oberheimlpf","text":"","title":"oberheimLPF"},{"location":"examples/filtering/#parametriceqlab","text":"","title":"parametricEqLab"},{"location":"examples/filtering/#parametricequalizer","text":"","title":"parametricEqualizer"},{"location":"examples/filtering/#peaknotch","text":"","title":"peakNotch"},{"location":"examples/filtering/#peakingeq","text":"","title":"peakingEQ"},{"location":"examples/filtering/#sallenkey2ndorder","text":"","title":"sallenKey2ndOrder"},{"location":"examples/filtering/#sallenkey2ndorderbpf","text":"","title":"sallenKey2ndOrderBPF"},{"location":"examples/filtering/#sallenkey2ndorderhpf","text":"","title":"sallenKey2ndOrderHPF"},{"location":"examples/filtering/#sallenkey2ndorderlpf","text":"","title":"sallenKey2ndOrderLPF"},{"location":"examples/filtering/#sallenkeyonepole","text":"","title":"sallenKeyOnePole"},{"location":"examples/filtering/#sallenkeyonepolehpf","text":"","title":"sallenKeyOnePoleHPF"},{"location":"examples/filtering/#sallenkeyonepolelpf","text":"","title":"sallenKeyOnePoleLPF"},{"location":"examples/filtering/#spectraltilt","text":"","title":"spectralTilt"},{"location":"examples/filtering/#vcfwahlab","text":"","title":"vcfWahLab"},{"location":"examples/filtering/#vocoder","text":"","title":"vocoder"},{"location":"examples/filtering/#wahpedal","text":"","title":"wahPedal"},{"location":"examples/gameaudio/","text":"gameaudio bubble complex_rain door fire insects rain thunder wind windchimes","title":" gameaudio 
"},{"location":"examples/gameaudio/#gameaudio","text":"","title":"gameaudio"},{"location":"examples/gameaudio/#bubble","text":"","title":"bubble"},{"location":"examples/gameaudio/#complex_rain","text":"","title":"complex_rain"},{"location":"examples/gameaudio/#door","text":"","title":"door"},{"location":"examples/gameaudio/#fire","text":"","title":"fire"},{"location":"examples/gameaudio/#insects","text":"","title":"insects"},{"location":"examples/gameaudio/#rain","text":"","title":"rain"},{"location":"examples/gameaudio/#thunder","text":"","title":"thunder"},{"location":"examples/gameaudio/#wind","text":"","title":"wind"},{"location":"examples/gameaudio/#windchimes","text":"","title":"windchimes"},{"location":"examples/generator/","text":"generator churchOrgan filterOsc noise noiseMetadata osc osci sawtoothLab virtualAnalog virtualAnalogLab","title":" generator "},{"location":"examples/generator/#generator","text":"","title":"generator"},{"location":"examples/generator/#churchorgan","text":"","title":"churchOrgan"},{"location":"examples/generator/#filterosc","text":"","title":"filterOsc"},{"location":"examples/generator/#noise","text":"","title":"noise"},{"location":"examples/generator/#noisemetadata","text":"","title":"noiseMetadata"},{"location":"examples/generator/#osc","text":"","title":"osc"},{"location":"examples/generator/#osci","text":"","title":"osci"},{"location":"examples/generator/#sawtoothlab","text":"","title":"sawtoothLab"},{"location":"examples/generator/#virtualanalog","text":"","title":"virtualAnalog"},{"location":"examples/generator/#virtualanaloglab","text":"","title":"virtualAnalogLab"},{"location":"examples/misc/","text":"misc UITester autopan capture drumkit matrix midiTester statespace switcher tester tester2","title":" misc "},{"location":"examples/misc/#misc","text":"","title":"misc"},{"location":"examples/misc/#uitester","text":"","title":"UITester"},{"location":"examples/misc/#autopan","text":"","title":"autopan"},{"location":"examples/misc/#capture","text":"","title":"capture"},{"location":"examples/misc/#drumkit","text":"","title":"drumkit"},{"location":"examples/misc/#matrix","text":"","title":"matrix"},{"location":"examples/misc/#miditester","text":"","title":"midiTester"},{"location":"examples/misc/#statespace","text":"","title":"statespace"},{"location":"examples/misc/#switcher","text":"","title":"switcher"},{"location":"examples/misc/#tester","text":"","title":"tester"},{"location":"examples/misc/#tester2","text":"","title":"tester2"},{"location":"examples/phasing/","text":"phasing flanger phaser phaserFlangerLab","title":" phasing "},{"location":"examples/phasing/#phasing","text":"","title":"phasing"},{"location":"examples/phasing/#flanger","text":"","title":"flanger"},{"location":"examples/phasing/#phaser","text":"","title":"phaser"},{"location":"examples/phasing/#phaserflangerlab","text":"","title":"phaserFlangerLab"},{"location":"examples/physicalModeling/","text":"physicalModeling brass brassMIDI churchBell clarinet clarinetMIDI djembeMIDI elecGuitarMIDI englishBell flute fluteMIDI frenchBell germanBell guitarMIDI karplus marimbaMIDI modularInterpInstrMIDI nylonGuitarMIDI russianBell standardBell violin violinMIDI vocalBP vocalBPMIDI vocalFOF vocalFOFMIDI","title":" physicalModeling 
"},{"location":"examples/physicalModeling/#physicalmodeling","text":"","title":"physicalModeling"},{"location":"examples/physicalModeling/#brass","text":"","title":"brass"},{"location":"examples/physicalModeling/#brassmidi","text":"","title":"brassMIDI"},{"location":"examples/physicalModeling/#churchbell","text":"","title":"churchBell"},{"location":"examples/physicalModeling/#clarinet","text":"","title":"clarinet"},{"location":"examples/physicalModeling/#clarinetmidi","text":"","title":"clarinetMIDI"},{"location":"examples/physicalModeling/#djembemidi","text":"","title":"djembeMIDI"},{"location":"examples/physicalModeling/#elecguitarmidi","text":"","title":"elecGuitarMIDI"},{"location":"examples/physicalModeling/#englishbell","text":"","title":"englishBell"},{"location":"examples/physicalModeling/#flute","text":"","title":"flute"},{"location":"examples/physicalModeling/#flutemidi","text":"","title":"fluteMIDI"},{"location":"examples/physicalModeling/#frenchbell","text":"","title":"frenchBell"},{"location":"examples/physicalModeling/#germanbell","text":"","title":"germanBell"},{"location":"examples/physicalModeling/#guitarmidi","text":"","title":"guitarMIDI"},{"location":"examples/physicalModeling/#karplus","text":"","title":"karplus"},{"location":"examples/physicalModeling/#marimbamidi","text":"","title":"marimbaMIDI"},{"location":"examples/physicalModeling/#modularinterpinstrmidi","text":"","title":"modularInterpInstrMIDI"},{"location":"examples/physicalModeling/#nylonguitarmidi","text":"","title":"nylonGuitarMIDI"},{"location":"examples/physicalModeling/#russianbell","text":"","title":"russianBell"},{"location":"examples/physicalModeling/#standardbell","text":"","title":"standardBell"},{"location":"examples/physicalModeling/#violin","text":"","title":"violin"},{"location":"examples/physicalModeling/#violinmidi","text":"","title":"violinMIDI"},{"location":"examples/physicalModeling/#vocalbp","text":"","title":"vocalBP"},{"location":"examples/physicalModeling/#vocalbpmidi","text":"","title":"vocalBPMIDI"},{"location":"examples/physicalModeling/#vocalfof","text":"","title":"vocalFOF"},{"location":"examples/physicalModeling/#vocalfofmidi","text":"","title":"vocalFOFMIDI"},{"location":"examples/pitchShifting/","text":"pitchShifting pitchShifter","title":" pitchShifting "},{"location":"examples/pitchShifting/#pitchshifting","text":"","title":"pitchShifting"},{"location":"examples/pitchShifting/#pitchshifter","text":"","title":"pitchShifter"},{"location":"examples/psychoacoustic/","text":"psychoacoustic harmonicExciter","title":" psychoacoustic "},{"location":"examples/psychoacoustic/#psychoacoustic","text":"","title":"psychoacoustic"},{"location":"examples/psychoacoustic/#harmonicexciter","text":"","title":"harmonicExciter"},{"location":"examples/quantizing/","text":"quantizing quantizedChords","title":"quantizing"},{"location":"examples/quantizing/#quantizing","text":"","title":"quantizing"},{"location":"examples/quantizing/#quantizedchords","text":"","title":"quantizedChords"},{"location":"examples/reverb/","text":"reverb dattorro fdnRev freeverb greyhole jprev reverbDesigner reverbTester vital_rev zitaRev zitaRevFDN","title":" reverb 
"},{"location":"examples/reverb/#reverb","text":"","title":"reverb"},{"location":"examples/reverb/#dattorro","text":"","title":"dattorro"},{"location":"examples/reverb/#fdnrev","text":"","title":"fdnRev"},{"location":"examples/reverb/#freeverb","text":"","title":"freeverb"},{"location":"examples/reverb/#greyhole","text":"","title":"greyhole"},{"location":"examples/reverb/#jprev","text":"","title":"jprev"},{"location":"examples/reverb/#reverbdesigner","text":"","title":"reverbDesigner"},{"location":"examples/reverb/#reverbtester","text":"","title":"reverbTester"},{"location":"examples/reverb/#vital_rev","text":"","title":"vital_rev"},{"location":"examples/reverb/#zitarev","text":"","title":"zitaRev"},{"location":"examples/reverb/#zitarevfdn","text":"","title":"zitaRevFDN"},{"location":"examples/smartKeyboard/","text":"smartKeyboard acGuitar bells bowed brass clarinet crazyGuiro drums dubDub elecGuitar fm frog harp midiOnly multiSynth toy trumpet turenas violin violin2 vocal","title":" smartKeyboard "},{"location":"examples/smartKeyboard/#smartkeyboard","text":"","title":"smartKeyboard"},{"location":"examples/smartKeyboard/#acguitar","text":"","title":"acGuitar"},{"location":"examples/smartKeyboard/#bells","text":"","title":"bells"},{"location":"examples/smartKeyboard/#bowed","text":"","title":"bowed"},{"location":"examples/smartKeyboard/#brass","text":"","title":"brass"},{"location":"examples/smartKeyboard/#clarinet","text":"","title":"clarinet"},{"location":"examples/smartKeyboard/#crazyguiro","text":"","title":"crazyGuiro"},{"location":"examples/smartKeyboard/#drums","text":"","title":"drums"},{"location":"examples/smartKeyboard/#dubdub","text":"","title":"dubDub"},{"location":"examples/smartKeyboard/#elecguitar","text":"","title":"elecGuitar"},{"location":"examples/smartKeyboard/#fm","text":"","title":"fm"},{"location":"examples/smartKeyboard/#frog","text":"","title":"frog"},{"location":"examples/smartKeyboard/#harp","text":"","title":"harp"},{"location":"examples/smartKeyboard/#midionly","text":"","title":"midiOnly"},{"location":"examples/smartKeyboard/#multisynth","text":"","title":"multiSynth"},{"location":"examples/smartKeyboard/#toy","text":"","title":"toy"},{"location":"examples/smartKeyboard/#trumpet","text":"","title":"trumpet"},{"location":"examples/smartKeyboard/#turenas","text":"","title":"turenas"},{"location":"examples/smartKeyboard/#violin","text":"","title":"violin"},{"location":"examples/smartKeyboard/#violin2","text":"","title":"violin2"},{"location":"examples/smartKeyboard/#vocal","text":"","title":"vocal"},{"location":"examples/spat/","text":"spat panpot spat","title":" spat "},{"location":"examples/spat/#spat","text":"","title":"spat"},{"location":"examples/spat/#panpot","text":"","title":"panpot"},{"location":"examples/spat/#spat_1","text":"","title":"spat"},{"location":"manual/architectures/","text":"Architecture Files A Faust program describes a signal processor , a pure DSP computation that maps input signals to output signals . It says nothing about audio drivers or controllers (like GUI, OSC, MIDI, sensors) that are going to control the DSP. This additional information is provided by architecture files . An architecture file describes how to relate a Faust program to the external world, in particular the audio drivers and the controllers interfaces to be used. 
This approach allows a single Faust program to be easily deployed to a large variety of audio standards (e.g., Max/MSP externals, PD externals, VST plugins, CoreAudio applications, JACK applications, iPhone/Android, etc.): The architecture to be used is specified at compile time with the -a option. For example faust -a jack-gtk.cpp foo.dsp indicates to use the JACK GTK architecture when compiling foo.dsp . Some of these architectures are a modular combination of an audio module and one or more controller modules . Some architecture only combine an audio module with the generated DSP to create an audio engine to be controlled with an additional setParamValue/getParamValue kind of API, so that the controller part can be completeley defined externally. This is the purpose of the faust2api script explained later on. Minimal Structure of an Architecture File Before going into the details of the architecture files provided with Faust distribution, it is important to have an idea of the essential parts that compose an architecture file. Technically, an architecture file is any text file with two placeholders <> and <> . The first placeholder is currently not used, and the second one is replaced by the code generated by the FAUST compiler. Therefore, the really minimal architecture file, let's call it nullarch.cpp , is the following: <> <> This nullarch.cpp architecture has the property that faust foo.dsp and faust -a nullarch.cpp foo.dsp produce the same result. Obviously, this is not very useful, moreover the resulting cpp file doesn't compile. Here is miniarch.cpp , a minimal architecture file that contains enough information to produce a cpp file that can be successfully compiled: <> #define FAUSTFLOAT float class dsp {}; struct Meta { virtual void declare(const char* key, const char* value) {}; }; struct Soundfile {}; struct UI { // -- widget's layouts virtual void openTabBox(const char* label) {} virtual void openHorizontalBox(const char* label) {} virtual void openVerticalBox(const char* label) {} virtual void closeBox() {} // -- active widgets virtual void addButton(const char* label, FAUSTFLOAT* zone) {} virtual void addCheckButton(const char* label, FAUSTFLOAT* zone) {} virtual void addVerticalSlider(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT init, FAUSTFLOAT min, FAUSTFLOAT max, FAUSTFLOAT step) {} virtual void addHorizontalSlider(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT init, FAUSTFLOAT min, FAUSTFLOAT max, FAUSTFLOAT step) {} virtual void addNumEntry(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT init, FAUSTFLOAT min, FAUSTFLOAT max, FAUSTFLOAT step) {} // -- passive widgets virtual void addHorizontalBargraph(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT min, FAUSTFLOAT max) {} virtual void addVerticalBargraph(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT min, FAUSTFLOAT max) {} // -- soundfiles virtual void addSoundfile(const char* label, const char* filename, Soundfile** sf_zone) {} // -- metadata declarations virtual void declare(FAUSTFLOAT* zone, const char* key, const char* val) {} }; <> This architecture is still not very useful, but it gives an idea of what a real-life architecture file has to implement, in addition to the audio part itself. As we will see in the next section, Faust architectures are implemented using a modular approach to avoid code duplication and favor code maintenance and reuse. Audio Architecture Modules A Faust generated program has to connect to a underlying audio layer. 
Depending if the final program is a application or plugin, the way to connect to this audio layer will differ: applications typically use the OS audio driver API, which will be CoreAudio on macOS, ALSA on Linux, WASAPI on Windows for instance, or any kind of multi-platforms API like PortAudio or JACK . In this case a subclass of the base class audio (see later) has to be written plugins (like VST3 , Audio Unit or JUCE for instance) usually have to follow a more constrained API which imposes a life cyle , something like loading/initializing/starting/running/stopping/unloading sequence of operations. In this case the Faust generated module new/init/compute/delete methods have to be inserted in the plugin API, by calling each module function at the appropriate place. External and internal audio sample formats Audio samples are managed by the underlying audio layer, typically as 32 bits float or 64 bits double values in the [-1..1] interval. Their format is defined with the FAUSTFLOAT macro implemented in the architecture file as float by default. The DSP audio samples format is choosen at compile time, with the -single (= default), -double or -quad compilation option. Control parameters like buttons, sliders... also use the FAUSTFLOAT format. By default, the FAUSTFLOAT macro is written with the following code: #ifndef FAUSTFLOAT #define FAUSTFLOAT float #endif which gives it a value ( if not already defined ), and since the default internal format is float , nothing special has to be done in the general case. But when the DSP is compiled using the -double option, the audio inputs/outputs buffers have to be adapted , with a dsp_sample_adapter class, for instance like in the dynamic-jack-gt tool . Note that an architecture may redefine FAUSTFLOAT in double, and have the complete audio chain running in double. This has to be done before including any architecture file that would define FAUSTFLOAT itself (because of the #ifndef logic). Connection to an audio driver API An audio driver architecture typically connects a Faust program to the audio drivers. It is responsible for: allocating and releasing the audio channels and presenting the audio as non-interleaved float/double data (depending of the FAUSTFLOAT macro definition), normalized between -1.0 and 1.0 calling the DSP init method at init time, to setup the ma.SR variable possibly used in the DSP code calling the DSP compute method to handle incoming audio buffers and/or to produce audio outputs. The default compilation model uses separated audio input and output buffers not referring to the same memory locations. The -inpl (--in-place) code generation model allows us to generate code working when input and output buffers are the same (which is typically needed in some embedded devices). This option currently only works in scalar (= default) code generation mode. A Faust audio architecture module derives from an audio class can be defined as below (simplified version, see the real version here) : class audio { public: audio() {} virtual ~audio() {} /** * Init the DSP. * @param name - the DSP name to be given to the audio driven * (could appear as a JACK client for instance) * @param dsp - the dsp that will be initialized with the driver sample rate * * @return true is sucessful, false if case of driver failure. **/ virtual bool init(const char* name, dsp* dsp) = 0; /** * Start audio processing. * @return true is sucessfull, false if case of driver failure. **/ virtual bool start() = 0; /** * Stop audio processing. 
**/ virtual void stop() = 0; void setShutdownCallback(shutdown_callback cb, void* arg) = 0; // Return buffer size in frames. virtual int getBufferSize() = 0; // Return the driver sample rate in Hz. virtual int getSampleRate() = 0; // Return the driver hardware inputs number. virtual int getNumInputs() = 0; // Return the driver hardware outputs number. virtual int getNumOutputs() = 0; /** * @return Returns the average proportion of available CPU * being spent inside the audio callbacks (between 0.0 and 1.0). **/ virtual float getCPULoad() = 0; }; The API is simple enough to give a great flexibility to audio architectures implementations. The init method should initialize the audio. At init exit, the system should be in a safe state to recall the dsp object state. Here is the hierarchy of some of the supported audio drivers: Connection to a plugin audio API In the case of plugin, an audio plugin architecture has to be developed, by integrating the Faust DSP new/init/compute/delete methods in the plugin API. Here is a concrete example using the JUCE framework: a FaustPlugInAudioProcessor class, subclass of the juce::AudioProcessor has to be defined. The Faust generated C++ instance will be created in its constructor, either in monophonic of polyphonic mode (see later sections) the Faust DSP instance is initialized in the JUCE prepareToPlay method using the current sample rate value the Faust dsp compute is called in the JUCE process which receives the audio inputs/outputs buffers to be processed additional methods can possibly be implemented to handle MIDI messages or save/restore the plugin parameters state for instance. This methodology obviously has to be adapted for each supported plugin API. MIDI Architecture Modules A MIDI architecture module typically connects a Faust program to the MIDI drivers. MIDI control connects DSP parameters with MIDI messages (in both directions), and can be used to trigger polyphonic instruments. MIDI Messages in the DSP Source Code MIDI control messages are described as metadata in UI elements. They are decoded by a MidiUI class, subclass of UI , which parses incoming MIDI messages and updates the appropriate control parameters, or sends MIDI messages when the UI elements (sliders, buttons...) are moved. Defined Standard MIDI Messages A special [midi:xxx yyy...] metadata needs to be added to the UI element. The full description of supported MIDI messages is part of the Faust documentation . MIDI Classes A midi base class defining MIDI messages decoding/encoding methods has been developed. 
It will be used to receive and transmit MIDI messages: class midi { public: midi() {} virtual ~midi() {} // Additional timestamped API for MIDI input virtual MapUI* keyOn(double, int channel, int pitch, int velocity) { return keyOn(channel, pitch, velocity); } virtual void keyOff(double, int channel, int pitch, int velocity = 0) { keyOff(channel, pitch, velocity); } virtual void keyPress(double, int channel, int pitch, int press) { keyPress(channel, pitch, press); } virtual void chanPress(double date, int channel, int press) { chanPress(channel, press); } virtual void pitchWheel(double, int channel, int wheel) { pitchWheel(channel, wheel); } virtual void ctrlChange(double, int channel, int ctrl, int value) { ctrlChange(channel, ctrl, value); } virtual void ctrlChange14bits(double, int channel, int ctrl, int value) { ctrlChange14bits(channel, ctrl, value); } virtual void rpn(double, int channel, int ctrl, int value) { rpn(channel, ctrl, value); } virtual void progChange(double, int channel, int pgm) { progChange(channel, pgm); } virtual void sysEx(double, std::vector& message) { sysEx(message); } // MIDI sync virtual void startSync(double date) {} virtual void stopSync(double date) {} virtual void clock(double date) {} // Standard MIDI API virtual MapUI* keyOn(int channel, int pitch, int velocity) { return nullptr; } virtual void keyOff(int channel, int pitch, int velocity) {} virtual void keyPress(int channel, int pitch, int press) {} virtual void chanPress(int channel, int press) {} virtual void ctrlChange(int channel, int ctrl, int value) {} virtual void ctrlChange14bits(int channel, int ctrl, int value) {} virtual void rpn(int channel, int ctrl, int value) {} virtual void pitchWheel(int channel, int wheel) {} virtual void progChange(int channel, int pgm) {} virtual void sysEx(std::vector& message) {} enum MidiStatus { // channel voice messages MIDI_NOTE_OFF = 0x80, MIDI_NOTE_ON = 0x90, MIDI_CONTROL_CHANGE = 0xB0, MIDI_PROGRAM_CHANGE = 0xC0, MIDI_PITCH_BEND = 0xE0, MIDI_AFTERTOUCH = 0xD0, // aka channel pressure MIDI_POLY_AFTERTOUCH = 0xA0, // aka key pressure MIDI_CLOCK = 0xF8, MIDI_START = 0xFA, MIDI_CONT = 0xFB, MIDI_STOP = 0xFC, MIDI_SYSEX_START = 0xF0, MIDI_SYSEX_STOP = 0xF7 }; enum MidiCtrl { ALL_NOTES_OFF = 123, ALL_SOUND_OFF = 120 }; enum MidiNPN { PITCH_BEND_RANGE = 0 }; }; A pure interface for MIDI handlers that can send/receive MIDI messages to/from midi objects is defined: struct midi_interface { virtual void addMidiIn(midi* midi_dsp) = 0; virtual void removeMidiIn(midi* midi_dsp) = 0; virtual ~midi_interface() {} }; A midi_hander subclass implements actual MIDI decoding and maintains a list of MIDI aware components (classes inheriting from midi and ready to send and/or receive MIDI events) using the addMidiIn/removeMidiIn methods: class midi_handler : public midi, public midi_interface { protected: std::vector fMidiInputs; std::string fName; MidiNRPN fNRPN; public: midi_handler(const std::string& name = \"MIDIHandler\"):fName(name) {} virtual ~midi_handler() {} void addMidiIn(midi* midi_dsp) {...} void removeMidiIn(midi* midi_dsp) {...} ... ... }; Several concrete implementations subclassing midi_handler using native APIs have been written and can be found in the faust/midi folder: Depending on the native MIDI API being used, event timestamps are either expressed in absolute time or in frames. They are converted to offsets expressed in samples relative to the beginning of the audio buffer. 
Connected with the MidiUI class (a subclass of UI ), they allow a given DSP to be controlled with incoming MIDI messages or possibly send MIDI messages when its internal control state changes. In the following piece of code, a MidiUI object is created and connected to a rt_midi MIDI messages handler (using the RTMidi library), then given as a parameter to the standard buildUserInterface to control DSP parameters: ... rt_midi midi_handler(\"MIDI\"); MidiUI midi_interface(&midi_handler); DSP->buildUserInterface(&midi_interface); ... UI Architecture Modules A UI architecture module links user actions (i.e., via graphic widgets, command line parameters, OSC messages, etc.) with the Faust program to control. It is responsible for associating program parameters to user interface elements and to update parameter\u2019s values according to user actions. This association is triggered by the buildUserInterface call, where the dsp asks a UI object to build the DSP module controllers. Since the interface is basically graphic-oriented, the main concepts are widget based: an UI architecture module is semantically oriented to handle active widgets, passive widgets and widgets layout. A Faust UI architecture module derives the UI base class: template struct UIReal { UIReal() {} virtual ~UIReal() {} // -- widget's layouts virtual void openTabBox(const char* label) = 0; virtual void openHorizontalBox(const char* label) = 0; virtual void openVerticalBox(const char* label) = 0; virtual void closeBox() = 0; // -- active widgets virtual void addButton(const char* label, REAL* zone) = 0; virtual void addCheckButton(const char* label, REAL* zone) = 0; virtual void addVerticalSlider(const char* label, REAL* zone, REAL init, REAL min, REAL max, REAL step) = 0; virtual void addHorizontalSlider(const char* label, REAL* zone, REAL init, REAL min, REAL max, REAL step) = 0; virtual void addNumEntry(const char* label, REAL* zone, REAL init, REAL min, REAL max, REAL step) = 0; // -- passive widgets virtual void addHorizontalBargraph(const char* label, REAL* zone, REAL min, REAL max) = 0; virtual void addVerticalBargraph(const char* label, REAL* zone, REAL min, REAL max) = 0; // -- soundfiles virtual void addSoundfile(const char* label, const char* filename, Soundfile** sf_zone) = 0; // -- metadata declarations virtual void declare(REAL* zone, const char* key, const char* val) {} }; struct UI : public UIReal { UI() {} virtual ~UI() {} }; The FAUSTFLOAT* zone element is the primary connection point between the control interface and the dsp code. The compiled dsp Faust code will give access to all internal control value addresses used by the dsp code by calling the approriate addButton , addVerticalSlider , addNumEntry etc. methods (depending of what is described in the original Faust DSP source code). The control/UI code keeps those addresses, and will typically change their pointed values each time a control value in the dsp code has to be changed. On the dsp side, all control values are sampled once at the beginning of the compute method, so that to keep the same value during the entire audio buffer. Writing and reading the control values is typically done in two different threads: the controller (a GUI, an OSC or MIDI.etc. one) write the values, and the audio real-time thread read them in the audio callback. Since writing/reading the FAUSTFLOAT* zone element is atomic, there is no need (in general) of complex synchronization mechanism between the writer (controller) and the reader (the Faust dsp object). 
Here is part of the UI classes hierarchy:

Active Widgets

Active widgets are graphical elements controlling a parameter value. They are initialized with the widget name and a pointer to the linked value, using the FAUSTFLOAT macro type (defined at compile time as either float or double). Active widgets in Faust are Button, CheckButton, VerticalSlider, HorizontalSlider and NumEntry. A GUI architecture must implement a method addXxx(const char* name, FAUSTFLOAT* zone, ...) for each active widget. Additional parameters are available for Slider and NumEntry: the init, min, max and step values.

Passive Widgets

Passive widgets are graphical elements reflecting values. Similarly to active widgets, they are initialized with the widget name and a pointer to the linked value. Passive widgets in Faust are HorizontalBarGraph and VerticalBarGraph. A UI architecture must implement a method addXxx(const char* name, FAUSTFLOAT* zone, ...) for each passive widget. Additional parameters are available, depending on the passive widget type.

Widgets Layout

Generally, a GUI is hierarchically organized into boxes and/or tab boxes. A UI architecture must support the following methods to set up this hierarchy:

openTabBox(const char* label);
openHorizontalBox(const char* label);
openVerticalBox(const char* label);
closeBox(const char* label);

Note that all the widgets are added to the current box.

Metadata

The Faust language allows widget labels to contain metadata enclosed in square brackets as key/value pairs. These metadata are handled at GUI level by a declare method taking as arguments a pointer to the widget's associated zone, the metadata key and the value:

declare(FAUSTFLOAT* zone, const char* key, const char* value);

Here is the table of currently supported general metadata (key: value):

tooltip: actual string content
hidden: 0 or 1
unit: Hz or dB
scale: log or exp
style: knob or led or numerical
style: radio{'label1':v1;'label2':v2...}
style: menu{'label1':v1;'label2':v2...}
acc: axe curve amin amid amax
gyr: axe curve amin amid amax
screencolor: red or green or blue or white

Here acc means accelerometer and gyr means gyroscope; both use the same parameters (a mapping description) but are linked to different sensors. A typical example where several metadata are defined could be:

nentry("freq [unit:Hz][scale:log][acc:0 0 -30 0 30][style:menu{'white noise':0;'pink noise':1;'sine':2}][hidden:0]", 0, 20, 100, 1)

or:

vslider("freq [unit:dB][style:knob][gyr:0 0 -30 0 30]", 0, 20, 100, 1)

When one or several metadata are added in the same item label, they will appear in the generated code as one or several successive declare(FAUSTFLOAT* zone, const char* key, const char* value); lines before the line describing the item itself. Thus the UI managing code has to associate them with the proper item. Look at the MetaDataUI class for an example of this technique. MIDI specific metadata are described here and are decoded by the MidiUI class. Note that metadata are not supported in all architecture files. Some of them (like acc or gyr for example) only make sense on platforms with accelerometer or gyroscope sensors. The set of metadata may be extended in the future and can possibly be adapted for a specific project. They can be decoded using the MetaDataUI class.
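To illustrate how these metadata reach the UI architecture, here is a sketch of what the compiler typically generates in buildUserInterface for a slider carrying two metadata (the fVslider0 field name and the exact values are illustrative):

// For: vslider("freq [unit:Hz][style:knob]", 200, 20, 2000, 1)
// the declare calls precede the widget they qualify:
ui_interface->declare(&fVslider0, "unit", "Hz");
ui_interface->declare(&fVslider0, "style", "knob");
ui_interface->addVerticalSlider("freq", &fVslider0, 200.0f, 20.0f, 2000.0f, 1.0f);

A UI implementation (like MetaDataUI) therefore has to buffer declare calls and attach them to the next addXxx call it receives.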
Graphic-oriented, pure controllers, code generator UI Even if the UI architecture module is graphic-oriented, a given implementation can perfectly choose to ignore all layout information and only keep the controller ones, like the buttons, sliders, nentries, bargraphs. This is typically what is done in the MidiUI or OSCUI architectures. Note that pure code generator can also be written. The JSONUI UI architecture is an example of an architecture generating the DSP JSON description as a text file. DSP JSON Description The full description of a given compiled DSP can be generated as a JSON file, to be used at several places in the architecture system. This JSON describes the DSP with its inputs/outputs number, some metadata (filename, name, used compilation parameters, used libraries etc.) as well as its UI with a hierarchy of groups up to terminal items ( buttons , sliders , nentries , bargraphs ) with all their parameters ( type , label , shortname , address , meta , init , min , max and step values). For the following DSP program: import(\"stdfaust.lib\"); vol = hslider(\"volume [unit:dB]\", 0, -96, 0, 0.1) : ba.db2linear : si.smoo; freq = hslider(\"freq [unit:Hz]\", 600, 20, 2000, 1); process = vgroup(\"Oscillator\", os.osc(freq) * vol) <: (_,_); The generated JSON file is then: { \"name\": \"foo\", \"filename\": \"foo.dsp\", \"version\": \"2.40.8\", \"compile_options\": \"-lang cpp -es 1 -mcd 16 -single -ftz 0\", \"library_list\": [], \"include_pathnames\": [\"/usr/local/share/faust\"], \"inputs\": 0, \"outputs\": 2, \"meta\": [ { \"basics.lib/name\": \"Faust Basic Element Library\" }, { \"basics.lib/version\": \"0.6\" }, { \"compile_options\": \"-lang cpp -es 1 -mcd 16 -single -ftz 0\" }, { \"filename\": \"foo.dsp\" }, { \"maths.lib/author\": \"GRAME\" }, { \"maths.lib/copyright\": \"GRAME\" }, { \"maths.lib/license\": \"LGPL with exception\" }, { \"maths.lib/name\": \"Faust Math Library\" }, { \"maths.lib/version\": \"2.5\" }, { \"name\": \"tes\" }, { \"oscillators.lib/name\": \"Faust Oscillator Library\" }, { \"oscillators.lib/version\": \"0.3\" }, { \"platform.lib/name\": \"Generic Platform Library\" }, { \"platform.lib/version\": \"0.2\" }, { \"signals.lib/name\": \"Faust Signal Routing Library\" }, { \"signals.lib/version\": \"0.1\" } ], \"ui\": [ { \"type\": \"vgroup\", \"label\": \"Oscillator\", \"items\": [ { \"type\": \"hslider\", \"label\": \"freq\", \"shortname\": \"freq\", \"address\": \"/Oscillator/freq\", \"meta\": [ { \"unit\": \"Hz\" } ], \"init\": 600, \"min\": 20, \"max\": 2000, \"step\": 1 }, { \"type\": \"hslider\", \"label\": \"volume\", \"shortname\": \"volume\", \"address\": \"/Oscillator/volume\", \"meta\": [ { \"unit\": \"dB\" } ], \"init\": 0, \"min\": -96, \"max\": 0, \"step\": 0.1 } ] } ] } The JSON file can be generated with faust -json foo.dsp command, or programmatically using the JSONUI UI architecture (see next Some Useful UI Classes and Tools for Developers section). 
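As a sketch of the programmatic path (the exact JSONUI constructor arguments and method names may differ between Faust versions, so treat this as an assumption to be checked against the JSONUI header), generating the JSON description from C++ could look like:

#include <iostream>
#include "faust/gui/JSONUI.h"

// 'mydsp' is the compiler generated class
mydsp DSP;
JSONUI json_ui(DSP.getNumInputs(), DSP.getNumOutputs());
DSP.buildUserInterface(&json_ui);
std::cout << json_ui.JSON() << std::endl; // a description similar to the 'faust -json' output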
Here is the description of ready-to-use UI classes, followed by classes to be used in developer code:

GUI Builders

Here is the description of the main GUI classes:

the GTKUI class uses the GTK toolkit to create a Graphical User Interface with a proper group-based layout
the QTUI class uses the QT toolkit to create a Graphical User Interface with a proper group-based layout
the JuceUI class uses the JUCE framework to create a Graphical User Interface with a proper group-based layout

Non-GUI Controllers

Here is the description of the main non-GUI controller classes:

the OSCUI class implements OSC remote control in both directions
the httpdUI class implements HTTP remote control, using the libmicrohttpd library to embed an HTTP server inside the application. Then by opening a browser on a specific URL, the GUI will appear and allow the distant application or plugin to be controlled. The connection works in both directions
the MIDIUI class implements MIDI control in both directions; it is explained in more depth later on

Some Useful UI Classes and Tools for Developers

Some useful UI classes and tools can possibly be reused in developer code:

the MapUI class establishes a mapping between UI items and their labels, shortnames or paths, and offers a setParamValue/getParamValue API to set and get their values. It uses a helper PathBuilder class to create complete shortnames and pathnames to the leaves in the UI hierarchy. Note that the item path encodes the UI hierarchy in the form of a /group1/group2/.../label string, and is the way to distinguish controls that may have the same label but a different location in the UI tree. Using shortnames (built so that they never collide) is an alternative way to access items. The setParamValue/getParamValue API takes either labels, shortnames or paths to describe the control, but using shortnames or paths is the safer way to use it
the extended APIUI offers a setParamValue/getParamValue API similar to MapUI, with additional methods to deal with accelerometer/gyroscope kinds of metadata
the MetaDataUI class decodes all currently supported metadata and can be used to retrieve their values
the JSONUI class allows us to generate the JSON description of a given DSP
the JSONUIDecoder class is used to decode the DSP JSON description and implement its buildUserInterface and metadata methods
the FUI class allows us to save and restore the parameters state as a text file
the SoundUI class, with the associated Soundfile class, is used to implement the soundfile primitive and load the described audio resources (typically audio files), using different concrete implementations, either based on libsndfile (with the LibsndfileReader.h file) or JUCE (with the JuceReader file). Paths to sound files can be absolute, but it should be noted that a relative path mechanism can be set up when creating an instance of SoundUI, in order to load sound files with a more flexible strategy
the ControlSequenceUI class, with the associated OSCSequenceReader class, allows parameter changes to be controlled in time, using the OSC time tag format. Changing the control values will have to be mixed with audio rendering. Look at the sndfile.cpp use-case
the ValueConverter file contains several mapping classes used to map user interface values (for example a GUI slider delivering values between 0 and 1) to Faust values (for example a vslider between 20 and 2000) using linear/log/exp scales. It also provides classes to handle the [acc:a b c d e] and [gyr:a b c d e] Sensors Control Metadatas.
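As a short usage sketch of the MapUI class described above (the /Oscillator/freq path refers to the JSON example of the previous section):

#include "faust/gui/MapUI.h"

mydsp DSP;                       // compiler generated class
MapUI map_ui;
DSP.buildUserInterface(&map_ui); // collect labels/shortnames/paths and their zones

// Set and get a control value, here using its full path
map_ui.setParamValue("/Oscillator/freq", 440);
FAUSTFLOAT freq = map_ui.getParamValue("/Oscillator/freq");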
Multi-Controller and Synchronization

A given DSP can perfectly be controlled by several UI classes at the same time, and they will all read and write the same DSP control memory zones. Here is an example of code using a GUI based on the GTKUI architecture, as well as OSC control using OSCUI:

...
GTKUI gtk_interface(name, &argc, &argv);
DSP->buildUserInterface(&gtk_interface);
OSCUI osc_interface(name, argc, argv);
DSP->buildUserInterface(&osc_interface);
...

Since several controllers access the same values, you may have to synchronize them, for instance so that GUI sliders or buttons reflect a state changed by the OSCUI controller at reception time, or so that OSC messages are sent each time UI items like sliders or buttons are moved. This synchronization mechanism is implemented in a generic way in the GUI class. First the uiItemBase class is defined as the basic synchronizable memory zone; items are then grouped in a list controlling the same zone from different GUI instances. The uiItemBase::modifyZone method is used to change the uiItemBase state at reception time, and uiItemBase::reflectZone will be called to reflect a new value, and can change the widget layout for instance, or send a message (OSC, MIDI...).

All classes needing to use this synchronization mechanism have to subclass the GUI class, which keeps all of them at runtime in a static GUI::fGuiList class variable. This is the case for the previously used GTKUI and OSCUI classes. Note that when using the GUI class, the two following static class variables have to be defined in the code (once in one .cpp file of the project), as in this code example:

// Globals
std::list<GUI*> GUI::fGuiList;
ztimedmap GUI::gTimedZoneMap;

Finally the static GUI::updateAllGuis() synchronization method has to be called regularly, in the application or plugin event management loop, or in a periodic timer for instance. This is typically implemented in the GUI::run method, which has to be called to start event or message processing. In the following code, the OSCUI::run method is called first to start processing OSC messages, then the blocking GTKUI::run method, which opens the GUI window and has to be closed to finally finish the application:

...
// Start OSC messages processing
osc_interface.run();
// Start the GTK GUI as the last one, since it blocks until the opened window is closed
gtk_interface.run();
...

DSP Architecture Modules

The Faust compiler produces a DSP module whose format will depend on the chosen backend: a C++ class with the -lang cpp option, a data structure with associated functions with the -lang c option, an LLVM IR module with the -lang llvm option, a WebAssembly binary module with the -lang wasm option, a bytecode stream with the -lang interp option... and so on.

The Base dsp Class

In C++, the generated class derives from a base dsp class:

class dsp { public: dsp() {} virtual ~dsp() {} /* Return instance number of audio inputs */ virtual int getNumInputs() = 0; /* Return instance number of audio outputs */ virtual int getNumOutputs() = 0; /** * Trigger the ui_interface parameter with instance specific calls * to 'openTabBox', 'addButton', 'addVerticalSlider'... in order to build the UI.
* * @param ui_interface - the user interface builder */ virtual void buildUserInterface(UI* ui_interface) = 0; /* Return the sample rate currently used by the instance */ virtual int getSampleRate() = 0; /** * Global init, calls the following methods: * - static class 'classInit': static tables initialization * - 'instanceInit': constants and instance state initialization * * @param sample_rate - the sampling rate in Hz */ virtual void init(int sample_rate) = 0; /** * Init instance state * * @param sample_rate - the sampling rate in Hz */ virtual void instanceInit(int sample_rate) = 0; /** * Init instance constant state * * @param sample_rate - the sampling rate in HZ */ virtual void instanceConstants(int sample_rate) = 0; /* Init default control parameters values */ virtual void instanceResetUserInterface() = 0; /* Init instance state (like delay lines..) but keep the control parameter values */ virtual void instanceClear() = 0; /** * Return a clone of the instance. * * @return a copy of the instance on success, otherwise a null pointer. */ virtual dsp* clone() = 0; /** * Trigger the Meta* parameter with instance specific calls to 'declare' * (key, value) metadata. * * @param m - the Meta* meta user */ virtual void metadata(Meta* m) = 0; /** * DSP instance computation, to be called with successive in/out audio buffers. * * @param count - the number of frames to compute * @param inputs - the input audio buffers as an array of non-interleaved * FAUSTFLOAT samples (eiher float, double or quad) * @param outputs - the output audio buffers as an array of non-interleaved * FAUSTFLOAT samples (eiher float, double or quad) * */ virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) = 0; /** * Alternative DSP instance computation method for use by subclasses, incorporating an additional `date_usec` parameter, * which specifies the timestamp of the first sample in the audio buffers. * * @param date_usec - the timestamp in microsec given by audio driver. By convention timestamp of -1 means 'no timestamp conversion', * events already have a timestamp expressed in frames. * @param count - the number of frames to compute * @param inputs - the input audio buffers as an array of non-interleaved * FAUSTFLOAT samples (either float, double or quad) * @param outputs - the output audio buffers as an array of non-interleaved * FAUSTFLOAT samples (either float, double or quad) * */ virtual void compute(double date_usec, int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) = 0; }; The dsp class is central to the Faust architecture design: the getNumInputs , getNumOutputs methods provides information about the signal processor the buildUserInterface method creates the user interface using a given UI class object (see later) the init method (and some more specialized methods like instanceInit , instanceConstants , etc.) is called to initialize the dsp object with a given sampling rate, typically obtained from the audio architecture the compute method is called by the audio architecture to execute the actual audio processing. 
It takes a count number of frames to process, and inputs and outputs arrays of non-interleaved float/double samples, to be allocated and handled by the audio driver with the required DSP input and output channels (as given by getNumInputs and getNumOutputs)
the clone method can be used to duplicate the instance
the metadata(Meta* m) method can be called with a Meta object to decode the instance global metadata (see next section)

(note that the FAUSTFLOAT label is typically defined to be the actual sample type: either float or double, using #define FAUSTFLOAT float in the code for instance).

For a given compiled DSP program, the compiler will generate a mydsp subclass of dsp and fill in the different methods (the actual name can be changed using the -cn option). For dynamic code producing backends like the LLVM IR, Cmajor or Interpreter ones, the actual code (an LLVM module, a Cmajor module or a bytecode stream) is wrapped by some additional C++ glue code, to finally produce an llvm_dsp typed object (defined in the llvm-dsp.h file), a cmajorpatch_dsp typed object (defined in the cmajorpatch-dsp.h file) or an interpreter_dsp typed object (defined in the interpreter-dsp.h file), ready to be used with the UI and audio C++ classes (like the C++ generated class). See the following class diagram:

Global DSP metadata

All global metadata declarations in Faust start with declare, followed by a key and a string. For example:

declare name "Noise";

allows us to specify the name of a Faust program as a whole. Unlike regular comments, metadata declarations will appear in the C++ code generated by the Faust compiler. For instance the Faust program:

declare name "NoiseProgram";
declare author "MySelf";
declare copyright "MyCompany";
declare version "1.00";
declare license "BSD";
import("stdfaust.lib");
process = no.noise;

will generate the following C++ metadata(Meta* m) method in the dsp class:

void metadata(Meta* m) {
    m->declare("author", "MySelf");
    m->declare("compile_options", "-lang cpp -es 1 -scal -ftz 0");
    m->declare("copyright", "MyCompany");
    m->declare("filename", "metadata.dsp");
    m->declare("license", "BSD");
    m->declare("name", "NoiseProgram");
    m->declare("noises.lib/name", "Faust Noise Generator Library");
    m->declare("noises.lib/version", "0.0");
    m->declare("version", "1.00");
}

which interacts with an instance of an implementation class of the following virtual Meta class:

struct Meta {
    virtual ~Meta() {};
    virtual void declare(const char* key, const char* value) = 0;
};

These declarations are part of three different types of global metadata:

metadata like compile_options or filename are automatically generated
metadata like author or copyright are part of the Global Metadata
metadata like noises.lib/name are part of the Function Metadata

Specialized subclasses of the Meta class can be implemented to decode the needed key/value pairs for a given use-case (a short sketch is given at the end of this subsection).

Macro Construction of DSP Components

The Faust program specification is usually entirely done in the language itself. But in some specific cases it may be useful to develop separate DSP components and combine them in a more complex setup. Since taking advantage of the huge number of already available UI and audio architecture files is important, keeping the same dsp API is preferable, so that more complex DSPs can be controlled and audio rendered the usual way. Extended DSP classes will typically subclass the dsp base class and override or complete part of its API.
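Returning to the Meta class above, here is a minimal sketch (the meta_map name is hypothetical) of a specialized subclass that stores all declared key/value pairs in a std::map, which can then be queried for the name or version:

#include <map>
#include <string>
#include "faust/gui/meta.h"

// Hypothetical helper: collect all global metadata in a map
struct meta_map : public Meta {
    std::map<std::string, std::string> fMeta;
    void declare(const char* key, const char* value) { fMeta[key] = value; }
};

// Usage:
//   meta_map meta;
//   DSP.metadata(&meta);
//   std::string name = meta.fMeta["name"];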
DSP Decorator Pattern

A dsp_decorator class, subclass of the root dsp class, has first been defined. Following the decorator design pattern, it allows behavior to be added to an individual object, either statically or dynamically. As examples of the decorator pattern, the timed_dsp class decorates a given DSP with sample accurate control capability, and the mydsp_poly class does so for polyphonic DSPs, as explained in the next sections.

Combining DSP Components

A few additional macro construction classes, subclasses of the root dsp class, have been defined in the dsp-combiner.h header file with a five-operator construction API:

the dsp_sequencer class combines two DSPs in sequence, assuming that the number of outputs of the first DSP equals the number of inputs of the second one. It somewhat mimics the sequence (that is :) operator of the language by combining two separate C++ objects. Its buildUserInterface method is overloaded to group the two DSPs in a tabgroup, so that control parameters of both DSPs can be individually controlled. Its compute method is overloaded to call each DSP compute in sequence, using the intermediate output buffer produced by the first DSP as the input buffer given to the second one.

the dsp_parallelizer class combines two DSPs in parallel. It somewhat mimics the parallel (that is ,) operator of the language by combining two separate C++ objects. Its getNumInputs/getNumOutputs methods are overloaded to correctly reflect the inputs/outputs of the resulting DSP as the sum of the two combined ones. Its buildUserInterface method is overloaded to group the two DSPs in a tabgroup, so that control parameters of both DSPs can be individually controlled. Its compute method is overloaded to call each DSP compute, each DSP consuming and producing its own number of input/output audio buffers taken from the method parameters.

The same methodology is followed to implement the three remaining composition operators (split, merge, recursion), which ends up with a C++ API to combine DSPs with the usual five operators: createDSPSequencer, createDSPParallelizer, createDSPSplitter, createDSPMerger, createDSPRecursiver, to be used at C++ level to dynamically combine DSPs. And finally the createDSPCrossfader tool allows you to crossfade between two DSP modules. The crossfade parameter (a slider) controls the mix between the two module outputs. When Crossfade = 1, only the first DSP is computed; when Crossfade = 0, only the second DSP is computed; otherwise both DSPs are computed and mixed.

Note that this idea of decorating or combining several C++ dsp objects can perfectly be extended in specific projects to meet other needs, like muting some part of a graph of several DSPs for instance. But keep in mind that keeping the dsp API then allows all already available UI and audio based classes to be reused.

Sample Accurate Control

DSP audio languages usually deal with several timing dimensions when treating control events and generating audio samples. For performance reasons, systems maintain a separate audio rate for sample generation and a control rate for asynchronous message handling. The audio stream is most often computed by blocks, and control is updated between blocks. To smooth control parameter changes, some languages choose to interpolate parameter values between blocks. In some cases control may be more finely interleaved with audio rendering, and some languages simply choose to interleave control and sample computation at the sample level.
Although the Faust language permits the description of sample level algorithms (like recursive filters, etc.), Faust generated DSPs are usually computed by blocks. Underlying audio architectures give a fixed size buffer over and over to the DSP compute method, which consumes and produces audio samples.

Control to DSP Link

In the current version of the Faust generated code, the primary connection point between the control interface and the DSP code is simply a memory zone. For control inputs, the architecture layer continuously writes values in this zone, which is then sampled by the DSP code at the beginning of the compute method, and used with the same values throughout the call. Because of this simple control/DSP connection mechanism, the most recent value is used by the DSP code. Similarly for control outputs, the DSP code inside the compute method possibly writes several values to the same memory zone, and only the last value will be seen by the control architecture layer when the method finishes.

Although this behaviour is satisfactory for most use-cases, some specific usages need to handle the complete stream of control values with sample accurate timing. For instance, keeping all control messages and handling them at their exact position in time is critical for proper MIDI clock synchronization.

Timestamped Control

The first step consists in extending the architecture control mechanism to deal with timestamped control events. Note that this requires the underlying event control layer to support this capability. The native MIDI API for instance is usually able to deliver timestamped MIDI messages. The next step is to keep all timestamped events in a time ordered data structure, continuously written by the control side and read by the audio side. Finally the sample computation has to take account of all queued control events, and correctly change the DSP control state at successive points in time.

Slices Based DSP Computation

With timestamped control messages, changing control values at precise sample indexes in the audio stream becomes possible. A generic slices based DSP rendering strategy has been implemented in the timed_dsp class. A ring-buffer is used to transmit the stream of timestamped events from the control layer to the DSP one. In the case of MIDI control for instance, each time a message is received the ring-buffer is written with a pair containing the timestamp expressed in samples (or microseconds) and the actual MIDI message. In the DSP compute method, the ring-buffer is read to handle all messages received during the previous audio block. Since control values can change several times inside the same audio block, the DSP compute cannot be called only once with the total number of frames and the complete input/output audio buffers. The following strategy has to be used:

several slices are defined, with control values changing between consecutive slices
all control values having the same timestamp are handled together, and change the DSP control internal state. The slice is computed up to the next control parameter timestamp, until the end of the given audio block is reached
in the next figure, four slices with the sequence of c1, c2, c3, c4 frames are successively given to the DSP compute method, with the appropriate part of the audio input/output buffers.
Control values (appearing here as [v1,v2,v3], then [v1,v3], then [v1], then [v1,v2,v3] sets) are changed between slices. Since timestamped control messages from the previous audio block are used in the current block, control messages are always handled with one audio buffer of latency. Note that this slices based computation model can always be directly implemented on top of the underlying audio layer, without relying on the timed_dsp wrapper class.

Audio driver timestamping

Some audio drivers can get the timestamp of the first sample in the audio buffers, and will typically call the alternative DSP compute(double date_usec, int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) function with the correct timestamp. By convention a timestamp of -1 means 'no timestamp conversion': events already have a timestamp expressed in frames (see jackaudio_midi for an example driver using timestamps expressed in frames). The timed_dsp wrapper class is an example of a DSP class actually using the timestamp information.

Typical Use-Case

A typical Faust program can use the MIDI clock command signal to compute the Beats Per Minute (BPM) information for any synchronization need it may have. Here is a simple example of a sinusoid generated with a frequency controlled by the MIDI clock stream, and starting/stopping when receiving the MIDI start/stop messages:

import("stdfaust.lib");
// square signal (1/0), changing state
// at each received clock
clocker = checkbox("MIDI clock[midi:clock]");
// ON/OFF button controlled
// with MIDI start/stop messages
play = checkbox("On/Off [midi:start][midi:stop]");
// detect front
front(x) = (x-x') != 0.0;
// count number of peaks during one second
freq(x) = (x-x@ma.SR) : + ~ _;
process = os.osc(8*freq(front(clocker))) * play;

Each received group of 24 clocks moves the time position by exactly one beat. It is therefore absolutely mandatory to never lose any MIDI clock message, and the standard memory zone based model, with its use the last received control value semantic, is not adapted. The DSP object that needs to be controlled using the sample-accurate machinery can then simply be decorated using the timed_dsp class with the following kind of code:

dsp* sample_accurate_dsp = new timed_dsp(DSP);

Note that the described sample accurate MIDI clock synchronization model can currently only be used at input level. Because of the simple memory zone based connection point between the control interface and the DSP code, output controls (like bargraphs) cannot generate a stream of control values. Thus a reliable MIDI clock generator cannot be implemented with the current approach.

Polyphonic Instruments

Directly programming polyphonic instruments in Faust is perfectly possible. It is also needed if very complex signal interactions between the different voices have to be described. But since all voices would always be computed, this approach can be too CPU costly for simpler or more limited needs. In this case, describing a single voice in a Faust DSP program and externally combining several of them with a special polyphonic-instrument-aware architecture file is a better solution. Moreover, this special architecture file takes care of dynamic voice allocation and of MIDI control message decoding and mapping.

Polyphonic ready DSP Code

By convention, Faust architecture files with polyphonic capabilities expect to find control parameters named freq, gain, and gate.
A metadata line of the declare nvoices "8"; kind, with the desired number of voices, can be added in the source code. In the case of MIDI control, the freq parameter (which should be a frequency) will be automatically computed from MIDI note numbers, gain (which should be a value between 0 and 1) from velocity, and gate from keyon/keyoff events. Thus gate can be used as a trigger signal for any envelope generator, etc.

Using the mydsp_poly Class

The single voice has to be described by a Faust DSP program; the mydsp_poly class is then used to combine several voices and create a polyphonic ready DSP:

the poly-dsp.h file contains the definition of the mydsp_poly class used to wrap the DSP voice into the polyphonic architecture. This class maintains an array of dsp* objects, manages dynamic voice allocation, decodes and maps MIDI control messages, mixes all running voices, and stops a voice when its output level decreases below a given threshold
as a subclass of dsp, the mydsp_poly class redefines the buildUserInterface method. By convention all allocated voices are grouped in a global Polyphonic tabgroup. The first tab contains a Voices group, a master-like component used to change parameters on all voices at the same time, with a Panic button to be used to stop running voices, followed by one tab for each voice. Graphical User Interface components will then reflect the multi-voice structure of the new polyphonic DSP

The resulting polyphonic DSP object can be used as usual, connected to the needed audio driver, and possibly to other UI control objects like OSCUI, httpdUI, etc. Having this new UI hierarchical view allows complete OSC control of each single voice and its control parameters, but also of all voices using the master component. The following OSC messages reflect the same DSP code either compiled normally, or in polyphonic mode (only part of the OSC hierarchies are displayed here):

// Mono mode
/Organ/vol f -10.0
/Organ/pan f 0.0

// Polyphonic mode
/Polyphonic/Voices/Organ/pan f 0.0
/Polyphonic/Voices/Organ/vol f -10.0
...
/Polyphonic/Voice1/Organ/vol f -10.0
/Polyphonic/Voice1/Organ/pan f 0.0
...
/Polyphonic/Voice2/Organ/vol f -10.0
/Polyphonic/Voice2/Organ/pan f 0.0

Note that to save space on the screen, the /Polyphonic/VoiceX/xxx syntax is used when the number of allocated voices is less than 8, and the /Polyphonic/VX/xxx syntax is used when more voices are used. The polyphonic instrument allocation takes the DSP to be used for one voice, the desired number of voices, the dynamic voice allocation state, and the group state which controls whether separate voices are displayed or not:

dsp* poly = new mydsp_poly(dsp, 2, true, true);

Note that a polyphonic instrument may also be used outside of a MIDI control context, so that all voices will always be running and possibly controlled with OSC messages for instance, as with the following code:

dsp* poly = new mydsp_poly(dsp, 8, false, true);

Polyphonic Instrument With a Global Output Effect

Polyphonic instruments may be used with an output effect. Putting that effect in the main Faust code is generally not a good idea since it would be instantiated for each voice, which would be very inefficient. A convention has been defined to use the effect = some effect; line in the DSP source code.
The actual effect definition has to be extracted from the DSP code, compiled separately, and then combined using the previously presented dsp_sequencer class to connect the polyphonic DSP in sequence with a unique global effect, with something like:

dsp* poly = new dsp_sequencer(new mydsp_poly(dsp, 2, true, true), new effect());

Some helper classes, like the base dsp_poly_factory class and the concrete implementations llvm_dsp_poly_factory (when using the LLVM backend) or interpreter_dsp_poly_factory (when using the Interpreter backend), can also be used to automatically handle the voice and effect parts of the DSP.

Controlling the Polyphonic Instrument

The mydsp_poly class is also ready for MIDI control (as a class implementing the midi interface) and can react to keyOn/keyOff and pitchWheel events. Other MIDI control parameters can directly be added in the DSP source code as MIDI metadata. To receive MIDI events, the created polyphonic DSP will be automatically added to the midi_handler object when calling buildUserInterface on a MidiUI object.

Deploying the Polyphonic Instrument

Several architecture files and associated scripts have been updated to handle polyphonic instruments. As an example on OSX, the script faust2caqt foo.dsp can be used to create a polyphonic CoreAudio/QT application. The desired number of voices is either declared in an nvoices metadata or changed with the -nvoices num additional parameter. MIDI control is activated using the -midi parameter. The number of allocated voices can possibly be changed at runtime using the -nvoices parameter to change the default value (so using ./foo -nvoices 16 for instance). Several other scripts have been adapted using the same conventions. A command like faust2caqt -midi -nvoices 12 inst.dsp -effect effect.dsp has to be used, with inst.dsp and effect.dsp in the same folder, and the number of outputs of the instrument matching the number of inputs of the effect. Polyphonic ready faust2xx scripts will then compile the polyphonic instrument and the effect, combine them in sequence, and create a ready-to-use DSP.

Custom Memory Manager

In C and C++, the Faust compiler produces a class (or a struct in C) to be instantiated to create each DSP instance. The standard generation model produces a flat memory layout, where all fields (scalars and arrays) are simply consecutive in the generated code (following the compilation order). So the DSP is allocated in a single block of memory, either on the stack or the heap depending on the use-case. The following DSP program:

import("stdfaust.lib");
gain = hslider("gain", 0.5, 0, 1, 0.01);
feedback = hslider("feedback", 0.8, 0, 1, 0.01);
echo(del_sec, fb, g) = + ~ de.delay(50000, del_samples) * fb * g
with {
    del_samples = del_sec * ma.SR;
};
process = echo(1.6, 0.6, 0.7), echo(0.7, feedback, gain);

will have the flat memory layout:

int IOTA0;
int fSampleRate;
int iConst1;
float fRec0[65536];
FAUSTFLOAT fHslider0;
FAUSTFLOAT fHslider1;
int iConst2;
float fRec1[65536];

So the scalars fHslider0 and fHslider1 correspond to the gain and feedback controllers. The iConst1 and iConst2 values are typically computed once at init time using the dynamically given fSampleRate value, and used in the DSP loop later on. The fRec0 and fRec1 arrays are used for the recursive delays, and finally the shared IOTA0 index is used to access them.
Here is the generated compute function:

virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) {
    FAUSTFLOAT* input0 = inputs[0];
    FAUSTFLOAT* input1 = inputs[1];
    FAUSTFLOAT* output0 = outputs[0];
    FAUSTFLOAT* output1 = outputs[1];
    float fSlow0 = float(fHslider0) * float(fHslider1);
    for (int i0 = 0; i0 < count; i0 = i0 + 1) {
        fRec0[IOTA0 & 65535] = float(input0[i0]) + 0.419999987f * fRec0[(IOTA0 - iConst1) & 65535];
        output0[i0] = FAUSTFLOAT(fRec0[IOTA0 & 65535]);
        fRec1[IOTA0 & 65535] = float(input1[i0]) + fSlow0 * fRec1[(IOTA0 - iConst2) & 65535];
        output1[i0] = FAUSTFLOAT(fRec1[IOTA0 & 65535]);
        IOTA0 = IOTA0 + 1;
    }
}

The -mem option

On audio boards where the memory is split into several blocks (like SRAM, SDRAM...) with different access times, it becomes important to refine the DSP memory model so that the DSP structure is not allocated in a single block of memory, but possibly distributed over all available blocks. The idea is then to allocate the parts of the DSP that are often accessed in fast memory and the other ones in slow memory. The first remark is that scalar values will typically stay in the DSP structure, and the point is to move the big array buffers (fRec0 and fRec1 in the example) into separate memory blocks. The -mem (--memory-manager) option can be used to generate adapted code. On the previous DSP program, we now have the following generated C++ code:

int IOTA0;
int fSampleRate;
int iConst1;
float* fRec0;
FAUSTFLOAT fHslider0;
FAUSTFLOAT fHslider1;
int iConst2;
float* fRec1;

The two fRec0 and fRec1 arrays have become pointers, and will be allocated elsewhere. An external memory manager is needed to interact with the DSP code. The proposed model does the following:

in a first step the generated C++ code informs the memory allocator about its needs in terms of 1) number of separate memory zones, 2) their size and 3) their access characteristics, like the number of Reads and Writes for each frame computation. This is done by generating an additional static memoryInfo method
with the complete information available, the memory manager can then define the best strategy to allocate all separate memory zones
an additional memoryCreate method is generated to allocate each of the separate zones
an additional memoryDestroy method is generated to deallocate each of the separate zones

Here is the API for the memory manager:

struct dsp_memory_manager { virtual ~dsp_memory_manager() {} /** * Inform the Memory Manager with the number of expected memory zones. * @param count - the number of memory zones */ virtual void begin(size_t count); /** * Give the Memory Manager information on a given memory zone. * @param size - the size in bytes of the memory zone * @param reads - the number of Read access to the zone used to compute one frame * @param writes - the number of Write access to the zone used to compute one frame */ virtual void info(size_t size, size_t reads, size_t writes) {} /** * Inform the Memory Manager that all memory zones have been described, * to possibly start a 'compute the best allocation strategy' step. */ virtual void end(); /** * Allocate a memory zone. * @param size - the memory zone size in bytes */ virtual void* allocate(size_t size) = 0; /** * Destroy a memory zone.
* @param ptr - the memory zone pointer to be deallocated */ virtual void destroy(void* ptr) = 0; }; A class static member is added in the mydsp class, to be set with an concrete memory manager later on: dsp_memory_manager* mydsp::fManager = nullptr; The C++ generated code now contains a new memoryInfo method, which interacts with the memory manager: static void memoryInfo() { fManager->begin(3); // mydsp fManager->info(56, 9, 1); // fRec0 fManager->info(262144, 2, 1); // fRec1 fManager->info(262144, 2, 1); fManager->end(); } The begin method is first generated to inform that three separated memory zones will be needed. Then three consecutive calls to the info method are generated, one for the DSP object itself, one for each recursive delay array. The end method is then called to finish the memory layout description, and let the memory manager prepare the actual allocations. Note that the memory layout information is also available in the JSON file generated using the -json option, to possibly be used statically by the architecture machinery (that is at compile time). With the previous program, the memory layout section is: \"memory_layout\": [ { \"name\": \"mydsp\", \"type\": \"kObj_ptr\", \"size\": 0, \"size_bytes\": 56, \"read\": 9, \"write\": 1 }, { \"name\": \"IOTA0\", \"type\": \"kInt32\", \"size\": 1, \"size_bytes\": 4, \"read\": 7, \"write\": 1 }, { \"name\": \"iConst1\", \"type\": \"kInt32\", \"size\": 1, \"size_bytes\": 4, \"read\": 1, \"write\": 0 }, { \"name\": \"fRec0\", \"type\": \"kFloat_ptr\", \"size\": 65536, \"size_bytes\": 262144, \"read\": 2, \"write\": 1 }, { \"name\": \"iConst2\", \"type\": \"kInt32\", \"size\": 1, \"size_bytes\": 4, \"read\": 1, \"write\": 0 }, { \"name\": \"fRec1\", \"type\": \"kFloat_ptr\", \"size\": 65536, \"size_bytes\": 262144, \"read\": 2, \"write\": 1 } ] Finally the memoryCreate and memoryDestroy methods are generated. The memoryCreate method asks the memory manager to allocate the fRec0 and fRec1 buffers: void memoryCreate() { fRec0 = static_cast(fManager->allocate(262144)); fRec1 = static_cast(fManager->allocate(262144)); } And the memoryDestroy method asks the memory manager to destroy them: virtual memoryDestroy() { fManager->destroy(fRec0); fManager->destroy(fRec1); } Additional static create/destroy methods are generated: static mydsp* create() { mydsp* dsp = new (fManager->allocate(sizeof(mydsp))) mydsp(); dsp->memoryCreate(); return dsp; } static void destroy(dsp* dsp) { static_cast(dsp)->memoryDestroy(); fManager->destroy(dsp); } Note that the so-called C++ placement new will be used to allocate the DSP object itself. Static tables When rdtable or rwtable primitives are used in the source code, the C++ class will contain a table shared by all instances of the class. By default, this table is generated as a static class array, and so allocated in the application global static memory. Taking the following DSP example: process = (waveform {10,20,30,40,50,60,70}, %(7)~+(3) : rdtable), (waveform {1.1,2.2,3.3,4.4,5.5,6.6,7.7}, %(7)~+(3) : rdtable); Here is the generated code in default mode: ... static int itbl0mydspSIG0[7]; static float ftbl1mydspSIG1[7]; class mydsp : public dsp { ... public: ... 
static void classInit(int sample_rate) {
    mydspSIG0* sig0 = newmydspSIG0();
    sig0->instanceInitmydspSIG0(sample_rate);
    sig0->fillmydspSIG0(7, itbl0mydspSIG0);
    mydspSIG1* sig1 = newmydspSIG1();
    sig1->instanceInitmydspSIG1(sample_rate);
    sig1->fillmydspSIG1(7, ftbl1mydspSIG1);
    deletemydspSIG0(sig0);
    deletemydspSIG1(sig1);
}

virtual void init(int sample_rate) {
    classInit(sample_rate);
    instanceInit(sample_rate);
}

virtual void instanceInit(int sample_rate) {
    instanceConstants(sample_rate);
    instanceResetUserInterface();
    instanceClear();
}
...
}

The two itbl0mydspSIG0 and ftbl1mydspSIG1 tables are static global arrays. They are filled in the classInit method. The architecture code will typically call the init method (which calls classInit) on a given DSP, to allocate the class related arrays and the DSP itself. If several DSPs are going to be allocated, calling classInit only once, then the instanceInit method on each allocated DSP, is the way to go.

In the -mem mode, the generated C++ code is now:

...
static int* itbl0mydspSIG0 = 0;
static float* ftbl1mydspSIG1 = 0;

class mydsp : public dsp {
    ...
    public:
    ...
    static dsp_memory_manager* fManager;

    static void classInit(int sample_rate) {
        mydspSIG0* sig0 = newmydspSIG0(fManager);
        sig0->instanceInitmydspSIG0(sample_rate);
        itbl0mydspSIG0 = static_cast<int*>(fManager->allocate(28));
        sig0->fillmydspSIG0(7, itbl0mydspSIG0);
        mydspSIG1* sig1 = newmydspSIG1(fManager);
        sig1->instanceInitmydspSIG1(sample_rate);
        ftbl1mydspSIG1 = static_cast<float*>(fManager->allocate(28));
        sig1->fillmydspSIG1(7, ftbl1mydspSIG1);
        deletemydspSIG0(sig0, fManager);
        deletemydspSIG1(sig1, fManager);
    }

    static void classDestroy() {
        fManager->destroy(itbl0mydspSIG0);
        fManager->destroy(ftbl1mydspSIG1);
    }

    virtual void init(int sample_rate) {}

    virtual void instanceInit(int sample_rate) {
        instanceConstants(sample_rate);
        instanceResetUserInterface();
        instanceClear();
    }
    ...
}

The two itbl0mydspSIG0 and ftbl1mydspSIG1 tables are generated as static global pointers. The classInit method uses the fManager object to allocate the tables. A new classDestroy method is generated to deallocate them. Finally the init method is now empty, since the architecture file is supposed to use the classInit/classDestroy methods once to allocate and deallocate the static tables, and the instanceInit method on each allocated DSP. The memoryInfo method now has the following shape, with the two itbl0mydspSIG0 and ftbl1mydspSIG1 tables:

static void memoryInfo() {
    fManager->begin(6);
    // mydspSIG0
    fManager->info(4, 0, 0);
    // itbl0mydspSIG0
    fManager->info(28, 1, 0);
    // mydspSIG1
    fManager->info(4, 0, 0);
    // ftbl1mydspSIG1
    fManager->info(28, 1, 0);
    // mydsp
    fManager->info(28, 0, 0);
    // iRec0
    fManager->info(8, 3, 2);
    fManager->end();
}

Defining and using a custom memory manager

When compiled with the -mem option, the client code has to define an adapted memory manager class for its specific needs. A custom memory manager is implemented by subclassing the dsp_memory_manager abstract base class, and defining the begin, end, info, allocate and destroy methods.
Here is an example of a simple heap allocating manager (implemented in the dummy-mem.cpp architecture file): struct malloc_memory_manager : public dsp_memory_manager { virtual void begin(size_t count) { // TODO: use \u2018count\u2019 } virtual void end() { // TODO: start sorting the list of memory zones, to prepare // for the future allocations done in memoryCreate() } virtual void info(size_t size, size_t reads, size_t writes) { // TODO: use 'size', \u2018reads\u2019 and \u2018writes\u2019 // to prepare memory layout for allocation } virtual void* allocate(size_t size) { // TODO: refine the allocation scheme to take // in account what was collected in info return calloc(1, size); } virtual void destroy(void* ptr) { // TODO: refine the allocation scheme to take // in account what was collected in info free(ptr); } }; The specialized malloc_memory_manager class can now be used the following way: // Allocate a global static custom memory manager static malloc_memory_manager gManager; // Setup the global custom memory manager on the DSP class mydsp::fManager = &gManager; // Make the memory manager get information on all subcontainers, // static tables, DSP and arrays and prepare memory allocation mydsp::memoryInfo(); // Done once before allocating any DSP, to allocate static tables mydsp::classInit(44100); // \u2018Placement new\u2019 and 'memoryCreate' are used inside the \u2018create\u2019 method dsp* DSP = mydsp::create(); // Init the DSP instance DSP->instanceInit(44100); ... ... // use the DSP ... // 'memoryDestroy' and memory manager 'destroy' are used to deallocate memory mydsp::destroy(); // Done once after the last DSP has been destroyed mydsp::classDestroy(); Note that the client code can still choose to allocate/deallocate the DSP instance using the regular C++ new/delete operators: // Allocate a global static custom memory manager static malloc_memory_manager gManager; // Setup the global custom memory manager on the DSP class mydsp::fManager = &gManager; // Make the memory manager get information on all subcontainers, // static tables, DSP and arrays and prepare memory allocation mydsp::memoryInfo(); // Done once before allocating any DSP, to allocate static tables mydsp::classInit(44100); // Use regular C++ new dsp* DSP = new mydsp(); /// Allocate internal buffers DSP->memoryCreate(); // Init the DSP instance DSP->instanceInit(44100); ... ... // use the DSP ... // Deallocate internal buffers DSP->memoryDestroy(); // Use regular C++ delete delete DSP; // Done once after the last DSP has been destroyed mydsp::classDestroy(); Or even on the stack with: ... // Allocation on the stack mydsp DSP; // Allocate internal buffers DSP.memoryCreate(); // Init the DSP instance DSP.instanceInit(44100); ... ... // use the DSP ... // Deallocate internal buffers DSP.memoryDestroy(); ... More complex custom memory allocators can be developed by refining this malloc_memory_manager example, possibly defining real-time memory allocators...etc... The OWL architecture file uses a custom OwlMemoryManager . Allocating several DSP instances In a multiple instances scheme, static data structures shared by all instances have to be allocated once at beginning using mydsp::classInit , and deallocated at the end using mydsp::classDestroy . Individual instances are then allocated with mydsp::create() and deallocated with mydsp::destroy() , possibly directly using regular new/delete , or using stack allocation as explained before. 
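To make this multi-instance scheme concrete, here is a minimal sketch (reusing the malloc_memory_manager example above) of allocating two instances that share the same static tables:

static malloc_memory_manager gManager;

mydsp::fManager = &gManager;   // set the manager on the class
mydsp::memoryInfo();           // describe all memory zones once
mydsp::classInit(44100);       // allocate shared static tables once

dsp* dsp1 = mydsp::create();   // placement new + memoryCreate
dsp* dsp2 = mydsp::create();
dsp1->instanceInit(44100);
dsp2->instanceInit(44100);

// ... use both instances ...

mydsp::destroy(dsp1);          // memoryDestroy + manager destroy
mydsp::destroy(dsp2);
mydsp::classDestroy();         // deallocate shared static tables once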
Measuring the DSP CPU

The measure_dsp class defined in the faust/dsp/dsp-bench.h file can decorate a given DSP object and measure the CPU consumption of its compute method. Results are given in megabytes per second (higher is better) and as the DSP CPU load at 44.1 kHz. Here is a C++ code example of its use:

static void bench(dsp* dsp, const string& name)
{
    // Init the DSP
    dsp->init(48000);
    // Wrap it with a 'measure_dsp' decorator
    measure_dsp mes(dsp, 1024, 5);
    // Measure the CPU use
    mes.measure();
    // Returns the Megabytes/seconds and relative standard deviation values
    auto res = mes.getStats();
    // Print the stats
    cout << name << " MBytes/sec : " << res.first << " "
         << "(DSP CPU % : " << (mes.getCPULoad() * 100) << ")" << endl;
}

Defined in the faust/dsp/dsp-optimizer.h file, the dsp_optimizer class uses the libfaust library and its LLVM backend to dynamically compile DSP objects produced with different Faust compiler options, and then measure their DSP CPU use. Here is a C++ code example of its use:

static void dynamic_bench(const string& in_filename)
{
    // Init the DSP optimizer with the in_filename to compile
    dsp_optimizer optimizer(in_filename, 0, nullptr, "", 1024);
    // Discover the best set of parameters
    auto res = optimizer.findOptimizedParameters();
    cout << "Best value for '" << in_filename << "' is : " << get<0>(res) << " MBytes/sec with ";
    for (size_t i = 0; i < get<3>(res).size(); i++) {
        cout << get<3>(res)[i] << " ";
    }
    cout << endl;
}

This class can typically be used in tools that help developers discover the best Faust compilation parameters for a given DSP program, like the faustbench and faustbench-llvm tools.

The Proxy DSP Class

In some cases, a DSP may run outside of the application or plugin context, for example on another machine. The proxy_dsp class makes it possible to create a proxy DSP that will finally be connected to the real one (using an OSC or HTTP based machinery for instance), and will reflect its behaviour. It uses the previously described JSONUIDecoder class. The proxy_dsp can then be used in place of the real DSP, and connected with UI controllers using the standard buildUserInterface to control it. The faust-osc-controller tool demonstrates this capability using an OSC connection between the real DSP and its proxy. The proxy_osc_dsp class implements a specialized proxy_dsp using the liblo OSC library to connect to an OSC controllable DSP (which uses the OSCUI class and runs in another context or machine). The faust-osc-controller program then creates a real GUI (using GTKUI in this example) and has it control the remote DSP and reflect its dynamic state (like vumeter values coming back from the real DSP).

Embedded Platforms

Faust has been targeting an increasing number of embedded platforms for real-time audio signal processing applications in recent years. It can now be used to program microcontrollers (e.g., ESP32, Teensy, Pico DSP and Daisy), mobile platforms, embedded Linux systems (e.g., Bela and Elk), Digital Signal Processors (DSPs), and more. Specialized architecture files and faust2xx scripts have been developed.

Metadata Naming Convention

A specific question arises when dealing with devices that have no screen, or a limited one, to display a GUI, and a set of physical knobs or buttons to be connected to control parameters. The standard way is then to use metadata in control labels.
Since being able to use the same DSP file on all devices is always desirable, a common set of metadata has been defined:

[switch:N] is used to connect to switch buttons
[knob:N] is used to connect to knobs

An extended set of metadata will probably have to be progressively defined and standardized.

Using the -uim Compiler Option

On embedded platforms with limited capabilities, using the -uim option can be helpful. The generated C/C++ code then contains a static description of several characteristics of the DSP, like the number of audio inputs/outputs, the number of control inputs/outputs, and macros fed with the control parameters (label, DSP field name, init, min, max, step) that can be implemented in the architecture file for various needs. For example the following DSP program:

process = _*hslider("Gain", 0, 0, 1, 0.01) : hbargraph("Vol", 0, 1);

compiled with faust -uim foo.dsp gives this additional section:

#ifdef FAUST_UIMACROS

    #define FAUST_FILE_NAME "foo.dsp"
    #define FAUST_CLASS_NAME "mydsp"
    #define FAUST_INPUTS 1
    #define FAUST_OUTPUTS 1
    #define FAUST_ACTIVES 1
    #define FAUST_PASSIVES 1

    FAUST_ADDHORIZONTALSLIDER("Gain", fHslider0, 0.0f, 0.0f, 1.0f, 0.01f);
    FAUST_ADDHORIZONTALBARGRAPH("Vol", fHbargraph0, 0.0f, 1.0f);

    #define FAUST_LIST_ACTIVES(p) \
        p(HORIZONTALSLIDER, Gain, "Gain", fHslider0, 0.0f, 0.0f, 1.0f, 0.01f) \

    #define FAUST_LIST_PASSIVES(p) \
        p(HORIZONTALBARGRAPH, Vol, "Vol", fHbargraph0, 0.0, 0.0f, 1.0f, 0.0) \

#endif

The FAUST_ADDHORIZONTALSLIDER or FAUST_ADDHORIZONTALBARGRAPH macros can then be implemented to do whatever is needed with the "Gain", fHslider0, 0.0f, 0.0f, 1.0f, 0.01f and "Vol", fHbargraph0, 0.0f, 1.0f parameters respectively. The more sophisticated FAUST_LIST_ACTIVES and FAUST_LIST_PASSIVES macros can possibly be used to call any p function (defined elsewhere in the architecture file) on each item. The minimal-static.cpp file demonstrates this feature (a short usage sketch is also given after the complete architecture example below).

Developing a New Architecture File

Developing a new architecture file typically means writing a generic file that will be populated with the actual output of the Faust compiler, in order to produce a complete file, ready to be compiled as a standalone application or plugin. The architecture to be used is specified at compile time with the -a option. It must contain the <<includeIntrinsic>> and <<includeclass>> lines that will be recognized by the Faust compiler, and replaced by the generated code. Here is an example in C++, but the same logic can be used with other languages producing textual output, like C, Cmajor, Rust or Dlang. Look at the minimal.cpp example located in the architecture folder:

#include <iostream>

#include "faust/gui/PrintUI.h"
#include "faust/gui/meta.h"
#include "faust/audio/dummy-audio.h"
#include "faust/dsp/one-sample-dsp.h"

// To be replaced by the compiler generated C++ class
<<includeIntrinsic>>
<<includeclass>>

int main(int argc, char* argv[])
{
    mydsp DSP;
    std::cout << "DSP size: " << sizeof(DSP) << " bytes\n";

    // Activate the UI, here one that only prints the control paths
    PrintUI ui;
    DSP.buildUserInterface(&ui);

    // Allocate the audio driver to render 5 buffers of 512 frames
    dummyaudio audio(5);
    audio.init("Test", static_cast<dsp*>(&DSP));

    // Render buffers...
audio.start(); audio.stop(); } Calling faust -a minimal.cpp noise.dsp -o noise.cpp will produce a ready to compile noise.cpp file: /* ------------------------------------------------------------ name: \"noise\" Code generated with Faust 2.28.0 (https://faust.grame.fr) Compilation options: -lang cpp -scal -ftz 0 ------------------------------------------------------------ */ #ifndef __mydsp_H__ #define __mydsp_H__ #include #include \"faust/gui/PrintUI.h\" #include \"faust/gui/meta.h\" #include \"faust/audio/dummy-audio.h\" #ifndef FAUSTFLOAT #define FAUSTFLOAT float #endif #include #include #ifndef FAUSTCLASS #define FAUSTCLASS mydsp #endif #ifdef __APPLE__ #define exp10f __exp10f #define exp10 __exp10 #endif class mydsp : public dsp { private: FAUSTFLOAT fHslider0; int iRec0[2]; int fSampleRate; public: void metadata(Meta* m) { m->declare(\"filename\", \"noise.dsp\"); m->declare(\"name\", \"noise\"); m->declare(\"noises.lib/name\", \"Faust Noise Generator Library\"); m->declare(\"noises.lib/version\", \"0.0\"); } virtual int getNumInputs() { return 0; } virtual int getNumOutputs() { return 1; } static void classInit(int sample_rate) { } virtual void instanceConstants(int sample_rate) { fSampleRate = sample_rate; } virtual void instanceResetUserInterface() { fHslider0 = FAUSTFLOAT(0.5f); } virtual void instanceClear() { for (int l0 = 0; (l0 < 2); l0 = (l0 + 1)) { iRec0[l0] = 0; } } virtual void init(int sample_rate) { classInit(sample_rate); instanceInit(sample_rate); } virtual void instanceInit(int sample_rate) { instanceConstants(sample_rate); instanceResetUserInterface(); instanceClear(); } virtual mydsp* clone() { return new mydsp(); } virtual int getSampleRate() { return fSampleRate; } virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"noise\"); ui_interface->addHorizontalSlider(\"Volume\", &fHslider0, 0.5, 0.0, 1.0, 0.001); ui_interface->closeBox(); } virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { FAUSTFLOAT* output0 = outputs[0]; float fSlow0 = (4.65661287e-10f * float(fHslider0)); for (int i = 0; (i < count); i = (i + 1)) { iRec0[0] = ((1103515245 * iRec0[1]) + 12345); output0[i] = FAUSTFLOAT((fSlow0 * float(iRec0[0]))); iRec0[1] = iRec0[0]; } } }; int main(int argc, char* argv[]) { mydsp DSP; std::cout << \"DSP size: \" << sizeof(DSP) << \" bytes\\n\"; // Activate the UI, here that only print the control paths PrintUI ui; DSP.buildUserInterface(&ui); // Allocate the audio driver to render 5 buffers of 512 frames dummyaudio audio(5); audio.init(\"Test\", &DSP); // Render buffers... audio.start(); audio.stop(); } Generally, several files to connect to the audio layer, controller layer, and possibly other (MIDI, OSC...) have to be used. One of them is the main file and include the others. The -i option can be added to actually inline all #include \"faust/xxx/yyy\" headers (all files starting with faust ) to produce a single self-contained unique file. Then a faust2xxx script has to be written to chain the Faust compilation step and the C++ compilation one (and possibly others). Look at the Developing a faust2xx Script section. 
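As announced in the -uim compiler option section above, here is a minimal sketch (the PRINT_CONTROL macro name is hypothetical) showing how the FAUST_LIST_ACTIVES macro can be used in an architecture file to iterate over all active controls:

#include <iostream>

// Hypothetical helper: print every active control declared by the -uim section.
// The 8 parameters match the p(...) calls generated in FAUST_LIST_ACTIVES.
#define PRINT_CONTROL(type, ident, label, var, init, min, max, step) \
    std::cout << label << " init=" << init << " min=" << min << " max=" << max << "\n";

void print_controls()
{
#ifdef FAUST_UIMACROS
    FAUST_LIST_ACTIVES(PRINT_CONTROL);
#endif
}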
Adapting the Generated DSP Developing the adapted C++ file may require aggregating the generated mydsp class (subclass of the dsp base class defined in the faust/dsp/dsp.h header) into the specific class, so something like the following would have to be written: class my_class : public base_interface { private: mydsp fDSP; public: my_class() { // Do something specific } virtual ~my_class() { // Do something specific } // Do something specific void my_compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { // Do something specific // Call the fDSP 'compute' fDSP.compute(count, inputs, outputs); } // Do something specific }; or subclassing and extending it, so writing something like: class my_class : public mydsp { private: // Do something specific public: my_class() { // Do something specific } virtual ~my_class() { // Do something specific } // Override the 'compute' method void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { // Do something specific // Call the inherited 'compute' mydsp::compute(count, inputs, outputs); } // Do something specific }; or decorating a DSP object using the decorator pattern, which is already implemented in this file, and can possibly be sub-classed like: class my_decorator : public decorator_dsp { private: // Do something specific public: my_decorator(dsp* dsp):decorator_dsp(dsp) { // Do something specific } virtual ~my_decorator() { // Do something specific } // Implementation of some of the methods // Override the 'instanceClear' method void instanceClear() { // Do something specific // Call the inherited 'instanceClear' decorator_dsp::instanceClear(); } // Override the 'compute' method void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { // Do something specific // Call the inherited 'compute' decorator_dsp::compute(count, inputs, outputs); } // Do something specific }; // Decorates a concrete instance my_decorator* DSP = new my_decorator(new mydsp()); ... Developing New UI Architectures For really new architectures, the UI base class, the GenericUI helper class or the GUI class (described before) have to be subclassed. Note that a lot of classes presented in the Some useful UI classes for developers section can also be subclassed or possibly enriched with additional code. Developing New Audio Architectures The audio base class has to be subclassed and each method implemented for the given audio hardware. In some cases the audio driver can adapt to the required number of DSP inputs/outputs (like the JACK audio system for instance, which can open any number of virtual audio ports). But in general, the number of hardware audio inputs/outputs may not exactly match the DSP ones. It is the responsibility of the audio driver to adapt to this situation. The dsp_adapter dsp decorator can help in this case. Developing a New Soundfile Loader Soundfiles are defined in the DSP program using the soundfile primitive. Here is a simple DSP program which uses a single tango.wav audio file and plays it until its end: process = 0,_~+(1):soundfile(\"sound[url:{'tango.wav'}]\",2):!,!, The compiled C++ class has the following structure: class mydsp : public dsp { private: Soundfile* fSoundfile0; int iRec0[2]; int fSampleRate; .... 
with the Soundfile* fSoundfile0; field and its definition : struct Soundfile { void* fBuffers; // will correspond to a double** or float** pointer chosen at runtime int* fLength; // length of each part (so fLength[P] contains the length in frames of part P) int* fSR; // sample rate of each part (so fSR[P] contains the SR of part P) int* fOffset; // offset of each part in the global buffer (so fOffset[P] contains the offset in frames of part P) int fChannels; // max number of channels of all concatenated files int fParts; // the total number of loaded parts bool fIsDouble; // keep the sample format (float or double) }; The following buildUserInterface method in generated, containing a addSoundfile method called with the appropriate parameters extracted from the soundfile(\"sound[url:{'tango.wav'}]\",2) piece of DSP code, to be used to load the tango.wav audio file and prepare the fSoundfile0 field: virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"tp0\"); ui_interface->addSoundfile(\"sound\", \"{'tango.wav'}\", &fSoundfile0); ui_interface->closeBox(); } The specialized SoundUI architecture file is then used to load the required soundfiles at DSP init time, by using a SoundfileReader object. It only implements the addSoundfile method which will load all needed audio files, create and fill the fSoundfile0 object. Different concrete implementations are already written, either using libsndfile (with the LibsndfileReader.h file), or JUCE (with the JuceReader file). A new audio file loader can be written by subclassing the SoundfileReader class. A pure memory reader could be implemented for instance to load wavetables to be used as the soundfile URL list. Look at the template MemoryReader class, as an example to be completed, with the following methods to be implemented: /** * Check the availability of a sound resource. * * @param path_name - the name of the file, or sound resource identified this way * * @return true if the sound resource is available, false otherwise. */ virtual bool checkFile(const std::string& path_name); /** * Get the channels and length values of the given sound resource. * * @param path_name - the name of the file, or sound resource identified this way * @param channels - the channels value to be filled with the sound resource * number of channels * @param length - the length value to be filled with the sound resource length in frames * */ virtual void getParamsFile(const std::string& path_name, int& channels, int& length); /** * Read one sound resource and fill the 'soundfile' structure accordingly * * @param path_name - the name of the file, or sound resource identified this way * @param part - the part number to be filled in the soundfile * @param offset - the offset value to be incremented with the actual * sound resource length in frames * @param max_chan - the maximum number of mono channels to fill * */ virtual void readFile(Soundfile* soundfile, const std::string& path_name, int part, int& offset, int max_chan); Another example to look at is WaveReader . The SoundUI architecture is then used the following way: mydsp DSP; // Here using a compiled time chosen SoundfileReader SoundUI* sound_interface = new SoundUI(); DSP.buildUserInterface(sound_interface); ... run the DSP ... 
// Finally deallocate the sound_interface and associated Soundfile resources delete sound_interface; The SoundfileReader object can be dynamically chosen by using an alternate version of the SoundUI constructor, possibly choosing the sample format to be double when the DSP code is compiled with the -double option: mydsp DSP; // Here using a dynamically chosen custom MyMemoryReader SoundfileReader* sound_reader = new MyMemoryReader(...); SoundUI* sound_interface = new SoundUI(\"\", false, sound_reader, true); DSP.buildUserInterface(sound_interface); ... run the DSP ... // Finally deallocate the sound_interface and associated Soundfile resources delete sound_interface; Other Languages Than C++ Most of the architecture files have been developed in C++ over the years. Thus they are ready to be used with the C++ backend and the ones that generate C++ wrapped modules (like the LLVM, Cmajor and Interpreter backends). For other languages, specific architecture files have to be written. Here is the current situation for other backends: the C backend needs the additional CGlue.h and CInterface.h files, with the minimal-c file as a simple console mode example using them the Rust backend can be used with the minimal-rs architecture, the more complex JACK jack.rs used in the faust2jackrust script, or the PortAudio portaudio.rs used in the faust2portaudiorust script the experimental Dlang backend can be used with the minimal.d architecture or the dplug.d one to generate DPlug plugins with the faust2dplug script. the Julia backend can be used with the minimal.jl architecture or the portaudio.jl one used in the faust2portaudiojulia script. The faust2xx Scripts Using faust2xx Scripts The faust2xx scripts finally combine different architecture files to generate a ready-to-use application, plugin, etc. from a Faust DSP program. They typically combine the generated DSP with a UI architecture file and an audio architecture file. Most of them also have additional options like -midi , -nvoices , -effect or -soundfile to generate polyphonic instruments with or without effects, or audio file support. Look at the following page for a more complete description. Developing a faust2xx Script The faust2xx scripts are mostly written in bash (but any scripting language can be used) and aim to produce a ready-to-use application, plugin, etc. from a DSP program. A faust2minimal template script using the C++ backend can be used to start the process. The helper scripts faustpath , faustoptflags , and usage.sh can be used to setup common variables: # Define some common paths . faustpath # Define compilation flags . faustoptflags # Helper file to build the 'help' option . 
usage.sh CXXFLAGS+=\" $MYGCCFLAGS\" # So that additional CXXFLAGS can be used # The architecture file name ARCHFILE=$FAUSTARCH/minimal.cpp # Global variables OPTIONS=\"\" FILES=\"\" The script arguments then have to be analysed: compiler options are kept in the OPTIONS variable and all DSP files in the FILES one: #------------------------------------------------------------------- # dispatch command arguments #------------------------------------------------------------------- while [ $1 ] do p=$1 if [ $p = \"-help\" ] || [ $p = \"-h\" ]; then usage faust2minimal \"[options] [Faust options] \" exit fi echo \"dispatch command arguments\" if [ ${p:0:1} = \"-\" ]; then OPTIONS=\"$OPTIONS $p\" elif [[ -f \"$p\" ]] && [ ${p: -4} == \".dsp\" ]; then FILES=\"$FILES $p\" else OPTIONS=\"$OPTIONS $p\" fi shift done Each DSP file is first compiled to C++ using the faust -a command and the appropriate architecture file, then to the final executable program, here using the C++ compiler: #------------------------------------------------------------------- # compile the *.dsp files #------------------------------------------------------------------- for f in $FILES; do # compile the DSP to c++ using the architecture file echo \"compile the DSP to c++ using the architecture file\" faust -i -a $ARCHFILE $OPTIONS \"$f\" -o \"${f%.dsp}.cpp\"|| exit # compile c++ to binary echo \"compile c++ to binary\" ( $CXX $CXXFLAGS \"${f%.dsp}.cpp\" -o \"${f%.dsp}\" ) > /dev/null || exit # remove temporary files rm -f \"${f%.dsp}.cpp\" # collect binary file name for FaustWorks BINARIES=\"$BINARIES${f%.dsp};\" done echo $BINARIES The existing faust2xx scripts can be used as examples. The faust2api Model This model, combining the generated DSP with the audio and UI architecture components, is very convenient for automatically producing ready-to-use standalone applications or plugins, since the controller part (GUI, MIDI or OSC...) is directly compiled and deployed. In some cases, developers prefer to control the DSP by creating a completely new GUI (using a toolkit not supported in the standard architecture files), or even without any GUI, using another control layer. A model that only combines the generated DSP with an audio architecture file to produce an audio engine has been developed (thus gluing the blue and red parts of the three color model explained at the beginning). A generic template class DspFaust has been written in the DspFaust.h and DspFaust.cpp files. This code contains conditional compilation sections to add and initialize the appropriate audio driver (written as a subclass of the previously described base audio class), and can produce audio generators , effects , or fully MIDI and sensor controllable polyphonic instruments . The resulting audio engine contains start and stop methods to control audio processing. It also provides a set of functions like getParamsCount, setParamValue, getParamValue etc. to access all parameters (or the additional setVoiceParamValue method to access a single voice in a polyphonic case), and lets developers add their own GUI or any kind of controller. Look at the faust2api script, which uses the previously described architecture files, and provides a tool to easily generate custom APIs based on one or several Faust objects. Using the -inj Option With faust2xx Scripts The compiler -inj option allows injecting a pre-existing C++ file (instead of compiling a dsp file) into the architecture files machinery. 
Assuming that the C++ file implements a subclass of the base dsp class, the faust2xx scripts can possibly be used to produce a ready-to-use application or plugin that can take advantage of all the already existing UI and audio architectures. Two examples of use are presented next. Using the template-llvm.cpp architecture The first one demonstrates how faust2xx scripts can become more dynamic by loading and compiling an arbitrary DSP at runtime. This is done using the template-llvm.cpp architecture file which uses the libfaust library and the LLVM backend to dynamically compile a foo.dsp file. So instead of producing a static binary based on a given DSP, the resulting program will be able to load and compile a DSP at runtime. This template-llvm.cpp can be used with the -inj option in faust2xx tools like: faust2cagtk -inj template-llvm.cpp faust2cagtk-llvm.dsp (a dummy DSP) to generate a monophonic faust2cagtk-llvm application, ready to be used to load and compile a DSP, and run it with the CoreAudio audio layer and GTK as the GUI framework. Then faust2cagtk-llvm will ask for a DSP to compile: ./faust2cagtk-llvm A generic polyphonic (8 voices) and MIDI controllable version can be compiled using: faust2cagtk -inj template-llvm.cpp -midi -nvoices 8 faust2cagtk-llvm.dsp (a dummy DSP) Note that the resulting binary keeps its own control options, like: ./faust2cagtk-llvm -h ./faust2cagtk-llvm [--frequency ] [--buffer ] [--nvoices ] [--control <0/1>] [--group <0/1>] [--virtual-midi <0/1>] So now ./faust2cagtk-llvm --nvoices 16 starts the program with 16 voices. The technique has currently been tested with the faust2cagtk , faust2jack , faust2csvplot , and faust2plot tools. Second use-case: computing the spectrogram of a set of audio files Here is a second use case where some external C++ code is used to compute the spectrogram of a set of audio files (which is something that cannot simply be done with the current version of the Faust language) and output the spectrogram as an audio signal. A slider controller will be used to select the currently playing spectrogram. The Faust compiler will be used to generate a C++ class which is going to be manually edited and enriched with additional code. Writing the DSP code First a fake DSP program spectral.dsp using the soundfile primitive, loading two audio files, and a slider control is written: sf = soundfile(\"sound[url:{'sound1.wav';'sound2.wav'}]\",2); process = (hslider(\"Spectro\", 0, 0, 1, 1),0) : sf : !,!,_,_; The point of explicitly using the soundfile primitive and a slider control is to generate a C++ file with a prefilled DSP structure (containing the fSoundfile0 and fHslider0 fields) and code inside the buildUserInterface method. Compiling it manually with the following command: faust spectral.dsp -cn spectral -o spectral.cpp produces the following C++ code containing the spectral class: class spectral : public dsp { private: Soundfile* fSoundfile0; FAUSTFLOAT fHslider0; int fSampleRate; public: ... virtual int getNumInputs() { return 0; } virtual int getNumOutputs() { return 2; } ... virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"spectral\"); ui_interface->addHorizontalSlider(\"Spectro\", &fHslider0, 0.0f, 0.0f, 1.0f, 1.0f); ui_interface->addSoundfile(\"sound\", \"{'sound1.wav';'sound2.wav';}\", &fSoundfile0); ui_interface->closeBox(); } virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { int iSlow0 = int(float(fHslider0)); .... 
} }; Customizing the C++ code Now the spectral class can be manually edited and completed with additional code, to compute the two audio files' spectrograms in buildUserInterface , and play them in compute : a new line Spectrogram fSpectro[2]; is added in the DSP structure a createSpectrogram(fSoundfile0, fSpectro); function is added in buildUserInterface and used to compute and fill the two spectrograms, by reading the two loaded audio files in fSoundfile0 part of the generated code in compute is removed and replaced by new code to play one of the spectrograms (selected with the fHslider0 control in the GUI) using a playSpectrogram(fSpectro, count, iSlow0, outputs); function: class spectral : public dsp { private: Soundfile* fSoundfile0; FAUSTFLOAT fHslider0; int fSampleRate; Spectrogram fSpectro[2]; public: ... virtual int getNumInputs() { return 0; } virtual int getNumOutputs() { return 2; } ... virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"spectral\"); ui_interface->addHorizontalSlider(\"Spectro\", &fHslider0, 0.0f, 0.0f, 1.0f, 1.0f); ui_interface->addSoundfile(\"sound\", \"{'sound1.wav';'sound2.wav';}\", &fSoundfile0); // Read 'fSoundfile0' and fill 'fSpectro' createSpectrogram(fSoundfile0, fSpectro); ui_interface->closeBox(); } virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { int iSlow0 = int(float(fHslider0)); // Play 'fSpectro' indexed by 'iSlow0' by writing 'count' samples in 'outputs' playSpectrogram(fSpectro, count, iSlow0, outputs); } }; Here we assume that the createSpectrogram and playSpectrogram functions are defined elsewhere and ready to be compiled. Deploying it as a Max/MSP External Using the faust2max6 Script The completed spectral.cpp file is now ready to be deployed as a Max/MSP external using the faust2max6 script and the -inj option with the following line: faust2max6 -inj spectral.cpp -soundfile spectral.dsp The two needed sound1.wav and sound2.wav audio files are embedded in the generated external, loaded at init time (since the buildUserInterface method is automatically called), and the manually added C++ code will be executed to compute the spectrograms and play them. Finally, by keeping the naming coherent between the fake spectral.dsp DSP program and the generated spectral.cpp C++ file, the automatically generated spectral.maxpat Max/MSP patch will be able to build the GUI with a ready-to-use slider. Additional Resources Several external projects provide tools to arrange the way Faust source code is generated or used, in different languages. Preprocessing tools fpp fpp is a standalone Perl script with no dependencies which allows ANY C/C++ code in a Faust .dsp file as long as you are targeting C/C++ in scalar mode. C++ tools Using and adapting the dsp/UI/audio model in a more sophisticated way, or integrating Faust generated C++ classes in other frameworks (like JUCE). faust2hpp Convert Faust code to a header-only standalone C++ library. A collection of header files is generated as the output. A class is provided from which a DSP object can be built with methods in the style of JUCE DSP objects. faustpp A post-processor for Faust: a source transformation tool based on the Faust compiler which allows arranging the way Faust source code is generated with greater flexibility. cookiecutter-dpf-faust A cookiecutter project template for DISTRHO plugin framework audio effect plugins using Faust for the implementation of the DSP pipeline. 
faustmd Static metadata generator for Faust/C++. This program builds the metadata for a Faust DSP ahead of time, rather than dynamically. The result is a block of C++ code which can be appended to the code generation. FaustCPPConverter Eyal Amir tool to facilitate the use of Faust generated C++ code in JUCE projects. JOSModules and josm_faust Julius Smith projects to facilitate the use of Faust generated C++ code in JUCE projects. Arduino tools An alternative way to use the ESP32 board with Faust, possibly easier and more versatile than the examples mentioned on the esp32 tutorial . Cmajor tools Using Faust in Cmajor A tutorial to show how Faust can be used with Cmajor , a C like procedural high-performance language especially designed for audio processing, and with dynamic JIT based compilation. RNBO tools Using Faust in RNBO with codebox~ A tutorial to show how Faust can be used with RNBO , a library and toolchain that can take Max-like patches, export them as portable code, and directly compile that code to targets like a VST, a Max External, or a Raspberry Pi. DLang tools Faust 2 Dplug Guide Explains how to use Faust in a Dplug project. Dplug Faust Example This is an example plugin using Dplug with a Faust backend. It is a stereo reverb plugin using the Freeverb demo from the Faust library. Julia tools Faust.jl Julia wrapper for the Faust compiler. Uses the Faust LLVM C API. Using Faust in Julia A tutorial to show how Faust can be used with Julia , a high-level, general-purpose dynamic programming language with features well suited for numerical analysis and computational science. Python tools FAUSTPy FAUSTPy is a Python wrapper for the FAUST DSP language. It is implemented using the CFFI and hence creates the wrapper dynamically at run-time. A updated version of the project is available on this fork . Faust Ctypes A port of Marc Joliet's FaustPy from CFFI to Ctypes. Faust-Ctypes documentation is available online . An SCons Tool for FAUST This is an SCons tool for compiling FAUST programs. It adds various builders to your construction environment: Faust, FaustXML, FaustSVG, FaustSC, and FaustHaskell. Their behaviour can be modified by changing various construction variables (see \"Usage\" below). Faustwatch At the moment there is one tool present, faustwatch.py. Faustwatch is a tool that observes a .dsp file used by the dsp language Faust. faustWidgets Creates interactive widgets inside jupyter notebooks from Faust dsp files and produces a (customizable) plot. Faust Synth This is an example project for controlling a synth, programmed and compiled with Faust, through a Python script. The synth runs as a JACK client on Linux systems and the output is automatically recorded by jack_capture. DawDreamer DawDreamer is an audio-processing Python framework supporting Faust and Faust's Box API. ode2dsp ode2dsp is a Python library for generating ordinary differential equation (ODE) solvers in digital signal processing (DSP) languages. It automates the tedious and error-prone symbolic calculations involved in creating a DSP model of an ODE. Finite difference equations (FDEs) are rendered to Faust code. 
faustlab A exploratory project to wrap the Faust interpreter for use by python via the following wrapping frameworks using the RtAudio cross-platform audio driver: cyfaust: cython (faust c++ interface) cfaustt: cython (faust c interface) pyfaust: pybind11 (faust c++ interface) nanobind: nanobind (faust c++ interface) cyfaust A cython wrapper of the Faust interpreter and the RtAudio cross-platform audio driver, derived from the faustlab project. The objective is to end up with a minimal, modular, self-contained, cross-platform python3 extension. Rust tools rust-faust A better integration of Faust for Rust. It allows to build the DSPs via build.rs and has some abstractions to make it much easier to work with params and meta of the DSPs. Faust egui Proof of concept of drawing a UI with egui and rust-faust . RustFaustExperiments Tools to compare C++ and Rust code generated from Faust. fl-tui Rust wrapper for the Faust compiler. It uses the libfaust LLVM C API. faustlive-jack-rs Another Rust wrapper for the Faust compiler, using JACK server for audio. It uses the libfaust LLVM C API. lowpass-lr4-faust-nih-plug A work-in-progress project to integrate Faust generated Rust code with NIH-plug . nih-faust-jit A plugin to load Faust dsp files and JIT-compile them with LLVM. A simple GUI is provided to select which script to load and where to look for the Faust libraries that this script may import. The selected DSP script is saved as part of the plugin state and therefore is saved with your DAW project. WebAssembly tools faust-loader Import Faust .dsp files, and get back an AudioWorklet or ScriptProcessor node. faust2cpp2wasm A drop in replacement for the wasm file generated by faust2wasm , but with Faust's C++ backend instead of its wasm backend. Faust Compiler Microservice This is a microservice that serves a single purpose: compiling Faust code that is sent to it into WebAssembly that can then be loaded and run natively from within the web synth application. It is written in go because go is supposed to be good for this sort of thing. mosfez-faust Makes dynamic compilation of Faust on the web a little easier, and has a dev project to run values through dsp offline, and preview dsp live. It's an opinionated version of some parts of Faust for webaudio , mostly just the Web Assembly Faust compiler, wrapped up in a library with additional features. faust-wap2-playground Playground and template for Faust-based web audio experiments. Dart tools flutter_faust_ffi A basic flutter app as a proof of concept utilizing Faust's C API export with Dart's ffi methods to create cross-platform plug-ins.","title":"Architecture Files"},{"location":"manual/architectures/#architecture-files","text":"A Faust program describes a signal processor , a pure DSP computation that maps input signals to output signals . It says nothing about audio drivers or controllers (like GUI, OSC, MIDI, sensors) that are going to control the DSP. This additional information is provided by architecture files . An architecture file describes how to relate a Faust program to the external world, in particular the audio drivers and the controllers interfaces to be used. This approach allows a single Faust program to be easily deployed to a large variety of audio standards (e.g., Max/MSP externals, PD externals, VST plugins, CoreAudio applications, JACK applications, iPhone/Android, etc.): The architecture to be used is specified at compile time with the -a option. 
For example faust -a jack-gtk.cpp foo.dsp indicates to use the JACK GTK architecture when compiling foo.dsp . Some of these architectures are a modular combination of an audio module and one or more controller modules . Some architecture only combine an audio module with the generated DSP to create an audio engine to be controlled with an additional setParamValue/getParamValue kind of API, so that the controller part can be completeley defined externally. This is the purpose of the faust2api script explained later on.","title":"Architecture Files"},{"location":"manual/architectures/#minimal-structure-of-an-architecture-file","text":"Before going into the details of the architecture files provided with Faust distribution, it is important to have an idea of the essential parts that compose an architecture file. Technically, an architecture file is any text file with two placeholders <> and <> . The first placeholder is currently not used, and the second one is replaced by the code generated by the FAUST compiler. Therefore, the really minimal architecture file, let's call it nullarch.cpp , is the following: <> <> This nullarch.cpp architecture has the property that faust foo.dsp and faust -a nullarch.cpp foo.dsp produce the same result. Obviously, this is not very useful, moreover the resulting cpp file doesn't compile. Here is miniarch.cpp , a minimal architecture file that contains enough information to produce a cpp file that can be successfully compiled: <> #define FAUSTFLOAT float class dsp {}; struct Meta { virtual void declare(const char* key, const char* value) {}; }; struct Soundfile {}; struct UI { // -- widget's layouts virtual void openTabBox(const char* label) {} virtual void openHorizontalBox(const char* label) {} virtual void openVerticalBox(const char* label) {} virtual void closeBox() {} // -- active widgets virtual void addButton(const char* label, FAUSTFLOAT* zone) {} virtual void addCheckButton(const char* label, FAUSTFLOAT* zone) {} virtual void addVerticalSlider(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT init, FAUSTFLOAT min, FAUSTFLOAT max, FAUSTFLOAT step) {} virtual void addHorizontalSlider(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT init, FAUSTFLOAT min, FAUSTFLOAT max, FAUSTFLOAT step) {} virtual void addNumEntry(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT init, FAUSTFLOAT min, FAUSTFLOAT max, FAUSTFLOAT step) {} // -- passive widgets virtual void addHorizontalBargraph(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT min, FAUSTFLOAT max) {} virtual void addVerticalBargraph(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT min, FAUSTFLOAT max) {} // -- soundfiles virtual void addSoundfile(const char* label, const char* filename, Soundfile** sf_zone) {} // -- metadata declarations virtual void declare(FAUSTFLOAT* zone, const char* key, const char* val) {} }; <> This architecture is still not very useful, but it gives an idea of what a real-life architecture file has to implement, in addition to the audio part itself. As we will see in the next section, Faust architectures are implemented using a modular approach to avoid code duplication and favor code maintenance and reuse.","title":"Minimal Structure of an Architecture File"},{"location":"manual/architectures/#audio-architecture-modules","text":"A Faust generated program has to connect to a underlying audio layer. 
Depending on whether the final program is an application or a plugin, the way to connect to this audio layer will differ: applications typically use the OS audio driver API, which will be CoreAudio on macOS, ALSA on Linux, WASAPI on Windows for instance, or any kind of multi-platform API like PortAudio or JACK . In this case a subclass of the base class audio (see later) has to be written; plugins (like VST3 , Audio Unit or JUCE for instance) usually have to follow a more constrained API which imposes a life cycle, something like a loading/initializing/starting/running/stopping/unloading sequence of operations. In this case the Faust generated module new/init/compute/delete methods have to be inserted in the plugin API, by calling each module function at the appropriate place.","title":"Audio Architecture Modules"},{"location":"manual/architectures/#external-and-internal-audio-sample-formats","text":"Audio samples are managed by the underlying audio layer, typically as 32-bit float or 64-bit double values in the [-1..1] interval. Their format is defined with the FAUSTFLOAT macro, implemented in the architecture file as float by default. The DSP audio sample format is chosen at compile time, with the -single (= default), -double or -quad compilation option. Control parameters like buttons, sliders... also use the FAUSTFLOAT format. By default, the FAUSTFLOAT macro is written with the following code: #ifndef FAUSTFLOAT #define FAUSTFLOAT float #endif which gives it a value ( if not already defined ), and since the default internal format is float , nothing special has to be done in the general case. But when the DSP is compiled using the -double option, the audio input/output buffers have to be adapted with a dsp_sample_adapter class, for instance like in the dynamic-jack-gt tool . Note that an architecture may redefine FAUSTFLOAT as double, and have the complete audio chain running in double. This has to be done before including any architecture file that would define FAUSTFLOAT itself (because of the #ifndef logic).","title":"External and internal audio sample formats"},{"location":"manual/architectures/#connection-to-an-audio-driver-api","text":"An audio driver architecture typically connects a Faust program to the audio drivers. It is responsible for: allocating and releasing the audio channels and presenting the audio as non-interleaved float/double data (depending on the FAUSTFLOAT macro definition), normalized between -1.0 and 1.0 calling the DSP init method at init time, to set up the ma.SR variable possibly used in the DSP code calling the DSP compute method to handle incoming audio buffers and/or to produce audio outputs. The default compilation model uses separated audio input and output buffers not referring to the same memory locations. The -inpl (--in-place) code generation model allows us to generate code working when input and output buffers are the same (which is typically needed in some embedded devices). This option currently only works in scalar (= default) code generation mode. A Faust audio architecture module derives from an audio class that can be defined as below (simplified version, see the real version here): class audio { public: audio() {} virtual ~audio() {} /** * Init the DSP. * @param name - the DSP name to be given to the audio driver * (could appear as a JACK client for instance) * @param dsp - the dsp that will be initialized with the driver sample rate * * @return true if successful, false in case of driver failure. 
**/ virtual bool init(const char* name, dsp* dsp) = 0; /** * Start audio processing. * @return true is sucessfull, false if case of driver failure. **/ virtual bool start() = 0; /** * Stop audio processing. **/ virtual void stop() = 0; void setShutdownCallback(shutdown_callback cb, void* arg) = 0; // Return buffer size in frames. virtual int getBufferSize() = 0; // Return the driver sample rate in Hz. virtual int getSampleRate() = 0; // Return the driver hardware inputs number. virtual int getNumInputs() = 0; // Return the driver hardware outputs number. virtual int getNumOutputs() = 0; /** * @return Returns the average proportion of available CPU * being spent inside the audio callbacks (between 0.0 and 1.0). **/ virtual float getCPULoad() = 0; }; The API is simple enough to give a great flexibility to audio architectures implementations. The init method should initialize the audio. At init exit, the system should be in a safe state to recall the dsp object state. Here is the hierarchy of some of the supported audio drivers:","title":"Connection to an audio driver API"},{"location":"manual/architectures/#connection-to-a-plugin-audio-api","text":"In the case of plugin, an audio plugin architecture has to be developed, by integrating the Faust DSP new/init/compute/delete methods in the plugin API. Here is a concrete example using the JUCE framework: a FaustPlugInAudioProcessor class, subclass of the juce::AudioProcessor has to be defined. The Faust generated C++ instance will be created in its constructor, either in monophonic of polyphonic mode (see later sections) the Faust DSP instance is initialized in the JUCE prepareToPlay method using the current sample rate value the Faust dsp compute is called in the JUCE process which receives the audio inputs/outputs buffers to be processed additional methods can possibly be implemented to handle MIDI messages or save/restore the plugin parameters state for instance. This methodology obviously has to be adapted for each supported plugin API.","title":"Connection to a plugin audio API"},{"location":"manual/architectures/#midi-architecture-modules","text":"A MIDI architecture module typically connects a Faust program to the MIDI drivers. MIDI control connects DSP parameters with MIDI messages (in both directions), and can be used to trigger polyphonic instruments.","title":"MIDI Architecture Modules"},{"location":"manual/architectures/#midi-messages-in-the-dsp-source-code","text":"MIDI control messages are described as metadata in UI elements. They are decoded by a MidiUI class, subclass of UI , which parses incoming MIDI messages and updates the appropriate control parameters, or sends MIDI messages when the UI elements (sliders, buttons...) are moved.","title":"MIDI Messages in the DSP Source Code"},{"location":"manual/architectures/#defined-standard-midi-messages","text":"A special [midi:xxx yyy...] metadata needs to be added to the UI element. The full description of supported MIDI messages is part of the Faust documentation .","title":"Defined Standard MIDI Messages"},{"location":"manual/architectures/#midi-classes","text":"A midi base class defining MIDI messages decoding/encoding methods has been developed. 
It will be used to receive and transmit MIDI messages: class midi { public: midi() {} virtual ~midi() {} // Additional timestamped API for MIDI input virtual MapUI* keyOn(double, int channel, int pitch, int velocity) { return keyOn(channel, pitch, velocity); } virtual void keyOff(double, int channel, int pitch, int velocity = 0) { keyOff(channel, pitch, velocity); } virtual void keyPress(double, int channel, int pitch, int press) { keyPress(channel, pitch, press); } virtual void chanPress(double date, int channel, int press) { chanPress(channel, press); } virtual void pitchWheel(double, int channel, int wheel) { pitchWheel(channel, wheel); } virtual void ctrlChange(double, int channel, int ctrl, int value) { ctrlChange(channel, ctrl, value); } virtual void ctrlChange14bits(double, int channel, int ctrl, int value) { ctrlChange14bits(channel, ctrl, value); } virtual void rpn(double, int channel, int ctrl, int value) { rpn(channel, ctrl, value); } virtual void progChange(double, int channel, int pgm) { progChange(channel, pgm); } virtual void sysEx(double, std::vector& message) { sysEx(message); } // MIDI sync virtual void startSync(double date) {} virtual void stopSync(double date) {} virtual void clock(double date) {} // Standard MIDI API virtual MapUI* keyOn(int channel, int pitch, int velocity) { return nullptr; } virtual void keyOff(int channel, int pitch, int velocity) {} virtual void keyPress(int channel, int pitch, int press) {} virtual void chanPress(int channel, int press) {} virtual void ctrlChange(int channel, int ctrl, int value) {} virtual void ctrlChange14bits(int channel, int ctrl, int value) {} virtual void rpn(int channel, int ctrl, int value) {} virtual void pitchWheel(int channel, int wheel) {} virtual void progChange(int channel, int pgm) {} virtual void sysEx(std::vector& message) {} enum MidiStatus { // channel voice messages MIDI_NOTE_OFF = 0x80, MIDI_NOTE_ON = 0x90, MIDI_CONTROL_CHANGE = 0xB0, MIDI_PROGRAM_CHANGE = 0xC0, MIDI_PITCH_BEND = 0xE0, MIDI_AFTERTOUCH = 0xD0, // aka channel pressure MIDI_POLY_AFTERTOUCH = 0xA0, // aka key pressure MIDI_CLOCK = 0xF8, MIDI_START = 0xFA, MIDI_CONT = 0xFB, MIDI_STOP = 0xFC, MIDI_SYSEX_START = 0xF0, MIDI_SYSEX_STOP = 0xF7 }; enum MidiCtrl { ALL_NOTES_OFF = 123, ALL_SOUND_OFF = 120 }; enum MidiNPN { PITCH_BEND_RANGE = 0 }; }; A pure interface for MIDI handlers that can send/receive MIDI messages to/from midi objects is defined: struct midi_interface { virtual void addMidiIn(midi* midi_dsp) = 0; virtual void removeMidiIn(midi* midi_dsp) = 0; virtual ~midi_interface() {} }; A midi_hander subclass implements actual MIDI decoding and maintains a list of MIDI aware components (classes inheriting from midi and ready to send and/or receive MIDI events) using the addMidiIn/removeMidiIn methods: class midi_handler : public midi, public midi_interface { protected: std::vector fMidiInputs; std::string fName; MidiNRPN fNRPN; public: midi_handler(const std::string& name = \"MIDIHandler\"):fName(name) {} virtual ~midi_handler() {} void addMidiIn(midi* midi_dsp) {...} void removeMidiIn(midi* midi_dsp) {...} ... ... }; Several concrete implementations subclassing midi_handler using native APIs have been written and can be found in the faust/midi folder: Depending on the native MIDI API being used, event timestamps are either expressed in absolute time or in frames. They are converted to offsets expressed in samples relative to the beginning of the audio buffer. 
Connected with the MidiUI class (a subclass of UI ), they allow a given DSP to be controlled with incoming MIDI messages, or to possibly send MIDI messages when its internal control state changes. In the following piece of code, a MidiUI object is created and connected to an rt_midi MIDI message handler (using the RtMidi library), then given as a parameter to the standard buildUserInterface to control DSP parameters: ... rt_midi midi_handler(\"MIDI\"); MidiUI midi_interface(&midi_handler); DSP->buildUserInterface(&midi_interface); ...","title":"MIDI Classes"},{"location":"manual/architectures/#ui-architecture-modules","text":"A UI architecture module links user actions (i.e., via graphic widgets, command line parameters, OSC messages, etc.) with the Faust program to control. It is responsible for associating program parameters with user interface elements and for updating the parameters' values according to user actions. This association is triggered by the buildUserInterface call, where the dsp asks a UI object to build the DSP module controllers. Since the interface is basically graphic-oriented, the main concepts are widget based: a UI architecture module is semantically oriented to handle active widgets, passive widgets and widgets layout. A Faust UI architecture module derives from the UI base class: template <typename REAL> struct UIReal { UIReal() {} virtual ~UIReal() {} // -- widget's layouts virtual void openTabBox(const char* label) = 0; virtual void openHorizontalBox(const char* label) = 0; virtual void openVerticalBox(const char* label) = 0; virtual void closeBox() = 0; // -- active widgets virtual void addButton(const char* label, REAL* zone) = 0; virtual void addCheckButton(const char* label, REAL* zone) = 0; virtual void addVerticalSlider(const char* label, REAL* zone, REAL init, REAL min, REAL max, REAL step) = 0; virtual void addHorizontalSlider(const char* label, REAL* zone, REAL init, REAL min, REAL max, REAL step) = 0; virtual void addNumEntry(const char* label, REAL* zone, REAL init, REAL min, REAL max, REAL step) = 0; // -- passive widgets virtual void addHorizontalBargraph(const char* label, REAL* zone, REAL min, REAL max) = 0; virtual void addVerticalBargraph(const char* label, REAL* zone, REAL min, REAL max) = 0; // -- soundfiles virtual void addSoundfile(const char* label, const char* filename, Soundfile** sf_zone) = 0; // -- metadata declarations virtual void declare(REAL* zone, const char* key, const char* val) {} }; struct UI : public UIReal<FAUSTFLOAT> { UI() {} virtual ~UI() {} }; The FAUSTFLOAT* zone element is the primary connection point between the control interface and the dsp code. The compiled Faust dsp code will give access to all internal control value addresses used by the dsp code by calling the appropriate addButton , addVerticalSlider , addNumEntry etc. methods (depending on what is described in the original Faust DSP source code). The control/UI code keeps those addresses, and will typically change their pointed values each time a control value in the dsp code has to be changed. On the dsp side, all control values are sampled once at the beginning of the compute method, so as to keep the same value during the entire audio buffer. Writing and reading the control values is typically done in two different threads: the controller (a GUI, OSC or MIDI one, etc.) writes the values, and the real-time audio thread reads them in the audio callback. 
Since writing/reading the FAUSTFLOAT* zone element is atomic, there is no need (in general) for a complex synchronization mechanism between the writer (controller) and the reader (the Faust dsp object). Here is part of the UI classes hierarchy:","title":"UI Architecture Modules"},{"location":"manual/architectures/#active-widgets","text":"Active widgets are graphical elements controlling a parameter value. They are initialized with the widget name and a pointer to the linked value, using the FAUSTFLOAT macro type (defined at compile time as either float or double ). Active widgets in Faust are Button , CheckButton , VerticalSlider , HorizontalSlider and NumEntry . A GUI architecture must implement a method addXxx(const char* name, FAUSTFLOAT* zone, ...) for each active widget. Additional parameters are available for Slider and NumEntry : the init , min , max and step values.","title":"Active Widgets"},{"location":"manual/architectures/#passive-widgets","text":"Passive widgets are graphical elements reflecting values. Similarly to active widgets, they are initialized with the widget name and a pointer to the linked value. Passive widgets in Faust are HorizontalBarGraph and VerticalBarGraph . A UI architecture must implement a method addXxx(const char* name, FAUSTFLOAT* zone, ...) for each passive widget. Additional parameters are available, depending on the passive widget type.","title":"Passive Widgets"},{"location":"manual/architectures/#widgets-layout","text":"Generally, a GUI is hierarchically organized into boxes and/or tab boxes. A UI architecture must support the following methods to set up this hierarchy: openTabBox(const char* label); openHorizontalBox(const char* label); openVerticalBox(const char* label); closeBox(const char* label); Note that all the widgets are added to the current box.","title":"Widgets Layout"},{"location":"manual/architectures/#metadata","text":"The Faust language allows widget labels to contain metadata enclosed in square brackets as key/value pairs. These metadata are handled at GUI level by a declare method taking as arguments a pointer to the widget's associated zone, the metadata key and the value: declare(FAUSTFLOAT* zone, const char* key, const char* value); Here is the list of currently supported general metadata (key: value): tooltip: actual string content; hidden: 0 or 1; unit: Hz or dB; scale: log or exp; style: knob or led or numerical; style: radio{'label1':v1;'label2':v2...}; style: menu{'label1':v1;'label2':v2...}; acc: axe curve amin amid amax; gyr: axe curve amin amid amax; screencolor: red or green or blue or white. Here acc means accelerometer and gyr means gyroscope ; both use the same parameters (a mapping description) but are linked to different sensors. Some typical examples where several metadata are defined could be: nentry(\"freq [unit:Hz][scale:log][acc:0 0 -30 0 30][style:menu{'white noise':0;'pink noise':1;'sine':2}][hidden:0]\", 0, 20, 100, 1) or: vslider(\"freq [unit:dB][style:knob][gyr:0 0 -30 0 30]\", 0, 20, 100, 1) When one or several metadata are added in the same item label, they will appear in the generated code as one or several successive declare(FAUSTFLOAT* zone, const char* key, const char* value); lines before the line describing the item itself. Thus the UI managing code has to associate them with the proper item. Look at the MetaDataUI class for an example of this technique. MIDI specific metadata are described here and are decoded by the MidiUI class. 
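To illustrate this association step, here is a small sketch, deliberately much simpler than the real MetaDataUI class, of a UI subclass that buffers the declare() calls and attaches them to the next widget added. The MySimpleMetaUI name and its storage layout are invented for the example, and only two of the addXxx methods are shown (so the class stays abstract in this fragment):
// Sketch of the metadata association technique (hypothetical 'MySimpleMetaUI' class)
#include <map>
#include <string>

class MySimpleMetaUI : public UI {
    private:
        // Metadata declared since the last widget, waiting to be attached
        std::map<std::string, std::string> fPending;
        // Per-zone metadata: zone -> (key -> value)
        std::map<FAUSTFLOAT*, std::map<std::string, std::string>> fZoneMeta;
        void attach(FAUSTFLOAT* zone) { fZoneMeta[zone] = fPending; fPending.clear(); }
    public:
        // The generated code calls 'declare' *before* the widget the metadata refers to
        void declare(FAUSTFLOAT* zone, const char* key, const char* val) override { fPending[key] = val; }
        // Each addXxx method then attaches the pending metadata to its zone
        void addButton(const char* label, FAUSTFLOAT* zone) override { attach(zone); }
        void addHorizontalSlider(const char* label, FAUSTFLOAT* zone, FAUSTFLOAT init, FAUSTFLOAT min, FAUSTFLOAT max, FAUSTFLOAT step) override { attach(zone); }
        // ... the remaining UI methods would be overridden in the same way ...
};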
Note that medatada are not supported in all architecture files. Some of them like ( acc or gyr for example) only make sense on platforms with accelerometers or gyroscopes sensors. The set of medatada may be extended in the future and can possibly be adapted for a specific project. They can be decoded using the MetaDataUI class.","title":"Metadata"},{"location":"manual/architectures/#graphic-oriented-pure-controllers-code-generator-ui","text":"Even if the UI architecture module is graphic-oriented, a given implementation can perfectly choose to ignore all layout information and only keep the controller ones, like the buttons, sliders, nentries, bargraphs. This is typically what is done in the MidiUI or OSCUI architectures. Note that pure code generator can also be written. The JSONUI UI architecture is an example of an architecture generating the DSP JSON description as a text file.","title":"Graphic-oriented, pure controllers, code generator UI"},{"location":"manual/architectures/#dsp-json-description","text":"The full description of a given compiled DSP can be generated as a JSON file, to be used at several places in the architecture system. This JSON describes the DSP with its inputs/outputs number, some metadata (filename, name, used compilation parameters, used libraries etc.) as well as its UI with a hierarchy of groups up to terminal items ( buttons , sliders , nentries , bargraphs ) with all their parameters ( type , label , shortname , address , meta , init , min , max and step values). For the following DSP program: import(\"stdfaust.lib\"); vol = hslider(\"volume [unit:dB]\", 0, -96, 0, 0.1) : ba.db2linear : si.smoo; freq = hslider(\"freq [unit:Hz]\", 600, 20, 2000, 1); process = vgroup(\"Oscillator\", os.osc(freq) * vol) <: (_,_); The generated JSON file is then: { \"name\": \"foo\", \"filename\": \"foo.dsp\", \"version\": \"2.40.8\", \"compile_options\": \"-lang cpp -es 1 -mcd 16 -single -ftz 0\", \"library_list\": [], \"include_pathnames\": [\"/usr/local/share/faust\"], \"inputs\": 0, \"outputs\": 2, \"meta\": [ { \"basics.lib/name\": \"Faust Basic Element Library\" }, { \"basics.lib/version\": \"0.6\" }, { \"compile_options\": \"-lang cpp -es 1 -mcd 16 -single -ftz 0\" }, { \"filename\": \"foo.dsp\" }, { \"maths.lib/author\": \"GRAME\" }, { \"maths.lib/copyright\": \"GRAME\" }, { \"maths.lib/license\": \"LGPL with exception\" }, { \"maths.lib/name\": \"Faust Math Library\" }, { \"maths.lib/version\": \"2.5\" }, { \"name\": \"tes\" }, { \"oscillators.lib/name\": \"Faust Oscillator Library\" }, { \"oscillators.lib/version\": \"0.3\" }, { \"platform.lib/name\": \"Generic Platform Library\" }, { \"platform.lib/version\": \"0.2\" }, { \"signals.lib/name\": \"Faust Signal Routing Library\" }, { \"signals.lib/version\": \"0.1\" } ], \"ui\": [ { \"type\": \"vgroup\", \"label\": \"Oscillator\", \"items\": [ { \"type\": \"hslider\", \"label\": \"freq\", \"shortname\": \"freq\", \"address\": \"/Oscillator/freq\", \"meta\": [ { \"unit\": \"Hz\" } ], \"init\": 600, \"min\": 20, \"max\": 2000, \"step\": 1 }, { \"type\": \"hslider\", \"label\": \"volume\", \"shortname\": \"volume\", \"address\": \"/Oscillator/volume\", \"meta\": [ { \"unit\": \"dB\" } ], \"init\": 0, \"min\": -96, \"max\": 0, \"step\": 0.1 } ] } ] } The JSON file can be generated with faust -json foo.dsp command, or programmatically using the JSONUI UI architecture (see next Some Useful UI Classes and Tools for Developers section). 
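As a complement to the faust -json command, here is a short sketch of the programmatic route mentioned above; the JSONUI constructor taking the number of inputs/outputs and the JSON() accessor are assumed to behave as in the faust/gui/JSONUI.h header of recent distributions:
// Sketch: generating the JSON description of a DSP instance at runtime
#include <iostream>
#include <string>
#include \"faust/gui/JSONUI.h\"

static std::string dump_json(dsp* DSP)
{
    // JSONUI is a UI subclass that records the UI hierarchy instead of displaying it
    JSONUI json_ui(DSP->getNumInputs(), DSP->getNumOutputs());
    DSP->buildUserInterface(&json_ui);
    return json_ui.JSON();  // assumed accessor returning the JSON string
}

// Typical use, 'mydsp' being the Faust generated class:
// mydsp DSP;
// std::cout << dump_json(&DSP) << std::endl;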
Here is the description of ready-to-use UI classes, followed by classes to be used in developer code:","title":"DSP JSON Description"},{"location":"manual/architectures/#gui-builders","text":"Here is the description of the main GUI classes: the GTKUI class uses the GTK toolkit to create a Graphical User Interface with a proper group-based layout the QTUI class uses the QT toolkit to create a Graphical User Interface with a proper group based layout the JuceUI class uses the JUCE framework to create a Graphical User Interface with a proper group based layout","title":"GUI Builders"},{"location":"manual/architectures/#non-gui-controllers","text":"Here is the description of the main non-GUI controller classes: the OSCUI class implements OSC remote control in both directions the httpdUI class implements HTTP remote control using the libmicrohttpd library to embed a HTTP server inside the application. Then by opening a browser on a specific URL, the GUI will appear and allow to control the distant application or plugin. The connection works in both directions the MIDIUI class implements MIDI control in both directions, and it explained more deeply later on","title":"Non-GUI Controllers"},{"location":"manual/architectures/#some-useful-ui-classes-and-tools-for-developers","text":"Some useful UI classes and tools can possibly be reused in developer code: the MapUI class establishes a mapping beween UI items and their labels , shortname or paths , and offers a setParamValue/getParamValue API to set and get their values. It uses an helper PathBuilder class to create complete shortnames and pathnames to the leaves in the UI hierarchy. Note that the item path encodes the UI hierarchy in the form of a /group1/group2/.../label string and is the way to distinguish control that may have the same label, but different localisation in the UI tree. Using shortnames (built so that they never collide) is an alternative way to access items. The setParamValue/getParamValue API takes either labels , shortname or paths as the way to describe the control, but using shortnames or paths is the safer way to use it the extended APIUI offers setParamValue/getParamValue API similar to MapUI , with additional methods to deal with accelerometer/gyroscope kind of metadata the MetaDataUI class decodes all currently supported metadata and can be used to retrieve their values the JSONUI class allows us to generate the JSON description of a given DSP the JSONUIDecoder class is used to decode the DSP JSON description and implement its buildUserInterface and metadata methods the FUI class allows us to save and restore the parameters state as a text file the SoundUI class with the associated Soundfile class is used to implement the soundfile primitive, and load the described audio resources (typically audio files), by using different concrete implementations, either using libsndfile (with the LibsndfileReader.h file), or JUCE (with the JuceReader file). Paths to sound files can be absolute, but it should be noted that a relative path mechanism can be set up when creating an instance of SoundUI , in order to load sound files with a more flexible strategy. the ControlSequenceUI class with the associated OSCSequenceReader class allow to control parameters change in time, using the OSC time tag format. Changing the control values will have to be mixed with audio rendering. Look at the sndfile.cpp use-case. 
the ValueConverter file contains several mapping classes used to map user interface values (for example a gui slider delivering values between 0 and 1) to Faust values (for example a vslider between 20 and 2000) using linear/log/exp scales. It also provides classes to handle the [acc:a b c d e] and [gyr:a b c d e] Sensors Control Metadatas .","title":"Some Useful UI Classes and Tools for Developers"},{"location":"manual/architectures/#multi-controller-and-synchronization","text":"A given DSP can perfectly be controlled by several UI classes at the same time, and they will all read and write the same DSP control memory zones. Here is an example of code using a GUI using GTKUI architecture, as well as OSC control using OSCUI : ... GTKUI gtk_interface(name, &argc, &argv); DSP->buildUserInterface(>k_interface); OSCUI osc_interface(name, argc, argv); DSP->buildUserInterface(&osc_interface); ... Since several controller access the same values, you may have to synchronize them, in order for instance to have the GUI sliders or buttons reflect the state that would have been changed by the OSCUI controller at reception time, of have OSC messages been sent each time UI items like sliders or buttons are moved. This synchronization mecanism is implemented in a generic way in the GUI class. First the uiItemBase class is defined as the basic synchronizable memory zone, then grouped in a list controlling the same zone from different GUI instances. The uiItemBase::modifyZone method is used to change the uiItemBase state at reception time, and uiItemBase::reflectZone will be called to reflect a new value, and can change the Widget layout for instance, or send a message (OSC, MIDI...). All classes needing to use this synchronization mechanism will have to subclass the GUI class, which keeps all of them at runtime in a static class GUI::fGuiList variable. This is the case for the previously used GTKUI and OSCUI classes. Note that when using the GUI class, the 2 following static class variables have to be defined in the code, (once in one .cpp file in the project) like in this code example: // Globals std::list GUI::fGuiList; ztimedmap GUI::gTimedZoneMap; Finally the static GUI::updateAllGuis() synchronization method will have to be called regularly, in the application or plugin event management loop, or in a periodic timer for instance. This is typically implemented in the GUI::run method which has to be called to start event or messages processing. In the following code, the OSCUI::run method is called first to start processing OSC messages, then the blocking GTKUI::run method, which opens the GUI window, to be closed to finally finish the application: ... // Start OSC messages processing osc_interface.run(); // Start GTK GUI as the last one, since it blocks until the opened window is closed gtk_interface.run() ...","title":"Multi-Controller and Synchronization"},{"location":"manual/architectures/#dsp-architecture-modules","text":"The Faust compiler produces a DSP module whose format will depend of the chosen backend: a C++ class with the -lang cpp option, a data structure with associated functions with the -lang c option, an LLVM IR module with the -lang llvm option, a WebAssembly binary module with the -lang wasm option, a bytecode stream with the -lang interp option... 
and so on.","title":"DSP Architecture Modules"},{"location":"manual/architectures/#the-base-dsp-class","text":"In C++, the generated class derives from a base dsp class: class dsp { public: dsp() {} virtual ~dsp() {} /* Return instance number of audio inputs */ virtual int getNumInputs() = 0; /* Return instance number of audio outputs */ virtual int getNumOutputs() = 0; /** * Trigger the ui_interface parameter with instance specific calls * to 'openTabBox', 'addButton', 'addVerticalSlider'... in order to build the UI. * * @param ui_interface - the user interface builder */ virtual void buildUserInterface(UI* ui_interface) = 0; /* Return the sample rate currently used by the instance */ virtual int getSampleRate() = 0; /** * Global init, calls the following methods: * - static class 'classInit': static tables initialization * - 'instanceInit': constants and instance state initialization * * @param sample_rate - the sampling rate in Hz */ virtual void init(int sample_rate) = 0; /** * Init instance state * * @param sample_rate - the sampling rate in Hz */ virtual void instanceInit(int sample_rate) = 0; /** * Init instance constant state * * @param sample_rate - the sampling rate in HZ */ virtual void instanceConstants(int sample_rate) = 0; /* Init default control parameters values */ virtual void instanceResetUserInterface() = 0; /* Init instance state (like delay lines..) but keep the control parameter values */ virtual void instanceClear() = 0; /** * Return a clone of the instance. * * @return a copy of the instance on success, otherwise a null pointer. */ virtual dsp* clone() = 0; /** * Trigger the Meta* parameter with instance specific calls to 'declare' * (key, value) metadata. * * @param m - the Meta* meta user */ virtual void metadata(Meta* m) = 0; /** * DSP instance computation, to be called with successive in/out audio buffers. * * @param count - the number of frames to compute * @param inputs - the input audio buffers as an array of non-interleaved * FAUSTFLOAT samples (eiher float, double or quad) * @param outputs - the output audio buffers as an array of non-interleaved * FAUSTFLOAT samples (eiher float, double or quad) * */ virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) = 0; /** * Alternative DSP instance computation method for use by subclasses, incorporating an additional `date_usec` parameter, * which specifies the timestamp of the first sample in the audio buffers. * * @param date_usec - the timestamp in microsec given by audio driver. By convention timestamp of -1 means 'no timestamp conversion', * events already have a timestamp expressed in frames. * @param count - the number of frames to compute * @param inputs - the input audio buffers as an array of non-interleaved * FAUSTFLOAT samples (either float, double or quad) * @param outputs - the output audio buffers as an array of non-interleaved * FAUSTFLOAT samples (either float, double or quad) * */ virtual void compute(double date_usec, int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) = 0; }; The dsp class is central to the Faust architecture design: the getNumInputs , getNumOutputs methods provides information about the signal processor the buildUserInterface method creates the user interface using a given UI class object (see later) the init method (and some more specialized methods like instanceInit , instanceConstants , etc.) 
is called to initialize the dsp object with a given sampling rate, typically obtained from the audio architecture the compute method is called by the audio architecture to execute the actual audio processing. It takes a count number of samples to process, and inputs and outputs arrays of non-interleaved float/double samples, to be allocated and handled by the audio driver with the required dsp input and outputs channels (as given by getNumInputs and getNumOutputs ) the clone method can be used to duplicate the instance the metadata(Meta* m) method can be called with a Meta object to decode the instance global metadata (see next section) (note that FAUSTFLOAT label is typically defined to be the actual type of sample: either float or double using #define FAUSTFLOAT float in the code for instance). For a given compiled DSP program, the compiler will generate a mydsp subclass of dsp and fill the different methods (the actual name can be changed using the -cn option). For dynamic code producing backends like the LLVM IR, Cmajor or the Interpreter ones, the actual code (an LLVM module, a Cmajor module or a bytecode stream) is actually wrapped by some additional C++ code glue, to finally produces an llvm_dsp typed object (defined in the llvm-dsp.h file), a cmajorpatch_dsp typed object (defined in the cmajorpatch-dsp.h file) or an interpreter_dsp typed object (defined in interpreter-dsp.h file), ready to be used with the UI and audio C++ classes (like the C++ generated class). See the following class diagram:","title":"The Base dsp Class"},{"location":"manual/architectures/#global-dsp-metadata","text":"All global metadata declaration in Faust start with declare , followed by a key and a string. For example: declare name \"Noise\"; allows us to specify the name of a Faust program in its whole. Unlike regular comments, metadata declarations will appear in the C++ code generated by the Faust compiler, for instance the Faust program: declare name \"NoiseProgram\"; declare author \"MySelf\"; declare copyright \"MyCompany\"; declare version \"1.00\"; declare license \"BSD\"; import(\"stdfaust.lib\"); process = no.noise; will generate the following C++ metadata(Meta* m) method in the dsp class: void metadata(Meta* m) { m->declare(\"author\", \"MySelf\"); m->declare(\"compile_options\", \"-lang cpp -es 1 -scal -ftz 0\"); m->declare(\"copyright\", \"MyCompany\"); m->declare(\"filename\", \"metadata.dsp\"); m->declare(\"license\", \"BSD\"); m->declare(\"name\", \"NoiseProgram\"); m->declare(\"noises.lib/name\", \"Faust Noise Generator Library\"); m->declare(\"noises.lib/version\", \"0.0\"); m->declare(\"version\", \"1.00\"); } which interacts with an instance of an implementation class of the following virtual Meta class: struct Meta { virtual ~Meta() {}; virtual void declare(const char* key, const char* value) = 0; }; and are part of three different types of global metadata: metadata like compile_options or filename are automatically generated metadata like author of copyright are part of the Global Medata metadata like noises.lib/name are part of the Function Metadata Specialized subclasses of the Meta class can be implemented to decode the needed key/value pairs for a given use-case.","title":"Global DSP metadata"},{"location":"manual/architectures/#macro-construction-of-dsp-components","text":"The Faust program specification is usually entirely done in the language itself. But in some specific cases it may be useful to develop separated DSP components and combine them in a more complex setup. 
Since taking advantage of the huge number of already available UI and audio architecture files is important, keeping the same dsp API is preferable, so that more complex DSP can be controlled and audio rendered the usual way. Extended DSP classes will typically subclass the dsp base class and override or complete part of its API.","title":"Macro Construction of DSP Components"},{"location":"manual/architectures/#dsp-decorator-pattern","text":"A dsp_decorator class, subclass of the root dsp class has first been defined. Following the decorator design pattern, it allows behavior to be added to an individual object, either statically or dynamically. As an example of the decorator pattern, the timed_dsp class allows to decorate a given DSP with sample accurate control capability or the mydsp_poly class for polyphonic DSPs, explained in the next sections.","title":"DSP Decorator Pattern"},{"location":"manual/architectures/#combining-dsp-components","text":"A few additional macro construction classes, subclasses of the root dsp class have been defined in the dsp-combiner.h header file with a five operators construction API: the dsp_sequencer class combines two DSP in sequence, assuming that the number of outputs of the first DSP equals the number of input of the second one. It somewhat mimics the sequence (that is : ) operator of the language by combining two separated C++ objects. Its buildUserInterface method is overloaded to group the two DSP in a tabgroup, so that control parameters of both DSPs can be individually controlled. Its compute method is overloaded to call each DSP compute in sequence, using an intermediate output buffer produced by first DSP as the input one given to the second DSP. the dsp_parallelizer class combines two DSP in parallel. It somewhat mimics the parallel (that is , ) operator of the language by combining two separated C++ objects. Its getNumInputs/getNumOutputs methods are overloaded by correctly reflecting the input/output of the resulting DSP as the sum of the two combined ones. Its buildUserInterface method is overloaded to group the two DSP in a tabgroup, so that control parameters of both DSP can be individually controlled. Its compute method is overloaded to call each DSP compute, where each DSP consuming and producing its own number of input/output audio buffers taken from the method parameters. This methology is followed to implement the three remaining composition operators ( split , merge , recursion ), which ends up with a C++ API to combine DSPs with the usual five operators: createDSPSequencer , createDSPParallelizer , createDSPSplitter , createDSPMerger , createDSPRecursiver to be used at C++ level to dynamically combine DSPs. And finally the createDSPCrossfader tool allows you to crossfade between two DSP modules. The crossfade parameter (as a slider) controls the mix between the two modules outputs. When Crossfade = 1 , the first DSP only is computed, when Crossfade = 0 , the second DSP only is computed, otherwise both DSPs are computed and mixed. Note that this idea of decorating or combining several C++ dsp objects can perfectly be extended in specific projects, to meet other needs: like muting some part of a graph of several DSPs for instance. 
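As a minimal sketch of this kind of C++ level combination (the mysynth and myreverb classes are hypothetical Faust-generated DSPs, the first one having as many outputs as the second one has inputs; the header paths are the usual assumed locations):

#include "faust/dsp/dsp-combiner.h"   // dsp_sequencer and the other combiners
#include "faust/gui/GTKUI.h"

// Combine the two generated DSPs in sequence: the result is a regular 'dsp'
// object exposing the usual getNumInputs/getNumOutputs/buildUserInterface/compute API
dsp* chain = new dsp_sequencer(new mysynth(), new myreverb());

// Both groups of controls appear in the same interface, grouped in a tabgroup
GTKUI interface(name, &argc, &argv);   // 'name', 'argc', 'argv' coming from main
chain->buildUserInterface(&interface);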
But keep in mind that keeping the dsp API then allows to take profit of all already available UI and audio based classes.","title":"Combining DSP Components"},{"location":"manual/architectures/#sample-accurate-control","text":"DSP audio languages usually deal with several timing dimensions when treating control events and generating audio samples. For performance reasons, systems maintain separated audio rate for samples generation and control rate for asynchronous messages handling. The audio stream is most often computed by blocks, and control is updated between blocks. To smooth control parameter changes, some languages chose to interpolate parameter values between blocks. In some cases control may be more finely interleaved with audio rendering, and some languages simply choose to interleave control and sample computation at sample level. Although the Faust language permits the description of sample level algorithms (i.e., like recursive filters, etc.), Faust generated DSP are usually computed by blocks. Underlying audio architectures give a fixed size buffer over and over to the DSP compute method which consumes and produces audio samples.","title":"Sample Accurate Control"},{"location":"manual/architectures/#control-to-dsp-link","text":"In the current version of the Faust generated code, the primary connection point between the control interface and the DSP code is simply a memory zone. For control inputs, the architecture layer continuously write values in this zone, which is then sampled by the DSP code at the beginning of the compute method, and used with the same values throughout the call. Because of this simple control/DSP connexion mechanism, the most recent value is used by the DSP code. Similarly for control outputs , the DSP code inside the compute method possibly writes several values at the same memory zone, and the last value only will be seen by the control architecture layer when the method finishes. Although this behaviour is satisfactory for most use-cases, some specific usages need to handle the complete stream of control values with sample accurate timing. For instance keeping all control messages and handling them at their exact position in time is critical for proper MIDI clock synchronisation.","title":"Control to DSP Link"},{"location":"manual/architectures/#timestamped-control","text":"The first step consists in extending the architecture control mechanism to deal with timestamped control events. Note that this requires the underlying event control layer to support this capability. The native MIDI API for instance is usually able to deliver timestamped MIDI messages. The next step is to keep all timestamped events in a time ordered data structure to be continuously written by the control side, and read by the audio side. Finally the sample computation has to take account of all queued control events, and correctly change the DSP control state at successive points in time.","title":"Timestamped Control"},{"location":"manual/architectures/#slices-based-dsp-computation","text":"With timestamped control messages, changing control values at precise sample indexes on the audio stream becomes possible. A generic slices based DSP rendering strategy has been implemented in the timed_dsp class. A ring-buffer is used to transmit the stream of timestamped events from the control layer to the DSP one. 
In the case of MIDI control for instance, the ring-buffer is written with a pair containing the timestamp expressed in samples (or microseconds) and the actual MIDI message each time one is received. In the DSP compute method, the ring-buffer will be read to handle all messages received during the previous audio block. Since control values can change several times inside the same audio block, the DSP compute cannot be called only once with the total number of frames and the complete inputs/outputs audio buffers. The following strategy has to be used: several slices are defined with control values changing between consecutive slices; all control values having the same timestamp are handled together, and change the DSP control internal state. The slice is computed up to the next control parameters timestamp, until the end of the given audio block is reached. In the next figure, four slices with the sequence of c1, c2, c3, c4 frames are successively given to the DSP compute method, with the appropriate part of the audio input/output buffers. Control values (appearing here as [v1,v2,v3] , then [v1,v3] , then [v1] , then [v1,v2,v3] sets) are changed between slices. Since timestamped control messages from the previous audio block are used in the current block, control messages are always handled with one audio buffer latency. Note that this slices based computation model can always be directly implemented on top of the underlying audio layer, without relying on the timed_dsp wrapper class.","title":"Slices Based DSP Computation"},{"location":"manual/architectures/#audio-driver-timestamping","text":"Some audio drivers can get the timestamp of the first sample in the audio buffers, and will typically call the alternative DSP compute(double date_usec, int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) function with the correct timestamp. By convention, a timestamp of -1 means 'no timestamp conversion': events already have a timestamp expressed in frames (see jackaudio_midi for an example driver using timestamps expressed in frames). The timed_dsp wrapper class is an example of a DSP class actually using the timestamp information.","title":"Audio driver timestamping"},{"location":"manual/architectures/#typical-use-case","text":"A typical Faust program can use the MIDI clock command signal to possibly compute the Beats Per Minute (BPM) information for any synchronization need it may have. Here is a simple example of a sinusoid generator with a frequency controlled by the MIDI clock stream, and starting/stopping when receiving the MIDI start/stop messages: import(\"stdfaust.lib\"); // square signal (1/0), changing state // at each received clock clocker = checkbox(\"MIDI clock[midi:clock]\"); // ON/OFF button controlled // with MIDI start/stop messages play = checkbox(\"On/Off [midi:start][midi:stop]\"); // detect front front(x) = (x-x') != 0.0; // count number of peaks during one second freq(x) = (x-x@ma.SR) : + ~ _; process = os.osc(8*freq(front(clocker))) * play; Each received group of 24 clocks will move the time position by exactly one beat. It is then absolutely mandatory to never lose any MIDI clock message, and the standard memory zone based model, with its 'use the last received control value' semantics, is not adapted. 
The DSP object that needs to be controlled using the sample-accurate machinery can then simply be decorated using the timed_dsp class with the following kind of code: dsp* sample_accurate_dsp = new timed_dsp(DSP); Note that the described sample accurate MIDI clock synchronization model can currently only be used at input level. Because of the simple memory zone based connection point between the control interface and the DSP code, output controls (like bargraph) cannot generate a stream of control values. Thus a reliable MIDI clock generator cannot be implemented with the current approach.","title":"Typical Use-Case"},{"location":"manual/architectures/#polyphonic-instruments","text":"Directly programing polyphonic instruments in Faust is perfectly possible. It is also needed if very complex signal interaction between the different voices have to be described. But since all voices would always be computed, this approach could be too CPU costly for simpler or more limited needs. In this case describing a single voice in a Faust DSP program and externally combining several of them with a special polyphonic instrument aware architecture file is a better solution. Moreover, this special architecture file takes care of dynamic voice allocations and control MIDI messages decoding and mapping.","title":"Polyphonic Instruments"},{"location":"manual/architectures/#polyphonic-ready-dsp-code","text":"By convention Faust architecture files with polyphonic capabilities expect to find control parameters named freq , gain , and gate . The metadata declare nvoices \"8\"; kind of line with a desired value of voices can be added in the source code. In the case of MIDI control, the freq parameter (which should be a frequency) will be automatically computed from MIDI note numbers, gain (which should be a value between 0 and 1) from velocity and gate from keyon/keyoff events. Thus, gate can be used as a trigger signal for any envelope generator, etc.","title":"Polyphonic ready DSP Code"},{"location":"manual/architectures/#using-the-mydsp_poly-class","text":"The single voice has to be described by a Faust DSP program, the mydsp_poly class is then used to combine several voices and create a polyphonic ready DSP: the poly-dsp.h file contains the definition of the mydsp_poly class used to wrap the DSP voice into the polyphonic architecture. This class maintains an array of dsp* objects, manage dynamic voice allocation, control MIDI messages decoding and mapping, mixing of all running voices, and stopping a voice when its output level decreases below a given threshold as a subclass of DSP, the mydsp_poly class redefines the buildUserInterface method. By convention all allocated voices are grouped in a global Polyphonic tabgroup. The first tab contains a Voices group, a master like component used to change parameters on all voices at the same time, with a Panic button to be used to stop running voices, followed by one tab for each voice. Graphical User Interface components will then reflect the multi-voices structure of the new polyphonic DSP The resulting polyphonic DSP object can be used as usual, connected with the needed audio driver, and possibly other UI control objects like OSCUI , httpdUI , etc. Having this new UI hierarchical view allows complete OSC control of each single voice and their control parameters, but also all voices using the master component. 
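To make this concrete, here is a minimal sketch wrapping a generated voice DSP in mydsp_poly and exposing the resulting hierarchy to OSC control (the mydsp voice class name, the JACK audio driver and the header paths are assumptions here; any audio class can be used):

#include "faust/dsp/poly-dsp.h"    // mydsp_poly
#include "faust/gui/OSCUI.h"
#include "faust/audio/jack-dsp.h"  // jackaudio, as one possible driver

// 8 voices, dynamic voice allocation enabled, voices grouped in the UI
mydsp_poly* poly = new mydsp_poly(new mydsp(), 8, true, true);

// Expose the /Polyphonic/... hierarchy to OSC control
OSCUI osc_interface(name, argc, argv);   // 'name', 'argc', 'argv' coming from main
poly->buildUserInterface(&osc_interface);

// Render with any audio driver, here JACK
jackaudio audio;
audio.init(name, poly);
osc_interface.run();
audio.start();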
The following OSC messages reflect the same DSP code either compiled normally, or in polyphonic mode (only part of the OSC hierarchies are displayed here): // Mono mode /Organ/vol f -10.0 /Organ/pan f 0.0 // Polyphonic mode /Polyphonic/Voices/Organ/pan f 0.0 /Polyphonic/Voices/Organ/vol f -10.0 ... /Polyphonic/Voice1/Organ/vol f -10.0 /Polyphonic/Voice1/Organ/pan f 0.0 ... /Polyphonic/Voice2/Organ/vol f -10.0 /Polyphonic/Voice2/Organ/pan f 0.0 Note that to save space on the screen, the /Polyphonic/VoiceX/xxx syntax is used when the number of allocated voices is less than 8, then the /Polyphonic/VX/xxx syntax is used when more voices are used. The polyphonic instrument allocation takes the DSP to be used for one voice, the desired number of voices, the dynamic voice allocation state, and the group state which controls if separated voices are displayed or not: dsp* poly = new mydsp_poly(dsp, 2, true, true); With the following code, note that a polyphonic instrument may be used outside of a MIDI control context, so that all voices will be always running and possibly controlled with OSC messages for instance: dsp* poly = new mydsp_poly(dsp, 8, false, true);","title":"Using the mydsp_poly Class"},{"location":"manual/architectures/#polyphonic-instrument-with-a-global-output-effect","text":"Polyphonic instruments may be used with an output effect. Putting that effect in the main Faust code is generally not a good idea since it would be instantiated for each voice which would be very inefficient. A convention has been defined to use the effect = some effect; line in the DSP source code. The actual effect definition has to be extracted from the DSP code, compiled separately, and then combined using the dsp_sequencer class previously presented to connect the polyphonic DSP in sequence with a unique global effect, with something like: dsp* poly = new dsp_sequencer(new mydsp_poly(dsp, 2, true, true), new effect()); | Some helper classes like the base dsp_poly_factory class, and concrete implementations llvm_dsp_poly_factory when using the LLVM backend or interpreter_dsp_poly_factory when using the Interpreter backend can also be used to automatically handle the voice and effect part of the DSP.","title":"Polyphonic Instrument With a Global Output Effect"},{"location":"manual/architectures/#controlling-the-polyphonic-instrument","text":"The mydsp_poly class is also ready for MIDI control (as a class implementing the midi interface) and can react to keyOn/keyOff and pitchWheel events. Other MIDI control parameters can directly be added in the DSP source code as MIDI metadata. To receive MIDI events, the created polyphonic DSP will be automatically added to the midi_handler object when calling buildUserInterface on a MidiUI object.","title":"Controlling the Polyphonic Instrument"},{"location":"manual/architectures/#deploying-the-polyphonic-instrument","text":"Several architecture files and associated scripts have been updated to handle polyphonic instruments: As an example on OSX, the script faust2caqt foo.dsp can be used to create a polyphonic CoreAudio/QT application. The desired number of voices is either declared in a nvoices metadata or changed with the -nvoices num additional parameter. MIDI control is activated using the -midi parameter. The number of allocated voices can possibly be changed at runtime using the -nvoices parameter to change the default value (so using ./foo -nvoices 16 for instance). Several other scripts have been adapted using the same conventions. 
The command faust2caqt -midi -nvoices 12 inst.dsp -effect effect.dsp has to be used, with inst.dsp and effect.dsp in the same folder, and the number of outputs of the instrument matching the number of inputs of the effect. Polyphonic ready faust2xx scripts will then compile the polyphonic instrument and the effect, combine them in sequence, and create a ready-to-use DSP.","title":"Deploying the Polyphonic Instrument"},{"location":"manual/architectures/#custom-memory-manager","text":"In C and C++, the Faust compiler produces a class (or a struct in C), to be instantiated to create each DSP instance. The standard generation model produces a flat memory layout, where all fields (scalar and arrays) are simply consecutive in the generated code (following the compilation order). So the DSP is allocated on a single block of memory, either on the stack or the heap depending on the use-case. The following DSP program: import(\"stdfaust.lib\"); gain = hslider(\"gain\", 0.5, 0, 1, 0.01); feedback = hslider(\"feedback\", 0.8, 0, 1, 0.01); echo(del_sec, fb, g) = + ~ de.delay(50000, del_samples) * fb * g with { del_samples = del_sec * ma.SR; }; process = echo(1.6, 0.6, 0.7), echo(0.7, feedback, gain); will have the flat memory layout: int IOTA0; int fSampleRate; int iConst1; float fRec0[65536]; FAUSTFLOAT fHslider0; FAUSTFLOAT fHslider1; int iConst2; float fRec1[65536]; So the scalars fHslider0 and fHslider1 correspond to the gain and feedback controllers. The iConst1 and iConst2 values are typically computed once at init time using the dynamically given fSampleRate value, and used in the DSP loop later on. The fRec0 and fRec1 arrays are used for the recursive delays, and finally the shared IOTA0 index is used to access them. Here is the generated compute function: virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { FAUSTFLOAT* input0 = inputs[0]; FAUSTFLOAT* input1 = inputs[1]; FAUSTFLOAT* output0 = outputs[0]; FAUSTFLOAT* output1 = outputs[1]; float fSlow0 = float(fHslider0) * float(fHslider1); for (int i0 = 0; i0 < count; i0 = i0 + 1) { fRec0[IOTA0 & 65535] = float(input0[i0]) + 0.419999987f * fRec0[(IOTA0 - iConst1) & 65535]; output0[i0] = FAUSTFLOAT(fRec0[IOTA0 & 65535]); fRec1[IOTA0 & 65535] = float(input1[i0]) + fSlow0 * fRec1[(IOTA0 - iConst2) & 65535]; output1[i0] = FAUSTFLOAT(fRec1[IOTA0 & 65535]); IOTA0 = IOTA0 + 1; } }","title":"Custom Memory Manager"},{"location":"manual/architectures/#the-mem-option","text":"On audio boards where the memory is separated into several blocks (like SRAM, SDRAM\u2026) with different access times, it becomes important to refine the DSP memory model so that the DSP structure will not be allocated on a single block of memory, but possibly distributed on all available blocks. The idea is then to allocate parts of the DSP that are often accessed in fast memory and the other ones in slow memory. The first remark is that scalar values will typically stay in the DSP structure, and the point is to move the big array buffers ( fRec0 and fRec1 in the example) into separate memory blocks. The -mem (--memory-manager) option can be used to generate adapted code. On the previous DSP program, we now have the following generated C++ code: int IOTA0; int fSampleRate; int iConst1; float* fRec0; FAUSTFLOAT fHslider0; FAUSTFLOAT fHslider1; int iConst2; float* fRec1; The two fRec0 and fRec1 arrays become pointers, and will be allocated elsewhere. An external memory manager is needed to interact with the DSP code. 
The proposed model does the following: in a first step the generated C++ code will inform the memory allocator about its needs in terms of 1) number of separated memory zones, with 2) their size 3) access characteristics, like number of Read and Write for each frame computation. This is done be generating an additional static memoryInfo method with the complete information available, the memory manager can then define the best strategy to allocate all separated memory zones an additional memoryCreate method is generated to allocate each of the separated zones an additional memoryDestroy method is generated to deallocate each of the separated zones Here is the API for the memory manager: struct dsp_memory_manager { virtual ~dsp_memory_manager() {} /** * Inform the Memory Manager with the number of expected memory zones. * @param count - the number of memory zones */ virtual void begin(size_t count); /** * Give the Memory Manager information on a given memory zone. * @param size - the size in bytes of the memory zone * @param reads - the number of Read access to the zone used to compute one frame * @param writes - the number of Write access to the zone used to compute one frame */ virtual void info(size_t size, size_t reads, size_t writes) {} /** * Inform the Memory Manager that all memory zones have been described, * to possibly start a 'compute the best allocation strategy' step. */ virtual void end(); /** * Allocate a memory zone. * @param size - the memory zone size in bytes */ virtual void* allocate(size_t size) = 0; /** * Destroy a memory zone. * @param ptr - the memory zone pointer to be deallocated */ virtual void destroy(void* ptr) = 0; }; A class static member is added in the mydsp class, to be set with an concrete memory manager later on: dsp_memory_manager* mydsp::fManager = nullptr; The C++ generated code now contains a new memoryInfo method, which interacts with the memory manager: static void memoryInfo() { fManager->begin(3); // mydsp fManager->info(56, 9, 1); // fRec0 fManager->info(262144, 2, 1); // fRec1 fManager->info(262144, 2, 1); fManager->end(); } The begin method is first generated to inform that three separated memory zones will be needed. Then three consecutive calls to the info method are generated, one for the DSP object itself, one for each recursive delay array. The end method is then called to finish the memory layout description, and let the memory manager prepare the actual allocations. Note that the memory layout information is also available in the JSON file generated using the -json option, to possibly be used statically by the architecture machinery (that is at compile time). With the previous program, the memory layout section is: \"memory_layout\": [ { \"name\": \"mydsp\", \"type\": \"kObj_ptr\", \"size\": 0, \"size_bytes\": 56, \"read\": 9, \"write\": 1 }, { \"name\": \"IOTA0\", \"type\": \"kInt32\", \"size\": 1, \"size_bytes\": 4, \"read\": 7, \"write\": 1 }, { \"name\": \"iConst1\", \"type\": \"kInt32\", \"size\": 1, \"size_bytes\": 4, \"read\": 1, \"write\": 0 }, { \"name\": \"fRec0\", \"type\": \"kFloat_ptr\", \"size\": 65536, \"size_bytes\": 262144, \"read\": 2, \"write\": 1 }, { \"name\": \"iConst2\", \"type\": \"kInt32\", \"size\": 1, \"size_bytes\": 4, \"read\": 1, \"write\": 0 }, { \"name\": \"fRec1\", \"type\": \"kFloat_ptr\", \"size\": 65536, \"size_bytes\": 262144, \"read\": 2, \"write\": 1 } ] Finally the memoryCreate and memoryDestroy methods are generated. 
The memoryCreate method asks the memory manager to allocate the fRec0 and fRec1 buffers: void memoryCreate() { fRec0 = static_cast(fManager->allocate(262144)); fRec1 = static_cast(fManager->allocate(262144)); } And the memoryDestroy method asks the memory manager to destroy them: virtual memoryDestroy() { fManager->destroy(fRec0); fManager->destroy(fRec1); } Additional static create/destroy methods are generated: static mydsp* create() { mydsp* dsp = new (fManager->allocate(sizeof(mydsp))) mydsp(); dsp->memoryCreate(); return dsp; } static void destroy(dsp* dsp) { static_cast(dsp)->memoryDestroy(); fManager->destroy(dsp); } Note that the so-called C++ placement new will be used to allocate the DSP object itself.","title":"The -mem option"},{"location":"manual/architectures/#static-tables","text":"When rdtable or rwtable primitives are used in the source code, the C++ class will contain a table shared by all instances of the class. By default, this table is generated as a static class array, and so allocated in the application global static memory. Taking the following DSP example: process = (waveform {10,20,30,40,50,60,70}, %(7)~+(3) : rdtable), (waveform {1.1,2.2,3.3,4.4,5.5,6.6,7.7}, %(7)~+(3) : rdtable); Here is the generated code in default mode: ... static int itbl0mydspSIG0[7]; static float ftbl1mydspSIG1[7]; class mydsp : public dsp { ... public: ... static void classInit(int sample_rate) { mydspSIG0* sig0 = newmydspSIG0(); sig0->instanceInitmydspSIG0(sample_rate); sig0->fillmydspSIG0(7, itbl0mydspSIG0); mydspSIG1* sig1 = newmydspSIG1(); sig1->instanceInitmydspSIG1(sample_rate); sig1->fillmydspSIG1(7, ftbl1mydspSIG1); deletemydspSIG0(sig0); deletemydspSIG1(sig1); } virtual void init(int sample_rate) { classInit(sample_rate); instanceInit(sample_rate); } virtual void instanceInit(int sample_rate) { instanceConstants(sample_rate); instanceResetUserInterface(); instanceClear(); } ... } The two itbl0mydspSIG0 and ftbl1mydspSIG1 tables are static global arrays. They are filled in the classInit method. The architecture code will typically call the init method (which calls classInit ) on a given DSP, to allocate class related arrays and the DSP itself. If several DSPs are going to be allocated, calling classInit only once then the instanceInit method on each allocated DSP is the way to go. In the -mem mode, the generated C++ code is now: ... static int* itbl0mydspSIG0 = 0; static float* ftbl1mydspSIG1 = 0; class mydsp : public dsp { ... public: ... static dsp_memory_manager* fManager; static void classInit(int sample_rate) { mydspSIG0* sig0 = newmydspSIG0(fManager); sig0->instanceInitmydspSIG0(sample_rate); itbl0mydspSIG0 = static_cast(fManager->allocate(28)); sig0->fillmydspSIG0(7, itbl0mydspSIG0); mydspSIG1* sig1 = newmydspSIG1(fManager); sig1->instanceInitmydspSIG1(sample_rate); ftbl1mydspSIG1 = static_cast(fManager->allocate(28)); sig1->fillmydspSIG1(7, ftbl1mydspSIG1); deletemydspSIG0(sig0, fManager); deletemydspSIG1(sig1, fManager); } static void classDestroy() { fManager->destroy(itbl0mydspSIG0); fManager->destroy(ftbl1mydspSIG1); } virtual void init(int sample_rate) {} virtual void instanceInit(int sample_rate) { instanceConstants(sample_rate); instanceResetUserInterface(); instanceClear(); } ... } The two itbl0mydspSIG0 and ftbl1mydspSIG1 tables are generated as static global pointers. The classInit method uses the fManager object used to allocate tables. A new classDestroy method is generated to deallocate the tables. 
Finally the init method is now empty, since the architecture file is supposed to use the classInit/classDestroy method once to allocate and deallocate static tables, and the instanceInit method on each allocated DSP. The memoryInfo method now has the following shape, with the two itbl0mydspSIG0 and ftbl1mydspSIG1 tables: static void memoryInfo() { fManager->begin(6); // mydspSIG0 fManager->info(4, 0, 0); // itbl0mydspSIG0 fManager->info(28, 1, 0); // mydspSIG1 fManager->info(4, 0, 0); // ftbl1mydspSIG1 fManager->info(28, 1, 0); // mydsp fManager->info(28, 0, 0); // iRec0 fManager->info(8, 3, 2); fManager->end(); }","title":"Static tables"},{"location":"manual/architectures/#defining-and-using-a-custom-memory-manager","text":"When compiled with the -mem option, the client code has to define an adapted memory_manager class for its specific needs. A cutom memory manager is implemented by subclassing the dsp_memory_manager abstract base class, and defining the begin , end , \u00ecnfo , allocate and destroy methods. Here is an example of a simple heap allocating manager (implemented in the dummy-mem.cpp architecture file): struct malloc_memory_manager : public dsp_memory_manager { virtual void begin(size_t count) { // TODO: use \u2018count\u2019 } virtual void end() { // TODO: start sorting the list of memory zones, to prepare // for the future allocations done in memoryCreate() } virtual void info(size_t size, size_t reads, size_t writes) { // TODO: use 'size', \u2018reads\u2019 and \u2018writes\u2019 // to prepare memory layout for allocation } virtual void* allocate(size_t size) { // TODO: refine the allocation scheme to take // in account what was collected in info return calloc(1, size); } virtual void destroy(void* ptr) { // TODO: refine the allocation scheme to take // in account what was collected in info free(ptr); } }; The specialized malloc_memory_manager class can now be used the following way: // Allocate a global static custom memory manager static malloc_memory_manager gManager; // Setup the global custom memory manager on the DSP class mydsp::fManager = &gManager; // Make the memory manager get information on all subcontainers, // static tables, DSP and arrays and prepare memory allocation mydsp::memoryInfo(); // Done once before allocating any DSP, to allocate static tables mydsp::classInit(44100); // \u2018Placement new\u2019 and 'memoryCreate' are used inside the \u2018create\u2019 method dsp* DSP = mydsp::create(); // Init the DSP instance DSP->instanceInit(44100); ... ... // use the DSP ... // 'memoryDestroy' and memory manager 'destroy' are used to deallocate memory mydsp::destroy(); // Done once after the last DSP has been destroyed mydsp::classDestroy(); Note that the client code can still choose to allocate/deallocate the DSP instance using the regular C++ new/delete operators: // Allocate a global static custom memory manager static malloc_memory_manager gManager; // Setup the global custom memory manager on the DSP class mydsp::fManager = &gManager; // Make the memory manager get information on all subcontainers, // static tables, DSP and arrays and prepare memory allocation mydsp::memoryInfo(); // Done once before allocating any DSP, to allocate static tables mydsp::classInit(44100); // Use regular C++ new dsp* DSP = new mydsp(); /// Allocate internal buffers DSP->memoryCreate(); // Init the DSP instance DSP->instanceInit(44100); ... ... // use the DSP ... 
// Deallocate internal buffers DSP->memoryDestroy(); // Use regular C++ delete delete DSP; // Done once after the last DSP has been destroyed mydsp::classDestroy(); Or even on the stack with: ... // Allocation on the stack mydsp DSP; // Allocate internal buffers DSP.memoryCreate(); // Init the DSP instance DSP.instanceInit(44100); ... ... // use the DSP ... // Deallocate internal buffers DSP.memoryDestroy(); ... More complex custom memory allocators can be developed by refining this malloc_memory_manager example, possibly defining real-time memory allocators...etc... The OWL architecture file uses a custom OwlMemoryManager .","title":"Defining and using a custom memory manager"},{"location":"manual/architectures/#allocating-several-dsp-instances","text":"In a multiple instances scheme, static data structures shared by all instances have to be allocated once at beginning using mydsp::classInit , and deallocated at the end using mydsp::classDestroy . Individual instances are then allocated with mydsp::create() and deallocated with mydsp::destroy() , possibly directly using regular new/delete , or using stack allocation as explained before.","title":"Allocating several DSP instances"},{"location":"manual/architectures/#measuring-the-dsp-cpu","text":"The measure_dsp class defined in the faust/dsp/dsp-bench.h file allows to decorate a given DSP object and measure its compute method CPU consumption. Results are given in Megabytes/seconds (higher is better) and DSP CPU at 44,1 kHz. Here is a C++ code example of its use: static void bench(dsp* dsp, const string& name) { // Init the DSP dsp->init(48000); // Wraps it with a 'measure_dsp' decorator measure_dsp mes(dsp, 1024, 5); // Measure the CPU use mes.measure(); // Returns the Megabytes/seconds and relative standard deviation values std::pair res = mes.getStats(); // Print the stats cout << name << \" MBytes/sec : \" << res.first << \" \" << \"(DSP CPU % : \" << (mes.getCPULoad() * 100) << \")\" << endl; } Defined in the faust/dsp/dsp-optimizer.h file, the dsp_optimizer class uses the libfaust library and its LLVM backend to dynamically compile DSP objects produced with different Faust compiler options, and then measure their DSP CPU. Here is a C++ code example of its use: static void dynamic_bench(const string& in_filename) { // Init the DSP optimizer with the in_filename to compile dsp_optimizer optimizer(in_filename, 0, nullptr, \"\", 1024); // Discover the best set of parameters tuple res = optimizer.findOptimizedParameters(); cout << \"Best value for '\" << in_filename << \"' is : \" << get<0>(res) << \" MBytes/sec with \"; for (size_t i = 0; i < get<3>(res).size(); i++) { cout << get<3>(res)[i] << \" \"; } cout << endl; } This class can typically be used in tools that help developers discover the best Faust compilation parameters for a given DSP program, like the faustbench and faustbench-llvm tools.","title":"Measuring the DSP CPU"},{"location":"manual/architectures/#the-proxy-dsp-class","text":"In some cases, a DSP may run outside of the application or plugin context, like on another machine. The proxy_dsp class allows to create a proxy DSP that will be finally connected to the real one (using an OSC or HTTP based machinery for instance), and will reflect its behaviour. It uses the previously described JSONUIDecoder class. Then the proxy_dsp can be used in place of the real DSP, and connected with UI controllers using the standard buildUserInterface to control it. 
The faust-osc-controller tool demonstrates this capability using an OSC connection between the real DSP and its proxy. The proxy_osc_dsp class implements a specialized proxy_dsp using the liblo OSC library to connect to an OSC controllable DSP (which is using the OSCUI class and running in another context or machine). Then the faust-osc-controller program creates a real GUI (using GTKUI in this example) and has it control the remote DSP and reflect its dynamic state (like vumeter values coming back from the real DSP).","title":"The Proxy DSP Class"},{"location":"manual/architectures/#embedded-platforms","text":"Faust has been targeting an increasing number of embedded platforms for real-time audio signal processing applications in recent years. It can now be used to program microcontrollers (i.e., ESP32 , Teensy , Pico DSP and Daisy ), mobile platforms, embedded Linux systems (i.e., Bela and Elk ), Digital Signal Processors (DSPs), and more. Specialized architecture files and faust2xx scripts have been developed.","title":"Embedded Platforms"},{"location":"manual/architectures/#metadata-naming-convention","text":"A specific question arises when dealing with devices that have no screen (or only a limited one) to display any GUI, and a set of physical knobs or buttons to be connected to control parameters. The standard way is then to use metadata in control labels. Since being able to use the same DSP file on all devices is always desirable, a common set of metadata has been defined: [switch:N] is used to connect to switch buttons [knob:N] is used to connect to knobs An extended set of metadata will probably have to be progressively defined and standardized.","title":"Metadata Naming Convention"},{"location":"manual/architectures/#using-the-uim-compiler-option","text":"On embedded platforms with limited capabilities, using the -uim option can be helpful. The C/C++ generated code then contains a static description of several characteristics of the DSP, like the number of audio inputs/outputs , the number of control inputs/outputs , and macros fed with the control parameters (label, DSP field name, init, min, max, step) that can be implemented in the architecture file for various needs. For example, the following DSP program: process = _*hslider(\"Gain\", 0, 0, 1, 0.01) : hbargraph(\"Vol\", 0, 1); compiled with faust -uim foo.dsp gives this additional section: #ifdef FAUST_UIMACROS #define FAUST_FILE_NAME \"foo.dsp\" #define FAUST_CLASS_NAME \"mydsp\" #define FAUST_INPUTS 1 #define FAUST_OUTPUTS 1 #define FAUST_ACTIVES 1 #define FAUST_PASSIVES 1 FAUST_ADDHORIZONTALSLIDER(\"Gain\", fHslider0, 0.0f, 0.0f, 1.0f, 0.01f); FAUST_ADDHORIZONTALBARGRAPH(\"Vol\", fHbargraph0, 0.0f, 1.0f); #define FAUST_LIST_ACTIVES(p) \\ p(HORIZONTALSLIDER, Gain, \"Gain\", fHslider0, 0.0f, 0.0f, 1.0f, 0.01f) \\ #define FAUST_LIST_PASSIVES(p) \\ p(HORIZONTALBARGRAPH, Vol, \"Vol\", fHbargraph0, 0.0, 0.0f, 1.0f, 0.0) \\ #endif The FAUST_ADDHORIZONTALSLIDER or FAUST_ADDHORIZONTALBARGRAPH macros can then be implemented to do whatever is needed with the \"Gain\", fHslider0, 0.0f, 0.0f, 1.0f, 0.01f and \"Vol\", fHbargraph0, 0.0f, 1.0f parameters respectively. The more sophisticated FAUST_LIST_ACTIVES and FAUST_LIST_PASSIVES macros can possibly be used to call any p function (defined elsewhere in the architecture file) on each item. 
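As an illustrative (and hypothetical) use of these macros in an architecture file, a p macro matching the eight generated parameters can simply print every active control (assuming stdio.h is included and the code is placed after the generated macro section):

// Hypothetical helper: 'p' receives the 8 parameters generated above
// (type, identifier, label, field name, init, min, max, step)
#define PRINT_ACTIVE(type, id, label, field, init, min, max, step) \
    printf("%s: init=%f range=[%f..%f] step=%f\n", label, (double)(init), (double)(min), (double)(max), (double)(step));

static void printControls()
{
    // Expands to one PRINT_ACTIVE(...) call per active control of the DSP
    FAUST_LIST_ACTIVES(PRINT_ACTIVE)
}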
The minimal-static.cpp file demonstrates this feature.","title":"Using the -uim Compiler Option"},{"location":"manual/architectures/#developing-a-new-architecture-file","text":"Developing a new architecture file typically means writing a generic file, that will be populated with the actual output of the Faust compiler, in order to produce a complete file, ready-to-be-compiled as a standalone application or plugin. The architecture to be used is specified at compile time with the -a option. It must contain the <> and <> lines that will be recognized by the Faust compiler, and replaced by the generated code. Here is an example in C++, but the same logic can be used with other languages producing textual outputs, like C, Cmajor, Rust or Dlang. Look at the minimal.cpp example located in the architecture folder: #include #include \"faust/gui/PrintUI.h\" #include \"faust/gui/meta.h\" #include \"faust/audio/dummy-audio.h\" #include \"faust/dsp/one-sample-dsp.h\" // To be replaced by the compiler generated C++ class <> <> int main(int argc, char* argv[]) { mydsp DSP; std::cout << \"DSP size: \" << sizeof(DSP) << \" bytes\\n\"; // Activate the UI, here that only print the control paths PrintUI ui; DSP.buildUserInterface(&ui); // Allocate the audio driver to render 5 buffers of 512 frames dummyaudio audio(5); audio.init(\"Test\", static_cast(&DSP)); // Render buffers... audio.start(); audio.stop(); } Calling faust -a minimal.cpp noise.dsp -o noise.cpp will produce a ready to compile noise.cpp file: /* ------------------------------------------------------------ name: \"noise\" Code generated with Faust 2.28.0 (https://faust.grame.fr) Compilation options: -lang cpp -scal -ftz 0 ------------------------------------------------------------ */ #ifndef __mydsp_H__ #define __mydsp_H__ #include #include \"faust/gui/PrintUI.h\" #include \"faust/gui/meta.h\" #include \"faust/audio/dummy-audio.h\" #ifndef FAUSTFLOAT #define FAUSTFLOAT float #endif #include #include #ifndef FAUSTCLASS #define FAUSTCLASS mydsp #endif #ifdef __APPLE__ #define exp10f __exp10f #define exp10 __exp10 #endif class mydsp : public dsp { private: FAUSTFLOAT fHslider0; int iRec0[2]; int fSampleRate; public: void metadata(Meta* m) { m->declare(\"filename\", \"noise.dsp\"); m->declare(\"name\", \"noise\"); m->declare(\"noises.lib/name\", \"Faust Noise Generator Library\"); m->declare(\"noises.lib/version\", \"0.0\"); } virtual int getNumInputs() { return 0; } virtual int getNumOutputs() { return 1; } static void classInit(int sample_rate) { } virtual void instanceConstants(int sample_rate) { fSampleRate = sample_rate; } virtual void instanceResetUserInterface() { fHslider0 = FAUSTFLOAT(0.5f); } virtual void instanceClear() { for (int l0 = 0; (l0 < 2); l0 = (l0 + 1)) { iRec0[l0] = 0; } } virtual void init(int sample_rate) { classInit(sample_rate); instanceInit(sample_rate); } virtual void instanceInit(int sample_rate) { instanceConstants(sample_rate); instanceResetUserInterface(); instanceClear(); } virtual mydsp* clone() { return new mydsp(); } virtual int getSampleRate() { return fSampleRate; } virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"noise\"); ui_interface->addHorizontalSlider(\"Volume\", &fHslider0, 0.5, 0.0, 1.0, 0.001); ui_interface->closeBox(); } virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { FAUSTFLOAT* output0 = outputs[0]; float fSlow0 = (4.65661287e-10f * float(fHslider0)); for (int i = 0; (i < count); i = (i + 1)) { iRec0[0] = ((1103515245 * iRec0[1]) + 
12345); output0[i] = FAUSTFLOAT((fSlow0 * float(iRec0[0]))); iRec0[1] = iRec0[0]; } } }; int main(int argc, char* argv[]) { mydsp DSP; std::cout << \"DSP size: \" << sizeof(DSP) << \" bytes\\n\"; // Activate the UI, here that only print the control paths PrintUI ui; DSP.buildUserInterface(&ui); // Allocate the audio driver to render 5 buffers of 512 frames dummyaudio audio(5); audio.init(\"Test\", &DSP); // Render buffers... audio.start(); audio.stop(); } Generally, several files to connect to the audio layer, controller layer, and possibly other (MIDI, OSC...) have to be used. One of them is the main file and include the others. The -i option can be added to actually inline all #include \"faust/xxx/yyy\" headers (all files starting with faust ) to produce a single self-contained unique file. Then a faust2xxx script has to be written to chain the Faust compilation step and the C++ compilation one (and possibly others). Look at the Developing a faust2xx Script section.","title":"Developing a New Architecture File"},{"location":"manual/architectures/#adapting-the-generated-dsp","text":"Developing the adapted C++ file may require aggregating the generated mydsp class (subclass of the dsp base class defined in faust/dsp/dsp.h header) in the specific class, so something like the following would have to be written: class my_class : public base_interface { private: mydsp fDSP; public: my_class() { // Do something specific } virtual ~my_class() { // Do something specific } // Do something specific void my_compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { // Do something specific // Call the fDSP 'compute' fDSP.compute(count, inputs, outputs); } // Do something specific }; or subclassing and extending it , so writing something like: class my_class : public mydsp { private: // Do something specific public: my_class() { // Do something specific } virtual ~my_class() { // Do something specific } // Override the 'compute' method void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { // Do something specific // Call the inherited 'compute' mydsp::compute(count, inputs, outputs); } // Do something specific }; or decorating a DSP object using the decorator pattern , which is already implemented in this file , and can possibly be sub-classed like: class my_decorator : public decorator_dsp { private: // Do something specific public: my_decorator(dsp* dsp):decorator_dsp(dsp) { // Do something specific } virtual ~my_class() { // Do something specific } // Implementation of some of the methods // Override the 'instanceClear' method void instanceClear() { // Do something specific // Call the inherited 'instanceClear' decorator_dsp::instanceClear(); } // Override the 'compute' method void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { // Do something specific // Call the inherited 'compute' decorator_dsp::compute(count, inputs, outputs); } // Do something specific }; // Decorates a concrete instance my_decorator DSP = new my_decorator(new mydsp()); ...","title":"Adapting the Generated DSP"},{"location":"manual/architectures/#developing-new-ui-architectures","text":"For really new architectures, the UI base class, the GenericUI helper class or the GUI class (described before), have to be subclassed. 
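As an example, here is a small sketch of a custom controller made by subclassing GenericUI (assumed here to come from the faust/gui/DecoratorUI.h header), which provides empty default implementations of all UI methods so that only the relevant ones need to be overridden; the MyControlUI name and the printing behaviour are purely illustrative:

#include <iostream>
#include "faust/gui/DecoratorUI.h"   // GenericUI (assumed location)

#ifndef FAUSTFLOAT
#define FAUSTFLOAT float
#endif

struct MyControlUI : public GenericUI {
    // Only the widgets of interest are overridden; everything else keeps
    // the empty GenericUI defaults
    void addHorizontalSlider(const char* label, FAUSTFLOAT* zone,
                             FAUSTFLOAT init, FAUSTFLOAT min,
                             FAUSTFLOAT max, FAUSTFLOAT step) override
    {
        std::cout << "slider '" << label << "' init " << init
                  << " range [" << min << ".." << max << "] step " << step << "\n";
        *zone = init;  // the zone pointer is the DSP control memory to read/write
    }
};

// Usage: MyControlUI ui; DSP->buildUserInterface(&ui);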
Note that a lot of classes presented in the Some useful UI classes for developers section can also be subclassed or possibly enriched with additional code.","title":"Developing New UI Architectures"},{"location":"manual/architectures/#developing-new-audio-architectures","text":"The audio base class has to be subclassed and each method implemented for the given audio hardware. In some cases the audio driver can adapt to the required number of DSP inputs/outputs (like the JACK audio system for instance which can open any number of virtual audio ports). But in general, the number of hardware audio inputs/outputs may not exactly match the DSP ones. This is the responsability of the audio driver to adapt to this situation. The dsp_adapter dsp decorator can help in this case.","title":"Developing New Audio Architectures"},{"location":"manual/architectures/#developing-a-new-soundfile-loader","text":"Soundfiles are defined in the DSP program using the soundfile primitive . Here is a simple DSP program which uses a single tango.wav audio file and play it until its end: process = 0,_~+(1):soundfile(\"sound[url:{'tango.wav'}]\",2):!,!, The compiled C++ class has the following structure: class mydsp : public dsp { private: Soundfile* fSoundfile0; int iRec0[2]; int fSampleRate; .... with the Soundfile* fSoundfile0; field and its definition : struct Soundfile { void* fBuffers; // will correspond to a double** or float** pointer chosen at runtime int* fLength; // length of each part (so fLength[P] contains the length in frames of part P) int* fSR; // sample rate of each part (so fSR[P] contains the SR of part P) int* fOffset; // offset of each part in the global buffer (so fOffset[P] contains the offset in frames of part P) int fChannels; // max number of channels of all concatenated files int fParts; // the total number of loaded parts bool fIsDouble; // keep the sample format (float or double) }; The following buildUserInterface method in generated, containing a addSoundfile method called with the appropriate parameters extracted from the soundfile(\"sound[url:{'tango.wav'}]\",2) piece of DSP code, to be used to load the tango.wav audio file and prepare the fSoundfile0 field: virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"tp0\"); ui_interface->addSoundfile(\"sound\", \"{'tango.wav'}\", &fSoundfile0); ui_interface->closeBox(); } The specialized SoundUI architecture file is then used to load the required soundfiles at DSP init time, by using a SoundfileReader object. It only implements the addSoundfile method which will load all needed audio files, create and fill the fSoundfile0 object. Different concrete implementations are already written, either using libsndfile (with the LibsndfileReader.h file), or JUCE (with the JuceReader file). A new audio file loader can be written by subclassing the SoundfileReader class. A pure memory reader could be implemented for instance to load wavetables to be used as the soundfile URL list. Look at the template MemoryReader class, as an example to be completed, with the following methods to be implemented: /** * Check the availability of a sound resource. * * @param path_name - the name of the file, or sound resource identified this way * * @return true if the sound resource is available, false otherwise. */ virtual bool checkFile(const std::string& path_name); /** * Get the channels and length values of the given sound resource. 
* * @param path_name - the name of the file, or sound resource identified this way * @param channels - the channels value to be filled with the sound resource * number of channels * @param length - the length value to be filled with the sound resource length in frames * */ virtual void getParamsFile(const std::string& path_name, int& channels, int& length); /** * Read one sound resource and fill the 'soundfile' structure accordingly * * @param path_name - the name of the file, or sound resource identified this way * @param part - the part number to be filled in the soundfile * @param offset - the offset value to be incremented with the actual * sound resource length in frames * @param max_chan - the maximum number of mono channels to fill * */ virtual void readFile(Soundfile* soundfile, const std::string& path_name, int part, int& offset, int max_chan); Another example to look at is WaveReader . The SoundUI architecture is then used the following way: mydsp DSP; // Here using a compiled time chosen SoundfileReader SoundUI* sound_interface = new SoundUI(); DSP.buildUserInterface(sound_interface); ... run the DSP ... // Finally deallocate the sound_interface and associated Soundfile resources delete sound_interface; The SoundfileReader object can be dynamically choosen by using an alternate version of the SoundUI constructor, possibly choosing the sample format to be double when the DSP code is compiled with the -double option: mydsp DSP; // Here using a dynamically chosen custom MyMemoryReader SoundfileReader* sound_reader = new MyMemoryReader(...); SoundUI* sound_interface = new SoundUI(\"\", false, sound_reader, true); DSP.buildUserInterface(sound_interface); ... run the DSP ... // Finally deallocate the sound_interface and associated Soundfile resources delete sound_interface;","title":"Developing a New Soundfile Loader"},{"location":"manual/architectures/#other-languages-than-c","text":"Most of the architecture files have been developed in C++ over the years. Thus they are ready to be used with the C++ backend and the one that generate C++ wrapped modules (like the LLVM, Cmajor and Interpreter backends). For other languages, specific architecture files have to be written. Here is the current situation for other backends: the C backend needs additional CGlue.h and CInterface.h files, with the minimal-c file as a simple console mode example using them the Rust backend can be used with the minimal-rs architecture, the more complex JACK jack.rs used in faust2jackrust script, or the PortAudio portaudio.rs used in faust2portaudiorust script the experimental Dlang backend can be used with the minimal.d or the dplug.d to generate DPlug plugins with the faust2dplug script. the Julia backend can be used with the minimal.jl architecture or the portaudio.jl used in faust2portaudiojulia script.","title":"Other Languages Than C++"},{"location":"manual/architectures/#the-faust2xx-scripts","text":"","title":"The faust2xx Scripts"},{"location":"manual/architectures/#using-faust2xx-scripts","text":"The faust2xx scripts finally combine different architecture files to generate a ready-to-use application or plugin, etc... from a Faust DSP program. They typically combine the generated DSP with an UI architecture file and an audio architecture file. Most of the also have addition options like -midi , -nvoices , -effect or -soundfile to generate polyphonic instruments with or without effects, or audio file support. 
Look at the following page for a more complete description.","title":"Using faust2xx Scripts"},{"location":"manual/architectures/#developing-a-faust2xx-script","text":"The faust2xx scripts are mostly written in bash (but any scripting language can be used) and aim to produce a ready-to-use application, plugin, etc. from a DSP program. A faust2minimal template script using the C++ backend can be used to start the process. The helper scripts faustpath , faustoptflags , and usage.sh can be used to set up common variables: # Define some common paths . faustpath # Define compilation flags . faustoptflags # Helper file to build the 'help' option . usage.sh CXXFLAGS+=\" $MYGCCFLAGS\" # So that additional CXXFLAGS can be used # The architecture file name ARCHFILE=$FAUSTARCH/minimal.cpp # Global variables OPTIONS=\"\" FILES=\"\" The script arguments then have to be analysed: compiler options are kept in the OPTIONS variable and all DSP files in the FILES one: #------------------------------------------------------------------- # dispatch command arguments #------------------------------------------------------------------- while [ $1 ] do p=$1 if [ $p = \"-help\" ] || [ $p = \"-h\" ]; then usage faust2minimal \"[options] [Faust options] \" exit fi echo \"dispatch command arguments\" if [ ${p:0:1} = \"-\" ]; then OPTIONS=\"$OPTIONS $p\" elif [[ -f \"$p\" ]] && [ ${p: -4} == \".dsp\" ]; then FILES=\"$FILES $p\" else OPTIONS=\"$OPTIONS $p\" fi shift done Each DSP file is first compiled to C++ using the faust -a command and the appropriate architecture file, then to the final executable program, here using the C++ compiler: #------------------------------------------------------------------- # compile the *.dsp files #------------------------------------------------------------------- for f in $FILES; do # compile the DSP to c++ using the architecture file echo \"compile the DSP to c++ using the architecture file\" faust -i -a $ARCHFILE $OPTIONS \"$f\" -o \"${f%.dsp}.cpp\"|| exit # compile c++ to binary echo \"compile c++ to binary\" ( $CXX $CXXFLAGS \"${f%.dsp}.cpp\" -o \"${f%.dsp}\" ) > /dev/null || exit # remove temporary files rm -f \"${f%.dsp}.cpp\" # collect binary file name for FaustWorks BINARIES=\"$BINARIES${f%.dsp};\" done echo $BINARIES The existing faust2xx scripts can be used as examples.","title":"Developing a faust2xx Script"},{"location":"manual/architectures/#the-faust2api-model","text":"This model, combining the generated DSP with the audio and UI architecture components, is very convenient to automatically produce ready-to-use standalone applications or plugins, since the controller part (GUI, MIDI or OSC...) is directly compiled and deployed. In some cases, developers prefer to control the DSP by creating a completely new GUI (using a toolkit not supported in the standard architecture files), or even without any GUI and using another control layer. A model that only combines the generated DSP with an audio architecture file to produce an audio engine has been developed (thus gluing the blue and red parts of the three color model explained at the beginning). A generic template class DspFaust has been written in the DspFaust.h and DspFaust.cpp files. This code contains conditional compilation sections to add and initialize the appropriate audio driver (written as a subclass of the previously described base audio class), and can produce audio generators , effects , or fully MIDI and sensor controllable polyphonic instruments .
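As a rough sketch of how such an engine is typically driven from application code (the start , stop and setParamValue methods are described just below; the constructor arguments and the "/Synth/freq" parameter path are assumptions made for the example, not part of a specific generated API):

```cpp
// Sketch only: driving a faust2api-style DspFaust engine from C++.
// Assumptions: the constructor takes (sample rate, buffer size) and the
// parameter path "/Synth/freq" exists in the DSP being controlled.
#include "DspFaust.h"
#include <chrono>
#include <thread>

int main()
{
    DspFaust dsp(44100, 512);                    // create the audio engine
    dsp.start();                                 // start audio processing
    dsp.setParamValue("/Synth/freq", 440.0f);    // set a control by its path
    std::this_thread::sleep_for(std::chrono::seconds(2));
    dsp.stop();                                  // stop audio processing
    return 0;
}
```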
The resulting audio engine contains start and stop methods to control audio processing. It also provides a set of functions like getParamsCount, setParamValue, getParamValue etc. to access all parameters (or the additional setVoiceParamValue method to access a single voice in the polyphonic case), and lets the developer add their own GUI or any kind of controller. Look at the faust2api script, which uses the previously described architecture files, and provides a tool to easily generate custom APIs based on one or several Faust objects.","title":"The faust2api Model"},{"location":"manual/architectures/#using-the-inj-option-with-faust2xx-scripts","text":"The compiler -inj option allows to inject a pre-existing C++ file (instead of compiling a dsp file) into the architecture files machinery. Assuming that the C++ file implements a subclass of the base dsp class, the faust2xx scripts can be used to produce a ready-to-use application or plugin that can take advantage of all the already existing UI and audio architectures. Two examples of use are presented next.","title":"Using the -inj Option With faust2xx Scripts"},{"location":"manual/architectures/#using-the-template-llvmcpp-architecture","text":"The first one demonstrates how faust2xx scripts can become more dynamic by loading and compiling an arbitrary DSP at runtime. This is done using the template-llvm.cpp architecture file, which uses the libfaust library and the LLVM backend to dynamically compile a foo.dsp file. So instead of producing a static binary based on a given DSP, the resulting program will be able to load and compile a DSP at runtime. This template-llvm.cpp can be used with the -inj option in faust2xx tools like: faust2cagtk -inj template-llvm.cpp faust2cagtk-llvm.dsp (a dummy DSP) to generate a monophonic faust2cagtk-llvm application, ready to be used to load and compile a DSP, and run it with the CoreAudio audio layer and GTK as the GUI framework. Then faust2cagtk-llvm will ask for a DSP to compile: ./faust2cagtk-llvm A generic polyphonic (8 voices) and MIDI controllable version can be compiled using: faust2cagtk -inj template-llvm.cpp -midi -nvoices 8 faust2cagtk-llvm.dsp (a dummy DSP) Note that the resulting binary keeps its own control options, like: ./faust2cagtk-llvm -h ./faust2cagtk-llvm [--frequency ] [--buffer ] [--nvoices ] [--control <0/1>] [--group <0/1>] [--virtual-midi <0/1>] So now ./faust2cagtk-llvm --nvoices 16 starts the program with 16 voices. The technique has currently been tested with the faust2cagtk , faust2jack , faust2csvplot , and faust2plot tools.","title":"Using the template-llvm.cpp architecture"},{"location":"manual/architectures/#second-use-case-computing-the-spectrogram-of-a-set-of-audio-files","text":"Here is a second use case where some external C++ code is used to compute the spectrogram of a set of audio files (which is something that cannot be simply done with the current version of the Faust language) and output the spectrogram as an audio signal. A nentry controller will be used to select the currently playing spectrogram.
The Faust compiler will be used to generate a C++ class which is going to be manually edited and enriched with additional code.","title":"Second use-case computing the spectrogram of a set of audio files"},{"location":"manual/architectures/#writting-the-dsp-code","text":"First a fake DSP program spectral.dsp using the soundfile primitive loading two audio files and a nentry control is written: sf = soundfile(\"sound[url:{'sound1.wav';'sound2.wav'}]\",2); process = (hslider(\"Spectro\", 0, 0, 1, 1),0) : sf : !,!,_,_; The point of explicitly using soundfile primitive and a nentry control is to generate a C++ file with a prefilled DSP structure (containing the fSoundfile0 and fHslider0 fields) and code inside the buildUserInterface method. Compiling it manually with the following command: faust spectral.dsp -cn spectral -o spectral.cpp produces the following C++ code containing the spectral class: class spectral : public dsp { private: Soundfile* fSoundfile0; FAUSTFLOAT fHslider0; int fSampleRate; public: ... virtual int getNumInputs() { return 0; } virtual int getNumOutputs() { return 2; } ... virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"spectral\"); ui_interface->addHorizontalSlider(\"Spectrogram\", &fHslider0, 0.0f, 0.0f, 1.0f, 1.0f); ui_interface->addSoundfile(\"sound\", \"{'sound1.wav';'sound2.wav';}\", &fSoundfile0); ui_interface->closeBox(); } virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { int iSlow0 = int(float(fHslider0)); .... } };","title":"Writting the DSP code"},{"location":"manual/architectures/#customizing-the-c-code","text":"Now the spectral class can be manually edited and completed with additional code, to compute the two audio files spectrograms in buildUserInterface , and play them in compute . a new line Spectrogram fSpectro[2]; is added in the DSP structure a createSpectrogram(fSoundfile0, fSpectro); function is added in buildUserInterface and used to compute and fill the two spectrograms, by reading the two loaded audio files in fSoundfile0 part of the generated code in compute is removed and replaced by new code to play one of spectrograms (selected with the fHslider0 control in the GUI) using a playSpectrogram(fSpectro, count, iSlow0, outputs); function: class spectral : public dsp { private: Soundfile* fSoundfile0; FAUSTFLOAT fHslider0; int fSampleRate; Spectrogram fSpectro[2]; public: ... virtual int getNumInputs() { return 0; } virtual int getNumOutputs() { return 2; } ... 
virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"spectral\"); ui_interface->addHorizontalSlider(\"Spectro\", &fHslider0, 0.0f, 0.0f, 1.0f, 1.0f); ui_interface->addSoundfile(\"sound\", \"{'sound1.wav';'sound2.wav';}\", &fSoundfile0); // Read 'fSoundfile0' and fill 'fSpectro' createSpectrogram(fSoundfile0, fSpectro); ui_interface->closeBox(); } virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { int iSlow0 = int(float(fHslider0)); // Play 'fSpectro' indexed by 'iSlow0' by writting 'count' samples in 'outputs' playSpectrogram(fSpectro, count, iSlow0, outputs); } }; Here we assume that createSpectrogram and playSpectrogram functions are defined elsewhere and ready to be compiled.","title":"Customizing the C++ code"},{"location":"manual/architectures/#deploying-it-as-a-maxmsp-external-using-the-faust2max6-script","text":"The completed spectral.cpp file is now ready to be deployed as a Max/MSP external using the faust2max6 script and the -inj option with the following line: faust2max6 -inj spectral.cpp -soundfile spectral.dsp The two needed sound1.wav and sound2.wav audio files are embedded in the generated external, loaded at init time (since the buildUserInterface method is automatically called), and the manually added C++ code will be executed to compute the spectrograms and play them. Finally by respecting the naming coherency for the fake spectral.dsp DSP program, the generated spectral.cpp C++ file, the automatically generated spectral.maxpat Max/MSP patch will be able to build the GUI with a ready-to-use slider.","title":"Deploying it as a Max/MSP External Using the faust2max6 Script"},{"location":"manual/architectures/#additional-ressources","text":"Several external projects are providing tools to arrange the way Faust source code is generated or used, in different languages.","title":"Additional Ressources"},{"location":"manual/architectures/#preprocessing-tools","text":"","title":"Preprocessing tools"},{"location":"manual/architectures/#fpp","text":"fpp is a standalone Perl script with no dependencies which allows ANY C/C++ code in a Faust .dsp file as long as you are targeting C/C++ in scalar mode.","title":"fpp"},{"location":"manual/architectures/#c-tools","text":"Using and adapting the dsp/UI/audio model in a more sophisticated way, or integrating Faust generated C++ classes in others frameworks (like JUCE).","title":"C++ tools"},{"location":"manual/architectures/#faust2hpp","text":"Convert Faust code to a header-only standalone C++ library. A collection of header files is generated as the output. A class is provided from which a DSP object can be built with methods in the style of JUCE DSP objects.","title":"faust2hpp"},{"location":"manual/architectures/#faustpp","text":"A post-processor for Faust, which allows to generate with more flexibility. This is a source transformation tool based on the Faust compiler. It permits to arrange the way how Faust source is generated with greater flexibility.","title":"faustpp"},{"location":"manual/architectures/#cookiecutter-dpf-faust","text":"A cookiecutter project template for DISTRHO plugin framework audio effect plugins using Faust for the implementation of the DSP pipeline.","title":"cookiecutter-dpf-faust"},{"location":"manual/architectures/#faustmd","text":"Static metadata generator for Faust/C++. This program builds the metadata for a Faust DSP ahead of time, rather than dynamically. 
The result is a block of C++ code which can be appended to the code generation.","title":"faustmd"},{"location":"manual/architectures/#faustcppconverter","text":"Eyal Amir tool to facilitate the use of Faust generated C++ code in JUCE projects.","title":"FaustCPPConverter"},{"location":"manual/architectures/#josmodules-and-josm_faust","text":"Julius Smith projects to facilitate the use of Faust generated C++ code in JUCE projects.","title":"JOSModules and josm_faust"},{"location":"manual/architectures/#arduino-tools","text":"An alternative way to use the ESP32 board with Faust, possibly easier and more versatile than the examples mentioned on the esp32 tutorial .","title":"Arduino tools"},{"location":"manual/architectures/#cmajor-tools","text":"","title":"Cmajor tools"},{"location":"manual/architectures/#using-faust-in-cmajor","text":"A tutorial to show how Faust can be used with Cmajor , a C like procedural high-performance language especially designed for audio processing, and with dynamic JIT based compilation.","title":"Using Faust in Cmajor"},{"location":"manual/architectures/#rnbo-tools","text":"","title":"RNBO tools"},{"location":"manual/architectures/#using-faust-in-rnbo-with-codebox","text":"A tutorial to show how Faust can be used with RNBO , a library and toolchain that can take Max-like patches, export them as portable code, and directly compile that code to targets like a VST, a Max External, or a Raspberry Pi.","title":"Using Faust in RNBO with codebox~"},{"location":"manual/architectures/#dlang-tools","text":"","title":"DLang tools"},{"location":"manual/architectures/#faust-2-dplug-guide","text":"Explains how to use Faust in a Dplug project.","title":"Faust 2 Dplug Guide"},{"location":"manual/architectures/#dplug-faust-example","text":"This is an example plugin using Dplug with a Faust backend. It is a stereo reverb plugin using the Freeverb demo from the Faust library.","title":"Dplug Faust Example"},{"location":"manual/architectures/#julia-tools","text":"","title":"Julia tools"},{"location":"manual/architectures/#faustjl","text":"Julia wrapper for the Faust compiler. Uses the Faust LLVM C API.","title":"Faust.jl"},{"location":"manual/architectures/#using-faust-in-julia","text":"A tutorial to show how Faust can be used with Julia , a high-level, general-purpose dynamic programming language with features well suited for numerical analysis and computational science.","title":"Using Faust in Julia"},{"location":"manual/architectures/#python-tools","text":"","title":"Python tools"},{"location":"manual/architectures/#faustpy","text":"FAUSTPy is a Python wrapper for the FAUST DSP language. It is implemented using the CFFI and hence creates the wrapper dynamically at run-time. A updated version of the project is available on this fork .","title":"FAUSTPy"},{"location":"manual/architectures/#faust-ctypes","text":"A port of Marc Joliet's FaustPy from CFFI to Ctypes. Faust-Ctypes documentation is available online .","title":"Faust Ctypes"},{"location":"manual/architectures/#an-scons-tool-for-faust","text":"This is an SCons tool for compiling FAUST programs. It adds various builders to your construction environment: Faust, FaustXML, FaustSVG, FaustSC, and FaustHaskell. Their behaviour can be modified by changing various construction variables (see \"Usage\" below).","title":"An SCons Tool for FAUST"},{"location":"manual/architectures/#faustwatch","text":"At the moment there is one tool present, faustwatch.py. 
Faustwatch is a tool that observes a .dsp file used by the dsp language Faust.","title":"Faustwatch"},{"location":"manual/architectures/#faustwidgets","text":"Creates interactive widgets inside jupyter notebooks from Faust dsp files and produces a (customizable) plot.","title":"faustWidgets"},{"location":"manual/architectures/#faust-synth","text":"This is an example project for controlling a synth, programmed and compiled with Faust, through a Python script. The synth runs as a JACK client on Linux systems and the output is automatically recorded by jack_capture.","title":"Faust Synth"},{"location":"manual/architectures/#dawdreamer","text":"DawDreamer is an audio-processing Python framework supporting Faust and Faust's Box API.","title":"DawDreamer"},{"location":"manual/architectures/#ode2dsp","text":"ode2dsp is a Python library for generating ordinary differential equation (ODE) solvers in digital signal processing (DSP) languages. It automates the tedious and error-prone symbolic calculations involved in creating a DSP model of an ODE. Finite difference equations (FDEs) are rendered to Faust code.","title":"ode2dsp"},{"location":"manual/architectures/#faustlab","text":"A exploratory project to wrap the Faust interpreter for use by python via the following wrapping frameworks using the RtAudio cross-platform audio driver: cyfaust: cython (faust c++ interface) cfaustt: cython (faust c interface) pyfaust: pybind11 (faust c++ interface) nanobind: nanobind (faust c++ interface)","title":"faustlab"},{"location":"manual/architectures/#cyfaust","text":"A cython wrapper of the Faust interpreter and the RtAudio cross-platform audio driver, derived from the faustlab project. The objective is to end up with a minimal, modular, self-contained, cross-platform python3 extension.","title":"cyfaust"},{"location":"manual/architectures/#rust-tools","text":"","title":"Rust tools"},{"location":"manual/architectures/#rust-faust","text":"A better integration of Faust for Rust. It allows to build the DSPs via build.rs and has some abstractions to make it much easier to work with params and meta of the DSPs.","title":"rust-faust"},{"location":"manual/architectures/#faust-egui","text":"Proof of concept of drawing a UI with egui and rust-faust .","title":"Faust egui"},{"location":"manual/architectures/#rustfaustexperiments","text":"Tools to compare C++ and Rust code generated from Faust.","title":"RustFaustExperiments"},{"location":"manual/architectures/#fl-tui","text":"Rust wrapper for the Faust compiler. It uses the libfaust LLVM C API.","title":"fl-tui"},{"location":"manual/architectures/#faustlive-jack-rs","text":"Another Rust wrapper for the Faust compiler, using JACK server for audio. It uses the libfaust LLVM C API.","title":"faustlive-jack-rs"},{"location":"manual/architectures/#lowpass-lr4-faust-nih-plug","text":"A work-in-progress project to integrate Faust generated Rust code with NIH-plug .","title":"lowpass-lr4-faust-nih-plug"},{"location":"manual/architectures/#nih-faust-jit","text":"A plugin to load Faust dsp files and JIT-compile them with LLVM. A simple GUI is provided to select which script to load and where to look for the Faust libraries that this script may import. 
The selected DSP script is saved as part of the plugin state and therefore is saved with your DAW project.","title":"nih-faust-jit"},{"location":"manual/architectures/#webassembly-tools","text":"","title":"WebAssembly tools"},{"location":"manual/architectures/#faust-loader","text":"Import Faust .dsp files, and get back an AudioWorklet or ScriptProcessor node.","title":"faust-loader"},{"location":"manual/architectures/#faust2cpp2wasm","text":"A drop in replacement for the wasm file generated by faust2wasm , but with Faust's C++ backend instead of its wasm backend.","title":"faust2cpp2wasm"},{"location":"manual/architectures/#faust-compiler-microservice","text":"This is a microservice that serves a single purpose: compiling Faust code that is sent to it into WebAssembly that can then be loaded and run natively from within the web synth application. It is written in go because go is supposed to be good for this sort of thing.","title":"Faust Compiler Microservice"},{"location":"manual/architectures/#mosfez-faust","text":"Makes dynamic compilation of Faust on the web a little easier, and has a dev project to run values through dsp offline, and preview dsp live. It's an opinionated version of some parts of Faust for webaudio , mostly just the Web Assembly Faust compiler, wrapped up in a library with additional features.","title":"mosfez-faust"},{"location":"manual/architectures/#faust-wap2-playground","text":"Playground and template for Faust-based web audio experiments.","title":"faust-wap2-playground"},{"location":"manual/architectures/#dart-tools","text":"","title":"Dart tools"},{"location":"manual/architectures/#flutter_faust_ffi","text":"A basic flutter app as a proof of concept utilizing Faust's C API export with Dart's ffi methods to create cross-platform plug-ins.","title":"flutter_faust_ffi"},{"location":"manual/community/","text":"Material from the community Here is a list of additional material contributed by the community of Faust developers or users. Articles, Video and Blog Posts Generate WAMs with FaustIDE Web Audio Modules (WAM) ias a standard for Web Audio plugins and DAWs. The 2.0 version of Web Audio Modules has been released in 2021 as a group effort by a large set of people and since then, multiple plugins and hosts have been published, mostly as open source and free software. The FAUST IDE is a very popular tool for generating WAMs from existing FAUST code (and there are hundreds of source code example available for audio effects, instruments, etc.). You can generate WAMs directly from the command line using the faust2wam script . Mozzi Revisited Mozzi brings your Arduino to life by allowing it to produce much more complex and interesting growls, sweeps and chorusing atmospherics. These sounds can be quickly and easily constructed from familiar synthesis units like oscillators, delays, filters and envelopes and can be programmed with Faust . How to compile HISE and FAUST for Audio Plugin Development This video shows how you can use Faust inside HISE . More info on how to use Faust in HISE can be found on the HISE Faust forum . How to build Mod Duo plugin written in Faust This article shows how you can compile a Faust program to run on Mod Duo . Handling infinity and not-a-number (NaN) values in Faust and C++ audio programming This post by Dario Sanfilippo discusses insights gained over a few years of audio programming to implement robust Faust/C++ software, particularly when dealing with infinity and NaN values. 
Three ways to implement recursive circuits in the Faust language This post by Dario Sanfilippo is about the implementation of not-so-simple recursive circuits in the Faust language. Make LV2 plugins with Faust This post by Nicola Landro is about making LV2 plugins with Faust. Getting started with Faust for SuperCollider This post by Mads Kjeldgaard is about using Faust with SuperCollider . Get Started Audio Programming with the FAUST Language This post by Matt K is about starting audio Programming with Faust. Using Faust on the OWL family of devices This tutorial focus on using Faust and on features that are specific to OWL and the OpenWare firmware. I ported native guitar plugins to JavaScript (in-depth) This post by Konstantine Kutalia is porting Faust coded Kapitonov Plugins Pack in JavaScript. Using Faust with the Arduino Audio Tools Library A blog about using Faust with Arduino Audio Tools. Writing a Slew Limiter in the Faust Language A video about writing a Slew Limiter in the Faust Language by Julius Smith. Make an Eight Channel Mixer in the Faust IDE A video about making an Eight Channel Mixer in the Faust IDE by Julius Smith. Creating VSTs and more using FAUST FAUST is a programming language that enables us to quickly create cross-platform DSP code. We can easily create VST plugins, Max-Externals and more. Its high-quality built-in library of effects and tools enables us to quickly draft high quality audio processing devices. The workshop aims at getting new users started, conveying what FAUST is good for and how to use it effectively to produce high quality results quickly. Various Tools Syntax Highlighting tree-sitter-faust Tree-sitter grammar Faust. Every Faust syntax feature should be supported. The npm package is here . Syntax Highlighting Files This folder contains syntax highlighting files for various editors. Sublime Text syntax Sublime Text syntax file for the Faust programming language. Faust-Mode Major Emacs mode for the Faust programming language, featuring syntax highlighting, automatic indentation and auto-completion. Faustine Faustine allows the edition of Faust code using emacs. faust neovim plugin Plugin to edit Faust code in the hyperextensible Vim-based text editor neowim . Code Generators faust2pdex Generator of Faust wrappers for Pure Data. This software wraps the C++ code generated by Faust into a native external for Pure Data. You obtain a piece of source code that you can use with pd-lib-builder to produce a native binary with the help of make. No knowledge of C++ programming is required. Faust.quark This SuperCollider package makes it possible to create SuperCollider packages (Quarks) containing plugins written in Faust code. With this, you can distribute plugins written in Faust and make it easy for others to install, compile or uninstall them. It also contains some simple interfaces for the faust and faust2sc.py commands used behind the scenes. ode2dsp ode2dsp is a Python library for generating ordinary differential equation (ODE) solvers in digital signal processing (DSP) languages. It automates the tedious and error-prone symbolic calculations involved in creating a DSP model of an ODE. 
Features: Support linear and nonlinear systems of ODEs Support trapezoidal and backward Euler discrete-time integral approximations Approximate solutions of implicit equations using Newton's method Render finite difference equations (FDEs) to Faust code Calculate stability of ODEs and FDEs at an operating point Contributing Feel free to contribute by forking this project and creating a pull request , or by mailing the library description here .","title":"Community"},{"location":"manual/community/#material-from-the-community","text":"Here is a list of additional material contributed by the community of Faust developers or users.","title":"Material from the community"},{"location":"manual/community/#articles-video-and-blog-posts","text":"","title":"Articles, Video and Blog Posts"},{"location":"manual/community/#generate-wams-with-faustide","text":"Web Audio Modules (WAM) ias a standard for Web Audio plugins and DAWs. The 2.0 version of Web Audio Modules has been released in 2021 as a group effort by a large set of people and since then, multiple plugins and hosts have been published, mostly as open source and free software. The FAUST IDE is a very popular tool for generating WAMs from existing FAUST code (and there are hundreds of source code example available for audio effects, instruments, etc.). You can generate WAMs directly from the command line using the faust2wam script .","title":"Generate WAMs with FaustIDE"},{"location":"manual/community/#mozzi-revisited","text":"Mozzi brings your Arduino to life by allowing it to produce much more complex and interesting growls, sweeps and chorusing atmospherics. These sounds can be quickly and easily constructed from familiar synthesis units like oscillators, delays, filters and envelopes and can be programmed with Faust .","title":"Mozzi Revisited"},{"location":"manual/community/#how-to-compile-hise-and-faust-for-audio-plugin-development","text":"This video shows how you can use Faust inside HISE . 
More info on how to use Faust in HISE can be found on the HISE Faust forum .","title":"How to compile HISE and FAUST for Audio Plugin Development"},{"location":"manual/community/#how-to-build-mod-duo-plugin-written-in-faust","text":"This article shows how you can compile a Faust program to run on Mod Duo .","title":"How to build Mod Duo plugin written in Faust"},{"location":"manual/community/#handling-infinity-and-not-a-number-nan-values-in-faust-and-c-audio-programming","text":"This post by Dario Sanfilippo discusses insights gained over a few years of audio programming to implement robust Faust/C++ software, particularly when dealing with infinity and NaN values.","title":"Handling infinity and not-a-number (NaN) values in Faust and C++ audio programming"},{"location":"manual/community/#three-ways-to-implement-recursive-circuits-in-the-faust-language","text":"This post by Dario Sanfilippo is about the implementation of not-so-simple recursive circuits in the Faust language.","title":"Three ways to implement recursive circuits in the Faust language"},{"location":"manual/community/#make-lv2-plugins-with-faust","text":"This post by Nicola Landro is about making LV2 plugins with Faust.","title":"Make LV2 plugins with Faust"},{"location":"manual/community/#getting-started-with-faust-for-supercollider","text":"This post by Mads Kjeldgaard is about using Faust with SuperCollider .","title":"Getting started with Faust for SuperCollider"},{"location":"manual/community/#get-started-audio-programming-with-the-faust-language","text":"This post by Matt K is about starting audio Programming with Faust.","title":"Get Started Audio Programming with the FAUST Language"},{"location":"manual/community/#using-faust-on-the-owl-family-of-devices","text":"This tutorial focus on using Faust and on features that are specific to OWL and the OpenWare firmware.","title":"Using Faust on the OWL family of devices"},{"location":"manual/community/#i-ported-native-guitar-plugins-to-javascript-in-depth","text":"This post by Konstantine Kutalia is porting Faust coded Kapitonov Plugins Pack in JavaScript.","title":"I ported native guitar plugins to JavaScript (in-depth)"},{"location":"manual/community/#using-faust-with-the-arduino-audio-tools-library","text":"A blog about using Faust with Arduino Audio Tools.","title":"Using Faust with the Arduino Audio Tools Library"},{"location":"manual/community/#writing-a-slew-limiter-in-the-faust-language","text":"A video about writing a Slew Limiter in the Faust Language by Julius Smith.","title":"Writing a Slew Limiter in the Faust Language"},{"location":"manual/community/#make-an-eight-channel-mixer-in-the-faust-ide","text":"A video about making an Eight Channel Mixer in the Faust IDE by Julius Smith.","title":"Make an Eight Channel Mixer in the Faust IDE"},{"location":"manual/community/#creating-vsts-and-more-using-faust","text":"FAUST is a programming language that enables us to quickly create cross-platform DSP code. We can easily create VST plugins, Max-Externals and more. Its high-quality built-in library of effects and tools enables us to quickly draft high quality audio processing devices. 
The workshop aims at getting new users started, conveying what FAUST is good for and how to use it effectively to produce high quality results quickly.","title":"Creating VSTs and more using FAUST"},{"location":"manual/community/#various-tools","text":"","title":"Various Tools"},{"location":"manual/community/#syntax-highlighting","text":"","title":"Syntax Highlighting"},{"location":"manual/community/#tree-sitter-faust","text":"Tree-sitter grammar Faust. Every Faust syntax feature should be supported. The npm package is here .","title":"tree-sitter-faust"},{"location":"manual/community/#syntax-highlighting-files","text":"This folder contains syntax highlighting files for various editors.","title":"Syntax Highlighting Files"},{"location":"manual/community/#sublime-text-syntax","text":"Sublime Text syntax file for the Faust programming language.","title":"Sublime Text syntax"},{"location":"manual/community/#faust-mode","text":"Major Emacs mode for the Faust programming language, featuring syntax highlighting, automatic indentation and auto-completion.","title":"Faust-Mode"},{"location":"manual/community/#faustine","text":"Faustine allows the edition of Faust code using emacs.","title":"Faustine"},{"location":"manual/community/#faust-neovim-plugin","text":"Plugin to edit Faust code in the hyperextensible Vim-based text editor neowim .","title":"faust neovim plugin"},{"location":"manual/community/#code-generators","text":"","title":"Code Generators"},{"location":"manual/community/#faust2pdex","text":"Generator of Faust wrappers for Pure Data. This software wraps the C++ code generated by Faust into a native external for Pure Data. You obtain a piece of source code that you can use with pd-lib-builder to produce a native binary with the help of make. No knowledge of C++ programming is required.","title":"faust2pdex"},{"location":"manual/community/#faustquark","text":"This SuperCollider package makes it possible to create SuperCollider packages (Quarks) containing plugins written in Faust code. With this, you can distribute plugins written in Faust and make it easy for others to install, compile or uninstall them. It also contains some simple interfaces for the faust and faust2sc.py commands used behind the scenes.","title":"Faust.quark"},{"location":"manual/community/#ode2dsp","text":"ode2dsp is a Python library for generating ordinary differential equation (ODE) solvers in digital signal processing (DSP) languages. It automates the tedious and error-prone symbolic calculations involved in creating a DSP model of an ODE. Features: Support linear and nonlinear systems of ODEs Support trapezoidal and backward Euler discrete-time integral approximations Approximate solutions of implicit equations using Newton's method Render finite difference equations (FDEs) to Faust code Calculate stability of ODEs and FDEs at an operating point","title":"ode2dsp"},{"location":"manual/community/#contributing","text":"Feel free to contribute by forking this project and creating a pull request , or by mailing the library description here .","title":"Contributing"},{"location":"manual/compiler/","text":"Using the Faust Compiler While the Faust compiler is available in different forms (e.g., Embedded Compiler , etc.), its most \"common\" one is the command line version, which can be invoked using the faust command. It translates a Faust program into code in a wide range of languages (C, C++, Rust, LLVM IR, WebAssembly, etc.). 
The generated code can be wrapped into an optional architecture file allowing to directly produce a fully operational program. A typical call of the Faust command line compiler is: faust [OPTIONS] faustFile.dsp The Faust compiler outputs C++ code by default therefore running: faust noise.dsp will compile noise.dsp and output the corresponding C++ code on the standard output. The option -o allows to reroute the standard output to a file: faust noise.dsp -o noise.cpp The -a option allows us to wrap the generated code into an architecture file: faust -a alsa-gtk.cpp noise.dsp which can either be placed in the same folder as the current Faust file ( noise.dsp here) or be one of the standard Faust architectures. To compile a Faust program into an ALSA application on Linux, the following commands can be used: faust -a alsa-gtk.cpp noise.dsp -o noise.cpp g++ -lpthread -lasound `pkg-config --cflags --libs gtk+-2.0` noise.cpp -o noise Note that a wide range of faust2... compilation scripts can be used to facilitate this operation by taking a Faust file and returning the corresponding binary for your platform. Structure of the Generated Code A Faust DSP C++ class derives from the base dsp class defined as below (a similar structure is used for languages other than C++): class dsp { public: dsp() {} virtual ~dsp() {} // Returns the number of inputs of the Faust program virtual int getNumInputs() = 0; // Returns the number of outputs of the Faust program virtual int getNumOutputs() = 0; // This method can be called to retrieve the UI description of // the Faust program and its associated fields virtual void buildUserInterface(UI* ui_interface) = 0; // Returns the current sampling rate virtual int getSampleRate() = 0; // Init methods virtual void init(int sample_rate) = 0; virtual void instanceInit(int sample_rate) = 0; virtual void instanceConstants(int sample_rate) = 0; virtual void instanceResetUserInterface() = 0; virtual void instanceClear() = 0; // Returns a clone of the instance virtual dsp* clone() = 0; // Retrieve the global metadata of the Faust program virtual void metadata(Meta* m) = 0; // Compute one audio buffer virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) = 0; // Compute a time-stamped audio buffer virtual void compute(double /*date_usec*/, int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { compute(count, inputs, outputs); } }; Methods are filled by the compiler with the actual code. 
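As a minimal illustration of how this API can be driven outside of any architecture file, here is an offline rendering sketch (assumptions: the generated class is available as a "mydsp.h" header together with the dsp and UI base classes, and FAUSTFLOAT falls back to float; a real deployment would use an architecture file and an audio driver):

```cpp
// Sketch: offline rendering of one buffer with a generated DSP class.
#ifndef FAUSTFLOAT
#define FAUSTFLOAT float   // normally defined by the architecture file
#endif
#include <vector>
#include "mydsp.h"         // assumed location of the generated C++ class

int main()
{
    mydsp DSP;
    DSP.init(44100);                               // class + instance initialization

    const int frames = 512;
    std::vector<FAUSTFLOAT> out(frames);
    FAUSTFLOAT* outputs[1] = { out.data() };       // noise.dsp: 0 inputs, 1 output
    DSP.compute(frames, nullptr, outputs);         // render one block of samples
    return 0;
}
```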
In the case of noise.dsp : class mydsp : public dsp { private: int iRec0[2]; int fSampleRate; public: void metadata(Meta* m) { m->declare(\"author\", \"GRAME\"); m->declare(\"filename\", \"noise\"); m->declare(\"name\", \"Noise\"); m->declare(\"noises.lib/name\", \"Faust Noise Generator Library\"); m->declare(\"noises.lib/version\", \"0.0\"); } virtual int getNumInputs() { return 0; } virtual int getNumOutputs() { return 1; } static void classInit(int sample_rate) {} virtual void instanceConstants(int sample_rate) { fSampleRate = sample_rate; } virtual void instanceResetUserInterface() {} virtual void instanceClear() { for (int l0 = 0; (l0 < 2); l0 = (l0 + 1)) { iRec0[l0] = 0; } } virtual void init(int sample_rate) { classInit(sample_rate); instanceInit(sample_rate); } virtual void instanceInit(int sample_rate) { instanceConstants(sample_rate); instanceResetUserInterface(); instanceClear(); } virtual mydsp* clone() { return new mydsp(); } virtual int getSampleRate() { return fSampleRate; } virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"Noise\"); ui_interface->closeBox(); } virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { FAUSTFLOAT* output0 = outputs[0]; for (int i = 0; (i < count); i = (i + 1)) { iRec0[0] = ((1103515245 * iRec0[1]) + 12345); output0[i] = FAUSTFLOAT((4.65661287e-10f * float(iRec0[0]))); iRec0[1] = iRec0[0]; } } }; Several fine-grained initialization methods are available: the instanceInit method calls several additional initialization methods. the instanceConstants method sets the instance constant state. the instanceClear method resets the instance dynamic state (delay lines...). the instanceResetUserInterface method resets all control value to their default state. All of those methods can be used individually on an allocated instance to reset part of its state. The init method combines class static state and instance initialization. When using a single instance, calling init is the simplest way to do \"what is needed.\" When using several instances, all of them can be initialized using instanceInit , with a single call to classInit to initialize the static shared state. The compute method takes the number of frames to process, and inputs and outputs buffers as arrays of separated mono channels. Note that by default inputs and outputs buffers are supposed to be distinct memory zones, so one cannot safely write compute(count, inputs, inputs) . The -inpl compilation option can be used for that, but only in scalar mode for now. By default the generated code process float type samples. This can be changed using the -double option (or even -quad in some backends). The FAUSTFLOAT type used in the compute method is defined in architecture files, and can be float or double , depending of the audio driver layer. Sample adaptation may have to be used between the DSP sample type and the audio driver sample type. Controlling Code Generation Several options of the Faust compiler allow to control the generated C++ code. By default computation is done sample by sample in a single loop. But the compiler can also generate vector and parallel code. Vector Code Generation Modern C++ compilers are able to do autovectorization, that is to use SIMD instructions to speedup the code. These instructions can typically operate in parallel on short vectors of 4 or 8 simple precision floating point numbers, leading to a theoretical speedup of 4 or 8. Autovectorization of C/C++ programs is a difficult task. 
Current compilers are very sensitive to the way the code is arranged. In particular, complex loops can prevent autovectorization. The goal of the vector code generation is to rearrange the C++ code in a way that facilitates the autovectorization job of the C++ compiler. Instead of generating a single sample computation loop, it splits the computation into several simpler loops that communicates by vectors. The vector code generation is activated by passing the --vectorize (or -vec ) option to the Faust compiler. Two additional options are available: --vec-size controls the size of the vector (by default 32 samples) and --loop-variant 0/1 gives some additional control on the loops: --loop-variant 0 generates fixed-size sub-loops with a final sub-loop that processes the last samples, --loop-variant 1 generates sub-loops of variable vector size. To illustrate the difference between scalar code and vector code, let's take the computation of the RMS (Root Mean Square) value of a signal. Here is the Faust code that computes the Root Mean Square of a sliding window of 1000 samples: The corresponding compute() method generated in scalar mode is the following: virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { FAUSTFLOAT* input0 = inputs[0]; FAUSTFLOAT* output0 = outputs[0]; for (int i = 0; (i < count); i = (i + 1)) { int iTemp0 = int((1048576.0f * mydsp_faustpower2_f(float(input0[i])))); iVec0[(IOTA & 1023)] = iTemp0; iRec0[0] = ((iRec0[1] + iTemp0) - iVec0[((IOTA - 1000) & 1023)]); output0[i] = FAUSTFLOAT(std::sqrt((9.53674362e-10f * float(iRec0[0])))); IOTA = (IOTA + 1); iRec0[1] = iRec0[0]; } } The -vec option leads to the following reorganization of the code: virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { fInput0_ptr = inputs[0]; FAUSTFLOAT* fInput0 = 0; fOutput0_ptr = outputs[0]; FAUSTFLOAT* fOutput0 = 0; int iRec0_tmp[36]; int* iRec0 = &iRec0_tmp[4]; int fullcount = count; int index = 0; /* Main loop */ for (index = 0; (index <= (fullcount - 32)); index = (index + 32)) { fInput0 = &fInput0_ptr[index]; fOutput0 = &fOutput0_ptr[index]; int count = 32; /* Vectorizable loop 0 */ /* Pre code */ iYec0_idx = ((iYec0_idx + iYec0_idx_save) & 2047); /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { iYec0[((i + iYec0_idx) & 2047)] = int((1048576.0f mydsp_faustpower2_f(float(fInput0[i])))); } /* Post code */ iYec0_idx_save = count; /* Recursive loop 1 */ /* Pre code */ for (int j0 = 0; (j0 < 4); j0 = (j0 + 1)) { iRec0_tmp[j0] = iRec0_perm[j0]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { iRec0[i] = ((iRec0[(i - 1)] + iYec0[((i + iYec0_idx) & 2047)]) - iYec0[(((i + iYec0_idx) - 1000) & 2047)]); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { iRec0_perm[j] = iRec0_tmp[(count + j)]; } /* Vectorizable loop 2 */ /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fOutput0[i] = FAUSTFLOAT(std::sqrt((9.53674362e-10f * float(iRec0[i])))); } } /* Remaining frames */ if (index < fullcount) { fInput0 = &fInput0_ptr[index]; fOutput0 = &fOutput0_ptr[index]; int count = (fullcount - index); /* Vectorizable loop 0 */ /* Pre code */ iYec0_idx = ((iYec0_idx + iYec0_idx_save) & 2047); /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { iYec0[((i + iYec0_idx) & 2047)] = int((1048576.0f * mydsp_faustpower2_f(float(fInput0[i])))); } /* Post code */ iYec0_idx_save = count; /* Recursive loop 1 */ /* Pre code */ for (int j0 = 0; (j0 < 4); j0 = (j0 + 1)) { iRec0_tmp[j0] = iRec0_perm[j0]; } /* Compute code */ 
for (int i = 0; (i < count); i = (i + 1)) { iRec0[i] = ((iRec0[(i - 1)] + iYec0[((i + iYec0_idx) & 2047)]) - iYec0[(((i + iYec0_idx) - 1000) & 2047)]); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { iRec0_perm[j] = iRec0_tmp[(count + j)]; } /* Vectorizable loop 2 */ /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fOutput0[i] = FAUSTFLOAT(std::sqrt((9.53674362e-10f * float(iRec0[i])))); } } } While the second version of the code is more complex, it turns out to be much easier to vectorize efficiently by the C++ compiler. With the exact same compilation options: -O3 -xHost -ftz -fno-alias -fp-model fast=2 , the scalar version leads to a throughput performance of 129.144 MB/s, while the vector version achieves 359.548 MB/s, a speedup of x2.8 ! The vector code generation is built on top of the scalar code generation (see previous figure). Every time an expression needs to be compiled, the compiler checks if it requires a separate loop or not. Expressions that are shared (and are complex enough) are good candidates to be compiled in a separate loop, as well as recursive expressions and expressions used in delay lines. The result is a directed graph in which each node is a computation loop (see figure below). This graph is stored in the class object and a topological sort is applied to it before printing the code. Parallel Code Generation Parallel code generation is activated by passing either the --openMP (or -omp ) option or the --scheduler (or -sch ) option . It implies that the -vec option as well as the parallel code generation are built on top of the vector code generation. The OpenMP Code Generator The --openMP (or -omp ) option , when given to the Faust compiler, will insert appropriate OpenMP directives into the C++ code. OpenMP is a well established API that is used to explicitly define direct multi-threaded, shared memory parallelism. It is based on a fork-join model of parallelism (see figure above). Parallel regions are delimited by #pragma omp parallel constructs. At the entrance of a parallel region, a group of parallel threads is activated. The code within a parallel region is executed by each thread of the parallel group until the end of the region. #pragma omp parallel { // the code here is executed simultaneously by every thread of the parallel // team ... } In order not to have every thread doing redundantly the exact same work, OpenMP provides specific work-sharing directives. For example #pragma omp sections allows to break the work into separate, discrete sections, each section being executed by one thread: #pragma omp parallel { #pragma omp sections { #pragma omp section { // job 1 } #pragma omp section { // job 2 } ... } ... } Adding Open MP Directives As said before, parallel code generation is built on top of vector code generation. The graph of loops produced by the vector code generator is topologically sorted in order to detect the loops that can be computed in parallel. The first set S_0 (loops L1 , L2 and L3 ) contains the loops that don't depend on any other loops, the set S_1 contains the loops that only depend on loops of S_0 , (that is loops L4 and L5 ), etc.. As all the loops of a given set S_n can be computed in parallel, the compiler will generate a sections construct with a section for each loop. #pragma omp sections { #pragma omp section for (...) { // Loop 1 } #pragma omp section for (...) { // Loop 2 } ... 
} If a given set contains only one loop, then the compiler checks to see if the loop can be parallelized (no recursive dependencies) or not. If it can be parallelized, it generates: #pragma omp for for (...) { // Loop code } otherwise it generates a single construct so that only one thread will execute the loop: #pragma omp single for (...) { // Loop code } Example of Parallel OpenMP Code To illustrate how Faust uses the OpenMP directives, here is a very simple example, two 1-pole filters in parallel connected to an adder: The corresponding compute() method obtained using the -omp option looks like this: virtual void compute(int fullcount, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { float fRec0_tmp[36]; float fRec1_tmp[36]; FAUSTFLOAT* fInput0 = 0; FAUSTFLOAT* fInput1 = 0; FAUSTFLOAT* fOutput0 = 0; float* fRec0 = &fRec0_tmp[4]; float* fRec1 = &fRec1_tmp[4]; fInput0_ptr = inputs[0]; fInput1_ptr = inputs[1]; fOutput0_ptr = outputs[0]; #pragma omp parallel\\ firstprivate(fInput0, fInput1, fOutput0, fRec0, fRec1) { for (int index = 0; (index < fullcount); index = (index + 32)) { fInput0 = &fInput0_ptr[index]; fInput1 = &fInput1_ptr[index]; fOutput0 = &fOutput0_ptr[index]; int count = min(32, (fullcount - index)); #pragma omp sections { #pragma omp section { /* Recursive loop 0 */ /* Pre code */ for (int j0 = 0; (j0 < 4); j0 = (j0 + 1)) { fRec0_tmp[j0] = fRec0_perm[j0]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fRec0[i] = ((0.899999976f * fRec0[(i - 1)]) + (0.100000001f * float(fInput0[i]))); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { fRec0_perm[j] = fRec0_tmp[(count + j)]; } } #pragma omp section { /* Recursive loop 1 */ /* Pre code */ for (int j1 = 0; (j1 < 4); j1 = (j1 + 1)) { fRec1_tmp[j1] = fRec1_perm[j1]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fRec1[i] = ((0.899999976f * fRec1[(i - 1)]) + (0.100000001f * float(fInput1[i]))); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { fRec1_perm[j] = fRec1_tmp[(count + j)]; } } } #pragma omp single { /* Vectorizable loop 2 */ /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fOutput0[i] = FAUSTFLOAT((fRec0[i] + fRec1[i])); } } } } } This code requires some comments: the parallel construct #pragma omp parallel is the fundamental construct that starts parallel execution. The number of parallel threads is generally the number of CPU cores but it can be controlled in several ways. variables external to the parallel region are shared by default. The pragma firstprivate(fRec0,fRec1) indicates that each thread should have its private copy of fRec0 and fRec1 . The reason is that accessing shared variables requires an indirection and is quite inefficient compared to private copies. the top level loop for (int index = 0;...)... is executed by all threads simultaneously. The subsequent work-sharing directives inside the loop will indicate how the work must be shared between threads. please note that an implied barrier exists at the end of each work-sharing region. All threads must have executed the barrier before any of them can continue. the work-sharing directive #pragma omp single indicates that this first section will be executed by only one thread (any of them). the work-sharing directive #pragma omp sections indicates that each corresponding #pragma omp section , here our two filters, will be executed in parallel. the loop construct #pragma omp for specifies that the iterations of the associated loop will be executed in parallel. 
The iterations of the loop are distributed across the parallel threads. For example, if we have two threads, the first one can compute indices between 0 and count/2 and the other one between count/2 and count. finally #pragma omp single indicates that this section will be executed by only one thread (any of them). The Scheduler Code Generator With the --scheduler (or -sch ) option given to the Faust compiler, the computation graph is cut into separate computation loops (called \"tasks\"), and a \"Work Stealing Scheduler\" is used to activate and execute them following their dependencies. A pool of worked threads is created and each thread uses it's own local WSQ (Work Stealing Queue) of tasks. A WSQ is a special queue with a Push operation, a \"private\" LIFO Pop operation and a \"public\" FIFO Pop operation. Starting from a ready task, each thread follows the dependencies, possibly pushing ready sub-tasks into it's own local WSQ. When no more tasks can be activated on a given computation path, the thread pops a task from it's local WSQ. If the WSQ is empty, then the thread is allowed to \"steal\" tasks from other threads WSQ. The local LIFO Pop operation allows better cache locality and the FIFO steal Pop \"larger chuck\" of work to be done. The reason for this is that many work stealing workloads are divide-and-conquer in nature, stealing one of the oldest task implicitly also steals a (potentially) large sub-tree of computations that will unfold once that piece of work is stolen and run. Compared to the OpenMP model ( -omp ) the new model is worse for simple Faust programs and usually starts to behave comparable or sometimes better for \"complex enough\" Faust programs. In any case, since OpenMP does not behave so well with GCC compilers, and is unusable on OSX in real-time contexts, this new scheduler option has it's own value. We plan to improve it adding a \"pipelining\" idea in the future. Example of Parallel Scheduler Code To illustrate how Faust generates the scheduler code, let's reuse the previous example made of two 1-pole filters in parallel connected to an adder: When -sch option is used, the content of the additional architecture/scheduler.h file is inserted in the generated code. It contains code to deal with WSQ and thread management. 
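Conceptually, each worker thread owns a queue offering the three operations described above. The following sketch is a simplified, mutex-based illustration of that interface, not the actual lock-free code found in scheduler.h:

```cpp
// Conceptual Work Stealing Queue: the owner thread pushes ready tasks and pops
// them LIFO (better cache locality); other threads steal FIFO, taking older
// tasks and hence potentially larger sub-trees of work.
#include <deque>
#include <mutex>

class WorkStealingQueue {
    std::deque<int> fTasks;   // task identifiers
    std::mutex fMutex;        // a real implementation would be lock-free
public:
    void push(int task) {
        std::lock_guard<std::mutex> lock(fMutex);
        fTasks.push_back(task);
    }
    bool pop(int& task) {     // "private" LIFO pop, used by the owner thread
        std::lock_guard<std::mutex> lock(fMutex);
        if (fTasks.empty()) return false;
        task = fTasks.back(); fTasks.pop_back();
        return true;
    }
    bool steal(int& task) {   // "public" FIFO pop, used by the other threads
        std::lock_guard<std::mutex> lock(fMutex);
        if (fTasks.empty()) return false;
        task = fTasks.front(); fTasks.pop_front();
        return true;
    }
};
```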
The compute() and computeThread() methods are the following: virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { fInput0_ptr = inputs[0]; fInput1_ptr = inputs[1]; fOutput0_ptr = outputs[0]; fCount = count; fIndex = 0; /* End task has only one input, so will be directly activated */ /* Only initialize tasks with more than one input */ initTask(fScheduler, 4, 2); /* Push ready tasks in each thread WSQ */ initTaskList(fScheduler, -1); signalAll(fScheduler); computeThread(0); syncAll(fScheduler); } void computeThread(int num_thread) { int count = fCount; FAUSTFLOAT* fInput0 = 0; FAUSTFLOAT* fInput1 = 0; FAUSTFLOAT* fOutput0 = 0; int tasknum = 0; while ((fIndex < fCount)) { fInput0 = &fInput0_ptr[fIndex]; fInput1 = &fInput1_ptr[fIndex]; fOutput0 = &fOutput0_ptr[fIndex]; count = min(32, (fCount - fIndex)); switch (tasknum) { case 0: { /* Work Stealing task */ tasknum = getNextTask(fScheduler, num_thread); break; } case 1: { /* Last task */ fIndex = (fIndex + 32); if (fIndex < fCount) { /* End task has only one input, so will be directly activated */ /* Only initialize tasks with more than one input */ initTask(fScheduler, 4, 2); /* Push ready tasks in 'num_thread' WSQ */ initTaskList(fScheduler, num_thread); } tasknum = 0; break; } case 2: { /* Recursive loop 2 */ /* Pre code */ for (int j0 = 0; (j0 < 4); j0 = (j0 + 1)) { fRec0_tmp[j0] = fRec0_perm[j0]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fRec0[i] = ((0.899999976f * fRec0[(i - 1)]) + (0.100000001f * float(fInput0[i]))); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { fRec0_perm[j] = fRec0_tmp[(count + j)]; } /* One output only */ activateOneOutputTask(fScheduler, num_thread, 4, &tasknum); break; } case 3: { /* Recursive loop 3 */ /* Pre code */ for (int j1 = 0; (j1 < 4); j1 = (j1 + 1)) { fRec1_tmp[j1] = fRec1_perm[j1]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fRec1[i] = ((0.899999976f * fRec1[(i - 1)]) + (0.100000001f * float(fInput1[i]))); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { fRec1_perm[j] = fRec1_tmp[(count + j)]; } /* One output only */ activateOneOutputTask(fScheduler, num_thread, 4, &tasknum); break; } case 4: { /* Vectorizable loop 4 */ /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fOutput0[i] = FAUSTFLOAT((fRec0[i] + fRec1[i])); } tasknum = 1; break; } } } }","title":"Using the Compiler"},{"location":"manual/compiler/#using-the-faust-compiler","text":"While the Faust compiler is available in different forms (e.g., Embedded Compiler , etc.), its most \"common\" one is the command line version, which can be invoked using the faust command. It translates a Faust program into code in a wide range of languages (C, C++, Rust, LLVM IR, WebAssembly, etc.). The generated code can be wrapped into an optional architecture file allowing to directly produce a fully operational program. A typical call of the Faust command line compiler is: faust [OPTIONS] faustFile.dsp The Faust compiler outputs C++ code by default therefore running: faust noise.dsp will compile noise.dsp and output the corresponding C++ code on the standard output. The option -o allows to reroute the standard output to a file: faust noise.dsp -o noise.cpp The -a option allows us to wrap the generated code into an architecture file: faust -a alsa-gtk.cpp noise.dsp which can either be placed in the same folder as the current Faust file ( noise.dsp here) or be one of the standard Faust architectures. 
To compile a Faust program into an ALSA application on Linux, the following commands can be used: faust -a alsa-gtk.cpp noise.dsp -o noise.cpp g++ -lpthread -lasound `pkg-config --cflags --libs gtk+-2.0` noise.cpp -o noise Note that a wide range of faust2... compilation scripts can be used to facilitate this operation by taking a Faust file and returning the corresponding binary for your platform.","title":"Using the Faust Compiler"},{"location":"manual/compiler/#structure-of-the-generated-code","text":"A Faust DSP C++ class derives from the base dsp class defined as below (a similar structure is used for languages other than C++): class dsp { public: dsp() {} virtual ~dsp() {} // Returns the number of inputs of the Faust program virtual int getNumInputs() = 0; // Returns the number of outputs of the Faust program virtual int getNumOutputs() = 0; // This method can be called to retrieve the UI description of // the Faust program and its associated fields virtual void buildUserInterface(UI* ui_interface) = 0; // Returns the current sampling rate virtual int getSampleRate() = 0; // Init methods virtual void init(int sample_rate) = 0; virtual void instanceInit(int sample_rate) = 0; virtual void instanceConstants(int sample_rate) = 0; virtual void instanceResetUserInterface() = 0; virtual void instanceClear() = 0; // Returns a clone of the instance virtual dsp* clone() = 0; // Retrieve the global metadata of the Faust program virtual void metadata(Meta* m) = 0; // Compute one audio buffer virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) = 0; // Compute a time-stamped audio buffer virtual void compute(double /*date_usec*/, int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { compute(count, inputs, outputs); } }; Methods are filled by the compiler with the actual code. In the case of noise.dsp : class mydsp : public dsp { private: int iRec0[2]; int fSampleRate; public: void metadata(Meta* m) { m->declare(\"author\", \"GRAME\"); m->declare(\"filename\", \"noise\"); m->declare(\"name\", \"Noise\"); m->declare(\"noises.lib/name\", \"Faust Noise Generator Library\"); m->declare(\"noises.lib/version\", \"0.0\"); } virtual int getNumInputs() { return 0; } virtual int getNumOutputs() { return 1; } static void classInit(int sample_rate) {} virtual void instanceConstants(int sample_rate) { fSampleRate = sample_rate; } virtual void instanceResetUserInterface() {} virtual void instanceClear() { for (int l0 = 0; (l0 < 2); l0 = (l0 + 1)) { iRec0[l0] = 0; } } virtual void init(int sample_rate) { classInit(sample_rate); instanceInit(sample_rate); } virtual void instanceInit(int sample_rate) { instanceConstants(sample_rate); instanceResetUserInterface(); instanceClear(); } virtual mydsp* clone() { return new mydsp(); } virtual int getSampleRate() { return fSampleRate; } virtual void buildUserInterface(UI* ui_interface) { ui_interface->openVerticalBox(\"Noise\"); ui_interface->closeBox(); } virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { FAUSTFLOAT* output0 = outputs[0]; for (int i = 0; (i < count); i = (i + 1)) { iRec0[0] = ((1103515245 * iRec0[1]) + 12345); output0[i] = FAUSTFLOAT((4.65661287e-10f * float(iRec0[0]))); iRec0[1] = iRec0[0]; } } }; Several fine-grained initialization methods are available: the instanceInit method calls several additional initialization methods. the instanceConstants method sets the instance constant state. the instanceClear method resets the instance dynamic state (delay lines...). 
the instanceResetUserInterface method resets all control values to their default state. All of those methods can be used individually on an allocated instance to reset part of its state. The init method combines class static state and instance initialization. When using a single instance, calling init is the simplest way to do \"what is needed.\" When using several instances, all of them can be initialized using instanceInit , with a single call to classInit to initialize the static shared state. The compute method takes the number of frames to process, and inputs and outputs buffers as arrays of separated mono channels. Note that by default inputs and outputs buffers are supposed to be distinct memory zones, so one cannot safely write compute(count, inputs, inputs) . The -inpl compilation option can be used to allow in-place processing, but only in scalar mode for now. By default the generated code processes float type samples. This can be changed using the -double option (or even -quad in some backends). The FAUSTFLOAT type used in the compute method is defined in architecture files, and can be float or double , depending on the audio driver layer. Sample adaptation may then be needed between the DSP sample type and the audio driver sample type.","title":"Structure of the Generated Code"},{"location":"manual/compiler/#controlling-code-generation","text":"Several options of the Faust compiler allow to control the generated C++ code. By default computation is done sample by sample in a single loop. But the compiler can also generate vector and parallel code.","title":"Controlling Code Generation"},{"location":"manual/compiler/#vector-code-generation","text":"Modern C++ compilers are able to do autovectorization, that is to use SIMD instructions to speed up the code. These instructions can typically operate in parallel on short vectors of 4 or 8 single precision floating-point numbers, leading to a theoretical speedup of 4 or 8. Autovectorization of C/C++ programs is a difficult task. Current compilers are very sensitive to the way the code is arranged. In particular, complex loops can prevent autovectorization. The goal of the vector code generation is to rearrange the C++ code in a way that facilitates the autovectorization job of the C++ compiler. Instead of generating a single sample computation loop, it splits the computation into several simpler loops that communicate by vectors. The vector code generation is activated by passing the --vectorize (or -vec ) option to the Faust compiler. Two additional options are available: --vec-size controls the size of the vector (by default 32 samples) and --loop-variant 0/1 gives some additional control on the loops: --loop-variant 0 generates fixed-size sub-loops with a final sub-loop that processes the last samples, --loop-variant 1 generates sub-loops of variable vector size. To illustrate the difference between scalar code and vector code, let's take the computation of the RMS (Root Mean Square) value of a signal.
Here is the Faust code that computes the Root Mean Square of a sliding window of 1000 samples: The corresponding compute() method generated in scalar mode is the following: virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { FAUSTFLOAT* input0 = inputs[0]; FAUSTFLOAT* output0 = outputs[0]; for (int i = 0; (i < count); i = (i + 1)) { int iTemp0 = int((1048576.0f * mydsp_faustpower2_f(float(input0[i])))); iVec0[(IOTA & 1023)] = iTemp0; iRec0[0] = ((iRec0[1] + iTemp0) - iVec0[((IOTA - 1000) & 1023)]); output0[i] = FAUSTFLOAT(std::sqrt((9.53674362e-10f * float(iRec0[0])))); IOTA = (IOTA + 1); iRec0[1] = iRec0[0]; } } The -vec option leads to the following reorganization of the code: virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { fInput0_ptr = inputs[0]; FAUSTFLOAT* fInput0 = 0; fOutput0_ptr = outputs[0]; FAUSTFLOAT* fOutput0 = 0; int iRec0_tmp[36]; int* iRec0 = &iRec0_tmp[4]; int fullcount = count; int index = 0; /* Main loop */ for (index = 0; (index <= (fullcount - 32)); index = (index + 32)) { fInput0 = &fInput0_ptr[index]; fOutput0 = &fOutput0_ptr[index]; int count = 32; /* Vectorizable loop 0 */ /* Pre code */ iYec0_idx = ((iYec0_idx + iYec0_idx_save) & 2047); /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { iYec0[((i + iYec0_idx) & 2047)] = int((1048576.0f * mydsp_faustpower2_f(float(fInput0[i])))); } /* Post code */ iYec0_idx_save = count; /* Recursive loop 1 */ /* Pre code */ for (int j0 = 0; (j0 < 4); j0 = (j0 + 1)) { iRec0_tmp[j0] = iRec0_perm[j0]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { iRec0[i] = ((iRec0[(i - 1)] + iYec0[((i + iYec0_idx) & 2047)]) - iYec0[(((i + iYec0_idx) - 1000) & 2047)]); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { iRec0_perm[j] = iRec0_tmp[(count + j)]; } /* Vectorizable loop 2 */ /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fOutput0[i] = FAUSTFLOAT(std::sqrt((9.53674362e-10f * float(iRec0[i])))); } } /* Remaining frames */ if (index < fullcount) { fInput0 = &fInput0_ptr[index]; fOutput0 = &fOutput0_ptr[index]; int count = (fullcount - index); /* Vectorizable loop 0 */ /* Pre code */ iYec0_idx = ((iYec0_idx + iYec0_idx_save) & 2047); /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { iYec0[((i + iYec0_idx) & 2047)] = int((1048576.0f * mydsp_faustpower2_f(float(fInput0[i])))); } /* Post code */ iYec0_idx_save = count; /* Recursive loop 1 */ /* Pre code */ for (int j0 = 0; (j0 < 4); j0 = (j0 + 1)) { iRec0_tmp[j0] = iRec0_perm[j0]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { iRec0[i] = ((iRec0[(i - 1)] + iYec0[((i + iYec0_idx) & 2047)]) - iYec0[(((i + iYec0_idx) - 1000) & 2047)]); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { iRec0_perm[j] = iRec0_tmp[(count + j)]; } /* Vectorizable loop 2 */ /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fOutput0[i] = FAUSTFLOAT(std::sqrt((9.53674362e-10f * float(iRec0[i])))); } } } While the second version of the code is more complex, it turns out to be much easier for the C++ compiler to vectorize efficiently. With the exact same compilation options: -O3 -xHost -ftz -fno-alias -fp-model fast=2 , the scalar version leads to a throughput performance of 129.144 MB/s, while the vector version achieves 359.548 MB/s, a speedup of x2.8! The vector code generation is built on top of the scalar code generation (see previous figure). Every time an expression needs to be compiled, the compiler checks if it requires a separate loop or not.
Expressions that are shared (and are complex enough) are good candidates to be compiled in a separate loop, as well as recursive expressions and expressions used in delay lines. The result is a directed graph in which each node is a computation loop (see figure below). This graph is stored in the class object and a topological sort is applied to it before printing the code.","title":"Vector Code Generation"},{"location":"manual/compiler/#parallel-code-generation","text":"Parallel code generation is activated by passing either the --openMP (or -omp ) option or the --scheduler (or -sch ) option . It implies the -vec option, as the parallel code generation is built on top of the vector code generation.","title":"Parallel Code Generation"},{"location":"manual/compiler/#the-openmp-code-generator","text":"The --openMP (or -omp ) option , when given to the Faust compiler, will insert appropriate OpenMP directives into the C++ code. OpenMP is a well established API that is used to explicitly define direct multi-threaded, shared memory parallelism. It is based on a fork-join model of parallelism (see figure above). Parallel regions are delimited by #pragma omp parallel constructs. At the entrance of a parallel region, a group of parallel threads is activated. The code within a parallel region is executed by each thread of the parallel group until the end of the region. #pragma omp parallel { // the code here is executed simultaneously by every thread of the parallel // team ... } To avoid having every thread redundantly do the exact same work, OpenMP provides specific work-sharing directives. For example #pragma omp sections allows to break the work into separate, discrete sections, each section being executed by one thread: #pragma omp parallel { #pragma omp sections { #pragma omp section { // job 1 } #pragma omp section { // job 2 } ... } ... }","title":"The OpenMP Code Generator"},{"location":"manual/compiler/#adding-open-mp-directives","text":"As said before, parallel code generation is built on top of vector code generation. The graph of loops produced by the vector code generator is topologically sorted in order to detect the loops that can be computed in parallel. The first set S_0 (loops L1 , L2 and L3 ) contains the loops that don't depend on any other loops, the set S_1 contains the loops that only depend on loops of S_0 (that is loops L4 and L5 ), etc. As all the loops of a given set S_n can be computed in parallel, the compiler will generate a sections construct with a section for each loop. #pragma omp sections { #pragma omp section for (...) { // Loop 1 } #pragma omp section for (...) { // Loop 2 } ... } If a given set contains only one loop, then the compiler checks to see if the loop can be parallelized (no recursive dependencies) or not. If it can be parallelized, it generates: #pragma omp for for (...) { // Loop code } otherwise it generates a single construct so that only one thread will execute the loop: #pragma omp single for (...)
{ // Loop code }","title":"Adding Open MP Directives"},{"location":"manual/compiler/#example-of-parallel-openmp-code","text":"To illustrate how Faust uses the OpenMP directives, here is a very simple example, two 1-pole filters in parallel connected to an adder: The corresponding compute() method obtained using the -omp option looks like this: virtual void compute(int fullcount, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { float fRec0_tmp[36]; float fRec1_tmp[36]; FAUSTFLOAT* fInput0 = 0; FAUSTFLOAT* fInput1 = 0; FAUSTFLOAT* fOutput0 = 0; float* fRec0 = &fRec0_tmp[4]; float* fRec1 = &fRec1_tmp[4]; fInput0_ptr = inputs[0]; fInput1_ptr = inputs[1]; fOutput0_ptr = outputs[0]; #pragma omp parallel\\ firstprivate(fInput0, fInput1, fOutput0, fRec0, fRec1) { for (int index = 0; (index < fullcount); index = (index + 32)) { fInput0 = &fInput0_ptr[index]; fInput1 = &fInput1_ptr[index]; fOutput0 = &fOutput0_ptr[index]; int count = min(32, (fullcount - index)); #pragma omp sections { #pragma omp section { /* Recursive loop 0 */ /* Pre code */ for (int j0 = 0; (j0 < 4); j0 = (j0 + 1)) { fRec0_tmp[j0] = fRec0_perm[j0]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fRec0[i] = ((0.899999976f * fRec0[(i - 1)]) + (0.100000001f * float(fInput0[i]))); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { fRec0_perm[j] = fRec0_tmp[(count + j)]; } } #pragma omp section { /* Recursive loop 1 */ /* Pre code */ for (int j1 = 0; (j1 < 4); j1 = (j1 + 1)) { fRec1_tmp[j1] = fRec1_perm[j1]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fRec1[i] = ((0.899999976f * fRec1[(i - 1)]) + (0.100000001f * float(fInput1[i]))); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { fRec1_perm[j] = fRec1_tmp[(count + j)]; } } } #pragma omp single { /* Vectorizable loop 2 */ /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fOutput0[i] = FAUSTFLOAT((fRec0[i] + fRec1[i])); } } } } } This code requires some comments: the parallel construct #pragma omp parallel is the fundamental construct that starts parallel execution. The number of parallel threads is generally the number of CPU cores but it can be controlled in several ways. variables external to the parallel region are shared by default. The pragma firstprivate(fRec0,fRec1) indicates that each thread should have its private copy of fRec0 and fRec1 . The reason is that accessing shared variables requires an indirection and is quite inefficient compared to private copies. the top level loop for (int index = 0;...)... is executed by all threads simultaneously. The subsequent work-sharing directives inside the loop will indicate how the work must be shared between threads. please note that an implied barrier exists at the end of each work-sharing region. All threads must have executed the barrier before any of them can continue. the work-sharing directive #pragma omp single indicates that this first section will be executed by only one thread (any of them). the work-sharing directive #pragma omp sections indicates that each corresponding #pragma omp section , here our two filters, will be executed in parallel. the loop construct #pragma omp for specifies that the iterations of the associated loop will be executed in parallel. The iterations of the loop are distributed across the parallel threads. For example, if we have two threads, the first one can compute indices between 0 and count/2 and the other one between count/2 and count. 
finally #pragma omp single indicates that this section will be executed by only one thread (any of them).","title":"Example of Parallel OpenMP Code"},{"location":"manual/compiler/#the-scheduler-code-generator","text":"With the --scheduler (or -sch ) option given to the Faust compiler, the computation graph is cut into separate computation loops (called \"tasks\"), and a \"Work Stealing Scheduler\" is used to activate and execute them following their dependencies. A pool of worker threads is created and each thread uses its own local WSQ (Work Stealing Queue) of tasks. A WSQ is a special queue with a Push operation, a \"private\" LIFO Pop operation and a \"public\" FIFO Pop operation. Starting from a ready task, each thread follows the dependencies, possibly pushing ready sub-tasks into its own local WSQ. When no more tasks can be activated on a given computation path, the thread pops a task from its local WSQ. If the WSQ is empty, then the thread is allowed to \"steal\" tasks from other threads' WSQs. The local LIFO Pop operation allows better cache locality, while the FIFO steal Pop takes a \"larger chunk\" of work to be done. The reason for this is that many work stealing workloads are divide-and-conquer in nature: stealing one of the oldest tasks implicitly also steals a (potentially) large sub-tree of computations that will unfold once that piece of work is stolen and run. Compared to the OpenMP model ( -omp ), the new model is worse for simple Faust programs and usually starts to behave comparably or sometimes better for \"complex enough\" Faust programs. In any case, since OpenMP does not behave so well with GCC compilers, and is unusable on OSX in real-time contexts, this new scheduler option has its own value. We plan to improve it by adding a \"pipelining\" idea in the future.","title":"The Scheduler Code Generator"},{"location":"manual/compiler/#example-of-parallel-scheduler-code","text":"To illustrate how Faust generates the scheduler code, let's reuse the previous example made of two 1-pole filters in parallel connected to an adder: When the -sch option is used, the content of the additional architecture/scheduler.h file is inserted in the generated code. It contains code to deal with WSQ and thread management.
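The content of architecture/scheduler.h is not reproduced here. To make the WSQ operations described above more concrete, here is a simplified, illustrative sketch (not the actual Faust scheduler code, which relies on lock-free structures rather than a mutex) of a queue offering the three operations: Push, a private LIFO Pop used by the owner thread, and a public FIFO Pop used by stealing threads: #include <deque> #include <mutex> // Illustrative Work Stealing Queue: the owner thread pushes and pops tasks at the // back (LIFO, better cache locality), other threads steal from the front (FIFO, // older tasks usually represent larger sub-trees of work). class WSQ { std::deque<int> fTasks; // task numbers, as used in the generated 'tasknum' switch std::mutex fMutex; public: void push(int tasknum) { std::lock_guard<std::mutex> lock(fMutex); fTasks.push_back(tasknum); } // private LIFO Pop, called by the owner thread bool popBack(int& tasknum) { std::lock_guard<std::mutex> lock(fMutex); if (fTasks.empty()) return false; tasknum = fTasks.back(); fTasks.pop_back(); return true; } // public FIFO Pop, called by a stealing thread bool steal(int& tasknum) { std::lock_guard<std::mutex> lock(fMutex); if (fTasks.empty()) return false; tasknum = fTasks.front(); fTasks.pop_front(); return true; } };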
The compute() and computeThread() methods are the following: virtual void compute(int count, FAUSTFLOAT** inputs, FAUSTFLOAT** outputs) { fInput0_ptr = inputs[0]; fInput1_ptr = inputs[1]; fOutput0_ptr = outputs[0]; fCount = count; fIndex = 0; /* End task has only one input, so will be directly activated */ /* Only initialize tasks with more than one input */ initTask(fScheduler, 4, 2); /* Push ready tasks in each thread WSQ */ initTaskList(fScheduler, -1); signalAll(fScheduler); computeThread(0); syncAll(fScheduler); } void computeThread(int num_thread) { int count = fCount; FAUSTFLOAT* fInput0 = 0; FAUSTFLOAT* fInput1 = 0; FAUSTFLOAT* fOutput0 = 0; int tasknum = 0; while ((fIndex < fCount)) { fInput0 = &fInput0_ptr[fIndex]; fInput1 = &fInput1_ptr[fIndex]; fOutput0 = &fOutput0_ptr[fIndex]; count = min(32, (fCount - fIndex)); switch (tasknum) { case 0: { /* Work Stealing task */ tasknum = getNextTask(fScheduler, num_thread); break; } case 1: { /* Last task */ fIndex = (fIndex + 32); if (fIndex < fCount) { /* End task has only one input, so will be directly activated */ /* Only initialize tasks with more than one input */ initTask(fScheduler, 4, 2); /* Push ready tasks in 'num_thread' WSQ */ initTaskList(fScheduler, num_thread); } tasknum = 0; break; } case 2: { /* Recursive loop 2 */ /* Pre code */ for (int j0 = 0; (j0 < 4); j0 = (j0 + 1)) { fRec0_tmp[j0] = fRec0_perm[j0]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fRec0[i] = ((0.899999976f * fRec0[(i - 1)]) + (0.100000001f * float(fInput0[i]))); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { fRec0_perm[j] = fRec0_tmp[(count + j)]; } /* One output only */ activateOneOutputTask(fScheduler, num_thread, 4, &tasknum); break; } case 3: { /* Recursive loop 3 */ /* Pre code */ for (int j1 = 0; (j1 < 4); j1 = (j1 + 1)) { fRec1_tmp[j1] = fRec1_perm[j1]; } /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fRec1[i] = ((0.899999976f * fRec1[(i - 1)]) + (0.100000001f * float(fInput1[i]))); } /* Post code */ for (int j = 0; (j < 4); j = (j + 1)) { fRec1_perm[j] = fRec1_tmp[(count + j)]; } /* One output only */ activateOneOutputTask(fScheduler, num_thread, 4, &tasknum); break; } case 4: { /* Vectorizable loop 4 */ /* Compute code */ for (int i = 0; (i < count); i = (i + 1)) { fOutput0[i] = FAUSTFLOAT((fRec0[i] + fRec1[i])); } tasknum = 1; break; } } } }","title":"Example of Parallel Scheduler Code"},{"location":"manual/debugging/","text":"Debugging the Code Looking at the generated code Using the FIR backend The FIR (Faust Imperative Representation) backend can possibly be used to look at a textual version of the intermediate imperative language. Use the make developer target to compile the FIR backend, then use faust -lang fir foo.dsp to compile a given foo.dsp file as a FIR textual output. 
import(\"stdfaust.lib\"); vol = hslider(\"volume [unit:dB]\", 0, -96, 0, 0.1) : ba.db2linear : si.smoo; freq1 = hslider(\"freq1 [unit:Hz]\", 1000, 20, 3000, 1); freq2 = hslider(\"freq2 [unit:Hz]\", 200, 20, 3000, 1); process = vgroup(\"Oscillator\", os.osc(freq1) * vol, os.osc(freq2) * vol); For instance compiling the previous code with the faust -lang fir osc.dsp command will display various statistics, for example the number of operations done in the generated compute method: ======= Compute DSP begin ========== Instructions complexity : Load = 23 Store = 9 Binop = 12 [ { Int(+) = 1 } { Int(<) = 1 } { Real(*) = 3 } { Real(+) = 5 } { Real(-) = 2 } ] Mathop = 2 [ { floorf = 2 } ] Numbers = 8 Declare = 1 Cast = 2 Select = 0 Loop = 1 As well as the DSP structure memory size and layout, and read/write statistics: ======= Object memory footprint ========== Heap size int = 4 bytes Heap size int* = 0 bytes Heap size real = 48 bytes Total heap size = 68 bytes Stack size in compute = 28 bytes ======= Variable access in compute control ========== Field = fSampleRate size = 1 r_count = 0 w_count = 0 Field = fConst1 size = 1 r_count = 1 w_count = 0 Field = fHslider0 size = 1 r_count = 1 w_count = 0 Field = fConst2 size = 1 r_count = 0 w_count = 0 Field = fRec0 size = 2 r_count = 0 w_count = 0 Field = fConst3 size = 1 r_count = 2 w_count = 0 Field = fHslider1 size = 1 r_count = 1 w_count = 0 Field = fRec2 size = 2 r_count = 0 w_count = 0 Field = fHslider2 size = 1 r_count = 1 w_count = 0 Field = fRec3 size = 2 r_count = 0 w_count = 0 ======= Variable access in compute DSP ========== Field = fSampleRate size = 1 r_count = 0 w_count = 0 Field = fConst1 size = 1 r_count = 0 w_count = 0 Field = fHslider0 size = 1 r_count = 0 w_count = 0 Field = fConst2 size = 1 r_count = 1 w_count = 0 Field = fRec0 size = 2 r_count = 4 w_count = 2 Field = fConst3 size = 1 r_count = 0 w_count = 0 Field = fHslider1 size = 1 r_count = 0 w_count = 0 Field = fRec2 size = 2 r_count = 4 w_count = 2 Field = fHslider2 size = 1 r_count = 0 w_count = 0 Field = fRec3 size = 2 r_count = 4 w_count = 2 Those informations can possibly be used to detect abnormal memory consumption. Debugging the DSP Code On a computer, doing a computation that is undefined in mathematics (like val/0 or log(-1) ), and unrepresentable in floating-point arithmetic, will produce a NaN value, which has a special internal representation . Similarly, some computations will exceed the range that is representable with floating-point arithmetics, and are represented with a special INFINITY value, which value depends of the chosen type (like float , double or long double ). After being produced, those values can actually contaminate the following flow of computations (that is Nan + any value = NaN for instance) up to the point of producing incorrect indexes when used in array access, and causing memory access crashes. The Faust compiler gives error messages when the written code is not syntactically or semantically correct, and the interval computation system on signals is supposed to detect possible problematic computations at compile time, and refuse to compile the corresponding DSP code. But the interval calculation is currently quite imperfect , can misbehave, and possibly allow problematic code to be generated. Several strategies have been developed to help programmers better understand their written DSP code, and possibly analyse it, both at compile time and runtime. 
Debugging at compile time The -ct option Using the -ct compilation option allows to check table index range and generate safe table access code. It verifies that the signal range is compatible with the table size, and if needed, generate safe read and write indexes access code, by constraining them to stay in a given [0.. size-1] range. Note that since the signal interval calculation is imperfect, you may see false positives , and unneeded range constraining code might be generated, especially when using recursive signals where the interval calculation system will typically produce [-inf, inf] range, which is not precise enough to correctly describe the real signal range. The -me option Starting with version 2.37.0, mathematical functions which have a finite domain (like sqrt defined for positive or null values, or asin defined for values in the [-1..1] range) are checked at compile time when they actually compute values at that time , and raise an error if the program tries to compute an out-of-domain value. If those functions appear in the generated code, their domain of use can also be checked (using the interval computation system) and the -me option will display warnings if the domain of use is incorrect. Note that again because of the imperfect interval computation system, false positives may appear and should be checked. Warning messages Warning messages do not stop the compilation process, but allow to get useful informations on potential problematic code. The messages can be printed using the -wall compilation option. Mathematical out-of-domain error warning messages are displayed when both -wall and -me options are used. Debugging at runtime The interp-tracer tool The interp-tracer tool runs and instruments the compiled program using the Interpreter backend. Various statistics on the code are collected and displayed while running and/or when closing the application, typically FP_SUBNORMAL , FP_INFINITE and FP_NAN values, or INTEGER_OVERFLOW , CAST_INT_OVERFLOW and DIV_BY_ZERO operations, or LOAD/STORE errors. See the complete documentation and the Advanced debugging with interp-tracer tutorial. The faust2caqt tool On macOS, the faust2caqt script has a -me option to catch math computation exceptions (floating point exceptions and integer div-by-zero or overflow, etc.) at runtime. Developers can possibly use the dsp_me_checker class to decorate a given DSP object with the math computation exception handling code. Fixing the errors These errors must then be corrected by carefully checking signal range, like verifying the min/max values in vslider/hslider/nentry user-interface items. Additional Resources Note that the Faust math library contains the implementation of isnan and isinf functions that may help during development. Handling infinity and not-a-number (NaN) the right way still remains a tricky problem that is not completely handled in the current version of the compiler. Dario Sanfilippo blog post is a very helpful summary of the situation with a lot of practical solutions to write safer DSP code .","title":"Debugging the Code"},{"location":"manual/debugging/#debugging-the-code","text":"","title":"Debugging the Code"},{"location":"manual/debugging/#looking-at-the-generated-code","text":"","title":"Looking at the generated code"},{"location":"manual/debugging/#using-the-fir-backend","text":"The FIR (Faust Imperative Representation) backend can possibly be used to look at a textual version of the intermediate imperative language. 
Use the make developer target to compile the FIR backend, then use faust -lang fir foo.dsp to compile a given foo.dsp file as a FIR textual output. import(\"stdfaust.lib\"); vol = hslider(\"volume [unit:dB]\", 0, -96, 0, 0.1) : ba.db2linear : si.smoo; freq1 = hslider(\"freq1 [unit:Hz]\", 1000, 20, 3000, 1); freq2 = hslider(\"freq2 [unit:Hz]\", 200, 20, 3000, 1); process = vgroup(\"Oscillator\", os.osc(freq1) * vol, os.osc(freq2) * vol); For instance compiling the previous code with the faust -lang fir osc.dsp command will display various statistics, for example the number of operations done in the generated compute method: ======= Compute DSP begin ========== Instructions complexity : Load = 23 Store = 9 Binop = 12 [ { Int(+) = 1 } { Int(<) = 1 } { Real(*) = 3 } { Real(+) = 5 } { Real(-) = 2 } ] Mathop = 2 [ { floorf = 2 } ] Numbers = 8 Declare = 1 Cast = 2 Select = 0 Loop = 1 As well as the DSP structure memory size and layout, and read/write statistics: ======= Object memory footprint ========== Heap size int = 4 bytes Heap size int* = 0 bytes Heap size real = 48 bytes Total heap size = 68 bytes Stack size in compute = 28 bytes ======= Variable access in compute control ========== Field = fSampleRate size = 1 r_count = 0 w_count = 0 Field = fConst1 size = 1 r_count = 1 w_count = 0 Field = fHslider0 size = 1 r_count = 1 w_count = 0 Field = fConst2 size = 1 r_count = 0 w_count = 0 Field = fRec0 size = 2 r_count = 0 w_count = 0 Field = fConst3 size = 1 r_count = 2 w_count = 0 Field = fHslider1 size = 1 r_count = 1 w_count = 0 Field = fRec2 size = 2 r_count = 0 w_count = 0 Field = fHslider2 size = 1 r_count = 1 w_count = 0 Field = fRec3 size = 2 r_count = 0 w_count = 0 ======= Variable access in compute DSP ========== Field = fSampleRate size = 1 r_count = 0 w_count = 0 Field = fConst1 size = 1 r_count = 0 w_count = 0 Field = fHslider0 size = 1 r_count = 0 w_count = 0 Field = fConst2 size = 1 r_count = 1 w_count = 0 Field = fRec0 size = 2 r_count = 4 w_count = 2 Field = fConst3 size = 1 r_count = 0 w_count = 0 Field = fHslider1 size = 1 r_count = 0 w_count = 0 Field = fRec2 size = 2 r_count = 4 w_count = 2 Field = fHslider2 size = 1 r_count = 0 w_count = 0 Field = fRec3 size = 2 r_count = 4 w_count = 2 Those informations can possibly be used to detect abnormal memory consumption.","title":"Using the FIR backend"},{"location":"manual/debugging/#debugging-the-dsp-code","text":"On a computer, doing a computation that is undefined in mathematics (like val/0 or log(-1) ), and unrepresentable in floating-point arithmetic, will produce a NaN value, which has a special internal representation . Similarly, some computations will exceed the range that is representable with floating-point arithmetics, and are represented with a special INFINITY value, which value depends of the chosen type (like float , double or long double ). After being produced, those values can actually contaminate the following flow of computations (that is Nan + any value = NaN for instance) up to the point of producing incorrect indexes when used in array access, and causing memory access crashes. The Faust compiler gives error messages when the written code is not syntactically or semantically correct, and the interval computation system on signals is supposed to detect possible problematic computations at compile time, and refuse to compile the corresponding DSP code. But the interval calculation is currently quite imperfect , can misbehave, and possibly allow problematic code to be generated. 
Several strategies have been developed to help programmers better understand their written DSP code, and possibly analyse it, both at compile time and runtime.","title":"Debugging the DSP Code"},{"location":"manual/debugging/#debugging-at-compile-time","text":"","title":"Debugging at compile time"},{"location":"manual/debugging/#the-ct-option","text":"Using the -ct compilation option allows to check table index range and generate safe table access code. It verifies that the signal range is compatible with the table size, and if needed, generate safe read and write indexes access code, by constraining them to stay in a given [0.. size-1] range. Note that since the signal interval calculation is imperfect, you may see false positives , and unneeded range constraining code might be generated, especially when using recursive signals where the interval calculation system will typically produce [-inf, inf] range, which is not precise enough to correctly describe the real signal range.","title":"The -ct option"},{"location":"manual/debugging/#the-me-option","text":"Starting with version 2.37.0, mathematical functions which have a finite domain (like sqrt defined for positive or null values, or asin defined for values in the [-1..1] range) are checked at compile time when they actually compute values at that time , and raise an error if the program tries to compute an out-of-domain value. If those functions appear in the generated code, their domain of use can also be checked (using the interval computation system) and the -me option will display warnings if the domain of use is incorrect. Note that again because of the imperfect interval computation system, false positives may appear and should be checked.","title":"The -me option"},{"location":"manual/debugging/#warning-messages","text":"Warning messages do not stop the compilation process, but allow to get useful informations on potential problematic code. The messages can be printed using the -wall compilation option. Mathematical out-of-domain error warning messages are displayed when both -wall and -me options are used.","title":"Warning messages"},{"location":"manual/debugging/#debugging-at-runtime","text":"","title":"Debugging at runtime"},{"location":"manual/debugging/#the-interp-tracer-tool","text":"The interp-tracer tool runs and instruments the compiled program using the Interpreter backend. Various statistics on the code are collected and displayed while running and/or when closing the application, typically FP_SUBNORMAL , FP_INFINITE and FP_NAN values, or INTEGER_OVERFLOW , CAST_INT_OVERFLOW and DIV_BY_ZERO operations, or LOAD/STORE errors. See the complete documentation and the Advanced debugging with interp-tracer tutorial.","title":"The interp-tracer tool"},{"location":"manual/debugging/#the-faust2caqt-tool","text":"On macOS, the faust2caqt script has a -me option to catch math computation exceptions (floating point exceptions and integer div-by-zero or overflow, etc.) at runtime. 
Developers can possibly use the dsp_me_checker class to decorate a given DSP object with the math computation exception handling code.","title":"The faust2caqt tool"},{"location":"manual/debugging/#fixing-the-errors","text":"These errors must then be corrected by carefully checking signal range, like verifying the min/max values in vslider/hslider/nentry user-interface items.","title":"Fixing the errors"},{"location":"manual/debugging/#additional-resources","text":"Note that the Faust math library contains the implementation of isnan and isinf functions that may help during development. Handling infinity and not-a-number (NaN) the right way still remains a tricky problem that is not completely handled in the current version of the compiler. Dario Sanfilippo blog post is a very helpful summary of the situation with a lot of practical solutions to write safer DSP code .","title":"Additional Resources"},{"location":"manual/deploying/","text":"Deploying Faust DSP on the Web Using developments done for the Web (WebAssembly backends and libfaust library compiled in WebAssembly with Emscripten ), statically and dynamically Faust generated WebAudio nodes can be easily produced and deployed on the Web. See extended documentation here . Note : this model will soon be deprecated, better use the faustwasm package . Note : the faust2cpp2wasm tool can possibly be used as a drop in replacement for the wasm file generated by faust2wasm , but with Faust's C++ backend instead of its wasm backend. The faustwasm package The FaustWasm library presents a convenient, high-level API that wraps around Faust compiler. This library's interface is primarily designed for TypeScript usage, although it also provides API descriptions and documentation for pure JavaScript. The WebAssembly version of the Faust Compiler, compatible with both Node.js and web browsers, has been compiled using Emscripten 3.1.31. The library offers functionality for compiling Faust DSP code into WebAssembly, enabling its utilization as WebAudio nodes within a standard WebAudio node graph. Moreover, it supports offline rendering scenarios. Furthermore, supplementary tools can be employed for generating SVGs from Faust DSP programs. Exporting for the Web Web targets can be exported from the Faust Editor or Faust IDE using the remote compilation service. Choose Platform = web , then Architecture with one of the following target: wasmjs allows you to export a ready to use Web audio node to be integrated in an application. An example of HTML and JavaScript files demonstrates how the node can be loaded and activated. wasmjs-poly allows you to export a ready to use polyphonic MIDI controllable Web audio node to be integrated in an application. An example of HTML and JavaScript files demonstrates how the node can be loaded and activated. webaudiowasm allows you to export a ready to use Web audio node with a prebuilt GUI, that can be installed as a Progressive Web Application . An example of HTML and JavaScript files demonstrates how the node can be loaded and activated. webaudiowasm-poly allows you to export a ready to use polyphonic MIDI controllable Web audio node with a prebuilt GUI, that can be installed as a Progressive Web Application . An example of HTML and JavaScript files demonstrates how the node can be loaded and activated. pwa allows you to export a ready to use Progressive Web Application with a prebuilt GUI, directly usable in the page, and that can possibly be installed and run on smartphone or tablet using the QR Code. 
pwa-poly allows you to export a ready to use polyphonic MIDI controllable Progressive Web Application with a prebuilt GUI, directly usable in the page, and that can possibly be installed and run on smartphone or tablet using the QR Code. Exporting WAM 2.0 plugins WAM 2.0 plugins can be exported from the Faust Editor or Faust IDE using the remote compilation service. A complete tutorial can be found here . Choose Platform = web , then Architecture with one of the following target: wam2-ts allows you to export a ready to use WAM 2.0 plugin. wam2-poly-ts allows you to export a ready to use polyphonic MIDI controllable WAM 2.0 plugin. wam2-fft-ts allows you to export a ready to use WAM 2.0 plugin using the FFT architecture presented in this paper . The faust-web-component package The faust-web-component package provides two web components for embedding interactive Faust snippets in web pages: displays an editor (using CodeMirror 6 ) with executable, editable Faust code, along with some bells & whistles (controls, block diagram, plots) in a side pane. This component is ideal for demonstrating some code in Faust and allowing the reader to try it out and tweak it themselves without having to leave the page, and can be tested here . just shows the controls and does not allow editing, so it serves simply as a way to embed interactive DSP, and can be tested here . These components are built on top of the faustwasm and faust-ui packages and are released as an npm package .","title":"Deploying on the Web"},{"location":"manual/deploying/#deploying-faust-dsp-on-the-web","text":"Using developments done for the Web (WebAssembly backends and libfaust library compiled in WebAssembly with Emscripten ), statically and dynamically generated Faust WebAudio nodes can be easily produced and deployed on the Web. See extended documentation here . Note : this model will soon be deprecated; it is better to use the faustwasm package . Note : the faust2cpp2wasm tool can possibly be used as a drop-in replacement for the wasm file generated by faust2wasm , but with Faust's C++ backend instead of its wasm backend.","title":"Deploying Faust DSP on the Web"},{"location":"manual/deploying/#the-faustwasm-package","text":"The FaustWasm library presents a convenient, high-level API that wraps around the Faust compiler. This library's interface is primarily designed for TypeScript usage, although it also provides API descriptions and documentation for pure JavaScript. The WebAssembly version of the Faust Compiler, compatible with both Node.js and web browsers, has been compiled using Emscripten 3.1.31. The library offers functionality for compiling Faust DSP code into WebAssembly, enabling its utilization as WebAudio nodes within a standard WebAudio node graph. Moreover, it supports offline rendering scenarios. Furthermore, supplementary tools can be employed for generating SVGs from Faust DSP programs.","title":"The faustwasm package"},{"location":"manual/deploying/#exporting-for-the-web","text":"Web targets can be exported from the Faust Editor or Faust IDE using the remote compilation service. Choose Platform = web , then Architecture with one of the following target: wasmjs allows you to export a ready to use Web audio node to be integrated in an application. An example of HTML and JavaScript files demonstrates how the node can be loaded and activated. wasmjs-poly allows you to export a ready to use polyphonic MIDI controllable Web audio node to be integrated in an application.
An example of HTML and JavaScript files demonstrates how the node can be loaded and activated. webaudiowasm allows you to export a ready to use Web audio node with a prebuilt GUI, that can be installed as a Progressive Web Application . An example of HTML and JavaScript files demonstrates how the node can be loaded and activated. webaudiowasm-poly allows you to export a ready to use polyphonic MIDI controllable Web audio node with a prebuilt GUI, that can be installed as a Progressive Web Application . An example of HTML and JavaScript files demonstrates how the node can be loaded and activated. pwa allows you to export a ready to use Progressive Web Application with a prebuilt GUI, directly usable in the page, and that can possibly be installed and run on smartphone or tablet using the QR Code. pwa-poly allows you to export a ready to use polyphonic MIDI controllable Progressive Web Application with a prebuilt GUI, directly usable in the page, and that can possibly be installed and run on smartphone or tablet using the QR Code.","title":"Exporting for the Web"},{"location":"manual/deploying/#exporting-wam-20-plugins","text":"WAM 2.0 plugin can be exported from the Faust Editor or Faust IDE using the remote compilation service. A complete tutorial can be found here . Choose Platform = web , then Architecture with one of the following target: wam2-ts allows you to export a ready to use WAM 2.0 plugin. wam2-poly-ts allows you to export a ready to use polyphonic MIDI controllable WAM 2.0 plugin. wam2-fft-ts allows you to export a ready to use WAM 2.0 plugin using the FFT architecture presented in this paper .","title":"Exporting WAM 2.0 plugins"},{"location":"manual/deploying/#the-faust-web-component-package","text":"Tthe faust-web-component package provides two web components for embedding interactive Faust snippets in web pages: displays an editor (using CodeMirror 6 ) with executable, editable Faust code, along with some bells & whistles (controls, block diagram, plots) in a side pane. This component is ideal for demonstrating some code in Faust and allowing the reader to try it out and tweak it themselves without having to leave the page, and can been tested here . just shows the controls and does not allow editing, so it serves simply as a way to embed interactive DSP, and can been tested here . These components are built on top of faustwasm and faust-ui packages and is released as a npm package .","title":"The faust-web-component package"},{"location":"manual/embedding/","text":"Embedding the Faust Compiler Using libfaust Dynamic Compilation Chain The Faust compiler uses an intermediate FIR representation (Faust Imperative Representation), which can be translated to several output languages. The FIR language describes the computation performed on the samples in a generic manner. It contains primitives to read and write variables and arrays, do arithmetic operations, and define the necessary control structures ( for and while loops, if structure, etc.). To generate various output languages, several backends have been developed for C, C++, Interpreter, Java, LLVM IR, WebAssembly, etc. The Interpreter, LLVM IR and WebAssembly ones are particularly interesting since they allow the direct compilation of a DSP program into executable code in memory, bypassing the external compiler requirement. 
Using libfaust with the LLVM backend Libfaust with LLVM backend API The complete chain goes from the Faust DSP source code, compiled in LLVM IR using the LLVM backend, to finally produce the executable code using the LLVM JIT. All steps take place in memory, getting rid of the classical file-based approaches. Pointers to executable functions can be retrieved from the resulting LLVM module and the code directly called with the appropriate parameters. Creation API The libfaust library exports the following API: given a Faust source code (as a string or a file), calling the createDSPFactoryFromString or createDSPFactoryFromFile functions runs the compilation chain (Faust + LLVM JIT) and generates the prototype of the class, as a llvm_dsp_factory pointer. This factory actually contains the compiled LLVM IR for the given DSP alternatively the createCPPDSPFactoryFromBoxes allows to create the factory from a box expression built with the box API alternatively the createDSPFactoryFromSignals allows to create the factory from a list of outputs signals built with the signal API the library keeps an internal cache of all allocated factories so that the compilation of the same DSP code -- that is the same source code and the same set of normalized (sorted in a canonical order) compilation options -- will return the same (reference counted) factory pointer next, the createDSPInstance function (corresponding to the new className of C++) instantiates a llvm_dsp pointer to be used through its interface, connected to the audio chain and controller interfaces. When finished, delete can be used to destroy the dsp instance. Note that an instance internally needs to access its associated factory during its entire lifetime. since llvm_dsp is a subclass of the dsp base class, an object of this type can be used with all the available audio and UI classes. In essence, this is like reusing all architecture files already developed for the static C++ class compilation scheme like OSCUI , httpdUI interfaces, etc. deleteDSPFactory has to be explicitly used to properly decrement the reference counter when the factory is not needed anymore, that is when all associated DSP instances have been properly destroyed a unique SHA1 key of the created factory can be obtained using its getSHAKey method Saving/restoring the factory After the DSP factory has been compiled, the application or the plugin running it might need to save it and then restore it. 
To get the internal factory compiled code, several functions are available: writeDSPFactoryToIR : get the DSP factory LLVM IR (in textual format) as a string writeDSPFactoryToIRFile : get the DSP factory LLVM IR (in textual format) and write it to a file writeDSPFactoryToBitcode : get the DSP factory LLVM IR (in binary format) as a string writeDSPFactoryToBitcodeFile : save the DSP factory LLVM IR (in binary format) in a file writeDSPFactoryToMachine : get the DSP factory executable machine code as a string writeDSPFactoryToMachineFile : save the DSP factory executable machine code in a file To re-create a DSP factory from a previously saved code, several functions are available: readDSPFactoryFromIR : create a DSP factory from a string containing the LLVM IR (in textual format) readDSPFactoryFromIRFile : create a DSP factory from a file containing the LLVM IR (in textual format) readDSPFactoryFromBitcode : create a DSP factory from a string containing the LLVM IR (in binary format) readDSPFactoryFromBitcodeFile : create a DSP factory from a file containing the LLVM IR (in binary format) readDSPFactoryFromMachine : create a DSP factory from a string containing the executable machine code readDSPFactoryFromMachineFile : create a DSP factory from a file containing the executable machine code. Typical code example More generally, a typical use of libfaust in C++ could look like: // The Faust code to compile as a string (could be in a file too) string theCode = \"import(\\\"stdfaust.lib\\\"); process = no.noise;\"; // Compiling in memory (createDSPFactoryFromFile could be used alternatively) llvm_dsp_factory* m_factory = createDSPFactoryFromString( \"faust\", theCode, argc, argv, \"\", m_errorString, optimize); // creating the DSP instance for interfacing dsp* m_dsp = m_factory->createDSPInstance(); // Creating a generic UI to interact with the DSP my_ui* m_ui = new MyUI(); // linking the interface to the DSP instance m_dsp->buildUserInterface(m_ui); // Initializing the DSP instance with the SR m_dsp->init(44100); // Hypothetical audio callback, assuming m_input/m_output are previously allocated while (...) { m_dsp->compute(128, m_input, m_output); } // Cleaning // Manually delete the DSP delete m_dsp; delete m_ui; // The factory actually keeps track of all allocated DSP (done in createDSPInstance). // So if not manually deleted, all remaining DSP will be garbaged here. deleteDSPFactory(m_factory); The first step consists in creating a DSP factory from a DSP file (using createDSPFactoryFromFile ) or string (using createDSPFactoryFromString ) with additional parameters given to the compiler. Assuming the compilation works, a factory is returned, to create a DSP instance with the factory createDSPInstance method. Note that the resulting llvm_dsp* pointer type (see faust/dsp/llvm-dsp.h header file) is a subclass of the base dsp class (see faust/dsp/dsp.h header file). Thus it can be used with any UI type to plug a GUI, MIDI or OSC controller on the DSP object, like it would be done with a DSP program compiled to a C++ class (the generated mydsp class is also a subclass of the base dsp class). This is demonstrated with the my_ui* m_ui = new MyUI(); and m_dsp->buildUserInterface(m_ui); lines where the buildUserInterface method is used to connect a controller. Then the DSP object has to be connected to an audio driver to be rendered (see the m_dsp->compute(128, m_input, m_output); block). A more complete C++ example can be found here . A example using the pure C API can be found here . 
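The saving/restoring functions listed earlier can be combined with the creation API to cache a compiled factory on disk, avoiding a full Faust + LLVM compilation at the next run. Here is an illustrative sketch (error handling kept minimal, and the exact prototypes should be checked in the faust/dsp/llvm-dsp.h header): #include <string> #include \"faust/dsp/llvm-dsp.h\" // Sketch: reload a previously saved factory if possible, otherwise compile the // DSP source and save the resulting LLVM bitcode for the next run. static llvm_dsp_factory* getFactory(const std::string& dspFile, const std::string& cacheFile) { std::string error_msg; llvm_dsp_factory* factory = readDSPFactoryFromBitcodeFile(cacheFile, \"\", error_msg, -1); if (factory) return factory; const char* argv[] = { \"-vec\" }; // any set of compilation options factory = createDSPFactoryFromFile(dspFile, 1, argv, \"\", error_msg, -1); if (factory) writeDSPFactoryToBitcodeFile(factory, cacheFile); return factory; } The same pattern applies to the machine code functions ( writeDSPFactoryToMachineFile / readDSPFactoryFromMachineFile ).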
Using libfaust with the Interpreter backend When compiled to embed the Interpreter backend , libfaust can also be used to generate the Faust Bytes Code (FBC) format and interpret it in memory. Libfaust with Interpreter backend API The interpreter backend (described in this paper ) has been first written to allow dynamical compilation on iOS, where Apple does not allow LLVM based JIT compilation to be deployed, but can also be used to develop testing tools . It has been defined as a typed bytecode and a virtual machine to execute it. The FIR language is simple enough to be easily translated in the typed bytecode for an interpreter, generated by a FIR to bytecode compilation pass. The virtual machine then executes the bytecode on a stack based machine. Creation API The interpreter backend API is similar to the LLVM backend API: given a FAUST source code (as a file or a string), calling the createInterpreterDSPFactory function runs the compilation chain (Faust + interpreter backend) and generates the prototype of the class, as an interpreter_dsp_factory pointer. This factory actually contains the compiled bytecode for the given DSP alternatively the createInterpreterDSPFactoryFromBoxes allows to create the factory from a box expression built with the box API alternatively the createInterpreterDSPFactoryFromSignals allows to create the factory from a list of outputs signals built with the signal API the library keeps an internal cache of all allocated factories so that the compilation of the same DSP code -- that is the same source code and the same set of normalized (sorted in a canonical order) compilation options -- will return the same (reference counted) factory pointer next, the createDSPInstance method of the factory class, corresponding to the new className of C++, instantiates an interpreter_dsp pointer, to be used as any regular Faust compiled DSP object, run and controlled through its interface. The instance contains the interpreter virtual machine loaded with the compiled bytecode, to be executed for each method. When finished, delete can be used to destroy the dsp instance. Note that an instance internally needs to access its associated factory during its entire lifetime. since interpreter_dsp is a subclass of the dsp base class, an object of this type can be used with all the available audio and UI classes. In essence, this is like reusing all architecture files already developed for the static C++ class compilation scheme like OSCUI , httpdUI interfaces, etc. deleteInterpreterDSPFactory has to be explicitly used to properly decrement the reference counter when the factory is not needed anymore, that is when all associated DSP instances have been properly destroyed. a unique SHA1 key of the created factory can be obtained using its getSHAKey method Saving/restoring the factory After the DSP factory has been compiled, the application or plugin may want to save/restore it in order to save Faust to interpreter bytecode compilation at next use. 
To get the internal factory bytecode and save it, two functions are available: writeInterpreterDSPFactoryToMachine allows to get the interpreter bytecode as a string writeInterpreterDSPFactoryToMachineFile allows to save the interpreter bytecode in a file To re-create a DSP factory from a previously saved code, two functions are available: readInterpreterDSPFactoryFromMachine allows to create a DSP factory from a string containing the interpreter bytecode readInterpreterDSPFactoryFromMachineFile allows to create a DSP factory from a file containing the interpreter bytecode The complete API is available and documented in the installed faust/dsp/interpreter-dsp.h header. Note that only the scalar compilation mode is supported. A more complete C++ example can be found here . Performances The generated code is obviously much slower than LLVM generated native code. Measurements on various DSPs examples have been done, and the code is between 3 and more than 10 times slower than the LLVM native code. Using libfaust with the WebAssembly backend The libfaust C++ library can be compiled in WebAssembly with Emscripten , and used in the web or NodeJS platforms. A specific page on the subject is available. Additional Functions Some additional functions are available in the libfaust API: Expanding the DSP code . The expandDSPFromString / expandDSPFromFile functions can be used to generate a self-contained DSP source string where all needed librairies have been included. All compilations options are normalized and included as a comment in the expanded string. This is a way to create self-contained version of DSP programs. Using other backends or generating auxiliary files . The generateAuxFilesFromString and generateAuxFilesFromFile functions taking a DSP source string or file can be used: to activate and use other backends (depending of which ones have been compiled in libfaust) to generate like C, C++, or Cmajor code, etc. The argv parameter has to mimic the command line like for instance: -lang cpp -vec -lv 1 to generate a C++ file in vector mode. to generate auxiliary files which can be text files SVG, XML, ps, etc. The argv parameter has to mimic the command line like for instance: -json to generate a JSON file. Sample size adaptation When compiled with the -double option, the generated code internally uses double format for samples, but also expects inputs/outputs buffers to be filled with samples in double. The dsp_sample_adapter decorator class defined in faust/dsp/dsp-adapter.h can be used to adapt the buffers. Deployment The application or plugin using libfaust can embed the library either as a statically linked component (to get a self-contained binary) or provided as a separate component to be loaded dynamically at runtime. The Faust libraries themselves usually have to be bundled separately and can be accessed at runtime using the compiler -I /path/to/libraries option in createDSPFactoryFromString/createDSPFactoryFromFile functions. Additional Resources Some papers and tutorials are available: Comment Embarquer le Compilateur Faust dans Vos Applications ? 
An Overview of the FAUST Developer Ecosystem Using the box API Using the signal API Use Case Examples The dynamic compilation chain has been used in several projects: FaustLive : an integrated IDE for Faust development offering on-the-fly compilation and execution features Faustgen : a generic Faust Max/MSP programmable external object Faustgen : a generic Faust PureData programmable external object The faustgen2~ object is a Faust external for Pd a.k.a. Pure Data, Miller Puckette's interactive multimedia programming environment Faust for Csound : a Csound opcode running the Faust compiler internally LibAudioStream : a framework to manipulate audio ressources through the concept of streams Faust for JUCE : a tool integrating the Faust compiler to JUCE developed by Oliver Larkin and available as part of the pMix2 project An experimental integration of Faust in Antescofo FaucK : the combination of the ChucK Programming Language and Faust libossia is a modern C++, cross-environment distributed object model for creative coding. It is used in Ossia score project Radium is a music editor with a new type of interface, including a Faust audio DSP development environment using libfaust with the LLVM and Interpreter backends Mephisto LV2 is a Just-in-Time Faust compiler embedded in an LV2 plugin, using the C API. gwion-plug is a Faust plugin for the Gwion programming language. FaustGen allows to livecode Faust in SuperCollider. It uses the libfaust LLVM C++ API. FAUSTPy is a Python wrapper for the Faust DSP language. It is implemented using the CFFI and hence creates the wrapper dynamically at run-time. A updated version of the project is available on this fork . Faust.jl is Julia wrapper for the Faust compiler. It uses the libfaust LLVM C API. fl-tui is a Rust wrapper for the Faust compiler. It uses the libfaust LLVM C API. faustlive-jack-rs is another Rust wrapper for the Faust compiler, using JACK server for audio. It uses the libfaust LLVM C API. DawDreamer is an audio-processing Python framework supporting core DAW features. It uses the libfaust LLVM C API. metaSurface64 is a real-time continuous sound transformation control surface that features both its own loop generator for up to 64 voices and a multi-effects FX engine. It uses the libfaust LLVM C++ API. metaFx is a control surface for continuous sound transformations in real time, just like the metaSurface64. Like metaSurface64, it has both its own loop generator and a multi-effects FX engine, but its operation is different, especially for the management of plugin chains and pads. HISE is an open source framework for building sample based virtual instruments combining a highly performant Disk-Streaming Engine, a flexible DSP-Audio Module system and a handy Interface Designer. AMATI is a VST plugin for live-coding effects in the Faust programming language. cyfaust is a cython wrapper of the Faust interpreter and the RtAudio cross-platform audio driver, derived from the faustlab project. The objective is to end up with a minimal, modular, self-contained, cross-platform python3 extension. nih-faust-jit ia a plugin written in Rust to load Faust dsp files and JIT-compile them with LLVM. A simple GUI is provided to select which script to load and where to look for the Faust libraries that this script may import. 
The selected DSP script is saved as part of the plugin state and therefore is saved with your DAW project.","title":"Embedding the Compiler"},{"location":"manual/embedding/#embedding-the-faust-compiler-using-libfaust","text":"","title":"Embedding the Faust Compiler Using libfaust"},{"location":"manual/embedding/#dynamic-compilation-chain","text":"The Faust compiler uses an intermediate FIR representation (Faust Imperative Representation), which can be translated to several output languages. The FIR language describes the computation performed on the samples in a generic manner. It contains primitives to read and write variables and arrays, do arithmetic operations, and define the necessary control structures ( for and while loops, if structure, etc.). To generate various output languages, several backends have been developed for C, C++, Interpreter, Java, LLVM IR, WebAssembly, etc. The Interpreter, LLVM IR and WebAssembly ones are particularly interesting since they allow the direct compilation of a DSP program into executable code in memory, bypassing the external compiler requirement.","title":"Dynamic Compilation Chain"},{"location":"manual/embedding/#using-libfaust-with-the-llvm-backend","text":"","title":"Using libfaust with the LLVM backend"},{"location":"manual/embedding/#libfaust-with-llvm-backend-api","text":"The complete chain goes from the Faust DSP source code, compiled in LLVM IR using the LLVM backend, to finally produce the executable code using the LLVM JIT. All steps take place in memory, getting rid of the classical file-based approaches. Pointers to executable functions can be retrieved from the resulting LLVM module and the code directly called with the appropriate parameters.","title":"Libfaust with LLVM backend API"},{"location":"manual/embedding/#creation-api","text":"The libfaust library exports the following API: given a Faust source code (as a string or a file), calling the createDSPFactoryFromString or createDSPFactoryFromFile functions runs the compilation chain (Faust + LLVM JIT) and generates the prototype of the class, as a llvm_dsp_factory pointer. This factory actually contains the compiled LLVM IR for the given DSP alternatively the createCPPDSPFactoryFromBoxes allows to create the factory from a box expression built with the box API alternatively the createDSPFactoryFromSignals allows to create the factory from a list of outputs signals built with the signal API the library keeps an internal cache of all allocated factories so that the compilation of the same DSP code -- that is the same source code and the same set of normalized (sorted in a canonical order) compilation options -- will return the same (reference counted) factory pointer next, the createDSPInstance function (corresponding to the new className of C++) instantiates a llvm_dsp pointer to be used through its interface, connected to the audio chain and controller interfaces. When finished, delete can be used to destroy the dsp instance. Note that an instance internally needs to access its associated factory during its entire lifetime. since llvm_dsp is a subclass of the dsp base class, an object of this type can be used with all the available audio and UI classes. In essence, this is like reusing all architecture files already developed for the static C++ class compilation scheme like OSCUI , httpdUI interfaces, etc. 
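As a side note on the factory cache mentioned above, the following hedged C++ sketch illustrates that compiling the same source with the same normalized options is expected to return the same reference-counted factory pointer. Names and exact signatures are assumptions to be checked against faust/dsp/llvm-dsp.h:

```cpp
#include <cassert>
#include <string>
#include "faust/dsp/llvm-dsp.h"

void factory_cache_demo()
{
    std::string code = "import(\"stdfaust.lib\"); process = no.noise;";  // hypothetical DSP code
    const char* argv[] = { nullptr };
    std::string err1, err2;

    // Same source code and same (empty) option set: the internal cache should
    // return the same reference-counted factory pointer twice.
    llvm_dsp_factory* f1 = createDSPFactoryFromString("demo", code, 0, argv, "", err1);
    llvm_dsp_factory* f2 = createDSPFactoryFromString("demo", code, 0, argv, "", err2);
    assert(f1 && f2 && f1 == f2);

    // Each createDSPFactoryFromString call increments the reference counter,
    // so each one is balanced by a deleteDSPFactory call.
    deleteDSPFactory(f2);
    deleteDSPFactory(f1);
}
```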
deleteDSPFactory has to be explicitly used to properly decrement the reference counter when the factory is not needed anymore, that is when all associated DSP instances have been properly destroyed a unique SHA1 key of the created factory can be obtained using its getSHAKey method","title":"Creation API"},{"location":"manual/embedding/#savingrestoring-the-factory","text":"After the DSP factory has been compiled, the application or the plugin running it might need to save it and then restore it. To get the internal factory compiled code, several functions are available: writeDSPFactoryToIR : get the DSP factory LLVM IR (in textual format) as a string writeDSPFactoryToIRFile : get the DSP factory LLVM IR (in textual format) and write it to a file writeDSPFactoryToBitcode : get the DSP factory LLVM IR (in binary format) as a string writeDSPFactoryToBitcodeFile : save the DSP factory LLVM IR (in binary format) in a file writeDSPFactoryToMachine : get the DSP factory executable machine code as a string writeDSPFactoryToMachineFile : save the DSP factory executable machine code in a file To re-create a DSP factory from a previously saved code, several functions are available: readDSPFactoryFromIR : create a DSP factory from a string containing the LLVM IR (in textual format) readDSPFactoryFromIRFile : create a DSP factory from a file containing the LLVM IR (in textual format) readDSPFactoryFromBitcode : create a DSP factory from a string containing the LLVM IR (in binary format) readDSPFactoryFromBitcodeFile : create a DSP factory from a file containing the LLVM IR (in binary format) readDSPFactoryFromMachine : create a DSP factory from a string containing the executable machine code readDSPFactoryFromMachineFile : create a DSP factory from a file containing the executable machine code.","title":"Saving/restoring the factory"},{"location":"manual/embedding/#typical-code-example","text":"More generally, a typical use of libfaust in C++ could look like: // The Faust code to compile as a string (could be in a file too) string theCode = \"import(\\\"stdfaust.lib\\\"); process = no.noise;\"; // Compiling in memory (createDSPFactoryFromFile could be used alternatively) llvm_dsp_factory* m_factory = createDSPFactoryFromString( \"faust\", theCode, argc, argv, \"\", m_errorString, optimize); // creating the DSP instance for interfacing dsp* m_dsp = m_factory->createDSPInstance(); // Creating a generic UI to interact with the DSP my_ui* m_ui = new MyUI(); // linking the interface to the DSP instance m_dsp->buildUserInterface(m_ui); // Initializing the DSP instance with the SR m_dsp->init(44100); // Hypothetical audio callback, assuming m_input/m_output are previously allocated while (...) { m_dsp->compute(128, m_input, m_output); } // Cleaning // Manually delete the DSP delete m_dsp; delete m_ui; // The factory actually keeps track of all allocated DSP (done in createDSPInstance). // So if not manually deleted, all remaining DSP will be garbaged here. deleteDSPFactory(m_factory); The first step consists in creating a DSP factory from a DSP file (using createDSPFactoryFromFile ) or string (using createDSPFactoryFromString ) with additional parameters given to the compiler. Assuming the compilation works, a factory is returned, to create a DSP instance with the factory createDSPInstance method. Note that the resulting llvm_dsp* pointer type (see faust/dsp/llvm-dsp.h header file) is a subclass of the base dsp class (see faust/dsp/dsp.h header file). 
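As a complement to the typical example, and using the saving/restoring functions listed earlier, here is a hedged sketch of a cache-to-disk pattern that avoids recompiling the DSP at each run. The function names come from the list above; the exact parameters (in particular the target string) are assumptions to verify in faust/dsp/llvm-dsp.h:

```cpp
#include <string>
#include "faust/dsp/llvm-dsp.h"

// Hedged sketch: cache the compiled factory machine code on disk so that the next
// run can skip the Faust + LLVM compilation step.
llvm_dsp_factory* getOrCompileFactory(const std::string& dspFile, const std::string& cacheFile)
{
    const char* argv[] = { nullptr };
    std::string error_msg;

    // First try to reload a previously saved factory ("" means the current machine target)
    llvm_dsp_factory* factory = readDSPFactoryFromMachineFile(cacheFile, "", error_msg);
    if (factory) return factory;

    // Otherwise compile from the DSP source file and save the machine code for next time
    factory = createDSPFactoryFromFile(dspFile, 0, argv, "", error_msg);
    if (factory) {
        writeDSPFactoryToMachineFile(factory, cacheFile, "");
    }
    return factory;
}
```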
Thus it can be used with any UI type to plug a GUI, MIDI or OSC controller on the DSP object, like it would be done with a DSP program compiled to a C++ class (the generated mydsp class is also a subclass of the base dsp class). This is demonstrated with the my_ui* m_ui = new MyUI(); and m_dsp->buildUserInterface(m_ui); lines where the buildUserInterface method is used to connect a controller. Then the DSP object has to be connected to an audio driver to be rendered (see the m_dsp->compute(128, m_input, m_output); block). A more complete C++ example can be found here . A example using the pure C API can be found here .","title":"Typical code example"},{"location":"manual/embedding/#using-libfaust-with-the-interpreter-backend","text":"When compiled to embed the Interpreter backend , libfaust can also be used to generate the Faust Bytes Code (FBC) format and interpret it in memory.","title":"Using libfaust with the Interpreter backend"},{"location":"manual/embedding/#libfaust-with-interpreter-backend-api","text":"The interpreter backend (described in this paper ) has been first written to allow dynamical compilation on iOS, where Apple does not allow LLVM based JIT compilation to be deployed, but can also be used to develop testing tools . It has been defined as a typed bytecode and a virtual machine to execute it. The FIR language is simple enough to be easily translated in the typed bytecode for an interpreter, generated by a FIR to bytecode compilation pass. The virtual machine then executes the bytecode on a stack based machine.","title":"Libfaust with Interpreter backend API"},{"location":"manual/embedding/#creation-api_1","text":"The interpreter backend API is similar to the LLVM backend API: given a FAUST source code (as a file or a string), calling the createInterpreterDSPFactory function runs the compilation chain (Faust + interpreter backend) and generates the prototype of the class, as an interpreter_dsp_factory pointer. This factory actually contains the compiled bytecode for the given DSP alternatively the createInterpreterDSPFactoryFromBoxes allows to create the factory from a box expression built with the box API alternatively the createInterpreterDSPFactoryFromSignals allows to create the factory from a list of outputs signals built with the signal API the library keeps an internal cache of all allocated factories so that the compilation of the same DSP code -- that is the same source code and the same set of normalized (sorted in a canonical order) compilation options -- will return the same (reference counted) factory pointer next, the createDSPInstance method of the factory class, corresponding to the new className of C++, instantiates an interpreter_dsp pointer, to be used as any regular Faust compiled DSP object, run and controlled through its interface. The instance contains the interpreter virtual machine loaded with the compiled bytecode, to be executed for each method. When finished, delete can be used to destroy the dsp instance. Note that an instance internally needs to access its associated factory during its entire lifetime. since interpreter_dsp is a subclass of the dsp base class, an object of this type can be used with all the available audio and UI classes. In essence, this is like reusing all architecture files already developed for the static C++ class compilation scheme like OSCUI , httpdUI interfaces, etc. 
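To make the parallel with the LLVM example concrete, here is a hedged C++ sketch of the same lifecycle with the interpreter backend. The DSP code is hypothetical and the exact signatures should be checked against the installed faust/dsp/interpreter-dsp.h header:

```cpp
#include <iostream>
#include <string>
#include "faust/dsp/interpreter-dsp.h"

int main()
{
    // Hypothetical DSP program, for illustration only
    std::string code = "import(\"stdfaust.lib\"); process = os.osc(440);";
    const char* argv[] = { nullptr };
    std::string error_msg;

    // Compile to Faust Byte Code (FBC) in memory, no LLVM involved
    interpreter_dsp_factory* factory =
        createInterpreterDSPFactoryFromString("test", code, 0, argv, error_msg);
    if (!factory) {
        std::cerr << "Cannot create factory: " << error_msg << std::endl;
        return 1;
    }

    // The instance embeds the virtual machine loaded with the compiled bytecode
    interpreter_dsp* dsp = factory->createDSPInstance();
    dsp->init(44100);

    // ... connect to audio/UI and call dsp->compute(...) as with any Faust dsp object ...

    delete dsp;                             // destroy the instance first
    deleteInterpreterDSPFactory(factory);   // then release the factory (see below)
    return 0;
}
```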
deleteInterpreterDSPFactory has to be explicitly used to properly decrement the reference counter when the factory is not needed anymore, that is when all associated DSP instances have been properly destroyed. a unique SHA1 key of the created factory can be obtained using its getSHAKey method","title":"Creation API"},{"location":"manual/embedding/#savingrestoring-the-factory_1","text":"After the DSP factory has been compiled, the application or plugin may want to save/restore it in order to save Faust to interpreter bytecode compilation at next use. To get the internal factory bytecode and save it, two functions are available: writeInterpreterDSPFactoryToMachine allows to get the interpreter bytecode as a string writeInterpreterDSPFactoryToMachineFile allows to save the interpreter bytecode in a file To re-create a DSP factory from a previously saved code, two functions are available: readInterpreterDSPFactoryFromMachine allows to create a DSP factory from a string containing the interpreter bytecode readInterpreterDSPFactoryFromMachineFile allows to create a DSP factory from a file containing the interpreter bytecode The complete API is available and documented in the installed faust/dsp/interpreter-dsp.h header. Note that only the scalar compilation mode is supported. A more complete C++ example can be found here .","title":"Saving/restoring the factory"},{"location":"manual/embedding/#performances","text":"The generated code is obviously much slower than LLVM generated native code. Measurements on various DSPs examples have been done, and the code is between 3 and more than 10 times slower than the LLVM native code.","title":"Performances"},{"location":"manual/embedding/#using-libfaust-with-the-webassembly-backend","text":"The libfaust C++ library can be compiled in WebAssembly with Emscripten , and used in the web or NodeJS platforms. A specific page on the subject is available.","title":"Using libfaust with the WebAssembly backend"},{"location":"manual/embedding/#additional-functions","text":"Some additional functions are available in the libfaust API: Expanding the DSP code . The expandDSPFromString / expandDSPFromFile functions can be used to generate a self-contained DSP source string where all needed librairies have been included. All compilations options are normalized and included as a comment in the expanded string. This is a way to create self-contained version of DSP programs. Using other backends or generating auxiliary files . The generateAuxFilesFromString and generateAuxFilesFromFile functions taking a DSP source string or file can be used: to activate and use other backends (depending of which ones have been compiled in libfaust) to generate like C, C++, or Cmajor code, etc. The argv parameter has to mimic the command line like for instance: -lang cpp -vec -lv 1 to generate a C++ file in vector mode. to generate auxiliary files which can be text files SVG, XML, ps, etc. The argv parameter has to mimic the command line like for instance: -json to generate a JSON file.","title":"Additional Functions"},{"location":"manual/embedding/#sample-size-adaptation","text":"When compiled with the -double option, the generated code internally uses double format for samples, but also expects inputs/outputs buffers to be filled with samples in double. 
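The dsp_sample_adapter decorator mentioned just below can be used to bridge a float-based audio callback and a DSP compiled with -double. A minimal hedged sketch follows; the class name comes from the manual, but the template parameter order (internal versus external sample type) is an assumption to check against faust/dsp/dsp-adapter.h:

```cpp
#include "faust/dsp/dsp-adapter.h"

// Hedged sketch: wrap a DSP compiled with '-double' so that it can be fed with
// float buffers coming from the audio driver.
static dsp* adaptToFloatBuffers(dsp* double_dsp)
{
    // Internally computes in double, externally reads/writes float buffers
    return new dsp_sample_adapter<double, float>(double_dsp);
}
```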
The dsp_sample_adapter decorator class defined in faust/dsp/dsp-adapter.h can be used to adapt the buffers.","title":"Sample size adaptation"},{"location":"manual/embedding/#deployment","text":"The application or plugin using libfaust can embed the library either as a statically linked component (to get a self-contained binary) or provided as a separate component to be loaded dynamically at runtime. The Faust libraries themselves usually have to be bundled separately and can be accessed at runtime using the compiler -I /path/to/libraries option in createDSPFactoryFromString/createDSPFactoryFromFile functions.","title":"Deployment"},{"location":"manual/embedding/#additional-resources","text":"Some papers and tutorials are available: Comment Embarquer le Compilateur Faust dans Vos Applications ? An Overview of the FAUST Developer Ecosystem Using the box API Using the signal API","title":"Additional Resources"},{"location":"manual/embedding/#use-case-examples","text":"The dynamic compilation chain has been used in several projects: FaustLive : an integrated IDE for Faust development offering on-the-fly compilation and execution features Faustgen : a generic Faust Max/MSP programmable external object Faustgen : a generic Faust PureData programmable external object The faustgen2~ object is a Faust external for Pd a.k.a. Pure Data, Miller Puckette's interactive multimedia programming environment Faust for Csound : a Csound opcode running the Faust compiler internally LibAudioStream : a framework to manipulate audio ressources through the concept of streams Faust for JUCE : a tool integrating the Faust compiler to JUCE developed by Oliver Larkin and available as part of the pMix2 project An experimental integration of Faust in Antescofo FaucK : the combination of the ChucK Programming Language and Faust libossia is a modern C++, cross-environment distributed object model for creative coding. It is used in Ossia score project Radium is a music editor with a new type of interface, including a Faust audio DSP development environment using libfaust with the LLVM and Interpreter backends Mephisto LV2 is a Just-in-Time Faust compiler embedded in an LV2 plugin, using the C API. gwion-plug is a Faust plugin for the Gwion programming language. FaustGen allows to livecode Faust in SuperCollider. It uses the libfaust LLVM C++ API. FAUSTPy is a Python wrapper for the Faust DSP language. It is implemented using the CFFI and hence creates the wrapper dynamically at run-time. A updated version of the project is available on this fork . Faust.jl is Julia wrapper for the Faust compiler. It uses the libfaust LLVM C API. fl-tui is a Rust wrapper for the Faust compiler. It uses the libfaust LLVM C API. faustlive-jack-rs is another Rust wrapper for the Faust compiler, using JACK server for audio. It uses the libfaust LLVM C API. DawDreamer is an audio-processing Python framework supporting core DAW features. It uses the libfaust LLVM C API. metaSurface64 is a real-time continuous sound transformation control surface that features both its own loop generator for up to 64 voices and a multi-effects FX engine. It uses the libfaust LLVM C++ API. metaFx is a control surface for continuous sound transformations in real time, just like the metaSurface64. Like metaSurface64, it has both its own loop generator and a multi-effects FX engine, but its operation is different, especially for the management of plugin chains and pads. 
HISE is an open source framework for building sample-based virtual instruments combining a highly performant Disk-Streaming Engine, a flexible DSP-Audio Module system and a handy Interface Designer. AMATI is a VST plugin for live-coding effects in the Faust programming language. cyfaust is a Cython wrapper of the Faust interpreter and the RtAudio cross-platform audio driver, derived from the faustlab project. The objective is to end up with a minimal, modular, self-contained, cross-platform python3 extension. nih-faust-jit is a plugin written in Rust to load Faust DSP files and JIT-compile them with LLVM. A simple GUI is provided to select which script to load and where to look for the Faust libraries that this script may import. The selected DSP script is saved as part of the plugin state and therefore is saved with your DAW project.","title":"Use Case Examples"},{"location":"manual/errors/","text":"Error messages Error messages are typically displayed in the form of compiler errors. They occur when the code cannot be successfully compiled, and typically indicate issues such as syntax errors or semantic errors. They can occur at different stages in the compilation process, possibly with the file and line number where the error occurred (when this information can be retrieved), as well as a brief description of the error. The compiler is organized in several stages: starting from the DSP source code, the parser builds an internal memory representation of the source program (typically known as an Abstract Syntax Tree ) made here of primitives in the Box language . A first class of error messages, known as syntax error messages, can occur here, like a missing ; character to end a line, etc. the next step is to evaluate the definition of the process program entry point. This step is basically a \u03bb-calculus interpreter with a strict evaluation. The result is a \u201dflat\u201d block-diagram where everything has been expanded. The resulting block is type annotated to discover its number of inputs and outputs. the \u201dflat\u201d block-diagram is then translated to the Signal language , where signals are conceptually infinite streams of samples or control values. The box language actually implements the Faust Block Diagram Algebra , and not following the connection rules will trigger a second class of error messages, the box connection errors . Other errors can be produced at this stage when parameters for some primitives are not of the correct type. the pattern matching meta language allows block diagram expressions to be created and manipulated algorithmically. So a third class of pattern matching coding errors can occur at this level. signal expressions are optimized, type annotated (to associate an integer or real type with each signal, but also to discover when signals are to be computed: at init time, control-rate or sample-rate...) to produce a so-called normal-form . A fourth class of parameter range errors or typing errors can occur at this level, like using delays with a non-bounded size, etc. signal expressions are then converted into FIR (Faust Imperative Representation), a representation for state-based computation (memory access, arithmetic computations, control flow, etc.), which is finally translated into the target language (like C/C++, LLVM IR, Rust, WebAssembly, etc.). A fifth class of backend errors can occur at this level, like unsupported compilation options for a given backend, etc.
Note that the current error message system is still far from perfect, usually when the origin of the error in the DSP source cannot be properly traced. In this case the file and line number where the error occurred are not displayed, but an internal description of the expression (as a Box or a Signal) is printed. Syntax errors Those errors happen when the language syntax is not respected. Here are some examples. The following program: box1 = 1 box2 = 2; process = box1,box2; will produce the following error message: errors.dsp : 2 : ERROR : syntax error, unexpected IDENT It means that an unexpected identifier has been found at line 2 of the file errors.dsp. Usually, this error is due to the absence of the semicolon ; at the end of the previous line. The following program: t1 = _~(+(1); 2 process = t1 / 2147483647; will produce the following error message: errors.dsp : 1 : ERROR : syntax error, unexpected ENDDEF The parser finds the end of the definition ( ; ) while searching for a closing right parenthesis. The following program: process = ffunction; will produce the following error message: errors.dsp : 1 : ERROR : syntax error, unexpected ENDDEF, expecting LPAR The parser was expecting a left parenthesis. It identified a keyword of the language that requires arguments. The following program: process = +)1); will produce the following error message: errors.dsp : 1 : ERROR : syntax error, unexpected RPAR The wrong parenthesis has been used. The following program: process = <:+; will produce the following error message: errors.dsp : 1 : ERROR : syntax error, unexpected SPLIT The <: split operator is not correctly used, and should have been written process = _<:+; . The following program: process = foo; will produce the following error message: errors.dsp : 1 : ERROR : undefined symbol : foo This happens when an undefined name is used. Box connection errors Diagram expressions express how block expressions can be combined to create new ones. The connection rules are precisely defined for each of them and have to be followed for the program to be correct. Remember the operator priority when writing more complex expressions. The five connection rules A second category of error messages is returned when block expressions are not correctly connected. Parallel connection Combining two blocks A and B in parallel can never produce a box connection error since the two blocks are placed one on top of the other, without connections. The inputs of the resulting block-diagram are the inputs of A and B . The outputs of the resulting block-diagram are the outputs of A and B . Sequential connection error Combining two blocks A and B in sequence will produce a box connection error if outputs(A) != inputs(B) .
So for instance the following program: A = _,_; B = _,_,_; process = A <: B; will produce the following error message: ERROR : split composition A<:B The number of outputs [2] of A must be a divisor of the number of inputs [3] of B Here A = _,_; has 2 outputs while B = _,_,_; has 3 inputs Merge connection error Combining two blocks A and B with the merge composition will produce a box connection error if the number of outputs of A is not a multiple of the number of inputs of B . So for instance the following program: A = _,_; B = _,_,_; process = A :> B; will produce the following error message: ERROR : merge composition A:>B The number of outputs [2] of A must be a multiple of the number of inputs [3] of B Here A = _,_; has 2 outputs while B = _,_,_; has 3 inputs Recursive connection error Combining two blocks A and B with the recursive composition will produce a box connection error if the number of outputs of A is less than the number of inputs of B , or the number of outputs of B is more than the number of inputs of A (that is the following \\mathrm{outputs}(A) \\geq \\mathrm{inputs}(B) and \\mathrm{inputs}(A) \\geq \\mathrm{outputs}(B) connection rule is not respected). So for instance the following program: A = _,_; B = _,_,_; process = A ~ B; will produce the following error message: ERROR : recursive composition A~B The number of outputs [2] of A must be at least the number of inputs [3] of B. The number of inputs [2] of A must be at least the number of outputs [3] of B. Here A = _,_; has 2 inputs and 2 outputs while B = _,_,_; has 3 inputs and 3 outputs Route connection errors More complex routing between blocks can also be described using the route primitive. Two different errors can be produced in case of incorrect coding: process = route(+,8.7,(0,0),(0,1)); will produce the following error message: ERROR : invalid route expression, first two parameters should be blocks producing a value, third parameter a list of input/output pairs : route(+,8.7f,0,0,0,1) And the second one when the parameters are not actually numbers: process = route(9,8.7f,0,0,0,button(\"foo\")); will produce the following error message: ERROR : invalid route expression, parameters should be numbers : route(9,8.7f,0,0,0,button(\"foo\")) Iterative constructions Iterations are analogous to for(...) loops in other languages and provide a convenient way to automate some complex block-diagram constructions. All par , seq , sum , prod expressions have the same form, take an identifier as first parameter, a number of iteration as an integer constant numerical expression as second parameter, then an arbitrary block-diagram as third parameter. The example code: process = par(+, 2, 8); will produce the following syntax error, since the first parameter is not an identifier: filename.dsp : xx : ERROR : syntax error, unexpected ADD, expecting IDENT The example code: process = par(i, +, 8); will produce the following error: filename.dsp : 1 : ERROR : not a constant expression of type : (0->1) : + Pattern matching errors Pattern matching mechanism allows to algorithmically create and manipulate block diagrams expressions. Pattern matching coding errors can occur at this level. 
Multiple symbol definition error This error happens when a symbol is defined several times in the DSP file: ERROR : [file foo.dsp : N] : multiple definitions of symbol 'foo' Since computation are done at compile time and the pattern matching language is Turing complete, even infinite loops can be produced at compile time and should be detected by the compiler. Loop detection error The following (somewhat extreme ) code: foo(x) = foo(x); process = foo; will produce the following error: ERROR : stack overflow in eval and similar kind of infinite loop errors can be produced with more complex code. [TO COMPLETE] Signal related errors Signal expressions are produced from box expressions, are type annotated and finally reduced to a normal-form. Some primitives expect their parameters to follow some constraints, like being in a specific range or being bounded for instance. The domain of mathematical functions is checked and non allowed operations are signaled. Automatic type promotion Some primitives (like route , rdtable , rwtable ...) expect arguments with an integer type, which is automatically promoted, that is the equivalent of int(exp) is internally added and is not necessary in the source code. Parameter range errors Soundfile usage error The soundfile primitive assumes the part number to stay in the [0..255] interval, so for instance the following code: process = _,0 : soundfile(\"foo.wav\", 2); will produce the following error: ERROR : out of range soundfile part number (interval(-1,1,-24) instead of interval(0,255)) in expression : length(soundfile(\"foo.wav\"),IN[0]) Delay primitive error The delay @ primitive assumes that the delay signal value is bounded, so the following expression: import(\"stdfaust.lib\"); process = @(ba.time); will produce the following error: ERROR : can't compute the min and max values of : proj0(letrec(W0 = (proj0(W0)'+1)))@0+-1 used in delay expression : IN[0]@(proj0(letrec(W0 = (proj0(W0)'+1)))@0+-1) (probably a recursive signal) [TO COMPLETE] Table construction errors The rdtable primitive can be used to read through a read-only (pre-defined at initialisation time) table. The rwtable primitive can be used to implement a read/write table. Both have a size computed at compiled time The rdtable primitive assumes that the table content is produced by a processor with 0 input and one output, known at compiled time. So the following expression: process = rdtable(9, +, 4); will produce the following error, since the + is not of the correct type: ERROR : checkInit failed for type RSEVN interval(-2,2,-24) The same kind of errors will happen when read and write indexes are incorrectly defined in a rwtable primitive. Mathematical functions out of domain errors Error messages will be produced when the mathematical functions are used outside of their domain, and if the problematic computation is done at compile time. If the out of domain computation may be done at runtime, then a warning can be produced using the -me option (see Warning messages section). Modulo primitive error The modulo % assumes that the denominator is not 0, thus the following code: process = _%0; will produce the following error: ERROR : % by 0 in IN[0] % 0 The same kind of errors will be produced for acos , asin , fmod , log10 , log , remainder and sqrt functions. FIR and backends related errors Some primitives of the language are not available in some backends. 
The example code: fun = ffunction(float fun(float), , \"\"); process = fun; compiled with the wast/wasm backends using: faust -lang wast errors.dsp will produce the following error: ERROR : calling foreign function 'fun' is not allowed in this compilation mode and the same kind of errors would happen also with foreign variables or constants. [TO COMPLETE] Compiler option errors All compiler options cannot be used with all backends. Moreover, some compiler options can not be combined. These will typically trigger errors, before any compilation actually begins. [TO COMPLETE] Warning messages Warning messages do not stop the compilation process, but allow to get usefull informations on potential problematic code. The messages can be printed using the -wall compilation option. Mathematical out-of-domain error warning messages are displayed when both -wall and -me options are used.","title":"Error Messages"},{"location":"manual/errors/#error-messages","text":"Error messages are typically displayed in the form of compiler errors. They occur when the code cannot be successfully compiled, and typically indicate issues such as syntax errors or semantic errors. They can occur at different stages in the compilation process, possibly with the file and line number where the error occurred (when this information can be retrieved), as well as a brief description of the error. The compiler is organized in several stages: starting from the DSP source code, the parser builds an internal memory representation of the source program (typically known as an Abstract Source Tree ) made here of primitives in the Box language . A first class of errors messages are known as syntax error messages, like missing the ; character to end a line, etc. the next step is to evaluate the definition of process programe entry point. This step is basically a \u03bb-calculus interpreter with a strict evaluation. The result is \u201dflat\u201d block-diagram where everything have been expanded. The resulting block is type annoatetd to discover its number of inputs and outputs. the \u201dflat\u201d block-diagram is then translated the Signal language where signals as conceptually infinite streams of samples or control values. The box language actually implements the Faust Block Diagram Algebra , and not following the connections rules will trigger a second class of errors messages, the box connection errors . Other errors can be produced at this stage when parameters for some primitives are not of the correct type. the pattern matching meta language allows to algorithmically create and manipulate block diagrams expressions. So a third class of pattern matching coding errors can occur at this level. signal expressions are optimized, type annotated (to associate an integer or real type with each signal, but also discovering when signals are to be computed: at init time, control-rate or sample-rate..) to produce a so-called normal-form . A fourth class of parameter range errors or typing errors can occur at this level, like using delays with a non-bounded size, etc. signal expressions are then converted in FIR (Faust Imperative Representation), a representation for state based computation (memory access, arithmetic computations, control flow, etc.), to be converted into the final target language (like C/C++, LLVM IR, Rust, WebAssembly, etc.). A fifth class of backend errors can occur at this level, like non supported compilation options for a given backend, etc. 
Note that the current error messages system is still far from perfect, usually when the origin of the error in the DSP source cannot be properly traced. In this case the file and line number where the error occurred are not displayed, but an internal description of the expression (as a Box of a Signal) is printed.","title":"Error messages"},{"location":"manual/errors/#syntax-errors","text":"Those errors happen when the language syntax is not respected. Here are some examples. The following program: box1 = 1 box2 = 2; process = box1,box2; will produce the following error message: errors.dsp : 2 : ERROR : syntax error, unexpected IDENT It means that an unexpected identifier as been found line 2 of the file test.dsp. Usually, this error is due to the absence of the semi-column ; at the end of the previous line. The following program: t1 = _~(+(1); 2 process = t1 / 2147483647; will produce the following error message: errors.dsp : 1 : ERROR : syntax error, unexpected ENDDEF The parser finds the end of the definition ( ; ) while searching a closing right parenthesis. The following program: process = ffunction; will produce the following error message: errors.dsp : 1 : ERROR : syntax error, unexpected ENDDEF, expecting LPAR The parser was expecting a left parenthesis. It identified a keyword of the language that requires arguments. The following program: process = +)1); will produce the following error message: errors.dsp : 1 : ERROR : syntax error, unexpected RPAR The wrong parenthesis has been used. The following program: process = <:+; will produce the following error message: errors.dsp : 1 : ERROR : syntax error, unexpected SPLIT The <: split operator is not correctly used, and should have been written process = _<:+; . The following program: process = foo; will produce the following error message: errors.dsp : 1 : ERROR : undefined symbol : foo This happens when an undefined name is used.","title":"Syntax errors"},{"location":"manual/errors/#box-connection-errors","text":"Diagram expressions express how block expressions can be combined to create new ones. The connection rules are precisely defined for each of them and have to be followed for the program to be correct. Remember the operator priority when writing more complex expressions.","title":"Box connection errors"},{"location":"manual/errors/#the-five-connections-rules","text":"A second category of error messages is returned when block expressions are not correctly connected.","title":"The five connections rules"},{"location":"manual/errors/#parallel-connection","text":"Combining two blocks A and B in parallel can never produce a box connection error since the 2 blocks are placed one on top of the other, without connections. The inputs of the resulting block-diagram are the inputs of A and B . The outputs of the resulting block-diagram are the outputs of A and B .","title":"Parallel connection"},{"location":"manual/errors/#sequencial-connection-error","text":"Combining two blocks A and B in sequence will produce a box connection error if outputs(A) != inputs(B) . 
So for instance the following program: A = _,_; B = _,_,_; process = A : B; will produce the following error message: ERROR : sequential composition A:B The number of outputs [2] of A must be equal to the number of inputs [3] of B Here A = _,_; has 2 outputs while B = _,_,_; has 3 inputs","title":"Sequencial connection error"},{"location":"manual/errors/#split-connection-error","text":"Combining two blocks A and B with the split composition will produce a box connection error if the number of inputs of B is not a multiple of the number of outputs of A . So for instance the following program: A = _,_; B = _,_,_; process = A <: B; will produce the following error message: ERROR : split composition A<:B The number of outputs [2] of A must be a divisor of the number of inputs [3] of B Here A = _,_; has 2 outputs while B = _,_,_; has 3 inputs","title":"Split connection error"},{"location":"manual/errors/#merge-connection-error","text":"Combining two blocks A and B with the merge composition will produce a box connection error if the number of outputs of A is not a multiple of the number of inputs of B . So for instance the following program: A = _,_; B = _,_,_; process = A :> B; will produce the following error message: ERROR : merge composition A:>B The number of outputs [2] of A must be a multiple of the number of inputs [3] of B Here A = _,_; has 2 outputs while B = _,_,_; has 3 inputs","title":"Merge connection error"},{"location":"manual/errors/#recursive-connection-error","text":"Combining two blocks A and B with the recursive composition will produce a box connection error if the number of outputs of A is less than the number of inputs of B , or the number of outputs of B is more than the number of inputs of A (that is the following \\mathrm{outputs}(A) \\geq \\mathrm{inputs}(B) and \\mathrm{inputs}(A) \\geq \\mathrm{outputs}(B) connection rule is not respected). So for instance the following program: A = _,_; B = _,_,_; process = A ~ B; will produce the following error message: ERROR : recursive composition A~B The number of outputs [2] of A must be at least the number of inputs [3] of B. The number of inputs [2] of A must be at least the number of outputs [3] of B. Here A = _,_; has 2 inputs and 2 outputs while B = _,_,_; has 3 inputs and 3 outputs","title":"Recursive connection error"},{"location":"manual/errors/#route-connection-errors","text":"More complex routing between blocks can also be described using the route primitive. Two different errors can be produced in case of incorrect coding: process = route(+,8.7,(0,0),(0,1)); will produce the following error message: ERROR : invalid route expression, first two parameters should be blocks producing a value, third parameter a list of input/output pairs : route(+,8.7f,0,0,0,1) And the second one when the parameters are not actually numbers: process = route(9,8.7f,0,0,0,button(\"foo\")); will produce the following error message: ERROR : invalid route expression, parameters should be numbers : route(9,8.7f,0,0,0,button(\"foo\"))","title":"Route connection errors"},{"location":"manual/errors/#iterative-constructions","text":"Iterations are analogous to for(...) loops in other languages and provide a convenient way to automate some complex block-diagram constructions. All par , seq , sum , prod expressions have the same form, take an identifier as first parameter, a number of iteration as an integer constant numerical expression as second parameter, then an arbitrary block-diagram as third parameter. 
The example code: process = par(+, 2, 8); will produce the following syntax error, since the first parameter is not an identifier: filename.dsp : xx : ERROR : syntax error, unexpected ADD, expecting IDENT The example code: process = par(i, +, 8); will produce the following error: filename.dsp : 1 : ERROR : not a constant expression of type : (0->1) : +","title":"Iterative constructions"},{"location":"manual/errors/#pattern-matching-errors","text":"Pattern matching mechanism allows to algorithmically create and manipulate block diagrams expressions. Pattern matching coding errors can occur at this level.","title":"Pattern matching errors"},{"location":"manual/errors/#multiple-symbol-definition-error","text":"This error happens when a symbol is defined several times in the DSP file: ERROR : [file foo.dsp : N] : multiple definitions of symbol 'foo' Since computation are done at compile time and the pattern matching language is Turing complete, even infinite loops can be produced at compile time and should be detected by the compiler.","title":"Multiple symbol definition error"},{"location":"manual/errors/#loop-detection-error","text":"The following (somewhat extreme ) code: foo(x) = foo(x); process = foo; will produce the following error: ERROR : stack overflow in eval and similar kind of infinite loop errors can be produced with more complex code. [TO COMPLETE]","title":"Loop detection error"},{"location":"manual/errors/#signal-related-errors","text":"Signal expressions are produced from box expressions, are type annotated and finally reduced to a normal-form. Some primitives expect their parameters to follow some constraints, like being in a specific range or being bounded for instance. The domain of mathematical functions is checked and non allowed operations are signaled.","title":"Signal related errors"},{"location":"manual/errors/#automatic-type-promotion","text":"Some primitives (like route , rdtable , rwtable ...) expect arguments with an integer type, which is automatically promoted, that is the equivalent of int(exp) is internally added and is not necessary in the source code.","title":"Automatic type promotion"},{"location":"manual/errors/#parameter-range-errors","text":"","title":"Parameter range errors"},{"location":"manual/errors/#soundfile-usage-error","text":"The soundfile primitive assumes the part number to stay in the [0..255] interval, so for instance the following code: process = _,0 : soundfile(\"foo.wav\", 2); will produce the following error: ERROR : out of range soundfile part number (interval(-1,1,-24) instead of interval(0,255)) in expression : length(soundfile(\"foo.wav\"),IN[0])","title":"Soundfile usage error"},{"location":"manual/errors/#delay-primitive-error","text":"The delay @ primitive assumes that the delay signal value is bounded, so the following expression: import(\"stdfaust.lib\"); process = @(ba.time); will produce the following error: ERROR : can't compute the min and max values of : proj0(letrec(W0 = (proj0(W0)'+1)))@0+-1 used in delay expression : IN[0]@(proj0(letrec(W0 = (proj0(W0)'+1)))@0+-1) (probably a recursive signal) [TO COMPLETE]","title":"Delay primitive error"},{"location":"manual/errors/#table-construction-errors","text":"The rdtable primitive can be used to read through a read-only (pre-defined at initialisation time) table. The rwtable primitive can be used to implement a read/write table. 
Both have a size computed at compiled time The rdtable primitive assumes that the table content is produced by a processor with 0 input and one output, known at compiled time. So the following expression: process = rdtable(9, +, 4); will produce the following error, since the + is not of the correct type: ERROR : checkInit failed for type RSEVN interval(-2,2,-24) The same kind of errors will happen when read and write indexes are incorrectly defined in a rwtable primitive.","title":"Table construction errors"},{"location":"manual/errors/#mathematical-functions-out-of-domain-errors","text":"Error messages will be produced when the mathematical functions are used outside of their domain, and if the problematic computation is done at compile time. If the out of domain computation may be done at runtime, then a warning can be produced using the -me option (see Warning messages section).","title":"Mathematical functions out of domain errors"},{"location":"manual/errors/#modulo-primitive-error","text":"The modulo % assumes that the denominator is not 0, thus the following code: process = _%0; will produce the following error: ERROR : % by 0 in IN[0] % 0 The same kind of errors will be produced for acos , asin , fmod , log10 , log , remainder and sqrt functions.","title":"Modulo primitive error"},{"location":"manual/errors/#fir-and-backends-related-errors","text":"Some primitives of the language are not available in some backends. The example code: fun = ffunction(float fun(float), , \"\"); process = fun; compiled with the wast/wasm backends using: faust -lang wast errors.dsp will produce the following error: ERROR : calling foreign function 'fun' is not allowed in this compilation mode and the same kind of errors would happen also with foreign variables or constants. [TO COMPLETE]","title":"FIR and backends related errors"},{"location":"manual/errors/#compiler-option-errors","text":"All compiler options cannot be used with all backends. Moreover, some compiler options can not be combined. These will typically trigger errors, before any compilation actually begins. [TO COMPLETE]","title":"Compiler option errors"},{"location":"manual/errors/#warning-messages","text":"Warning messages do not stop the compilation process, but allow to get usefull informations on potential problematic code. The messages can be printed using the -wall compilation option. Mathematical out-of-domain error warning messages are displayed when both -wall and -me options are used.","title":"Warning messages"},{"location":"manual/faq/","text":"Frequently Asked Questions When to use int or float cast ? The Signal Processor Semantic section explains what a Faust program describes. In particular Faust considers two type of signals: integer signals and floating point signals . Mathematical operations either occur in the domain of integer numbers, or in the domain of floating point numbers, depending of their types, read here . Using explicit int cast or float cast may be needed to force a given computation to be done in the correct number domain. Some language primitives (like par , seq , route , soundfile , etc.) assume that their parameters are Constant Numerical Expressions of the integer type. In this case the compiler automatically does type promotion and there is no need to use int cast to have the argument be of the integer type (note that an uneeded cast will simply be ignored and will not add uneeded computation in the generated code). User interface items produce floating point signals . 
Depending of their use later in the computed expression, using explicit int cast may be needed also to force a given computation to be done in the correct number domain. Does select2 behaves as a standard C/C++ like if ? The short answer is no , select2 doesn't behave like the if-then-else of a traditional programming language, nor does ba.if of the standard library. To understand why, think of select2 as the tuner of a radio, it selects what you listen, but does not prevent the various radio stations from broadcasting. Actually, select2 could be easily redefined in Faust as: select2(i, x, y) = (1-i) * x + i * y; Strict vs Lazy semantics In computer science terminology, select2(i,x,y) has so-called strict semantics. This means that its three arguments i , x , y are always evaluated before select2 itself is executed, in other words, even if x or y is not selected. The standard C/C++ if-then-else has lazy semantics. The condition is always executed, but depending of the value of the condition , only the then or the else branch is executed. The strict semantics of select2 means that you cannot use it to prevent a division by 0 in an expression, or the square root of a negative number, etc... For example, the following code will not prevent a division by 0 error: select2(x == 0, 1/x, 10000); You cannot use ba.if either because it is implemented using select2 and has the same strict semantics. Therefore the following code will not prevent a division by 0 error: ba.if(x == 0, 10000, 1/x); But things are a little bit more complex... Concerning the way select2 is compiled by the Faust compiler, the strict semantic is always preserved. In particular, the type system flags problematic expressions and the stateful parts are always placed outside the if. For instance the DSP code: process = button(\"choose\"), (*(3) : +~_), (*(7):+~_) : select2; is compiled as the following C++ code, where fRec0[0] and fRec1[0] contains the computation of each branch: for (int i = 0; (i < count); i = (i + 1)) { fRec0[0] = (fRec0[1] + (3.0f * float(input0[i]))); fRec1[0] = (fRec1[1] + (7.0f * float(input1[i]))); output0[i] = FAUSTFLOAT((iSlow0 ? fRec1[0] : fRec0[0])); fRec0[1] = fRec0[0]; fRec1[1] = fRec1[0]; } For code optimization strategies, the generated code is not fully strict on select2 . When Faust produces C++ code, the C++ compiler can decide to avoid the execution of the stateless part of the signal that is not selected (and not needed elsewhere). This doesn't change the semantics of the output signal, but it changes the strictness of the code if a division by 0 would have been executed in the stateless part. When stateless expressions are used, they are by default generated using a non-strict conditional expression. For instance the following DSP code: process = select2((+(1)~_)%10, sin:cos:sin:cos, cos:sin:cos:sin); is compiled in C/C++ as: for (int i0 = 0; i0 < count; i0 = i0 + 1) { iRec0[0] = iRec0[1] + 1; output0[i0] = FAUSTFLOAT(((iRec0[0] % 10) ? std::sin(std::cos(std::sin(std::cos(float(input1[i0]))))) : std::cos(std::sin(std::cos(std::sin(float(input0[i0]))))))); iRec0[1] = iRec0[0]; } where only one of the then or else branch will be effectively computed, thus saving CPU. 
If computing both branches is really desired, the -sts (--strict-select) option can be used to force their computation by putting them in local variables, as shown in the following version of the same DSP code generated with -sts : for (int i0 = 0; i0 < count; i0 = i0 + 1) { iRec0[0] = iRec0[1] + 1; float fThen0 = std::cos(std::sin(std::cos(std::sin(float(input0[i0]))))); float fElse0 = std::sin(std::cos(std::sin(std::cos(float(input1[i0]))))); output0[i0] = FAUSTFLOAT(((iRec0[0] % 10) ? fElse0 : fThen0)); iRec0[1] = iRec0[0]; } thus preserving the strict semantics, even if a non-strict (cond) ? then : else form is used to produce the result of the select2 expression. This can be helpful for debugging purposes, like testing that there is no division by 0, or that no INF or NaN values are produced. The interp-tracer tool can be used for that by adding the -sts option. So again, remember that select2 cannot be used to avoid computing something . For computations that need to avoid some values or ranges (like doing val/0 that would return INF , or log of a negative value that would return NaN ), the solution is to use min and max to force the arguments to be in the correct domain of values. For example, to avoid division by 0, you can write 1/max(ma.EPSILON, x) . Note that select2 is also typically used to compute rdtable/rwtable access indexes. In this case, computing an out-of-bounds array index is not a problem as long as it is not used later on. What properties do the Faust compiler and generated code have ? [WIP] Compiler The compiler itself is Turing complete because it contains a pattern matching meta-programming model. Thus a Faust DSP program can loop at compile time. For instance the following: foo = foo; process = foo; will loop and hopefully end with the message: ERROR : after 400 evaluation steps, the compiler has detected an endless evaluation cycle of 2 steps because the compiler contains an infinite loop detection heuristic. Generated code The generated code computes each sample in a finite number of operations, thus a DSP program that would loop infinitely cannot be written. It means the generated code is not Turing complete. This is of course a limitation because certain classes of algorithms cannot be expressed ( TODO : Newton approximation used in diode VA model). But on the other hand it gives a strong guarantee on the upper bound of the CPU cost, which is quite interesting to have when deploying a program in a real-time audio context. Memory footprint The DSP memory footprint is perfectly known at compile time, so the generated code always consumes a finite amount of memory. Moreover the standard deployment model is to allocate the DSP at load time, init it with a given sample rate, then execute the DSP code by repeatedly calling the compute function to process audio buffers. CPU footprint Since the generated code computes each sample in a finite number of operations, the CPU use has an upper bound, which is a very helpful property when deploying a program in a real-time audio context. Read the Does select2 behaves as a standard C/C++ like if ? section for some subtle issues concerning the select2 primitive. Pattern matching and lists Strictly speaking, there are no lists in Faust. For example the expression () or NIL in Lisp, which indicates an empty list, does not exist in Faust. Similarly, the distinction in Lisp between the number 3 and the list with only one element (3) does not exist in Faust.
However, list operations can be simulated (in part) using the parallel binary composition operation , and pattern matching. The parallel composition operation is right-associative. This means that the expression (1,2,3,4) is just a simplified form of the fully parenthesized expression (1,(2,(3,4))) . The same is true for (1,2,(3,4)) which is also a simplified form of the same fully parenthesized expression (1,(2,(3,4))) . You can think of pattern-matching as always being done on fully parenthesized expressions. Therefore no Faust function can ever distinguish (1,2,3,4) from (1,2,(3,4)) , because they represent the same fully parenthesized expression (1,(2,(3,4))) . This is why ba.count( ((1,2), (3,4), (5,6)) ) is not 3 but 4, and also why ba.count( ((1,2), ((3,4),5,6)) ) is not 2 but 4. Explanation: in both cases the fully parenthesized expression is ( (1,2),((3,4),(5,6)) ) . The definition of ba.count being: count((x,y)) = 1 + count(y); // rule R1 count(x) = 1; // rule R2 we have: ba.count( ((1,2),((3,4),(5,6))) ) -R1-> 1 + ba.count( ((3,4),(5,6)) ) -R1-> 1 + 1 + ba.count( (5,6) ) -R1-> 1 + 1 + 1 + ba.count( 6 ) -R2-> 1 + 1 + 1 + 1 Please note that pattern matching is not limited to parallel composition: the other composition operators (<: : :> ~) can be used too. What is the situation about the Faust compiler license and the deployed code? Q: Does the Faust license (GPL) apply somehow to the code exports that it produces as well? Or can the license of the exported code be freely chosen such that one could develop commercial software (e.g. VST plug-ins) using Faust? A: You can freely use Faust to develop commercial software. The GPL license of the compiler doesn't apply to the code generated by the compiler. The license of the code generated by the Faust compiler depends only on the licenses of the input files. You should therefore check the licenses of the Faust libraries used and the architecture files. On the whole, when used unmodified, Faust libraries and architecture files are compatible with commercial, non-open source use. Surprising effects of vgroup/hgroup on how controls and parameters work User interface widget primitives like button , vslider/hslider , vbargraph/hbargraph allow for an abstract description of a user interface from within the Faust code. They can be grouped in a hierarchical manner using the vgroup/hgroup/tgroup primitives. Each widget then has an associated path name obtained by concatenating the labels of all its surrounding groups with its own label. Widgets that have the same path in the hierarchical structure will correspond to the same controller and will appear only once in the GUI. For instance the following DSP code does not contain any explicit grouping mechanism: import(\"stdfaust.lib\"); freq1 = hslider(\"Freq1\", 500, 200, 2000, 0.01); freq2 = hslider(\"Freq2\", 500, 200, 2000, 0.01); process = os.osc(freq1) + os.square(freq2), os.osc(freq1) + os.triangle(freq2); Shared freq1 and freq2 controllers So even if the freq1 and freq2 controllers are used as parameters at four different places, freq1 used in the two os.osc(freq1) expressions will have the same path (like /foo/Freq1 ), be associated with a unique controller, and will finally appear once in the GUI. And the same mechanism applies to freq2 .
Now if some grouping mechanism is used to better control the UI rendering, as in the following DSP code: import(\"stdfaust.lib\"); freq1 = hslider(\"Freq1\", 500, 200, 2000, 0.01); freq2 = hslider(\"Freq2\", 500, 200, 2000, 0.01); process = hgroup(\"Voice1\", os.osc(freq1) + os.square(freq2)), hgroup(\"Voice2\", os.osc(freq1) + os.triangle(freq2)); The freq1 and freq2 controllers now get a different path in each group (like /foo/Voice1/Freq1 and /foo/Voice1/Freq2 in the first group, and /foo/Voice2/Freq1 and /foo/Voice2/Freq2 in the second group), and so four separate controllers and UI items are finally created. Four freq1 and freq2 controllers Using a relative pathname, as explained in Labels as Pathnames, allows us to move freq1 one step higher in the hierarchical structure, thus having again a unique path (like /foo/Freq1 ) and controller: import(\"stdfaust.lib\"); freq1 = hslider(\"../Freq1\", 500, 200, 2000, 0.01); freq2 = hslider(\"Freq2\", 500, 200, 2000, 0.01); process = hgroup(\"Voice1\", os.osc(freq1) + os.square(freq2)), hgroup(\"Voice2\", os.osc(freq1) + os.triangle(freq2)); freq1 moved one step higher in the hierarchical structure Note that the name of a given hgroup , vgroup , or tgroup can be used more than once; the groups will then be merged. This can be useful when you want to define different names for different widget signals, but still want to group them. For example, this pattern can be used to separate a synth's UI design from the implementation of the synth's DSP: import (\"stdfaust.lib\"); synth(foo, bar, baz) = os.osc(foo+bar+baz); synth_ui = synth(foo, bar, baz) with { ui(x) = hgroup(\"Synth\", x); leftcol(x) = ui(vgroup(\"[0]foobar\", x)); foo = leftcol(hslider(\"[0]foo\", 100, 20, 1000, 1)); bar = leftcol(hslider(\"[1]bar\", 100, 20, 1000, 1)); baz = ui(vslider(\"[1]baz\", 100, 20, 1000, 1)); }; process = synth_ui; naming and grouping What are the rules used for partial application ? Assuming F is not an abstraction, has n+m inputs, and A has n outputs, we have the rewriting rule F(A) ==> A,bus(m):F (with bus(1) = _ and bus(n+1) = _,bus(n) ). There is an exception when F is a binary operation like +,-,/,* . In this case, the rewriting rule is /(3) ==> _,3:/ . In other words, when we apply only one argument, it is the second one. Control rate versus audio rate Question: I have a question about sample rate / control rate issues. I have Faust code that takes channel pressure messages from my keyboard as input, therefore at control rate, and outputs an expression signal at sample rate. The first part of the code can run at control rate, but I want to force it to run at sample rate (otherwise unwanted behavior will appear). Is there a simple way of forcing my pressure signal to be at sample rate (without using a smoothing function, which may also result in unwanted behavior)? Answer: the ba.kr2ar function can be used for that purpose.","title":"Frequently Asked Questions"},{"location":"manual/faq/#frequently-asked-questions","text":"","title":"Frequently Asked Questions"},{"location":"manual/faq/#when-to-use-int-or-float-cast","text":"The Signal Processor Semantic section explains what a Faust program describes. In particular, Faust considers two types of signals: integer signals and floating point signals . Mathematical operations occur either in the domain of integer numbers or in the domain of floating point numbers, depending on their types, as explained here .
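For instance (a minimal sketch of our own; the slider name is arbitrary), a slider produces a floating point signal, so an explicit int cast is what keeps a modulo computation in the integer domain: idx = hslider(\"index\", 0, 0, 7, 1); process = int(idx) % 2; Without the cast, the computation would happen in the floating point domain.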
Using an explicit int or float cast may be needed to force a given computation to be done in the correct number domain. Some language primitives (like par , seq , route , soundfile , etc.) assume that their parameters are Constant Numerical Expressions of the integer type. In this case, the compiler automatically does type promotion and there is no need to use an int cast to make the argument an integer (note that an unneeded cast will simply be ignored and will not add any extra computation in the generated code). User interface items produce floating point signals . Depending on how they are used later in the computed expression, an explicit int cast may also be needed to force a given computation to be done in the correct number domain.","title":"When to use int or float cast ?"},{"location":"manual/faq/#does-select2-behaves-as-a-standard-cc-like-if","text":"The short answer is no : select2 doesn't behave like the if-then-else of a traditional programming language, nor does ba.if of the standard library. To understand why, think of select2 as the tuner of a radio: it selects what you listen to, but does not prevent the other radio stations from broadcasting. Actually, select2 could easily be redefined in Faust as: select2(i, x, y) = (1-i) * x + i * y;","title":"Does select2 behaves as a standard C/C++ like if ?"},{"location":"manual/faq/#strict-vs-lazy-semantics","text":"In computer science terminology, select2(i,x,y) has so-called strict semantics. This means that its three arguments i , x , y are always evaluated before select2 itself is executed; in other words, both x and y are computed even if only one of them is selected. The standard C/C++ if-then-else has lazy semantics. The condition is always executed, but depending on the value of the condition , only the then or the else branch is executed. The strict semantics of select2 means that you cannot use it to prevent a division by 0 in an expression, or the square root of a negative number, etc. For example, the following code will not prevent a division by 0 error: select2(x == 0, 1/x, 10000); You cannot use ba.if either, because it is implemented using select2 and has the same strict semantics. Therefore the following code will not prevent a division by 0 error either: ba.if(x == 0, 10000, 1/x);","title":"Strict vs Lazy semantics"},{"location":"manual/faq/#but-things-are-a-little-bit-more-complex","text":"Concerning the way select2 is compiled by the Faust compiler, the strict semantics are always preserved. In particular, the type system flags problematic expressions and the stateful parts are always placed outside of the conditional. For instance the DSP code: process = button(\"choose\"), (*(3) : +~_), (*(7):+~_) : select2; is compiled as the following C++ code, where fRec0[0] and fRec1[0] contain the computation of each branch: for (int i = 0; (i < count); i = (i + 1)) { fRec0[0] = (fRec0[1] + (3.0f * float(input0[i]))); fRec1[0] = (fRec1[1] + (7.0f * float(input1[i]))); output0[i] = FAUSTFLOAT((iSlow0 ? fRec1[0] : fRec0[0])); fRec0[1] = fRec0[0]; fRec1[1] = fRec1[0]; } For code optimization reasons, the generated code is not fully strict on select2 . When Faust produces C++ code, the C++ compiler can decide to skip the execution of the stateless part of the signal that is not selected (and not needed elsewhere). This doesn't change the semantics of the output signal, but it changes the strictness of the code if a division by 0 would have been executed in the stateless part.
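Since select2 cannot act as a guard, the robust way to keep such an expression well-defined, as recommended further below, is to constrain its argument with min / max . A minimal sketch (the safeInv name is ours, and the input is assumed to be non-negative): import(\"stdfaust.lib\"); safeInv(x) = 1.0 / max(ma.EPSILON, x); process = _ : safeInv; Here the division can never receive a 0 denominator, regardless of how the surrounding conditional is compiled.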
When stateless expressions are used, they are by default generated using a non-strict conditional expression. For instance the following DSP code: process = select2((+(1)~_)%10, sin:cos:sin:cos, cos:sin:cos:sin); is compiled in C/C++ as: for (int i0 = 0; i0 < count; i0 = i0 + 1) { iRec0[0] = iRec0[1] + 1; output0[i0] = FAUSTFLOAT(((iRec0[0] % 10) ? std::sin(std::cos(std::sin(std::cos(float(input1[i0]))))) : std::cos(std::sin(std::cos(std::sin(float(input0[i0]))))))); iRec0[1] = iRec0[0]; } where only one of the then or else branches will effectively be computed, thus saving CPU. If computing both branches is really desired, the -sts (--strict-select) option can be used to force their computation by putting them in local variables, as shown in the following code, generated from the same DSP program with the -sts option: for (int i0 = 0; i0 < count; i0 = i0 + 1) { iRec0[0] = iRec0[1] + 1; float fThen0 = std::cos(std::sin(std::cos(std::sin(float(input0[i0]))))); float fElse0 = std::sin(std::cos(std::sin(std::cos(float(input1[i0]))))); output0[i0] = FAUSTFLOAT(((iRec0[0] % 10) ? fElse0 : fThen0)); iRec0[1] = iRec0[0]; } This preserves the strict semantics, even if a non-strict (cond) ? then : else form is used to produce the result of the select2 expression. This can be helpful for debugging purposes, such as checking that no division by 0 occurs and that no INF or NaN values are produced. The interp-tracer tool can be used for that by adding the -sts option. So again, remember that select2 cannot be used to avoid computing something . For computations that need to avoid certain values or ranges (like computing val/0 , which would return INF , or the log of a negative value, which would return NaN ), the solution is to use min and max to force the arguments into the correct domain of values. For example, to avoid division by 0, you can write 1/max(ma.EPSILON, x) . Note that select2 is also typically used to compute rdtable/rwtable access indexes. In this case, computing an out-of-bound array index is not a problem as long as it is not used later on.","title":"But things are a little bit more complex..."},{"location":"manual/faq/#what-properties-does-the-faust-compiler-and-generated-code-have-wip","text":"","title":"What properties does the Faust compiler and generated code have ? [WIP]"},{"location":"manual/faq/#compiler","text":"The compiler itself is Turing-complete because it contains a pattern matching meta-programming model. Thus a Faust DSP program can loop at compile time. For instance the following: foo = foo; process = foo; will loop and should eventually end with the message: ERROR : after 400 evaluation steps, the compiler has detected an endless evaluation cycle of 2 steps because the compiler contains an infinite-loop detection heuristic.","title":"Compiler"},{"location":"manual/faq/#generated-code","text":"The generated code computes each sample in a finite number of operations, thus a DSP program that would loop infinitely cannot be written. This means the generated code is not Turing-complete. This is of course a limitation, because certain classes of algorithms cannot be expressed ( TODO : Newton approximation used in the diode VA model). But in return it gives a strong guarantee on the upper bound of the CPU cost, which is quite valuable when deploying a program in a real-time audio context.","title":"Generated code"},{"location":"manual/faq/#memory-footprint","text":"The DSP memory footprint is perfectly known at compile time, so the generated code always consumes a finite amount of memory.
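A hedged C++ sketch of the standard deployment model described next (the mydsp class name, the 1-in/1-out layout, and the buffer size are assumptions of ours):
#include \"mydsp.h\"   // assumption: header containing the class generated by the Faust compiler

int main()
{
    mydsp dsp;                       // allocation: memory footprint known at compile time
    dsp.init(44100);                 // initialization with a given sample rate
    const int kFrames = 512;
    FAUSTFLOAT in[kFrames] = {};     // FAUSTFLOAT defaults to float in the generated header
    FAUSTFLOAT out[kFrames] = {};
    FAUSTFLOAT* inputs[] = { in };
    FAUSTFLOAT* outputs[] = { out };
    for (int b = 0; b < 4; b++) {    // in a real application the audio driver calls this repeatedly
        dsp.compute(kFrames, inputs, outputs);
    }
    return 0;
}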
Moreover, the standard deployment model is to allocate the DSP at load time, initialize it with a given sample rate, then execute the DSP code by repeatedly calling the compute function to process audio buffers.","title":"Memory footprint"},{"location":"manual/faq/#cpu-footprint","text":"Since the generated code computes each sample in a finite number of operations, the CPU usage has an upper bound, which is a very helpful property when deploying a program in a real-time audio context. Read the Does select2 behaves as a standard C/C++ like if ? section for some subtle issues concerning the select2 primitive.","title":"CPU footprint"},{"location":"manual/faq/#pattern-matching-and-lists","text":"Strictly speaking, there are no lists in Faust. For example, the expression () or NIL in Lisp, which indicates an empty list, does not exist in Faust. Similarly, the distinction in Lisp between the number 3 and the list with only one element (3) does not exist in Faust. However, list operations can be simulated (in part) using the parallel binary composition operation , and pattern matching. The parallel composition operation is right-associative. This means that the expression (1,2,3,4) is just a simplified form of the fully parenthesized expression (1,(2,(3,4))) . The same is true for (1,2,(3,4)) , which is also a simplified form of the same fully parenthesized expression (1,(2,(3,4))) . You can think of pattern-matching as always being done on fully parenthesized expressions. Therefore no Faust function can ever distinguish (1,2,3,4) from (1,2,(3,4)) , because they represent the same fully parenthesized expression (1,(2,(3,4))) . This is why ba.count( ((1,2), (3,4), (5,6)) ) is not 3 but 4, and also why ba.count( ((1,2), ((3,4),5,6)) ) is not 2 but 4. Explanation: in both cases the fully parenthesized expression is ( (1,2),((3,4),(5,6)) ) . The definition of ba.count being: count((x,y)) = 1 + count(y); // rule R1 count(x) = 1; // rule R2 we have: ba.count( ((1,2),((3,4),(5,6))) ) -R1-> 1 + ba.count( ((3,4),(5,6)) ) -R1-> 1 + 1 + ba.count( (5,6) ) -R1-> 1 + 1 + 1 + ba.count( 6 ) -R2-> 1 + 1 + 1 + 1 Please note that pattern matching is not limited to parallel composition: the other composition operators (<: : :> ~) can be used too.","title":"Pattern matching and lists"},{"location":"manual/faq/#what-is-the-situation-about-faust-compiler-licence-and-the-deployed-code","text":"Q: Does the Faust license (GPL) apply somehow to the code exports that it produces as well? Or can the license of the exported code be freely chosen such that one could develop commercial software (e.g. VST plug-ins) using Faust? A: You can freely use Faust to develop commercial software. The GPL license of the compiler doesn't apply to the code generated by the compiler. The license of the code generated by the Faust compiler depends only on the licenses of the input files. You should therefore check the licenses of the Faust libraries used and the architecture files. On the whole, when used unmodified, Faust libraries and architecture files are compatible with commercial, non-open source use.","title":"What is the situation about Faust compiler licence and the deployed code?"},{"location":"manual/faq/#surprising-effects-of-vgrouphgroup-on-how-controls-and-parameters-work","text":"User interface widget primitives like button , vslider/hslider , vbargraph/hbargraph allow for an abstract description of a user interface from within the Faust code. They can be grouped in a hierarchical manner using the vgroup/hgroup/tgroup primitives.
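For example (a minimal sketch of our own; the labels are arbitrary), nesting a slider inside two groups gives it a path ending in /top/sub/gain , following the rule detailed next: gain = hgroup(\"top\", vgroup(\"sub\", hslider(\"gain\", 0.5, 0, 1, 0.01))); process = *(gain);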
Each widget then has an associated path name obtained by concatenating the labels of all its surrounding groups with its own label. Widgets that have the same path in the hierarchical structure will correspond to the same controller and will appear only once in the GUI. For instance the following DSP code does not contain any explicit grouping mechanism: import(\"stdfaust.lib\"); freq1 = hslider(\"Freq1\", 500, 200, 2000, 0.01); freq2 = hslider(\"Freq2\", 500, 200, 2000, 0.01); process = os.osc(freq1) + os.square(freq2), os.osc(freq1) + os.triangle(freq2); Shared freq1 and freq2 controllers So even if the freq1 and freq2 controllers are used as parameters at four different places, freq1 used in the two os.osc(freq1) expressions will have the same path (like /foo/Freq1 ), be associated with a unique controller, and will finally appear only once in the GUI. The same mechanism applies to freq2 . Now if some grouping mechanism is used to better control the UI rendering, as in the following DSP code: import(\"stdfaust.lib\"); freq1 = hslider(\"Freq1\", 500, 200, 2000, 0.01); freq2 = hslider(\"Freq2\", 500, 200, 2000, 0.01); process = hgroup(\"Voice1\", os.osc(freq1) + os.square(freq2)), hgroup(\"Voice2\", os.osc(freq1) + os.triangle(freq2)); The freq1 and freq2 controllers now get a different path in each group (like /foo/Voice1/Freq1 and /foo/Voice1/Freq2 in the first group, and /foo/Voice2/Freq1 and /foo/Voice2/Freq2 in the second group), and so four separate controllers and UI items are finally created. Four freq1 and freq2 controllers Using a relative pathname, as explained in Labels as Pathnames, allows us to move freq1 one step higher in the hierarchical structure, thus having again a unique path (like /foo/Freq1 ) and controller: import(\"stdfaust.lib\"); freq1 = hslider(\"../Freq1\", 500, 200, 2000, 0.01); freq2 = hslider(\"Freq2\", 500, 200, 2000, 0.01); process = hgroup(\"Voice1\", os.osc(freq1) + os.square(freq2)), hgroup(\"Voice2\", os.osc(freq1) + os.triangle(freq2)); freq1 moved one step higher in the hierarchical structure Note that the name of a given hgroup , vgroup , or tgroup can be used more than once; the groups will then be merged. This can be useful when you want to define different names for different widget signals, but still want to group them. For example, this pattern can be used to separate a synth's UI design from the implementation of the synth's DSP: import (\"stdfaust.lib\"); synth(foo, bar, baz) = os.osc(foo+bar+baz); synth_ui = synth(foo, bar, baz) with { ui(x) = hgroup(\"Synth\", x); leftcol(x) = ui(vgroup(\"[0]foobar\", x)); foo = leftcol(hslider(\"[0]foo\", 100, 20, 1000, 1)); bar = leftcol(hslider(\"[1]bar\", 100, 20, 1000, 1)); baz = ui(vslider(\"[1]baz\", 100, 20, 1000, 1)); }; process = synth_ui; naming and grouping","title":"Surprising effects of vgroup/hgroup on how controls and parameters work"},{"location":"manual/faq/#what-are-the-rules-used-for-partial-application","text":"Assuming F is not an abstraction, has n+m inputs, and A has n outputs, we have the rewriting rule F(A) ==> A,bus(m):F (with bus(1) = _ and bus(n+1) = _,bus(n) ). There is an exception when F is a binary operation like +,-,/,* . In this case, the rewriting rule is /(3) ==> _,3:/ . In other words, when we apply only one argument, it is the second one.","title":"What are the rules used for partial application ?"},{"location":"manual/faq/#control-rate-versus-audio-rate","text":"Question: I have a question about sample rate / control rate issues.
I have Faust code that takes channel pressure messages from my keyboard as input, therefore at control rate, and outputs an expression signal at sample rate. The first part of the code can run at control rate, but I want to force it to run at sample rate (otherwise unwanted behavior will appear). Is there a simple way of forcing my pressure signal to be at sample rate (without using a smoothing function, which may also result in unwanted behavior)? Answer: the ba.kr2ar function can be used for that purpose.","title":"Control rate versus audio rate"},{"location":"manual/http/","text":"HTTP Support Similarly to OSC, several Faust architectures also provide HTTP support. This allows Faust applications to be remotely controlled from any Web browser using specific URLs. Moreover, OSC and HTTP support can be freely combined. While OSC support is installed by default when Faust is built, this is not the case for HTTP. That's because it depends on the GNU libmicrohttpd library, which is usually not installed by default on the system. An additional make httpd step is therefore required when compiling and installing Faust: make httpd make sudo make install Note that make httpd will fail if libmicrohttpd is not available on the system. HTTP support can be added to any Faust program (as long as the target architecture supports it: see the tables below) simply by adding the [http:on] metadata to the standard option metadata : declare options \"[http:on]\"; The following tables list the Faust architectures providing HTTP support: Linux Faust Architectures with HTTP Support Audio System Environment Alsa GTK, Qt, Console Jack GTK, Qt, Console Netjack GTK, Qt, Console PortAudio GTK, Qt OSX Faust Architectures with HTTP Support Audio System Environment CoreAudio Qt Jack Qt, Console Netjack Qt, Console PortAudio Qt Windows Faust Architectures with HTTP Support Audio System Environment Jack Qt, Console PortAudio Qt A Simple Example To illustrate how HTTP support works, let's reuse our previous example, a simple monophonic audio mixer with 4 inputs and one output. For each input we have a mute button and a level slider: This example can be compiled as a standalone Jack QT application with HTTP support using the command: faust2jaqt -httpd mix4.dsp The -httpd option embeds a small Web server into the generated application. Its purpose is to serve an HTML page implementing the interface of the app. This page makes use of JavaScript and SVG, and is quite similar to the native QT interface. When the application is started from the command line: ./mix4 various pieces of information are printed on the standard output, including: Faust httpd server version 0.73 is running on TCP port 5510 As we can see, the embedded Web server is running by default on TCP port 5510. The entry point is http://localhost:5510 . It can be opened from any recent browser and produces the page presented in the figure below: JSON Description of the User Interface The communication between the application and the Web browser is based on several underlying URLs. The first one is http://localhost:5510/JSON , which returns a JSON description of the user interface of the application. This JSON description is used internally by the JavaScript code to build the graphical user interface.
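The same description can also be fetched from the command line, for instance with curl (assuming the mixer application is running locally on the default port): curl http://localhost:5510/JSON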
Here is (part of) the JSON returned by mix4 : { \"name\": \"mix4\", \"address\": \"YannAir.local\", \"port\": \"5511\", \"ui\": [ { \"type\": \"hgroup\", \"label\": \"mixer\", \"items\": [ { \"type\": \"vgroup\", \"label\": \"input_0\", \"items\": [ { \"type\": \"vslider\", \"label\": \"level\", \"address\": \"/mixer/input_0/level\", \"init\": \"0\", \"min\": \"0\", \"max\": \"1\", \"step\": \"0.01\" }, { \"type\": \"checkbox\", \"label\": \"mute\", \"address\": \"/mixer/input_0/mute\", \"init\": \"0\", \"min\": \"0\", \"max\": \"0\", \"step\": \"0\" } ] }, ... ] } ] } Querying the State of the Application Each widget has a unique \"address\" field that can be used to query its value. In our example, the level of input 0 has the address /mixer/input_0/level . This address can be used to forge a URL that returns the value of the widget: http://localhost:5510/mixer/input_0/level , resulting in: /mixer/input_0/level 0.00000 Multiple widgets can be queried at once by using an address higher in the hierarchy. For example, to get the values of the level and the mute state of input 0, we use http://localhost:5510/mixer/input_0 , resulting in: /mixer/input_0/level 0.00000 /mixer/input_0/mute 0.00000 To get all the values at once, we simply use http://localhost:5510/mixer , resulting in: /mixer/input_0/level 0.00000 /mixer/input_0/mute 0.00000 /mixer/input_1/level 0.00000 /mixer/input_1/mute 0.00000 /mixer/input_2/level 0.00000 /mixer/input_2/mute 0.00000 /mixer/input_3/level 0.00000 /mixer/input_3/mute 0.00000 Changing the Value of a Widget Let's say that we want to mute input 1 of our mixer. For that purpose, we can use the URL http://localhost:5510/mixer/input_1/mute?value=1 , obtained by appending ?value=1 to the widget URL. All widgets can be controlled in a similar way. For example, http://localhost:5510/mixer/input_3/level?value=0.7 will set the level of input 3 to 0.7. Proxy Control Access to the Web Server A control application may want to access and control the running DSP using its Web server, but without using the delivered HTML page in a browser. Since the complete JSON can be retrieved, control applications can be developed purely in C/C++. A proxy version of the user interface can then be built, and parameters can be set and read using HTTP requests. This mode can be started dynamically using the -server URL parameter. Assuming an application with HTTP support is running remotely at the given URL, the control application will fetch its JSON description, use it to dynamically build the user interface, and give access to the remote parameters. HTTP Cheat Sheet Here is a summary of the various URLs used to interact with the application's Web server. Default Ports Port Description 5510 default TCP port used by the application's Web server 5511...
alternative TCP ports Command Line Options Option Description -port n set the TCP port number used by the application's Web server -server URL start a proxy control application accessing the remote application running at the given URL URLs URL Description http://host:port the base URL to be used in proxy control access mode http://host:port/JSON get a JSON description of the user interface http://host:port/address get the value of a widget or a group of widgets http://host:port/address?value=v set the value of a widget to v JSON Top Level The JSON describes the name, host, and port of the application and a hierarchy of user interface items: { \"name\": <name>, \"address\": <address>, \"port\": <port>, \"ui\": [ <item list> ] } An <item> is either a group (of items) or a widget. Groups A group is essentially a list of items with a specific layout: { \"type\": <type>, \"label\":