[HN Gopher] Declarative, non-intrusive, compile-time C++ reflect...
       ___________________________________________________________________
        
       Declarative, non-intrusive, compile-time C++ reflection for audio
       plug-ins
        
       Author : jcelerier
       Score  : 55 points
       Date   : 2021-10-31 17:08 UTC (5 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | ognarb wrote:
        | This is very cool and a nice use of the new C++20 features.
        
       | jdc wrote:
       | Looks promising. I wonder if this could be used to load arbitrary
       | VST and LV2 plugins into EasyEffects.
        
       | pininja wrote:
        | What are some things this can be used for? From a short skim
        | this seems like a way to make shareable, tweakable effects and
        | patches for any OS. Stuff that historically soundflower could
        | be used for?
        
         | jcelerier wrote:
          | If you wrote software that worked with soundflower, it means
          | that at some point you called either the CoreAudio API
          | directly or some abstraction on top of it (RtAudio,
          | PortAudio, ...), which makes it harder to port to another
          | OS :-)
         | 
          | Here the idea is to write the algorithms in a way that is
          | more future-proof, by not having them depend on any run-time
          | API, just on a generic specification (given as a set of
          | C++20 concepts). This way the algorithms will still be
          | useful in 10 years when everyone has moved to API N+1,
          | unlike a metric ton of existing audio software which depends
          | on a specific audio / media-object API for no good reason
          | (no good reason today, that is! When it was written, C++
          | wasn't advanced enough to allow this at all)
         | 
         | - all the objects in https://github.com/pcastine-lp/LitterPower
         | for instance
         | 
         | - all the algorithms in
         | https://github.com/MTG/essentia/tree/master/src/algorithms
         | 
         | - ditto for
         | https://github.com/cycfi/q/tree/master/q_lib/include/q
         | 
         | - ditto for all the VCV algorithms:
         | https://github.com/VCVRack/Fundamental/tree/v1/src
         | 
         | - ditto for all the Bespoke objects:
         | https://github.com/BespokeSynth/BespokeSynth/tree/main/Sourc...
         | 
         | - ditto for all the LMMS plugins:
         | https://github.com/LMMS/lmms/tree/master/plugins
         | 
          | etc. etc., there are a couple hundred of those, which always
          | depend on some API and are thus not easily portable across
          | environments: if tomorrow you want to make a piece of audio
          | software and want to use one of, say, the VCVRack plug-ins,
          | you're going to have to bring the whole VCV run-time API
          | along.
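          | 
          | To give an idea of what "no run-time API" means in practice,
          | a toy processor in that spirit could look like this (the
          | names here are purely illustrative, not necessarily the
          | library's actual vocabulary):
          | 
          |     // a gain node with no host-SDK includes at all: a
          |     // binding layer can discover the metadata and the
          |     // parameter and expose them to VST, Pd, Max, etc.
          |     struct gain_node {
          |       static consteval auto name() { return "Gain"; }
          | 
          |       // a parameter a backend could expose as a slider
          |       float gain{0.5f};
          | 
          |       // per-sample processing, callable from any backend
          |       float operator()(float in) const { return in * gain; }
          |     };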
         | 
          | Related projects are Faust (https://faust.grame.fr/) and SOUL
          | (https://juce.com/discover/stories/soul-first-public-beta-
          | rel...) but they are both domain-specific languages with
          | their own compilers. I wanted a pure-C++ thing instead,
          | which allows calling native code directly and enables more
          | than just generic audio processing: unlike Faust and SOUL
          | (last time I checked), it's possible to make a message-based
          | object for Pd or Max, not just an audio filter or
          | synthesizer.
         | 
          | My end goal for this is that when I make an object for the
          | main software I'm developing, ossia score (https://ossia.io),
          | the whole media arts community can benefit :-)
        
           | einpoklum wrote:
           | > Here the idea is to write the algorithms in a way that is
           | more future-proof
           | 
           | More baroque, for sure. More future-proof? Only if this is
           | the one true form of reflected code that gets adopted.
           | 
           | > This way the algorithms will still be useful in 10 years
           | when everyone has moved to API N+1
           | 
           | Actually, this way algorithms are not useful today since
           | nobody writes them this way, and in 10 years, whoever writes
           | this way right now will probably have moved on.
           | 
           | Now, if some small community (like C++-reflection-and-audio-
           | buffs) commits to this, then maybe it makes sense for them.
           | But not generally. General reflection (not to mention runtime
           | reflection) is probably the way to go.
        
             | jcelerier wrote:
             | > More baroque, for sure. More future-proof? Only if this
             | is the one true form of reflected code that gets adopted.
             | 
             | ah, actually not :) the library tries to do things in two
             | steps in many places:
             | 
              | 1/ map the user's code to a proxy depending on which
              | concepts it conforms to: for instance whether your audio
              | processing function is written per-sample (that's still
              | in progress tho):
              | 
              |     float operator()(float input) { return input * 0.5; }
              | 
              | or per-buffer:
              | 
              |     void operator()(float** inputs, float** outputs,
              |                     int frames) {
              |       // for each channel, for each sample: ...
              |     }
             | 
             | 2/ the back-end accesses the class through these proxies to
             | do whatever it needs to do.
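              | 
              | In today's C++20 that detection boils down to a pair of
              | concepts. A simplified sketch (the names here are mine,
              | not necessarily the library's exact spelling):
              | 
              |     #include <concepts>
              | 
              |     template <typename T>
              |     concept per_sample_processor =
              |         requires(T t, float s) {
              |           { t(s) } -> std::convertible_to<float>;
              |         };
              | 
              |     template <typename T>
              |     concept per_buffer_processor =
              |         requires(T t, float** i, float** o, int n) {
              |           t(i, o, n);
              |         };
              | 
              |     // the backend picks the calling convention at
              |     // compile time (simplified to a single channel)
              |     template <typename T>
              |     void run(T& node, float** in, float** out, int n) {
              |       if constexpr (per_buffer_processor<T>)
              |         node(in, out, n);
              |       else if constexpr (per_sample_processor<T>)
              |         for (int i = 0; i < n; ++i)
              |           out[0][i] = node(in[0][i]);
              |     }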
             | 
              | this means that if tomorrow we get reflection and
              | metaclasses (I hint at this in the readme), and we can
              | write the "ideal" struct, which (to me) would look like
              | 
              |     struct my_effect {
              |       [[name: "Main inputs"]]
              |       audio_input main_input;
              | 
              |       [[name: "Main outputs"]]
              |       audio_output main_input;
              | 
              |       [[range: {20, 20000}]]
              |       [[unit: hertz]]
              |       slider frequency;
              | 
              |       // will create an appropriate message in max / pd
              |       // with the name some_function
              |       void some_function(int,
              |         std::ranges::view<std::variant<float, int>>
              |           list_of_stuff);
              | 
              |       auto operator()(std::floating_point auto in) {
              |         return in * gain;
              |       }
              |     };
             | 
              | then the only thing one will have to do is map the new
              | "shape" into these proxies (which, as far as possible,
              | do their work at compile-time), and all the existing
              | backends will keep working with the algos defined in
              | the nicer way.
             | 
             | of course the existing algorithms will stay uglier, but
             | they'll also keep working with new environments (and, as
             | far as possible, without a runtime perf. hit)
             | 
             | > General reflection (not to mention runtime reflection) is
             | probably the way to go.
             | 
             | yes, we all want that !
        
               | tialaramex wrote:
               | I'm going to assume you wanted to declare an audio_output
               | named main_output here and not re-declare main_input
               | which presumably doesn't compile in this hypothetical
               | future C++ dialect.
               | 
                | In the audio space I can imagine Concepts' duck-typing
                | is probably fairly effective for this work; you're
                | rarely going to have something that looks like this
                | flavour of duck but is not, in fact, a duck. It has
                | been many years since I wrote LADSPA plugins, but
                | there I'd guess the only confusion might be that
                | there's deliberately no _API_ difference between PCM
                | data and CV, a reasonable strategy if you're building
                | a modular synth but probably not what you want to
                | listen to as output. Such differences are expressed
                | only in metadata.
               | 
               | On the other hand, the same thing that presumably made
               | this an attractively simple project for C++ 20 features
               | also makes its practical utility very limited. Over the
               | years what seems to matter to musicians is mostly the
               | packaging, not the algorithms. The same exact algorithm
               | exists for twenty years and then somebody famous calls it
               | "Wonky-smog-burble", paints it fluorescent purple and
               | presents the control parameter as an integer between 3
               | and 17 - suddenly it is this week's hot new sound. Which
               | is fine of course, but the API doesn't matter at all to
               | such fashions.
        
               | pjmlp wrote:
               | That "tomorrow" will be way beyond C++26, as per latest
               | news in feature adoption.
        
               | jcelerier wrote:
                | Yeah, I'm not holding my breath :( though in the
                | meantime it might start to make sense to target the
                | Circle compiler, which has all the reflection features
                | needed to make things nice
        
               | pjmlp wrote:
                | It would be ironic if Circle eventually got more
                | weight than ISO.
        
       | gavinray wrote:
       | Going to ask some really stupid questions, apologies in advance:
       | 
        | This is mindblowing work. I'm having some trouble wrapping my
        | head around all of it. Will lead with the context that:
        | 
        |   A) I am not very familiar with audio programming specifics,
        |      or DSP at all. I have done some things with VST2 + VST3,
        |      and looked into LV2.
        | 
        |   B) I have very little background with C++
       | 
        | I am curious:
        | 
        |   1. Is this targeted more towards the DSP and algorithm side
        |      of audio plugin development, or is there interest/use for
        |      the plugin side too (i.e., generating VST/LV2-compatible
        |      interfaces)?
        | 
        |   2. Would a byproduct of this be that you would be able to
        |      implement functionality + interact with C++ classes from
        |      other languages? Or generate C ABIs? I see the note:
        |      "Automatically generate Python bindings for a C++ class"
       | 
        | I ask about the second one because I have been trying to work
        | towards getting a C ABI for VST3 so that language bindings can
        | be codegen'd. I know VST3 works through COM so you can use
        | that, but the alternative is to emulate the vtable layouts in
        | C structs, which can be easier than implementing COM in
        | another language.
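        | 
        | (For the curious: "emulate the vtable layouts in C structs"
        | means roughly the pattern below. The layout is illustrative of
        | the general COM idiom, not copied from the actual VST3
        | headers.)
        | 
        |     #include <cstdint>
        | 
        |     // COM-style interface mirrored as a plain struct: the
        |     // first member plays the role of the C++ vptr and points
        |     // at a table of function pointers that each take the
        |     // object pointer explicitly.
        |     struct IUnknownVtbl {
        |       int32_t  (*queryInterface)(void* self, const char* iid,
        |                                  void** obj);
        |       uint32_t (*addRef)(void* self);
        |       uint32_t (*release)(void* self);
        |     };
        | 
        |     struct IUnknownC {
        |       const IUnknownVtbl* vtbl;
        |     };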
       | 
       | -----
       | 
       | Edit: I also wanted to say that when reading your blogpost, the
       | way you write C++ is very easy to understand. Not sure if it's a
       | stylistic thing or a purposeful choice. I usually have to spend a
       | second reading C++ code to comprehend it, but in your snippets, I
       | was able to (mostly) grok it first pass.
       | 
       | Same for the example code in the repo.
        
         | jcelerier wrote:
         | Thanks ! I don't think those questions are stupid at all, if
         | anything those are things I should put in the README !
         | 
         | > 1. Is this targeted more towards the DSP and algorithm side
         | of audio plugin development, or is there interest/use for the
         | plugin side too (IE, generating VST/LV2 compatible interfaces)?
         | 
          | it does generate something compatible with VST2.x, and I've
          | been working on VST3 this weekend (and pulling out a few
          | hairs; that API is horrendous, the smallest example is a
          | few thousand lines of code).
         | 
          | For LV2 I don't think it will be directly possible, as LV2
          | requires separate .ttl files which describe the plug-in.
          | Technically one could make a small program which, from the
          | C++ reflection, outputs the .ttl on stdout, and kindly ask
          | CMake to call it at build time to generate the .ttl file.
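          | 
          | A rough sketch of that idea (the plug-in URI and port are
          | hard-coded placeholders here; a real generator would iterate
          | the reflected inputs/outputs and use the full LV2
          | vocabulary):
          | 
          |     #include <cstdio>
          | 
          |     // tiny program whose stdout CMake can redirect into a
          |     // .ttl file at build time
          |     int main() {
          |       std::printf(
          |         "@prefix lv2: <http://lv2plug.in/ns/lv2core#> .\n\n"
          |         "<urn:example:my-effect>\n"
          |         "  a lv2:Plugin ;\n"
          |         "  lv2:port [\n"
          |         "    a lv2:AudioPort , lv2:InputPort ;\n"
          |         "    lv2:index 0 ;\n"
          |         "    lv2:symbol \"in\" ;\n"
          |         "    lv2:name \"Main input\"\n"
          |         "  ] .\n");
          |     }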
         | 
         | > 2. Would a byproduct of this be that you would be able to
         | implement functionality + interact with C++ classes from other
         | languages?
         | 
          | yes, that's a goal: make your DSP processor in C++ and then
          | write a small UI for it in Python or QML without having to
          | write binding code.
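          | 
          | For a sense of what that saves, this is roughly the glue one
          | writes by hand today with pybind11 for a toy processor
          | (illustrative; not something the library generates
          | verbatim):
          | 
          |     #include <pybind11/pybind11.h>
          |     namespace py = pybind11;
          | 
          |     struct gain_node {
          |       float gain{0.5f};
          |       float operator()(float in) const { return in * gain; }
          |     };
          | 
          |     // the hand-written binding code that reflection should
          |     // make unnecessary
          |     PYBIND11_MODULE(gain_node_py, m) {
          |       py::class_<gain_node>(m, "GainNode")
          |           .def(py::init<>())
          |           .def_readwrite("gain", &gain_node::gain)
          |           .def("__call__", &gain_node::operator());
          |     }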
         | 
          | I've tried to find ways to auto-generate a C ABI but sadly:
          | 
          | - asm("...") only accepts string literals; strings generated
          | through constexpr don't cut it (otherwise it'd be possible
          | and even relatively easy to generate e.g.
          | 
          |     asm(R"(
          |       .global foo_set_x
          |       .text
          |       foo_set_x:
          |         jmp _ZN3foo5set_xEi
          |     )");
          | 
          | to wrap for instance a foo::set_x(int) method).
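          | 
          | i.e. even with a (hypothetical) constexpr helper that builds
          | the thunk text, the asm declaration rejects it:
          | 
          |     // hypothetical: returns the thunk text at compile time
          |     constexpr const char* make_thunk_asm() {
          |       return ".global foo_set_x\n"
          |              "foo_set_x: jmp _ZN3foo5set_xEi\n";
          |     }
          | 
          |     // does not compile: asm requires a string literal
          |     asm(make_thunk_asm());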
         | 
          | - I tried writing the binary code directly and appending it
          | to the ELF sections, but it seems that there's no easy way
          | to append to .symtab / .strtab (where the names of the
          | symbols live) without using a linker script.
        
           | drmr wrote:
           | Re 1: Why not create a DPF wrapper for this and have DPF
           | create the ladspa/dssi/vst2/vst3/lv2 for you? ->
           | https://github.com/DISTRHO/DPF
        
             | jcelerier wrote:
             | I had looked into it ! but it seemed that making a DPF
             | plug-in involved a lot of boilerplate, while I really
             | wanted to do something where I can just include a couple
             | headers and get going, without any particular compilation
             | hassle (the whole library is header-only as it is pretty
             | much entirely templates).
        
           | gavinray wrote:
           | > "it does generate something compatible with VST2.x and I've
           | been working on VST3 this week-end (and pulling a few hairs,
           | that API is horrendous, the smallest example is a few
           | thousand lines of codes)."
           | 
           | Oh that is awesome!!!
           | 
            | If you want a tip about the smallest possible VST3
            | implementation (and maybe you already know this): there is
            | a class called "SingleComponentEffect", and an example
            | called "AGainSimple" that uses it and is a fully self-
            | contained, single-file VST3 plugin:
           | 
           | https://github.com/steinbergmedia/vst3_public_sdk/search?q=s.
           | ..
           | 
            | Also, I spent some time trying to get VST3 SDK usable with
            | vcpkg so you could just do:
            | 
            |     // vcpkg.json
            |     {
            |       "name": "my-audio-lib",
            |       "dependencies": [
            |         {
            |           "name": "vst3sdk",
            |           "version>=": "3.7.3"
            |         }
            |       ]
            |     }
           | 
           | And your CMake would just install it for you:
           | 
           | https://github.com/microsoft/vcpkg/issues/5660#issuecomment-.
           | ..
           | 
           | It ended up requiring more changes than this, but in the end
           | I wound up fixing the CMake build so it spit out both the
           | VST3 SDK (helper classes) shared lib and the almost-header-
           | only lib ("pluginterfaces" directory).
           | 
            | I ought to upload the patches somewhere in case it's
            | useful for you/anyone else.
            | 
            | > "yes, that's a goal: make your DSP processor in C++ and
            | then write a small UI for it in Python or QML without
            | having to write binding code."
           | 
            | That would be incredible! I think the industry focus on
            | doing UI in C++ is a bit nutty, as an outsider (but what
            | do I know).
            | 
            | > "- asm("...") only accepts string literals; strings
            | generated through constexpr don't cut it (otherwise it'd
            | be possible and even relatively easy to generate e.g."
           | 
           | Waaay over my head!!
        
             | gavinray wrote:
             | Alright, I tried to push my fixed version of the VST3 SDK
             | to Github but because it uses a ton of Git Submodules it
             | didn't quite work.
             | 
              | What I did instead was:
              | 
              |     git diff > vst3sdk-no-vstgui-cmake-compatible.patch
              | 
              | which gives a patch with the changes (EXCEPT for the
              | file "Config.cmake.in")
             | 
             | https://github.com/GavinRay97/vst3sdk/blob/master/vst3sdk-
             | no...
             | 
             | I also just said fuck it and zipped up the whole project,
             | you can download it here:
             | 
             | https://github.com/GavinRay97/vst3sdk/blob/master/vst3sdk.z
             | i...
             | 
              | With this fixed version, using the VST3 SDK becomes much,
              | much easier. All you have to do is add it as a vendored
              | subproject in CMake:
              | 
              |     project(VST3Example CXX)
              |     add_subdirectory(vendor/vst3sdk)
              | 
              |     add_library(VST3Example MODULE src/main.cpp)
              |     target_compile_features(VST3Example PRIVATE cxx_std_20)
              |     target_include_directories(VST3Example PRIVATE
              |                                vendor/vst3sdk)
              | 
              |     # This links the SDK if you want to use the
              |     # "SingleComponentEffect"
              |     target_link_libraries(VST3Example PUBLIC sdk)
              | 
              | Where the project looks something like:
              | 
              |     $ tree -L 2
              |     .
              |     +-- CMakeLists.txt
              |     +-- src
              |     |   +-- main.cpp
              |     +-- vendor
              |         +-- vst3sdk
             | 
              | And then here's your VST3 in "main.cpp":
              | 
              |     #include "public.sdk/source/main/pluginfactory.h"
              |     #include "public.sdk/source/vst/vstsinglecomponenteffect.h"
              | 
              |     using namespace Steinberg;
              |     using namespace Steinberg::Vst;
              | 
              |     class MyVST : public SingleComponentEffect {};
              | 
              |     __declspec(dllexport) IPluginFactory* GetPluginFactory() {
              |       // a real plug-in would build and return its
              |       // factory here
              |       return nullptr;
              |     }
             | 
             | Hope this helps! I am relatively clueless about C++ though!
        
           | gpderetta wrote:
            | Using GCC extended asm you can pass literal constants to
            | the asm and they will be expanded textually (or at least
            | their address will). I don't think the details are fully
            | documented anywhere and I had to use Intel syntax to make
            | it work, but it might be possible even with AT&T syntax.
           | 
            | Take a look at this[1] for example. See how the trampoline,
            | the destructor and the size are passed in with the 'i'
            | constraint and are referred to by value with the %cX
            | operand modifier (yes, the code is write-only, and even
            | with a lot of comments I have only the vaguest idea of
            | what I was trying to do here).
           | 
            | Probably more work is required for PIC, though.
           | 
           | [1] https://github.com/gpderetta/delimited/blob/7e755d643ee45
           | 897...
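            | 
            | A minimal illustration of the trick, assuming a non-PIC
            | build so the symbol address counts as a link-time constant
            | (this is a toy that only emits assembler comments, not the
            | linked code):
            | 
            |     struct foo { int x[4]; };
            | 
            |     void demo() {
            |       // "i" constraints pass compile-time constants into
            |       // the template; %c0 / %c1 expand to the bare value
            |       // / symbol name without the immediate prefix, so
            |       // they can be spliced into arbitrary assembler text
            |       asm volatile("# sizeof(foo) = %c0\n\t"
            |                    "# address of demo = %c1"
            |                    :
            |                    : "i"(sizeof(foo)), "i"(&demo));
            |     }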
        
       | Negitivefrags wrote:
       | There are examples here of making objects that will be reflected,
       | but no examples of what it's like to consume those objects.
       | 
       | It would be nice if there was an example of that too.
        
       | [deleted]
        
       ___________________________________________________________________
       (page generated 2021-10-31 23:00 UTC)