[HN Gopher] WASM SYNTH, or, how music taught me the beauty of math
___________________________________________________________________
WASM SYNTH, or, how music taught me the beauty of math
Author : timdaub
Score : 154 points
Date : 2021-05-25 13:15 UTC (9 hours ago)
(HTM) web link (timdaub.github.io)
(TXT) w3m dump (timdaub.github.io)
| gregsadetsky wrote:
| @dang, I think that the link should be changed to not go to the
| footnotes. Thanks!
| tessierashpool wrote:
| endorse. following the link through is quite jarring.
|
| non-footnote version:
| https://timdaub.github.io/2020/02/19/wasm-synth/
| timdaub wrote:
| op here: Ah damn, I just noticed it too. I tried editing but I
| can't change the link.
| Anktionaer wrote:
| I had the opposite experience (based on the title): I read
| Goedel, Escher Bach because I thought programming and math were
| really interesting and it ended up getting me into music.
| [deleted]
| skybrian wrote:
| This is doing it the hard way if you just want to generate a sine
| wave, because WebAudio's OscillatorNode [1] will do it for you,
| no WebAssembly required. It likely works in more browsers too.
|
| You can also use setPeriodicWave() to play any periodic
| waveform you like, specified by its Fourier coefficients. There
| is enough in the JavaScript API to fool around with basic
| subtractive synthesis, where you connect an oscillator to a
| filter.
|
| I also recommend Syntorial [2] for understanding what subtractive
| synthesis is capable of. It won't help you synthesize real
| instruments, but you learn what the various knobs on synthesizers
| do by trying to reproduce increasingly sophisticated synthesizer
| sounds.
|
| Learning the basic math is useful too though.
|
| [1] https://developer.mozilla.org/en-
| US/docs/Web/API/OscillatorN... [2] https://www.syntorial.com/
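As a sketch of the setPeriodicWave() route: the coefficient helper below is plain JavaScript, while the commented-out wiring assumes a browser AudioContext (sawtoothImag is an illustrative name, not part of the Web Audio API):

```javascript
// Fourier sine coefficients of a (phase-inverted) sawtooth: imag[n] = 2 / (pi * n).
// Index 0 is DC and index 1 is the fundamental, as createPeriodicWave expects.
function sawtoothImag(nHarmonics) {
  const imag = new Float32Array(nHarmonics + 1);
  for (let n = 1; n <= nHarmonics; n++) {
    imag[n] = 2 / (Math.PI * n);
  }
  return imag;
}

// Browser-only wiring (sketch):
// const ctx = new AudioContext();
// const osc = ctx.createOscillator();
// const imag = sawtoothImag(32);
// const real = new Float32Array(imag.length); // all-zero cosine terms
// osc.setPeriodicWave(ctx.createPeriodicWave(real, imag));
// osc.connect(ctx.destination);
// osc.start();
```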
| Weebs wrote:
| Is there a way to supply a PeriodicWave a specific wavetable or
| series of floating-point samples to represent the waveform? It
| appears from the docs that PeriodicWave is a way to generate
| waveforms by shaping sine waves.
|
| One problem I've had in the past trying to do audio on the web
| is that I struggled to find any reference material on how to
| generate my own waveforms and rely on the browser APIs only for
| playback. This tutorial, with the AudioWorklet, is the first
| piece I've seen on how to do this easily.
| javajosh wrote:
| This probably isn't much help, but if you've got an arbitrary
| waveform you can do a Fourier transform on it to decompose it
| into a list of additive sine waves (each wave parameterized
| by amplitude and frequency, ignoring phase), and then play
| all those sine waves together to make your sound. I have no
| idea about the implementation details here (I can imagine
| that this might get computationally intensive!) but in theory
| this should work.
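The decompose-and-resum idea above can be sketched in plain JavaScript; the resulting buffer could then be handed to e.g. an AudioBuffer or AudioWorklet for playback (a hedged sketch, not the article's code):

```javascript
// Sum a list of { freq, amp } sine partials into a mono sample buffer,
// all starting at phase zero (phase ignored, as the comment suggests).
function renderPartials(partials, sampleRate, numSamples) {
  const out = new Float32Array(numSamples);
  for (const { freq, amp } of partials) {
    const w = (2 * Math.PI * freq) / sampleRate; // radians per sample
    for (let i = 0; i < numSamples; i++) {
      out[i] += amp * Math.sin(w * i);
    }
  }
  return out;
}
```

This is the computationally naive version (cost grows with the number of partials times the number of samples); an inverse FFT would be the usual optimization.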
| timdaub wrote:
| 100%. I looked past the built-in APIs because I wanted to learn
| audio programming autodidactically.
| Weebs wrote:
| I think you provided a great guide for people looking to do
| the same, I appreciate the write-up! Real-time audio is a
| tricky thing to get into, and even more so in the browser;
| starting with a sine wave here is a great launch point for
| more experiments.
| timdaub wrote:
| OP here. As I received some attention with my WASM SYNTH project
| in another thread [1], I thought I'd post it as a full post to
| HN again.
|
| - 1: https://news.ycombinator.com/item?id=27276180
| red_trumpet wrote:
| A small note on your LaTeX use: if you use \sin, the "sin" will
| be set upright (instead of in italics), which is the standard
| convention for math operators (others off the top of my head
| that work the same way: tan, cos, log, dim, deg).
|
| If you encounter an operator, where the corresponding command
| with \ does not exist, you can create it with
| \DeclareMathOperator{\operator}{operator}
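For example (a minimal sketch; assumes the amsmath package, which provides \DeclareMathOperator):

```latex
% Predefined operators are typeset upright:
%   $y(t) = A \sin(2\pi f t)$
% For an operator with no predefined command, declare your own
% in the preamble:
\DeclareMathOperator{\sinc}{sinc}
% Now $\sinc(x)$ typesets "sinc" upright, with operator spacing.
```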
| worldmerge wrote:
| That is so cool! Also your website is really nice! Is that an
| available theme or did you make it yourself? I ask because I'm
| remaking my website.
| timdaub wrote:
| https://github.com/probberechts/hexo-theme-cactus
|
| have fun!
| javajosh wrote:
| I didn't know about hexo - it's Jekyll for node.
|
| https://hexo.io/
| errozero wrote:
| I made a synth with web audio a few years ago using the built-in
| oscillator and filter nodes which are actually quite capable when
| combined.
|
| Synth: http://www.errozero.co.uk/stuff/poly/
| Source: https://github.com/errozero/poly-synth
|
| Works best in Chrome.
| capableweb wrote:
| Not sure what went wrong, and I'm not sure about the exact
| measurements, but your synth has huge input or output latency,
| maybe around 100ms or something like that? Other WASM/web
| synths I've tried don't suffer from the same problem.
|
| Otherwise it's a nifty little toy, thanks for sharing :) Lots
| of fun.
| apanloco wrote:
| i don't have this issue! awesome synth! and i really like the
| tracker-style keys ;D
| errozero wrote:
| Thanks! :) Yeah the keyboard layout is based on the tracker
| style. I spent a lot of time with FastTracker 2.
| delineator wrote:
| @errozero: I made a synth with web audio/midi based on your
| synth engine a few years ago:
|
| https://web-audio-midi.herokuapp.com/
|
| Adding a keyboard, scale selection, oscilloscope, score note
| viewer, sound font player, bluetooth/web midi controller
| detection, etc :)
|
| Did you decide on a license for your synth code?
| moron4hire wrote:
| Umh, why don't the F or H keys work?
| errozero wrote:
| Because of the mapping of the typing keyboard to musical
| notes. F would represent E# which doesn't exist on a piano.
| H should work though. I found a pic that explains the
| keyboard layout better than I could:
|
| https://sourceforge.net/p/vmpk/shared-keymaps-and-
| instrument...
| errozero wrote:
| Cool, I like the split keyboard feature. I've not really
| thought about a license tbh, I'll look into which one would
| be best and add a note about that. I don't really mind what
| anyone does with it so maybe MIT would be the one.
| delineator wrote:
| Cool. Great to meet you virtually, btw. Your web audio
| synth is awesome. Adding the oscilloscope made me
| appreciate how your different patches generate their unique
| sounds. And I added a few preset patches I liked too.
| LegionMammal978 wrote:
| In a default installation, as far as I am aware, Emscripten uses
| Clang as its compiler, while the code at the bottom implies it
| uses GCC. (To support existing build pipelines, it attempts to
| recognize arguments for either compiler.) Is this in error, or
| can Emscripten be configured to use GCC?
| timdaub wrote:
| OP here. Hey, if you have a source on your statement I'm happy
| to change the blog post towards what is correct.
|
| It may be that I assumed Emscripten uses gcc, but since it's
| been over a year since I wrote it, I'm not sure anymore, so I
| default to trusting what I wrote. Happy to change with a
| source.
| inamiyar wrote:
| https://emscripten.org/docs/tools_reference/emcc.html
|
| > To see the full list of Clang options supported on the
| version of Clang used by Emscripten, run clang --help.
|
| Could probably find a more authoritative statement but just a
| quick search.
| timdaub wrote:
| Thanks. Should be updated soon.
| LegionMammal978 wrote:
| It appears to be noted in the readme for the repo [0]:
| "Emscripten compiles C and C++ to WebAssembly using LLVM and
| Binaryen." A 32-bit Clang is present in my own emsdk
| installation.
|
| [0] https://github.com/emscripten-core/emscripten/
| timdaub wrote:
| Thanks. Should be updated soon.
| jacquesm wrote:
| If this stuff interests you have a look at Pianoteq. They take
| this to a whole new level, through physical modelling they get
| extremely close to being able to generate realistically sounding
| pianos.
| weinzierl wrote:
| I haven't used their product, but the examples of all the
| effects they model impressed me quite a bit.
|
| Unfortunately it is not possible to link to the examples page
| directly. To see what I mean, click on the _Fine details of
| sound_ tab here: https://www.modartt.com/pianoteq#acoustic
| spankalee wrote:
| Pianoteq is great, and they have some really classic keyboards
| in their collection.
|
| I really wish there were more open source physical modeling
| synths out there. I'd love to play with physical modeling code,
| but it's just not that common.
| timdaub wrote:
| Interesting one! For the time I played piano, I ended up buying
| myself 2 ADAM A7X monitors and played with Ableton's Grand
| Piano. I also downloaded another library of sampled grand
| piano sounds that was supposed to improve the sound quality.
|
| It, however, never ended up sounding quite as good as my
| brother's simple 300EUR Yamaha keyboard.
|
| Even after having trained my ears for many months, I still
| don't know why the piano sounds from my speakers just weren't
| as pleasing as what my bro's Yamaha produced.
| javajosh wrote:
| This problem is similar to the one faced by photographers.
| Why do in-camera shots look so much better in one camera vs
| another? A big reason these days is whether the camera's default
| software post-processing matches your taste. I bet with a
| little fiddling (particularly with things like EQ, compression,
| reverb, maybe resonance - all of which Live has in
| abundance!) you could find a great sound. It's just easier
| because Yamaha has great taste in sound. (One day I swear I
| will get a C5!)
|
| The other thing that affects your experience, and that
| doesn't have anything to do with timbre, is latency. Ableton
| through a USB audio interface into monitors is going to take
| (much) longer than the onboard sound generation + speakers on
| a digital piano. It's going to add _at least_ 20ms plus
| whatever time is required to compute the sound. Meanwhile any
| cheap digital piano is going to do better than that.
| Slow_Hand wrote:
| I agree that unchecked latency (no matter how imperceptible)
| can sour our perception of an instrument's playability. That
| said, latency of 3ms is very attainable
| so long as your interface isn't a decade old. That's a very
| usable speed.
|
| Software instruments themselves are usually about as quick as
| you'd ever need them to be (negligible added latency). So just
| make sure your
| audio buffer is set low (32 - 128 samples) and that you're
| not doing any heavy DSP processing that's going to add
| extra latency.
|
| It takes some vigilance to do and it can be a pain in the
| ass to manage when you're trying to play the instrument in
| a CPU intensive session, but if you do it right you'll only
| get latency at the output stage (3ms).
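The buffer-size arithmetic behind those numbers is simple (a one-line sketch):

```javascript
// Output latency contributed by one audio buffer, in milliseconds:
// the time it takes to play bufferSamples at the given sample rate.
function bufferLatencyMs(bufferSamples, sampleRate) {
  return (bufferSamples / sampleRate) * 1000;
}
// 128 samples at 44.1 kHz is about 2.9 ms; 32 samples at 48 kHz is under 1 ms.
```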
| jacquesm wrote:
| The modern Yamaha keyboards use samples from real pianos. In
| mine there is both a Yamaha and a Bosendorfer and both sound
| quite good. But being sample based they are essentially just
| playing back recorded sounds.
|
| Pianoteq _generates_ the sounds using nothing but a bit of
| software and it is really most impressive.
| TheOtherHobbes wrote:
| I've never been able to work out if Pianoteq uses physical
| modelling - modelling the strings and soundboard as the
| solutions of differential equations - or spectral modelling
| - overtone resynthesis, which is rooted in sampling but
| reassembles the harmonics in samples dynamically instead of
| playing back a fixed sample series at different rates.
|
| I suspect it's the latter, because there's a hint of detail
| missing in the way the overtones move.
| matheist wrote:
| The wikipedia article[1] says it's "Fourier construction"
| but without reference (that I can find) and without
| elaboration. At their website[2] they list some of their
| staff; I looked up research by one of their researchers
| and found a paper "Modeling and simulation of a grand
| piano"[3] which looks quite heavy on the physical
| modeling of strings and soundboard. I'd expect that to
| work better than spectral modeling because I think the
| latter would introduce (too much?) latency via needing to
| collect an entire spectral window (plus extra computation
| to compute the phases, and even then I don't think it
| could sound good enough?). Whereas physical modeling
| works directly in the time domain and there's a wealth of
| literature around it. See e.g. J. O. Smith III's
| waveguide synthesis work[4].
|
| [1] https://en.wikipedia.org/wiki/Pianoteq
|
| [2] https://www.modartt.com/modartt
|
| [3] https://hal.inria.fr/file/index/docid/768234/filename
| /RR-818...
|
| [4] https://ccrma.stanford.edu/~jos/swgt/
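As a toy illustration of time-domain string modelling (nothing like Pianoteq's actual model), here is a minimal Karplus-Strong plucked string, the simplest relative of the waveguide techniques in [4]:

```javascript
// Minimal Karplus-Strong: a delay line one period long is excited with
// noise, then each sample is replaced by the average of two adjacent
// samples -- a crude low-pass that makes the tone decay naturally.
function karplusStrong(freq, sampleRate, numSamples) {
  const period = Math.round(sampleRate / freq);
  // Deterministic Park-Miller generator instead of Math.random(),
  // purely so runs are reproducible.
  let seed = 1;
  const noise = () => {
    seed = (seed * 16807) % 2147483647;
    return (seed / 2147483647) * 2 - 1;
  };
  const delay = Float32Array.from({ length: period }, noise);
  const out = new Float32Array(numSamples);
  for (let i = 0; i < numSamples; i++) {
    const j = i % period;
    const y = 0.5 * (delay[j] + delay[(j + 1) % period]);
    out[i] = y;
    delay[j] = y; // feed the filtered sample back into the "string"
  }
  return out;
}
```

The high harmonics die first and the fundamental lingers, which is why even this ten-line model sounds string-like; full physical models add dispersion, coupling, and soundboard resonance on top.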
| psd1 wrote:
| Simple: good monitors aren't kind. Try hi-fi speakers that
| are more warm than detailed
| Dfiesl wrote:
| I guess the Yamaha has dedicated DSP; it might also have some
| analog circuitry in the signal path. Both of these will change
| the quality of the sound coming out, potentially for the
| better. Also, for some reason, downsampling to 12-bit sometimes
| gives a pleasing character, as can be heard on some of the
| hardware samplers from the 90s. As for the sounds actually on
| the keyboard, it's possible they were recorded from a different
| piano, with different mic placement and a different mic preamp
| than the Ableton sounds, all of which could potentially lead to
| a nicer sound being heard.
| alfl wrote:
| To answer the question in footnote 1, off the top of my head:
| setTimeout and setInterval have no timing guarantees because in
| the browser window/tab context there's no guarantee that they're
| active/visible. It doesn't make sense for a UI event to fire
| (their intended use) if the UI is hidden/inactive, and timing
| guarantees are therefore deferred to specific implementations.
|
| requestAnimationFrame does make those timing guarantees.
| PUSH_AX wrote:
| I thought they weren't accurate because the callback is only
| added to the task queue once the timer expires; this means you
| may need to wait for the call stack to clear before execution
| happens.
| mdtusz wrote:
| RAF is still imprecise for audio use. For audio in the browser,
| you usually want to defer all timing-critical events to a
| scheduler that uses the Web Audio context time or, even more
| ideally, a lookahead scheduler that schedules events by audio
| frame using an internal counter and the sample rate.
|
| A great programming exercise that forces you to wrap your head
| around JavaScript timing events is writing a small step
| sequencer that can be used while it is running with minimal
| latency.
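The lookahead pattern described above can be sketched by factoring the scheduling decision into a pure function; the browser wiring in the comments is illustrative (ctx, playNote, and bpm are assumed names):

```javascript
// Pure scheduling decision: given the audio clock "now", collect the note
// times that fall inside the lookahead window, advancing nextNoteTime by
// one step per note. Units are seconds (or any consistent time unit).
function dueNotes(now, nextNoteTime, lookahead, stepSeconds) {
  const times = [];
  let t = nextNoteTime;
  while (t < now + lookahead) {
    times.push(t);
    t += stepSeconds;
  }
  return { times, nextNoteTime: t };
}

// Browser wiring (sketch): poll every ~25 ms with setTimeout, but place
// each event on the sample-accurate audio clock, e.g.:
//   const r = dueNotes(ctx.currentTime, next, 0.1, 60 / bpm / 4);
//   next = r.nextNoteTime;
//   for (const t of r.times) playNote(t); // e.g. start a fresh
//   // OscillatorNode at t, or call gain.setValueAtTime(..., t)
```

Because the notes are stamped with audio-clock times, jitter in the setTimeout poll doesn't matter as long as each poll arrives within the lookahead window.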
| lioeters wrote:
| I found some reasons for the inaccuracy of setTimeout:
|
| From https://developer.mozilla.org/en-
| US/docs/Web/API/WindowOrWor...
|
| - Nested timeouts
|
| > As specified in the HTML standard, browsers will enforce a
| minimum timeout of 4 milliseconds once a nested call to
| setTimeout has been scheduled 5 times.
|
| - Timeouts in inactive tabs
|
| > To reduce the load (and associated battery usage) from
| background tabs, browsers will enforce a minimum timeout delay
| in inactive tabs. It may also be waived if a page is playing
| sound using a Web Audio API AudioContext.
|
| > The specifics of this are browser-dependent.
|
| - Throttling of tracking scripts
|
| > Firefox enforces additional throttling for scripts that it
| recognises as tracking scripts. When running in the foreground,
| the throttling minimum delay is still 4ms. In background tabs,
| however, the throttling minimum delay is 10,000 ms, or 10
| seconds.
|
| - Late timeouts
|
| > The timeout can also fire later than expected if the page (or
| the OS/browser) is busy with other tasks.
|
| - Deferral of timeouts during pageload
|
| > Firefox will defer firing setTimeout() timers while the
| current tab is loading.
| _han wrote:
| The link points to a footnote in the page. Can it be changed to
| the page itself? (Remove the trailing 3 characters from the URL.)
|
| EDIT: I don't think it was intended to refer to the footnote,
| because the title of the discussion refers to the title of the
| page itself.
| timdaub wrote:
| I think it was finally changed :)
___________________________________________________________________
(page generated 2021-05-25 23:01 UTC)