Currently Vuo has an audio-samples port type for audio. This port type is a list of samples that makes up the sample buffer.
This means every node that processes audio needs to read and re-render an audio buffer.
This feature request is for an audio object, similar to 3D objects. That is to say, there are no samples within the audio object, only audio DSP code that is “inserted” into the audio renderer, where there is a single audio buffer.
This would be similar to gen~ in Max/MSP, and other software that allows deep access to the audio buffer.
This would also allow a user to make an advanced synth or sound design generator with a combination of very simple nodes.
Also, for audio effects that need an audio buffer (for example, a delay effect), we could still use the output of the audio object’s renderer with other audio-buffer nodes, in a similar way to how rendering a layer as an image allows layers to be rasterized.
@alexmitchellmus, is the goal of this feature request to improve performance, to improve usability, or something else?
To improve the performance and quality of audio rendering. Currently it is my understanding that Vuo re-renders the audio buffer between each pair of audio nodes. This means that if you make really complex effects or instruments, you could end up rendering the buffer many, many times.
This feature request is for a node set that uses a DSP-tree-style setup consisting of one node (the renderer) that has either a direct audio-device output or an audio-samples output. The other nodes would insert DSP functions into the DSP-tree “renderer”, which would work out the most efficient way of rendering the signals together.
It is my understanding that Max/MSP and Pure Data both use this approach (also known as an audio graph).
If Vuo is already doing this, then don’t worry about this request, but as I understand audio in Vuo, this currently isn’t the method used.
In the “improve audio” discussion I have pasted info about the finer points of an audio graph implementation.
This feature request allows the development of such a feature without replacing the already-working audio implementation; in fact, it works alongside it.
@jstrecker is there any more info you need regarding this request?
if you make really complex effects or instruments you could end up rendering the buffer many many times.
I’d like us to do some performance testing to see how much difference this would make in Vuo. “Rendering” the audio buffer basically comes down to reading an input array, performing the audio calculations, and writing to an output array. Is reading and writing arrays (via the VuoList abstraction) the most expensive part of this operation? If so, can we fix that by improving the efficiency of VuoList? Etc. etc. In short, would the effort of batching audio operations pay off with noticeable improvements to audio quality and performance?
There is also the issue, audio-wise, that reloading the buffer for every audio operation adds ever more delay. Obviously this is negligible now; however, when you are building a synth with 20 DSP nodes (like in Max/MSP or Pd), this would translate to almost a quarter of a second with the current buffer size. I don’t know how making the list processing better could correct the delay issue, as each node currently needs to process each list in order, not at the same time.
I believe the solution Apple currently offers is the AUGraph (using Audio Units), though I could be incorrect about this.
What do you mean by “reloading the buffer”? How did you come up with the figure of 1/4 second?
Hi @jstrecker, it is my understanding that any audio-sample-based node needs to read the incoming JSON audio buffer (currently 512 samples) roughly 100 times a second to supply the DAC with 48,000 samples per second.
So each audio node loads the previous buffer from the preceding node. If we made a chain of audio nodes that did nothing more than read the old sample list into a new sample list and output it, we would incur a delay depending on how many times we did that.
My number comes from roughly 20 nodes in series doing nothing (although normally such nodes could do lots of maths: phasors, crossing counters, etc.) other than taking an audio-sample list as input and producing an audio-sample list as output, within a for() loop the same size as the audio buffer. (Which is why I keep calling it the audio buffer; my bad.)
So my loose maths comes from 20 audio nodes in series, each processing 512 samples, running at about 100 executions a second, which gives us 1/5 of a second of delay. (I said 1/4 because they don’t run at 100 but a bit less, and I didn’t want a crazy number as an example.)
Am I understanding correctly how audio lists are processed in Vuo? I’ll run some tests and check this myself as well.
We could test this with audio.mix: if we use one sample input, I think it still processes the audio, so in = out. Place 20 in a row, then connect the original audio to the last node and analyze the audio for any delay. I will check this out ASAP; I’m currently not at the office.
Wait, are you thinking that each audio node currently takes ~10 ms to execute? Vuo is faster than that!
See attached super quick test with 20 Adjust Loudness nodes in sequence. I opened the popover on Fire Periodically’s output port to see how many events are being dropped. When firing at 100 times per second (approximately the speed necessary to process audio in real time), it doesn’t drop any events. When firing at 1000 times per second (10x faster than necessary), it drops a few, but still close to 0%.
AudioTimeQuickTest.vuo (6.34 KB)
Hey @jstrecker, yeah, I know Vuo is fast; that’s why I love using it!
However, no matter how fast the audio is computed, if there is a series of nodes that each read and write samples, those sample buffers have to be synchronized. So I’m not worried about dropouts, just audio delay.
I am away from the office right now, but if you were to connect the original audio to the final output (bypassing the 20 nodes), would the audio be the same, or a delayed version of it?
I’ve opened this feature request for community voting. Once this gets voted up near the top, we’ll take some more time to investigate how much this can improve performance.