Audio Implementation Improvement

Vuo is fantastic as a canvas for generating great audio/visual ideas. This is a generic feature request to improve audio.

Audio doesn't seem to be as developed in Vuo as visuals (for good reason); however, I would love to use Vuo for audio synthesis, like I would Pd or Max/MSP, or even sequencers.

Glitches occur when using audio and expanding lists. Also, there are a few strange things that can happen with audio and event timing as well.

I would love to be able to use Vuo for audio like I would Pd, and given Vuo's power I think it is within reach.


This is a great article on best real-time audio practices. I know you guys know more about this area than I do, but audio (as described in this article) has to break some software rules in order to work well. PS: it's very technical!

http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing

Hi, @alexmitchellmus. This is a bit broad for a feature request, so I converted it to a discussion in hopes of coming up with some more focused feature requests / bug reports. For people voting on features, this would give them a better idea of what exactly they’re voting for.

Glitches occur when using audio and expanding lists.

I created a bug report: Composition momentarily pauses during live-coding reload.

Also, there are a few strange things that can happen with audio and event timing as well.

Anything besides making MIDI (or event) buffering sample-accurate?

This is a generic feature request to improve audio.

Besides the nodes you’ve already submitted feature requests & code for (thanks!), are there others you hope to see? Or any UI improvements that would make it easier / more efficient to work with audio in Vuo Editor?

I just want to chime in and say that while I do appreciate some audio input and processing through Vuo, this is an absolutely enormous field to start working on, and it would require a different/larger team to actually implement, possibly/probably taking some focus away from the visual stuff. I actually find the audio part of Vuo to be very satisfying just as it is. The reason for that, however, is that I also use a different node-based application to do audio: NI's Reaktor, along with the excellent JackOSX. If you want to dive down the rabbit hole, Reaktor is incredibly versatile and should offer you about everything you can possibly think of in terms of audio development in a node-based environment (including some concepts hard to find/code elsewhere). You can make an oscillator-less filter-feedback synth in the time it takes to brew your coffee.

The key thing, though, is that they can talk to each other through different means. The first is the obvious MIDI/OSC abilities of both applications, which are quite simple in their execution. In addition, if you work with audio on a Mac, you should really take a look at JackOSX (jackosx.org). As a virtual interface, this lets you have as many audio channels as you want/your computer can handle. With that, you can send all channels (even processed frequency ranges) from any audio software on a Mac to Vuo for further processing into visuals! It's even pretty stable!

As an example: for my band, we have three Macs, each with its own synths and software running. The audio goes from two of them into the main one via regular audio cables, and internal inputs and software go via Jack through Reaktor for mixing and effects. From Reaktor it goes out via Jack, through Minihost Modular (Image-Line's tool for just running AUs/VSTs), where we do sidechain compression (we love cheese) or other processing on the summed output, before it goes back into Reaktor for main volume adjustment, and then out to the main (or any other) output we want. The total number of audio channels or lines is insane for the initial input count, but picking out a channel and/or frequency range in Reaktor from any input/stage in the chain and sending it to Vuo, either as a control signal or audio, is trivial. I wouldn't recommend doing both visuals and audio on the same computer, though, apart from basic playback and simple visuals (also a reason not to mix them). Jack can also work over a network, but I haven't tested the stability, so I'm not sure I can recommend it. There are other options for that, though.

I completely agree, @MartinusMagneson, that other software can talk to other software; however, as audio has already been implemented, there is obviously interest from the development team.

I think that Vuo is much more than just a graphics package. It's a complete media package, using state-of-the-art design concepts.

Personally I am not worried about timeframes, but it's important to progress towards the goal correctly. I remember when Vuo didn't even have graphics output, only console.

So it's about generative development: moving steadily towards a great product, and hopefully the journey is never over, as there will always be improvements! (Which is a good thing!) ;-)

Also, I just remembered that the audio library Vuo uses (Gamma) is quite fantastic! ;-)

Oh, yeah, don't get me wrong, I wouldn't start crying if there were even more audio support :)! I just want a rock-solid (preferably a hard-rock) core of features before getting distracted by/implementing all the fun possibilities. I just shared some nodes in the composition gallery that were meant for the discussion page relating to this, as I think the potential for doing a lot of the things in the feature request list is already present (although I have voted for some of them).

If I again use Reaktor as an example, you can build your own oscillators from simple math combined with read/write operations to memory at the sample-rate clock. So for me, that means the implementation of math, the sample-rate clock, and memory operations are a lot more important than nodes for doing specific audio-related tasks.
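To make that concrete, here is a minimal sketch in plain C of how an oscillator reduces to exactly those primitives: a block of memory, a read position, and a little per-sample arithmetic driven by the sample-rate clock. (The names here, like WavetableOsc and osc_tick, are hypothetical, not from any existing API.)

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TABLE_SIZE  512
#define SAMPLE_RATE 44100.0

/* A wavetable oscillator reduced to primitives: memory plus math. */
typedef struct {
    double table[TABLE_SIZE]; /* memory, written once at init */
    double phase;             /* current read position, in table samples */
    double increment;         /* how far to advance per output sample */
} WavetableOsc;

void osc_init(WavetableOsc *osc, double freq)
{
    for (int i = 0; i < TABLE_SIZE; ++i)   /* memory writes */
        osc->table[i] = sin(2.0 * M_PI * i / TABLE_SIZE);
    osc->phase = 0.0;
    osc->increment = freq * TABLE_SIZE / SAMPLE_RATE;
}

/* One output sample: two memory reads, a multiply-add, and a wrap. */
double osc_tick(WavetableOsc *osc)
{
    int    i0   = (int)osc->phase;
    int    i1   = (i0 + 1) % TABLE_SIZE;
    double frac = osc->phase - i0;
    double out  = osc->table[i0] + frac * (osc->table[i1] - osc->table[i0]);

    osc->phase += osc->increment;          /* advance the clock */
    if (osc->phase >= TABLE_SIZE)
        osc->phase -= TABLE_SIZE;
    return out;
}

int main(void)
{
    WavetableOsc osc;
    osc_init(&osc, 440.0);                 /* A4 */
    for (int n = 0; n < 8; ++n)            /* print a few samples */
        printf("%f\n", osc_tick(&osc));
    return 0;
}
```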

@MartinusMagneson I built a wavetable oscillator as a Vuo node (and still have to upload my updates), so I have a bit of an idea as to how Vuo goes about audio.

I know this may sound crazy, but I am more interested in having better audio support in the API than as nodes. This is because once it's in the API, anyone can easily add whatever they desire.

Currently Gamma is amazing and gives so many possibilities. However, the way I see it, we still have to render each audio node out into a sample buffer. This would be like re-rendering a 3D model each time you added a different effect. (That actually sounds impossible, but go with the analogy.)

Pure Data, Csound, SuperCollider, and Reaktor (I don't know about Max) all use a DSP tree. What this means is that each node is able to "add" a DSP element to the tree (without rendering the full buffer); the engine then works out what can be rendered together and what needs to be rendered out before rendering everything together. I would much rather have a fully working DSP tree than any new audio nodes. Also, I really enjoy learning audio DSP in C as opposed to C++ (call me a troglodyte); there is something about working "directly" with DSP that is quite exciting.
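To illustrate, here is a heavily simplified sketch of that sorted-chain idea in C. Everything in it (dsp_chain_add, PerformFn, the toy units) is hypothetical, not Vuo's or anyone else's actual API: when the graph is compiled, each node appends a perform routine to the chain, and the engine then runs the whole chain over one shared block, instead of each node rendering and handing off its own finished buffer:

```c
#define BLOCK_SIZE 64
#define MAX_UNITS  32

/* Each unit contributes a "perform" routine to the chain instead of
   rendering and handing off a finished buffer of its own. */
typedef void (*PerformFn)(float *buf, int n, void *state);

typedef struct {
    PerformFn fn;
    void     *state;
} DspUnit;

static DspUnit chain[MAX_UNITS];
static int     chain_len = 0;

/* Called by each node when the graph is (re)compiled -- the analogue
   of Pd's dsp_add(). */
void dsp_chain_add(PerformFn fn, void *state)
{
    if (chain_len < MAX_UNITS)
        chain[chain_len++] = (DspUnit){ fn, state };
}

/* The engine runs the sorted chain once per block; every unit
   processes the shared buffer in place. */
void dsp_chain_run(float *block)
{
    for (int i = 0; i < chain_len; ++i)
        chain[i].fn(block, BLOCK_SIZE, chain[i].state);
}

/* Two toy units: a naive sawtooth source and a gain stage. */
static void saw_perform(float *buf, int n, void *state)
{
    float *phase = state;
    for (int i = 0; i < n; ++i) {
        buf[i] = *phase;
        *phase += 0.01f;
        if (*phase > 1.0f)
            *phase -= 2.0f;
    }
}

static void gain_perform(float *buf, int n, void *state)
{
    float g = *(float *)state;
    for (int i = 0; i < n; ++i)
        buf[i] *= g;
}

int main(void)
{
    static float phase = 0.0f, gain = 0.5f;
    float block[BLOCK_SIZE];

    /* "Compiling" the graph: each node appends its perform routine. */
    dsp_chain_add(saw_perform, &phase);
    dsp_chain_add(gain_perform, &gain);

    dsp_chain_run(block);   /* one pass renders the whole chain */
    return 0;
}
```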

Also, such low-level nodes would be more educational for users in the future, as they could see clearly how audio is put together in the source code, as opposed to a C++ library that just calls lots of functions.

If we can have a fast DSP tree, then this will be the first step towards a professional audio (aka Reaktor-style) patching environment with video. Very exciting.

(Pure Data has the dsp_add() function, but for the life of me I can't find the actual function code in the source!)
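For what it's worth, in Pd the function is spelled dsp_add(); it's declared in m_pd.h, and if memory serves its definition lives in d_ugen.c. Here's a hedged sketch of the standard tilde-object idiom from externals (class-setup boilerplate omitted), showing how an object appends its perform routine to the chain rather than rendering anything itself:

```c
#include "m_pd.h"   /* Pd API: t_int, t_sample, dsp_add(), t_signal, ... */

typedef struct _gain_tilde {
    t_object x_obj;
    t_float  x_gain;
    t_float  x_f;    /* dummy float for the main signal inlet */
} t_gain_tilde;

/* The perform routine: its arguments arrive packed in w[], and it
   returns a pointer just past them so Pd can call the next routine
   in the DSP chain. */
static t_int *gain_tilde_perform(t_int *w)
{
    t_gain_tilde *x   = (t_gain_tilde *)(w[1]);
    t_sample     *in  = (t_sample *)(w[2]);
    t_sample     *out = (t_sample *)(w[3]);
    int           n   = (int)(w[4]);

    while (n--)
        *out++ = *in++ * x->x_gain;
    return w + 5;
}

/* Called whenever Pd recompiles the DSP graph: the object appends
   itself to the chain via dsp_add(). */
static void gain_tilde_dsp(t_gain_tilde *x, t_signal **sp)
{
    dsp_add(gain_tilde_perform, 4,
            x, sp[0]->s_vec, sp[1]->s_vec, sp[0]->s_n);
}
```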

I would suggest a re-evaluation of Vuo audio, to make sure Vuo is making use of best-in-class practices. I also don't necessarily believe an external library will save the day, but rather an in-house Vuo solution. Everything visual so far has been amazing, so I can't wait for super-high-end audio as well!


Oh, sometimes I start a sentence mid-thought-chain without the proper preface. Maybe it's this troglodyte thing going on here as well, haha.

In short, I'd rather have the API as nodes with direct access to base functions, so that you could noodle the DSP yourself.

Not so short: I believe we eventually will get some function to open nodes within compositions, instead of opening them up in a new window and reloading whatever we worked on before. Manageable/easy nesting, in other words. So instead of going to Qt, typing in your code, compiling it for use in Vuo, checking whether it works, and perhaps going back to Qt to fix stuff, you could make it directly in Vuo as nested nodes. This means (in my head at least) that you could do DSP inside Vuo if there were more low-level/direct-access nodes. If someone wondered how you did something, they could just pop the hood and look at it, or change it to get a better understanding. If they were totally uninterested in anything but the result, it would just be another node.

For me (which is all I can speak for), use of Vuo and other node-based environments is a bit more than a quick way to hammer out results. I can do very, very basic stuff with code, but I tend to lose track and oversight. Looking at something made in a node environment makes sense to me because I can see the connections. I would, for example, love to be able to pull out the pixel array for images directly in Vuo to make filters, or pull out events/data. I've tried getting into those things in code, and have a somewhat basic understanding of what's going on conceptually, but when I start to try it out, I fall off quickly as it turns into a mush of words and symbols, and I don't see the result before it's coded. I absolutely love opening up Vuo without having any idea of what's going to be made, and then just trying different things to see what's possible (code wandering?). When typing in code, I feel there is an inherent need for a purpose to what you're about to do.

Going all-node, as far as possible, also has its advantages in that it won't break on updates, doesn't rely on external libraries, and an eventual port to a different OS wouldn't (ideally) matter. The downside is that the functions have to be made in Vuo, so libraries can't be ported; they have to be remade with nodes. This does, however, require a lot more from the backend, and even more basic functionality, which is what I want instead of dedicated nodes for audio (at this time).

I believe we eventually will get some function to open nodes within compositions, instead of opening them up in a new window and reloading whatever we worked on before.

Yes! Feature request: Edit node C code in the Vuo Editor and have it do all the Qt wrangling for the node.

Yes, that's a great request! I hope that feature request isn't too complex; Qt gives debugging of some sort (I wish it knew the Vuo functions as well).

I have been working on a few shaders to learn GLSL, and ShaderToy's error dialogues are very, very helpful. Unfortunately, Vuo GLSL errors just come up as a compile error in Console (which I know has nothing to do with Vuo; GLSL is a bit of a dark art).

So yes, the point I am trying to make is that error messages are very helpful for learning, and also when you are looking at a screen at 2 am and have forgotten a bracket somewhere.

This is an excerpt from Designing Audio Objects for Max/MSP and Pd, pp. 315–316, by Eric Lyon:


This gives a very brief overview of using a DSP tree implementation to process audio objects, as opposed to rendering out each DSP buffer to pass to another node.

Obviously posting this here for educational use only.

[Screenshot of the book excerpt]