Reduce the pause during live-coding reloads

  1. Open any composition that outputs a continuous audio stream or displays a continuous animation (e.g. Pan Audio or Bend Sphere example).
  2. Run the composition.
  3. Edit the composition in a way that causes a live-coding reload (e.g. add a node, resize a drawer).

There’s a slight pause in the composition’s output. It’s more noticeable with audio.

The pause happens at the moment when the composition’s dylib is swapped out. Is there a way to (a) reduce the time this takes so that it’s no longer perceptible or (b) output the graphics/sound in some way that covers up the discontinuity?
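For option (b), one conceivable way to cover up the discontinuity would be a short crossfade between the old composition's final audio buffer and the new composition's first buffer. A minimal sketch (the function name and buffer handling are my own assumptions, not Vuo API):

```c
#include <stddef.h>

/* Hypothetical helper: linearly crossfade from the old composition's
   final buffer to the new composition's first buffer over n samples,
   masking the click introduced by the dylib swap. */
static void crossfade(const float *oldBuf, const float *newBuf,
                      float *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
    {
        float t = (float)i / (float)(n - 1);   /* ramp 0 -> 1 */
        out[i] = oldBuf[i] * (1.f - t) + newBuf[i] * t;
    }
}
```

This only helps if both the old and new dylibs can render at least one buffer during the swap, which may not hold in the current reload sequence.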

From my limited understanding, @jstrecker, since audio is so different from video (video buffers tolerate drop-outs; audio buffers don't), maybe audio should run as a separate process, similar to how SuperCollider works with its server-and-client architecture. We would then only ever 'tell' the audio engine what to do, not actually instigate it. This may mean that audio nodes function differently from video, logic, or maths nodes (at least at a base level).

I’ve opened this feature request for community voting.

Next step: analyze the time taken in each stage of a live-coding reload (pause, serialize, dlclose, dlopen, unserialize, unpause), and see what we can do to reduce each of those times. That would globally improve live-coding responsiveness. Maybe we can reduce it to the point where it doesn't interrupt typical audio event streams.

The complete audio rewrite proposed in Audio Objects port type and renderer could eliminate the audio dropout during live-coding reloads — but even then, control signals would still briefly pause, potentially causing audio hiccups.

@smokris, I did fit two requests in one, sorry about that. This is only because I don't know the best way for Vuo to make this happen. My first suggestion is not a complete rewrite: I think it would be good to keep using the audio-samples port type in tandem with any audio-object-style nodes.

I think of it as similar to the distinction between rendering to an image and rendering to a window. Sometimes you need the ability to render out the buffer.

In regard to stopping any glitch, there is really only one solution: have audio run as a separate process that can accept different commands without a reload. So yes, this "server and client" model is a complete rewrite, but audio-objects simply adds to the current system and could most probably work very well.
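To make the server-and-client idea concrete, here is a sketch of what a control message might look like: the client posts a small parameter update, and the audio server applies it between buffers, so the stream never stalls during a composition reload. All the type and function names here are hypothetical, loosely in the spirit of SuperCollider's `/n_set` messages:

```c
#include <string.h>

/* Hypothetical control message from the client (the editor/composition)
   to a persistent audio-server process. */
typedef struct
{
    char  target[32];   /* e.g. "osc1" */
    char  param[32];    /* e.g. "frequency" */
    float value;
} AudioControlMessage;

/* Hypothetical server-side synthesis state. */
typedef struct
{
    float frequency;
    float amplitude;
} OscillatorState;

/* Server side: apply a message to the running oscillator state between
   audio buffers, without any dylib reload. */
static void applyControlMessage(OscillatorState *osc,
                                const AudioControlMessage *msg)
{
    if (strcmp(msg->param, "frequency") == 0)
        osc->frequency = msg->value;
    else if (strcmp(msg->param, "amplitude") == 0)
        osc->amplitude = msg->value;
}
```

In a real design the messages would travel over a socket or pipe (SuperCollider uses OSC over UDP/TCP), but the key point is that the audio thread only ever consumes small state updates, never a reload.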

Looking at other successful audio environments, SuperCollider and ChucK are great at accepting realtime "control events" to change what they're doing. Obviously, Vuo's use of LLVM, and its unique ability to build itself anew each run, introduces unique challenges for on-the-fly work.