Optimizing for video

I am making a video experience that plays 2 videos in sync and allows the audience to control the effects applied to them.

I’m using OSC and iOS devices to control them.

My question is about ways to optimize performance.

I chose ProRes LT 720p for the video format. Is there any other codec I should consider? I don’t think the hard drive will be a bottleneck at that data rate, so I plan to run both videos from the same hard drive.

I want to have effects that can be toggled on and off. I’m concerned that for some in the audience, the instinct will be to turn everything on and max it out. I can plan for some of that with smart use of toggles (if one turns on, it turns another off), but my concern is the sync. Keeping the 2 videos in sync over ~45 minutes is critical.

I don’t have a specific question, but I’d love to hear any tips or experience anyone has for this case. Do we know which effects are more GPU-intensive? If I send the video stream with various effects into a “Select Input” node, for example Select Input 8, will 8 instances of my video be playing no matter which one is selected?

Effects are hard to guess the performance impact of, as it depends heavily on the shader code. Generally, convolution-based effects (blur, edge detection, etc.) are heavy because they do a lot of calculations per pixel based on the neighboring pixels. While a 3×3 px convolution matrix isn’t too heavy (8 calculations for the neighboring pixels), a 9×9 px convolution matrix is far more resource-intensive (80 calculations for the neighboring pixels); the cost grows with the square of the kernel size. This is in contrast with blending or color changes, which are just a couple of simple math operations on the pixel itself.
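To give a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python; the 720p frame size matches the format mentioned above, and the kernel sizes are just the illustrative 3×3 and 9×9 cases, not measurements of any particular Vuo node:

```python
# Rough per-frame cost of a k x k convolution at a given resolution.
# Each output pixel reads (k*k - 1) neighbours in addition to itself.
def convolution_ops_per_frame(width, height, kernel):
    neighbour_reads = kernel * kernel - 1   # 3x3 -> 8, 9x9 -> 80
    return width * height * neighbour_reads

for kernel in (3, 9):
    ops = convolution_ops_per_frame(1280, 720, kernel)   # one 720p frame
    print(f"{kernel}x{kernel} kernel: ~{ops / 1e6:.0f} million neighbour reads per frame")

# 3x3 kernel: ~7 million neighbour reads per frame
# 9x9 kernel: ~74 million neighbour reads per frame
```

Multiply that by two video streams and the frame rate, and it becomes clear why a stack of convolution effects adds up much faster than blends or color adjustments.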

If sync is of critical importance, there are a few options for keeping it that way over an extended period of time. One approach is to use a media server that is designed for the purpose (timeline based). This can get expensive fast, but check with your local rental company for solutions if you can go this route.

A second approach is to play the videos on a dedicated external device that can handle two simultaneous videos/outputs, then use a capture card with at least two inputs to process the live video inside Vuo. Note that you should not use two separate/different capture cards for this, as they can then have different latency on their inputs.

A third option is to render the two videos together as one wide image. Then you can crop the main video into two video streams in Vuo, ensuring it cannot go out of sync at the source. This may be preferred, as it is a relatively cheap way to ensure sync. It can of course be paired with the previous solution and one capture card (that has to support wide resolutions) to remove the overhead of video playback from the effect machine.
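To illustrate why splitting at the source cannot drift, here is a minimal sketch (plain NumPy, outside Vuo; the function name and dummy frame are just for illustration, and in a composition you would do the equivalent with an image-cropping node):

```python
import numpy as np

def split_wide_frame(frame):
    """Split one side-by-side frame (H x 2W x channels) into two W-wide frames.

    Both halves come from the same decoded frame, so they can never drift
    apart in time; only one wide file has to be decoded.
    """
    height, width, _ = frame.shape
    half = width // 2
    left = frame[:, :half]        # first video
    right = frame[:, half:]       # second video
    return left, right

# Example: a dummy side-by-side 720p frame (two 1280px halves).
wide = np.zeros((720, 2560, 3), dtype=np.uint8)
left, right = split_wide_frame(wide)
print(left.shape, right.shape)    # (720, 1280, 3) (720, 1280, 3)
```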

To make sure the effect chain doesn’t induce a timing mismatch between the sources, the solution is to run all the effects all the time. This ensures a constant load, but it needs good enough hardware to run effectively. To do so, you set up a blend image ladder where the blended output from one effect goes into the next, and the audience controls the blend value.
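Here is a minimal sketch of such a constant-load blend ladder, with placeholder effect functions and 0–1 mix values standing in for the audience’s OSC controls (none of these names come from Vuo; it is just the structure of the idea):

```python
import numpy as np

# Placeholder effects: every one of them runs on every frame, regardless of mix.
def darken(img):
    return img * 0.5

def invert(img):
    return 1.0 - img

def posterize(img):
    return np.round(img * 4) / 4

def blend(dry, wet, mix):
    """Crossfade between the untouched image (mix=0) and the effected image (mix=1)."""
    return (1.0 - mix) * dry + mix * wet

def process(frame, mixes):
    # The full chain is always evaluated, so the load stays constant;
    # the audience only changes how visible each effect is.
    out = frame
    for effect, mix in zip((darken, invert, posterize), mixes):
        out = blend(out, effect(out), mix)
    return out

frame = np.random.rand(720, 1280, 3).astype(np.float32)
result = process(frame, mixes=[0.0, 0.7, 1.0])   # e.g. three OSC faders
print(result.shape)
```

Because the work done per frame is identical whether an effect is “off” (mix 0) or fully on, toggling effects can’t change the render time and push the two streams out of step.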


You may also want to experiment with codecs. Hap may be a good one to use here.

https://hap.video/