Is there a way to do offline rendering with Vuo (like Quartz Crystal) other than using the Image Generator protocol?

Offline rendering offers numerous advantages over realtime rendering. For one, a 4-hour movie made from a simple composition can be rendered in 10 minutes or less. For another, a movie at full resolution can use all the power and time it needs to render each frame, taking far longer per frame than realtime rendering allows.
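To make the point above concrete, here is a minimal sketch (plain Python, not Vuo's actual API) of what offline rendering decouples: each frame represents an exact timestamp in the finished movie, and the renderer can spend as long as it likes computing it before writing it out in order. The function names are illustrative only.

```python
# Sketch: offline rendering decouples per-frame computation time from
# playback time. Each frame stands for an exact moment t = frame / fps
# in the finished movie, regardless of how long it takes to compute.

def frame_timestamps(duration_s, fps):
    """Return the movie-time each frame represents."""
    total = int(duration_s * fps)
    return [f / fps for f in range(total)]

def render_frame(t):
    # Placeholder for arbitrarily expensive per-frame work; in an
    # Image Generator composition this corresponds to one event with
    # a published time input producing one output image.
    return {"time": t}

if __name__ == "__main__":
    ts = frame_timestamps(4 * 3600, 30)  # a 4-hour movie at 30 fps
    print(len(ts))                        # 432000 frames, rendered at any pace
```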

Edit> **Short answer: yes, but you need to convert the composition to an Image Generator using the Vuo Image Generator protocol.**
See the Vuo Manual: 10.1.2 Exporting a movie from an Image Generator composition.

I’m having problems converting my perfectly working composition to one that works when rendered as an Image Generator. Much of the static text layer content is blinking on and off like crazy, and even moving around. (See movie: graph animation as image.) There’s also a resolution issue. With the original comp, the text on screen was crisp whether I put the output window on my MBP Retina display or on a regular 96 ppi Dell. The window that the Image Generator spawns keeps the low screen resolution of the 96 DPI Dell when moved onto the Retina display, so text and lines go fuzzy as a result. Same for the rendered output.
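The fuzziness described above is the usual 1x-vs-2x backing-store mismatch: a window sized in points needs twice as many pixels on a Retina display, and if it keeps the 1x pixel count, text gets upscaled and blurred. A tiny sketch of the arithmetic (illustrative only, not Vuo code):

```python
# Sketch: why text goes fuzzy when a window keeps its 1x (96 DPI) pixel
# size on a 2x Retina display. Crisp text needs the full backing-store
# pixel count: points * scale in each dimension.

def backing_pixels(points_w, points_h, scale):
    """Pixel dimensions the backing store needs at a given display scale."""
    return (int(points_w * scale), int(points_h * scale))

print(backing_pixels(1280, 720, 1))  # (1280, 720) on a standard display
print(backing_pixels(1280, 720, 2))  # (2560, 1440) needed on Retina
```

The practical upshot for export is the same: render at the pixel resolution you actually want in the movie, not the on-screen point size.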

The advantage of Quartz Crystal, a separate rendering application, was that you could just publish the inputs you wanted editable in Quartz Crystal and that was it. Since Vuo has nodes for rendering to windows, it’s probably harder for a separate rendering app to know where to get the output from, but it could default to any renderer, and if there is only one, use that one.

What working methods are people using to get around these resolution issues and the changed composition behavior that results in buggy output?

Quartz Crystal was a lifesaver for me; I used it to death. Nothing I can find in Vuo can do that. Compiled apps have the same limitation. If compiled apps had the ability to render out movies, they would have more sales value in the effects/video tools market too.

Still struggling to find a workable method for this problem. Perhaps I need to convert this to a feature request of some kind?

The text layer behaving differently after you converted the composition to the Image Generator protocol could be related to event flow through published ports. (In an Image Generator composition, each frame is driven by an event through the published `time` input, which may reach nodes that previously only received events from other triggers.)

The text should be equally crisp as long as you’re rendering at the same resolution, regardless of the rendering destination. If the text doesn’t look right in the exported movie, I’d suggest checking your export settings, including resolution, movie quality, and motion blur.

If you’re still stuck on either of these problems, I’d suggest posting questions or discussions, attaching to each a simple composition that demonstrates the problem.