This is a follow-on from my previous query about exporting video with a soundtrack. I’m assuming this is not possible currently.
So what to do? Save Images to Movie will do the job. However, when I look inside the resulting file with, say, Media Inspector, I find the frame rate of the resulting ProRes 422 file is 41.369 fps. Assuming you’d like to use the file as a clip in a longer video, or simply want to give it some shape with fades, etc., this frame rate becomes problematic when it comes time to render the content in H264 for, say, Vimeo. The export preset in Resolve for Vimeo defaults to a 24fps render, which is not any kind of multiple of the clip’s 41-and-change. Here’s what it looks like as I use (with thanks) balan’s simple audio reactive composition with some of my music.
To my eye, it starts pretty well, but by about the 1 minute point it starts to go chunky and ends up looking almost like a stop motion project. The synch with the audio seems reasonably OK, but it should… flow. Like it does in the Window in Vuo itself.
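Here’s a bit of arithmetic that, I think, illustrates why the mismatch looks chunky. This is just Python, nothing Vuo-specific; the 41.369 figure is the one Media Inspector reported:

```python
# Illustrative only: how a 24 fps timeline samples a ~41.369 fps clip.
# Each output frame picks the nearest source frame; because the ratio
# isn't an integer, the step alternates between 1 and 2 source frames,
# which reads as uneven, stop-motion-style movement.

SRC_FPS = 41.369
OUT_FPS = 24.0

steps = []
prev = 0
for out_frame in range(1, 13):
    src_frame = round(out_frame * SRC_FPS / OUT_FPS)
    steps.append(src_frame - prev)
    prev = src_frame

print(steps)  # a mixture of 1s and 2s -> irregular motion cadence
```

So roughly every third output frame skips an extra source frame, which would explain the juddery look.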
Yes, there are wheels within wheels, all sorts of factors that could be contributing to this. But my purpose is to try to eliminate as many of those factors as possible.
So the question: is there a way to get a movie file out of Vuo with a frame rate that I can control, or that at least conforms to one of the standard rates, produced by a process that takes into account as many of the other bottlenecks and limitations of any given setup as possible?
I think I’m back to a viable NRT solution but I’m really new to this software; there are hundreds of wrinkles that I’m unaware of that, if flattened out, might get me where I need to be.
Thanks for your help. …edN
It’s pretty quiet in here. Not to worry, I am used to talking to myself.
For those of you who have an interest in this, my state of ignorance, I continue experimenting.
I’ve found that, while output settings for ProRes 422 lead to frame rates of 41 to 43 fps, output using H264 creates a file at 60fps. Bingo!?
Why? And how is this remedied if it can be?
The result is then a 100% H264 workflow in the NLE, which is doable if you’re not going to mess with the output too much – colour grading, effects, etc. The results of this change have been substituted for the file you may have looked at yesterday. I think it looks better, or at least more in line with what I expected to see based on viewing the Window.
I’d like to understand what is going on with these differences because I can foresee situations where higher quality, less compressed output files will be needed. Is it as simple as setting up a periodic event at 60fps to run the audio file and thus the rest of the composition? What about the encoding settings along the way? It would sure be nice to have some assistance with this in contrast to having to flail around on my own. Yes, experimentation is a great way to learn stuff, but as a famous nineteenth-century French painter said, “He who is self-taught is learning from a very ignorant teacher.”
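To make concrete what I mean by driving everything at 60fps: as I understand it (and a Vuo composition can’t be pasted as text, so this is just the concept in Python), a fixed-rate offline render would take its frame timestamps from a counter rather than the wall clock:

```python
# Conceptual sketch (not Vuo code): driving an offline render from a
# frame counter instead of the wall clock gives every frame an exact
# timestamp, so the written movie ends up at a constant 60 fps.

FPS = 60
DURATION_S = 2  # hypothetical clip length

timestamps = [frame / FPS for frame in range(FPS * DURATION_S)]

# Every inter-frame gap is exactly 1/60 s, regardless of how long
# each frame actually took to render.
gaps = {round(b - a, 9) for a, b in zip(timestamps, timestamps[1:])}
print(gaps)
```

If that mental model is right, the variable 41–43 fps ProRes output would be a symptom of frames being stamped from real rendering time instead.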
Revised video is here: https://vimeo.com/309352419
Thanks for your help. …edN
Hmm it is quiet in here :)
I just ran a very quick test with the example file ‘SaveFramesToMovie’ that Jaymie created and found it gave me unreliable results: variable frame rate and an inability to write the file multiple times, even after changing the overwrite URL function (weirdly, it did make me two versions the third time I opened the comp and then none the fourth).
I’ll try and have a play with it another day.
Why don’t you do a non-realtime render and then re-add the audio afterwards in your NLE of choice?
Another solution would be to use a hardware video recorder such as an Atomos or a cheaper gaming recorder.
Yes, thanks. Synching audio to video in post is the only current alternative I think. The question for me is about time code. I don’t understand the what and how of timecode in Vuo. It appears to be the only synch choice given there is no scratch audio in the video output against which a waveform synch might operate.
So is this the procedure?
- export the video NRT with timecode attached – somehow
- write out the audio with timecode in real time or, alternatively, as a video-less movie in NRT – at the same time as the video…
- finally import to NLE and synch using timecode.
I haven’t been able to find anything in the docs about what type of timecode is written to these files; whether it consumes one of the audio tracks or is put into metadata; how it is ‘jammed’ to the output files, etc., etc. So I’m feeling like I’ve been flying blind on this and have lost the appetite for just banging around until something happens.
Are you creating your audio in Vuo?
Why do you want timecode?
It feels to me like you are overcomplicating it, unless there is something I don’t understand in what you are trying to achieve.
What is wrong with rendering a movie the exact length of your audio track and re-syncing?
I’m grateful for your input and suggestions.
To answer your second question: I want timecode because it is offered as a feature in this software, it promises a possible enhancement to my workflow and, finally, as I hope I’ve mentioned earlier, I don’t understand how it works based on my reading of the documentation and in relation to my previous experience in shooting and editing video.
If you have ever done video work using a dual system approach, i.e., video to camera, audio to external recorder, you know that synching audio to video is a rudimentary first step in the editing process. By convention, there are two methods: a) comparing the audio waveforms of the scratch track on the video to the high quality audio from the external recorder, or b) imprinting matching timecode on both video and audio and then merging the two in the editor. Lining up video visually to an audio file, if that is what you’re proposing, is fraught, particularly in a scenario where you might want to modify frame rates and/or use a different piece of video as part of the edit.
All that aside, my reason for posting here is not to critique my way of working or my creative goals. It is to try to get some clarity around how some of the features of Vuo, a truly estimable piece of software, work. I can read what the documentation says the features do but the docs, probably because of my ignorance of the product, do not tell me what the features mean or imply in terms of the usage scenarios I’ve been describing.
Am I being too obtuse? Nit-picky? My apologies.
@Scratchpole, the problem with saving might be the Stop Composition. I thought it would be convenient but maybe it cuts off the composition before it’s done saving. You could take that out and instead keep an eye on the timestamps. I just tested that with a longer recording (changed the 5s to 120s) and the framerate was steady. When you get a chance, maybe you could post a video that ended up with a variable framerate.
@eenixon, Vuo does timestamps, not timecode.
Thanks for hanging in on this; I appreciate it. This is getting a bit long in the tooth and I’m going to have to go back and review the project and try again to understand some of the concepts.
Give me a few days and I’ll try to get something back here that, at best, is a success or, alternatively, clarifies the issue I think I’m having.
Thanks again. …edN
The variable framerate was probably just QuickTime not measuring fps correctly with short 5s clips.
I just did a 30s export and all seemed fine and steady, but the audio cut off at 27s because I had not amended the samplesPerSec rate to match the rate of my audio file. Could that be automated?
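A quick back-of-envelope check lines up with that 27s cutoff, assuming (hypothetically – I haven’t confirmed the actual defaults) the composition was writing at 48kHz while my source file was 44.1kHz:

```python
# Back-of-envelope check (assumed rates, not confirmed Vuo defaults):
# if the writer consumes samples at 48 kHz but the source file is
# 44.1 kHz, a 30 s clip's audio runs out early.

ASSUMED_WRITER_RATE = 48_000  # samplesPerSec left at an assumed default
ACTUAL_FILE_RATE = 44_100     # e.g. a CD-quality source file

clip_seconds = 30
audio_seconds = clip_seconds * ACTUAL_FILE_RATE / ASSUMED_WRITER_RATE
print(audio_seconds)  # ~27.56 s, matching audio that cuts off near 27 s
```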
It seems to be a good solution for synced output.
OK. Just to get me organized here. Let’s focus on the composition that Jaymie put together in the previous thread here: https://community.vuo.org/t/-/6225
I’ll forgo the reactive stuff for the moment as being irrelevant to my current problem. I’m still a newbie, so I have only a high-level understanding of Jaymie’s code. One thing that is foggy is where/how the length or duration of the composition file is determined. There is a Calculate node but I don’t see its relationship to duration, e.g., of the audio file.
What would be most useful in my case would be a process whose length was governed by one of two things: a) the duration of the audio file being played, or, alternatively, b) the duration of a video file being played. I guess that means that “finished playback” would be used in relation to, or in place of, “Stop Composition” in Jaymie’s example. Are there subtleties I’m not getting here?
Now the “Make Audio/Video Frame” nodes are putting Timestamps on each frame; this looks good. But is it really necessary in this use case? Because the video and audio are being merged or synched in the “Save Frames to Movie” node. On the other hand, if for some strange reason, you wanted to create two outputs – one audio, one video – perhaps the Timestamp is essential (if you actually want to reunite the two in some later process, e.g., an NLE edit.)
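As I understand it – and this is only my mental model, not Vuo internals – a timestamp on every frame is exactly what lets two independently captured streams be interleaved later. A toy sketch:

```python
# Rough model (not Vuo internals): with a timestamp on every frame,
# a muxer can merge independent audio and video streams by simply
# ordering on time; no shared clock is needed at capture time.

video = [(t / 60, "video") for t in range(6)]            # 60 fps frames
audio = [(t * 512 / 48_000, "audio") for t in range(4)]  # 512-sample buffers @ 48 kHz

merged = sorted(video + audio)  # interleave by timestamp
print([kind for _, kind in merged])
```

Which, if right, would explain why the timestamps matter even though Save Frames to Movie does the merging for you – and why they’d be essential if you ever split the output into separate audio and video files.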
So I have some questions:
- is a Vuo Timestamp similar to, or even conformant with, Timecode as described here: https://en.wikipedia.org/wiki/Timecode ?
- is there a way of Timestamping a movie that is being exported in non-real-time?
- is there a way of writing out an audio file containing Timestamps? Presumably the same Timestamps as those on an exported video.
Thanks again for your patience as I do some noisy gear/paradigm shifting. …edN
Right I’ll try and help answer your queries.
You could use the Get Audio File Info node (enter the same URL to your audio file as is contained in Play Audio File) and use the duration output to set the length of the file to be created; link it up to the Is Less Than node. The Is Less Than node is where the output file gets its duration.
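In pseudocode terms – this is my assumed model of the wiring, not actual node behaviour – the duration-driven stop would look something like:

```python
# Sketch of the assumed logic: keep writing frames while the current
# frame's timestamp "Is Less Than" the audio file's duration, so the
# movie ends up the same length as the audio.

def frames_to_write(audio_duration_s: float, fps: float) -> int:
    frames = 0
    t = 0.0
    while t < audio_duration_s:  # the "Is Less Than" comparison
        frames += 1
        t = frames / fps
    return frames

print(frames_to_write(27.5625, 60))  # frames needed to cover the audio
```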
The timestamps are utilised by the Save Frames to Movie node so that it can sync the audio and video together when it finalizes the file.
A timestamp has nothing to do with timecode, other than both being ‘time’.
Any movie written by Vuo intrinsically has a time and a fps, so that is a timecode of sorts.
Hope that helps. I have never used these nodes before trying to answer you previously and all knowledge I have of this composition has been gained by searching and reading here, plus studying the example and hovering over ports while the composition is running.