I'm working on a job with interviews shot on a 5D Mk III and, of course, sound from a field recorder. I've recently had problems keeping things in sync with my usual PluralEyes workflow in FCP7, so I devised a plan to use Resolve to create ProRes files with properly married production audio.

Timecode from the audio does not match the video, so unfortunately the autosync feature isn't an option. They didn't even give me a proper slate clap to work with, so manual sync is out as well.

So I brought my raw footage and production audio into Final Cut and ran PluralEyes as I normally would. It worked perfectly, so I exported an XML from the synced sequence that PluralEyes created, and imported that into Resolve, hoping to create my edit media with production audio married in one fell swoop.

Everything looks great in the Edit tab: the production audio is lined up and linked to the footage. But when I render, the footage comes out with the crummy 5D source audio on it. I didn't find anything helpful in the manual, so I'm wondering if you all have any suggestions.

Dmitry Kitsov wrote: So the intent of the "render individual source clips" option is to use those clips in your editing app without affecting the edit in any way. You are able to leave them unchanged other than color, and they will fit perfectly back into a timeline by re-conforming/relinking media. I still do not understand why you won't just use the standalone PluralEyes that you already own to produce clips with the synced audio. There is no image quality loss, as no transcoding takes place.

In my situation (not the original poster), I'm using Resolve for dailies as well as final color. I'm trying to find the cause of, or a workaround for, this issue because there shouldn't be a need to use PluralEyes, and because it's senseless to say "well, just don't use that feature" when that feature is one of the main reasons for using the app in the first place. When using Resolve for dailies, it is indeed very important to be able to sync audio and then export the clips with said high-quality audio included; it isn't just for placing graded clips in NLE timelines.

Depending on the type of production, be it commercial, feature, etc., the normal workflow is to take the raw camera clips, sync them with the system audio, perform a base light correction, and then export each clip with its audio as ProRes for the director to use.

What is the intent of dailies in your workflow? Why are separate clips required? How should it, in your opinion, work? What if (and it's always the case) you do not want the original sound replaced? How do you think Resolve should handle this from a UI point of view?

Each roll is exported as a separate clip. If I don't want the original sound replaced, then I simply leave the original clips in the timeline; that works just fine, regardless of the export choice. On features, I will generally make a timeline for each scene, and do the same for each. As for the UI, I have no issues with it at all. Essentially, the only difference in the workflow needed to get the audio to work is to add each clip to the render batch individually and choose to render a "single clip," rather than simply setting an in and out point in the thumbnail parade and choosing to render "individual source clips." I'm trying to find out why that one checkbox makes all the difference as to whether the audio renders correctly or incorrectly.
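For readers unfamiliar with why mismatched timecode blocks autosync: timecode-based sync simply converts each file's start timecode to an absolute time and slips the audio by the difference, so if the recorder and camera weren't jammed to the same clock, the computed offset is meaningless. A minimal sketch of that arithmetic, using hypothetical timecodes and an assumed 24 fps (not taken from this thread):

```python
def tc_to_seconds(tc: str, fps: float = 24.0) -> float:
    """Convert an SMPTE timecode string 'HH:MM:SS:FF' to seconds."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return h * 3600 + m * 60 + s + f / fps

# Hypothetical values: camera clip starts at 01:00:00:00,
# the field-recorder file at 01:00:05:12, both free-run at 24 fps.
offset = tc_to_seconds("01:00:05:12") - tc_to_seconds("01:00:00:00")
print(offset)  # 5.5 -> the audio would be slipped 5.5 s to line up
```

If the two devices were never jam-synced, this offset is garbage, which is why waveform analysis (PluralEyes) or a slate clap becomes the fallback.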