As I understand it, Adobe Premiere Pro doesn't support the "Smart Render" mode that After Effects has.
Yet I don't understand how it supports 32-bit-per-channel input.
I looked at the SDK sample called SDK Noise, specifically at the following code:
PrPixelFormat destinationPixelFormat = PrPixelFormat_BGRA_4444_8u;
if (pixelFormatSuite) {
    (*pixelFormatSuite->GetPixelFormat)(output, &destinationPixelFormat);
    if (destinationPixelFormat == PrPixelFormat_BGRA_4444_8u) {
        ERR(suites.Iterate8Suite1()->iterate(in_dataP,
                                             0,                          // progress base
                                             linesL,                     // progress final
                                             &params[NOISE_INPUT]->u.ld, // src
                                             NULL,                       // area - null for all pixels
                                             (void*)&niP,                // refcon - your custom data pointer
                                             FilterImageBGRA_8u,         // pixel function pointer
                                             output));
    } else if (destinationPixelFormat == PrPixelFormat_VUYA_4444_8u) {
        ERR(suites.Iterate8Suite1()->iterate(in_dataP,
                                             0,                          // progress base
                                             linesL,                     // progress final
                                             &params[NOISE_INPUT]->u.ld, // src
                                             NULL,                       // area - null for all pixels
                                             (void*)&niP,                // refcon - your custom data pointer
                                             FilterImageVUYA_8u,         // pixel function pointer
                                             output));
    } else if (destinationPixelFormat == PrPixelFormat_BGRA_4444_32f) {
        // Premiere doesn't support IterateFloatSuite1, so we've rolled our own
        IterateFloat(in_dataP,
                     0,                          // progress base
                     linesL,                     // progress final
                     &params[NOISE_INPUT]->u.ld, // src
                     (void*)&niP,                // refcon - your custom data pointer
                     FilterImageBGRA_32f,        // pixel function pointer
                     output);
    } else if (destinationPixelFormat == PrPixelFormat_VUYA_4444_32f) {
        // Premiere doesn't support IterateFloatSuite1, so we've rolled our own
        IterateFloat(in_dataP,
                     0,                          // progress base
                     linesL,                     // progress final
                     &params[NOISE_INPUT]->u.ld, // src
                     (void*)&niP,                // refcon - your custom data pointer
                     FilterImageVUYA_32f,        // pixel function pointer
                     output);
    } else {
        // Return error, because we don't know how to handle the specified pixel type
        return PF_Err_UNRECOGNIZED_PARAM_TYPE;
    }
    err = AEFX_ReleaseSuite(in_dataP,
                            out_data,
                            kPFPixelFormatSuite,
                            kPFPixelFormatSuiteVersion1,
                            NULL);
}
}
I removed some of the error-handling code from the snippet.
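
Since Premiere apparently doesn't provide an iteration suite for float pixels, the 32f branches call a hand-rolled IterateFloat. I assume it is basically a per-row, per-pixel loop over PF_PixelFloat values, something like the sketch below (my own reconstruction rather than the sample's exact code, with parameter names of my choosing):

static PF_Err
IterateFloat(
    PF_InData      *in_dataP,        // unused here, kept to mirror the call site above
    A_long          progress_baseL,  // first row to process
    A_long          progress_finalL, // one past the last row to process
    PF_EffectWorld *srcP,            // source frame
    void           *refconP,         // custom data handed to the pixel function
    PF_Err        (*pix_fnP)(void *refconP, A_long x, A_long y,
                             PF_PixelFloat *inP, PF_PixelFloat *outP),
    PF_EffectWorld *destP)           // destination frame
{
    PF_Err err = PF_Err_NONE;

    for (A_long y = progress_baseL; y < progress_finalL && !err; ++y) {
        // rowbytes can include padding, so recompute each row start from the base pointer
        PF_PixelFloat *srcRowP  = reinterpret_cast<PF_PixelFloat*>(
            reinterpret_cast<char*>(srcP->data)  + y * srcP->rowbytes);
        PF_PixelFloat *destRowP = reinterpret_cast<PF_PixelFloat*>(
            reinterpret_cast<char*>(destP->data) + y * destP->rowbytes);

        for (A_long x = 0; x < srcP->width && !err; ++x) {
            err = pix_fnP(refconP, x, y, &srcRowP[x], &destRowP[x]);
        }
    }
    return err;
}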
The problem is that even when I set both the render settings and the sequence (preview) settings to Maximum Bit Depth, GetPixelFormat still always returns the 8-bit format.
I don't understand how the pipeline is supposed to work in order to enable 32-bit processing.
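
For context, my understanding is that the supported pixel formats have to be declared up front, during PF_Cmd_GLOBAL_SETUP, through the same pixel format suite the render code above uses. Roughly from memory of the SDK Noise sample (so treat this as a sketch rather than the exact code; the suite and function names are as I recall them from the Premiere SDK headers), that registration looks something like this:

// Sketch of the GLOBAL_SETUP registration when running inside Premiere
if (in_dataP->appl_id == 'PrMr') {
    PF_PixelFormatSuite1 *pixelFormatSuite = NULL;
    err = AEFX_AcquireSuite(in_dataP,
                            out_data,
                            kPFPixelFormatSuite,
                            kPFPixelFormatSuiteVersion1,
                            NULL,
                            (void**)&pixelFormatSuite);
    if (!err && pixelFormatSuite) {
        // Tell Premiere which pixel formats this effect can render;
        // presumably, without the 32f entries it will only ever hand us 8u frames
        (*pixelFormatSuite->ClearSupportedPixelFormats)(in_dataP->effect_ref);
        (*pixelFormatSuite->AddSupportedPixelFormat)(in_dataP->effect_ref, PrPixelFormat_VUYA_4444_32f);
        (*pixelFormatSuite->AddSupportedPixelFormat)(in_dataP->effect_ref, PrPixelFormat_VUYA_4444_8u);
        (*pixelFormatSuite->AddSupportedPixelFormat)(in_dataP->effect_ref, PrPixelFormat_BGRA_4444_32f);
        (*pixelFormatSuite->AddSupportedPixelFormat)(in_dataP->effect_ref, PrPixelFormat_BGRA_4444_8u);
    }
    err = AEFX_ReleaseSuite(in_dataP,
                            out_data,
                            kPFPixelFormatSuite,
                            kPFPixelFormatSuiteVersion1,
                            NULL);
}

If that registration is what is supposed to make Premiere offer the 32f formats, I'm not sure what else is required.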
Any assistance would be appreciated. How exactly does this work?