I’m trying to calibrate a camera with a projector.
There are several applications that, using OpenCV primitives, return parameters defining a mesh that describes the projection area matching the camera view, also correcting lens distortion.
The data I can get from these applications is of this form:

I think the “stereo” data is what I really need.

Now, I wish to make a mesh described by these parameters and apply a texture to it. As you can guess, the texture is the video camera feed, so what comes out of the mesh is the matching image to project.

I see the “Make Parametric Mesh” node and suspect it could be the one I need to use to make this mesh.
I don’t know much parametric math, but before digging into that, I’d like to know whether what I’m aiming for is really possible with this node, and get some hints if possible.

My hypothesis is that, if “Make Parametric Mesh” is the right way to compute the mesh point coordinates and I find a way to write them to a file, “Warp Image with Projection Mesh” is the right node to show a calibrated image.

I’m trying to figure out what those numbers mean. Looking at this camera calibration derivation, it seems your “K” matrix contains the Focal Length, Principal Point, and Skew Coefficient values, and your “kc” vector contains the Radial and Tangential Distortion Coefficients. Together those form a complete set of Intrinsic Parameters for the Heikkilä/Silvén camera model.
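To make those roles concrete, here’s a minimal Python sketch of how K and kc combine in that pinhole-plus-distortion projection model. All the numbers below are made-up placeholders, not your actual calibration values:

```python
import numpy as np

# Hypothetical intrinsics, standing in for a real calibration result
fx, fy = 1500.0, 1500.0      # focal lengths (pixels)
cx, cy = 640.0, 360.0        # principal point (pixels)
skew = 0.0                   # skew coefficient
K = np.array([[fx, skew, cx],
              [0.0,  fy, cy],
              [0.0, 0.0, 1.0]])
kc = np.array([-0.2, 0.05, 0.001, -0.001, 0.0])  # k1, k2, p1, p2, k3

def project(P):
    """Project a 3D camera-space point to pixel coordinates,
    applying radial and tangential distortion before K."""
    x, y = P[0] / P[2], P[1] / P[2]          # normalized image coordinates
    r2 = x*x + y*y
    k1, k2, p1, p2, k3 = kc
    radial = 1 + k1*r2 + k2*r2**2 + k3*r2**3
    xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x)
    yd = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y
    u = fx*xd + skew*yd + cx
    v = fy*yd + cy
    return u, v

print(project(np.array([0.1, 0.05, 1.0])))
```

A point on the optical axis (x = y = 0) is unaffected by distortion and lands exactly on the principal point, which is a quick sanity check for the coefficients.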

Your “R” matrix and “T” vector appear to be the Extrinsic Parameters — a rotation matrix and translation vector, respectively. I’m not sure why they’re labeled “Stereo”.

So, I think you need to make a 3D mesh based on the Intrinsic Parameters, and place it in 3D space using the Extrinsic Parameters.

First, the easy part, the Extrinsic Parameters — at the bottom of this page there’s a JavaScript widget to convert a 3x3 rotation matrix into a quaternion. Entering your matrix, I get quaternion [-0.034904, -0.196936, 0.015732, 0.979669] — you can enter this into Vuo’s Make Quaternion Transform node. Likewise with the “T” vector (but given how large those values are relative to Vuo’s -1 to +1 coordinate space, you’ll have to apply a scale transform, too).
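If you’d rather script the conversion than use the web widget, here’s a sketch of the standard rotation-matrix-to-quaternion algorithm (the branching handles numerical stability when the trace is non-positive). The example matrix is hypothetical, not your “R”:

```python
import math
import numpy as np

def rotation_matrix_to_quaternion(R):
    """Convert a 3x3 rotation matrix to a quaternion (x, y, z, w)."""
    t = R[0, 0] + R[1, 1] + R[2, 2]
    if t > 0:
        s = math.sqrt(t + 1.0) * 2
        w = 0.25 * s
        x = (R[2, 1] - R[1, 2]) / s
        y = (R[0, 2] - R[2, 0]) / s
        z = (R[1, 0] - R[0, 1]) / s
    elif R[0, 0] > R[1, 1] and R[0, 0] > R[2, 2]:
        s = math.sqrt(1.0 + R[0, 0] - R[1, 1] - R[2, 2]) * 2
        w = (R[2, 1] - R[1, 2]) / s
        x = 0.25 * s
        y = (R[0, 1] + R[1, 0]) / s
        z = (R[0, 2] + R[2, 0]) / s
    elif R[1, 1] > R[2, 2]:
        s = math.sqrt(1.0 + R[1, 1] - R[0, 0] - R[2, 2]) * 2
        w = (R[0, 2] - R[2, 0]) / s
        x = (R[0, 1] + R[1, 0]) / s
        y = 0.25 * s
        z = (R[1, 2] + R[2, 1]) / s
    else:
        s = math.sqrt(1.0 + R[2, 2] - R[0, 0] - R[1, 1]) * 2
        w = (R[1, 0] - R[0, 1]) / s
        x = (R[0, 2] + R[2, 0]) / s
        y = (R[1, 2] + R[2, 1]) / s
        z = 0.25 * s
    return (x, y, z, w)

# Example: a 90-degree rotation about the z axis
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
print(rotation_matrix_to_quaternion(Rz))
```

The (x, y, z, w) ordering here matches the order the web widget reports; double-check which convention Make Quaternion Transform expects before pasting values in.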

Next, how do we make a mesh based on the Intrinsic Parameters? If we can find parametric equations of the form (x,y,z) = f(u,v), we can use Vuo’s Make Parametric Mesh node to produce a mesh. I skimmed through the original 1997 Heikkilä/Silvén paper, but didn’t find anything appropriate, and I don’t understand enough of the math yet to devise my own set of equations. Any ideas? Do you know of any open-source software that can generate a mesh given the camera’s Intrinsic Parameters?
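One possible approach (my own guess, not something from the paper): sample a regular (u,v) grid, apply the distortion model to each vertex’s normalized coordinates, and use the distorted result as texture coordinates on a flat mesh. The distortion coefficients below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical distortion coefficients (substitute your kc values)
k1, k2, p1, p2, k3 = -0.2, 0.05, 0.001, -0.001, 0.0

def distort(x, y):
    """Apply the radial + tangential distortion model to
    normalized image coordinates."""
    r2 = x*x + y*y
    radial = 1 + k1*r2 + k2*r2**2 + k3*r2**3
    xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x)
    yd = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y
    return xd, yd

# Sample a regular (u,v) grid in [0,1]^2, map to [-1,1] normalized
# coordinates, distort, and map back to [0,1] texture space.
n = 11
u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
x, y = 2*u - 1, 2*v - 1
xd, yd = distort(x, y)
tex_u, tex_v = (xd + 1) / 2, (yd + 1) / 2

# Flat mesh in the z=0 plane; the distortion lives in the texture coords
vertices = np.stack([x, y, np.zeros_like(x)], axis=-1)
print(vertices.shape, tex_u[0, 0], tex_v[0, 0])
```

This precomputes the per-vertex mapping, which could then be written out for a mesh-warping node; whether it can instead be expressed as closed-form (x,y,z) = f(u,v) equations for Make Parametric Mesh is exactly the open question.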

Your contribution is awesome. Thanks!!! This sounds like a great step forward to me.
I’ll study and test the solution you provided, and look back at the information I collected to see whether the last open question can find an answer.

It sounds like the extrinsic data also needs to be considered, so: do you think that “Make Parametric Mesh” is, as I suspected, the node that, fed with the formulas described there, can produce the result?