I would like to render fisheye images in Blender and get them into Vuo so I can process them with “Image › Warp Image with Projection Mesh”.
One possibility is to use Syphon, but I get the impression Blender’s Syphon support is shaky on Mac, and Syphon doesn’t exist on Windows at all.
Another possibility is to render each image to a file and have Vuo detect the updated file and read it in.
Has anyone made this connection?
I can’t speak to the Blender side, but on the Vuo side receiving Syphon should work.
Another option might be to run Blender fullscreen on one monitor and Vuo on another, and use Vuo’s “Capture Image of Screen” node to pull in the Blender output.
Syphon on the Blender side seems to be the obstacle.
Screen capture is an option Paul Bourke suggested in a private email. The problem there is managing displays and juggling different software packages simultaneously.
My current inclination is to do the UV mapping within Blender, as a compositing node after rendering. I’m currently trying to figure out how to take the mesh map used in Vuo and convert it into a UV map for a Blender node.
Curious whether anyone in the Vuo community, besides Paul and me, is doing dome viz.
@MatthewDougherty what is your end goal? Is it running Blender to generate live content? It might be easier to understand the problem and suggest a solution in context :)
I use Blender for animation, particularly for fisheye scenegraphs. For a complete movie I use Vuo’s warp on the full video.
I have a 6m dome with a modified version of Paul Bourke’s warped fisheye offset projection.
Getting the models, lighting, and camera right is a perennial rendering problem. To actually view the result I have to warp the fisheye. Dalai Felinto did a warped fisheye in the Blender game engine 15 years ago, but it did not work for practical animation rendering.
I thought I could tighten the QC loop by using Vuo to do the warping via Syphon. But the Syphon code is barely there, so getting the images into Vuo for the warp is problematic.
I think the solution is to write a warp node in Blender and call it a day.
Unfortunately, it seems so. The only alternative would be to run Blender on a PC and create a Spout output from a camera, which should be stable-ish. Then you can use Spout to NDI (https://leadedge.github.io/) and receive the NDI stream in Vuo over a local network.
Or use a 4K HDMI → USB-C capture dongle: connect it to the HDMI out on the Blender machine, and it appears as a webcam input in Vuo, which can then warp it. I have one somewhere in my box of tricks from another project; I could dig it out and test if you like. You’d be limited to a 2K fisheye, but I suspect that’s about the upper limit of any spherical-mirror projection anyway.
I need 4K, and will want 8K when those projectors become affordable.
I got rid of the mirror and use the upper half of a video-camera fisheye lens: big aperture, cheaper, better image quality, more compact, and easier to maintain and calibrate.
I plan to dig into Dalai Felinto’s code in the Blender game engine to create an image node in the rendering pipeline. I just need something for a quick dome look to set scenegraph and render parameters. For production, I’ll render in fisheye and move that to Vuo for further enhancements (e.g., sound, MIDI, and DMX).
Final note on this:
I decided that my production setup using Vuo and Blender is to do movie-sequence warping with Vuo and single-image scenegraph tests in Blender. I did not use Dalai Felinto’s code to create a warp node in Blender; instead I devised a method to take the warp mesh file used in Vuo and convert it into a 16-bit (float would be better) UV image that Blender’s Map UV compositing node can interpret. This avoids using Syphon to send images to Vuo. I’m under the impression Syphon has problems in Blender, and additionally the Syphon integration is not part of the distributed Blender code package (i.e., volunteer code occasionally maintained elsewhere).
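The mesh-to-UV conversion could look something like the sketch below. It is only a sketch, and it assumes Paul Bourke’s plain-text warp-mesh layout (mode on the first line, “nx ny” on the second, then row-major “x y u v intensity” records); if your Vuo mesh file differs, the parsing would change. It produces one uint16 pixel per mesh node; for Blender’s Map UV node you would still interpolate this up to render resolution and save it as a 16-bit image.

```python
import numpy as np

def mesh_to_uv16(mesh_text):
    """Convert a warp mesh (assumed Bourke-style text format) into a
    (ny, nx, 3) uint16 array with R = u and G = v scaled to 0..65535.
    Nodes with non-positive intensity are treated as unused and left black."""
    lines = [ln for ln in mesh_text.strip().splitlines() if ln.strip()]
    nx, ny = (int(t) for t in lines[1].split())
    img = np.zeros((ny, nx, 3), dtype=np.uint16)
    for i, ln in enumerate(lines[2:2 + nx * ny]):
        x, y, u, v, intensity = (float(t) for t in ln.split())
        row, col = divmod(i, nx)  # records are row-major
        if intensity > 0:
            img[row, col, 0] = round(min(max(u, 0.0), 1.0) * 65535)
            img[row, col, 1] = round(min(max(v, 0.0), 1.0) * 65535)
    return img
```

The array could then be written out with any 16-bit-capable image writer (e.g. a 16-bit PNG or TIFF) and loaded into the Map UV node’s UV input.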