When I connect a “Make Radial Gradient Image” node, with its external border color set to 0% alpha, to a “Make 3D Object from Image” node, I can’t get the borders to be transparent no matter which blend mode I use.
Am I doing something wrong, or should I make a feature request (or is there already one concerning this that I have overlooked)?
@bodysoulspirit, it depends on both the order the objects appear in the list (*) and the Z values of the objects. Vuo currently renders objects in the order they appear in the object list.
Check out the attached composition — it creates a list of 10 radial gradient image-objects. On the left they’re rendered back-to-front; on the right they’re rendered front-to-back using Reverse List:
On the right side, the frontmost object is rendered first, so its pixels fill the color buffer and depth buffer. Then, it renders the object behind it. Since the depth buffer already contains some pixels closer to the camera than the second image-object, OpenGL thinks it doesn’t need to render some of its pixels — which is what makes the front object appear opaque. (Even though the images are square, the opaque area appears circular because Vuo discards pixels that are fully transparent / 0% alpha.)
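To make the depth-test behavior above concrete, here’s a toy sketch (not Vuo’s actual renderer; all names and the 1-D “pixel row” are illustrative) of a depth-buffered color pass. It shows why drawing a semi-transparent object first makes the object behind it fail the depth test where they overlap:

```python
# Toy 1-D sketch of a color pass with a depth buffer (illustrative only).
def render(objects, width=4):
    color = [["bg"] for _ in range(width)]   # layers composited at each pixel
    depth = [float("inf")] * width           # smaller z = nearer the camera
    for obj in objects:
        for x in range(obj["x0"], obj["x1"]):
            if obj["alpha"] == 0.0:
                continue                     # fully transparent pixels are discarded
            if obj["z"] < depth[x]:          # depth test
                color[x].append(obj["name"]) # blend over what's already there
                depth[x] = obj["z"]          # depth write
    return color

front = {"name": "front", "z": 1.0, "alpha": 0.5, "x0": 0, "x1": 3}
back  = {"name": "back",  "z": 2.0, "alpha": 1.0, "x0": 1, "x1": 4}

# Front drawn first: where the two objects overlap, the back object fails
# the depth test, so the semi-transparent front pixels blend with the
# background instead of with the back object — the "opaque border" artifact.
print(render([front, back])[1])  # ['bg', 'front']

# Back drawn first: the front object correctly blends over the back one.
print(render([back, front])[1])  # ['bg', 'back', 'front']
```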
I’ve also attached a modified version of your composition. I just changed the Z values in Copy 3D Object, so the objects are drawn back-to-front.
(*) Ideally Vuo should automatically render transparent objects in back-to-front order, so you don’t need to worry about this issue. That would make a good feature request.
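What that feature request amounts to is the classic painter’s-algorithm step: sort transparent objects by distance from the camera and draw the farthest first. A minimal sketch, assuming a camera at the origin looking down −Z (the function and field names are hypothetical, not Vuo API):

```python
# Painter's-algorithm sort for transparent objects (illustrative sketch).
def sort_back_to_front(objects, camera_z=0.0):
    # Farther from the camera first, so nearer transparent objects
    # are blended over the ones behind them.
    return sorted(objects, key=lambda o: abs(o["z"] - camera_z), reverse=True)

objs = [
    {"name": "near", "z": -0.5},
    {"name": "far",  "z": -2.0},
    {"name": "mid",  "z": -1.0},
]
print([o["name"] for o in sort_back_to_front(objs)])  # ['far', 'mid', 'near']
```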
Yes. The problem with changing the Z values as you did in my composition is that the objects appear bigger or smaller, so you then have to adjust their sizes as well.
So yes, it’s much better when the objects are rendered back to front. Thanks for the nice sample composition!
But actually, my composition was a simplified version of one that feeds such “Make Radial Gradient Image” nodes into “Make 3D Object from Image” nodes, which are then copied into Satoshi’s particle system.
Since the particles are random and move at different speeds, newer particles may come nearer to the camera than older ones. Will the feature request you created be able to handle that in real time? Won’t that be heavy?
Sorry, I don’t understand much of this OpenGL technical stuff.
The problem when changing the z-values like you did in my composition is that the objects appear bigger or smaller
If you use the orthographic camera, the apparent size of the object won’t change depending on the distance from the camera.
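The reason is in the projection math: a perspective projection scales an object’s on-screen size by roughly 1/distance, while an orthographic projection ignores distance entirely. A simplified, illustrative sketch (these formulas are an approximation, not Vuo’s exact camera code):

```python
# Simplified apparent-size formulas for the two projection types.
def apparent_size(world_size, z_distance, projection):
    if projection == "perspective":
        return world_size / z_distance   # shrinks as the object moves away
    else:  # orthographic
        return world_size                # distance from the camera is ignored

print(apparent_size(1.0, 2.0, "perspective"))   # 0.5
print(apparent_size(1.0, 4.0, "perspective"))   # 0.25
print(apparent_size(1.0, 2.0, "orthographic"))  # 1.0
print(apparent_size(1.0, 4.0, "orthographic"))  # 1.0
```

So with an orthographic camera you can spread the objects across Z to control draw order without having to compensate their sizes.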
will that feature request you created be able to check that in realtime ? Will that not be heavy ?
The feature request will sort each object, but it won’t affect the order in which vertices within each object are rendered.
Satoshi’s particle emitter outputs a list of positions, which you’re feeding into Copy 3D Object. Each particle is its own object, so the feature request should make that composition render without sharp edges.
However, for a very large number of particles, it would probably be more efficient for a (hypothetical) particle system to produce a single object with many vertices (rendered as points, like Make Point Mesh does). The feature request wouldn’t help with that; in that case the only solution I know of is to disable depth writing (like our Shade Edges with Color does).
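The depth-write trick is what OpenGL’s `glDepthMask(GL_FALSE)` does: transparent geometry still depth-tests against what’s already drawn, but doesn’t write depth, so later-drawn geometry behind it isn’t discarded. Here’s a toy sketch of that idea (a simulation, not real OpenGL code; names are illustrative):

```python
# Toy sketch: same depth-tested pass, with depth writes optionally disabled.
def render(objects, write_depth, width=3):
    drawn = [[] for _ in range(width)]
    depth = [float("inf")] * width
    for obj in objects:
        for x in range(width):
            if obj["z"] < depth[x]:      # depth test always runs
                drawn[x].append(obj["name"])
                if write_depth:
                    depth[x] = obj["z"]  # depth write (glDepthMask(GL_TRUE))
    return drawn

front = {"name": "front", "z": 1.0}
back  = {"name": "back",  "z": 2.0}

# Depth writes on, drawn front-to-back: the back object is discarded.
print(render([front, back], write_depth=True))
# [['front'], ['front'], ['front']]

# Depth writes off: both objects survive, regardless of draw order
# (though the blend order may still be wrong).
print(render([front, back], write_depth=False))
# [['front', 'back'], ['front', 'back'], ['front', 'back']]
```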
will it […] be able to calculate in realtime and show your modified composition joined below ?
Yes, I expect the feature request to fix the sharp edges in that composition.