Max Number of Vertices for Add Noise to 3D Object

What I'm trying to do is recreate something I made in Quartz Composer 13 years ago, this time in Vuo. It's an underwater scene that I made with Kineme 3D and some GLSL refraction vertex shaders.

This is what that composition originally looked like.

I'm trying to do the same thing now in Vuo. However, Vuo doesn't seem to have vertex shaders, so I'm using the built-in Make Frosted Glass Shader, which works fine when I use a low-poly mesh like the one from this Composition: mxgx_Jelly3.

Here is a video of the recreation in Vuo using that aforementioned jelly model. It looks good, not exactly the same but sufficient. I'm happy with it.

This is all good up until the point I try to use my original model, made with Maya back thirteen years ago. First I needed to convert it from FBX 2006 to DAE (Collada) with Autodesk FBX Converter, because Vuo wouldn't load my old FBX. I don't have the funds to use Maya anymore, and Blender wouldn't convert my model because it was too old.

Thankfully Autodesk provides a free converter tool. Now that I can load the model in Vuo, some weird things start to happen. The model won't display the Make Frosted Glass Shader; it just disappears. Then when I try to deform the mesh with noise using just a lit color shader, everything gets choppy and not smooth anymore. See here how the noise and the 3D rotation of the environment sphere are all choppy and not smooth.

I'm wondering if it's because my sea urchin model has too many vertices. It says it has 45.5k, which I don't think should be too many. I was able to render the same thing 13 years ago using Kineme 3D in Quartz Composer. The Composition: mxgx_Jelly3 model has 27k vertices and runs the environment map rotation and noise deformation smoothly. Is there a limit to how many vertices I can deform with noise? That is definitely the issue, because it's only when I add noise to the model that the environment gets all choppy.

Am I just hitting the limits of Vuo? Is it just my computer being too old? Does this sea urchin composition run smoothly like the jelly one does on someone else's computer? A possible solution might be to import the model into Blender and do some retopology to get the vertex count lower. I'm not sure how to do that; I haven't ever done that with Blender, but I know it's possible. Might be a chance to learn something new.

sea urchin

jelly composition

jellydust.zip (3.12 MB)

Hey dust – urchin_test.vuo is running at ~60fps on an M1 Mac mini here, same as jellyComposition.vuo.

Well, that's good and bad news. I guess my computer is getting too old. It's about five years old now. I've got 4 GB of video RAM; that was great when I bought it. It's bad news because I've got to get a new computer, and I'm not ready to do that yet. So buying a new computer isn't an option for fixing my problem right now.

Can you try sea_urchin 2 and see if it renders the frosted glass shader? When I render it, it looks like this and the urchin doesn't render.

Only the sea urchin flower renders with frosted glass on my system, but not the actual urchin tentacles. The flower is just a stock sphere. I find I can only get the frosted glass to render if I flip the camera or scene upside down with the mouse. This isn't what I want to do. It should look something like this, but right side up, and also run smoothly like the jelly composition does.

How does one find out the FPS of a running composition?

jellydust 2.zip (3.12 MB)

urchin2 – frosted glass shader not working for me either, looks the same as for you, and works fine if I bypass it.

I say fps, but really Vuo reports the last event per second. See it by double-clicking on an output port.

Screen Shot 2022-02-06 at 12.38.14 PM.png

Btw, did you try changing the Render Scene to Window event throttling from enqueue to drop events (also via the output port)?  


Okay, thanks for that info, jersmi. It seems like a combination of things is going on. There is something screwy with my model. It was made with a built-in paint or stroke FX that Maya has, which I converted or subdivided into a mesh to use in QC. Converting it from stroke to mesh did something to the shader. I think it's only vertex colors for the shader, but that shouldn't matter because I am replacing the shader in Vuo.

So I'm not sure if it's a problem with my model, or the shader, or both. In the other composition, named jelly, I was getting the same thing with the frosted glass shader disappearing if I rotated the model more than 100 degrees. So it seems there might be issues with the frosted glass shader as well.

Also, I'm getting only about 6 FPS on my sea urchin, and I have 4 GB of video RAM? That explains it being choppy and slowing everything in the composition down. It seems that M1 architecture you have, jersmi, really makes a lot of difference.

No, I didn't do anything with event throttling. What is that for? It sounds like something I wouldn't want to do unless my composition was getting overrun with events. How would I go about getting more FPS for my scene? I tried making the application window smaller to gain some FPS, like you can do in QC by changing the render size in the viewer, but that didn't make any difference in Vuo. I also tried rendering the scene to a small image, like 512 x 512, and that didn't make any difference either.

I just spent 900 dollars getting my third motherboard replaced in my MacBook. It was cheaper than buying a new MacBook. I thought my MacBook had the same specs as a base model Mac. I should have gotten an M1 Mac mini for that price; it seems to perform way better than my MacBook. Oh well, live and learn. I've got a lot more QC comps to redo in Vuo, so I'll just move on to another one.

Yeah, event throttling (drop vs. enqueue events) probably wouldn't help much for your low-fps situation. It came up for me recently while trying to build a list of cubes and pass shaders to the built list. The cubes in the built list were missing some shaders because the enqueued list of shaders wasn't making the journey from a subcomposition made for the six sides of the cube, showing some with checkerboards. Switching to drop events fixed that one.

Well, I tried throttling my events and that didn't help with gaining any fps. I think at this point I have to get the vertex count down somehow. I tried to remesh it in Blender, but for whatever reason I can't see my model when I import it into Blender. I am a total noob with Blender. So it's either get the vertex count down, or get a new computer, or make a new model, etc… or both… I kind of gave up on this sea urchin flower concept and made a turntable today.

Screen Shot 2022-02-07 at 7.37.53 PM.jpg

It's another remake of a concept I did in QC a long time ago. I had it working with the Kinect and Leap Motion at one point in QC. I don't have either of those devices with me right now, so I tried using the machine learning Hand Pose OSC, but the tracking is really flaky with that. So for now it's just interactive with the mouse. It would probably make a cool GUI, like a dial or something. Maybe I should post a stripped-down version as a GUI element that rotates something instead of a turntable.

Well, some good news: I was able to use MeshLab to decimate the mesh to a more simplified version without losing too much quality, which allows me to gain some more fps, or events per second. Cutting the model in half, getting the size down to 22k vertices, doubled my fps. This is great. However, the model still won't render the frosted glass shader, so I am still at a loss with this one. Thanks for trying that composition out for me.


Well that’s good. Though still a little disappointing you can’t run an old QC comp…
I was checking out your turntable, nice work. I can definitely sympathize with trying to recreate QC stuff.

For example, a little story – I did a ton of performances back in the day that all began with a QC comp called “draw in space”, which all started as a Kineme cwright JavaScript demo showing how to translate a 3D point matrix to 2D screen space, very inspiring at the time. (Though the performances required a lot more – IR camera/tracking, a second computer at the camera running OpenCV in a Jitter patch to get points, ethernet to the computer running QC, mapping to projectors for tracking. The tracking setup was super DIY: IR bulbs inside a ping pong ball, the IR camera had part of a floppy disk taped over the lens to filter out spectra other than IR, etc. It worked, every time.) Anyway, after that period there was a bit less going on for a while, then Vuo came out and the first thing I wanted to try was that Draw in Space comp. Translating it to Vuo was no small task, at least for me. But when it finally came together, the Vuo version was really great, much better than the QC version (and didn't require JavaScript). And now it's an example comp, you're welcome. :-)

Indeed, I was happy gaining a few more fps. I read your thread about drawing in 3D space, remember that QC patch, and saw the JS code you posted. I like the draw in space example. That showed me how to create a display for controls. I still need to drill down and see how you're drawing in 3D space. That's one thing Vuo is missing: a JavaScript patch. I suppose with a little work with the Vuo SDK and some C code, one could do relatively the same thing the QC JS patch did. It still would be nice to have a JS or C code editor inside Vuo; it would help when doing logic that's hard to do visually in Vuo.

Screen Shot 2022-02-09 at 6.18.24 PM.jpg

I did another remake today. It came out better than the QC version did, so I am happy about that. I figured I would start with something simple and made this line animation on a sphere. I like the audio input version. The original wasn't audio reactive and was just an exercise in rendering stripes on parametric objects with OpenCL. The QC version still runs even though the mesh creator says deprecated, or, well, all of QC is deprecated now I suppose. I still have QC on my computer; it helps when converting QC patches to Vuo to be able to run them in QC. Not everything still works though…


@dust, looks like you’ve worked through most of the problems in order to share Sea Flower Refraction. Very nice :)

Vuo’s Add Noise to 3D Object works differently than Kineme 3D Object Noise. The Kineme patch uses uncorrelated random values, giving a jagged (not very aquatic) effect. The Vuo node uses 4D Perlin noise, which is more expensive to compute, but gives a smoother effect.
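
Roughly speaking, the difference looks like this. (This is only an illustrative sketch, not the actual code of either node; snoise4D here is a placeholder for a 4D Perlin/simplex noise function.)

// Illustrative sketch only; not Vuo's or Kineme's actual implementation.

// Uncorrelated style: each vertex gets an independent pseudo-random offset, so
// neighboring vertices move in unrelated directions and the surface looks jagged.
float hash(vec3 p)
{
	return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453) - 0.5;
}

vec3 uncorrelatedOffset(vec3 position, float amount)
{
	return amount * vec3(hash(position.xyz), hash(position.yzx), hash(position.zxy));
}

// Smooth-noise style: sample a continuous noise field (time as the 4th axis), so
// nearby vertices get similar offsets and the surface ripples instead of spiking.
// snoise4D stands in for a 4D Perlin/simplex noise implementation.
vec3 smoothOffset(vec3 position, float time, float amount)
{
	return amount * vec3(snoise4D(vec4(position,        time)),
	                     snoise4D(vec4(position + 19.1, time)),
	                     snoise4D(vec4(position + 47.7, time)));
}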

As you saw, you can offset the expense by reducing the number of vertices in the mesh. MeshLab is indeed a nice tool for that.

I find I can only get the [urchin tentacles] frosted glass to render if I flip the camera or scene upside down with the mouse.

The problem might stem from the fact that the Make Frosted Glass Shader node is sensitive to depth sorting (the order in which objects are rendered, based on how far they are from the camera). This node looks at the graphics that have been rendered so far, and may not “see” the tentacles if they’re rendered in an unexpected order. The sphere could be throwing off the order, since part of it is behind and part is in front of the tentacles. You might be able to solve the problem by splitting the sphere into smaller objects so that each one is clearly either in front of or behind the tentacles.

well I tried throttling my events and that didn’t help with gaining any fps.

If you were changing it on the Render Scene to Window trigger port, then it wouldn’t make a difference since that port isn’t connected to anything. Changing event throttling to “drop” is useful for slowing down a trigger port that’s firing events too fast for the downstream nodes to keep up (documentation here under “Buildup of events”). “Drop” is the default on Fire on Display Refresh.

1 Like

Nice one, Dust!

Jaymie, thank you also for the tip about splitting up the sphere.
I was wondering, could it somehow also be related to a discussion I vaguely remember about the fact that at some point Vuo switched some functions over to the CPU instead of the GPU because of all the GPU-related bugs with OpenGL?

Starting from some Vuo version, adding noise to 3D objects became heavier because of that, and the team said that, as planned, switching to Metal and back to the GPU will handle heavier calculations like these even better?

Although yes, Dust's composition can render smoothly with an M1 processor, optimizations are always welcome ;)

If you were changing it on the Render Scene to Window trigger port, then it wouldn’t make a difference since that port isn’t connected to anything. Changing event throttling to “drop” is useful for slowing down a trigger port that’s firing events too fast for the downstream nodes to keep up (documentation here under “Buildup of events”). “Drop” is the default on Fire on Display Refresh.

This makes sense, of course. Thanks, @jstrecker!

I'm not sure that splitting the sphere or flower up would fix the problem, as the urchin tentacles were not rendering the frosted glass shader even when it was just the tentacles model on screen without the sphere. So something screwy is going on with my model, as it will render the frosted glass shader if I flip it upside down. It's probably just a depth sorting problem, as Jaymie mentioned.

I found a way to work around the frosted glass shader problem. I came across this refraction shader while searching for glitch shaders the other day. It's very simple, and I'm not sure if you would call it refraction or not, but that's what it was called on Shadertoy. I just needed to render the scene into two parts, foreground and background, and then feed them to this shader.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
	// Normalized screen coordinate.
	vec2 uv = fragCoord.xy / iResolution.xy;

	// Foreground render pass; its red and blue channels are reused as a displacement offset.
	vec4 foreground = texture(iChannel0, uv);

	// Sample the background render pass at the displaced coordinate.
	fragColor = texture(iChannel1, uv + foreground.xz);
}

The original Kineme refraction shader I was using, by cwright, was way more complicated but produced a much better result. Here's the note from the original refraction shader I used.

We’re kinda cheating – objects in front of the teapot shouldn’t get refracted. They do, because we don’t do any depth sorting (not really tractable in QC).

I'm not sure if this can be ported to Vuo or not. It uses a vertex shader for lighting, and the internet tells me the gl_LightSource variable has been removed from ISF. So I'm not sure if this is even possible to port, but I'm posting it anyway since it worked out really nicely, even if it was cheating without any depth sorting.

// vertex shader

varying vec3 normal;
varying vec4 ambient;
varying vec3 lightDir,halfVector;
varying float dist;

void main()
{
	vec4 ecPos;
	vec3 aux;
	gl_Position = ftransform();
	normal = gl_NormalMatrix * gl_Normal;

	// lighting vector stuff for specular highlights
	ecPos = gl_ModelViewMatrix * gl_Vertex;
	aux = vec3(gl_LightSource[0].position-ecPos);
	lightDir = normalize(aux);
	dist = length(aux);	
	halfVector = normalize(gl_LightSource[0].halfVector.xyz);
	ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
}


// fragment shader 
uniform sampler2D texture;
uniform float transparency;
uniform float width, height;
varying vec3 normal;
varying vec4 ambient;
varying vec3 lightDir,halfVector;
varying float dist;

void main()
{
	vec2 loc = gl_FragCoord.xy;
	vec3 lNormal = normalize(normal);
	loc -= lNormal.xy*64.;		// pseudo-Index of Refraction
	loc.x /= width;
	loc.y /= -height;

	// standard boring light stuff -- except,
	// we skip global ambient lighting because we're trying to
	// look like glass -- ambient light tints us white otherwise.

	vec3 halfV,viewV,ldir;
	float NdotL,NdotHV;
	vec4 color = vec4(0);
	float att;

	NdotL = dot(lNormal,normalize(lightDir));
	
	if (NdotL > 0.0)
	{		
		att = 1.0 / (gl_LightSource[0].constantAttenuation +
				gl_LightSource[0].linearAttenuation * dist +
				gl_LightSource[0].quadraticAttenuation * dist * dist);
		color += att * ambient;
		
			
		halfV = normalize(halfVector);
		NdotHV = max(dot(lNormal,halfV),0.0);
		color += att * gl_FrontMaterial.specular * gl_LightSource[0].specular * 
						pow(NdotHV,gl_FrontMaterial.shininess);
	}

	// sample the refraction image with our displaced texture position
	vec4 environment = texture2D(texture, loc) * transparency;
	gl_FragColor = environment + color;
}

To use lighting on a 3D object, you can either generate a 2D image (using Vuo’s Image Generator nodes and/or ISF) and feed it to Vuo’s Make Lit Image Shader or Make Image Details Shader nodes, or write a shader in a custom node class if you want to implement custom lighting. You wouldn’t use gl_LightSource since that’s part of the deprecated OpenGL fixed-function pipeline. Instead, you’d probably use custom uniforms (like in the 2nd reference below).
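
As a rough, untested sketch, the fragment shader above might look something like this with the gl_LightSource / gl_FrontMaterial values passed in as ordinary uniforms instead (all of the uniform and varying names here are made up, and a custom node class or host application would have to supply their values):

// Sketch only: the fixed-function light/material state replaced with explicit uniforms.
uniform sampler2D environment;      // rendered scene to refract
uniform vec2      viewportSize;     // render target width/height in pixels
uniform float     transparency;
uniform vec3      lightPosition;    // eye-space light position
uniform vec4      lightAmbient;
uniform vec4      lightSpecular;
uniform vec3      lightAttenuation; // constant, linear, quadratic
uniform vec4      materialSpecular;
uniform float     materialShininess;

varying vec3 normal;                // from a matching vertex shader
varying vec3 eyePosition;           // eye-space fragment position, also from the vertex shader

void main()
{
	vec3 N = normalize(normal);

	// Same screen-space displacement trick as the original (pseudo index of refraction).
	vec2 loc = (gl_FragCoord.xy - N.xy * 64.0) / viewportSize;

	// Lighting terms that used to come from gl_LightSource[0] / gl_FrontMaterial.
	vec3 toLight = lightPosition - eyePosition;
	float dist = length(toLight);
	vec3 L = toLight / dist;

	vec4 color = vec4(0.0);
	if (dot(N, L) > 0.0)
	{
		float att = 1.0 / (lightAttenuation.x
				+ lightAttenuation.y * dist
				+ lightAttenuation.z * dist * dist);
		vec3 V = normalize(-eyePosition);
		vec3 H = normalize(L + V);
		color += att * lightAmbient;
		color += att * materialSpecular * lightSpecular
				* pow(max(dot(N, H), 0.0), materialShininess);
	}

	gl_FragColor = texture2D(environment, loc) * transparency + color;
}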

Some code that might be useful for reference: