Datatype documentation, explainer, and how-to user guide

I only play with Vuo sporadically at the moment, and I frequently forget the relationships between some of the more complicated datatypes like 2D Objects, 3D Objects, Meshes, Scenes, Layers, Images, Shaders, etc.

Often when I want to expand a composition with a specific enhancement, I’m scratching my head about which direction to work in: am I looking to add a 3D object shader to a mesh that currently has an image shader and a set of points as inputs, or should I be piping 3D Objects into a layer node, or what have you?

I’d like to see what these advanced but very important data types actually look like in code terms. How is a 3D Object type defined? Is it a superclass that the {line, cube, sphere, torus, …} set of objects belongs to? How is a Scene defined? Is it a class containing 3D objects, with a shader assigned to each object? How is a Mesh defined? Is it simply a list of 3D Points, or also of 3D objects, or is it a 3D object itself? Does it take, or require, at least a simple shader definition?
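To make the question concrete, here is a heavily simplified sketch in plain C of how such a scene-graph datatype *could* be structured. This is illustrative only, not Vuo's actual source (in Vuo's real API the corresponding types are VuoMesh, VuoShader, and VuoSceneObject; the field names and layout below are guesses for explanation's sake):

```c
#include <assert.h>
#include <stddef.h>

typedef struct { float x, y, z; } Point3D;

/* A mesh: raw geometry (points) plus indices grouping them into faces. */
typedef struct {
    Point3D  *vertices;     /* list of 3D points                        */
    size_t    vertexCount;
    unsigned *elements;     /* indices saying which points form each
                               triangle                                 */
    size_t    elementCount;
} Mesh;

/* A shader: how the mesh surface gets colored (image, lighting, …). */
typedef struct {
    const char *name;
} Shader;

/* A scene object: what to draw, how to draw it, where to draw it,
   plus children -- so a "scene" is just a tree of these objects.   */
typedef struct SceneObject {
    Mesh   *mesh;                   /* may be NULL for a pure group    */
    Shader *shader;
    float   transform[16];          /* 4x4 transformation matrix       */
    struct SceneObject **children;
    size_t  childCount;
} SceneObject;

/* Walk the tree and count every object, showing that a scene is the
   root object plus all of its descendants. */
size_t sceneObjectCount(const SceneObject *obj)
{
    if (!obj)
        return 0;
    size_t n = 1;
    for (size_t i = 0; i < obj->childCount; ++i)
        n += sceneObjectCount(obj->children[i]);
    return n;
}
```

In this sketch the answer to "is a mesh a 3D object?" would be: a mesh is one *part* of a scene object, alongside a shader and a transform.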

To graphics programmers and 3D modellers this is probably more readily apparent, but seeing as Vuo takes the approach that it’s also a programming tool for those without much coding experience, I really feel there needs to be a structured overview and map of these components. (That’s how I like to think about it all anyhow: I want the map, even if I don’t understand the complexity of many of the things shown on the map beyond their name and basic purpose.)

We could start it as a doc in the cloud using Pages or Keynote (seeing as we’re all on Mac), and then Kosada could correct it and integrate it as part of the documentation. It could go in the appendix of the regular docs, maybe. Some diagrams would help; we’re visual types.

Other data types worth considering are lists, and the subset of nodes that accept a list input, but it’s more the 3D imaging/scene construction that I want immediate clarity on, using some maps and explainers of each type of node/data class.

I have scanned the manual, but the manual tends toward a more narrative approach; I’m looking for something more like a listing of the 3D-related objects and their relationships to each other. I guess the Vuo API documentation is a place to start, but a less technically dense, more user-friendly approach is what I’m advocating for: mostly for beginners, but also for advanced users if they forget something or are looking for more depth around certain functionality.

The document would assume basic knowledge of Vuo operations like connecting nodes, firing of events, iterating loops… but would help in overall composition design when you have no idea how to approach a very specific end goal. In some ways, a pattern book.

If users concur I’m happy to start it, but not knowing too much I can’t really write it on my own.

The source code in combination with the API is an invaluable tool for figuring out Vuo for more advanced usage. I frequently look up the node filename (under the node title), look up the source, and then move on to the API to understand the code/usage. For the specifics of Vuo, documenting all of this to a greater degree than it already is would be a huge undertaking for what I assume is a small portion of the user base.

I can greatly recommend “The Nature of Code” by Daniel Shiffman and his excellent YouTube channel, The Coding Train, for understanding the basics better from a general perspective. He does his things in Processing or Processing.js, but the concepts, examples, and code are relatively easily transferable to Vuo, especially in combination with the source/API.

I’ve thought of doing a few simple tutorials on writing very simple nodes in C, which may or may not be what you’re after, but I’m not sure I can find time to do so in the foreseeable future. It is on my to-do list, right after making sense of, structuring, and publishing all my node sets (and the source to those that are published is already available in the node sets!).


I’m thinking more of answering questions like: do I need to feed my list of 3D Points to a Scene or a Layer or a Window? If I want an image at each point, do I need a shader, and at what point in the chain of types does this fit in? There are so many nodes in Vuo that take one or more of the following datatypes as input and output a different datatype that it’s really hard to know where to go to get the desired result sometimes. I find I do a lot of working around before I get close to the result I’m after, just because I don’t know all the patterns.

{image, shader, point, 3D object, cube, sphere, rectangle, oval, scene, layer, window}

Does that make sense?

Thanks for the links too. Oh yes, you showed me this dude’s work before. It looks super relevant to my interests :-)

Just wish I could do higher-level maths. I want to code a node or JS patch in QC to do Delaunay triangulation, and I’m a bit out of my depth. Here’s a nice video made with Processing.


The short answer is that it fits in anywhere! Which is the beauty of it. If you think linearly, this very simplified diagram might help, but maybe not really.

Since you basically can adjust things before or after the fact, it depends a bit on workflow, and what you want to achieve. If you follow the diagram, an image is an end point of the comp. But it can also be the beginning if you want to shade with an image. Or both if you want a feedback-y kind of shader for your comp.

I played around with a simple composition the other day that can perhaps demonstrate the flexibility and intertwinedness(?) a bit when it comes to where you should be putting values and things (boxFlip.vuo). This is how I view it, though; there might be better ways to do it. The comp creates a cube (object), copies it, and applies a transform to each copy’s position (points). The same points are used to sample a color (well, color) from an image input (in this case, live video from the camera), and the lightness value (real) is used to create a list of rotational values for the transforms (point). The scene is then rendered to an image, which goes to the output.

The cube is created at the beginning and then copied, since it’s an array of similar objects and it wouldn’t be as efficient to create 625 meshes when there’s already an object to copy.

There is no “Hold” node before the “Get Item from List” node coming from the point grid, because the grid gets generated at the start and isn’t animated (no events after the first). If the grid were animated, a “Hold” node would be required, or at least an “Allow Changes” node for intermittent adjustments, to prevent stuttering and glitches (due to the two data streams being out of sync).

To run the composition, you’ll need to put the Magneson.Points.2dGrid.vuonode in your user modules folder (WIP warning, should work, can’t guarantee it).

If you haven’t already, check out my nodes (and all the others!) in the Node Gallery! They are a bit all over the place at the moment, but should provide some useful tools for a lot of different (perhaps niche) stuff. Especially when it comes to lists.

Magneson.Points.2dGrid.vuonode (3.64 KB)

boxFlip.vuo (8.46 KB)


Hey thanks for the diagram Martinus, that is exactly what I was looking for!

We could colour in some of those graph nodes to show the pattern any particular simple composition takes from the sea of endless possibilities, and also add some control nodes where required. With synthesisers back in the day, we used to be taught about master modules (LFOs, ADSR envelopes, etc.) and slaves (VCOs, ring modulators, VCFs, VCAs). In your diagram, all those type generators (as indicated by each graph node) could be seen as having slave relationships to wave generators, random noise generators, MIDI/Art-Net DMX data, etc.

So is a mesh a set of points plus data about which points are vertices and which vertices are combined to define edges and faces? Is it anything more than that? For example, can I define the colour of each point in the mesh (like a blend mesh in Adobe Illustrator, for example)?
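That is essentially the standard representation: parallel per-vertex attribute arrays (positions, and often colors, normals, texture coordinates) plus an element/index list saying which vertices form each face. As a hypothetical sketch in plain C (field names made up, not Vuo's actual VuoMesh layout), with per-vertex colors in the spirit of a blend mesh:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

typedef struct { float x, y, z; }    Position;
typedef struct { float r, g, b, a; } Color;

/* An indexed mesh: parallel per-vertex attribute arrays, plus an
   element list of 3 indices per triangle. */
typedef struct {
    Position *positions;    /* one entry per vertex               */
    Color    *colors;       /* per-vertex color, same length      */
    size_t    vertexCount;
    unsigned *elements;     /* which vertices form each triangle  */
    size_t    elementCount;
} Mesh;

/* The number of triangles is the element count divided by 3. */
size_t meshTriangleCount(const Mesh *m)
{
    return m->elementCount / 3;
}

/* Color at a triangle's centroid: the average of its three vertices'
   colors -- the essence of how per-vertex colors blend smoothly
   across a face when the mesh is rendered. */
Color centroidColor(const Mesh *m, size_t triangle)
{
    const unsigned *e = &m->elements[3 * triangle];
    Color c0 = m->colors[e[0]], c1 = m->colors[e[1]], c2 = m->colors[e[2]];
    Color out = {
        (c0.r + c1.r + c2.r) / 3.0f,
        (c0.g + c1.g + c2.g) / 3.0f,
        (c0.b + c1.b + c2.b) / 3.0f,
        (c0.a + c1.a + c2.a) / 3.0f
    };
    return out;
}
```

So yes, per-vertex color is just another attribute array alongside the positions; the renderer interpolates it across each face.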

The kind of thing I like to work with is sets of points that make geometric patterns/shapes, transitioning points from one set of position/colour/edge/face arrangements into another using interpolation of various sorts.
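The core of that transitioning idea is just linear interpolation between two equally sized point sets: at t = 0 you get the first arrangement, at t = 1 the second, and values in between morph one into the other. A minimal sketch in plain C (Vuo has built-in interpolation/curve nodes; this just shows the underlying maths):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

typedef struct { float x, y, z; } Point3D;

/* Morph point set `a` into point set `b`: for each point, move a
   fraction t of the way from a[i] toward b[i]. */
void lerpPoints(const Point3D *a, const Point3D *b, Point3D *out,
                size_t count, float t)
{
    for (size_t i = 0; i < count; ++i) {
        out[i].x = a[i].x + (b[i].x - a[i].x) * t;
        out[i].y = a[i].y + (b[i].y - a[i].y) * t;
        out[i].z = a[i].z + (b[i].z - a[i].z) * t;
    }
}
```

Feeding t through an easing curve instead of linearly gives the smoother "various sorts" of transitions.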

Can we interact with the mesh data directly in Vuo?

Would I be correct in surmising that handling a lot of points, lines, and faces in a mesh object is faster than manipulating lists of 3D and 4D points in loops? Or is that not necessarily the case at all?

This is an open-ended feature request, but I agree that there’s plenty of room for more documentation on data types.

  • The manual could talk more about the widely used data types like objects and layers.
  • The node documentation could talk about types that are specific to node sets.
  • The tutorials perhaps could provide more information about data types.

I like the idea of documentation being originated by the community. That way it answers the questions that people really want to know about.

… like the nice diagram by @MartinusMagneson. Good that it will be here as a reference until we are able to add more official documentation.

I’ve opened this feature request for voting. Since it is open-ended, I think what we’ll do is make an initial pass at the manual and/or node documentation. From there, the community can create more specific feature requests as the need arises.

@useful_design, to understand meshes it helps to understand the OpenGL rendering pipeline.
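As a very rough model of the idea (illustrative plain C, not actual OpenGL API calls): early in the pipeline, a mesh's vertex array and element (index) array are assembled into primitives, which is why a mesh is more than a bare list of points; the element list tells the GPU which points form each triangle.

```c
#include <assert.h>
#include <stddef.h>

typedef struct { float x, y, z; } Vertex;
typedef struct { Vertex a, b, c; } Triangle;

/* Toy "primitive assembly": group indexed vertices into triangles,
   conceptually what an indexed triangle draw does on the GPU.
   Returns the number of triangles written to `out`. */
size_t assembleTriangles(const Vertex *vertices,
                         const unsigned *elements, size_t elementCount,
                         Triangle *out)
{
    size_t triCount = elementCount / 3;
    for (size_t i = 0; i < triCount; ++i) {
        out[i].a = vertices[elements[3 * i + 0]];
        out[i].b = vertices[elements[3 * i + 1]];
        out[i].c = vertices[elements[3 * i + 2]];
    }
    return triCount;
}
```

Note how four shared vertices and six indices yield a two-triangle quad; sharing vertices through the index list is what makes meshes compact.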

Can anyone suggest a good reference for beginners on the OpenGL rendering pipeline with lots of diagrams?

I think that comment is probably on the money, @jstrecker. I really don’t know much at all about meshes, and people have pointed me to the OpenGL Orange Book in answer to my questions ever since I started down the QC road. Meshes seem to be at the nexus of much of Vuo’s rendering-related nodes. I’d love to think that one day I’d be able to write a node that could manipulate a mesh directly.

While I’m looking at OpenGL books and blogs: what version of OpenGL does Vuo implement? As I understand it, that’s something Vuo has control over, and it leap-frogged QC’s older OpenGL implementation when it was first released?