Very preliminary question: wondering what the chances are of integrating something like this into Vuo, as a plugin, I guess. Or other possibilities?
The last time I used Box2D it was a raw kludge, and it was someone else’s role on the team to do that part: two laptops, laptop 1 running a Max/Jitter patch for IR tracking and sending Box2D data as OSC over Ethernet to laptop 2, which ran graphics out of Quartz Composer. (I could, and at times did, run everything on one laptop, but the distance from the performer and tracking rig to the control booths created the need for two.)
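For anyone rebuilding a bridge like that, an OSC message is simple enough to pack by hand; here is a minimal sketch using only Python’s standard library. The address pattern and the coordinate values are made up for illustration — this is not the original patch, just the wire format.

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Pack an OSC message whose arguments are all 32-bit big-endian floats."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# e.g. one tracked point in normalized coordinates (address is invented):
packet = osc_message("/tracker/point", 0.25, 0.75)
# send with: socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (host, port))
```

Sending one UDP datagram per frame like this is roughly what Max’s `udpsend` does under the hood, so a receiver built this way interoperates with it.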
This, by the way, was a show for kids. DIY all the way – the tracker was an IR bulb inside a ping-pong ball with a battery pack, attached to a trombone slide with electrical tape. The camera was a hacked PlayStation Eye. (It would be nice to have a more elegant solution for this setup.) We did our little interactive media show at grade schools as well as for full orchestra. Box2D was driving a starry sky, and the trombonist could swing his trombone around and the stars would react with physics. Effective – kids loved it, it was magic to them. We also made “magic wands” for the kids, with ping-pong ball trackers attached to little wand handles.
I am currently in discussions about how to revive some of this.
(Another thing was a pre-show drawing table where kids would draw or move objects around on a plexiglass window, a webcam below would record to video, then we would use the videos as a section of the graphics for the music performance that followed. Satisfying for all to see the videos placed immediately into a performance context.)
My experience thus far with Vuo is that it might take a while (for me) to get something working, but the results could be solid. I haven’t dug much into 2.2 yet – it looks great, especially for app building! (Part 2 of this would be a separate “controller” app running alongside the projected graphics – I have seen in the forum here that others also use this approach. Part 3 would be outputting a video recording of the show graphics, hopefully without choking a 2014 MBP.)
I think it’s more a question of whether Box2D is the correct solution for physics calculations in the first place. Since Vuo already has a lot of the base components you’d want from Box2D (layers, bounding boxes, fast memory management, etc.), I’d wager it would be a neater and better solution to implement a native system – especially when considering 3D as well.
The physics calculations themselves are pretty straightforward; it’s all the stuff around them that is the hassle. I have tried to make a few physics-ish nodes, but the issue is mostly how to design a good user experience: do you want a node that takes layers/objects as input and manipulates them, or do you want to generate layers from a master node? Do you want a “world” node to connect layers to, or could a “Make Layer/Object with Physics” node be simpler? Knitting together a nice workflow is probably the largest piece of the puzzle, but that’s an issue if you use Box2D as well.
Good questions, and thanks so much. I need to think about this. Exciting to think it could all be done in Vuo, for sure.
The first goal would be to replicate what we had – a very simple setup compared to what is possible for games, etc.: a tracking point pushing a bunch of stars (200 or 300?) around the screen, with the stars having inertia, bumping into each other, and so on. It was the audience’s connection with it as “interactive” that made it work. Simple but effective.
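The “tracked point pushes stars with inertia” part really is only a few lines of math per frame. A rough sketch in Python – the falloff radius, push strength, and damping constants are invented for illustration, and pairwise star-to-star collisions are omitted:

```python
import math

def step_stars(stars, tracker, dt=1/60, push=0.5, damping=0.98):
    """stars: list of [x, y, vx, vy] in normalized coordinates.
    The tracked point repels nearby stars; velocity damping gives
    inertia that gradually settles back to rest."""
    tx, ty = tracker
    for s in stars:
        dx, dy = s[0] - tx, s[1] - ty
        dist = math.hypot(dx, dy) or 1e-6
        if dist < 0.3:                      # only nearby stars feel the push
            f = push * (0.3 - dist) / dist  # stronger the closer the star is
            s[2] += f * dx * dt
            s[3] += f * dy * dt
        s[2] *= damping                     # inertia with gradual decay
        s[3] *= damping
        s[0] += s[2] * dt
        s[1] += s[3] * dt
    return stars
```

At 200–300 stars this is trivially cheap per frame; the expensive part of a full engine (broad-phase collision, constraint solving) is exactly what a simple show like this may not need.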
So I can imagine the physics in Vuo, but it looks like openCV blob/point tracking hasn’t been rolled out, right? Then I guess, what, openFrameworks or Jitter or some other tool to port tracked x/y coordinates to Vuo? Recommendations?
@jersmi, blob tracking would be great in Vuo but it’s not here yet. A workaround that could work in particular situations – like distinguishing whether the tracking point hits a body over a contrasting background, say a star over a dark sky – is “Sample Color from Image”. I used that in a composition to check whether the tracked point coming from a Leap Motion device was inside or outside a black-and-white mask. It could also work if you want to check whether you are pointing at a coloured shape or not.
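The idea behind that workaround, reduced to plain Python over a grayscale image stored as a 2D list (the threshold value is arbitrary, and real compositions would sample the actual rendered image instead):

```python
def point_hits_star(image, x, y, threshold=0.5):
    """image: rows of brightness values in 0..1; (x, y) normalized 0..1.
    Returns True if the sampled pixel is bright, i.e. the tracked point
    is over a star rather than the dark sky."""
    h = len(image)
    w = len(image[0])
    px = min(int(x * w), w - 1)   # clamp so x == 1.0 stays in bounds
    py = min(int(y * h), h - 1)
    return image[py][px] > threshold

sky = [[0.0, 0.0, 0.9],
       [0.0, 0.0, 0.0]]
```

The same sample-and-threshold trick works with a separate black-and-white mask image, which is how the Leap composition described above used it.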
Appreciate the suggestion. What is the Leap’s range? This was a performer (and audience) drawing on screen or interacting with the stars, etc. (thus the physics) using tracked IR lights. IR tracking was the most robust option at the time for various lighting situations, distance from camera, etc. I think the hack would still be worth comparing – a piece of floppy disk over the lens of a camera with its IR filter removed. Kinect v2 also looks promising.
The reported range is 60 cm, but in my experience this wasn’t a huge limit, provided the installation setup is right. My setup had the user pointing a finger towards a huge mapped projection. If that is your situation too, Vuo’s Leap nodes are the best out-of-the-box solution, as you get an x/y position for where the user is pointing. The latest Vuo release supports Kinect v2, but I see that as less useful for this specific task.
Feel free to vote for these feature requests if you’d like us to prioritize them. We welcome your input on how you’d like us to approach physics implementations in Vuo (like Magneson mentioned). And, of course, we welcome other feature requests that could support your work.
After quite some thought about it, I realized the world node would maybe make the most sense as a kind of window/render property node (or something more accurate than that). This would be equally true for physics, particles, and in the future maybe volumetrics and procedurals as well. Layers and objects should be kept as-is, but adding “wrapper” nodes that attach additional properties to layers/objects would deal with the necessities. Hard to explain – maybe this helps:
An approach like this would take advantage of already implemented solutions, and would make learning how to use it simple, as you only add on to an existing base. If you think about it in the context of “Add [Physics/Particles/Volumetrics/Procedurals] to Object”, it would be a drop-in solution for some pretty nifty stuff in existing projects/compositions.
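To illustrate the wrapper idea outside of Vuo (a Vuo composition isn’t written as code, so every name here is invented): the existing layer type stays untouched, and the wrapper only adds the extra state a world/solver node would need.

```python
from dataclasses import dataclass

@dataclass
class Layer:          # stand-in for an existing Vuo layer
    x: float = 0.0
    y: float = 0.0

@dataclass
class PhysicsLayer:   # the "Add Physics to Layer" wrapper:
    layer: Layer      # the wrapped layer is kept as-is;
    vx: float = 0.0   # the wrapper only adds velocity and mass,
    vy: float = 0.0   # which only the world node ever reads
    mass: float = 1.0

    def step(self, dt: float, fx: float = 0.0, fy: float = 0.0):
        """One Euler integration step under an external force."""
        self.vx += fx / self.mass * dt
        self.vy += fy / self.mass * dt
        self.layer.x += self.vx * dt
        self.layer.y += self.vy * dt
```

The point of the pattern is that anything downstream that only knows about `Layer` keeps working unchanged – which is the “drop-in for existing compositions” property described above.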
You also get rid of an additional layer of complexity by using objects/layers as the base for forces. This could perhaps be a generic object/point(-list) port, though, for the cases where you only need points. It would also prime the solution for using volumetrics as flow fields with 2D/3D gradient noise (the Density port being a generic image/volumetric/object port).
I also think simplification is key to a good implementation. Is repulsion, for instance, necessary, or could it just be negative attraction?
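The sign flip does fall out of the same formula, so one node could cover both. A tiny sketch to make that concrete – the inverse-square falloff is an arbitrary choice, not a claim about what Vuo should use:

```python
import math

def attraction(px, py, ax, ay, strength):
    """Force on a point at (px, py) toward an attractor at (ax, ay),
    with inverse-square falloff. A negative strength turns the very
    same node into a repulsor - no separate repulsion node needed."""
    dx, dy = ax - px, ay - py
    r2 = dx * dx + dy * dy or 1e-9   # guard against the zero-distance case
    r = math.sqrt(r2)
    f = strength / r2
    return (f * dx / r, f * dy / r)  # force vector pointing at the attractor
```

So a single `strength` input covering negative values keeps the node count down, at the small UX cost of users needing to know that negative means repel.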
I’m starting to ramble, there is a lot to this, but maybe it could be a solution?