However, pending that, I’m working on something where I want to find the first non-transparent pixel in the top-left of an image. I need to export the .app, so connecting other tools is a no-go for this particular question. Basically a cheap ‘blob tracker’.
A shader would be a nice, efficient way to do this, but I can’t work out how I’d get the 2D coordinates back out of the shader.
Another approach might be to somehow use the Resize Image node with Proportional mode, but I have yet to find a way to make that work.
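To put it concretely, what I’m after is the equivalent of this brute-force scan (a sketch in C, assuming a row-major RGBA8 buffer; the function name is mine, not a Vuo API):

```c
#include <stddef.h>

/* Sketch: find the first pixel with non-zero alpha, scanning row by row
 * from the top-left. Assumes a row-major RGBA8 buffer (4 bytes per pixel,
 * alpha last). Returns 1 and writes the coordinates, or 0 if the image
 * is fully transparent. */
int first_opaque_pixel(const unsigned char *rgba, int width, int height,
                       int *out_x, int *out_y)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned char alpha = rgba[(size_t)(y * width + x) * 4 + 3];
            if (alpha != 0) {
                *out_x = x;
                *out_y = y;
                return 1;
            }
        }
    }
    return 0;
}
```

Easy on the CPU, but O(width × height) in the worst case, which is why I’d love a smarter route.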
I don’t think Resize Image will help you here; that node doesn’t analyze the image content, it just scales and crops the image based on its dimensions. Now that you mention it, the documentation is ambiguous about that; we’ll work on clarifying it.
Thanks @jmcc. I figured out it was just describing how it crops, not doing anything with awareness of opacity values.
I am working on some other approaches. **Sample Color from Image** is what I’m using at the moment, but it would be great to know what makes that patch run under the hood, i.e. how it gets a single value from many pixels.
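My guess at what it does under the hood, purely an assumption until I read the source, is a plain per-channel average over the sampled region, something like this sketch (row-major RGBA8 assumed; the function name is mine):

```c
#include <stddef.h>

/* Sketch of how a patch might reduce many pixels to one value:
 * average each channel over a rectangular region [x0, x1) x [y0, y1).
 * Assumes a row-major RGBA8 buffer; this is my guess at the approach,
 * not Vuo's actual code. */
void average_region(const unsigned char *rgba, int width,
                    int x0, int y0, int x1, int y1,
                    double out_rgba[4])
{
    double sum[4] = {0, 0, 0, 0};
    long count = 0;
    for (int y = y0; y < y1; y++) {
        for (int x = x0; x < x1; x++) {
            const unsigned char *p = rgba + (size_t)(y * width + x) * 4;
            for (int c = 0; c < 4; c++)
                sum[c] += p[c];
            count++;
        }
    }
    for (int c = 0; c < 4; c++)
        out_rgba[c] = count ? sum[c] / count : 0.0;
}
```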
Thanks @jmcc. I didn’t realize it’s all Open Source. Very, very cool. I might just have to write some C.
Thanks also @useful_design. I did build something like this in my first pass: a gradient with blue on the x axis and red on the y axis. It would be perfect for blobs, but for my particular need I’m after the top-left corner. And since I’m looking at what is basically a stroked shape created by frame diffs, this approach also gets a little wonky.
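For completeness, here’s the CPU equivalent of that gradient trick as I understand it: weight the x and y ramps by alpha and average, which gives the blob’s alpha-weighted centroid (a sketch; the function name is mine):

```c
#include <stddef.h>

/* Sketch of the gradient trick on the CPU: treat x and y as the two
 * colour ramps, weight them by alpha, and average. The result is the
 * alpha-weighted centroid of the blob. Assumes a row-major RGBA8 buffer.
 * Returns 1 on success, 0 if the image is fully transparent. */
int alpha_centroid(const unsigned char *rgba, int width, int height,
                   double *cx, double *cy)
{
    double sum_x = 0, sum_y = 0, sum_a = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            double a = rgba[(size_t)(y * width + x) * 4 + 3];
            sum_x += a * x;
            sum_y += a * y;
            sum_a += a;
        }
    }
    if (sum_a == 0)
        return 0;
    *cx = sum_x / sum_a;
    *cy = sum_y / sum_a;
    return 1;
}
```

You can see why it’s wonky for a stroked shape: the centroid of a ring or outline can easily land on a fully transparent pixel.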
I’m working on an iterative divide-and-conquer approach for efficiency, rather than simply scanning every pixel.
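The idea, roughly sketched in C (all names are mine): recursively split the image into quadrants and descend into the quadrant nearest the top-left that contains any opacity. The region test below is brute force, so this sketch alone isn’t faster than scanning; in a real version that test would read one texel from a pre-downsampled copy (a mipmap-style pyramid), making each level cheap.

```c
#include <stddef.h>

/* Brute-force "does this region contain any alpha?" test over
 * [x0, x1) x [y0, y1). In a real version this would be a single lookup
 * into a pre-downsampled max-alpha pyramid. */
static int region_has_alpha(const unsigned char *rgba, int width,
                            int x0, int y0, int x1, int y1)
{
    for (int y = y0; y < y1; y++)
        for (int x = x0; x < x1; x++)
            if (rgba[(size_t)(y * width + x) * 4 + 3] != 0)
                return 1;
    return 0;
}

/* Returns 1 and writes the located pixel, 0 if the region is transparent.
 * Quadrants are visited roughly nearest-to-top-left first, so the result
 * favours (but doesn't strictly guarantee) the pixel closest to (0, 0). */
int find_opaque_quadtree(const unsigned char *rgba, int width,
                         int x0, int y0, int x1, int y1,
                         int *out_x, int *out_y)
{
    if (!region_has_alpha(rgba, width, x0, y0, x1, y1))
        return 0;
    if (x1 - x0 == 1 && y1 - y0 == 1) {   /* down to a single pixel */
        *out_x = x0;
        *out_y = y0;
        return 1;
    }
    int mx = x0 + (x1 - x0 + 1) / 2;      /* split point; top/left halves */
    int my = y0 + (y1 - y0 + 1) / 2;      /* take the extra pixel if odd  */
    int quads[4][4] = {
        { x0, y0, mx, my },   /* top-left     */
        { mx, y0, x1, my },   /* top-right    */
        { x0, my, mx, y1 },   /* bottom-left  */
        { mx, my, x1, y1 },   /* bottom-right */
    };
    for (int i = 0; i < 4; i++)
        if (quads[i][0] < quads[i][2] && quads[i][1] < quads[i][3] &&
            find_opaque_quadtree(rgba, width, quads[i][0], quads[i][1],
                                 quads[i][2], quads[i][3], out_x, out_y))
            return 1;
    return 0;
}
```

With a pyramid backing the region test, each level of the descent touches a constant number of texels, so locating a pixel is roughly O(log(width × height)).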
Basic blob tracking of sorts could be terribly handy for many people, I assume. If I come up with anything generically useful, I’ll try to share it.