Hmm, it does seem as though Steve suggests making GUI elements selectable when publishing…which really does sound very straightforward. Nice approach.
Personally, I just think that straightforward, easy for the team to code in a robust way, and basic to start with…is a good path for any of this. You start with a small set of GUI elements, and then more user feedback happens. Knob, slider, color picker, string entry, maybe file drop, etc. Maybe not even all of that initially; I don’t know what would be reasonable.
There’s less potential for creating a very involved system by attempting to handle every single scenario, only to have to work backwards from it for some reason later.
Given recent discussion, and coincidentally a thought I was having about Vuo in past days, maybe it would be time well spent to document using a Vuo graph within an Xcode-based app, using Interface Builder to set up a GUI that sends data to published ports.
I was just looking at the Vuo docs about this, but hadn’t dug in…the last time I tried setting up the SDK on a new system, I just couldn’t get it working as I had previously. Hopefully this time it will work out.
The Max/MSP preset storage system is pretty useful. I may be wrong, but one distinction I see in the discussion is that the Max approach works through a node on the editor surface that stores all the values, whereas this proposal seems to work by publishing ports first and then having the GUI widget render somehow, somewhere…which is a whole other ball of wax.
It may be the best approach. Sometimes!
There have been several workflow discussions about GUI in past days, and I think there are a handful of somewhat different “solutions” to the “problem”. Quotes emphasized! Because I think the perception of the best route can be highly personal, and what seems like the best route can also be very context-dependent, even for the same user.
I find there to be very valid use cases for the differing perspectives people have had about how to approach this issue in general. Sometimes it is just best to be able to approach things multiple ways.
There’s a use case for a class of GUI elements on the editor, for GUI-element approaches in the render windows, for being able to code GUI elements in Vuo itself for use within Vuo, and for really fleshed-out Xcode examples of how to work with Interface Builder and published ports, and I’m sure there are other approaches I’m missing.
I think a slider and a knob on the editor could be very low-hanging fruit, as some sort of editor GUI widget class. Starting slow on that seems good, and it would immediately be helpful in many settings. Maybe a color picker or something.
Having robust GUI widget selection within the app exporter seems like another great approach.
As for the idea of having a GUI made from Vuo nodes render in a separate window, where you build your hit tests and sliders in Vuo itself, I think much of that can be done from scratch now. I’m unsure what the biggest weak links would be with that type of approach. I think you can even do file drops and such with the stock nodes, but I don’t remember for sure.
I think some of the ideas for various GUI functionality seem really time-consuming to code, though I’m unsure; it depends on many factors. Whereas some things seem like they could have a very basic initial implementation in a few hours, with a few days after to polish it up.
Anyway, just some thoughts to cast into the void :-)