A node that interfaces with voice recognition tools (such as Siri on Sierra)
We looked into SiriKit, the Apple framework for interfacing with Siri.
The documentation describes a limitation I want to make sure you're aware of: SiriKit can only be used for certain specified topics — VoIP calling, Messaging, Payments, Photos, Workouts, Ride booking, Restaurant reservations (Maps only), and CarPlay (automotive vendors only).
Also, just so you know, the process for integrating SiriKit with a composition / exported app, as described in the documentation, would be onerous. Unlike a typical framework that is simply dropped in when compiling/linking the composition, SiriKit requires significant changes to the compiling/linking process. So this would not be a simple 1-dot-complexity feature request. (An alternative voice recognition technology, OS X Dictation, probably would be 1-dot.)
So, what do you think? Could you describe what you hope to do with this node?
Thanks for checking Siri out. It sounds like there would be little point in integrating Siri into Vuo if OS X Dictation is easier. Does OS X Dictation use any Siri backend services, or is it a competing technology? Would Siri eventually take over from Dictation?
I will change the request title to simply "Voice Command Node" and leave the implementation up to Team Vuo. There may even be a better, cross-platform solution out there anyway.
It really depends on what you want the commands to do.
Siri can handle commands for the topics I listed above.
Dictation, which was added in OS X 10.8, can handle commands directed at the system or at specific applications, such as "Undo that", "Get new mail", or "What time is it?" As the name suggests, it can also convert speech to text.
Or if those don’t cover the uses you had in mind, possibly there are other options available besides Apple’s.
I think a simple text-to-speech approach would work well.
We can always program our own special words into the system ourselves from within Vuo! Super great, @jstrecker!
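(For anyone curious what the text-to-speech side looks like in code: here's a minimal sketch using Apple's built-in NSSpeechSynthesizer API from AppKit. This is just an illustration of the underlying macOS API, not how a Vuo node would actually be implemented.)

```swift
import AppKit

// Minimal sketch: speak a phrase using the built-in macOS
// text-to-speech API, NSSpeechSynthesizer.
let synthesizer = NSSpeechSynthesizer()
synthesizer.startSpeaking("Hello from Vuo")

// In a command-line context, keep the process alive long enough
// for the audio to finish playing.
RunLoop.current.run(until: Date(timeIntervalSinceNow: 3))
```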
Aha, there's another built-in Apple API for voice commands: Speech Recognition. Speech Recognition would work better for commands than Dictation because it listens only for a specified set of commands, so it's not susceptible to silly dictation mistakes like "turn right" → "tern write".
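(To make the "specified set of commands" point concrete, here's a rough sketch using NSSpeechRecognizer, the AppKit class behind this API. The command list and the print statement are just illustrative; an actual Vuo node would fire an event instead.)

```swift
import AppKit

// Sketch: listen for a fixed command vocabulary with NSSpeechRecognizer.
// Only the listed commands can be matched, so "turn right" can't be
// misrecognized as "tern write".
class CommandListener: NSObject, NSSpeechRecognizerDelegate {
    let recognizer = NSSpeechRecognizer()

    override init() {
        super.init()
        recognizer?.commands = ["turn right", "turn left", "stop"]
        recognizer?.delegate = self
        recognizer?.startListening()
    }

    // Called whenever one of the specified commands is heard.
    func speechRecognizer(_ sender: NSSpeechRecognizer,
                          didRecognizeCommand command: String) {
        print("Recognized command: \(command)")
    }
}

let listener = CommandListener()
RunLoop.current.run()  // keep listening
```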
I’ll update this feature request and open it for voting.
For Dictation, I’ve created a separate feature request: https://community.vuo.org/t/-/5641