Questions about Kinect v2 (XBox One) support

Hi, I’ve started playing with the Kinect v2 and have some questions about how it can work with Vuo.

Let’s start with the data I’m able to read from NI mate. On the left is the “Depth” image, encoded as grayscale (0–255); on the right is the “Encoded Depth” image, which uses the REP118 protocol with the R and G channels encoding depth (0–65025).

![kinect views][kinect views]
[kinect views]: https://vuo.org/sites/default/files/discussion/kinect_v2.png
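
In case it’s useful, here’s roughly the decode step I have in mind. I’m assuming the packing is coarse depth in R and the remainder in G (i.e. depth ≈ R × 255 + G, which lines up with the 0–65025 range above); the exact formula would need to be checked against the NI mate docs:

```cpp
// Hypothetical decode of NI mate's "Encoded Depth" image.
// ASSUMPTION: coarse depth in R, remainder in G, so depth = R * 255 + G.
// Verify the exact packing against the NI mate / REP118 documentation.
#include <cstdint>
#include <vector>

std::vector<uint16_t> decodeDepth(const uint8_t *rgba, int width, int height)
{
    std::vector<uint16_t> depth(static_cast<size_t>(width) * height);
    for (size_t i = 0; i < depth.size(); ++i)
    {
        uint8_t r = rgba[i * 4 + 0];  // high-order portion of the depth value
        uint8_t g = rgba[i * 4 + 1];  // low-order portion of the depth value
        depth[i] = static_cast<uint16_t>(r) * 255 + g;
    }
    return depth;
}
```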

I’m able to receive the simpler “Depth” image via Syphon and then displace an object with it. It works fine, but the amount of detail is really low.

1. When operating with Kinect v1, does the Vuo.Kinect node decode more detail than I’m currently able to read with the grayscale image?

2.a) Thinking about adding support for Kinect v2 and REP118-encoded depth: I think it should be easy to write a node that decodes RGB Syphon images onto a 3D plane, but won’t performance degrade because of the Syphon protocol overhead? Shouldn’t it rather use system drivers to fetch data directly from the Kinect v2 over USB? (That would be beyond my capabilities.)

2.b) Would Vuo be able to process 30 frames per second with this amount of 3D data, or should I rather think about using a lower-level tool like openFrameworks? Take a look at this image to see how detailed the mesh is:

![characters][characters]
[characters]: https://vuo.org/sites/default/files/discussion/kinect_v2_characeters.png

> I’m able to receive the simpler “Depth” image via Syphon and then displace an object with it. It works fine, but the amount of detail is really low.

That would be because Syphon is limited to 8bpc, whereas the Kinect is providing higher bit depth images.
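
To put rough numbers on it: the Kinect v2 senses depth over roughly 0.5–4.5 m, so squeezing that ~4000 mm range into 8 bits gives about 4000 / 256 ≈ 16 mm per grayscale step, whereas 16 bits would give about 4000 / 65536 ≈ 0.06 mm per step. That’s why the displacement looks so coarse.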

> 1. When operating with Kinect v1, does the Vuo.Kinect node decode more detail than I’m currently able to read with the grayscale image?

The node receives 16bpc images from the Kinect v1. However, since the sensor hardware is better on the Kinect v2 than the Kinect v1, I’m not sure how Kinect v1 at 16bpc would compare to what you’re getting now with Kinect v2 at 8bpc.

> 2.a) Thinking about adding support for Kinect v2 and REP118-encoded depth: I think it should be easy to write a node that decodes RGB Syphon images onto a 3D plane, but won’t performance degrade because of the Syphon protocol overhead? Shouldn’t it rather use system drivers to fetch data directly from the Kinect v2 over USB? (That would be beyond my capabilities.)

Yes, native Kinect support in Vuo should use fewer system resources and provide better-quality images than importing from NI mate + Syphon. There’s an open feature request for this: “Add support for Xbox One Kinect (Kinect V2)”. I expect we’d use OpenKinect/libfreenect2, the open source driver for the Kinect for Windows v2 device.
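
For reference, here’s a minimal sketch of grabbing one depth frame with libfreenect2, modeled on the library’s Protonect example (error handling and pipeline selection omitted):

```cpp
#include <iostream>
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>

int main()
{
    libfreenect2::Freenect2 freenect2;
    if (freenect2.enumerateDevices() == 0)
    {
        std::cerr << "No Kinect v2 found." << std::endl;
        return 1;
    }

    libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();

    // Listen for depth frames only; color frames are simply dropped
    // since no color listener is registered.
    libfreenect2::SyncMultiFrameListener listener(libfreenect2::Frame::Depth);
    dev->setIrAndDepthFrameListener(&listener);
    dev->start();

    libfreenect2::FrameMap frames;
    listener.waitForNewFrame(frames, 10 * 1000);  // 10-second timeout

    // Depth frames are 512x424, one 32-bit float per pixel, in millimeters.
    libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
    float firstPixelMm = reinterpret_cast<float *>(depth->data)[0];
    std::cout << depth->width << "x" << depth->height
              << ", first pixel: " << firstPixelMm << " mm" << std::endl;

    listener.release(frames);
    dev->stop();
    dev->close();
    return 0;
}
```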

> 2.b) Would Vuo be able to process 30 frames per second with this amount of 3D data, or should I rather think about using a lower-level tool like openFrameworks?

Since Vuo does mesh deformations on the GPU, it should be able to handle large meshes like that quickly, though in general performance is hard to predict because it depends on a number of factors, including your computer. You’re using Displace 3D Object with Image now? You could test by feeding it a series of images at the size and level of detail you’d expect from the Kinect v2.
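
For a rough sense of scale: a Kinect v2 depth frame is 512×424 ≈ 217,000 points, so 30 fps works out to about 6.5 million vertices per second, which is modest by GPU standards.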
