Azure Kinect Tips & Tricks

Well, I think it’s time to share some tips, tricks and examples regarding the K4A-asset, as well. Although similar to the K2-asset, this package has some features that differ significantly from the previous one: it supports multiple sensors in one scene, as well as different types of depth sensors.

The configuration and detection of the available sensors is less automatic than before, but allows more complex multi-sensor configurations. In this regard, check the first 4-5 tips in the list below. Please also consult the online documentation, if you need more information regarding the available demo scenes and depth sensor related components.

For more tips and tricks, please look at the Kinect-v2 Tips, Tricks & Examples. Those tips were written for the K2-asset, but for the sake of backward compatibility many of the demo scenes, components and APIs mentioned there are the same or very similar in the K4A-asset.

Table of Contents:

How to reuse the K4A-asset functionality in your Unity project
How to set up the KinectManager and Azure Kinect interface in the scene
How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes
How to use RealSense sensor instead of Azure Kinect in the demo scenes
How to set up multiple Azure Kinect (or other) sensors in the scene
How to remove the SDKs and sensor interfaces you don’t need
Why is the old K2-asset still around
How to playback a recording instead of getting data from a connected sensor

How to reuse the K4A-asset functionality in your Unity project

Here is how to reuse the K4A-asset scripts and components in your Unity project:

1. Copy folder ‘KinectScripts’ from the AzureKinectExamples-folder of this package to your project. This folder contains all needed components, scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the AzureKinectExamples-folder of this package to your project. This folder contains some needed libraries and resources.
3. Copy the sensor-SDK specific sub-folders (Kinect4AzureSDK, KinectSDK2.0 & RealSenseSDK2.0) from the AzureKinectExamples/SDK-folder of this package to your project. These folders contain the plugins and wrapper classes for the respective sensor types.
4. Wait until Unity detects, imports and compiles the newly copied resources and scripts.
5. Please do not share the KinectDemos-folder in source form or as part of public repositories.
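
Once Unity compiles the copied scripts, you can do a quick sanity check from code. Here is a minimal sketch, assuming the K4A-asset scripts live in the ‘com.rfilkov.kinect’ namespace and that the KinectManager exposes ‘Instance’, ‘IsInitialized()’ and ‘GetUsersCount()’, like the K2-asset did; please verify the exact names in KinectManager.cs:

    using UnityEngine;
    using com.rfilkov.kinect;  // assumed namespace of the K4A-asset scripts

    public class KinectSetupCheck : MonoBehaviour
    {
        void Update()
        {
            // KinectManager.Instance, IsInitialized() and GetUsersCount() are assumptions,
            // modeled after the K2-asset API; verify the exact names in KinectManager.cs
            KinectManager kinectManager = KinectManager.Instance;

            if (kinectManager != null && kinectManager.IsInitialized())
            {
                Debug.Log("Sensor interface is running. Detected users: " + kinectManager.GetUsersCount());
            }
        }
    }

Attach the script to any object in a scene that also contains the KinectManager, run the scene and watch the console for the log messages.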

How to set up the KinectManager and Azure Kinect interface in the scene

Please see the hierarchy of objects in any of the demo scenes, as a practical implementation of this tip:

1. Create an empty KinectController-game object in the scene. Set its position to (0, 0, 0), rotation to (0, 0, 0) and scale to (1, 1, 1).
2. Add the KinectManager-script from the AzureKinectExamples/KinectScripts-folder as a component to the KinectController game object.
3. Select the frame types you need to get from the sensor – depth, color, IR, pose and/or body. Enable the synchronization between frames, as needed. Check the user-detection settings and change them if needed, as well as the on-screen info you’d like to see in the scene.
4. Create a Kinect4Azure-game object as a child of KinectController in the Hierarchy. Set its position and rotation to match the Azure Kinect sensor’s position & rotation in the world. For a start, you can set only the position in meters, and then estimate the sensor rotation from the pose frames later, if you like.
5. Add the Kinect4AzureInterface-script from the AzureKinectExamples/KinectScripts/Interfaces-folder to the newly created Kinect4Azure-game object.
6. Change the default settings of the component, if needed. For instance, you can select a different color camera mode, depth camera mode or device sync mode, or adjust the min & max distances used for creating the depth-related images.
7. If you’d like to replay a previously saved recording file, set ‘Device streaming mode’ to ‘Play recording’ and enter the full path to the recording file in the ‘Recording file’-setting.
8. Run the scene to check if everything works as expected.
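
If you prefer to create the same hierarchy from code (for instance in procedurally built scenes), here is a minimal sketch of the steps above. The component names are the ones mentioned in the steps; the ‘com.rfilkov.kinect’ namespace is an assumption, and all interface settings keep their default values, so treat it as a starting point rather than a finished implementation:

    using UnityEngine;
    using com.rfilkov.kinect;  // assumed namespace of the K4A-asset scripts

    public class KinectSceneSetupSketch : MonoBehaviour
    {
        void Awake()
        {
            // empty KinectController-object at the world origin (step 1)
            GameObject controller = new GameObject("KinectController");
            controller.transform.position = Vector3.zero;
            controller.transform.rotation = Quaternion.identity;

            // Kinect4Azure child object, positioned where the physical sensor is, in meters (step 4)
            GameObject sensorObj = new GameObject("Kinect4Azure");
            sensorObj.transform.SetParent(controller.transform, false);
            sensorObj.transform.localPosition = new Vector3(0f, 1.0f, 0f);  // example sensor height

            // Kinect4AzureInterface-component on the child object (step 5); camera modes, sync mode
            // and min/max distances keep their defaults here - check the exact field names
            // in Kinect4AzureInterface.cs, if you need to change them from code
            sensorObj.AddComponent<Kinect4AzureInterface>();

            // KinectManager-component added last, so the sensor interface already exists
            // when the manager initializes (steps 2 & 3 keep their default settings)
            controller.AddComponent<KinectManager>();
        }
    }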

How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the KinectV2-child object.
5. Set the ‘Device streaming mode’ of its Kinect2Interface-component to ‘Connected sensor’.
6. If you’d like to replay a previously saved recording file, you should play it back in ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0) instead. See the last tip below for details.
7. Run the scene to check if the Kinect-v2 sensor interface is used instead of the Azure Kinect interface.
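
The same switch can also be scripted. Below is a minimal sketch, assuming the sensor interfaces expose a public ‘deviceStreamingMode’ field with enum values matching the Inspector labels above (‘Disabled’, ‘Connected sensor’); these names are assumptions, so please verify them in the interface scripts. The same pattern applies to the RealSenseInterface in the next tip:

    using UnityEngine;
    using com.rfilkov.kinect;  // assumed namespace of the K4A-asset scripts

    public class UseKinectV2Sketch : MonoBehaviour
    {
        // attach this to the KinectController-object; it needs to run before the KinectManager
        // opens the sensors (e.g. via the Script Execution Order settings)
        void Awake()
        {
            // disable the Azure Kinect interface on the Kinect4Azure child object
            Kinect4AzureInterface k4a = GetComponentInChildren<Kinect4AzureInterface>(true);
            if (k4a != null)
            {
                // field & enum names are assumptions, derived from the Inspector labels
                k4a.deviceStreamingMode = KinectInterop.DeviceStreamingMode.Disabled;
            }

            // enable the Kinect-v2 interface on the KinectV2 child object
            Kinect2Interface k2 = GetComponentInChildren<Kinect2Interface>(true);
            if (k2 != null)
            {
                k2.deviceStreamingMode = KinectInterop.DeviceStreamingMode.ConnectedSensor;
            }
        }
    }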

How to use RealSense sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the RealSense-child object.
5. Set the ‘Device streaming mode’ of its RealSenseInterface-component to ‘Connected sensor’.
6. If you’d like to replay a previously saved recording file, set ‘Device streaming mode’ to ‘Play recording’ and enter the full path to the recording file in the ‘Recording file’-setting.
7. Run the scene to check if the RealSense sensor interface is used instead of the Azure Kinect interface.

How to set up multiple Azure Kinect (or other) sensors in the scene

Here is how to set up a 2nd (as well as 3rd, 4th, etc.) Azure Kinect camera interface in the scene:

1. Unfold the KinectController-object in the scene.
2. Duplicate the Kinect4Azure-child object.
3. Set the ‘Device index’ of the new object to 1 instead of 0. Further connected sensors should have device indices of 2, 3, etc.
4. Change the ‘Device sync mode’ of the connected cameras, as needed. The 1st one should be ‘Master’ and the other ones should be ‘Subordinate’, instead of ‘Standalone’.
5. Set the position and rotation of the new object to match the sensor’s position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like.
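
The same can be done from code, too. Here is a minimal sketch that mirrors steps 2-5 above; the ‘deviceIndex’ field name and the ‘com.rfilkov.kinect’ namespace are assumptions, so please verify them in Kinect4AzureInterface.cs:

    using UnityEngine;
    using com.rfilkov.kinect;  // assumed namespace of the K4A-asset scripts

    public class SecondSensorSketch : MonoBehaviour
    {
        // attach this to the KinectController-object; it needs to run before the KinectManager
        // opens the sensors
        void Awake()
        {
            // create the 2nd sensor object as a child of KinectController (step 2)
            GameObject sensor2 = new GameObject("Kinect4Azure (2)");
            sensor2.transform.SetParent(transform, false);

            // position & rotation of the 2nd physical sensor in the world - example values (step 5)
            sensor2.transform.localPosition = new Vector3(1.5f, 1.0f, 0f);
            sensor2.transform.localRotation = Quaternion.Euler(0f, -30f, 0f);

            // the 2nd connected device gets device index 1; further devices get 2, 3, etc. (step 3)
            Kinect4AzureInterface k4a2 = sensor2.AddComponent<Kinect4AzureInterface>();
            k4a2.deviceIndex = 1;  // assumed field name, per the 'Device index' setting above

            // the 'Device sync mode' (step 4) would be set here as well: 'Master' on the 1st sensor,
            // 'Subordinate' on this one; the exact field & enum names are not shown here,
            // please check them in Kinect4AzureInterface.cs
        }
    }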

How to remove the SDKs and sensor interfaces you don’t need

If you work with only one type of sensor (most probably Azure Kinect), here is what to do to get rid of the extra SDKs in the K4A-asset. This will reduce the size of your project and build:

– To remove the RealSense SDK: 1. Delete ‘RealSenseInterface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the RealSenseSDK2.0-folder from AzureKinectExamples/SDK-folder.
– To remove the Kinect-v2 SDK: 1. Delete ‘Kinect2Interface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the KinectSDK2.0-folder from AzureKinectExamples/SDK-folder.
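
If you prefer to do this cleanup from code (for instance as part of an automated project setup), here is a minimal editor-script sketch that removes the RealSense SDK, using Unity’s AssetDatabase API. The paths assume the package was imported at its default location under Assets; adjust them accordingly for the Kinect-v2 SDK:

    using UnityEditor;

    // put this script into an Editor-folder; the paths assume the default import location of the package
    public static class RemoveRealSenseSdkSketch
    {
        [MenuItem("Tools/Remove RealSense SDK (K4A-asset)")]
        public static void RemoveRealSenseSdk()
        {
            AssetDatabase.DeleteAsset("Assets/AzureKinectExamples/KinectScripts/Interfaces/RealSenseInterface.cs");
            AssetDatabase.DeleteAsset("Assets/AzureKinectExamples/SDK/RealSenseSDK2.0");
            AssetDatabase.Refresh();
        }
    }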

Why is the old K2-asset still around

The ‘Kinect v2 Examples with MS-SDK and Nuitrack SDK’-package (or ‘K2-asset’ for short) is still around (and will be around for some time), because it has components and demo scenes that are not available in the new K4A-asset. For instance: the face-tracking components & demo scenes, the hand-interaction components & scenes, the speech-recognition component & scene, the body recorder & player, and the data server and client networking components. This is for various reasons: either the sensor SDK API does not provide this functionality yet, or I have not managed to add it to the K4A-asset yet. As long as these (or replacement) components & scenes are missing from the K4A-asset, the K2-asset will be kept around.

On the other hand, the K4A-asset has significant advantages as well. It works with the most up-to-date sensors (like Azure Kinect & RealSense), allows multi-camera setups, has a better internal structure, and gets better (with more components, functions and demo scenes) with each new release.

How to playback a recording instead of getting data from a connected sensor

The sensor interfaces in the K4A-asset provide the option to play back a recording file, instead of getting data from a physically connected sensor. Here is how to do this for all types of sensor interfaces:

1. Unfold the KinectController-game object in Hierarchy.
2. Select the proper sensor interface object. If you need to play back a Kinect-v2 recording, please skip steps 3, 4 & 5, and look at the note below.
3. In the sensor interface component in Inspector, change ‘Device streaming mode’ from ‘Connected sensor’ to ‘Play recording’.
4. Set ‘Recording file’ to the full path of the previously saved recording. This is an MKV-file (in case of Azure Kinect) or an OUT-file (in case of RealSense sensors).
5. Run the scene to check if it works as expected.

Note: In the case of Kinect-v2, please start ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0) and open the previously saved XEF-recording file. Then go to the Play-tab, press the Connect-button and play the file. The scene utilizing the Kinect2Interface should be running before you start playing the recording file.
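
The playback setup can also be scripted, e.g. when the path to the recording file becomes known only at runtime. Here is a minimal sketch for the Azure Kinect interface, assuming the ‘deviceStreamingMode’ and ‘recordingFile’ field names match the Inspector labels above; please verify the exact names in the interface scripts:

    using UnityEngine;
    using com.rfilkov.kinect;  // assumed namespace of the K4A-asset scripts

    public class PlayRecordingSketch : MonoBehaviour
    {
        public string recordingPath = @"D:\Recordings\capture.mkv";  // example path to a saved MKV-recording

        // attach this to the KinectController-object; it needs to run before the KinectManager
        // opens the sensors (e.g. via the Script Execution Order settings)
        void Awake()
        {
            Kinect4AzureInterface k4a = GetComponentInChildren<Kinect4AzureInterface>(true);
            if (k4a != null)
            {
                // field & enum names are assumptions, derived from the Inspector labels above
                k4a.deviceStreamingMode = KinectInterop.DeviceStreamingMode.PlayRecording;
                k4a.recordingFile = recordingPath;
            }
        }
    }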

 

9 thoughts on “Azure Kinect Tips & Tricks”


  2. Hi, I’m making good use of your asset.

    I’m working on a project that uses two Kinect-v2 sensors, but not at the same time; only one Kinect should be used in a single scene.

    How do I use the desired Kinect, when two Kinect-v2 sensors are connected?

    • Sorry, but the K2-asset does not support multiple sensor setups. In this regard, why would you use two sensors, if only one is used in a single scene?

  3. Hi Rumen F.,
    I’m using Azure Kinect and followed your tips & tricks, then tried to open the second device, but Unity shows some errors: “Can’t create body tracker for Kinect4AzureInterface1!” & “AzureKinectException: result = K4A_RESULT_FAILED”. How can I fix this error for multiple devices?
