Azure Kinect Tips & Tricks

Well, I think it’s time to share some tips, tricks and examples regarding the K4A-asset, as well. This package, although similar to the K2-asset, has some features that significantly differ from the previous one: it supports multiple sensors in one scene, as well as different types of depth sensors.

The configuration and detection of the available sensors are less automatic than before, but they allow more complex multi-sensor configurations. In this regard, check the first 4-5 tips in the list below. Please also consult the online documentation, if you need more information regarding the available demo scenes and the depth-sensor related components.

For more tips and tricks, please look at the Kinect-v2 Tips, Tricks & Examples. They were written for the K2-asset, but for the sake of backward compatibility many of the demo scenes, components and APIs mentioned there are the same or very similar in the K4A-asset.

Table of Contents:

How to reuse the K4A-asset functionality in your Unity project
How to set up the KinectManager and Azure Kinect interface in the scene
How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes
How to use RealSense sensor instead of Azure Kinect in the demo scenes
How to set up multiple Azure Kinect (or other) sensors in the scene
How to remove the SDKs and sensor interfaces you don’t need
Why is the old K2-asset still around
How to playback a recording instead of getting data from a connected sensor
How to set up sensor’s position and rotation in the scene
How to calibrate and set up multiple cameras in the scene
How to get the sensor streams across the network

How to reuse the K4A-asset functionality in your Unity project

Here is how to reuse the K4A-asset scripts and components in your Unity project:

1. Copy folder ‘KinectScripts’ from the AzureKinectExamples-folder of this package to your project. This folder contains all needed components, scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the AzureKinectExamples-folder of this package to your project. This folder contains some needed libraries and resources.
3. Copy the sensor-SDK specific sub-folders (Kinect4AzureSDK, KinectSDK2.0 & RealSenseSDK2.0) from the AzureKinectExamples/SDK-folder of this package to your project. These folders contain the plugins and wrapper classes for the respective sensor types.
4. Wait until Unity detects, imports and compiles the newly detected resources and scripts.
5. Please do not share the KinectDemos-folder in source form or as part of public repositories.

How to set up the KinectManager and Azure Kinect interface in the scene

Please see the hierarchy of objects in any of the demo scenes, as a practical implementation of this tip:

1. Create an empty KinectController-game object in the scene. Set its position to (0, 0, 0), rotation to (0, 0, 0) and scale to (1, 1, 1).
2. Add KinectManager-script from AzureKinectExamples/KinectScripts-folder as component to the KinectController game object.
3. Select the frame types you need to get from the sensor – depth, color, IR, pose and/or body. Enable synchronization between frames, as needed. Check the user-detection settings and change them if needed, as well as the on-screen info you’d like to get in the scene.
4. Create Kinect4Azure-game object as child object of KinectController in Hierarchy. Set its position and rotation to match the Azure-Kinect sensor position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like.
5. Add Kinect4AzureInterface-script from AzureKinectExamples/KinectScripts/Interfaces-folder to the newly created Kinect4Azure-game object.
6. Change the default settings of the component, if needed. For instance, you can select a different color camera mode, depth camera mode or device sync mode, as well as the min & max distances used for creating the depth-related images.
7. If you’d like to replay a previously saved recording file, set ‘Device streaming mode’ to ‘Play recording’ and enter the full path to the recording file in the ‘Recording file’-setting.
8. Run the scene, to check if everything you selected works as expected. A scripted sanity check is sketched below, after this list.
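
Regarding the sanity check mentioned in step 8: the short script below waits for the KinectManager to initialize and logs the head position of the 1st detected user. Treat it only as a sketch: the names KinectManager.Instance, IsInitialized, IsUserDetected and GetUserIdByIndex follow the API as in the K2-asset, and the namespace may differ in your package version, so please check them against the scripts in the KinectScripts-folder. IsJointTracked and GetJointPosition are used the same way as in the background-removal snippet in the comments below.

using UnityEngine;
using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts (check KinectManager.cs)

// Minimal sanity check. Attach it to any game object in a scene that contains the KinectManager.
public class KinectSanityCheck : MonoBehaviour
{
    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;  // assumption: singleton accessor, as in the K2-asset

        if (kinectManager == null || !kinectManager.IsInitialized())
            return;  // the manager is not ready yet (or failed to open the sensor)

        if (kinectManager.IsUserDetected())  // assumption: true, when at least one user is detected
        {
            ulong userId = kinectManager.GetUserIdByIndex(0);  // the 1st detected user

            if (kinectManager.IsJointTracked(userId, KinectInterop.JointType.Head))
            {
                Vector3 headPos = kinectManager.GetJointPosition(userId, KinectInterop.JointType.Head);
                Debug.Log("User " + userId + " head at: " + headPos);
            }
        }
    }
}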

How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the KinectV2-child object.
5. Set the ‘Device streaming mode’ of its Kinect2Interface-component to ‘Connected sensor’.
6. If you’d like to replay a previously saved recording file, you should play it back in ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0).
7. Run the scene, to check if the Kinect-v2 sensor interface is used instead of Azure-Kinect interface.

How to use RealSense sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the RealSense-child object.
5. Set the ‘Device streaming mode’ of its RealSenseInterface-component to ‘Connected sensor’.
6. If you’d like to replay a previously saved recording file, set ‘Device streaming mode’ to ‘Play recording’ and enter the full path to the recording file in the ‘Recording file’-setting.
7. Run the scene, to check if the RealSense sensor interface is used instead of Azure-Kinect interface.

How to set up multiple Azure Kinect (or other) sensors in the scene

Here is how to set up a 2nd (as well as 3rd, 4th, etc.) Azure Kinect camera interface in the scene:

1. Unfold the KinectController-object in the scene.
2. Duplicate the Kinect4Azure-child object.
3. Set the ‘Device index’ of the new object to 1 instead of 0. Further connected sensors should have device indices of 2, 3, etc.
4. Change ‘Device sync mode’ of the connected cameras, as needed. The 1st one should be ‘Master’ and the other ones – ‘Subordinate’, instead of ‘Standalone’.
5. Set the position and rotation of the new object to match the respective sensor’s position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like. A quick way to double check the configured sensor interfaces at runtime is sketched below, after this list.
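
To double check the configured interfaces (see step 5), you could list them from a script at startup. The sketch below is only an illustration: the field names deviceIndex and deviceSyncMode are assumptions, derived from the ‘Device index’ and ‘Device sync mode’ labels in the Inspector, and the namespace may differ in your package version, so please verify them against Kinect4AzureInterface.cs.

using UnityEngine;
using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts

// Logs the configured Azure Kinect interfaces. Attach it to the KinectController game object.
public class ListSensorInterfaces : MonoBehaviour
{
    void Start()
    {
        // find all Kinect4AzureInterface-components on the child objects
        Kinect4AzureInterface[] sensorInts = GetComponentsInChildren<Kinect4AzureInterface>();

        foreach (Kinect4AzureInterface sensorInt in sensorInts)
        {
            // deviceIndex & deviceSyncMode are assumed field names, derived from the Inspector labels
            Debug.Log(sensorInt.name + ": device index " + sensorInt.deviceIndex +
                ", sync mode " + sensorInt.deviceSyncMode +
                ", position " + sensorInt.transform.position +
                ", rotation " + sensorInt.transform.rotation.eulerAngles);
        }
    }
}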

How to remove the SDKs and sensor interfaces you don’t need

If you work with only one type of sensor (most probably Azure Kinect), here is what to do, to get rid of the extra SDKs in the K4A-asset. This will reduce the size of your project and builds:

– To remove the RealSense SDK: 1. Delete ‘RealSenseInterface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the RealSenseSDK2.0-folder from AzureKinectExamples/SDK-folder.
– To remove the Kinect-v2 SDK: 1. Delete ‘Kinect2Interface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the KinectSDK2.0-folder from AzureKinectExamples/SDK-folder.

Why is the old K2-asset still around

The ‘Kinect v2 Examples with MS-SDK and Nuitrack SDK’-package (or ‘K2-asset’ for short) is still around (and will be around for some time), because it has components and demo scenes that are not available in the new K4A-asset. For instance: the face-tracking components & demo scenes, the hand-interaction components & scenes, the speech-recognition component & scene, the body recorder & player, and the data server and client networking components. There are various reasons for this: either the sensor SDK does not provide the respective functionality yet, or I have not managed to add it to the K4A-asset yet. As long as these (or replacement) components & scenes are missing in the K4A-asset, the K2-asset will be kept around.

On the other hand, the K4A-asset has significant advantages, as well. It works with the most up-to-date sensors (like Azure Kinect & RealSense), allows multi-camera setups, has a better internal structure, and gets better and better (with more components, functions and demo scenes) with each new release.

How to playback a recording instead of getting data from a connected sensor

The sensor interfaces in K4A-asset provide the option to play back a recording file, instead of getting data from a physically connected sensor. Here is how to achieve this for all types of sensor interfaces:

1. Unfold the KinectController-game object in Hierarchy.
2. Select the proper sensor interface object. If you need to play back a Kinect-v2 recording, please skip steps 3, 4 & 5, and look at the note below.
3. In the sensor interface component in Inspector, change ‘Device streaming mode’ from ‘Connected sensor’ to ‘Play recording’.
4. Set ‘Recording file’ to the full path to the previously saved recording. This is the MKV-file (in case of Kinect4Azure) or the OUT-file (in case of RealSense sensors). If you prefer to set the path from a script instead, see the sketch after the note below.
5. Run the scene to check if it works as expected.

Note: In case of Kinect-v2, please start ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0) and open the previously saved XEF-recording file. Then go to the Play-tab, press the Connect-button and play the file. The scene utilizing Kinect2Interface should be run before you start playing the recording file.
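
If you prefer to set the playback settings from a script (for instance, when the recording path is only known at runtime), the same settings of the Azure Kinect interface can be applied in code, before the interface opens the device. The field and enum names below (deviceStreamingMode, recordingFile, KinectInterop.DeviceStreamingMode.PlayRecording) are assumptions, derived from the Inspector labels, so please verify them against Kinect4AzureInterface.cs and KinectInterop.cs in your copy of the asset.

using UnityEngine;
using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts

// Sets the playback settings from code. Attach it to the Kinect4Azure game object and make sure
// it runs before the sensor interface (e.g. via Project Settings / Script Execution Order).
public class SetRecordingFile : MonoBehaviour
{
    public string recordingFilePath = string.Empty;  // full path to the previously saved MKV-recording

    void Awake()
    {
        Kinect4AzureInterface sensorInt = GetComponent<Kinect4AzureInterface>();

        if (sensorInt != null && !string.IsNullOrEmpty(recordingFilePath))
        {
            // assumed field & enum names, derived from the 'Device streaming mode' and 'Recording file' labels
            sensorInt.deviceStreamingMode = KinectInterop.DeviceStreamingMode.PlayRecording;
            sensorInt.recordingFile = recordingFilePath;
        }
    }
}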

How to set up sensor’s position and rotation in the scene

Here is how to set up the sensor’s transform (position and rotation) in the scene. In case of a multi-camera setup, this should be done at least for the 1st sensor (a short sketch after the list shows why this transform matters):

1. Unfold the KinectController-game object in Hierarchy, and select the respective sensor interface object.
2. Measure the distance between the floor and the camera, in meters. Set it as Y-position in the Transform-component. Leave the X- & Z-values as 0.
3. Set the ‘Get pose frames’-setting of KinectManager-component in the scene to ‘Display info’. Start the scene. You will see on screen information regarding the sensor’s position and rotation. Write down the 3 rotation values.
4. Select again the sensor interface object in the scene. Set the written down values as X-, Y- & Z-rotation values in its Transform-component.
5. Set the ‘Get pose frames’-setting of KinectManager-component in the scene back to ‘None’, to avoid unneeded IMU frames processing.
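
A short note on why this transform matters: roughly speaking, the joint and depth data coming from the sensor are in the sensor’s own coordinate frame, and the position & rotation you set on the sensor interface object are used to map them into Unity world space. The sketch below only illustrates that mapping with plain Unity math (assuming a scale of (1, 1, 1)); it is not part of the asset’s API. An inaccurate Y-position would shift the detected users up or down, and a wrong rotation would tilt them, which is why the rotation estimated from the pose frames in step 3 is usually better than a rough manual guess.

using UnityEngine;

public static class SensorSpaceExample
{
    // Converts a position in the sensor's coordinate frame into Unity world space,
    // using the position & rotation configured on the sensor interface object.
    public static Vector3 SensorToWorld(Transform sensorTransform, Vector3 posInSensorSpace)
    {
        // with a scale of (1, 1, 1) this equals: sensorTransform.rotation * posInSensorSpace + sensorTransform.position
        return sensorTransform.TransformPoint(posInSensorSpace);
    }
}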

How to calibrate and set up multiple cameras in the scene

To calibrate multiple cameras, connected to the same machine (usually two or more Azure Kinect sensors), you can utilize the MultiCameraSetup-scene in KinectDemos/MultiCameraSetup-folder. Please open this scene and do as follows:

1. Create the needed sensor interface objects, as children of the KinectController-game object in Hierarchy. By default there are two Kinect4Azure-objects there, but if you have more sensors connected, feel free to create new ones, or duplicate one of the existing sensor interface objects. Don’t forget to set their ‘Device index’-settings accordingly.
2. Set up the position and rotation of the 1st sensor-interface object (Kinect4Azure0 in Hierarchy). See this tip above on how to do that.
3. Run the scene. All configured sensors should light up. During the calibration one (and only one) user should stay visible to all configured sensors. The progress will be displayed on screen, as well as the calibrated user meshes, so you could track the quality of calibration, too. After the calibration completes, the configuration file will be saved to the root folder of your Unity project.
4. After the automatic calibration completes, you can further manually adjust the rotations and positions of the calibrated sensors (except the fixed first one), to make the user meshes match each other as closely as possible. To orbit around the user meshes for better visibility, hold Alt and drag with the mouse. When you are ready, press the Save-button to save the changes to the config file.
5. Feel free to re-run the MultiCameraSetup-scene, if you are not satisfied with the quality of multi-camera calibration.

To use the saved multi-camera configuration in any other scene, open the scene and enable the ‘Use Multi-Cam Config’-setting of the KinectManager-component in that scene. When the scene is started, this should create new sensor interfaces in Hierarchy, according to the saved configuration. Run the scene to check if it works as expected.

How to get the sensor streams across the network

To get the sensor streams across the network, you can utilize the KinectNetServer-scene and the NetClientInterface-component, as follows:

1. On the machine, where the sensor is connected, open and run the KinectNetServer-scene from KinectDemos/NetworkDemo-folder. This scene utilizes the KinectNetServer-component, to send the requested sensor frames across the network, to the connected clients.
2. On the machine, where you need the sensor streams, create an empty game object, name it NetClient and add it as child to the KinectController-game object (containing the KinectManager-component) in the scene. For reference, see the NetClientDemo1-scene in KinectDemos/NetworkDemo-folder.
3. Set the position and rotation of the NetClient-game object to match the sensor’s physical position and rotation. Add NetClientInterface from KinectScripts/Interfaces as component to the NetClient-game object.
4. Configure the network-specific settings of the NetClientInterface-component. Either enable ‘Auto server discovery’ to get the server address and port automatically (LAN only), or set the server host (and optionally the base port) explicitly. Enable ‘Get body-index frames’, if you need them in that specific scene. The body-index frames may cause extra network traffic, but are needed in many scenes, e.g. for background removal or user meshes.
5. Configure the KinectManager-component in the scene to receive only the needed sensor frames, and decide whether frame synchronization is needed or not. Keep in mind that more sensor streams mean more network traffic, and frame sync would increase the lag on the client side.
6. If the scene has UI, consider adding a client-status UI-Text as well, and reference it in the ‘Client status text’-setting of the NetClientInterface-component. This may help you track down various networking issues, like a connection that cannot be established, temporary disconnections, etc.
7. Run the client scene in the Editor to make sure it connects successfully to the server. If it doesn’t, check the console for error messages. A quick network reachability test is sketched below, after this list.
8. If the client scene needs to run on a mobile device, build it for the target platform and run it on the device, to check if it works there, as well.
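
Regarding step 7: if the client cannot connect, it helps to first rule out basic network or firewall issues, independently of the asset. The sketch below uses plain .NET sockets for a quick reachability test. The host and port values in it are placeholders only; use the actual server address and the base port configured in the KinectNetServer-scene.

using System;
using System.Net.Sockets;
using UnityEngine;

// Quick reachability test, independent of the K4A-asset networking code.
public class NetReachabilityTest : MonoBehaviour
{
    public string serverHost = "192.168.0.10";  // placeholder: the machine running the KinectNetServer-scene
    public int serverPort = 11000;              // placeholder: use the base port configured on the server

    void Start()
    {
        try
        {
            using (TcpClient client = new TcpClient())
            {
                // Connect() blocks briefly and throws, if the host is unreachable or the port is closed/blocked
                client.Connect(serverHost, serverPort);
                Debug.Log("Server reachable at " + serverHost + ":" + serverPort);
            }
        }
        catch (Exception ex)
        {
            Debug.LogWarning("Cannot reach " + serverHost + ":" + serverPort + " - " + ex.Message);
        }
    }
}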

 

20 thoughts on “Azure Kinect Tips & Tricks”


  2. Hi, I’m making good use of your asset.

    I’m working on a project that uses two Kinect-v2 sensors connected to the same machine,
    but only one Kinect is used in a single scene.

    How do I select which Kinect to use, when two Kinects are connected?

    • Sorry, but the K2-asset does not support multiple sensor setups. In this regard, why would you use two sensors, if only one is used in a single scene?

  3. Hi, Rumen F.
    I’m using Azure Kinect and followed your tips & tricks, then tried to open the second device, but Unity shows these errors: “Can’t create body tracker for Kinect4AzureInterface1!” & “AzureKinectException: result = K4A_RESULT_FAILED”. How can I fix this error for multiple devices?

  4. Hi,

    Can I access the coordinates (x, y, z) of the point cloud and its colors (RGB)?

    If so, could you explain how?

    • If you mean the SceneMeshDemo or UserMeshDemo, for performance reasons the point clouds there are generated by shaders. If you need script access to the mesh coordinates, please replace SceneMeshRendererGpu-component with SceneMeshRenderer (or UserMeshRendererGpu with UserMeshRenderer). They generate the meshes on CPU and you can have access to everything there.

  5. To start with: LOVE the Unity asset from the Azure Kinect examples you created.
    I have one question:
    How can I detect multiple tracked users?
    In the KinectManager it says ‘Max tracked users’ should be set to “0”,
    but it still only detects 1 person. What am I forgetting? What do I need to add to my scene?

    • Thank you! To your question: All components in the K4A-asset related to user tracking have a setting called ‘Player index’. If you need an example, please open the KinectAvatarsDemo1-scene in KinectDemos/AvatarDemo-folder. Then look at the AvatarController-component of the U_CharacterBack or U_CharacterFront objects in the scene. The ‘Player index’ setting determines the tracked user – 0 means the 1st detected user, 1 – the 2nd detected user, 2 – the 3rd detected user, etc. If you change this setting, you can track different users. Lastly, look at the KinectAvatarsDemo4-scene and its UserAvatarMatcher-component. This component creates and destroys avatars in the scene automatically, according to the currently detected users.

      The ‘Max tracked users’-setting of the KinectManager determines the maximum number of users that may be tracked at the same time. For instance, if there are many people in the room, the users that really need to be tracked can be limited by distance or by the max number of users (e.g. if your scene needs only one or two users).

      • How can I regulate the maximum distance? In the KinectManager-script I tried setting maxUserDistance to 2.0f, but it doesn’t work.

      • As answered a few days ago, ‘MaxUserDistance = 2’ would mean that users whose distance to the sensor is more than 2 meters will not be detected (or will be lost).

  6. How do I use the background removal to display the head only?
    My idea is to use an image mask on the raw image,
    but the overlay position is in world space.
    Can I get the screen position?
    Or is there a simpler method to solve this?

    • Please open BackgroundRemovalManager.cs in the KinectScripts-folder and find ‘kinectManager.GetUserBoundingBox’. Comment out the invocation of this method, and replace it with the following code:

      // limit the bounding box to the head joint only (the offsets below then expand it into a cuboid around the head)
      bool bSuccess = kinectManager.IsJointTracked(userId, KinectInterop.JointType.Head);
      Vector3 pMin = kinectManager.GetJointPosition(userId, KinectInterop.JointType.Head);
      Vector3 pMax = pMin;

      Then adjust the offsets around the head joint for each axis (x, y, z) in the lines below, where posMin, posMaxX, posMaxY, posMaxZ are set. The shader should filter out all points that are not within the specified cuboid coordinates.

  7. Is there a way to make it so that it does NOT hide the joints it’s uncertain of? I believe that used to be the behavior, but now the skeleton keeps flickering.

    • There is a setting of the KinectManager-component in the scene, called ‘Ignore inferred joints’. You can use it to determine whether the inferred joints should be considered as tracked or not.

      At the same time, I’m not quite sure what you mean by “skeleton keeps flickering”. May I ask you to e-mail me and share some more details on how to reproduce the issue you are having?
