Azure Kinect Tips & Tricks

Well, I think it’s time to share some tips, tricks and examples regarding the K4A-asset, as well. This package, although similar to the K2-asset, has some features that significantly differ from the previous one, such as support for multiple sensors in one scene and for different types of depth sensors.

The configuration and detection of the available sensors is less automatic than before, but allows more complex multi-sensor configurations. In this regard, check the first 4-5 tips in the list below. Please also consult the online documentation, if you need more information regarding the available demo scenes and depth sensor related components.

For more tips and tricks, please look at the Kinect-v2 Tips, Tricks & Examples. Those tips were written for the K2-asset, but thanks to backward compatibility many of the demo scenes, components and APIs mentioned there are the same or very similar in the K4A-asset.

Table of Contents:

How to reuse the K4A-asset functionality in your Unity project
How to update your existing K2-project to the K4A-asset
How to set up the KinectManager and Azure Kinect interface in the scene
How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes
How to use RealSense sensor instead of Azure Kinect in the demo scenes
How to set up multiple Azure Kinect (or other) sensors in the scene
How to remove the SDKs and sensor interfaces you don’t need
Why is the K2-asset still around
How to play a recording instead of utilizing live data from a connected sensor
How to set up sensor’s position and rotation in the scene
How to calibrate and set up multiple cameras in the scene
How to make the point cloud demos work with multiple cameras
How to send the sensor streams across the network
How to get the user or scene mesh working on Oculus Quest
How to get the color-camera texture in your code
How to get the depth-frame data or texture in your code
How to get the position of a body joint in your code
How to get the orientation of a body joint in your code
What is the file-format used by the body recordings
How to utilize green screen for better volumetric scenes or videos

How to reuse the K4A-asset functionality in your Unity project

Here is how to reuse the K4A-asset scripts and components in your Unity project:

1. Copy folder ‘KinectScripts’ from the AzureKinectExamples-folder of this package to your project. This folder contains all needed components, scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the AzureKinectExamples-folder of this package to your project. This folder contains some needed libraries and resources.
3. Copy the sensor-SDK specific sub-folders (Kinect4AzureSDK, KinectSDK2.0 & RealSenseSDK2.0) from the AzureKinectExamples/SDK-folder of this package to your project. These sub-folders contain the plugins and wrapper classes for the respective sensor types.
4. Wait until Unity detects, imports and compiles the newly detected resources and scripts.
5. Please do not share the KinectDemos-folder in source form or as part of public repositories.

How to update your existing K2-project to the K4A-asset

If you have an existing project utilizing the K2-asset, and would like to update it to use the K4A-asset, please look at the following steps. Please also note that not all K2-asset functions are currently supported by the K4A-asset. For instance, the face-tracking components and API, as well as the speech recognition, are not currently supported. Look at this tip for more info.

1. First off, don’t forget to make a backup of the existing project, or copy it to a new folder. Then open it in Unity editor.
2. Remove the K2Examples-folder from the Unity project.
3. Import the K4A-asset from Unity asset store, or from the provided Unity package.
4. After the import finishes, check the console for error messages. If there are any and they are easy to fix, just go ahead and fix them. A common, easy fix is to add ‘using com.rfilkov.kinect;’ or ‘using com.rfilkov.components;’ to your scripts that use the K2-asset API, as the missing namespaces are a frequent source of errors (see the sketch after this list).
5. If the error fixes are more complicated, you can contact me for support, as long as you are a legitimate customer. I tried to keep all components as close as possible to the same components in the K2-asset, but there may be some slight differences. For instance, the K2-asset supports only one sensor, while the K4A-asset can support many sensors at once. That’s why a sensor-index parameter may be needed by some API calls, as well.
6. Select the KinectController-game object in the scene and make sure the KinectManager’s settings are similar to the KinectManager settings in the K2-asset and correct for that scene. Some of the KinectManager settings in the K4A-asset are a bit different now, because of the larger amount of provided functionality.
7. Go through the other Kinect-related components in the scene, and make sure their settings are correct, too.
8. Run the scene to try it out. Compare it with the scene running in the K2-asset. If the output is not similar enough, look again at the component settings and at the custom API calls.
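
For reference, here is a minimal sketch of how a K2-era script might look after adding the new namespaces. This is just an illustration – the class name is hypothetical and the body of Update() stands for your own code:

using UnityEngine;
using com.rfilkov.kinect;       // KinectManager and the other core K4A-asset classes
using com.rfilkov.components;   // the K4A-asset demo & utility components

public class MyKinectScript : MonoBehaviour   // hypothetical example script
{
    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;

        if (kinectManager && kinectManager.IsInitialized())
        {
            // your existing K2-era logic goes here; keep in mind that some
            // K4A-asset API calls may take an extra sensor-index parameter
        }
    }
}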

How to set up the KinectManager and Azure Kinect interface in the scene

Please see the hierarchy of objects in any of the demo scenes, as a practical implementation of this tip:

1. Create an empty KinectController-game object in the scene. Set its position to (0, 0, 0), rotation to (0, 0, 0) and scale to (1, 1, 1).
2. Add the KinectManager-script from the AzureKinectExamples/KinectScripts-folder as a component to the KinectController-game object.
3. Select the frame types you need to get from the sensor – depth, color, IR, pose and/or body. Enable synchronization between frames, as needed. Check the user-detection settings and change them if needed, as well as the on-screen info you’d like to get in the scene.
4. Create a Kinect4Azure-game object as a child of KinectController in Hierarchy. Set its position and rotation to match the Azure-Kinect sensor position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like.
5. Add Kinect4AzureInterface-script from AzureKinectExamples/KinectScripts/Interfaces-folder to the newly created Kinect4Azure-game object.
6. Change the default settings of the component, if needed. For instance, you can select a different color-camera mode, depth-camera mode or device-sync mode, or adjust the min & max distances used for creating the depth-related images.
7. If you would like to replay a previously saved recording file, select ‘Device streaming mode’ = ‘Play recording’ and set the full path to the recording file in the ‘Recording file’-setting.
8. Run the scene to check if everything works as expected.
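
If you would like to verify the setup from code as well, here is a minimal sketch of a hypothetical checker script. It uses only the KinectManager API shown in the tips below, and logs a message once the manager and the configured sensor interface have initialized:

using UnityEngine;
using com.rfilkov.kinect;

public class KinectSetupChecker : MonoBehaviour   // hypothetical helper script
{
    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;

        if (kinectManager && kinectManager.IsInitialized())
        {
            Debug.Log("KinectManager is initialized and the sensor interface is running.");
            enabled = false;  // log once, then stop checking
        }
    }
}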

How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the KinectV2-child object.
5. Set the ‘Device streaming mode’ of its Kinect2Interface-component to ‘Connected sensor’.
6. If you would like to replay a previously saved recording file, you should play it in ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0).
7. Run the scene, to check if the Kinect-v2 sensor interface is used instead of Azure-Kinect interface.

How to use RealSense sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the RealSense-child object.
5. Set the ‘Device streaming mode’ of its RealSenseInterface-component to ‘Connected sensor’.
6. If you would like to replay a previously saved recording file, select ‘Device streaming mode’ = ‘Play recording’ and set the full path to the recording file in the ‘Recording file’-setting.
7. Run the scene, to check if the RealSense sensor interface is used instead of Azure-Kinect interface.

How to set up multiple Azure Kinect (or other) sensors in the scene

Here is how to set up a 2nd (as well as 3rd, 4th, etc.) Azure Kinect camera interface in the scene:

1. Unfold the KinectController-object in the scene.
2. Duplicate the Kinect4Azure-child object.
3. Set the ‘Device index’ of the new object to 1 instead of 0. Other connected sensors should have device indices of 2, 3, etc.
4. Change ‘Device sync mode’ of the connected cameras, as needed. One sensor should be ‘Master’ and the others – ‘Subordinate’, instead of ‘Standalone’.
5. Set the position and rotation of the new object to match the sensor’s position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like.
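
Most of the KinectManager API calls take a sensor index as parameter, so once the sensors are configured you can address them separately in your code. Here is a minimal sketch, assuming two sensor interfaces with device indices 0 and 1, and ‘Get depth frames’ set to ‘Depth texture’:

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    Texture texDepth0 = kinectManager.GetDepthImageTex(0);  // depth texture of the 1st sensor
    Texture texDepth1 = kinectManager.GetDepthImageTex(1);  // depth texture of the 2nd sensor
    // do something with the two textures
}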

How to remove the SDKs and sensor interfaces you don’t need

If you work with only one type of sensor (most probably Azure Kinect), here is how to get rid of the extra SDKs in the K4A-asset. This will decrease the size of your project and build:

– To remove the RealSense SDK: 1. Delete ‘RealSenseInterface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the RealSenseSDK2.0-folder from AzureKinectExamples/SDK-folder.
– To remove the Kinect-v2 SDK: 1. Delete ‘Kinect2Interface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the KinectSDK2.0-folder from AzureKinectExamples/SDK-folder.

Why is the K2-asset still around

The ‘Kinect v2 Examples with MS-SDK and Nuitrack SDK’-package (or ‘K2-asset’ for short) is still around (and will be around for some time), because it has components and demo scenes that are not available in the new K4A-asset. For instance: the face-tracking components & demo scenes, the hand-interaction components & scenes and the speech-recognition component & scene. This is due to various reasons. For instance, the SDK API does not yet provide this functionality, or I have not managed to add this functionality to the K4A-asset yet. As long as these (or replacement) components & scenes are missing in the K4A-asset, the K2-asset will be kept around.

On the other hand, the K4A-asset has significant advantages, as well. It works with the most up-to-date sensors (like Azure Kinect & RealSense), allows multi-camera setups, has a better internal structure and gets better (with more components, functions and demo scenes) with each new release.

How to play a recording instead of utilizing live data from a connected sensor

The sensor interfaces in K4A-asset provide the option to play back a recording file, instead of getting data from a physically connected sensor. Here is how to achieve this for all types of sensor interfaces:

1. Unfold the KinectController-game object in Hierarchy.
2. Select the proper sensor interface object. If you need to play back a Kinect-v2 recording, please skip steps 3, 4 & 5, and look at the note below.
3. In the sensor interface component in Inspector, change ‘Device streaming mode’ from ‘Connected sensor’ to ‘Play recording’.
4. Set ‘Recording file’ to the full path to previously saved recording. This is the MKV-file (in case of Kinect4Azure) or OUT-file (in case of RealSense-sensors).
5. Run the scene to check, if it is working as expected.

Note: In case of Kinect-v2, please start ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0) and open the previously saved XEF-recording file. Then go to the Play-tab, press the Connect-button and play the file. The scene utilizing Kinect2Interface should be run before you start playing the recording file.

How to set up sensor’s position and rotation in the scene

Here is how to set up the sensor’s transform (position and rotation) in the scene. In case of a multi-camera setup, this should be done at least for the 1st sensor:

1. Unfold the KinectController-game object in Hierarchy, and select the respective sensor interface object.
2. If you are not using Azure Kinect with ‘Detect floor for pose estimation’-setting enabled, measure the distance between the floor and the camera, in meters. Set it as Y-position in the Transform-component. Leave the X- & Z-values as 0.
3. Set the ‘Get pose frames’-setting of KinectManager-component in the scene to ‘Display info’. Start the scene. You will see on screen information regarding the sensor’s position and rotation. Write down the 3 rotation values.
4. Select again the sensor interface object in the scene. Set the written down values as X-, Y- & Z-rotation values in its Transform-component.
5. Set the ‘Get pose frames’-setting of KinectManager-component in the scene back to ‘None’, to avoid unneeded IMU frames processing.

If you are using Azure Kinect and the sensor is turned sideways (+/-90 degrees) or upside-down (180 degrees), please open ‘Kinect4AzureInterface.cs’ in the ‘KinectScripts/Interfaces’-folder and look for ‘k4abt_sensor_orientation_t.K4ABT_SENSOR_ORIENTATION_DEFAULT’. Then change ‘K4ABT_SENSOR_ORIENTATION_DEFAULT’ to the value corresponding to the sensor rotation. This hints the Body Tracking SDK to consider the unusual sensor rotation when detecting the body joints.
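
For reference, the change concerns the sensor-orientation value passed to the body-tracker constructor. The exact line may differ slightly between releases, but it looks roughly like this (here changed to the clockwise-90-degrees value, as an example for a sensor turned sideways):

// original line, with the default sensor orientation:
// bodyTracker = new BodyTracking(calibration, k4abt_sensor_orientation_t.K4ABT_SENSOR_ORIENTATION_DEFAULT, k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU, 0);

// modified line, e.g. for a sensor rotated 90 degrees clockwise:
bodyTracker = new BodyTracking(calibration, k4abt_sensor_orientation_t.K4ABT_SENSOR_ORIENTATION_CLOCKWISE90, k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU, 0);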

How to calibrate and set up multiple cameras in the scene

To calibrate multiple cameras, connected to the same machine (usually two or more Azure Kinect sensors), you can utilize the MultiCameraSetup-scene in KinectDemos/MultiCameraSetup-folder. Please open this scene and do as follows:

1. Create the needed sensor interface objects, as children of the KinectController-game object in Hierarchy. By default there are two Kinect4Azure-objects there, but if you have more sensors connected, feel free to create new, or duplicate one of the existing sensor interface objects. Don’t forget to set their ‘Device index’-settings accordingly.
2. Set up the position and rotation of the 1st sensor-interface object (Kinect4Azure0 in Hierarchy). See this tip above on how to do that.
3. Run the scene. All configured sensors should light up. During the calibration one (and only one) user should stay visible to all configured sensors*. The progress will be displayed on screen, as well as the calibrated user meshes, so you can track the quality of the calibration, too. After the calibration completes, the configuration file will be saved to the root folder of your Unity project.
* If the calibration setup cannot find the user in the intersection area of all cameras, please select the ‘MultiCameraSetup’-object in Hierarchy, disable the ‘Use synchronized samples’-setting of its component and then try again.
4. After the automatic calibration completes, you can further manually adjust the rotations and positions of the calibrated sensors (except the fixed first one), to make the user meshes match each other as closely as possible. To orbit around the user meshes for better visibility, hold Alt and drag with the mouse. When you are ready, press the Save-button to save the changes to the config file.
5. Feel free to re-run the MultiCameraSetup-scene, if you are not satisfied with the quality of multi-camera calibration.

To use the saved multi-camera configuration in any other scene, open the scene and enable the ‘Use Multi-Cam Config’-setting of the KinectManager-component in that scene. When the scene is started, this should create new sensor interfaces in Hierarchy, according to the saved configuration. Run the scene to check, if it works as expected.

How to make the point cloud demos work with multiple cameras

To make the point cloud demo scenes work with multiple calibrated cameras, please follow these steps:

1. Follow the tip above to calibrate the cameras, if you haven’t done it yet.
2. Open the respective point-cloud scene (i.e. SceneMeshDemo or UserMeshDemo in the KinectDemos/PointCloudDemo-folder).
3. Enable the ‘Use multi-cam config’-option of the KinectManager-component in the scene.
4. Duplicate the SceneMeshS0-object as SceneMeshS1 (in SceneMeshDemo-scene), or User0MeshS0-object as User0MeshS1 (in UserMeshDemo-scene).
5. Change the ‘Sensor index’-setting of their Kinect-related components from 0 to 1. This way you will get a 2nd mesh in the scene, from the 2nd sensor’s point of view.
6. If you have more cameras, continue duplicating the same objects (as SceneMeshS2, SceneMeshS3, etc. in SceneMeshDemo, or User0MeshS2, User0MeshS3, etc. in UserMeshDemo), and change their respective ‘Sensor index’-settings to point to the other cameras, in order to get meshes from these cameras’ POV too.
7. Run the scene to see the result. The sensor meshes should align properly, if the calibration done in MultiCameraSetup-scene is good.

How to send the sensor streams across the network

To send the sensor streams across the network, you can utilize the KinectNetServer-scene and the NetClientInterface-component, as follows:

1. On the machine, where the sensor is connected, open and run the KinectNetServer-scene from KinectDemos/NetworkDemo-folder. This scene utilizes the KinectNetServer-component, to send the requested sensor frames across the network, to the connected clients.
2. On the machine, where you need the sensor streams, create an empty game object, name it NetClient and add it as child to the KinectController-game object (containing the KinectManager-component) in the scene. For reference, see the NetClientDemo1-scene in KinectDemos/NetworkDemo-folder.
3. Set the position and rotation of the NetClient-game object to match the sensor’s physical position and rotation. Add NetClientInterface from KinectScripts/Interfaces as component to the NetClient-game object.
4. Configure the network-specific settings of the NetClientInterface-component. Either enable ‘Auto server discovery’ to get the server address and port automatically (LAN only), or set the server host (and, if needed, the base port) explicitly. Enable ‘Get body-index frames’, if you need them in that specific scene. The body-index frames may cause extra network traffic, but are needed in many scenes, e.g. for background removal or user meshes.
5. Configure the KinectManager-component in the scene, to receive only the needed sensor frames, and if frame synchronization is needed or not. Keep in mind that more sensor streams would mean more network traffic, and frame sync would increase the lag on the client side.
6. If the scene has UI, consider adding a client-status UI-Text as well, and reference it with the ‘Client status text’-setting of the NetClientInterface-component. This may help you diagnose various networking issues, like a connection that cannot be established, temporary disconnections, etc.
7. Run the client scene in the Editor to make sure it connects successfully to the server. If it doesn’t, check the console for error messages.
8. If the client scene needs to be run on mobile device, build it for the target platform and run it on device, to check if it works there, as well.

How to get the user or scene mesh working on Oculus Quest

To get the UserMeshDemo- or SceneMeshDemo-scene working on Oculus Quest, please do as follows:

1. First off, you need to do the usual Oculus-specific setup, i.e. import the ‘Oculus Integration’-asset from the Unity asset store, enable ‘Virtual reality supported’ in ‘Player Settings / XR Settings’ and add ‘Oculus’ as a ‘Virtual Reality SDK’ there, as well. In the more recent Unity releases this is replaced by the ‘XR Plugin Management’ group of project settings.
2. Oculus Quest will need to get the sensor streams over the network. In this regard, open NetClientDemo1-scene in KinectDemos/NetworkDemo-folder, unfold KinectController-game object in Hierarchy and copy the NetClient-object below it to the clipboard.
3. Open UserMeshDemo- or SceneMeshDemo-scene, paste the copied NetClient-object from the clipboard to the Hierarchy and then move it below the KinectController-game object. Make sure that ‘Auto server discovery’ and ‘Get body index frames’-settings of the NetClientInterface-component are both enabled. Feel free also to set the ‘Device streaming mode’ of the Kinect4AzureInterface-component to ‘Disabled’. It will not be used on Quest.
4. Add the KinectNetServer-scene from the KinectDemos/NetworkDemo-folder to the ‘Scenes in Build’-setting of Unity ‘Build settings’. Then build it for the Windows-platform and architecture ‘x86_64’. Alternatively, create a second Unity project, import the K4A-asset in there and open the KinectNetServer-scene. This scene (or executable) will act as network server for the sensor data. You should run it on the machine, where the sensor is physically connected.
5. Remove ‘KinectNetServer’ from the list of ‘Scenes in Build’, and add UserMeshDemo- or SceneMeshDemo-scene instead.
6. Switch to the Android-platform, open ‘Player settings’, go to ‘Other settings’ and make sure ‘OpenGLES3’ is the only item in the ‘Graphics APIs’-list. I would also recommend disabling the ‘Multithreaded rendering’, but this is still a subject for further experiments.
7. Start the KinectNetServer-scene (or executable). Connect your Oculus Quest HMD to the machine, then build, deploy and run the UserMeshDemo- or SceneMeshDemo-scene to the device. After the scene starts, you should see the user or scene mesh live on the HMD. Enjoy!

How to get the color-camera texture in your code

Please check, if the ‘Get color frames’-setting of the KinectManager-component in the scene is set to ‘Color texture’. Then use the following snippet in the Update()-method of your script:

int sensorIndex = 0;  // index of the sensor to use (0 - the 1st sensor)

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    Texture texColor = kinectManager.GetColorImageTex(sensorIndex);
    // do something with the texture
}
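
As a simple usage example, you could display the returned texture on a UI RawImage-object in the scene. A minimal sketch, where rawImageColor is a hypothetical public RawImage-field of your script, assigned in the Inspector:

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    Texture texColor = kinectManager.GetColorImageTex(sensorIndex);
    rawImageColor.texture = texColor;  // show the live color-camera feed on the RawImage
}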

How to get the depth-frame data or texture in your code

Please check if the ‘Get depth frames’-setting of the KinectManager-component in the scene is set to ‘Depth texture’, if you need the texture, or to ‘Raw depth data’, if you need the depth data only. Then use the following snippet in the Update()-method of your script:

int sensorIndex = 0;  // index of the sensor to use (0 - the 1st sensor)

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    Texture texDepth = kinectManager.GetDepthImageTex(sensorIndex);  // to get the depth frame texture
    ushort[] rawDepthData = kinectManager.GetRawDepthMap(sensorIndex);  // to get the raw depth frame data
    // do something with the texture or data
}

Please note that the raw depth data is an array of unsigned shorts (ushort), with size equal to (depthImageWidth * depthImageHeight). Each value represents the depth in millimeters of the respective depth-frame point.
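
As a usage example, here is how you could read the depth of the central point of the depth image, in meters. This is a minimal sketch that assumes a 640 x 576 depth image (the NFOV-unbinned depth mode of Azure Kinect) – adjust the dimensions to the depth mode you have actually selected:

int depthImageWidth = 640, depthImageHeight = 576;  // adjust to the selected depth mode

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    ushort[] rawDepthData = kinectManager.GetRawDepthMap(sensorIndex);

    int centerIndex = (depthImageHeight / 2) * depthImageWidth + (depthImageWidth / 2);
    ushort depthMm = rawDepthData[centerIndex];  // depth of the central point, in mm (0 usually means no valid depth)
    float depthMeters = depthMm / 1000f;         // convert millimeters to meters
    // do something with the depth value
}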

How to get the position of a body joint in your code

First, please make sure the ‘Get body frames’-setting of the KinectManager-component in the scene is set to something different than ‘None’. Then use the following snippet in the Update()-method of your script (and replace ‘HandRight’ below with the body joint you need):

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
int playerIndex = 0;  // index of the tracked user (0 - the 1st detected user)

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    if(kinectManager.IsUserDetected(playerIndex))
    {
        ulong userId = kinectManager.GetUserIdByIndex(playerIndex);

        if(kinectManager.IsJointTracked(userId, joint))
        {
            Vector3 jointPos = kinectManager.GetJointPosition(userId, joint);
            // do something with the joint position
        }
    }
}
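
As a usage example, you could overlay a scene object (e.g. a small sphere) over the detected joint, by moving its transform to the returned joint position. Inside the innermost if-block above that would look like this, where jointObject is a hypothetical public GameObject-field of your script, assigned in the Inspector:

Vector3 jointPos = kinectManager.GetJointPosition(userId, joint);
jointObject.transform.position = jointPos;  // move the overlay object to the joint position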

How to get the orientation of a body joint in your code

Again, please make sure the ‘Get body frames’-setting of the KinectManager-component in the scene is set to something different than ‘None’. Then use the following snippet in the Update()-method of your script (and replace ‘Pelvis’ below with the body joint you need):

KinectInterop.JointType joint = KinectInterop.JointType.Pelvis;
int playerIndex = 0;  // index of the tracked user (0 - the 1st detected user)
bool mirrored = false;

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    if(kinectManager.IsUserDetected(playerIndex))
    {
        ulong userId = kinectManager.GetUserIdByIndex(playerIndex);

        if(kinectManager.IsJointTracked(userId, joint))
        {
            Quaternion jointOrientation = kinectManager.GetJointOrientation(userId, joint, !mirrored);
            // do something with the joint orientation
        }
    }
}

What is the file-format used by the body recordings

The KinectRecorderPlayer-component can record or replay body-recording files. These recordings are text files, where each line represents a body-frame at a specific moment in time. You can use it to replay or analyze the body-frame recordings in your own tools. Here is the format of each line. See the sample body-frames below, for reference.

0. Time in seconds since the start of the recording, followed by ‘|’. All other field separators are ‘;’.
This value is used by the KinectRecorderPlayer-component for time-sync, when it replays the body recording.

1. Body-frame identifier – should be ‘k4b’.
2. Body-frame timestamp, coming from the SDK. This field is ignored by the KinectManager.
3. Number of tracked bodies.
4. Number of body joints (32).
5. Space scale factor (3 numbers, for the X-, Y- & Z-axes).

Then follows the data for each tracked body:
6. Body tracking flag – 1 if the body is tracked, 0 if it is not tracked (the 5 zeros at the end of the lines below are for the 5 missing bodies).

If the body is tracked, the body ID and the data for all body joints follow. If it is not tracked, the body ID and joint data (fields 7-10) are skipped.
7. Body ID.

Then the body-joint data follows, repeated for all body joints (as given in field 4), ordered by JointType (see KinectScripts/KinectInterop.cs):
8. Joint tracking state – 0 means not tracked, 1 – inferred, 2 – tracked.

If the joint is inferred or tracked, the joint position data follows. If it is not tracked, the joint position data (field 9) is skipped.
9. Joint position data, in meters (3 numbers, for the X-, Y- & Z-axes).
10. Joint orientation data, in degrees (3 numbers, for the X-, Y- & Z-axes).

And here are two body-frame samples, for reference:

4.285|k4b;805827551;1;32;-1;-1;1;1;1;2;-0.415;0.002;1.746;3.538;356.964;2.163;2;-0.408;-0.190;1.717;358.542;356.776;2.160;2;-0.402;-0.345;1.705;353.569;356.587;2.643;2;-0.392;-0.577;1.743;354.717;353.971;4.102;2;-0.386;-0.666;1.742;348.993;349.257;8.088;2;-0.357;-0.537;1.742;351.562;359.329;4.547;2;-0.202;-0.525;1.743;307.855;64.654;324.608;2;0.027;-0.614;1.570;15.579;121.943;301.289;2;-0.092;-0.812;1.468;345.353;117.828;300.655;2;-0.096;-0.899;1.409;345.353;117.828;220.655;2;-0.430;-0.541;1.734;352.718;351.386;16.178;2;-0.566;-0.578;1.714;315.412;285.925;25.871;2;-0.732;-0.648;1.466;15.003;217.635;48.440;2;-0.568;-0.825;1.383;337.354;236.803;353.364;2;-0.512;-0.802;1.288;337.354;236.803;358.364;2;-0.316;0.005;1.752;346.343;356.306;359.296;2;-0.305;0.436;1.694;17.148;355.929;359.284;2;-0.347;0.819;1.852;338.148;350.141;3.025;2;-0.319;0.893;1.664;337.809;349.421;3.293;2;-0.505;-0.001;1.740;350.880;356.484;1.804;2;-0.514;0.433;1.716;11.939;357.147;1.821;2;-0.510;0.833;1.844;334.003;20.297;2.223;2;-0.579;0.886;1.678;333.513;21.027;1.903;2;-0.356;-0.737;1.585;348.993;349.257;8.088;2;-0.333;-0.766;1.634;348.993;349.257;8.088;2;-0.292;-0.728;1.760;348.993;349.257;8.088;2;-0.388;-0.768;1.620;348.993;349.257;8.088;2;-0.470;-0.744;1.734;348.993;349.257;8.088;2;-0.155;-0.866;1.313;345.353;117.828;11.142;2;-0.156;-0.839;1.334;80.906;53.999;316.725;2;-0.539;-0.838;1.394;285.211;20.682;56.425;2;-0.519;-0.966;1.378;10.344;155.193;69.151

4.361|k4b;806494221;1;32;-1;-1;1;1;1;2;-0.404;-0.076;1.745;6.454;356.702;1.397;2;-0.399;-0.264;1.706;2.510;356.605;1.390;2;-0.395;-0.416;1.684;357.553;356.485;1.980;2;-0.384;-0.647;1.724;353.562;358.508;2.961;2;-0.380;-0.735;1.725;355.491;345.619;7.062;2;-0.350;-0.607;1.720;350.254;353.270;353.332;2;-0.198;-0.627;1.739;299.401;48.858;332.268;2;0.067;-0.682;1.613;3.271;109.329;295.800;2;0.018;-0.895;1.492;351.194;137.170;266.059;2;0.037;-0.997;1.483;351.194;137.170;186.059;2;-0.422;-0.610;1.718;348.712;3.006;24.969;2;-0.550;-0.670;1.730;313.287;304.828;18.589;2;-0.772;-0.716;1.528;358.776;240.673;49.456;2;-0.695;-0.892;1.363;319.830;227.016;6.959;2;-0.633;-0.897;1.270;319.830;227.016;11.959;2;-0.306;-0.074;1.751;344.427;356.157;0.339;2;-0.301;0.352;1.679;14.904;356.335;0.338;2;-0.347;0.737;1.820;0.651;347.614;3.214;2;-0.319;0.876;1.677;0.313;346.946;3.207;2;-0.493;-0.077;1.739;348.410;356.260;2.692;2;-0.508;0.352;1.696;7.027;357.126;2.657;2;-0.499;0.759;1.787;5.252;40.137;0.152;2;-0.589;0.893;1.696;4.784;40.810;0.213;2;-0.339;-0.787;1.565;355.491;345.619;7.062;2;-0.319;-0.821;1.611;355.491;345.619;7.062;2;-0.288;-0.798;1.742;355.491;345.619;7.062;2;-0.373;-0.821;1.593;355.491;345.619;7.062;2;-0.462;-0.811;1.703;355.491;345.619;7.062;2;0.024;-1.103;1.436;351.194;137.170;286.972;2;0.050;-0.997;1.413;349.990;58.882;331.104;2;-0.669;-0.904;1.379;279.467;85.405;345.866;2;-0.660;-1.033;1.395;25.016;150.346;66.683
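
For reference, here is a minimal parser sketch in C# for the line format described above. It is a standalone illustration (the class is hypothetical, not part of the K4A-asset API) and assumes the fields exactly as listed, i.e. that only the position values are skipped for non-tracked joints:

using System;
using System.Globalization;

public static class BodyRecordingParser   // hypothetical helper class
{
    // Parses a single body-frame line and prints the positions of the inferred/tracked joints.
    public static void ParseLine(string line)
    {
        // field 0: time in seconds, separated from the rest by '|'
        string[] parts = line.Split('|');
        float time = float.Parse(parts[0], CultureInfo.InvariantCulture);

        // all other fields are separated by ';'
        string[] f = parts[1].Split(';');
        int i = 0;

        string frameId = f[i++];             // field 1: body-frame identifier, should be "k4b"
        string timestamp = f[i++];           // field 2: body-frame timestamp (ignored here)
        int numBodies = int.Parse(f[i++]);   // field 3: number of tracked bodies
        int numJoints = int.Parse(f[i++]);   // field 4: number of body joints (32 for 'k4b')
        float scaleX = float.Parse(f[i++], CultureInfo.InvariantCulture);  // field 5: space scale factors
        float scaleY = float.Parse(f[i++], CultureInfo.InvariantCulture);
        float scaleZ = float.Parse(f[i++], CultureInfo.InvariantCulture);

        // body slots follow; each one starts with a tracking flag (field 6)
        while (i < f.Length)
        {
            if (string.IsNullOrWhiteSpace(f[i])) { i++; continue; }

            int bodyTracked = int.Parse(f[i++]);  // field 6: 1 - tracked, 0 - not tracked
            if (bodyTracked == 0)
                continue;                         // non-tracked body slots carry no further data

            string bodyId = f[i++];               // field 7: body ID

            for (int j = 0; j < numJoints; j++)
            {
                int state = int.Parse(f[i++]);    // field 8: 0 - not tracked, 1 - inferred, 2 - tracked

                if (state != 0)
                {
                    // field 9: joint position in meters (X, Y & Z) - present only for inferred/tracked joints
                    float px = float.Parse(f[i++], CultureInfo.InvariantCulture);
                    float py = float.Parse(f[i++], CultureInfo.InvariantCulture);
                    float pz = float.Parse(f[i++], CultureInfo.InvariantCulture);
                    Console.WriteLine($"t={time:F3}s body={bodyId} joint={j}: ({px}; {py}; {pz})");
                }

                // field 10: joint orientation in degrees (X, Y & Z)
                float rx = float.Parse(f[i++], CultureInfo.InvariantCulture);
                float ry = float.Parse(f[i++], CultureInfo.InvariantCulture);
                float rz = float.Parse(f[i++], CultureInfo.InvariantCulture);
            }
        }
    }
}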

How to utilize green screen for better volumetric scenes or videos

Here is how to utilize a green (or other color) screen for high quality volumetric scenes or videos. If you need practical examples, please look at the GreenScreenDemo1- and GreenScreenDemo2-scene in KinectDemos/GreenScreenDemo-folder.

1. First, please set the ‘Get color frames’-setting of the KinectManager-component in the scene to ‘Color texture’ and the ‘Get depth frames’-setting to ‘Raw depth data’.
2. Create an object in the scene to hold the green screen components. This object is called ‘GreenScreenMgr’ in the green screen demo scenes.
3. Add the BackgroundRemovalManager- and BackgroundRemovalByGreenScreen-scripts from KinectScripts-folder as components to the object. The green screen component is implemented as a background-removal filter, similar to the other BR filters.
4. Set the ‘Green screen color’-setting of the BackgroundRemovalByGreenScreen-component. This is the color of the physical green screen. By default it is green, but you can use whatever color screen you have (even some towels, but they should not be black, gray or white). To find out the screen’s color, you could take a photo or screenshot of your setup and then use a color picker in any picture editor to get the color’s RGB values.
5. You can utilize the other settings of the BackgroundRemovalByGreenScreen-component, to adjust the foreground filtering according to your needs, after you start the scene later.
6. Set the ‘Foreground image’-setting of the BackgroundRemovalManager-component to point to any RawImage-object in the scene, to get the green screen filtered output image displayed there.
7. Alternatively, if you need a volumetric output in the scene, add a Quad-object in the scene (menu ‘3D Object / Quad’). This is the ForegroundRenderer-object in the GreenScreenDemo2-scene. Then add the ForegroundBlendRenderer-component from KinectScripts-folder as component to the object.
8. If you use the ForegroundBlendRenderer-component, set its ‘Invalid depth value’ to the distance between the camera and the green screen, in meters. This value will be used as a depth value for the so called invalid pixels in the depth image. This will provide more consistent, high quality foreground image.
9. If you use the ForegroundBlendRenderer-component and you need to apply the scene lighting to the generated foreground mesh, please enable the ‘Apply lighting’-setting of the ForegroundBlendRenderer-component. By default this setting is disabled.
10. Run the scene to see the result. You can then adjust the settings of the BackgroundRemovalByGreenScreen-component to match your needs as well as possible. Don’t forget to copy the changed settings back to the component, when you stop the scene.

 

83 thoughts on “Azure Kinect Tips & Tricks”


  2. Hi, I’m using your asset.

    I’m working on a project with two Kinect-v2 sensors connected to the same machine,
    but only one Kinect should be used in a single scene at a time.

    How do I use the desired Kinect, when two Kinects are connected?

    • Sorry, but the K2-asset does not support multiple sensor setups. In this regard, why would you use two sensors, if only one is used in a single scene?

  3. Hi, Rumen F.
    I’m using Azure Kinect and followed your Tips & Tricks to open a second device, but Unity shows these errors: “Can’t create body tracker for Kinect4AzureInterface1!” and “AzureKinectException: result = K4A_RESULT_FAILED”. How can I fix this error for multiple devices?

  4. Hi,

    Can I access the coordinates (x, y, z) of the point cloud and their colors (RGB)?

    If so, could you explain how?

    • If you mean the SceneMeshDemo or UserMeshDemo, for performance reasons the point clouds there are generated by shaders. If you need script access to the mesh coordinates, please replace SceneMeshRendererGpu-component with SceneMeshRenderer (or UserMeshRendererGpu with UserMeshRenderer). They generate the meshes on CPU and you can have access to everything there.

  5. To start with: LOVE the Unity asset from the Azure Kinect examples you created.
    I have one question:
    How can I detect multiple tracked users?
    In the KinectManager it says ‘Max tracked users’ should be set to “0”,
    but it still only detects 1 person. What am I forgetting? What do I need to add to my scene?

    • Thank you! To your question: All components in the K4A-asset related to user tracking have a setting called ‘Player index’. If you need an example, please open the KinectAvatarsDemo1-scene in KinectDemos/AvatarDemo-folder. Then look at the AvatarController-component of the U_CharacterBack or U_CharacterFront objects in the scene. The ‘Player index’ setting determines the tracked user – 0 means the 1st detected user, 1 – the 2nd detected user, 2 – the 3rd detected user, etc. If you change this setting, you can track different users. Lastly, look at the KinectAvatarsDemo4-scene and its UserAvatarMatcher-component. This component creates and destroys avatars in the scene automatically, according to the currently detected users.

      The ‘Max tracked users’-setting of KM determines the maximum number of users that may be tracked at the same time. For instance, if there are many people in the room, the users that need to be really tracked could be limited by distance or by the max number of users (e.g. if your scene needs only one or two users).

      • How can I regulate the maximum distance? In the KinectManager script I tried to set maxUserDistance to 2.0f, but it doesn’t work.

      • As answered a few days ago, ‘MaxUserDistance = 2’ would mean that users whose distance to the sensor is more than 2 meters will not be detected (or will be lost).

  6. How do I use the background removal to display the head only?
    I have an idea that I could use an image mask on the raw image,
    but the overlay uses a world-space position.
    Can I get the screen position?
    Or is there a simpler method to solve it?

    • Please open BackgroundRemovalManager.cs in the KinectScripts-folder and find ‘kinectManager.GetUserBoundingBox’. Comment out the invocation of this method, and replace it with the following code:

      bool bSuccess = kinectManager.IsJointTracked(userId, KinectInterop.JointType.Head);
      Vector3 pMin = kinectManager.GetJointPosition(userId, KinectInterop.JointType.Head);
      Vector3 pMax = pMin;

      Then adjust the offsets around the head joint for each axis (x, y, z) in the lines below, where posMin, posMaxX, posMaxY, posMaxZ are set. The shader should filter out all points that are not within the specified cuboid coordinates.

  7. Is there a way to make it so that it does NOT hide the joints it’s uncertain of? I believe that used to be the behavior, but now the skeleton keeps flickering.

    • There is a setting of the KinectManager-component in the scene, called ‘Ignore inferred joints’. You can use it to determine whether the inferred joints should be considered as tracked or not.

      At the same time, I’m not quite sure what you mean by “skeleton keeps flickering”. May I ask you to e-mail me and share some more details on how to reproduce the issue you are having?

    • Hi Ruben. Yes, I have seen the floor detection sample and find it quite useful. It could replace the current way of detecting sensor rotation in the K4A-asset, or even enhance it for detecting the height above the ground, as well. It’s on my to-do list. But I’m still thinking how to implement it, in order not to lose performance, while processing point clouds on each IMU frame. Please feel free to e-mail me, if you have done something in this regard, or if you have fresh ideas.

      • Sorry to bother you again Rumen, I’ve sent you an email, but I got no response, so I guess it ended in the spam folder. Can you check it?
        Thanks,
        Ruben

  8. Hi! Does anybody know how to set up the Azure Kinect in portrait orientation (physically turning the sensor), and how to then set up the project to use it in a 1080×1920 Unity project? I’m not able to do it following the information nor the posts about this… Thanks and sorry!

    • Hi, when you turn the sensor sideways, you would need to turn your monitor sideways, too. Don’t forget to set the sensor interface’s transform rotation in the scene, too. See this tip: https://rfilkov.com/2019/08/26/azure-kinect-tips-tricks/#t9 And, if you are using body tracking, please open DepthSensorBase.cs in KinectScripts/Interfaces-folder, look for ‘K4ABT_SENSOR_ORIENTATION_DEFAULT’ and change it to ‘K4ABT_SENSOR_ORIENTATION_CLOCKWISE90’ (or ‘K4ABT_SENSOR_ORIENTATION_COUNTERCLOCKWISE90’).

      • Hi Rumen F. Really, thanks! Just one more and last question… When turning my monitor sideways, must I leave the Windows 10 display settings on “Landscape” orientation, or must I also configure the display as “Portrait”? It seems to be the first option…?

      • I think everything should be left as it was, just turned sideways. This way you will get the full display resolution. But feel free to experiment a bit, just to find out what fits the best.

      • Hi! I’m not able to solve it using the Fitting Room 1 sample. I’ve done all the changes: sensor transform, device configuration in the interface,… but no luck… With the CLOCKWISE setting, it still works well in the “NORMAL” orientation of the device!!! Even with the setting:

        bodyTracker = new BodyTracking(calibration, k4abt_sensor_orientation_t.K4ABT_SENSOR_ORIENTATION_CLOCKWISE90, k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU, 0);

        It’s strange!!!

      • Ok… It seems to be something related to the Fitting Room 1 sample… The Fitting Room 2 sample works perfectly… And the other samples also work fine with the device orientation clockwise and counterclockwise… Strangely, the Fitting Room 1 sample does not! Any clue???

      • This portrait mode changes the world axes, as well. Up becomes left, down becomes right, left becomes down, etc. This makes the gesture and pose recognition not work correctly. I think this is the problem with FR1-demo. In this regard, please find the KinectManager-component in the scene, change its ‘Player calibration pose’-setting from ‘Tpose’ to ‘None’ and then try again.

  9. Hi,
    I set up two user images and a Kinect camera, and set the player indices to 0 and 1. I also set ‘Max tracked users’ to 2 in the KinectManager.
    My problem is that every user image shows player 1’s image.
    How can I detect multiple users correctly in the background removal?

    • Please set the ‘Player index’-setting of the BackgroundRemovalManager-component to -1, to get both users on the image. Also, please use only one UserImage, controlled by one of the users, because the ForegroundToRenderer-component expects only one BackgroundRemovalManager in the scene. For two separate user images, you would need to modify ForegroundToRenderer.cs a bit and have two BR-managers in the scene, for both user 0 and 1.

      • I changed backManager to public in ForegroundToRenderer and connected each user image to its own BackgroundRemovalManager (and also changed the related player indices). But both images show nothing. What did I do wrong?

      • I suppose you haven’t done anything wrong. There is probably an issue in the BackgroundRemovalManager shaders in this case. I need to take a look.

    • Thank you! Yes, I managed to reproduce the issue. The fix will be available in the next release. If you are in a hurry, feel free to e-mail me, to get the updated package a.s.a.p. Please don’t forget to mention your invoice number in the e-mail, as well.

  10. Hello Rumen,
    I’ve a problem: each time I try to launch one of the examples, Unity crashes. Sorry if someone already reported the same issue, but I didn’t find my answer ^^’

    Here is a part of my error.log file :

    KERNELBASE.dll caused an Unknown exception type (0xc06d007f)
    in module KERNELBASE.dll at 0033:47d4a799.

    Error occurred at 2020-05-15_163952.
    C:\Program Files\Unity\Hub\Editor\2019.3.13f1\Editor\Unity.exe
    […]
    Stack Trace of Crashed Thread 15464:
    0x00007FFD47D4A799 (KERNELBASE) RaiseException
    0x00007FFCFDF2AFEB (k4abt) k4abt_tracker_destroy
    [..]
    0x00000245B2A75405 (Microsoft.Azure.Kinect.Sensor) Microsoft.Azure.Kinect.Sensor.NativeMethods.k4abt_tracker_create()
    0x00000245B2A730A3 (Microsoft.Azure.Kinect.Sensor) Microsoft.Azure.Kinect.Sensor.BodyTracking..ctor()
    0x00000245B2A3D483 (Assembly-CSharp) com.rfilkov.kinect.DepthSensorBase.InitBodyTracking()
    0x00000245B29CA161 (Assembly-CSharp) com.rfilkov.kinect.Kinect4AzureInterface.OpenSensor()
    0x00000245B29C0CF6 (Assembly-CSharp) com.rfilkov.kinect.KinectManager.StartDepthSensors()
    0x00000245B29B5AF3 (Assembly-CSharp) com.rfilkov.kinect.KinectManager.Awake()
    0x00000245AC86F4D8 (mscorlib) System.Object.runtime_invoke_void__this__()
    0x00007FFCD9ECCBA0 (mono-2.0-bdwgc) mono_get_runtime_build_info
    0x00007FFCD9E52112 (mono-2.0-bdwgc) mono_perfcounters_init
    0x00007FFCD9E5B10F (mono-2.0-bdwgc) mono_runtime_invoke

    Thank you very much for your help !

    • Hi Adrien, As far as I see, the body tracking crashes at scene start. I assume you’re using the latest release of the K4A-asset. In this regard, please check if:
      1. You have installed Body Tracking SDK v1.0.1.
      2. The Body Tracking SDK is installed into its by-default folder ‘C:\Program Files\Azure Kinect Body Tracking SDK’.
      3. Azure Kinect Body Tracking Viewer works as expected.

      If the problem persists, please e-mail me and attach the Editor’s log-file after the crash has occurred. Here is where to find the Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html

      • Thank you very much for your help and the time you spent helping me. I really appreciate it and am very thankful for your help and your devotion!

      • I’m having the same issue. Is there something I can implement to avoid this crash? Let me know if you need to see my log file as well.

      • Please make sure you’re using the latest version of the K4A-asset (currently v1.12.1). Please also look at my comment above and check if everything is OK. Then try again, and if the crash persists, please e-mail me your Editor’s log file, so I can take a look. Here is where to find the Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html

  11. Hello Rumen F.

    I have a question about MultiCameraSetup.scene.

    Two Azure Kinects were recognized,
    but the calibration remains at 0%, even when one user is visible.

    Do you know why?

    Thank you

    • Hi, only one user should be visible to both cameras, and should move a bit, as well. I don’t think there is any other significant limitation.

      • Hello Rumen F.

        When I tried again on another PC,
        it worked fine and I was able to confirm the operation.

        Perhaps the CPU of the first one was just too slow.

        Thank you very much.

  12. I noticed a bunch of really large dlls are copied over to the root folder after playing a scene. Can I get a list of what to put in my gitignore for my private source control? Thank you!

  13. Hey Rumen,
    I need to record high quality rgb streams from 4 kinects. Is it possible to control white balance with your wrappers? Thank you!

  14. Hello Rumen ! Excellent Project !
    I am trying to align the avatar body, tracked by the Azure Kinect DK, with the Oculus VR headset in the scene. What part of the AvatarController can I tap into to accomplish that?
    Thank You !

    • Hi, I think you need to enable the ‘External root motion’-setting of the AvatarController-component, and add the HmdHeadMover from the KinectScripts-folder as component to the humanoid model in the scene. The head mover should follow the headset’s position and the avatar controller will control the body joint orientations. That should be enough.

  15. Hello, I use the background removal and set the Azure Kinect color resolution to 4096 x 3072,
    but the output looks too blurry for that resolution (I have already set ‘Apply Blur Filter’ to false).
    The background removal uses a lot of internal shaders, which I find hard to read.
    Can I modify the background removal script to output a higher resolution?

    • Hi, the background removal scenes (and all other scenes) use the color camera resolution you set. Please note, the maximum FPS of 4096×3072 is 15 instead of 30. This wouldn’t explain the unexpected blurriness though. What depth mode do you use in combination with the color-camera resolution?

      Also, may I ask you to e-mail me a short video clip (or some screenshots) of the Kinect4AzureInterface-settings and what you get on screen after running the scene. It may help me understand better your issue.

  16. Hello Rumen F.

    I have some questions about MultiCameraSetup.scene.

    4 Azure Kinects were recognized. With all of them set to ‘Standalone’, the calibration reaches 100%, but this calibration is wrong, right?
    With 1 ‘Master’ and 3 ‘Subordinate’, the calibration remains at 0%.

  17. Hello
    Maybe someone needs to move in the space during calibration? Perhaps I did it the wrong way. Although the 4 devices are set to Standalone and are calibrated, when the result is applied to the CG model, the real-time mocap is very laggy. The PC used has a high-end configuration, so it should not be a hardware problem. What is the correct way?

    • Hi, sorry for the delay, but (as I said) I don’t work at weekends.

      Now to your issues:
      1. Yes, there should be one (and only one) person moving in the intersection area of all cameras, in order for the calibration process to complete.
      2. I’m not sure about the configuration of your cameras. If they are all standalone, they should be set as Standalone in the scene. If they are Master/Subs, they should be set that way in the scene too. The configuration should match the reality.
      3. Please look at this tip regarding the multi-camera calibration in the K4A-asset: https://rfilkov.com/2019/08/26/azure-kinect-tips-tricks/#t10
      4. If you still have issues calibrating the cameras, please e-mail me with some more details about your issues, and we can continue the discussion there.

  18. With 1 Azure Kinect, action recognition is possible. There is some shaking, but to an acceptable degree – about the same as with Kinect-v2. But with 4 Azure Kinects the hands shake very badly. What methods can be used to adjust the hand recognition? In the depth image the hands are also often not recognized.

    • Yes, this is the main goal, although the multi-camera tracking currently has some issues when tracking specific joints, like the hands for instance.
