Azure Kinect Tips & Tricks

Well, I think it’s time to share some tips, tricks and examples regarding the K4A-asset, as well. Although similar to the K2-asset, this package has some features that differ significantly from the previous one. It supports multiple sensors in one scene, as well as different types of depth sensors.

The configuration and detection of the available sensors is less automatic than before, but allows more complex multi-sensor configurations. In this regard, check the first 4-5 tips in the list below. Please also consult the online documentation, if you need more information regarding the available demo scenes and depth sensor related components.

For more tips and tricks, please look at the Kinect-v2 Tips, Tricks & Examples. They were written for the K2-asset, but for the sake of backward compatibility many of the demo scenes, components and APIs mentioned there are the same or very similar in the K4A-asset.

Table of Contents:

How to reuse the K4A-asset functionality in your Unity project
How to update your existing K2-project to the K4A-asset
How to set up the KinectManager and Azure Kinect interface in the scene
How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes
How to use RealSense sensor instead of Azure Kinect in the demo scenes
How to set up multiple Azure Kinect (or other) sensors in the scene
How to remove the SDKs and sensor interfaces you don’t need
Why is the K2-asset still around
How to play a recording instead of utilizing live data from a connected sensor
How to set up sensor’s position and rotation in the scene
How to calibrate and set up multiple cameras in the scene
How to make the point cloud demos work with multiple cameras
How to send the sensor streams across the network
How to get the user or scene mesh working on Oculus Quest
How to get the color-camera texture in your code
How to get the depth-frame data or texture in your code
How to get the position of a body joint in your code
How to get the orientation of a body joint in your code
What is the file-format used by the body recordings
How to utilize green screen for better volumetric scenes or videos
How to integrate the Cubemos body tracking with the RealSense sensor interface
How to utilize Apple iPhone-Pro or iPad-Pro as depth sensors
How to replace Azure Kinect with Femto Bolt or Mega sensors

How to reuse the K4A-asset functionality in your Unity project

Here is how to reuse the K4A-asset scripts and components in your Unity project:

1. Copy folder ‘KinectScripts’ from the AzureKinectExamples-folder of this package to your project. This folder contains all needed components, scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the AzureKinectExamples-folder of this package to your project. This folder contains some needed libraries and resources.
3. Copy the sensor-SDK specific sub-folders (Kinect4AzureSDK, KinectSDK2.0 & RealSenseSDK2.0) from the AzureKinectExamples/SDK-folder of this package to your project. They contain the plugins and wrapper classes for the respective sensor types.
4. Wait until Unity detects, imports and compiles the newly copied resources and scripts.
5. Please do not share the KinectDemos-folder in source form or as part of public repositories.

How to update your existing K2-project to the K4A-asset

If you have an existing project utilizing the K2-asset and would like to update it to use the K4A-asset, please look at the following steps. Please also note that not all K2-asset functions are currently supported by the K4A-asset. For instance, the face-tracking components and API, as well as speech recognition, are not currently supported. Look at this tip for more info.

1. First off, don’t forget to make a backup of the existing project, or copy it to a new folder. Then open it in Unity editor.
2. Remove the K2Examples-folder from the Unity project.
3. Import the K4A-asset from Unity asset store, or from the provided Unity package.
4. After the import finishes, check the console for error messages. If there are any and they are easy to fix, just go ahead and fix them. An easy fix would be to add ‘using com.rfilkov.kinect;’ or ‘using com.rfilkov.components;’ to your scripts that use the K2-asset API (see the example after these steps). Missing namespaces are a common cause of such errors.
5. If the error fixes are more complicated, you can contact me for support, as long as you are a legitimate customer. I tried to keep all components as close as possible to their counterparts in the K2-asset, but there may be some slight differences. For instance, the K2-asset supports only one sensor, while the K4A-asset can support many sensors at once. That’s why a sensor-index parameter may be needed by some API calls, as well.
6. Select the KinectController-game object in the scene and make sure the KinectManager’s settings are similar to the KinectManager settings in the K2-asset and correct for that scene. Some of the KinectManager settings in the K4A-asset are a bit different now, because of the larger amount of provided functionality.
7. Go through the other Kinect-related components in the scene, and make sure their settings are correct, too.
8. Run the scene to try it out. Compare it with the scene running in the K2-asset. If the output is not similar enough, look again at the component settings and at the custom API calls.
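
For reference, here is a minimal sketch of a script that uses the K4A-asset API together with the namespaces mentioned in step 4. The script name and its logic are hypothetical, just to illustrate the needed ‘using’-directives:

using UnityEngine;
using com.rfilkov.kinect;       // core K4A-asset classes - KinectManager, KinectInterop, etc.
using com.rfilkov.components;   // demo & utility components

// hypothetical example script, just to illustrate the needed 'using'-directives
public class MyKinectScript : MonoBehaviour
{
    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;

        if(kinectManager && kinectManager.IsInitialized())
        {
            // most K4A-asset API calls take an extra sensor- or player-index parameter
            bool userDetected = kinectManager.IsUserDetected(0);  // 0 means the 1st detected user
            Debug.Log("User detected: " + userDetected);
        }
    }
}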

How to set up the KinectManager and Azure Kinect interface in the scene

Please see the hierarchy of objects in any of the demo scenes, as a practical implementation of this tip:

1. Create an empty KinectController-game object in the scene. Set its position to (0, 0, 0), rotation to (0, 0, 0) and scale to (1, 1, 1).
2. Add the KinectManager-script from the AzureKinectExamples/KinectScripts-folder as a component to the KinectController-game object.
3. Select the frame types you need to get from the sensor – depth, color, IR, pose and/or body. Enable synchronization between frames, as needed. Check the user-detection settings and change them if needed, as well as the on-screen info you’d like to see in the scene.
4. Create a Kinect4Azure-game object as a child of KinectController in Hierarchy. Set its position and rotation to match the Azure Kinect sensor’s position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like.
5. Add the Kinect4AzureInterface-script from the AzureKinectExamples/KinectScripts/Interfaces-folder to the newly created Kinect4Azure-game object.
6. Change the default settings of the component, if needed. For instance, you can select a different color camera mode, depth camera mode or device sync mode, as well as the min & max distances used for creating the depth-related images.
7. If you’d like to replay a previously saved recording file, set ‘Device streaming mode’ to ‘Play recording’ and set the full path to the recording file in the ‘Recording file’-setting.
8. Run the scene, to check if everything works as expected. If you prefer to create this setup from code instead, see the sketch after these steps.
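
As an alternative to the manual setup above, here is a minimal sketch of how the same objects could be created from code at runtime. It assumes the default component names and the ‘com.rfilkov.kinect’-namespace of the K4A-asset; all component settings keep their default values and would still need to be adjusted, as described in the steps above:

using UnityEngine;
using com.rfilkov.kinect;

public class KinectSceneSetup : MonoBehaviour
{
    void Awake()
    {
        // empty KinectController-object at the world origin
        GameObject kinectController = new GameObject("KinectController");
        kinectController.transform.position = Vector3.zero;
        kinectController.transform.rotation = Quaternion.identity;

        // Kinect4Azure-child object, positioned at the physical sensor position (in meters)
        GameObject kinect4Azure = new GameObject("Kinect4Azure");
        kinect4Azure.transform.SetParent(kinectController.transform);
        kinect4Azure.transform.localPosition = new Vector3(0f, 1f, 0f);  // e.g. 1 meter above the floor
        kinect4Azure.AddComponent<Kinect4AzureInterface>();

        // add the KinectManager last, so its initialization can find the sensor interface
        kinectController.AddComponent<KinectManager>();
    }
}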

How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the KinectV2-child object.
5. Set the ‘Device streaming mode’ of its Kinect2Interface-component to ‘Connected sensor’.
6. If you’d like to replay a previously saved recording file, you should play it in the ‘Kinect studio v2.0’ (part of Kinect SDK 2.0).
7. Run the scene, to check if the Kinect-v2 sensor interface is used instead of Azure-Kinect interface.

How to use RealSense sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the RealSense-child object.
5. Set the ‘Device streaming mode’ of its RealSenseInterface-component to ‘Connected sensor’.
6. If you’d like to replay a previously saved recording file, select ‘Device streaming mode’ = ‘Play recording’ and set the full path to the recording file in the ‘Recording file’-setting.
7. Run the scene, to check if the RealSense sensor interface is used instead of Azure-Kinect interface.

How to set up multiple Azure Kinect (or other) sensors in the scene

Here is how to set up a 2nd (as well as 3rd, 4th, etc.) Azure Kinect camera interface in the scene:

1. Unfold the KinectController-object in the scene.
2. Duplicate the Kinect4Azure-child object.
3. Set the ‘Device index’ of the new object to 1 instead of 0. Other connected sensors should have device indices of 2, 3, etc.
4. Change ‘Device sync mode’ of the connected cameras, as needed. One sensor should be ‘Master’ and the others – ‘Subordinate’, instead of ‘Standalone’.
5. Set the position and rotation of the new object to match the sensor’s position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like.

How to remove the SDKs and sensor interfaces you don’t need

If you work with only one type of sensor (most probably Azure Kinect), here is how to get rid of the extra SDKs in the K4A-asset. This will reduce the size of your project and build:

– To remove the RealSense SDK: 1. Delete ‘RealSenseInterface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the RealSenseSDK2.0-folder from AzureKinectExamples/SDK-folder.
– To remove the Kinect-v2 SDK: 1. Delete ‘Kinect2Interface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the KinectSDK2.0-folder from AzureKinectExamples/SDK-folder.

Why is the K2-asset still around

The ‘Kinect v2 Examples with MS-SDK and Nuitrack SDK’-package (or ‘K2-asset’ for short) is still around (and will be around for some time), because it has components and demo scenes that are not available in the new K4A-asset. For instance: the face-tracking components & demo scenes, the hand-interaction components & scenes and the speech-recognition component & scene. This is due to various reasons. For instance, the SDK API does not yet provide this functionality, or I have not managed to add this functionality to the K4A-asset yet. As long as these (or replacement) components & scenes are missing in the K4A-asset, the K2-asset will be kept around.

On the other hand, the K4A-asset has significant advantages, as well. It works with the most up-to-date sensors (like Azure Kinect & RealSense), allows multi-camera setups, has a better internal structure and gets better with each new release (with more components, functions and demo scenes).

How to play a recording instead of utilizing live data from a connected sensor

The sensor interfaces in the K4A-asset provide the option to play back a recording file, instead of getting data from a physically connected sensor. Here is how to achieve this for all types of sensor interfaces:

1. Unfold the KinectController-game object in Hierarchy.
2. Select the proper sensor interface object. If you need to play back a Kinect-v2 recording, please skip steps 3, 4 & 5, and look at the note below.
3. In the sensor interface component in Inspector, change ‘Device streaming mode’ from ‘Connected sensor’ to ‘Play recording’.
4. Set ‘Recording file’ to the full path of a previously saved recording. This is the MKV-file (in case of Kinect4Azure) or OUT-file (in case of RealSense sensors).
5. Run the scene to check if it works as expected.

Note: In case of Kinect-v2, please start ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0) and open the previously saved XEF-recording file. Then go to the Play-tab, press the Connect-button and play the file. The scene utilizing Kinect2Interface should be run before you start playing the recording file.

How to set up sensor’s position and rotation in the scene

Here is how to set up the sensor’s transform (position and rotation) in the scene. In case of a multi-camera setup, this should be done at least for the 1st sensor:

1. Unfold the KinectController-game object in Hierarchy, and select the respective sensor interface object.
2. If you are NOT using Azure Kinect with ‘Detect floor for pose estimation’-setting enabled, please measure manually the distance between the floor and the camera, in meters. Set it as Y-position in the Transform-component of the sensor interface object. Leave the X- & Z-values as 0.
3. Set the ‘Get pose frames’-setting of KinectManager-component in the scene to ‘Display info’. Make sure the rotation of the sensor interface object transform is set to (0, 0, 0). Start the scene.
4. You should see the detected sensor’s position and rotation on the screen. Write down the rotation values. If you are using Azure Kinect with ‘Detect floor for pose estimation’ enabled, write down the detected position values too.
5. Select again the sensor interface object in the scene. Set the written down values as X-, Y- & Z-rotation of its Transform-component. If the sensor’s position was detected too, set the position values of the Transform-component, as well.
6. Set the ‘Get pose frames’-setting of KinectManager-component in the scene back to ‘None’, to avoid the overhead of IMU frames processing.
7. Start the scene again, to check if the world coordinates are correct. In case of sensors that are turned sideways, you should turn the monitor sideways, too.

If you are using Azure Kinect and the sensor is turned sideways (+/-90 degrees) or upside-down (180 degrees), please find the ‘Kinect4AzureInterface’-component in the scene, and set its ‘Body tracking sensor orientation’-setting accordingly:
– K4ABT_SENSOR_ORIENTATION_DEFAULT corresponds to a Z-rotation of 0 degrees,
– K4ABT_SENSOR_ORIENTATION_CLOCKWISE90 corresponds to a Z-rotation of 270 (or -90) degrees,
– K4ABT_SENSOR_ORIENTATION_COUNTERCLOCKWISE90 corresponds to a Z-rotation of 90 degrees, and
– K4ABT_SENSOR_ORIENTATION_FLIP180 corresponds to a Z-rotation of 180 degrees.
This hints the Body Tracking SDK to take the respective sensor rotation into account, when estimating the positions and rotations of the body joints. See also the helper sketch below.
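
As a quick reference, here is a small hypothetical helper method (to be placed in one of your scripts) that maps the sensor’s Z-rotation in the scene to the orientation value listed above:

// hypothetical helper: maps the sensor's Z-rotation in the scene to the
// respective 'Body tracking sensor orientation'-value, according to the list above
public static string GetBodyTrackingOrientation(float zRotationDegrees)
{
    // normalize the angle to the [0, 360) range
    float z = ((zRotationDegrees % 360f) + 360f) % 360f;

    if (z >= 45f && z < 135f)
        return "K4ABT_SENSOR_ORIENTATION_COUNTERCLOCKWISE90";  // Z-rotation of 90 degrees
    if (z >= 135f && z < 225f)
        return "K4ABT_SENSOR_ORIENTATION_FLIP180";             // Z-rotation of 180 degrees
    if (z >= 225f && z < 315f)
        return "K4ABT_SENSOR_ORIENTATION_CLOCKWISE90";         // Z-rotation of 270 (or -90) degrees
    return "K4ABT_SENSOR_ORIENTATION_DEFAULT";                 // Z-rotation of 0 degrees
}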

How to calibrate and set up multiple cameras in the scene

To calibrate multiple cameras, connected to the same machine (usually two or more Azure Kinect sensors), you can utilize the MultiCameraSetup-scene in KinectDemos/MultiCameraSetup-folder. Please open this scene and do as follows:

1. Create the needed sensor interface objects, as children of the KinectController-game object in Hierarchy. By default there are two Kinect4Azure-objects there, but if you have more sensors connected, feel free to create new, or duplicate one of the existing sensor interface objects. Don’t forget to set their ‘Device index’ and ‘Device sync mode’-settings accordingly.
2a. Set up the position and rotation of the 1st sensor-interface object (Kinect4Azure0 in the Hierarchy of the scene). See the tip above on how to do that.
2b. Select the ‘KinectController’-object in Hierarchy and enable ‘Sync multi-cam frames’-setting of its KinectManager-component, if it is currently disabled.
3. Run the scene. All configured sensors should light up. During the calibration process one (and only one) user should stay visible to all configured sensors. The calibration progress and the quality of the calibration will be displayed on screen. The user meshes, as seen by all cameras, should be visible too and may help you visually track the quality of the calibration. After the calibration completes, the calibration-config file ‘multicam_config.json’ will be saved to the root folder of your Unity project.
4. If, during the calibration process, the user cannot be found in the intersection area of the cameras, please select the ‘KinectController’-object in Hierarchy again, disable the ‘Sync multi-cam frames’-setting of its KinectManager-component and then try to run the scene again.
5. After the automatic calibration completes, you can manually adjust the rotations and positions of the calibrated sensors (except for the first one), to make the user meshes match each other as closely as possible. To orbit around the user meshes for better visibility, press Alt + mouse drag. When you are ready, press the ‘Save’-button to save the changes to the calibration-config file.
6. To test the quality of the saved calibration file, select the ‘KinectController’-object in Hierarchy and enable the ‘Use multi-cam config’-setting of its KinectManager-component. Then run the scene again and check how closely the meshes of the detected user, as seen from the different camera perspectives, match each other.
7. Feel free to re-run the MultiCameraSetup-scene with ‘Use multi-cam config’-setting disabled, if you are not satisfied with the current quality of the multi-camera calibration results.
8. To use the saved calibration-config in any other scene, open the respective scene and enable the ‘Use multi-cam config’-setting of the KinectManager-component in that scene. When you run it, the KinectManager should recreate and set up the sensor interfaces, according to the saved calibration config. Run the scene to check if it works as expected. Please note, in case of multiple cameras, the user-body-merger script will try to automatically merge the user bodies detected by all cameras, according to their proximity to each other.

How to make the point cloud demos work with multiple cameras

To make the point cloud demo scenes work with multiple calibrated cameras, please follow these steps:

1. Follow the tip above to calibrate the cameras, if you haven’t done it yet.
2. Open the respective point-cloud scene (e.g. SceneMeshDemo or UserMeshDemo in the KinectDemos/PointCloudDemo-folder).
3. Enable the ‘Use multi-cam config’-option of the KinectManager-component in the scene.
4. Duplicate the SceneMeshS0-object as SceneMeshS1 (in SceneMeshDemo-scene), or User0MeshS0-object as User0MeshS1 (in UserMeshDemo-scene).
5. Change the ‘Sensor index’-setting of their Kinect-related components from 0 to 1. This way you will get a 2nd mesh in the scene, from the 2nd sensor’s point of view.
6. If you have more cameras, continue duplicating the same objects (as SceneMeshS2, SceneMeshS3, etc. in SceneMeshDemo, or User0MeshS2, User0MeshS3, etc. in UserMeshDemo), and change their respective ‘Sensor index’-settings to point to the other cameras, in order to get meshes from these cameras’ POV too.
7. Run the scene to see the result. The sensor meshes should align properly, if the calibration done in MultiCameraSetup-scene is good.

How to send the sensor streams across the network

To send the sensor streams across the network, you can utilize the KinectNetServer-scene and the NetClientInterface-component, as follows:

1. On the machine, where the sensor is connected, open and run the KinectNetServer-scene from KinectDemos/NetworkDemo-folder. This scene utilizes the KinectNetServer-component, to send the requested sensor frames across the network, to the connected clients.
2. On the machine, where you need the sensor streams, create an empty game object, name it NetClient and add it as child to the KinectController-game object (containing the KinectManager-component) in the scene. For reference, see the NetClientDemo1-scene in KinectDemos/NetworkDemo-folder.
3. Set the position and rotation of the NetClient-game object to match the sensor’s physical position and rotation. Add NetClientInterface from KinectScripts/Interfaces as component to the NetClient-game object.
4. Configure the network-specific settings of the NetClientInterface-component. Either enable ‘Auto server discovery’ to get the server address and port automatically (LAN only), or set the server host (and optionally the base port) explicitly. Enable ‘Get body-index frames’, if you need them in that specific scene. The body-index frames may cause extra network traffic, but are needed in many scenes, e.g. for background removal or user meshes.
5. Configure the KinectManager-component in the scene to receive only the needed sensor frames, and decide whether frame synchronization is needed or not. Keep in mind that more sensor streams mean more network traffic, and frame sync would increase the lag on the client side.
6. If the scene has UI, consider adding a client-status UI-Text as well, and reference it with the ‘Client status text’-setting of the NetClientInterface-component. This may help you track down various networking issues, like a connection that cannot be established, temporary disconnections, etc.
7. Run the client scene in the Editor to make sure it connects successfully to the server. If it doesn’t, check the console for error messages.
8. If the client scene needs to run on a mobile device, build it for the target platform and run it on the device, to check if it works there, as well.

How to get the user or scene mesh working on Oculus Quest

To get the UserMeshDemo- or SceneMeshDemo-scene working on Oculus Quest, please do as follows:

1. First off, you need to do the usual Oculus-specific setup, i.e. import the ‘Oculus Integration’-asset from the Unity asset store, enable ‘Virtual reality supported’ in ‘Player Settings / XR Settings’ and add ‘Oculus’ as a ‘Virtual Reality SDK’ there, as well. In more recent Unity releases this is replaced by the ‘XR Plugin Management’ group of project settings.
2. Oculus Quest will need to get the sensor streams over the network. In this regard, open NetClientDemo1-scene in KinectDemos/NetworkDemo-folder, unfold KinectController-game object in Hierarchy and copy the NetClient-object below it to the clipboard.
3. Open UserMeshDemo- or SceneMeshDemo-scene, paste the copied NetClient-object from the clipboard to the Hierarchy and then move it below the KinectController-game object. Make sure that ‘Auto server discovery’ and ‘Get body index frames’-settings of the NetClientInterface-component are both enabled. Feel free also to set the ‘Device streaming mode’ of the Kinect4AzureInterface-component to ‘Disabled’. It will not be used on Quest.
4. Add the KinectNetServer-scene from the KinectDemos/NetworkDemo-folder to the ‘Scenes in Build’-setting of Unity ‘Build settings’. Then build it for the Windows-platform and architecture ‘x86_64’. Alternatively, create a second Unity project, import the K4A-asset in there and open the KinectNetServer-scene. This scene (or executable) will act as network server for the sensor data. You should run it on the machine, where the sensor is physically connected.
5. Remove ‘KinectNetServer’ from the list of ‘Scenes in Build’, and add UserMeshDemo- or SceneMeshDemo-scene instead.
6. Switch to the Android-platform, open ‘Player settings’, go to ‘Other settings’ and make sure ‘OpenGLES3’ is the only item in the ‘Graphics APIs’-list. I would also recommend disabling the ‘Multithreaded rendering’, but this is still a subject for further experiments.
7. Start the KinectNetServer-scene (or executable). Connect your Oculus Quest HMD to the machine, then build, deploy and run the UserMeshDemo- or SceneMeshDemo-scene to the device. After the scene starts, you should see the user or scene mesh live on the HMD. Enjoy!

How to get the color-camera texture in your code

Please check if the ‘Get color frames’-setting of the KinectManager-component in the scene is set to ‘Color texture’. Then use the following snippet in the Update()-method of your script:

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    int sensorIndex = 0;  // 0 means the 1st sensor
    Texture texColor = kinectManager.GetColorImageTex(sensorIndex);
    // do something with the texture
}
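
For example, here is a minimal sketch of a component that displays the color-camera feed on a UI RawImage. The component name and its ‘colorImage’-field are hypothetical and would need to be set up in the Inspector:

using UnityEngine;
using UnityEngine.UI;
using com.rfilkov.kinect;

public class ColorImageDisplay : MonoBehaviour
{
    public RawImage colorImage;  // UI RawImage that displays the color-camera feed
    public int sensorIndex = 0;  // 0 means the 1st sensor

    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;

        if(kinectManager && kinectManager.IsInitialized() && colorImage)
        {
            // assign the current color-camera texture to the RawImage
            colorImage.texture = kinectManager.GetColorImageTex(sensorIndex);
        }
    }
}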

How to get the depth-frame data or texture in your code

Please check if the ‘Get depth frames’-setting of the KinectManager-component in the scene is set to ‘Depth texture’ (if you need the texture) or to ‘Raw depth data’ (if you need the depth data only). Then use the following snippet in the Update()-method of your script:

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    int sensorIndex = 0;  // 0 means the 1st sensor
    Texture texDepth = kinectManager.GetDepthImageTex(sensorIndex);  // to get the depth frame texture
    ushort[] rawDepthData = kinectManager.GetRawDepthMap(sensorIndex);  // to get the raw depth frame data
    // do something with the texture or data
}

Please note, the raw depth data is an array of ushort values, with a size equal to (depthImageWidth * depthImageHeight). Each value represents the depth in mm for the respective depth frame point.
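
For example, here is a sketch that reads the depth (in mm) at the center of the depth image. It assumes the KinectManager also provides GetDepthImageWidth() & GetDepthImageHeight() per sensor, as its K2-asset counterpart did:

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    int sensorIndex = 0;  // 0 means the 1st sensor
    ushort[] rawDepthData = kinectManager.GetRawDepthMap(sensorIndex);

    int depthWidth = kinectManager.GetDepthImageWidth(sensorIndex);
    int depthHeight = kinectManager.GetDepthImageHeight(sensorIndex);

    if(rawDepthData != null && rawDepthData.Length == depthWidth * depthHeight)
    {
        // the raw depth data is row-major: index = y * width + x
        int centerIndex = (depthHeight / 2) * depthWidth + (depthWidth / 2);
        ushort centerDepthMm = rawDepthData[centerIndex];
        Debug.Log("Depth at the image center: " + centerDepthMm + " mm");
    }
}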

How to get the position of a body joint in your code

First, please make sure the ‘Get body frames’-setting of the KinectManager-component in the scene is set to something different than ‘None’. Then use the following snippet in the Update()-method of your script (and replace ‘HandRight’ below with the body joint you need):

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
int playerIndex = 0;  // 0 means the 1st detected user

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    if(kinectManager.IsUserDetected(playerIndex))
    {
        ulong userId = kinectManager.GetUserIdByIndex(playerIndex);

        if(kinectManager.IsJointTracked(userId, joint))
        {
            Vector3 jointPos = kinectManager.GetJointPosition(userId, joint);
            // do something with the joint position
        }
    }
}
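
For example, here is a minimal sketch of a component that makes its game object follow the selected joint of the tracked user (the component name is hypothetical):

using UnityEngine;
using com.rfilkov.kinect;

// sketch: moves this game object to the position of the tracked joint
public class JointPositionFollower : MonoBehaviour
{
    public KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
    public int playerIndex = 0;  // 0 means the 1st detected user

    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;

        if(kinectManager && kinectManager.IsInitialized() && kinectManager.IsUserDetected(playerIndex))
        {
            ulong userId = kinectManager.GetUserIdByIndex(playerIndex);

            if(kinectManager.IsJointTracked(userId, joint))
            {
                // GetJointPosition() returns the joint position in meters
                transform.position = kinectManager.GetJointPosition(userId, joint);
            }
        }
    }
}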

How to get the orientation of a body joint in your code

Again, please make sure the ‘Get body frames’-setting of the KinectManager-component in the scene is set to something different than ‘None’. Then use the following snippet in the Update()-method of your script (and replace ‘Pelvis’ below with the body joint you need):

KinectInterop.JointType joint = KinectInterop.JointType.Pelvis;
int playerIndex = 0;  // 0 means the 1st detected user
bool mirrored = false;

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    if(kinectManager.IsUserDetected(playerIndex))
    {
        ulong userId = kinectManager.GetUserIdByIndex(playerIndex);

        if(kinectManager.IsJointTracked(userId, joint))
        {
            Quaternion jointOrientation = kinectManager.GetJointOrientation(userId, joint, !mirrored);
            // do something with the joint orientation
        }
    }
}
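
Similarly, here is a minimal sketch of a component that applies the joint orientation to its game object, with a bit of smoothing to reduce jitter (the component name and the smooth factor are hypothetical):

using UnityEngine;
using com.rfilkov.kinect;

// sketch: rotates this game object according to the orientation of the tracked joint
public class JointOrientationFollower : MonoBehaviour
{
    public KinectInterop.JointType joint = KinectInterop.JointType.Pelvis;
    public int playerIndex = 0;    // 0 means the 1st detected user
    public bool mirrored = false;  // whether the scene is mirrored or not
    public float smoothFactor = 10f;

    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;

        if(kinectManager && kinectManager.IsInitialized() && kinectManager.IsUserDetected(playerIndex))
        {
            ulong userId = kinectManager.GetUserIdByIndex(playerIndex);

            if(kinectManager.IsJointTracked(userId, joint))
            {
                Quaternion jointOrientation = kinectManager.GetJointOrientation(userId, joint, !mirrored);
                // smooth the rotation a bit, to reduce jitter
                transform.rotation = Quaternion.Slerp(transform.rotation, jointOrientation, smoothFactor * Time.deltaTime);
            }
        }
    }
}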

What is the file-format used by the body recordings

The KinectRecorderPlayer-component can record or replay body-recording files. These recordings are text files, where each line represents a body frame at a specific moment in time. You can use this format description to replay or analyze the body-frame recordings in your own tools. Here is the format of each line (see the sample body frames below, for reference):

0. time in seconds, since the start of recording, followed by ‘|’. All other field separators are ‘;’.
This value is used by the KinectRecorderPlayer-component for time-sync, when it needs to replay the body recording.

1. Body-frame identifier. It should be ‘k4b’.
2. Body-frame timestamp, coming from the SDK. This field is ignored by the KinectManager.
3. Number of tracked bodies.
4. Number of body joints (32).
5. Space scale factor (3 numbers, for the X, Y & Z-axes).

Then the data for each tracked body follows:
6. Body tracking flag – 1 if the body is tracked, 0 if it is not tracked (the 5 zeros at the end of the lines below are for the 5 missing bodies).

If the body is tracked, the bodyId and the data for all body joints follow. If it is not tracked, the bodyId and the joint data below are skipped.
7. Body ID.

Then the body-joint data follows, once for each body joint (32 times), ordered by JointType (see KinectScripts/KinectInterop.cs):
8. Joint tracking state – 0 means not tracked, 1 – inferred, 2 – tracked.

If the joint is inferred or tracked, the joint position data follows. If it is not tracked, the joint position data (9) is skipped.
9. Joint position data, in meters (3 numbers, for the X, Y & Z-axes).
10. Joint orientation data, in degrees (3 numbers, for the X, Y & Z-axes).

And here are two body-frame samples, for reference:

4.285|k4b;805827551;1;32;-1;-1;1;1;1;2;-0.415;0.002;1.746;3.538;356.964;2.163;2;-0.408;-0.190;1.717;358.542;356.776;2.160;2;-0.402;-0.345;1.705;353.569;356.587;2.643;2;-0.392;-0.577;1.743;354.717;353.971;4.102;2;-0.386;-0.666;1.742;348.993;349.257;8.088;2;-0.357;-0.537;1.742;351.562;359.329;4.547;2;-0.202;-0.525;1.743;307.855;64.654;324.608;2;0.027;-0.614;1.570;15.579;121.943;301.289;2;-0.092;-0.812;1.468;345.353;117.828;300.655;2;-0.096;-0.899;1.409;345.353;117.828;220.655;2;-0.430;-0.541;1.734;352.718;351.386;16.178;2;-0.566;-0.578;1.714;315.412;285.925;25.871;2;-0.732;-0.648;1.466;15.003;217.635;48.440;2;-0.568;-0.825;1.383;337.354;236.803;353.364;2;-0.512;-0.802;1.288;337.354;236.803;358.364;2;-0.316;0.005;1.752;346.343;356.306;359.296;2;-0.305;0.436;1.694;17.148;355.929;359.284;2;-0.347;0.819;1.852;338.148;350.141;3.025;2;-0.319;0.893;1.664;337.809;349.421;3.293;2;-0.505;-0.001;1.740;350.880;356.484;1.804;2;-0.514;0.433;1.716;11.939;357.147;1.821;2;-0.510;0.833;1.844;334.003;20.297;2.223;2;-0.579;0.886;1.678;333.513;21.027;1.903;2;-0.356;-0.737;1.585;348.993;349.257;8.088;2;-0.333;-0.766;1.634;348.993;349.257;8.088;2;-0.292;-0.728;1.760;348.993;349.257;8.088;2;-0.388;-0.768;1.620;348.993;349.257;8.088;2;-0.470;-0.744;1.734;348.993;349.257;8.088;2;-0.155;-0.866;1.313;345.353;117.828;11.142;2;-0.156;-0.839;1.334;80.906;53.999;316.725;2;-0.539;-0.838;1.394;285.211;20.682;56.425;2;-0.519;-0.966;1.378;10.344;155.193;69.151

4.361|k4b;806494221;1;32;-1;-1;1;1;1;2;-0.404;-0.076;1.745;6.454;356.702;1.397;2;-0.399;-0.264;1.706;2.510;356.605;1.390;2;-0.395;-0.416;1.684;357.553;356.485;1.980;2;-0.384;-0.647;1.724;353.562;358.508;2.961;2;-0.380;-0.735;1.725;355.491;345.619;7.062;2;-0.350;-0.607;1.720;350.254;353.270;353.332;2;-0.198;-0.627;1.739;299.401;48.858;332.268;2;0.067;-0.682;1.613;3.271;109.329;295.800;2;0.018;-0.895;1.492;351.194;137.170;266.059;2;0.037;-0.997;1.483;351.194;137.170;186.059;2;-0.422;-0.610;1.718;348.712;3.006;24.969;2;-0.550;-0.670;1.730;313.287;304.828;18.589;2;-0.772;-0.716;1.528;358.776;240.673;49.456;2;-0.695;-0.892;1.363;319.830;227.016;6.959;2;-0.633;-0.897;1.270;319.830;227.016;11.959;2;-0.306;-0.074;1.751;344.427;356.157;0.339;2;-0.301;0.352;1.679;14.904;356.335;0.338;2;-0.347;0.737;1.820;0.651;347.614;3.214;2;-0.319;0.876;1.677;0.313;346.946;3.207;2;-0.493;-0.077;1.739;348.410;356.260;2.692;2;-0.508;0.352;1.696;7.027;357.126;2.657;2;-0.499;0.759;1.787;5.252;40.137;0.152;2;-0.589;0.893;1.696;4.784;40.810;0.213;2;-0.339;-0.787;1.565;355.491;345.619;7.062;2;-0.319;-0.821;1.611;355.491;345.619;7.062;2;-0.288;-0.798;1.742;355.491;345.619;7.062;2;-0.373;-0.821;1.593;355.491;345.619;7.062;2;-0.462;-0.811;1.703;355.491;345.619;7.062;2;0.024;-1.103;1.436;351.194;137.170;286.972;2;0.050;-0.997;1.413;349.990;58.882;331.104;2;-0.669;-0.904;1.379;279.467;85.405;345.866;2;-0.660;-1.033;1.395;25.016;150.346;66.683
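
For reference, here is a minimal sketch of how the header fields of a recorded line could be parsed in your own tool, according to the format description above (the class and method names are hypothetical):

using System;
using System.Globalization;

public static class BodyRecordingParser
{
    // sketch: parses the header fields of a single body-recording line, as described above
    public static void ParseBodyFrameLine(string line)
    {
        // the time is separated from the rest of the line by '|'
        string[] timeAndData = line.Split('|');
        float time = float.Parse(timeAndData[0], CultureInfo.InvariantCulture);

        // all other fields are separated by ';'
        string[] fields = timeAndData[1].Split(';');

        string frameId = fields[0];              // should be "k4b"
        long timestamp = long.Parse(fields[1]);  // SDK timestamp (ignored by the KinectManager)
        int numBodies = int.Parse(fields[2]);    // number of tracked bodies
        int numJoints = int.Parse(fields[3]);    // number of body joints (32)
        // fields[4..6] - space scale factors for the X, Y & Z-axes
        // the body & joint data described above starts at fields[7]

        Console.WriteLine($"t={time}s, id={frameId}, bodies={numBodies}, joints={numJoints}");
    }
}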

How to utilize green screen for better volumetric scenes or videos

Here is how to utilize a green (or other color) screen for high quality volumetric scenes or videos. If you need practical examples, please look at the GreenScreenDemo1- and GreenScreenDemo2-scenes in the KinectDemos/GreenScreenDemo-folder.

1. First, please set the ‘Get color frames’-setting of the KinectManager-component in the scene to ‘Color texture’ and the ‘Get depth frames’-setting to ‘Raw depth data’.
2. Create an object in the scene to hold the green screen components. This object is called ‘GreenScreenMgr’ in the green screen demo scenes.
3. Add the BackgroundRemovalManager- and BackgroundRemovalByGreenScreen-scripts from KinectScripts-folder as components to the object. The green screen component is implemented as a background-removal filter, similar to the other BR filters.
4. Set the ‘Green screen color’-setting of the BackgroundRemovalByGreenScreen-component. This is the color of the physical green screen. By default it is green, but you can use whatever color screen you have (even some towels, but they should not be black, gray or white). To find out the screen’s color, you could take a photo or screenshot of your setup and then use a color picker in any picture editor, to get the color’s RGB values (see also the sketch after these steps).
5. You can utilize the other settings of the BackgroundRemovalByGreenScreen-component, to adjust the foreground filtering according to your needs, after you start the scene later.
6. Set the ‘Foreground image’-setting of the BackgroundRemovalManager-component to point to any RawImage-object in the scene, to get the green screen filtered output image displayed there.
7. Alternatively, if you need a volumetric output in the scene, add a Quad-object in the scene (menu ‘3D Object / Quad’). This is the ForegroundRenderer-object in the GreenScreenDemo2-scene. Then add the ForegroundBlendRenderer-component from KinectScripts-folder as component to the object.
8. If you use the ForegroundBlendRenderer-component, set its ‘Invalid depth value’ to the distance between the camera and the green screen, in meters. This value will be used as the depth value for the so-called invalid pixels in the depth image. This provides a more consistent, high-quality foreground image.
9. If you use the ForegroundBlendRenderer-component and you need to apply the scene lighting to the generated foreground mesh, please enable the ‘Apply lighting’-setting of the ForegroundBlendRenderer-component. By default this setting is disabled.
10. Run the scene to see the result. You can then adjust the settings of the BackgroundRemovalByGreenScreen-component to match your needs as well as possible. Don’t forget to copy the changed settings back to the component, when you stop the scene.
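
In regard to step 4 above, here is a minimal sketch of how the screen color could also be sampled at runtime, directly from the color-camera texture. The component and its fields are hypothetical, and the sampled point is assumed to show only the physical screen:

using UnityEngine;
using com.rfilkov.kinect;

// sketch: estimates the green-screen color by sampling a pixel of the color-camera texture
public class GreenScreenColorPicker : MonoBehaviour
{
    public Vector2 samplePoint = new Vector2(0.5f, 0.5f);  // normalized (u, v) position to sample

    public Color PickScreenColor(int sensorIndex = 0)
    {
        KinectManager kinectManager = KinectManager.Instance;
        if(kinectManager == null || !kinectManager.IsInitialized())
            return Color.clear;

        Texture texColor = kinectManager.GetColorImageTex(sensorIndex);
        if(texColor == null)
            return Color.clear;

        // copy the color texture into a readable Texture2D
        RenderTexture rt = RenderTexture.GetTemporary(texColor.width, texColor.height);
        Graphics.Blit(texColor, rt);

        RenderTexture prevRT = RenderTexture.active;
        RenderTexture.active = rt;

        Texture2D readableTex = new Texture2D(texColor.width, texColor.height, TextureFormat.RGBA32, false);
        readableTex.ReadPixels(new Rect(0, 0, texColor.width, texColor.height), 0, 0);
        readableTex.Apply();

        RenderTexture.active = prevRT;
        RenderTexture.ReleaseTemporary(rt);

        // sample the pixel at the given normalized coordinates
        int px = (int)(samplePoint.x * (texColor.width - 1));
        int py = (int)(samplePoint.y * (texColor.height - 1));
        Color screenColor = readableTex.GetPixel(px, py);

        Destroy(readableTex);
        return screenColor;
    }
}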

How to integrate the Cubemos body tracking with the RealSense sensor interface

Please note the Cubemos Skeleton Tracking SDK is no longer available on either Cubemos’s or Intel’s websites and shops. In this regard, please use an Azure Kinect or an iPhone Pro with LiDAR sensor whenever possible, and avoid using RealSense.
To integrate the Cubemos skeleton tracking SDK with the RealSense sensor interface, please follow these steps:

1. Go to https://www.cubemos.com/skeleton-tracking-sdk and download Cubemos Skeleton Tracking SDK v3.0.
2. On the same page, click on ‘Try for free’. It will bring you to this RealSense page. Click ‘Try for free’ again (if you haven’t done it already), to get a license key.
3. Install the downloaded Cubemos Skeleton Tracking SDK. The process is pretty straightforward. After the installation completes, you need to restart the machine. This will add the ‘CUBEMOS_SKEL_SDK’-variable to the list of the environment variables, and the path to the Cubemos bin-folder (where the Cubemos native libraries reside) to the system’s path.
4. After the restart, run ‘Activate Skeleton Tracking SDK’ from the ‘Cubemos Skeleton Tracking SDK’-folder in the Windows Start menu. You’ll have to enter the license key you got earlier from Intel, in order to activate the Cubemos SDK. You don’t need to install the VS or other sample projects. Please look for error or warning messages in the console.
5. If you see a warning that ‘libmmd.dll’ or ‘cubemos_engine.dll’ cannot be found in path, please check if you have the ‘Intel C++ Redistributable’ installed on your machine. If needed, you can download and install it from here.
6. At this point you should be able to run ‘Skeleton Tracking with Intel RealSense’ from the same menu.
7. Create a new Unity project, import the K4A-asset (at least v1.15) into this project, then import the RealSenseInterface-package (if you’d like to try it out free of charge, please e-mail me).
8. Open a demo scene (e.g. the 1st or 4th avatar demo) and start it. If you don’t have any other sensor connected (e.g. Azure Kinect or Kinect-v2), the RealSense-sensor should be automatically started, when you run the scene.

Please note that after the scene starts, there will be a significant delay caused by loading the model into memory and then transferring it to the GPU. By default, the RealSenseInterface starts the Cubemos tracker in GPU mode. If you get errors like ‘cubemos Error. Can’t create skeleton tracker for RealSenseInterface0!’, it may be because the GPU is external or incompatible with Cubemos. Please look at this forum for more info in this regard.

If there are errors in GPU mode, or if you prefer to start the Cubemos tracker in CPU mode in Unity editor, please do as follows:

  • Copy “tbb.dll” from “C:\Program Files\Cubemos\SkeletonTracking\bin” to “C:\Program Files\Unity\Hub\Editor\{your-Unity-version}\Editor”. Don’t forget to close the Unity editor before that, and save (or rename) the original “tbb.dll” used by Unity.
  • Start the Unity editor again, and change the “Body tracking compute”-setting of the RealSenseInterface-component in the scene to “CM_CPU”.
  • Run a scene with body tracking to see, if Cubemos works as expected in CPU mode.

Please also note, the Cubemos-RealSense integration is still experimental and issues are possible.

How to utilize Apple iPhone-Pro or iPad-Pro as depth sensors

As of v1.15 of the K4A-asset you can use the latest Apple iPhone Pro or iPad Pro as depth sensors, because they include LiDAR (depth) sensors in their hardware configuration. If you have one of these devices and would like to use it as a depth (and color) camera in the K4A-asset, here is what to do:

1. Create a new Unity project (with Unity 2020.1.0f1 or later), and import the K4A-asset (v1.15 or later) into this project.
2. Import the ARKitInterface-package into the project (if you’d like to try it out free of charge, please e-mail me).
3. Open Unity ‘Build settings’ and select ‘iOS’ as target platform.
4. Open ‘Player settings’. Then set ‘Camera usage description’ to a description of your choice (e.g. ‘AR’), and the ‘Target minimum iOS Version’ to ‘14.0’ (or at least ‘13.0’). You can also set other Player settings, as needed, for instance the company or product name, bundle identifier, etc.
5. Open the scene you’d like to build (or add it to the ‘Scenes in Build’-list). Then press the ‘Build and Run’ (or ‘Build’) button. The build and deploy process is the same as for all iOS devices.
6. When the app starts on the device for the first time, it will ask for camera access permission. This permission is needed to get the camera (and the LiDAR sensor) running.
7. Please note, there are currently some limitations and peculiarities of the iOS devices, when used as depth sensors:

  • The depth camera subsystem and the body tracking subsystem cannot run at the same time.
  • The body tracking subsystem can track no more than one user.
  • The iOS devices are usually mobile (i.e. non-static).
  • The iOS device can be turned in portrait or landscape mode.

These limitations and peculiarities affect the demo scenes when they run on this platform. For instance:

  • When a scene uses body tracking, it can’t utilize the depth camera. Hence, some of the demo scenes (like the 1st background-removal demo, collider demos, etc.) will not run as expected on this platform.
  • In many demo scenes, when you want the device to track its physical position and rotation, set the ‘Get pose frames’-setting of KinectManager to ‘Update transform’. If you don’t want the device to track its physical position & rotation, set it to ‘None’.
  • If the device is static, you can explicitly set its position. Please contact me, if you need more info in this regard. By default the device is at position (0, 1, 0), in meters.

How to replace Azure Kinect with Femto Bolt or Mega sensors

In August 2023 Microsoft announced they would discontinue the production of Azure Kinect cameras after October 2023, but that partners such as Orbbec and Analog Devices would provide alternative solutions. Luckily, the Femto Bolt and Femto Mega cameras developed by Orbbec in partnership with Microsoft are not only alternatives, but practically a replacement for the Azure Kinect cameras. They work with Orbbec’s version of the Azure Kinect SDK, as well as with the Body Tracking SDK.

In this regard, the transition from Azure Kinect to Femto Bolt or Femto Mega cameras in “Azure Kinect Examples”-asset (K4A-asset for short) should be fairly simple and straightforward. Please follow these steps:

1. Connect the camera to the power supply and to the computer with the provided USB-C or USB-A data cable.
2a. Download, unzip and run Orbbec Viewer (v1.8.1 or later), select the connected camera and check the quality of its color, depth, IR and IMU streams. See also, if the device timestamps are updating as expected. Then close the Orbbec Viewer.
2b. If you are on Windows, please go to the ‘script’-subfolder of Orbbec Viewer’s folder, open and read ‘obsensor_metadata_win10.md’. Then follow the steps in it, to make the device timestamps go through the UVC protocol.
2c. Check if the firmware version of your device corresponds to the one listed in Orbbec’s firmware repo. If needed, upgrade the firmware via Orbbec Viewer. The instructions are provided on the respective firmware sub-page.
3. Download and unzip Orbbec SDK K4A-Wrapper (v1.8.1 or later). Run ‘k4aviewer’, open the connected device and start the cameras. Check again, if the IR, depth, color and IMU streams are visible, the timestamps are updating and there are no errors in the console. Then stop the cameras, close the device and close the K4A-Viewer.
4. Open the Unity project, where you have previously imported the “Azure Kinect Examples for Unity”-asset (or create a new one from scratch and import the K4A-asset into it).
5. Download, unzip and import the OrbbecFemtoInterface-unitypackage. It will replace the native libraries in the project, to make them work with Orbbec’s Femto Bolt and Femto Mega cameras. The package import has to be done before running any scene. Please note, after this import your project will not work with Azure Kinect cameras any more.
6. Run any of the demo scenes (or your own scene), to make sure the cameras and scene work correctly. Please note, to run the demo scenes that require body tracking you still need to have ‘Azure Kinect Body Tracking SDK’ installed into its by-default location ‘C:\Program Files\Azure Kinect Body Tracking SDK’.
7. If there are any issues in the steps above, or if there are console errors regarding any Kinect-related functionalities, please feel free to e-mail me for support.

 

174 thoughts on “Azure Kinect Tips & Tricks”


  2. Hi, I’m making good use of your source code.

    I’m working on a project that uses two Kinect-v2 sensors, but not at the same time – only one Kinect is used in a single scene.

    How do I select the desired Kinect, when two Kinects are connected?

    • Sorry, but the K2-asset does not support multiple sensor setups. In this regard, why would you use two sensors, if only one is used in a single scene?

  3. Hi Rumen F.
    I’m using Azure Kinect and followed your tips & tricks, then tried to open the second device, but Unity shows some errors: “Can’t create body tracker for Kinect4AzureInterface1!” and “AzureKinectException: result = K4A_RESULT_FAILED”. How can I fix this error for multiple devices?

  4. Hi,

    Can I access the coordinates (x, y, z) of the point cloud and its color (RGB)?

    If so, could you explain how to do that?

    • If you mean the SceneMeshDemo or UserMeshDemo, for performance reasons the point clouds there are generated by shaders. If you need script access to the mesh coordinates, please replace SceneMeshRendererGpu-component with SceneMeshRenderer (or UserMeshRendererGpu with UserMeshRenderer). They generate the meshes on CPU and you can have access to everything there.

  5. To start with: LOVE the Unity asset from the Azure Kinect examples you created.
    I have one question:
    How can I detect multiple tracked users?
    In the KinectManager it says ‘Max tracked users’ should be set to “0”,
    but it still only detects 1 person. What am I forgetting? What do I need to add to my scene?

    • Thank you! To your question: All components in the K4A-asset related to user tracking have a setting called ‘Player index’. If you need an example, please open the KinectAvatarsDemo1-scene in KinectDemos/AvatarDemo-folder. Then look at the AvatarController-component of U_CharacterBack or U_CharacterFront objects in the scene. The ‘Player index’ setting determines the tracked user – 0 means the 1st detected user, 1 – the 2nd detected user, 2 – the 3rd detected user, etc. If you change this setting, you can track different users. Lastly, look at KinectAvatarsDemo4-scene and its UserAvatarMatcher-component. This component creates and destroys avatars in the scene automatically, according to the currently detected users.

      The ‘Max tracked users’-setting of KM determines the maximum number of users that may be tracked at the same time. For instance, if there are many people in the room, the users that need to be really tracked could be limited by distance or by the max number of users (e.g. if your scene needs only one or two users).

      • How can I regulate the maximum distance? In the KinectManager-script I tried to set maxUserDistance to 2.0f, but it doesn’t work.

      • As answered a few days ago, ‘MaxUserDistance = 2’ would mean that users whose distance to the sensor is more than 2 meters will not be detected (or will be lost).

  6. How do I use the background removal to display the head only?
    I have an idea that I could use an image mask on the raw image,
    but the overlay is in world-space position.
    Can I get the screen position?
    Or is there a simpler method to solve it?

    • Please open BackgroundRemovalManager.cs in the KinectScripts-folder and find ‘kinectManager.GetUserBoundingBox’. Comment out the invocation of this method, and replace it with the following code:

      bool bSuccess = kinectManager.IsJointTracked(userId, KinectInterop.JointType.Head);
      Vector3 pMin = kinectManager.GetJointPosition(userId, KinectInterop.JointType.Head);
      Vector3 pMax = pMin;

      Then adjust the offsets around the head joint for each axis (x, y, z) in the lines below, where posMin, posMaxX, posMaxY, posMaxZ are set. The shader should filter out all points that are not within the specified cuboid coordinates.

  7. Is there a way to make it so that it does NOT hide the joints it’s uncertain of? I believe that used to be the behavior, but now the skeleton keeps flickering.

    • There is a setting of the KinectManager-component in the scene, called ‘Ignore inferred joints’. You can use it to determine whether the inferred joints should be considered as tracked or not.

      At the same time, I’m not quite sure what you mean by “skeleton keeps flickering”. May I ask you to e-mail me and share some more details on how to reproduce the issue you are having.

    • Hi Ruben. Yes, I have seen the floor detection sample and find it quite useful. It could replace the current way of detecting sensor rotation in the K4A-asset, or even enhance it for detecting the height above the ground, as well. It’s on my to-do list. But I’m still thinking how to implement it, in order not to lose performance, while processing point clouds on each IMU frame. Please feel free to e-mail me, if you have done something in this regard, or if you have fresh ideas.

      • Sorry to bother you again Rumen, I’ve sent you an email, but I got no response, so I guess it ended in the spam folder. Can you check it?
        Thanks,
        Ruben

  8. Hi! Does anybody know how to set up the Azure Kinect in portrait orientation (physically the sensor), and how to then set up the project to use it in a 1080×1920 Unity project? I’m not able to do it following the information nor the posts about this… Thanks and sorry!

    • Hi, when you turn the sensor sideways, you would need to turn your monitor sideways, too. Don’t forget to set the sensor interface’s transform rotation in the scene, too. See this tip: https://rfilkov.com/2019/08/26/azure-kinect-tips-tricks/#t9 And, if you are using body tracking, please open DepthSensorBase.cs in KinectScripts/Interfaces-folder, look for ‘K4ABT_SENSOR_ORIENTATION_DEFAULT’ and change it to ‘K4ABT_SENSOR_ORIENTATION_CLOCKWISE90’ (or ‘K4ABT_SENSOR_ORIENTATION_COUNTERCLOCKWISE90’).

      • Hi Rumen F. Reaalllyyyy thanks! Just one more and last question… When turning my monitor sideways, must I leave the Windows 10 display settings in “Landscape” orientation, or must I also configure the display as “Portrait”? It seems to be the first option…?

      • I think everything should be left as it was, just turned sideways. This way you will get the full display resolution. But feel free to experiment a bit, just to find out what fits the best.

      • Hi! I’m not able to solve it using the Fitting Room 1 sample. I’ve done all the changes: sensor transform, device configuration in the interface,… but no luck… With the CLOCKWISE setting, it’s still working well in “NORMAL” orientation of the device!!! Even with the setting:

        bodyTracker = new BodyTracking(calibration, k4abt_sensor_orientation_t.K4ABT_SENSOR_ORIENTATION_CLOCKWISE90, k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU, 0);

        It’s strange!!!

      • Ok… Seems to be something related to the Fitting Room 1 sample… The Fitting Room 2 sample works perfectly… And the other samples also work fine with the device orientation clockwise and counterclockwise… Strangely, the Fitting Room 1 sample does not! Any clue???

      • This portrait mode changes the world axes, as well. Up becomes left, down becomes right, left becomes down, etc. This makes the gesture and pose recognition not work correctly. I think this is the problem with FR1-demo. In this regard, please find the KinectManager-component in the scene, change its ‘Player calibration pose’-setting from ‘Tpose’ to ‘None’ and then try again.

  9. Hi,
    I set up two UserImage- and KinectCamera-objects, and set player index 0 and 1. I also set ‘Max tracked users’ to 2 in the KinectManager.
    My problem is that every user image shows player 1’s image.
    How can I detect multiple users correctly in the background removal?

    • Please set the ‘Player index’-setting of the BackgroundRemovalManager-component to -1, to get both users on the image. Also, please use only one UserImage, controlled by one of the users, because the ForegroundToRenderer-component expects only one BackgroundRemovalManager in the scene. For two separate user images, you would need to modify ForegroundToRenderer.cs a bit and have two BR-managers in the scene, for both user 0 and 1.

      • I changed backManager to public in ForegroundToRenderer and connected each user image to its own BackgroundRemovalManager (and also changed the related player index), but both images show nothing. What did I do wrong?

      • I suppose you haven’t done anything wrong. There is probably an issue in the BackgroundRemovalManager shaders in this case. I need to take a look.

    • Thank you! Yes, I managed to reproduce the issue. The fix will be available in the next release. If you are in a hurry, feel free to e-mail me, to get the updated package a.s.a.p. Please don’t forget to mention your invoice number in the e-mail, as well.

  10. Hello Rumen,
    I have a problem: each time I try to launch one of the examples, Unity crashes. Sorry if someone already reported the same issue, but I didn’t find my answer ^^’

    Here is a part of my error.log file :

    KERNELBASE.dll caused an Unknown exception type (0xc06d007f)
    in module KERNELBASE.dll at 0033:47d4a799.

    Error occurred at 2020-05-15_163952.
    C:\Program Files\Unity\Hub\Editor\2019.3.13f1\Editor\Unity.exe
    […]
    Stack Trace of Crashed Thread 15464:
    0x00007FFD47D4A799 (KERNELBASE) RaiseException
    0x00007FFCFDF2AFEB (k4abt) k4abt_tracker_destroy
    [..]
    0x00000245B2A75405 (Microsoft.Azure.Kinect.Sensor) Microsoft.Azure.Kinect.Sensor.NativeMethods.k4abt_tracker_create()
    0x00000245B2A730A3 (Microsoft.Azure.Kinect.Sensor) Microsoft.Azure.Kinect.Sensor.BodyTracking..ctor()
    0x00000245B2A3D483 (Assembly-CSharp) com.rfilkov.kinect.DepthSensorBase.InitBodyTracking()
    0x00000245B29CA161 (Assembly-CSharp) com.rfilkov.kinect.Kinect4AzureInterface.OpenSensor()
    0x00000245B29C0CF6 (Assembly-CSharp) com.rfilkov.kinect.KinectManager.StartDepthSensors()
    0x00000245B29B5AF3 (Assembly-CSharp) com.rfilkov.kinect.KinectManager.Awake()
    0x00000245AC86F4D8 (mscorlib) System.Object.runtime_invoke_void__this__()
    0x00007FFCD9ECCBA0 (mono-2.0-bdwgc) mono_get_runtime_build_info
    0x00007FFCD9E52112 (mono-2.0-bdwgc) mono_perfcounters_init
    0x00007FFCD9E5B10F (mono-2.0-bdwgc) mono_runtime_invoke

    Thank you very much for your help !

    • Hi Adrien, As far as I see, the body tracking crashes at scene start. I assume you’re using the latest release of the K4A-asset. In this regard, please check if:
      1. You have installed Body Tracking SDK v1.0.1.
      2. The Body Tracking SDK is installed into its by-default folder ‘C:\Program Files\Azure Kinect Body Tracking SDK’.
      3. Azure Kinect Body Tracking Viewer works as expected.

      If the problem persists, please e-mail me and attach the Editor’s log-file after the crash has occurred. Here is where to find the Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html

      • Thank you very much for your help and the time you spent helping me. I really appreciate it and am very thankful for your help and your devotion!

      • I’m having the same issue. Is there something I can implement to avoid this crash? Let me know if you need to see my log file as well.

      • Please make sure you’re using the latest version of the K4A-asset (currently v1.12.1). Please also look at my comment above and check if everything is OK. Then try again, and if the crash persists, please e-mail me your Editor’s log file, so I can take a look. Here is where to find the Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html

  11. Hello Rumen F.

    I have a question about MultiCameraSetup.scene.

    Two Azure Kinects were recognized,
    but the calibration remains at 0%, even when one user is captured.

    Do you know why?

    Thank you

    • Hi, only one user should be visible to both cameras, and should move a bit, as well. I don’t think there is any other significant limitation.

      • Hello Rumen F.

        When I tried again on another PC,
        it worked fine and I was able to confirm the operation.

        Perhaps the CPU was too weak.

        Thank you very much.

  12. I noticed a bunch of really large dlls are copied over to the root folder after playing a scene. Can I get a list of what to put in my gitignore for my private source control? Thank you!

  13. Hey Rumen,
    I need to record high quality rgb streams from 4 kinects. Is it possible to control white balance with your wrappers? Thank you!

  14. Hello Rumen ! Excellent Project !
    I am trying to align the Avatar body tracked by Azure DK with Oculus VR Headset in the Scene. What part of the AvatarController can i tap into to accomplish that?
    Thank You !

    • Hi, I think you need to enable the ‘External root motion’-setting of the AvatarController-component, and add the HmdHeadMover from the KinectScripts-folder as component to the humanoid model in the scene. The head mover should follow the headset’s position and the avatar controller will control the body joint orientations. That should be enough.

  15. Hello, I use the background removal and set the Azure Kinect to 4096 x 3072 resolution,
    but the screen looks too blurry for that resolution (I have already set ‘Apply Blur Filter’ to false).
    The background removal uses a lot of internal shaders and they are difficult for me to read.
    Can I modify the background removal script to output a higher resolution?

    • Hi, the background removal scenes (and all other scenes) use the color camera resolution you set. Please note, the maximum FPS of 4096×3072 is 15 instead of 30. This wouldn’t explain the unexpected blurriness though. What depth mode do you use in combination with the color-camera resolution?

      Also, may I ask you to e-mail me a short video clip (or some screenshots) of the Kinect4AzureInterface-settings and what you get on screen after running the scene. It may help me understand better your issue.

  16. Hello Rumen F.

    I have some questions about MultiCameraSetup.scene.

    4 Azure Kinects were recognized. With “Standalone” the calibration can reach 100%, but this calibration is wrong, right?
    With 1 ‘Master’ and 3 ‘Subordinate’ sensors, the calibration remains at 0%.

  17. Hello
    Maybe someone needs to move in the space during calibration? Maybe I did it the wrong way. Although the 4 devices are set to Standalone and are calibrated, when the result is applied to the CG model the real-time mocap is very laggy. The PC used has a high-end configuration, so it should not be a hardware problem. What is the correct way to do this?

    • Hi, sorry for the delay, but (as I said) I don’t work at weekends.

      Now to your issues:
      1. Yes, there should be one (and only one) person moving in the intersection area of all cameras, in order for the calibration process to complete.
      2. I’m not sure about the configuration of your cameras. If they are all standalone, they should be set as Standalone in the scene. If they are Master/Subs, they should be set that way in the scene too. The configuration should match the reality.
      3. Please look at this tip regarding the multi-camera calibration in the K4A-asset: https://rfilkov.com/2019/08/26/azure-kinect-tips-tricks/#t10
      4. If you still have issues calibrating the cameras, please e-mail me with some more details about your issues, and we can continue the discussion there.

  18. With one Azure Kinect, action recognition works. There is some shaking, but to an acceptable degree, similar to Kinect-v2. With 4 Azure Kinects, however, the hands shake very badly. What methods can be used to improve the hand tracking? In the depth image the hands are also often not recognized.

    • Yes, this is the main goal, although the multi-camera tracking currently has some issues when tracking specific joints, like the hands for instance.

  19. Hello Rumen. To echo what others have said, thank you so much for your work on this invaluable asset for using the Kinect Azure with Unity.

    If I want to calibrate the alignment of the Kinect Azure with a projector, particularly for the blob tracking demo, should I follow the procedures outlined in the Kinect-v2 asset's projector demo? I'm not sure if the RoomAlive Toolkit is still the best method, or even compatible with the new Kinect and Unity 2019/2020. Hoping there is a better method than manually tweaking the Kinect transform values at runtime. Thanks in advance for any help or pointers you can give.

    Cheers,
    Tom

    • Hello Tom. Yes, to align the sensor with a projector you’d need to follow the same procedures. Unfortunately, the ‘RoomAlive Toolkit’ project still supports the Kinect-v2 sensor only. As far as I remember, Prof. Andy Wilson intended some time ago to extend the project and make it work with the Azure Kinect and RealSense sensors, but I’m not sure about the progress so far.

  20. Hello Rumen, with K4A-asset v1.14 the calibration of multiple sensors is indeed a lot more accurate. Using two devices now works better. The most delicate joints, like the hands, still deviate and get misaligned in some directions, but it is a great improvement. One new problem though: hand gesture recognition stops working after enabling 'Use Multi Cam Config'.

    • Hello wangpeng, thank you for the feedback! I’ll answer your e-mail. I need to finish one more urgent task and check something first. As you know, the hands are more prone to misalignment between sensors, and probably this affects the gesture detection.

  21. Hello,

    I am currently making a Unity scene in which I want to insert multiple .mkv recordings and rearrange them in different places. I added SceneMeshS0 into my scene and the 1st video is showing up fine, but I tried to duplicate it for a 2nd video, and only one video shows. Also, whenever I change transform values, the video no longer plays.

    Any suggestions much appreciated.

    Thank you!

    • Do the multiple recordings mean you need multiple sensor interfaces, or do you use only one sensor? If you have multiple sensor interfaces, each one utilizing a different MKV-file, you should have a separate scene-mesh for each sensor. This means the components (SceneMeshRendererGpu & FollowSensorTransform) of each SceneMeshSx should point to the respective sensor (0, 1, 2, etc.).

      • Using only one sensor, but playing multiple recordings within the scene.
        For a little more background, I am making a virtual music experience, in which I will be inserting musicians at different positions within the Unity scene. The plan is to record each musician separately and insert the individual recordings into the scene.

  22. Hello guys!

    Great plugin, well done!

    I would like to get the full skeleton coordinates, so I can map them to the positions of some meshes.
    I’m using the kinectManager.GetJointPosition() method, going through all the joint types, but it looks like I only get the top-left part of the joints.
    I definitely get the updated positions of the spine and the left arm, but nothing from the legs or the right side.

    Any idea how to solve this? I’m using the Azure Kinect with a recorded .mkv as the streaming mode.

    Thank you for your help!

    • Hi Mat, everything looks OK. The joints are too close to the camera and that makes the skeleton only partially visible. For more details look at my e-mail.

  23. Hello guys!
    When will the official Azure Kinect driver get a major update? At present it cannot solve the problem of matching body data across multiple devices, and the motion jitter is also obvious, which does not meet commercial development standards. Many projects need to replace this equipment for development.

    • Hi wangpeng, one more general improvement in multi-camera setup and tracking is coming with the next K4A-asset update. Please contact me, if you’d like to try it out earlier. Unfortunately I can’t fix the issues in the Azure Kinect Body Tracking SDK.

      • Hello Rumen, I’m really looking forward to the asset update. But if the motion jitter of the official body tracking SDK cannot be reduced, the currently provided bone smoothing is not very useful. I expect this device will mainly be used for point-cloud volumetric video, but even there the clarity is not up to movie level.

  24. Great plugin, great asset!
    The Azure Kinect SDK is poorly done and its algorithm is terrible. We used a Titan RTX, but it didn’t help. The SDK is just too bad. I hope the official SDK improves as soon as possible.

  25. Hi,
    I need to detect the shapes of objects by depth, and my Kinect is mounted on the ceiling, facing the floor.

    So, I’d like to clip the floor out of the depth image.

    But in practice the Kinect’s angle is not perfectly perpendicular to the floor, so the floor depth values are not uniform across the depth image.

    Do you have any idea how to clip the depth while taking the angle into account?

    • In this case, if the camera angle is fixed and you know the height of the ceiling, I suppose it’s a matter of applying some trigonometry in the DetectBlobsInRawDepth()-method of the BlobDetector-script component (instead of using minDistanceMm & maxDistanceMm), to clip the floor.
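      To illustrate the idea, here is a minimal sketch (not the asset's actual code) of angle-aware floor clipping, assuming you can unproject each depth pixel into a 3D point in camera space (in millimeters). The tilt angle and floor distance are hypothetical parameters you would measure for your ceiling-mounted setup, and the axis convention may need adjusting.

      using UnityEngine;

      public static class FloorClipUtil
      {
          // Returns true, if the given camera-space point should be treated as floor and clipped.
          // tiltAngleDeg    - deviation of the camera's view direction from straight down (around the X axis)
          // floorDistanceMm - vertical distance from the camera to the floor (roughly the ceiling height)
          public static bool IsFloorPoint(Vector3 camSpacePointMm, float tiltAngleDeg,
                                          float floorDistanceMm, float toleranceMm = 50f)
          {
              // The 'straight down' direction, expressed in camera coordinates:
              // the camera's forward axis, rotated back by the mounting tilt.
              Vector3 downDirCam = Quaternion.Euler(-tiltAngleDeg, 0f, 0f) * Vector3.forward;

              // Vertical distance of the point below the camera.
              float verticalDistMm = Vector3.Dot(camSpacePointMm, downDirCam);

              // Points at (or beyond) the floor distance, within a tolerance, belong to the floor.
              return verticalDistMm >= floorDistanceMm - toleranceMm;
          }
      }

      You would then call something like IsFloorPoint() for each unprojected depth pixel inside DetectBlobsInRawDepth() and skip the clipped pixels, instead of comparing the raw depth values against the fixed minDistanceMm & maxDistanceMm thresholds.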

  26. Hi Rumen,
    Thanks for your help. This asset is incredibly helpful. However, I encountered an issue: the background removal function often cuts off the hair, which makes the result less acceptable to me. Background removal is a heavily used function in our case. It would be great if there were a solution to this.

    Best,
    Danny

    • Hi Danny, black hair (as well as black clothing) has less reflectivity. A workaround may be to put a light source (a light bulb, for instance) directly above and in front of the place where the user will stand.

  27. Hi! Your asset is very helpful. However, I’m having a hard time getting it to work with the Azure Kinect in portrait mode. I set the orientation to the correct mode, but I’m not sure the plugin is actually doing anything with that. The GetPrimaryBodySensorOrientationAngle() method is not called from anywhere, except for two places, both commented out, so not used. Is there anything I’m missing?

    • Hi. The idea is, when you turn the sensor sideways (and set the transform orientation & the ‘Body tracking sensor orientation’-setting of the AzureKinectInterface-component accordingly), to turn the monitor sideways too, to make use of the full sensor resolution. If you don’t turn the sensor sideways, but want to use the monitor in portrait mode, just set the output resolution (or aspect ratio) accordingly. In this case, only part of the sensor resolution will be visible in the portrait mode.

      • I can get the full resolution color stream, that’s not really an issue. Sorry if I wasn’t completely clear. Basically what I’d like is for an avatar to follow the user on the live camera feed in portrait mode.

        So what I do is:
        1. I set the orientation correctly in the AzureKinectInterface component
        2. I set the color stream correctly, so I see the color in the right resolution and orientation (I have to rotate it 90 degrees)

        What I expect:
        – The avatar keeps the distance and direction relative to the user

        What I see:
        – The avatar’s directions are rotated: if the user squats, the avatar slides to the left, if the user steps to the left, the avatar slides upwards.

        If I disable “Pos Rel Overlay Color” on the Avatar Controller, then the direction is fine, but positioning the avatar correctly is a challenge.

        So it seems like the relative positioning is messing things up as it doesn’t account for the rotation.

        Just want to make sure I’m doing things correctly or not missing something before diving deep into code modifications.

      • Hi again, sorry for the delayed response. Please e-mail me and send me a short video of your setup and of what you see. Check if the ‘Grounded feet’-setting of the AvatarController-component is disabled. Please also make sure you have imported the latest version of the K4A-asset (v1.15), so we can be on the same page.

  28. Hi everybody, I am working with version 1.15.0 of the Azure Kinect Examples for Unity, and everything works fine in the Game window, but when I build the project into an .exe file, it does not work.

    I have not had this happen before.

    It can’t seem to find any connected devices or SDK. Does anybody know what could be going on? Thank you so much

  29. Hi! I’m looking for a way of changing the exposure control on the Azure Kinect. I know it’s possible with the SDK, but I haven’t found a way of doing it using the asset. Thanks in advance.

  30. I want to create a floor in the KinectOverlayDemo2 scene that matches the position of the user’s foot joints.
    I want to put a ball on that plane, so the user can kick it.
    The Azure Kinect camera is angled downwards, from top to bottom.
    How can I create such a floor?

    • I’m not sure what exactly you mean by “Azure cameras are angled from top to bottom”. Could you please e-mail me and provide some more details and, if possible, some pictures of your setup? This may help me better understand your issue, so I can give you better advice.

  31. Hi, is the RealSenseWithCubemosSDK-package something you can provide, or is it available elsewhere? Is there some other channel I should use to contact you to request a supplemental package? (I assume this is different from the skeleton-tracking package in the wrappers directory of the Cubemos SDK distribution.) I recently purchased your Kinect Examples package, and I’d be interested in seeing if I could finally get the Cubemos SDK working in Unity with my RealSense.

    Thanks!

    • Hi, just e-mail your request to me and I’ll send you the requested package free of charge.
      FYI: I plan to provide the supplemental packages in the shopping section of this website later on.

  32. Hello, Rumen.

    How do I get the depth texture in grayscale rather than blue and yellow?

    Is the only way to calculate it from the raw depth data?
    I think the blue-and-yellow texture is affected too much by the environment.

    • Hi, please open ‘DepthHistImageShader.shader’ in the ‘AzureKinectExamples/Resources’-folder and change the values of _FrontColor and _BackColor according to your requirements.
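      If you'd rather not edit the shader file itself, a hedged alternative is to override the same properties from a script, on the material that uses this shader. The property names _FrontColor and _BackColor come from the reply above; the material reference (depthImageMaterial) is a hypothetical field you would assign in the Inspector, and which property maps to near vs. far depths is an assumption to verify.

      using UnityEngine;

      public class GrayscaleDepthColors : MonoBehaviour
      {
          public Material depthImageMaterial;  // the material using DepthHistImageShader

          void Start()
          {
              if (depthImageMaterial != null)
              {
                  // Grayscale gradient, assuming _FrontColor maps to near depths and _BackColor to far ones.
                  depthImageMaterial.SetColor("_FrontColor", Color.white);
                  depthImageMaterial.SetColor("_BackColor", Color.black);
              }
          }
      }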

  33. Hi Rumen!

    Thanks for making this tool, it looks amazing when projected to VR 🙂

    I am encountering some issues when playing a recorded file with Kinect-v2. I play the .xef file from Kinect Studio while running the scene in Unity, but the sensor just stops and it won’t play the file. I’m using the VFXPointCloud scene in Unity 2020.

    Any idea what the issue could be? Or how I could give you more information?

    • Hi Leo, I’m not sure what you mean by “the sensor just stops and won’t play the file”. May I ask you to e-mail me and send me a video of what you see when playing the recording, while running the point cloud scene.

  34. Hi Rumen

    Using this asset, I’d like to connect multiple Intel RealSense D400-series sensors to one PC.

    I want to move avatars with four people. Is this possible?

    Thank you

  35. Hi Rumen
    Thanks for the reply!
    I’m glad to know that you can connect multiple RealSense-sensors to a single PC.

    What I want to do is not to track 4 different people and control one avatar, but to move 4 avatars with 4 different people.

    I’ll look into Cubemos SDK and RealSense-Interface v2.

    Thank you.

    (This message was translated with DeepL.)

  36. Hi, I am using two Kinects (an Azure and a V2). Is there a way to get the joint position of a hand, for example, from both sensors (merged is preferred, but separate is also fine)?

      • Thanks for your response. I was able to get hold of another Azure, so now I am using two Azures instead. Does GetBodyCount() return the number of bodies in view across all sensors, while taking into account the body merging? Sometimes GetBodyCount() randomly returns 2, even though I am the only person testing at the time.

        What about GetJointPosition()? Should it return the position of the merged joints? For that one I am seeing (0,0,0), even when my body is in view of at least one of the sensors, according to GetSensorJointPosition().

      • The body merging takes place automatically, when you have more than one sensor connected.
        In this case GetBodyCount() should return the count of the merged bodies, and GetJointPosition() should return the joint position of the merged body joint. You can use the KinectAvatarsDemo4-scene to see how many merged bodies get detected, and whether the respective body joints are detected correctly or not. Don’t forget to enable the ‘Use multi-cam config’-setting of the KinectManager-component in the scene, too.
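        As a quick way to check this from code, here is a minimal sketch that polls the merged body data. The method names (GetBodyCount, GetJointPosition) are the ones discussed in this thread; the namespace, GetUserIdByIndex and the exact parameter types are assumptions that may differ slightly between K4A-asset versions.

        using UnityEngine;
        using com.rfilkov.kinect;  // assumed K4A-asset namespace; verify in your imported package

        public class MergedBodyLogger : MonoBehaviour
        {
            void Update()
            {
                KinectManager km = KinectManager.Instance;
                if (km == null || !km.IsInitialized())
                    return;

                // With 'Use multi-cam config' enabled, this should be the count of merged bodies.
                int bodyCount = km.GetBodyCount();
                for (int i = 0; i < bodyCount; i++)
                {
                    ulong userId = km.GetUserIdByIndex(i);
                    Vector3 handPos = km.GetJointPosition(userId, KinectInterop.JointType.HandRight);
                    Debug.Log("Body " + i + " (id " + userId + "): right hand at " + handPos);
                }
            }
        }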

  37. Hi Rumen,
    I love this project. I have learned so much from your code, thank you for sharing!
    I’m currently trying to run the Azure body tracking SDK on an aligned RGB-D stream from a HoloLens 2. I’m streaming the RGB-D frames to a server, but I’m not sure how to feed them into the body tracking SDK, in order to get and stream back the joint transforms for animating a 3D avatar. I was thinking the Kinect2Interface might be a good starting point to get some inspiration, but I thought it would be a good idea to reach out to you as well, in case you have any recommendations how to best get this to work.
    Thank you so much for taking the time!!
    Best,
    Marc

    • Hi Marc, if you’re using the native Azure Kinect SDK / Azure Kinect Body Tracking SDK, you should create a new Capture-object and new Image-objects for the depth, IR and optionally color frames (according to the Azure Kinect specification for each frame), put the images into the capture, and then call bodyTracker.EnqueueCapture() to enqueue the capture for body tracking. The depth and IR images are mandatory. The color image is not, but you might need it later, when you process the body tracking results.
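      For orientation only, here is a hedged sketch of those steps with the Microsoft.Azure.Kinect C# wrappers. The frame sizes, the source of the calibration and the networking part are assumptions you must adapt to your HoloLens 2 stream, and the exact wrapper signatures should be verified against the SDK version you use.

      using System;
      using Microsoft.Azure.Kinect.Sensor;
      using Microsoft.Azure.Kinect.BodyTracking;

      public class ExternalCaptureBodyTracking : IDisposable
      {
          private readonly Tracker tracker;

          public ExternalCaptureBodyTracking(Calibration calibration)
          {
              // The calibration must describe the depth camera that produced the frames,
              // e.g. obtained once from device.GetCalibration() or Calibration.GetFromRaw().
              tracker = Tracker.Create(calibration, TrackerConfiguration.Default);
          }

          // depth16Bytes / ir16Bytes: one received frame each, 16 bits per pixel.
          public Skeleton? ProcessFrame(byte[] depth16Bytes, byte[] ir16Bytes, int width, int height)
          {
              using (Capture capture = new Capture())
              using (Image depthImage = new Image(ImageFormat.Depth16, width, height, width * 2))
              using (Image irImage = new Image(ImageFormat.IR16, width, height, width * 2))
              {
                  // Copy the received pixel data into the SDK images.
                  depth16Bytes.CopyTo(depthImage.Memory);
                  ir16Bytes.CopyTo(irImage.Memory);

                  // Depth and IR are mandatory for body tracking; color is optional.
                  capture.Depth = depthImage;
                  capture.IR = irImage;

                  tracker.EnqueueCapture(capture);

                  using (Frame frame = tracker.PopResult())
                  {
                      if (frame.NumberOfBodies == 0)
                          return null;

                      // The joints of this skeleton can be serialized and streamed back
                      // to the HoloLens client, to animate the avatar there.
                      return frame.GetBodySkeleton(0);
                  }
              }
          }

          public void Dispose()
          {
              tracker?.Dispose();
          }
      }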
