Azure Kinect Tips & Tricks

Well, I think it’s time to share some tips, tricks and examples regarding the K4A-asset, too. Although similar to the K2-asset, this package has some features that differ significantly from the previous one: it supports multiple sensors in one scene, as well as different types of depth sensors.

The configuration and detection of the available sensors are less automatic than before, but this allows more complex multi-sensor configurations. In this regard, check the first 4-5 tips in the list below. Please also consult the online documentation, if you need more information regarding the available demo scenes and the depth-sensor-related components.

For more tips and tricks, please look at the Kinect-v2 Tips, Tricks & Examples. Those tips were written for the K2-asset, but for backward-compatibility reasons many of the demo scenes, components and APIs mentioned there are the same or very similar in the K4A-asset.

Table of Contents:

How to reuse the K4A-asset functionality in your Unity project
How to update your existing K2-project to the K4A-asset
How to set up the KinectManager and Azure Kinect interface in the scene
How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes
How to use RealSense sensor instead of Azure Kinect in the demo scenes
How to set up multiple Azure Kinect (or other) sensors in the scene
How to remove the SDKs and sensor interfaces you don’t need
Why is the K2-asset still around
How to play a recording instead of utilizing live data from a connected sensor
How to set up sensor’s position and rotation in the scene
How to calibrate and set up multiple cameras in the scene
How to make the point cloud demos work with multiple cameras
How to send the sensor streams across the network
How to get the user or scene mesh working on Oculus Quest
How to get the color-camera texture in your code
How to get the depth-frame data or texture in your code
How to get the position of a body joint in your code
How to get the orientation of a body joint in your code
What is the file-format used by the body recordings
How to utilize green screen for better volumetric scenes or videos
How to integrate the Cubemos body tracking with the RealSense sensor interface
How to utilize Apple iPhone-Pro or iPad-Pro as depth sensors
How to replace Azure Kinect with Femto Bolt or Mega sensors

How to reuse the K4A-asset functionality in your Unity project

Here is how to reuse the K4A-asset scripts and components in your Unity project:

1. Copy folder ‘KinectScripts’ from the AzureKinectExamples-folder of this package to your project. This folder contains all needed components, scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the AzureKinectExamples-folder of this package to your project. This folder contains some needed libraries and resources.
3. Copy the sensor-SDK specific sub-folders (Kinect4AzureSDK, KinectSDK2.0 & RealSenseSDK2.0) from the AzureKinectExamples/SDK-folder of this package to your project. These folders contain the plugins and wrapper classes for the respective sensor types.
4. Wait until Unity detects, imports and compiles the newly copied resources and scripts.
5. Please do not share the KinectDemos-folder in source form or as part of public repositories.

How to update your existing K2-project to the K4A-asset

If you have an existing project utilizing the K2-asset and would like to update it to use the K4A-asset, please look at the following steps. Please also note that not all K2-asset functions are currently supported by the K4A-asset. For instance, the face-tracking components and API, as well as speech recognition, are not currently supported. Look at this tip for more info.

1. First off, don’t forget to make a backup of the existing project, or copy it to a new folder. Then open it in Unity editor.
2. Remove the K2Examples-folder from the Unity project.
3. Import the K4A-asset from Unity asset store, or from the provided Unity package.
4. After the import finishes, check the console for error messages. If there are any and they are easy to fix, just go ahead and fix them. An easy fix, for instance, is to add ‘using com.rfilkov.kinect;’ or ‘using com.rfilkov.components;’ to the scripts that use the K2-asset API, as missing namespaces are a common cause of compile errors (see the snippet after this list).
5. If the error fixes are more complicated, you can contact me for support, as long as you are a legitimate customer. I tried to keep all components as close as possible to the same components in the K2-asset, but there may be some slight differences. For instance, the K2-asset supports only one sensor, while the K4A-asset can support many sensors at once. That’s why a sensor-index parameter may be needed by some API calls, as well.
6. Select the KinectController-game object in the scene and make sure the KinectManager’s settings are similar to the KinectManager settings in the K2-asset and correct for that scene. Some of the KM settings in the K4A-asset are a bit different now, because of the larger amount of provided functionality.
7. Go through the other Kinect-related components in the scene, and make sure their settings are correct, too.
8. Run the scene to try it out. Compare it with the scene running in the K2-asset. If the output is not similar enough, look again at the component settings and at the custom API calls.
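As mentioned in step 4, the K4A-asset scripts and components reside in the ‘com.rfilkov.kinect’ and ‘com.rfilkov.components’ namespaces. Here is a minimal sketch of how a migrated script would typically start (the class and its contents are just an example, not part of the asset):

using UnityEngine;
using com.rfilkov.kinect;       // KinectManager, KinectInterop, sensor interfaces
using com.rfilkov.components;   // scene components

public class MyMigratedScript : MonoBehaviour
{
    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;
        if (kinectManager && kinectManager.IsInitialized())
        {
            // your K2-era code, updated to the K4A-asset API, goes here
        }
    }
}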

How to set up the KinectManager and Azure Kinect interface in the scene

Please see the hierarchy of objects in any of the demo scenes, as a practical implementation of this tip:

1. Create an empty KinectController-game object in the scene. Set its position to (0, 0, 0), rotation to (0, 0, 0) and scale to (1, 1, 1).
2. Add the KinectManager-script from the AzureKinectExamples/KinectScripts-folder as a component to the KinectController game object.
3. Select the frame types you need to get from the sensor – depth, color, IR, pose and/or body. Enable synchronization between frames, as needed. Check the user-detection settings and change them if needed, as well as the on-screen info you’d like to get in the scene.
4. Create Kinect4Azure-game object as child object of KinectController in Hierarchy. Set its position and rotation to match the Azure-Kinect sensor position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like.
5. Add Kinect4AzureInterface-script from AzureKinectExamples/KinectScripts/Interfaces-folder to the newly created Kinect4Azure-game object.
6. Change the default settings of the component, if needed. For instance, you can select a different color camera mode, depth camera mode or device sync mode, or adjust the min & max distances used for creating the depth-related images.
7. If you’d like to replay a previously saved recording file, select ‘Device streaming mode’ = ‘Play recording’ and set the full path to the recording file in the ‘Recording file’-setting.
8. Run the scene to check that everything works as expected. For a quick runtime check from code, see the sketch after this list.
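Here is that sketch – a minimal script (my own example, not part of the asset) you could add to any object in the scene, to log when the KinectManager and the configured sensor interface have started successfully. It only uses the KinectManager API shown elsewhere in this article:

using UnityEngine;
using com.rfilkov.kinect;

public class SensorStartupCheck : MonoBehaviour
{
    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;

        if (kinectManager && kinectManager.IsInitialized())
        {
            // the KinectManager and the sensor interface(s) started successfully
            Debug.Log("KinectManager is initialized and the sensor streams are running.");
            enabled = false;  // log it only once
        }
    }
}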

How to use Kinect-v2 sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the KinectV2-child object.
5. Set the ‘Device streaming mode’ of its Kinect2Interface-component to ‘Connected sensor’.
6. If you’d like to replay a previously saved recording file, you should play it in ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0).
7. Run the scene, to check if the Kinect-v2 sensor interface is used instead of the Azure Kinect interface.

How to use RealSense sensor instead of Azure Kinect in the demo scenes

1. Unfold the KinectController-object in the scene.
2. Select the Kinect4Azure-child object.
3. (Optional) Set the ‘Device streaming mode’ of its Kinect4AzureInterface-component to ‘Disabled’.
4. Select the RealSense-child object.
5. Set the ‘Device streaming mode’ of its RealSenseInterface-component to ‘Connected sensor’.
6. If you’d like to replay a previously saved recording file, select ‘Device streaming mode’ = ‘Play recording’ and set the full path to the recording file in the ‘Recording file’-setting.
7. Run the scene, to check if the RealSense sensor interface is used instead of the Azure Kinect interface.

How to set up multiple Azure Kinect (or other) sensors in the scene

Here is how to set up a 2nd (as well as 3rd, 4th, etc.) Azure Kinect camera interface in the scene:

1. Unfold the KinectController-object in the scene.
2. Duplicate the Kinect4Azure-child object.
3. Set the ‘Device index’ of the new object to 1 instead of 0. Other connected sensors should have device indices of 2, 3, etc. These device indices are also used to access per-sensor data in code (see the sketch after this list).
4. Change ‘Device sync mode’ of the connected cameras, as needed. One sensor should be ‘Master’ and the others – ‘Subordinate’, instead of ‘Standalone’.
5. Set the position and rotation of the new object to match the sensor’s position & rotation in the world. For a start, you can set only the position in meters, then estimate the sensor rotation from the pose frames later, if you like.
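Once the sensors are running, the KinectManager methods that take a sensor-index parameter return per-sensor data, using the same device indices as above. A minimal sketch (assuming the ‘Get color frames’-setting of the KinectManager is set to ‘Color texture’):

KinectManager kinectManager = KinectManager.Instance;
if (kinectManager && kinectManager.IsInitialized())
{
    Texture texColor0 = kinectManager.GetColorImageTex(0);  // color texture of the 1st sensor
    Texture texColor1 = kinectManager.GetColorImageTex(1);  // color texture of the 2nd sensor
    // do something with the per-sensor textures
}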

How to remove the SDKs and sensor interfaces you don’t need

If you work with only one type of sensor (most probably Azure Kinect), here is how to get rid of the extra SDKs in the K4A-asset. This will reduce the size of your project and build:

– To remove the RealSense SDK: 1. Delete ‘RealSenseInterface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the RealSenseSDK2.0-folder from AzureKinectExamples/SDK-folder.
– To remove the Kinect-v2 SDK: 1. Delete ‘Kinect2Interface.cs’ from KinectScripts/Interfaces-folder; 2. Delete the KinectSDK2.0-folder from AzureKinectExamples/SDK-folder.

Why is the K2-asset still around

The ‘Kinect v2 Examples with MS-SDK and Nuitrack SDK’-package (or ‘K2-asset’ for short) is still around (and will be around for some time), because it has components and demo scenes that are not available in the new K4A-asset. For instance: the face-tracking components & demo scenes, the hand-interaction components & scenes and the speech-recognition component & scene. This is due to various reasons. For instance, the SDK API does not yet provide this functionality, or I have not managed to add this functionality to the K4A-asset yet. As long as these (or replacement) components & scenes are missing in the K4A-asset, the K2-asset will be kept around.

On the other hand, the K4A-asset has significant advantages as well. It works with the most up-to-date sensors (like Azure Kinect & RealSense), allows multi-camera setups, has a better internal structure and keeps getting better (with more components, functions and demo scenes) with each release.

How to play a recording instead of utilizing live data from a connected sensor

The sensor interfaces in the K4A-asset provide the option to play back a recording file, instead of getting data from a physically connected sensor. Here is how to achieve this for all types of sensor interfaces:

1. Unfold the KinectController-game object in Hierarchy.
2. Select the proper sensor interface object. If you need to play back a Kinect-v2 recording, please skip steps 3, 4 & 5, and look at the note below.
3. In the sensor interface component in Inspector, change ‘Device streaming mode’ from ‘Connected sensor’ to ‘Play recording’.
4. Set ‘Recording file’ to the full path to the previously saved recording. This is the MKV-file (in case of Kinect4Azure) or OUT-file (in case of RealSense sensors).
5. Run the scene to check if it works as expected.

Note: In case of Kinect-v2, please start ‘Kinect Studio v2.0’ (part of Kinect SDK 2.0) and open the previously saved XEF-recording file. Then go to the Play-tab, press the Connect-button and play the file. The scene utilizing Kinect2Interface should be run before you start playing the recording file.

How to set up sensor’s position and rotation in the scene

Here is how to set up the sensor’s transform (position and rotation) in the scene. In case of a multi-camera setup, this should be done at least for the 1st sensor:

1. Unfold the KinectController-game object in Hierarchy, and select the respective sensor interface object.
2. If you are NOT using Azure Kinect with ‘Detect floor for pose estimation’-setting enabled, please measure manually the distance between the floor and the camera, in meters. Set it as Y-position in the Transform-component of the sensor interface object. Leave the X- & Z-values as 0.
3. Set the ‘Get pose frames’-setting of KinectManager-component in the scene to ‘Display info’. Make sure the rotation of the sensor interface object transform is set to (0, 0, 0). Start the scene.
4. You should see the detected sensor’s position and rotation on the screen. Write down the rotation values. If you are using Azure Kinect with ‘Detect floor for pose estimation’ enabled, write down the detected position values too.
5. Select again the sensor interface object in the scene. Set the written down values as X-, Y- & Z-rotation of its Transform-component. If the sensor’s position was detected too, set the position values of the Transform-component, as well.
6. Set the ‘Get pose frames’-setting of KinectManager-component in the scene back to ‘None’, to avoid the overhead of IMU frames processing.
7. Start the scene again, to check if the world coordinates are correct. In case of sensors that are turned sideways, you should turn the monitor sideways, too.

If you are using Azure Kinect and the sensor is turned sideways (+/-90 degrees) or upside-down (180 degrees), please find the ‘Kinect4AzureInterface’-component in the scene, and set its ‘Body tracking sensor orientation’-setting accordingly:
– K4ABT_SENSOR_ORIENTATION_DEFAULT corresponds to a Z-rotation 0 degrees,
– K4ABT_SENSOR_ORIENTATION_CLOCKWISE90 corresponds to a Z-rotation of 270 (or -90) degrees,
– K4ABT_SENSOR_ORIENTATION_COUNTERCLOCKWISE90 corresponds to a Z-rotation of 90 degrees, and
– K4ABT_SENSOR_ORIENTATION_FLIP180 corresponds to a Z-rotation of 180 degrees.
This hints the Body Tracking SDK to consider the respective sensor rotation when detecting the positions and rotations of the body joints.

How to calibrate and set up multiple cameras in the scene

To calibrate multiple cameras, connected to the same machine (usually two or more Azure Kinect sensors), you can utilize the MultiCameraSetup-scene in KinectDemos/MultiCameraSetup-folder. Please open this scene and do as follows:

1. Create the needed sensor interface objects, as children of the KinectController-game object in Hierarchy. By default there are two Kinect4Azure-objects there, but if you have more sensors connected, feel free to create new ones, or duplicate one of the existing sensor interface objects. Don’t forget to set their ‘Device index’ and ‘Device sync mode’-settings accordingly.
2a. Set up the position and rotation of the 1st sensor-interface object (Kinect4Azure0 in the Hierarchy of the scene). See the tip above on how to do that.
2b. Select the ‘KinectController’-object in Hierarchy and enable ‘Sync multi-cam frames’-setting of its KinectManager-component, if it is currently disabled.
3. Run the scene. All configured sensors should light up. During the calibration process, one (and only one) user should stay visible to all configured sensors. The calibration progress and the quality of the calibration will be displayed on screen. The user meshes, as seen by all cameras, should be visible too; they may help you visually track the quality of the calibration. After the calibration completes, the calibration-config file ‘multicam_config.json’ will be saved to the root folder of your Unity project.
4. If, during the calibration process, the user cannot be found in the intersection area of the cameras, please select the ‘KinectController’-object in Hierarchy again, disable the ‘Sync multi-cam frames’-setting of its KinectManager-component and then try to run the scene again.
5. After the automatic calibration completes, you can manually adjust the rotations and positions of the calibrated sensors (except for the first one), to make the user meshes match each other as closely as possible. To orbit around the user meshes for better visibility, hold Alt and drag with the mouse. When you are ready, press the ‘Save’-button to save the changes to the calibration-config file.
6. To test the quality of the saved calibration file, select the ‘KinectController’-object in Hierarchy and enable the ‘Use multi-cam config’-setting of its KinectManager-component. Then run the scene again and check how closely the meshes of the detected user, as seen from the different camera perspectives, match each other.
7. Feel free to re-run the MultiCameraSetup-scene with ‘Use multi-cam config’-setting disabled, if you are not satisfied with the current quality of the multi-camera calibration results.
8. To use the saved calibration config in any other scene, open the respective scene and enable the ‘Use multi-cam config’-setting of the KinectManager-component in that scene. When you run it, the KinectManager should recreate and set up the sensor interfaces, according to the saved calibration config. Run the scene to check if it works as expected. Please note, in case of multiple cameras, the user-body-merger script will automatically try to merge the user bodies detected by all cameras, according to their proximity to each other.

How to make the point cloud demos work with multiple cameras

To make the point cloud demo scenes work with multiple calibrated cameras, please follow these steps:

1. Follow the tip above to calibrate the cameras, if you haven’t done it yet.
2. Open the respective point-cloud scene (i.e. SceneMeshDemo or UserMeshDemo in the KinectDemos/PointCloudDemo-folder).
3. Enable the ‘Use multi-cam config’-option of the KinectManager-component in the scene.
4. Duplicate the SceneMeshS0-object as SceneMeshS1 (in SceneMeshDemo-scene), or User0MeshS0-object as User0MeshS1 (in UserMeshDemo-scene).
5. Change the ‘Sensor index’-setting of their Kinect-related components from 0 to 1. This way you will get a 2nd mesh in the scene, from the 2nd sensor’s point of view.
6. If you have more cameras, continue duplicating the same objects (as SceneMeshS2, SceneMeshS3, etc. in SceneMeshDemo, or User0MeshS2, User0MeshS3, etc. in UserMeshDemo), and change their respective ‘Sensor index’-settings to point to the other cameras, in order to get meshes from these cameras’ POV too.
7. Run the scene to see the result. The sensor meshes should align properly, if the calibration done in MultiCameraSetup-scene is good.

How to send the sensor streams across the network

To send the sensor streams across the network, you can utilize the KinectNetServer-scene and the NetClientInterface-component, as follows:

1. On the machine, where the sensor is connected, open and run the KinectNetServer-scene from KinectDemos/NetworkDemo-folder. This scene utilizes the KinectNetServer-component, to send the requested sensor frames across the network, to the connected clients.
2. On the machine, where you need the sensor streams, create an empty game object, name it NetClient and add it as child to the KinectController-game object (containing the KinectManager-component) in the scene. For reference, see the NetClientDemo1-scene in KinectDemos/NetworkDemo-folder.
3. Set the position and rotation of the NetClient-game object to match the sensor’s physical position and rotation. Add NetClientInterface from KinectScripts/Interfaces as component to the NetClient-game object.
4. Configure the network-specific settings of the NetClientInterface-component. Either enable ‘Auto server discovery’ to get the server address and port automatically (LAN only), or set the server host (and optionally the base port) explicitly. Enable ‘Get body-index frames’, if you need them in that specific scene. The body-index frames may cause extra network traffic, but are needed in many scenes, e.g. for background removal or user meshes.
5. Configure the KinectManager-component in the scene, to receive only the needed sensor frames, and if frame synchronization is needed or not. Keep in mind that more sensor streams would mean more network traffic, and frame sync would increase the lag on the client side.
6. If the scene has UI, consider adding a client-status UI-Text as well, and reference it with the ‘Client status text’-setting of the NetClientInterface-component. This may help you pinpoint various networking issues, like a connection that cannot be established, a temporary disconnection, etc.
7. Run the client scene in the Editor to make sure it connects successfully to the server. If it doesn’t, check the console for error messages.
8. If the client scene needs to run on a mobile device, build it for the target platform and run it on the device, to check if it works there as well.

How to get the user or scene mesh working on Oculus Quest

To get the UserMeshDemo- or SceneMeshDemo-scene working on Oculus Quest, please do as follows:

1. First off, you need to do the usual Oculus-specific setup, i.e. import the ‘Oculus Integration’-asset from the Unity asset store, enable ‘Virtual reality supported’ in ‘Player Settings / XR Settings’ and add ‘Oculus’ as a ‘Virtual Reality SDK’ there, as well. In the more recent Unity releases this is replaced by the ‘XR Plugin Management’ group of project settings.
2. Oculus Quest will need to get the sensor streams over the network. In this regard, open NetClientDemo1-scene in KinectDemos/NetworkDemo-folder, unfold KinectController-game object in Hierarchy and copy the NetClient-object below it to the clipboard.
3. Open UserMeshDemo- or SceneMeshDemo-scene, paste the copied NetClient-object from the clipboard to the Hierarchy and then move it below the KinectController-game object. Make sure that ‘Auto server discovery’ and ‘Get body index frames’-settings of the NetClientInterface-component are both enabled. Feel free also to set the ‘Device streaming mode’ of the Kinect4AzureInterface-component to ‘Disabled’. It will not be used on Quest.
4. Add the KinectNetServer-scene from the KinectDemos/NetworkDemo-folder to the ‘Scenes in Build’-setting of Unity ‘Build settings’. Then build it for the Windows-platform and architecture ‘x86_64’. Alternatively, create a second Unity project, import the K4A-asset in there and open the KinectNetServer-scene. This scene (or executable) will act as network server for the sensor data. You should run it on the machine, where the sensor is physically connected.
5. Remove ‘KinectNetServer’ from the list of ‘Scenes in Build’, and add UserMeshDemo- or SceneMeshDemo-scene instead.
6. Switch to the Android-platform, open ‘Player settings’, go to ‘Other settings’ and make sure ‘OpenGLES3’ is the only item in the ‘Graphics APIs’-list. I would also recommend disabling the ‘Multithreaded rendering’, but this is still a subject for further experiments.
7. Start the KinectNetServer-scene (or executable). Connect your Oculus Quest HMD to the machine, then build, deploy and run the UserMeshDemo- or SceneMeshDemo-scene on the device. After the scene starts, you should see the user or scene mesh live on the HMD. Enjoy!

How to get the color-camera texture in your code

Please check if the ‘Get color frames’-setting of the KinectManager-component in the scene is set to ‘Color texture’. Then use the following snippet in the Update()-method of your script (where sensorIndex is the index of the sensor, 0 for the first or only one):

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    Texture texColor = kinectManager.GetColorImageTex(sensorIndex);
    // do something with the texture
}
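For example, to display the color texture in the UI, you could assign it to a RawImage. The ‘rawImage’ field below is an assumption – a UnityEngine.UI.RawImage reference you would assign in the Inspector:

Texture texColor = kinectManager.GetColorImageTex(sensorIndex);
if (texColor != null && rawImage != null)
{
    rawImage.texture = texColor;  // show the live color-camera image in the UI
}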

How to get the depth-frame data or texture in your code

Please check if the ‘Get depth frames’-setting of the KinectManager-component in the scene is set to ‘Depth texture’ (if you need the texture) or to ‘Raw depth data’ (if you need the raw depth data only). Then use the following snippet in the Update()-method of your script:

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    Texture texDepth = kinectManager.GetDepthImageTex(sensorIndex);  // to get the depth frame texture
    ushort[] rawDepthData = kinectManager.GetRawDepthMap(sensorIndex);  // to get the raw depth frame data
    // do something with the texture or data
}

Please note, the raw depth data is an array of ushort values, with size equal to (depthImageWidth * depthImageHeight). Each value represents the depth in mm of the respective depth frame point.
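Continuing the snippet above, here is how you could read the depth of a single pixel from the raw depth data. The depth image resolution below is an assumption (640 x 576 corresponds to the NFOV-unbinned depth mode of Azure Kinect); adjust it to your selected depth camera mode:

int depthWidth = 640, depthHeight = 576;      // assumed NFOV-unbinned mode; adjust to your depth mode
int x = depthWidth / 2, y = depthHeight / 2;  // sample the center pixel

ushort[] rawDepthData = kinectManager.GetRawDepthMap(sensorIndex);
if (rawDepthData != null && rawDepthData.Length == depthWidth * depthHeight)
{
    ushort depthMm = rawDepthData[y * depthWidth + x];  // depth in millimeters (0 means no valid depth)
    float depthM = depthMm / 1000f;                     // the same depth, in meters
    Debug.Log("Depth at the center pixel: " + depthM + " m");
}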

How to get the position of a body joint in your code

First, please make sure the ‘Get body frames’-setting of the KinectManager-component in the scene is set to something other than ‘None’. Then use the following snippet in the Update()-method of your script (replace ‘HandRight’ below with the body joint you need; playerIndex is the index of the player, 0 for the first or only one):

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    if(kinectManager.IsUserDetected(playerIndex))
    {
        ulong userId = kinectManager.GetUserIdByIndex(playerIndex);

        if(kinectManager.IsJointTracked(userId, joint))
        {
            Vector3 jointPos = kinectManager.GetJointPosition(userId, joint);
            // do something with the joint position
        }
    }
}
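For a quick test, you could make a scene object follow the user’s right hand by assigning the joint position to the object’s transform, inside the innermost if-block above. The positions are in meters; depending on your scene setup, you may need to add an offset:

transform.position = jointPos;  // the object now follows the user's right hand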

How to get the orientation of a body joint in your code

Again, please make sure the ‘Get body frames’-setting of the KinectManager-component in the scene is set to something other than ‘None’. Then use the following snippet in the Update()-method of your script (and replace ‘Pelvis’ below with the body joint you need):

KinectInterop.JointType joint = KinectInterop.JointType.Pelvis;
bool mirrored = false;

KinectManager kinectManager = KinectManager.Instance;
if(kinectManager && kinectManager.IsInitialized())
{
    if(kinectManager.IsUserDetected(playerIndex))
    {
        ulong userId = kinectManager.GetUserIdByIndex(playerIndex);

        if(kinectManager.IsJointTracked(userId, joint))
        {
            Quaternion jointOrientation = kinectManager.GetJointOrientation(userId, joint, !mirrored);
            // do something with the joint orientation
        }
    }
}
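To apply the orientation to a scene object and smooth out the jitter between frames, you could interpolate towards the reported rotation inside the innermost if-block above. The smoothing factor of 10 is just an example value:

transform.rotation = Quaternion.Slerp(transform.rotation, jointOrientation, 10f * Time.deltaTime);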

What is the file-format used by the body recordings

The KinectRecorderPlayer-component can record or replay body-recording files. These recordings are text files, where each line represents a body frame at a specific moment in time. You can use this format description to replay or analyze the body-frame recordings in your own tools. Here is the format of each line. See the sample body frames below, for reference.

0. time in seconds, since the start of recording, followed by ‘|’. All other field separators are ‘;’.
This value is used by the KinectRecorderPlayer-component for time-sync, when it needs to replay the body recording.

1. body frame identifier. should be ‘k4b’.
2. body-frame timestamp, coming from the SDK. This field is ignored by the KinectManager.
3. number of tracked bodies.
4. number of body joints (32).
5. space scale factor (3 numbers for the X,Y & Z-axes).

Then follows the data for each tracked body:
6. body tracking flag – 1 if the body is tracked, 0 if it is not tracked (the 5 zeros at the end of the lines below are for the 5 missing bodies).

If the body is tracked, the body ID and the data for all body joints follow. If it is not tracked, the body ID and the joint data (fields 7-10) are skipped.
7. body ID.

Body-joint data follows, repeated for all body joints (as many times as the joint count in field 4 – 32 for Azure Kinect), ordered by JointType (see KinectScripts/KinectInterop.cs):
8. joint tracking state – 0 means not-tracked; 1 – inferred; 2 – tracked.

If the joint is inferred or tracked, the joint position data follows. If it is not tracked, the joint position data (field 9) is skipped.
9. joint position data, in meters (3 numbers, for the X, Y & Z-axes).
10. joint orientation data, in degrees (3 numbers, for the X, Y & Z-axes).

And here are two body-frame samples, for reference:

4.285|k4b;805827551;1;32;-1;-1;1;1;1;2;-0.415;0.002;1.746;3.538;356.964;2.163;2;-0.408;-0.190;1.717;358.542;356.776;2.160;2;-0.402;-0.345;1.705;353.569;356.587;2.643;2;-0.392;-0.577;1.743;354.717;353.971;4.102;2;-0.386;-0.666;1.742;348.993;349.257;8.088;2;-0.357;-0.537;1.742;351.562;359.329;4.547;2;-0.202;-0.525;1.743;307.855;64.654;324.608;2;0.027;-0.614;1.570;15.579;121.943;301.289;2;-0.092;-0.812;1.468;345.353;117.828;300.655;2;-0.096;-0.899;1.409;345.353;117.828;220.655;2;-0.430;-0.541;1.734;352.718;351.386;16.178;2;-0.566;-0.578;1.714;315.412;285.925;25.871;2;-0.732;-0.648;1.466;15.003;217.635;48.440;2;-0.568;-0.825;1.383;337.354;236.803;353.364;2;-0.512;-0.802;1.288;337.354;236.803;358.364;2;-0.316;0.005;1.752;346.343;356.306;359.296;2;-0.305;0.436;1.694;17.148;355.929;359.284;2;-0.347;0.819;1.852;338.148;350.141;3.025;2;-0.319;0.893;1.664;337.809;349.421;3.293;2;-0.505;-0.001;1.740;350.880;356.484;1.804;2;-0.514;0.433;1.716;11.939;357.147;1.821;2;-0.510;0.833;1.844;334.003;20.297;2.223;2;-0.579;0.886;1.678;333.513;21.027;1.903;2;-0.356;-0.737;1.585;348.993;349.257;8.088;2;-0.333;-0.766;1.634;348.993;349.257;8.088;2;-0.292;-0.728;1.760;348.993;349.257;8.088;2;-0.388;-0.768;1.620;348.993;349.257;8.088;2;-0.470;-0.744;1.734;348.993;349.257;8.088;2;-0.155;-0.866;1.313;345.353;117.828;11.142;2;-0.156;-0.839;1.334;80.906;53.999;316.725;2;-0.539;-0.838;1.394;285.211;20.682;56.425;2;-0.519;-0.966;1.378;10.344;155.193;69.151

4.361|k4b;806494221;1;32;-1;-1;1;1;1;2;-0.404;-0.076;1.745;6.454;356.702;1.397;2;-0.399;-0.264;1.706;2.510;356.605;1.390;2;-0.395;-0.416;1.684;357.553;356.485;1.980;2;-0.384;-0.647;1.724;353.562;358.508;2.961;2;-0.380;-0.735;1.725;355.491;345.619;7.062;2;-0.350;-0.607;1.720;350.254;353.270;353.332;2;-0.198;-0.627;1.739;299.401;48.858;332.268;2;0.067;-0.682;1.613;3.271;109.329;295.800;2;0.018;-0.895;1.492;351.194;137.170;266.059;2;0.037;-0.997;1.483;351.194;137.170;186.059;2;-0.422;-0.610;1.718;348.712;3.006;24.969;2;-0.550;-0.670;1.730;313.287;304.828;18.589;2;-0.772;-0.716;1.528;358.776;240.673;49.456;2;-0.695;-0.892;1.363;319.830;227.016;6.959;2;-0.633;-0.897;1.270;319.830;227.016;11.959;2;-0.306;-0.074;1.751;344.427;356.157;0.339;2;-0.301;0.352;1.679;14.904;356.335;0.338;2;-0.347;0.737;1.820;0.651;347.614;3.214;2;-0.319;0.876;1.677;0.313;346.946;3.207;2;-0.493;-0.077;1.739;348.410;356.260;2.692;2;-0.508;0.352;1.696;7.027;357.126;2.657;2;-0.499;0.759;1.787;5.252;40.137;0.152;2;-0.589;0.893;1.696;4.784;40.810;0.213;2;-0.339;-0.787;1.565;355.491;345.619;7.062;2;-0.319;-0.821;1.611;355.491;345.619;7.062;2;-0.288;-0.798;1.742;355.491;345.619;7.062;2;-0.373;-0.821;1.593;355.491;345.619;7.062;2;-0.462;-0.811;1.703;355.491;345.619;7.062;2;0.024;-1.103;1.436;351.194;137.170;286.972;2;0.050;-0.997;1.413;349.990;58.882;331.104;2;-0.669;-0.904;1.379;279.467;85.405;345.866;2;-0.660;-1.033;1.395;25.016;150.346;66.683
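The format above can be parsed with a few lines of code. Here is a minimal C# sketch that reads a single body-frame line, based on the field description above. The handling of untracked joints (whether the orientation values are also skipped together with the position) is an assumption – please verify it against your own recordings:

using System;
using System.Globalization;

public static class BodyFrameLineParser
{
    public static void ParseLine(string line)
    {
        // split off the frame time (seconds since the start of the recording)
        string[] parts = line.Split('|');
        float frameTime = ParseF(parts[0]);

        string[] f = parts[1].Split(';');
        int i = 0;

        string frameId = f[i++];                 // should be "k4b"
        ulong timestamp = ulong.Parse(f[i++]);   // SDK timestamp (ignored by the KinectManager)
        int numBodies = int.Parse(f[i++]);       // number of tracked bodies
        int numJoints = int.Parse(f[i++]);       // number of body joints (32 for Azure Kinect)
        float scaleX = ParseF(f[i++]), scaleY = ParseF(f[i++]), scaleZ = ParseF(f[i++]);

        for (int b = 0; b < numBodies; b++)
        {
            int bodyTracked = int.Parse(f[i++]); // 1 - tracked, 0 - not tracked
            if (bodyTracked == 0)
                continue;                        // body ID and joint data are skipped

            string bodyId = f[i++];

            for (int j = 0; j < numJoints; j++)
            {
                int jointState = int.Parse(f[i++]);  // 0 - not tracked, 1 - inferred, 2 - tracked
                if (jointState == 0)
                    continue;                        // assumption: position & orientation are skipped

                float px = ParseF(f[i++]), py = ParseF(f[i++]), pz = ParseF(f[i++]);  // meters
                float rx = ParseF(f[i++]), ry = ParseF(f[i++]), rz = ParseF(f[i++]);  // degrees

                Console.WriteLine($"t={frameTime:F3}s body={bodyId} joint={j} pos=({px}, {py}, {pz})");
            }
        }
    }

    private static float ParseF(string s) => float.Parse(s, CultureInfo.InvariantCulture);
}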

How to utilize green screen for better volumetric scenes or videos

Here is how to utilize a green (or other color) screen for high quality volumetric scenes or videos. If you need practical examples, please look at the GreenScreenDemo1- and GreenScreenDemo2-scenes in the KinectDemos/GreenScreenDemo-folder.

1. First, please set the ‘Get color frames’-setting of the KinectManager-component in the scene to ‘Color texture’, and the ‘Get depth frames’-setting to ‘Raw depth data’.
2. Create an object in the scene to hold the green screen components. This object is called ‘GreenScreenMgr’ in the green screen demo scenes.
3. Add the BackgroundRemovalManager- and BackgroundRemovalByGreenScreen-scripts from KinectScripts-folder as components to the object. The green screen component is implemented as a background-removal filter, similar to the other BR filters.
4. Set the ‘Green screen color’-setting of the BackgroundRemovalByGreenScreen-component. This is the color of the physical green screen. By default it is green, but you can use whatever color screen you have (even some towels, but they should not be black, gray or white). To find out the screen’s color, you could take a photo or screenshot of your setup and then use a color picker in any picture editor, to get the color’s RGB values.
5. You can utilize the other settings of the BackgroundRemovalByGreenScreen-component, to adjust the foreground filtering according to your needs, after you start the scene later.
6. Set the ‘Foreground image’-setting of the BackgroundRemovalManager-component to point to any RawImage-object in the scene, to get the green screen filtered output image displayed there.
7. Alternatively, if you need a volumetric output in the scene, add a Quad-object in the scene (menu ‘3D Object / Quad’). This is the ForegroundRenderer-object in the GreenScreenDemo2-scene. Then add the ForegroundBlendRenderer-component from KinectScripts-folder as component to the object.
8. If you use the ForegroundBlendRenderer-component, set its ‘Invalid depth value’ to the distance between the camera and the green screen, in meters. This value will be used as the depth value for the so-called invalid pixels in the depth image. This will provide a more consistent, high-quality foreground image.
9. If you use the ForegroundBlendRenderer-component and you need to apply the scene lighting to the generated foreground mesh, please enable the ‘Apply lighting’-setting of the ForegroundBlendRenderer-component. By default this setting is disabled.
10. Run the scene to see the result. You can then adjust the settings of the BackgroundRemovalByGreenScreen-component to match your needs as well as possible. Don’t forget to copy the changed settings back to the component, after you stop the scene.

How to integrate the Cubemos body tracking with the RealSense sensor interface

Please note the Cubemos Skeleton Tracking SDK is no longer available on either Cubemos’s or Intel’s websites and shops. In this regard, please use Azure Kinect or an iPhone Pro with LiDAR sensor whenever possible, and avoid using RealSense.
To integrate the Cubemos skeleton tracking SDK with the RealSense sensor interface, please follow these steps:

1. Go to https://www.cubemos.com/skeleton-tracking-sdk and download Cubemos Skeleton Tracking SDK v3.0.
2. On the same page, click on ‘Try for free’. It will bring you to this RealSense page. Click ‘Try for free’ again (if you haven’t done it already), to get a license key.
3. Install the downloaded Cubemos Skeleton Tracking SDK. The process is pretty straightforward. After the installation completes, you need to restart the machine. This will add the ‘CUBEMOS_SKEL_SDK’-variable to the list of the environment variables, and the path to the Cubemos bin-folder (where the Cubemos native libraries reside) to the system’s path.
4. After the restart, run ‘Activate Skeleton Tracking SDK’ from the ‘Cubemos Skeleton Tracking SDK’-folder in the Windows Start menu. You’ll have to enter the license key you got earlier from Intel, in order to activate the Cubemos SDK. You don’t need to install the VS or other sample projects. Please look for error or warning messages in the console.
5. If you see a warning that ‘libmmd.dll’ or ‘cubemos_engine.dll’ cannot be found in path, please check if you have the ‘Intel C++ Redistributable’ installed on your machine. If needed, you can download and install it from here.
6. At this point you should be able to run ‘Skeleton Tracking with Intel RealSense’ from the same menu.
7. Create a new Unity project, import the K4A-asset (at least v1.15) into this project, then import the RealSenseInterface-package (if you’d like to try it out free of charge, please e-mail me).
8. Open a demo scene (e.g. the 1st or 4th avatar demo) and start it. If you don’t have any other sensor connected (e.g. Azure Kinect or Kinect-v2), the RealSense-sensor should be automatically started, when you run the scene.

Please note that after the scene starts, there will be a significant delay caused by loading the model into memory and then transferring it to the GPU. By default, the RealSenseInterface starts the Cubemos tracker in GPU mode. If you get errors like ‘cubemos Error. Can’t create skeleton tracker for RealSenseInterface0!’, it may be because the GPU is external or incompatible with Cubemos. Please look at this forum for more info in this regard.

If there are errors in GPU mode, or if you prefer to start the Cubemos tracker in CPU mode in Unity editor, please do as follows:

  • Copy “tbb.dll” from “C:\Program Files\Cubemos\SkeletonTracking\bin” to “C:\Program Files\Unity\Hub\Editor\{your-Unity-version}\Editor”. Don’t forget to close the Unity editor before that, and save (or rename) the original “tbb.dll” used by Unity.
  • Start the Unity editor again, and change the “Body tracking compute”-setting of the RealSenseInterface-component in the scene to “CM_CPU”.
  • Run a scene with body tracking to see, if Cubemos works as expected in CPU mode.

Please also note, the Cubemos-RealSense integration is still experimental and issues are possible.

How to utilize Apple iPhone-Pro or iPad-Pro as depth sensors

As of v1.15 of the K4A-asset you can use the latest Apple iPhone Pro or iPad Pro as depth sensors, because they include LiDAR (depth) sensors in their hardware configurations. If you have one of these devices and would like to use it as a depth (and color) camera in the K4A-asset, here is what to do:

1. Create a new Unity project (with Unity 2020.1.0f1 or later), and import the K4A-asset (v1.15 or later) into this project.
2. Import the ARKitInterface-package into the project (if you’d like to try it out free of charge, please e-mail me).
3. Open Unity ‘Build settings’ and select ‘iOS’ as target platform.
4. Open ‘Player settings’. Then set ‘Camera usage description’ to a description of your choice (e.g. ‘AR’), and the ‘Target minimum iOS Version’ to ‘14.0’ (or at least ‘13.0’). You can also set other Player settings, as needed, for instance the company or product name, bundle identifier, etc.
5. Open the scene you’d like to build (or add it to the ‘Scenes in Build’-list). Then press the ‘Build and Run’ (or ‘Build’) button. The build and deploy process is the same as for all iOS devices.
6. When the app starts on the device for the first time, it will ask for camera access permission. This permission is needed to get the camera (and the LiDAR sensor) running.
7. Please note, there are currently some limitations and peculiarities of the iOS devices, when used as depth sensors:

  • The depth camera subsystem and the body tracking subsystem cannot run at the same time.
  • The body tracking subsystem can track no more than one user.
  • The iOS devices are usually mobile (i.e. non-static).
  • The iOS device can be turned in portrait or landscape mode.

These limitations and peculiarities result in some limitations (and features, as well) when running the demo scenes. For instance:

  • When a scene uses body tracking, it can’t utilize the depth camera. Hence, some of the demo scenes (like the 1st background-removal demo, collider demos, etc.) will not run as expected on this platform.
  • In many demo scenes, when you want the device to track its physical position and rotation, set the ‘Get pose frames’-setting of KinectManager to ‘Update transform’. If you don’t want the device to track its physical position & rotation, set it to ‘None’.
  • If the device is static, you can explicitly set its position. Please contact me, if you need more info in this regard. By default the device is at position (0, 1, 0), in meters.

How to replace Azure Kinect with Femto Bolt or Mega sensors

In August 2023 Microsoft announced that they would discontinue the production of Azure Kinect cameras after October 2023, but that partners such as Orbbec and Analog Devices would provide alternative solutions. Luckily, the Femto Bolt and Femto Mega cameras, developed by Orbbec in partnership with Microsoft, are not only alternatives, but practically a replacement for the Azure Kinect cameras. They can work with Orbbec’s version of the Azure Kinect SDK, as well as with the Body Tracking SDK.

In this regard, the transition from Azure Kinect to Femto Bolt or Femto Mega cameras in “Azure Kinect Examples”-asset (K4A-asset for short) should be fairly simple and straightforward. Please follow these steps:

1. Connect the camera to the power supply and to the computer with the provided USB-C or USB-A data cable.
2a. Download, unzip and run Orbbec Viewer (v1.8.1 or later), select the connected camera and check the quality of its color, depth, IR and IMU streams. See also, if the device timestamps are updating as expected. Then close the Orbbec Viewer.
2b. If you are on Windows, please go to the ‘script’-subfolder of Orbbec Viewer’s folder, open and read ‘obsensor_metadata_win10.md’. Then follow the steps in it, to make the device timestamps go through the UVC protocol.
2c. Check if the firmware version of your device corresponds to the one listed in Orbbec’s firmware repo. If needed, upgrade the firmware via Orbbec Viewer. The instructions are provided on the respective firmware sub-page.
3. Download and unzip Orbbec SDK K4A-Wrapper (v1.8.1 or later). Run ‘k4aviewer’, open the connected device and start the cameras. Check again, if the IR, depth, color and IMU streams are visible, the timestamps are updating and there are no errors in the console. Then stop the cameras, close the device and close the K4A-Viewer.
4. Open the Unity project, where you have previously imported the “Azure Kinect Examples for Unity”-asset (or create a new one from scratch and import the K4A-asset into it).
5. Download, unzip and import the OrbbecFemtoInterface-unitypackage. It will replace the native libraries in the project, to make them work with Orbbec’s Femto Bolt and Femto Mega cameras. The package import has to be done before running any scene. Please note, after this import your project will not work with Azure Kinect cameras any more.
6. Run any of the demo scenes (or your own scene), to make sure the cameras and the scene work correctly. Please note, to run the demo scenes that require body tracking, you still need to have the ‘Azure Kinect Body Tracking SDK’ installed in its default location ‘C:\Program Files\Azure Kinect Body Tracking SDK’.
7. If there are any issues in the steps above, or if there are console errors regarding any Kinect-related functionalities, please feel free to e-mail me for support.

 

174 thoughts on “Azure Kinect Tips & Tricks”

  1. Hi Rumen,
    Thanks for this asset. I tried it for my Kinect-v2 virtual fitting room scene. However, I found that trousers and shoes are difficult to overlay on the player, as trousers are usually not loose.
    I want to know if you have experience with virtual fitting scenes with Azure Kinect. Will it give better results for these tight-fitting items?

    • Hi Rex. Azure Kinect looks pretty good at tracking legs and feet, in my opinion. If you have a sensor at hand, please e-mail me, tell me your invoice (or order) number, and I’ll send you the ‘Azure Kinect Examples…’-asset, to try it out by yourself. In case of trousers and shoes, I would turn off the body blending, as well. In the K4A-asset this means disabling the SceneBlendRenderer-object in the scene.

  2. Hi Rumen,

    Thanks for your help! I’m excited that my multiple-avatar project with K2 finally works!!

    But here I have another problem: the site where my project will be installed is way bigger than I was told; it requires multiple Kinects now.

    Q: Does K2 support multiple sensors in one scene with the ‘Skeleton-Avatar’ function? If it doesn’t, is Azure my only possible option?

    • Hi, the K4A-asset supports multiple sensors, but the Kinect SDK v2.0 does not support multiple sensors connected to the same machine. If you want to use Kinect-v2, you’d need extra PCs, connect the sensors to them and get their data over the network with the help of the KinectNetServer-scene & NetClientInterface-component. The other option is Azure Kinect. You can connect multiple Azure sensors to one machine.

  3. Hi Rumen,

    I’m working on my multi-Azure / multi-skeleton to avatar project now. It’s nearly done, but I’ve met a final problem: when there are multiple Azures, is it possible to bind a specific Azure interface to a specific camera with the AvatarController function?

    E.g.:
    My purpose: I have 2 Azures in different places and 2 cameras / 4 avatars in the Unity scene. When Azure1 detects a user, Avatar1 should move into the sight of cam1 (position relative to the camera); when Azure2 detects a user, another avatar should move to cam2, etc.
    But when each Azure detects a user, the 2nd avatar moves to cam1, not cam2 as expected; only if the user detected by Azure2 is the 3rd or 4th user in the scene does it move to cam2.

    Building 2 projects that use different devices seems a solution, but it brings too much GPU pressure and causes a pretty low framerate.
    I couldn’t find any component in the Azure interface or the KinectController that could bind the device to a specific cam, only in AvatarController.cs – is it possible to make this work?

  4. Hi Rumen,

    Sorry, I should have read the tips more carefully – the solution is already there; the multi-sensor calibration function in the MultiCameraSetup-scene should work.
    I had a try and it does work. With a few adjustments of the position values in the json file after the calibration, I can finally arrange the avatars correctly.

    But it only worked in the editor. I did a build & run and found that one of the devices didn’t light up; the window only shows the output of one device. I tried a few build settings but failed. Keeping the cable arrangement, I switched the device with the other one and found that the device that can’t boot is the same one.

    Then I re-flashed the firmware with the tools in the Azure SDK for both of them, but the problem is still there: one of the two devices can’t boot in the build, but can in the editor.

    Have you ever met this device issue? How can I deal with it?

    • Hi Tsai, have you copied ‘multicam_config.json’ from the root-folder of your project to the folder, where your exe-file resides after the build? The other option would be to copy it to a Resources-folder in the project. Then the Unity build system will take care of putting it into the right asset. The KinectManager looks for this file in the root-folder, and then in resources. If it doesn’t find any, it tries to start up the sensors, as configured in the scene.

      • Sorry for replying so late.
        The issue has already been solved by doing it exactly the way you told me, thx!

        Some (possible glitch) feedback:

        1. Sometimes when I try to re-calibrate these two devices (after setting their rotations one by one) in the editor, the devices & the calibration program boot up but are totally blind – the devices can’t detect anyone. Only if I calibrate them both in standalone mode does the calibration work correctly. It’s different from the tips, but the result suits my need though.

        2. Sometimes, when the issue above doesn’t happen (calibration done in sync mode) and the json file is set correctly, it works perfectly well right after the build. But when I set the build to run automatically when the computer boots up (which is my need) and reboot my computer, the build runs but can’t detect any devices. Only if I set the sync mode of both devices to standalone in the json file can it run and detect the devices after rebooting the computer, but that seems to change the result of the calibration, so I have to calibrate them again.

        3. When I build the new changes to a folder that already has a build, the new multicam-config json isn’t included; the build still runs with the old settings. I have to drag the new json from the project folder to the build folder, so that it runs correctly.

        Anyway, the newest build behaves as expected; I can’t wait to install it at the site, thanks again!

      • To 1&2: Please try to uncheck the ‘Sync multi-cam frames’-setting of KinectManager-component in the MultiCameraSetup-scene, when you start the calibration process.

        To 3: As I said earlier, you can also copy the json-file to Resources-folder after the calibration. Then it will be included in the build automatically. Please don’t forget to rename it to ‘multicam_config.json.txt’ in this case.

  5. Hello Rumen,

    Is there a way I can map user texture onto a 3D model / avatar generated based on user mesh?

    The texture mapping quality is not a concern.

    • If you generate the model from the user mesh, you can map each point of the mesh to a pixel from the color camera image. But I’m not quite sure how you’re going to generate it.

  6. Great work Rumen, is it possible for an avatar to display a closed fist, with the capabilities of the Azure Kinect, and your software? Also, what about face motion capture?

  7. Hi Rumen,
    thank you for the great asset. I’m currently trying to write an application that tracks several users with two overlapping Kinects that are positioned across from each other. I went through your MultiCameraSetup example and set up everything accordingly. However, your example assumes that I always know which user belongs to which camera (in your case the number of users is limited to one). How do I get the world positions of all tracked users in such a setting?

    • Hi Moritz, I suppose you misunderstood something. When you calibrate multiple cameras and then enable ‘Use multi-cam config’ in any scene, the KinectManager starts merging the users seen by the different cameras, and you always get a single list of (merged) users. What you need to do is just process the user data, as if it was a single camera.

  8. Hello Rumen,

    We have a question regarding the mirroring of the scene. After we successfully do a multi-camera setup with K4A, it seems like the scene is mirrored along the X-axis. (E.g. when I raise my right hand, it shows the opposite.)

    Is there any way to get around it? or correct it?

    Thank you!

    • Hi. I’m not sure what exactly your issue is, but you can change the scaling of all Kinect frames. Please open ‘Kinect4AzureInterface.cs’ in ‘AzureKinectExamples/KinectScripts/Interfaces’-folder, find the code on this screenshot and change the scaling (+1/-1), according to your use case.

  9. Hey Rumen,

    Thank you for the response; it’s greatly appreciated!

    Regarding the MultiCamera Setup: We have successfully calibrated 4 Kinect Azure cameras, which are displayed using the SceneMeshPrefab with the SceneMeshRendererGPU. However, the resulting point cloud of the floor isn’t aligned with the world’s plane; it appears rotated. In edit mode, we can only adjust the position and rotation of individual cameras, not the entire calibrated scene. When I try to correct the rotation in the Scene Window during calibration and then save it, the issue persists. Upon checking the “Use Multi Cam Config”, the stitched point cloud still appears rotated.

    I kindly would like to ask if there is a way to align the calibrated scene to the World Floor Plane

    Thank you
    Kind Regards

    • I suppose something is wrong with setting of the 1st camera pose, but I still don’t quite understand the issue. May I ask you to e-mail me a picture of your setup and some screenshots – of the point cloud after you start the scene + some of the scene components (KinectManager-settings & sensor interfaces with their transforms). Please include the ‘multicam_config.json’-file as well. I hope this will help me understand your issue better, and think of some solution.

  10. Hi Rumen,

    I am having trouble getting merged and aligned body data on a multiple camera setup. I have run the MultiCameraSetup scene and have followed the steps above. After running the calibration, I can see two skeletal rigs (green + yellow) aligned to the one person who is standing in a position visible to both cameras, and the Kinect User Manager reports two unique user IDs for that person (and each person visible by both cameras).

    Is it expected that the Kinect User Manager produce two user IDs per person? If so, what is the process to identify the camera source and merge the data generated by each camera?

    If this is not the expected behavior, what is the best path to getting merged user IDs? Currently the cameras are facing each other, raised ~3.5 meters above the ground, ~6 meters apart. There is an overlap in tracked body space of over 1 meter.

    Let me know if I can provide any additional details. Any help would be greatly appreciated!

    • Hi, do you see this behavior in MultiCameraSetup-scene (while and after the calibration), or in any other scene after the calibration? The user merging usually happens automatically, when the ‘Use multi-cam config’-setting is turned on.

      • Yes, I see that behavior in the MultiCameraSetup-scene after calibration and in my own scene. Both have the ‘Use multi-cam config’-setting turned on.

      • In the MultiCameraSetup-scene ‘Use multi-cam config’ should be off.
        Please e-mail me and add some screenshots or a short video depicting the erroneous behavior, the settings of the KinectManager in the scene and, if possible, a picture of the setup. Please also attach the ‘multicam_config.json’-file, so I can check what’s in it.

  11. Hi Rumen, I am using MultiCameraSetup with 3 Kinects and I am curious to know if it is possible to save the calibrated scene (3 simultaneous Kinect streams) in real time. Your help would be truly appreciated!

    • Hi Anna, I’m not sure how exactly you would like to save scene in real time. Do you mean meshes, point clouds, body data, or anything else?

      • I apologize for my lack of details – I need to save the point clouds from the 3 Kinects, to be able to play them afterwards as a volumetric film. I also would like to ask if it is possible to change the point size in the point-cloud display mode. Thanks in advance for your time and help

      • Sorry for the delay! There is no demo that saves the point clouds to a file. There are certain file formats that can be used for saving point clouds, like PLY, OBJ or USD. In this case you should use the SceneMeshRenderer-component instead of SceneMeshRendererGpu. This way you’ll have access to the point cloud/mesh data on the CPU side. Please note, this will be a CPU-intensive job.

        Regarding point size: PSIZE is no longer supported by DirectX and the built-in render pipeline. The point size there is always 1. But you can use the Shader Graph in HDRP or URP. It has a PointSize-node. For a Shader Graph demo, please look at the VfxPointCloudDemo-scene.

  12. Hi Rumen,

    I have two questions regarding the MultiCamera setup:

    1. In your MultiCamera example, the two meshes created by the UserMesh prefab align perfectly. However, when I use the SceneMeshGpu prefab in a different scene and simply use the follow-sensor-transform script, they do not overlap. Why is that, and does that mean that the user tracking will be off as well?
    2. Both cameras look at a physical table, and I have a digital twin of that table in Unity. Now I want to place the Kinects in the Unity scene in such a way that the digital table is placed at the same position as the physical one, to allow interaction on the table. I hope it’s clear what I mean. Do you have an idea of how to do that?

    All best
    Moritz

    • Ah and one other thing:

      If I wanted to get e.g. the hips world position of a specific user, how would I do that in a multi-camera scenario?

      • Hi Moritz,

        To your questions:
        1. In SceneMeshDemo you should have the same result. Just please add SceneMeshSx-objects for each camera, and change the sensor index in their FollowSensorTransform & SceneMeshRendererGpu-components. Please also upgrade to the latest version of the Azure Kinect asset from the Unity asset store and then calibrate the cameras again. It contains a fix for alignment imperfections in case of multiple cameras, as far as I remember.
        2. If you calibrate the cameras as they’re physically set and then enable ‘Use multi-cam config’-setting of KM, you should get them the same way in your scene (e.g. in the digital twin), along with the table and all objects on it.
        3. In case of multiple cameras, KM employs a class called KinectUserBodyMerger that merges the bodies detected by the different cameras in one list of users. In this regard, you should use the same methods, as if there is only one camera. In your case, KinectManager.Instance.GetUserPosition(userId) or KinectManager.Instance.GetJointPosition(userId, KinectInterop.JointType.Pelvis) should do the job. And ‘userId’ you can get by calling KinectManager.Instance.GetUserIdByIndex(playerIndex).

      • Ah, and one more thing: if I wanted to not use the body merger, but simply detect the users and also their corresponding sensor IDs – is there a way to do that?

      • Body merger is automatic, but you can get the sensor-detected body data as well. Use KinectManager.Instance.GetSensorJointPosition()-method instead.

      • Ah, I didn’t see that you answered my previous questions in the meantime! Thank you for the updates so far! Will test it asap 🙂

      • Thank you so much. Currently, I’m getting notified that a new user was detected via the OnUserAdded event from KinectUserManager. How would I know to which sensor the UserId and index belong?

      • And one last thing and then I’m stopping with the questions for a bit 😉

        I can’t seem to get the syncing of the Kinects to work, so I’m running them both in standalone mode for now. How important is the sync functionality, in your experience?

      • Please look here to see how to wire the Azure Kinect sensors, in order to sync them. Don’t forget to set their respective roles with the ‘Device sync mode’-setting in the respective sensor interfaces, before you run the MultiCameraSetup-scene.
        Of course, you can keep the cameras in standalone mode. In this case, slight discrepancies between the frames of the different sensors are possible. If the sensors work in standalone mode, better uncheck the ‘Sync multi-cam frames’-setting of the KinectManager.

  14. Dear Everybody,

    I have been working for many years with this amazing asset.

    With this latest version, however, I cannot get the orientation of the AvatarController right.
    It makes no difference which setting I use for ‘Get Body Frames’ (raw pose data vs. update transform) to get the world transformation.

    Picture here:
    https://ibb.co/g6fXJR3

    Thank you very much,

    Kind regards,

    Wim

    • Dear Wim, when you are playing a recording, the pose frames are not available. That’s why the sensor pose estimation does not work. In this case, please set the Y-position of the Kinect4Azure’s transform manually to the actual sensor height, and the transform rotation to the actual sensor orientation. If this is your sensor and its pose hasn’t changed since the recording, just connect it, set KM’s ‘Get pose frames’-setting to ‘Display info’, run a demo scene with the connected sensor, and then write down the position and rotation values.
