Kinect v2 Tips, Tricks and Examples

After answering so many different questions about how to use various parts and components of the “Kinect v2 with MS-SDK”-package, I think it would be easier if I shared some general tips, tricks and examples. I’m going to add more tips and tricks to this article over time. Feel free to drop by, from time to time, to check out what’s new.

And here is a link to the Online documentation of the K2-asset.

Table of Contents:

What is the purpose of all managers in the KinectScripts-folder
How to use the Kinect v2-Package functionality in your own Unity project
How to use your own model with the AvatarController
How to make the avatar hands twist around the bone
How to utilize Kinect to interact with GUI buttons and components
How to get the depth- or color-camera textures
How to get the position of a body joint
How to make a game object rotate as the user
How to make a game object follow user’s head position and rotation
How to get the face-points’ coordinates
How to mix Kinect-captured movement with Mecanim animation
How to add your models to the FittingRoom-demo
How to set up the sensor height and angle
Are there any events, when a user is detected or lost
How to process discrete gestures like swipes and poses like hand-raises
How to process continuous gestures, like ZoomIn, ZoomOut and Wheel
How to utilize visual (VGB) gestures in the K2-asset
How to change the language or grammar for speech recognition
How to run the fitting-room or overlay demo in portrait mode
How to build an exe from ‘Kinect-v2 with MS-SDK’ project
How to make the Kinect-v2 package work with Kinect-v1
What do the options of ‘Compute user map’-setting mean
How to set up the user detection order
How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS
How to build Windows-Store (UWP-8.1) application
How to work with multiple users
How to use the FacetrackingManager
How to add background image to the FittingRoom-demo
How to move the FPS-avatars of positionally tracked users in VR environment
How to create your own gestures
How to enable or disable the tracking of inferred joints
How to build exe with the Kinect-v2 plugins provided by Microsoft
How to build Windows-Store (UWP-10) application
How to run the projector-demo scene
How to render background and the background-removal image on the scene background
How to run the demo scenes on non-Windows platforms
How to workaround the user tracking issue, when the user is turned back
How to get the full scene depth image as texture
Some useful hints regarding AvatarController and AvatarScaler
How to setup the K2-package to work with Orbbec Astra sensors (deprecated)
How to setup the K2-asset to work with Nuitrack body tracking SDK
How to control Keijiro’s Skinner-avatars with the Avatar-Controller component
How to track a ball hitting a wall (hints)
How to create your own programmatic gestures
What is the file-format used by the KinectRecorderPlayer-component (KinectRecorderDemo)
How to enable user gender and age detection in KinectFittingRoom1-demo scene

What is the purpose of all managers in the KinectScripts-folder:

The managers in the KinectScripts-folder are components. You can utilize them in your projects, depending on the features you need. The KinectManager is the most general component, needed to interact with the sensor and to get basic data from it, like the color and depth streams, and the bodies and joints’ positions in meters, in Kinect space. The purpose of the AvatarController is to transfer the detected joint positions and orientations to a rigged skeleton. The CubemanController is similar, but it works with transforms and lines to represent the joints and bones, in order to make locating the tracking issues easier. The FacetrackingManager deals with the face points and head/neck orientation. It is used internally by the KinectManager (if available at the same time) to get the precise position and orientation of the head and neck. The InteractionManager is used to control the hand cursor and to detect hand grips, releases and clicks. And finally, the SpeechManager is used for recognition of speech commands. Pay also attention to the Samples-folder. It contains several simple examples (some of them cited below) you can learn from, use directly or copy parts of the code into your scripts.

How to use the Kinect v2-Package functionality in your own Unity project:

1. Copy folder ‘KinectScripts’ from the Assets/K2Examples-folder of the package to your project. This folder contains the package scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the Assets/K2Examples-folder of the package to your project. This folder contains all needed libraries and resources. You can skip copying the libraries you don’t plan to use, in order to save space.
3. Copy folder ‘Standard Assets’ from the Assets/K2Examples-folder of the package to your project. It contains the wrapper classes for Kinect-v2 SDK.
4. Wait until Unity detects and compiles the newly copied resources, folders and scripts.
See this tip as well, if you’d like to build your project with the Kinect-v2 plugins provided by Microsoft.

How to use your own model with the AvatarController:

1. (Optional) Make sure your model is in T-pose. This is the zero-pose of Kinect joint orientations.
2. Select the model-asset in Assets-folder. Select the Rig-tab in Inspector window.
3. Set the AnimationType to ‘Humanoid’ and AvatarDefinition – to ‘Create from this model’.
4. Press the Apply-button. Then press the Configure-button to make sure the joints are correctly assigned. After that exit the configuration window.
5. Put the model into the scene.
6. Add the KinectScript/AvatarController-script as component to the model’s game object in the scene.
7. Make sure your model also has Animator-component, it is enabled and its Avatar-setting is set correctly.
8. Enable or disable (as needed) the MirroredMovement and VerticalMovement-settings of the AvatarController-component. Do mind when mirrored movement is enabled, the model’s transform should have Y-rotation of 180 degrees.
9. Run the scene to test the avatar model. If needed, tweak some settings of AvatarController and try again.

How to make the avatar hands twist around the bone:

To do it, you need to set ‘Allowed Hand Rotations’-setting of the KinectManager to ‘All’. KinectManager is a component of the MainCamera in the example scenes. This setting has three options: None – turns off all hand rotations, Default – turns on the hand rotations, except the twists around the bone, All – turns on all hand rotations.

How to utilize Kinect to interact with GUI buttons and components:

1. Add the InteractionManager to the main camera or to other persistent object in the scene. It is used to control the hand cursor and to detect hand grips, releases and clicks. Grip means closed hand with thumb over the other fingers, Release – opened hand, hand Click is generated when the user’s hand doesn’t move (stays still) for about 2 seconds.
2. Enable the ‘Control Mouse Cursor’-setting of the InteractionManager-component. This setting transfers the position and clicks of the hand cursor to the mouse cursor, this way enabling interaction with the GUI buttons, toggles and other components.
3. If you need drag-and-drop functionality for interaction with the GUI, enable the ‘Control Mouse Drag’-setting of the InteractionManager-component. This setting starts mouse dragging, as soon as it detects hand grip and continues the dragging until hand release is detected. If you enable this setting, you can also click on GUI buttons with a hand grip, instead of the usual hand click (i.e. staying in place, over the button, for about 2 seconds).
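Apart from controlling the mouse cursor, the hand events can also be polled directly from a script. Below is a minimal sketch; the `Instance`-singleton, `IsInteractionInited()` and `GetLastRightHandEvent()` names are assumptions based on how the other managers in the K2-asset are structured – check KinectScripts/InteractionManager.cs for the exact API:

```csharp
using UnityEngine;

public class HandEventPoller : MonoBehaviour
{
    void Update()
    {
        // InteractionManager is assumed to expose a singleton instance,
        // like the other managers in the K2-asset
        InteractionManager intManager = InteractionManager.Instance;
        if(intManager == null || !intManager.IsInteractionInited())
            return;

        // GetLastRightHandEvent() is assumed to return the last Grip/Release event
        if(intManager.GetLastRightHandEvent() == InteractionManager.HandEventType.Grip)
        {
            // right hand grip detected - e.g. start dragging an object
        }
    }
}
```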

How to get the depth- or color-camera textures:

First off, make sure that ‘Compute User Map’-setting of the KinectManager-component is enabled, if you need the depth texture, or ‘Compute Color Map’-setting of the KinectManager-component is enabled, if you need the color camera texture. Then write something like this in the Update()-method of your script:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    Texture2D depthTexture = manager.GetUsersLblTex();
    Texture2D colorTexture = manager.GetUsersClrTex();
    // do something with the textures
}

How to get the position of a body joint:

This is demonstrated in KinectScripts/Samples/GetJointPositionDemo-script. You can add it as a component to a game object in your scene to see it in action. Just select the needed joint and optionally enable saving to a csv-file. Do not forget to add the KinectManager as component to a game object in your scene. It is usually a component of the MainCamera in the example scenes. Here is the main part of the demo-script that retrieves the position of the selected joint:

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    long userId = manager.GetPrimaryUserID();

    if(manager.IsJointTracked(userId, (int)joint))
    {
        Vector3 jointPos = manager.GetJointPosition(userId, (int)joint);
        // do something with the joint position
    }
}

How to make a game object rotate as the user:

This is similar to the previous example and is demonstrated in KinectScripts/Samples/FollowUserRotation-script. To see it in action, you can create a cube in your scene and add the script as a component to it. Do not forget to add the KinectManager as component to a game object in your scene. It is usually a component of the MainCamera in the example scenes.

How to make a game object follow user’s head position and rotation:

You need the KinectManager and FacetrackingManager added as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene. Then, to get the position of the head and orientation of the neck, you need code like this in your script:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    long userId = manager.GetPrimaryUserID();

    if(manager.IsJointTracked(userId, (int)KinectInterop.JointType.Head))
    {
        Vector3 headPosition = manager.GetJointPosition(userId, (int)KinectInterop.JointType.Head);
        Quaternion neckRotation = manager.GetJointOrientation(userId, (int)KinectInterop.JointType.Neck);
        // do something with the head position and neck orientation
    }
}

How to get the face-points’ coordinates:

You need a reference to the respective FaceFrameResult-object. This is demonstrated in KinectScripts/Samples/GetFacePointsDemo-script. You can add it as a component to a game object in your scene, to see it in action. To get a face point coordinates in your script you need to invoke its public GetFacePoint()-function. Do not forget to add the KinectManager and FacetrackingManager as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene.
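As a rough sketch, a script of yours could hold a reference to the demo-component and query it each frame. The parameter and return type of `GetFacePoint()` below are assumptions for illustration – check the actual signature in the GetFacePointsDemo-script:

```csharp
using UnityEngine;

public class FacePointUser : MonoBehaviour
{
    // reference to the GetFacePointsDemo-component in the scene (assign in Inspector)
    public GetFacePointsDemo facePointsDemo;

    // hypothetical index of the face point we are interested in -
    // see the face-point enumeration used by the sample script
    public int facePointIndex = 0;

    void Update()
    {
        if(facePointsDemo == null)
            return;

        // GetFacePoint() is assumed here to take a face-point index
        // and return its position
        Vector3 pointPos = facePointsDemo.GetFacePoint(facePointIndex);
        // do something with the face-point position
    }
}
```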

How to mix Kinect-captured movement with Mecanim animation

1. Use the AvatarControllerClassic instead of AvatarController-component. Assign only those joints that have to be animated by the sensor.
2. Set the SmoothFactor-setting of AvatarControllerClassic to 0, to apply the detected bone orientations instantly.
3. Create an avatar-body-mask and apply it to the Mecanim animation layer. In this mask, disable Mecanim animations of the Kinect-animated joints mentioned above. Do not disable the root-joint!
4. Enable the ‘Late Update Avatars’-setting of KinectManager (component of MainCamera in the example scenes).
5. Run the scene to check the setup. When a player gets recognized by the sensor, part of his joints will be animated by the AvatarControllerClassic component, and the other part – by the Animator component.

How to add your models to the FittingRoom-demo

1. For each of your fbx-models, import the model and select it in the Assets-view in Unity editor.
2. Select the Rig-tab in Inspector. Set the AnimationType to ‘Humanoid’ and the AvatarDefinition to ‘Create from this model’.
3. Press the Apply-button. Then press the Configure-button to check if all required joints are correctly assigned. The clothing models usually don’t use all joints, which can make the avatar definition invalid. In this case you can assign manually the missing joints (shown in red).
4. Keep in mind: The joint positions in the model must match the structure of the Kinect-joints. You can see them, for instance in the KinectOverlayDemo2. Otherwise the model may not overlay the user’s body properly.
5. Create a sub-folder for your model category (Shirts, Pants, Skirts, etc.) in the FittingRoomDemo/Resources-folder.
6. Create sub-folders with subsequent numbers (0000, 0001, 0002, etc.) in the model-category folder – one for each model imported in step 1.
7. Move your models into these numerical folders, one model per folder, along with the needed materials and textures. Rename the model’s fbx-file to ‘model.fbx’.
8. You can put a preview image for each model in jpeg-format (100 x 143px, 24bpp) in the respective model folder. Then rename it to ‘preview.jpg.bytes’. If you don’t put a preview image, the fitting-room demo will display ‘No preview’ in the model-selection menu.
9. Open the FittingRoomDemo1-scene.
10. Add a ModelSelector-component for each model category to the KinectController game object. Set its ‘Model category’-setting to be the same as the name of sub-folder created in p.5 above. Set the ‘Number of models’-setting to reflect the number of sub-folders created in p.6 above.
11. The other settings of your ModelSelector-component must be similar to the existing ModelSelector in the demo. I.e. ‘Model relative to camera’ must be set to ‘BackgroundCamera’, ‘Foreground camera’ must be set to ‘MainCamera’, ‘Continuous scaling’ – enabled. The scale-factor settings may be set initially to 1 and the ‘Vertical offset’-setting to 0. Later you can adjust them slightly to provide the best model-to-body overlay.
12. Enable the ‘Keep selected model’-setting of the ModelSelector-component, if you want the selected model to continue overlaying user’s body, after the model category changes. This is useful, if there are several categories (i.e. ModelSelectors), for instance for shirts, pants, skirts, etc. In this case the selected shirt model will still overlay user’s body, when the category changes and the user starts selecting pants, for instance.
13. The CategorySelector-component provides gesture control for changing models and categories, and takes care of switching model categories (e.g for shirts, pants, ties, etc.) for the same user. There is already a CategorySelector for the 1st user (player-index 0) in the scene, so you don’t need to add more.
14. If you plan for multi-user fitting-room, add one CategorySelector-component for each other user. You may also need to add the respective ModelSelector-components for model categories that will be used by these users, too.
15. Run the scene to ensure that your models can be selected in the list and they overlay the user’s body correctly. Experiment a bit if needed, to find the values of scale-factors and vertical-offset settings that provide the best model-to-body overlay.
16. If you want to turn off the cursor interaction in the scene, disable the InteractionManager-component of KinectController-game object. If you want to turn off the gestures (swipes for changing models & hand raises for changing categories), disable the respective settings of the CategorySelector-component. If you want to turn off or change the T-pose calibration, change the ‘Player calibration pose’-setting of KinectManager-component.
17. You can use the FittingRoomDemo2 scene, to utilize or experiment with a single overlay model. Adjust the scale-factor settings of AvatarScaler to fine tune the scale of the whole body, arm- or leg-bones of the model, if needed. Enable the ‘Continuous Scaling’ setting, if you want the model to rescale on each Update.
18. If the clothing/overlay model uses the Standard shader, set its ‘Rendering mode’ to ‘Cutout’. See this comment below for more information.

How to set up the sensor height and angle

There are two very important settings of the KinectManager-component that influence the calculation of users’ and joints’ space coordinates, hence almost all user-related visualizations in the demo scenes. Here is how to set them correctly:

1. Set the ‘Sensor height’-setting, as to how high above the ground the sensor is, in meters. The default value is 1, i.e. 1.0 meter above the ground, which may not be your case.
2. Set the ‘Sensor angle’-setting, as to the tilt angle of the sensor, in degrees. Use positive degrees if the sensor is tilted up, negative degrees – if it is tilted down. The default value is 0, which means 0 degrees, i.e. the sensor is not tilted at all.
3. Because it is not so easy to estimate the sensor angle manually, you can use the ‘Auto height angle’-setting to find out this value. Select ‘Show info only’-option and run the demo-scene. Then stand in front of the sensor. The information on screen will show you the rough height and angle-settings, as estimated by the sensor itself. Repeat this 2-3 times and write down the values you see.
4. Finally, set the ‘Sensor height’ and ‘Sensor angle’ to the estimated values you find best. Set the ‘Auto height angle’-setting back to ‘Dont use’.
5. If you find the height and angle values estimated by the sensor good enough, or if your sensor setup is not fixed, you can set the ‘Auto height angle’-setting to ‘Auto update’. It will update the ‘Sensor height’ and ‘Sensor angle’-settings continuously, when there are users in the field of view of the sensor.
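If your setup is known in advance, the same two settings can also be set from a script at startup. This is a sketch only – `sensorHeight` and `sensorAngle` are assumed to be the public fields behind the respective Inspector settings; verify the exact names in KinectScripts/KinectManager.cs:

```csharp
using UnityEngine;

public class SensorPoseSetter : MonoBehaviour
{
    void Start()
    {
        KinectManager manager = KinectManager.Instance;
        if(manager && manager.IsInitialized())
        {
            // assumed public fields behind the 'Sensor height' and
            // 'Sensor angle' Inspector settings
            manager.sensorHeight = 1.2f;  // 1.2 meters above the ground
            manager.sensorAngle = -10f;   // tilted 10 degrees down
        }
    }
}
```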

Are there any events, when a user is detected or lost

There are no special event handlers for user-detected/user-lost events, but there are two other options you can use:

1. In the Update()-method of your script, invoke the GetUsersCount()-function of KinectManager and compare the returned value to a previously saved value, like this:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    int usersNow = manager.GetUsersCount();

    if(usersNow > usersSaved)
    {
        // new user detected
    }
    if(usersNow < usersSaved)
    {
        // user lost
    }

    usersSaved = usersNow;
}

2. Create a class that implements KinectGestures.GestureListenerInterface and add it as component to a game object in the scene. It has methods UserDetected() and UserLost(), which you can use as user-event handlers. The other methods could be left empty or return the default value (true). See the SimpleGestureListener or GestureListener-classes, if you need an example.
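A minimal listener that only cares about the user events could look like the sketch below. The method signatures are modeled after the SimpleGestureListener-script and may differ between K2-asset versions – copy them from that script to be safe:

```csharp
using UnityEngine;

public class UserEventListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public void UserDetected(long userId, int userIndex)
    {
        Debug.Log("User detected: " + userId);
        // e.g. start the game for this player
    }

    public void UserLost(long userId, int userIndex)
    {
        Debug.Log("User lost: " + userId);
        // e.g. pause the game
    }

    // the remaining interface methods may be left empty or return the default value
    public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos)
    {
    }

    public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        return true;
    }

    public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint)
    {
        return true;
    }
}
```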

How to process discrete gestures like swipes and poses like hand-raises

Most of the gestures, like SwipeLeft, SwipeRight, Jump, Squat, etc. are discrete. All poses, like RaiseLeftHand, RaiseRightHand, etc. are also considered as discrete gestures. This means these gestures may report progress or not, but all of them get completed or cancelled at the end. Processing these gestures in a gesture-listener script is relatively easy. You need to do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureCompleted() add code to process the discrete gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
    // gesture is detected - process it (for instance, set a flag or execute an action)

3. In the GestureCancelled()-function, add code to process the cancellation of the discrete gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
    // gesture is cancelled - process it (for instance, clear the flag)

If you need code samples, see the SimpleGestureListener.cs or CubeGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to process continuous gestures, like ZoomIn, ZoomOut and Wheel

Some of the gestures, like ZoomIn, ZoomOut and Wheel, are continuous. This means these gestures never get fully completed, but only report progress greater than 50%, as long as the gesture is detected. To process them in a gesture-listener script, do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureInProgress() add code to process the continuous gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    if(progress > 0.5f)
    {
        // gesture is detected - process it (for instance, set a flag, get zoom factor or angle)
    }
    else
    {
        // gesture is no more detected - process it (for instance, clear the flag)
    }
}

3. In the GestureCancelled()-function, add code to process the end of the continuous gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
    // gesture is cancelled - process it (for instance, clear the flag)

If you need code samples, see the SimpleGestureListener.cs or ModelGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to utilize visual (VGB) gestures in the K2-asset

The visual gestures, created by the Visual Gesture Builder (VGB) can be used in the K2-asset, too. To do it, follow these steps (and see the VisualGestures-game object and its components in the KinectGesturesDemo-scene):

1. Copy the gestures’ database (xxxxx.gbd) to the Resources-folder and rename it to ‘xxxxx.gbd.bytes’.
2. Add the VisualGestureManager-script as a component to a game object in the scene (see VisualGestures-game object).
3. Set the ‘Gesture Database’-setting of VisualGestureManager-component to the name of the gestures’ database, used in step 1 (‘xxxxx.gbd’).
4. Create a visual-gesture-listener to process the gestures, and add it as a component to a game object in the scene (see the SimpleVisualGestureListener-script).
5. In the GestureInProgress()-function of the gesture-listener add code to process the detected continuous gestures and in the GestureCompleted() add code to process the detected discrete gestures.

How to change the language or grammar for speech recognition

1. Make sure you have installed the needed language pack from here.
2. Set the ‘Language code’-setting of SpeechManager-component, as to the grammar language you need to use. The list of language codes can be found here (see ‘LCID Decimal’).
3. Make sure the ‘Grammar file name’-setting of SpeechManager-component corresponds to the name of the grxml.txt-file in Assets/Resources.
4. Open the grxml.txt-grammar file in Assets/Resources and set its ‘xml:lang’-attribute to the language that corresponds to the language code in step 2.
5. Make the other needed modifications in the grammar file and save it.
6. (Optional since v2.7) Delete the grxml-file with the same name in the root-folder of your Unity project (the parent folder of Assets-folder).
7. Run the scene to check, if speech recognition works correctly.

How to run the fitting-room or overlay demo in portrait mode

1. First off, add 9:16 (or 3:4) aspect-ratio to the Game view’s list of resolutions, if it is missing.
2. Select the 9:16 (or 3:4) aspect ratio of Game view, to set the main-camera output in portrait mode.
3. Open the fitting-room or overlay-demo scene and select each of the BackgroundImage(X)-game object(s). If it has a child object called RawImage, select this sub-object instead.
4. Enable the PortraitBackground-component of each of the selected BackgroundImage object(s). When finished, save the scene.
5. Run the scene and test it in portrait mode.

How to build an exe from ‘Kinect-v2 with MS-SDK’ project

By default Unity builds the exe (and the respective xxx_Data-folder) in the root folder of your Unity project. It is recommended that you use another, empty folder instead. The reason is that building the exe in the folder of your Unity project may cause conflicts between the native libraries used by the editor and the ones used by the exe, if they have different architectures (for instance the editor is 64-bit, but the exe is 32-bit).

Also, before building the exe, make sure you’ve copied the Assets/Resources-folder from the K2-asset to your Unity project. It contains the needed native libraries and custom shaders. Optionally you can remove the unneeded zip.bytes-files from the Resources-folder. This will save a lot of space in the build. For instance, if you target Kinect-v2 only, you can remove the Kinect-v1 and OpenNi2-related zipped libraries. The exe won’t need them anyway.

How to make the Kinect-v2 package work with Kinect-v1

If you have only Kinect v2 SDK or Kinect v1 SDK installed on your machine, the KinectManager should detect the installed SDK and sensor correctly. But in case you have both Kinect SDK 2.0 and SDK 1.8 installed simultaneously, the KinectManager will put preference on Kinect v2 SDK and your Kinect v1 will not be detected. The reason for this is that you can use SDK 2.0 in offline mode as well, i.e. without sensor attached. In this case you can emulate the sensor by playing recorded files in Kinect Studio 2.0.

If you want to make the KinectManager utilize the appropriate interface, depending on the currently attached sensor, open KinectScripts/Interfaces/Kinect2Interface.cs and at its start change the value of ‘sensorAlwaysAvailable’ from ‘true’ to ‘false’. After this, close and reopen the Unity editor. Then, on each start, the KinectManager will try to detect which sensor is currently attached to your machine and use the respective sensor interface. This way you could switch the sensors (Kinect v2 or v1), as to your preference, but will not be able to use the offline mode for Kinect v2. To utilize the Kinect v2 offline mode again, you need to switch ‘sensorAlwaysAvailable’ back to true.

What do the options of ‘Compute user map’-setting mean

Here are one-line descriptions of the available options:

  • RawUserDepth means that only the raw depth image values, coming from the sensor will be available, via the GetRawDepthMap()-function for instance;
  • BodyTexture means that GetUsersLblTex()-function will return the white image of the tracked users;
  • UserTexture will cause GetUsersLblTex() to return the tracked users’ histogram image;
  • CutOutTexture, combined with enabled ‘Compute color map‘-setting, means that GetUsersLblTex() will return the cut-out image of the users.

All these options (except RawUserDepth) can be tested instantly, if you enable the ‘Display user map‘-setting of KinectManager-component, too.

How to set up the user detection order

There is a ‘User detection order’-setting of the KinectManager-component. You can use it to determine how the user detection should be done, depending on your requirements. Here are short descriptions of the available options:

  • Appearance is selected by default. It means that the player indices are assigned in order of user appearance. The first detected user gets player index 0, the next one gets index 1, etc. If user 0 gets lost, the remaining users are not reordered. The next newly detected user will take its place;
  • Distance means that player indices are assigned depending on distance of the detected users to the sensor. The closest one will get player index 0, the next closest one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the distances to the remaining users;
  • Left to right means that player indices are assigned depending on the X-position of the detected users. The leftmost one will get player index 0, the next leftmost one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the X-positions of the remaining users;

The user-detection area can be further limited with the ‘Min user distance’, ‘Max user distance’ and ‘Max left right distance’-settings, in meters from the sensor. The maximum number of detected users can be limited by lowering the value of the ‘Max tracked users’-setting.

How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS

If you select the MainCamera in the KinectFittingRoom1-demo scene (in v2.10 or above), you will see a component called UserBodyBlender. It is responsible for mixing the clothing model (overlaying the user) with the real world objects (including user’s body parts), depending on the distance to camera. For instance, if your arms or other real-world objects are in front of the model, you will see them overlaying the model, as expected.

You can enable the component, to turn on the user’s body-blending functionality. The ‘Depth threshold’-setting may be used to adjust the minimum distance to the front of the model (in meters). It determines when a real-world object will become visible. It is set by default to 0.1m, but you could experiment a bit to see, if any other value works better for your models. If the scene performance (in terms of FPS) is not sufficient, and body-blending is not important, you can disable the UserBodyBlender-component to increase performance.

How to build Windows-Store (UWP-8.1) application

To do it, you need at least v2.10.1 of the K2-asset. To build for ‘Windows store’, first select ‘Windows store’ as platform in ‘Build settings’, and press the ‘Switch platform’-button. Then do as follows:

1. Unzip Assets/ – this will create the Assets/Plugins-Metro-folder.
2. Delete the KinectScripts/SharpZipLib-folder.
3. Optionally, delete all zip.bytes-files in Assets/Resources. You won’t need these libraries in Windows/Store. All Kinect-v2 libraries reside in Plugins-Metro-folder.
4. Select ‘File / Build Settings’ from the menu. Add the scenes you want to build. Select ‘Windows Store’ as platform. Select ‘8.1’ as target SDK. Then click the Build-button. Select an empty folder for the Windows-store project and wait for the build to complete.
5. Go to the build-folder and open the generated solution (.sln-file) with Visual studio.
6. Change the default ARM processor target to ‘x86’. The Kinect sensor is not compatible with ARM processors.
7. Right click ‘References’ in the Project-windows and select ‘Add reference’. Select ‘Extensions’ and then WindowsPreview.Kinect and Microsoft.Kinect.Face libraries. Then press OK.
8. Open the solution’s manifest-file ‘Package.appxmanifest’, go to the ‘Capabilities’-tab and enable ‘Microphone’ and ‘Webcam’ in the left panel. Save the manifest. This is needed to enable the sensor, when the UWP app starts up. Thanks to Yanis Lukes (aka Pendrokar) for providing this info!
9. Build the project. Run it, to test it locally. Don’t forget to turn on Windows developer mode on your machine.

How to work with multiple users

Kinect-v2 can fully track up to 6 users simultaneously. That’s why many of the Kinect-related components, like AvatarController, InteractionManager, model & category-selectors, gesture & interaction listeners, etc. have a setting called ‘Player index’. If set to 0, the respective component will track the 1st detected user. If set to 1, the component will track the 2nd detected user. If set to 2 – the 3rd user, etc. The order of user detection may be specified with the ‘User detection order’-setting of the KinectManager (component of KinectController game object).
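If you need to process all tracked users from one script, instead of one component per player index, you can iterate over them. The sketch below assumes KinectManager provides a `GetUserIdByIndex()`-function mapping a player index to a user ID – check KinectScripts/KinectManager.cs for the exact name:

```csharp
using UnityEngine;

public class MultiUserTracker : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if(manager == null || !manager.IsInitialized())
            return;

        int userCount = manager.GetUsersCount();
        for(int i = 0; i < userCount; i++)
        {
            // assumed: maps player index (0..5) to the respective user ID
            long userId = manager.GetUserIdByIndex(i);

            if(manager.IsJointTracked(userId, (int)KinectInterop.JointType.SpineBase))
            {
                Vector3 userPos = manager.GetJointPosition(userId, (int)KinectInterop.JointType.SpineBase);
                // do something with this user's position
            }
        }
    }
}
```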

How to use the FacetrackingManager

The FacetrackingManager-component may be used for several purposes. First, adding it as component of KinectController will provide more precise neck and head tracking, when there are avatars in the scene (humanoid models utilizing the AvatarController-component). If HD face tracking is needed, you can enable the ‘Get face model data’-setting of FacetrackingManager-component. Keep in mind that using HD face tracking will lower performance and may cause memory leaks, which can make Unity crash after multiple scene restarts. Please use this feature carefully.

In case ‘Get face model data’ is enabled, don’t forget to assign a mesh object (e.g. Quad) to the ‘Face model mesh’-setting. Pay attention to the ‘Textured model mesh’-setting, too. The available options are: ‘None’ – the mesh will not be textured; ‘Color map’ – the mesh will get its texture from the color camera image, i.e. it will reproduce the user’s face; ‘Face rectangle’ – the face mesh will be textured with its material’s Albedo texture, whereas the UV coordinates will match the detected face rectangle.

Finally, you can use the FacetrackingManager public API to get a lot of face-tracking data, like the user’s head position and rotation, animation units, shape units, face model vertices, etc.
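For instance, here is a minimal sketch of reading the tracked user’s head position and rotation via the FacetrackingManager API (method names as found in recent versions of the K2-asset – they may differ slightly in yours):

```csharp
using UnityEngine;

public class FaceDataExample : MonoBehaviour
{
    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;
        FacetrackingManager faceManager = FacetrackingManager.Instance;

        if (kinectManager == null || !kinectManager.IsUserDetected() ||
            faceManager == null || !faceManager.IsFaceTrackingInitialized())
            return;

        long userId = kinectManager.GetPrimaryUserID();
        if (faceManager.IsTrackingFace(userId))
        {
            // head position & rotation of the primary user (not mirrored)
            Vector3 headPos = faceManager.GetHeadPosition(userId, false);
            Quaternion headRot = faceManager.GetHeadRotation(userId, false);
            Debug.Log("Head at " + headPos + ", rotation " + headRot.eulerAngles);
        }
    }
}
```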

How to add background image to the FittingRoom-demo (updated for v2.14 and later)

To replace the color-camera background in the FittingRoom-scene with a background image of your choice, please do as follows:

1. Enable the BackgroundRemovalManager-component of the KinectController-game object in the scene.
2. Make sure the ‘Compute user map’-setting of KinectManager (component of the KinectController, too) is set to ‘Body texture’, and the ‘Compute color map’-setting is enabled.
3. Set the needed background image as texture of the RawImage-component of BackgroundImage1-game object in the scene.
4. Run the scene to check, if it works as expected.

How to move the FPS-avatars of positionally tracked users in VR environment

There are two options for moving first-person avatars in VR-environment (the 1st avatar-demo scene in K2VR-asset):

1. If you use the Kinect’s positional tracking, turn off the Oculus/Vive positional tracking, because their coordinates are different from Kinect’s.
2. If you prefer to use the Oculus/Vive positional tracking:
– enable the ‘External root motion’-setting of the AvatarController-component of the avatar’s game object. This will disable the avatar’s motion with respect to Kinect’s spatial coordinates.
– enable the HeadMover-component of avatar’s game object, and assign the MainCamera as ‘Target transform’, to follow the Oculus/Vive position.

Now try to run the scene. If there are issues with the MainCamera used as positional target, do as follows:
– add an empty game object to the scene. It will be used to follow the Oculus/Vive positions.
– assign the newly created game object to the ‘Target transform’-setting of the HeadMover-component.
– add a script to the newly created game object, and in that script’s Update()-function set programmatically the object’s transform position to be the current Oculus/Vive position.

How to create your own gestures

For gesture recognition there are two options – visual gestures (created with the Visual Gesture Builder, part of Kinect SDK 2.0) and programmatic gestures, coded in KinectGestures.cs or a class that extends it. The programmatic gestures detection consists mainly of tracking the position and movement of specific joints, relative to some other joints. For more info regarding how to create your own programmatic gestures look at this tip below.

The scenes demonstrating the detection of programmatic gestures are located in the KinectDemos/GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about continuous gestures.

And here is a video on creating and checking for visual gestures. Please check KinectDemos/GesturesDemo/VisualGesturesDemo-scene too, to see how to use visual gestures in Unity. A major issue with the visual gestures is that they usually work in the 32-bit builds only.

How to enable or disable the tracking of inferred joints

First, keep in mind that:
1. There is an ‘Ignore inferred joints’-setting of the KinectManager. KinectManager is usually a component of the KinectController-game object in the demo scenes.
2. There is a public API method of KinectManager, called IsJointTracked(). This method is utilized by various scripts & components in the demo scenes.

Here is how it works:
The Kinect SDK tracks the positions of all body joints, together with their respective tracking states. These states can be Tracked, NotTracked or Inferred. When the ‘Ignore inferred joints’-setting is disabled, the IsJointTracked()-method returns true, when the tracking state is Tracked or Inferred, and false when the state is NotTracked. I.e. both tracked and inferred joints are considered valid. When the setting is enabled, the IsJointTracked()-method returns true, when the tracking state is Tracked, and false when the state is NotTracked or Inferred. I.e. only the really tracked joints are considered valid.
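Here is a minimal sketch of how a script typically uses IsJointTracked() before reading a joint position (API names as in the K2-asset’s KinectManager and KinectInterop classes):

```csharp
using UnityEngine;

public class JointTrackingExample : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();
        int jointIndex = (int)KinectInterop.JointType.HandRight;

        // IsJointTracked() respects the 'Ignore inferred joints'-setting
        if (manager.IsJointTracked(userId, jointIndex))
        {
            Vector3 handPos = manager.GetJointPosition(userId, jointIndex);
            Debug.Log("Right hand at " + handPos);
        }
    }
}
```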

How to build exe with the Kinect-v2 plugins provided by Microsoft

In case you’re targeting Kinect-v2 sensor only, and would like to avoid packing all native libraries that come with the K2-asset in the build, as well as unpacking them into the working directory of the executable afterwards, do as follows:

1. Download and unzip the Kinect-v2 Unity Plugins from here.
2. Open your Unity project. Select ‘Assets / Import Package / Custom Package’ from the menu and import only the Plugins-folder from ‘Kinect.2.0.1410.19000.unitypackage’. You can find it in the unzipped package from p.1 above. Please don’t import anything from the ‘Standard Assets’-folder of the unitypackage. All needed standard assets are already present in the K2-asset.
3. If you are using the FacetrackingManager in your scenes, import only the Plugins-folder from ‘Kinect.Face.2.0.1410.19000.unitypackage’ as well. If you are using visual gestures (i.e. VisualGestureManager in your scenes), import only the Plugins-folder from ‘Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage’, too. Again, please don’t import anything from the ‘Standard Assets’-folders of these unitypackages. All needed standard assets are already present in the K2-asset.
4. Delete the zipped native libraries from the K2Examples/Resources-folder. You can see them as .zip-files in the Assets-window, or as .zip.bytes-files in the Windows explorer. You are going to use the Kinect-v2 sensor only, so all these zipped libraries are not needed any more.
5. Delete all dlls in the root-folder of your Unity project. The root-folder is the parent-folder of the Assets-folder of your project, and is not visible in the Editor. You may need to stop the Unity editor first. Delete the NuiDatabase- and vgbtechs-folders in the root-folder, as well. These dlls and folders are no longer needed, because they are part of the project’s Plugins-folder now.
6. Open Unity editor again, load the project and try to run the demo scenes in the project, to make sure they work as expected.
7. If everything is OK, build the executable again. This should work for both x86 and x86_64-architectures, as well as for Windows-Store, SDK 8.1.

How to build Windows-Store (UWP-10) application

To do it, you need at least v2.12.2 of the K2-asset. Then follow these steps:

1. (optional, as of v2.14.1) Delete the KinectScripts/SharpZipLib-folder. It is not needed for UWP. If you leave it, it may cause syntax errors later.
2. Open ‘File / Build Settings’ in Unity editor, switch to ‘Windows store’ platform and select ‘Universal 10’ as SDK. Make sure ‘.Net’ is selected as scripting backend. Optionally enable the ‘Unity C# Project’ and ‘Development build’-settings, if you’d like to edit the Unity scripts in Visual studio later.
3. Press the ‘Build’-button, select output folder and wait for Unity to finish exporting the UWP-Visual studio solution.
4. Close or minimize the Unity editor, then open the exported UWP solution in Visual studio.
5. Select x86 or x64 as target platform in Visual studio.
6. Open ‘Package.appxmanifest’ of the main project, and on the ‘Capabilities’-tab enable ‘Microphone’ & ‘Webcam’. These may be enabled in the Windows-store’s Player settings in Unity, too.
7. If you have enabled the ‘Unity C# Project’-setting in p.2 above, right click on ‘Assembly-CSharp’-project in the Solution explorer, select ‘Properties’ from the context menu, and then select ‘Windows 10 Anniversary Edition (10.0; Build 14393)’ as ‘Target platform’. Otherwise you will get compilation errors.
8. Build and run the solution, on the local or remote machine. It should work now.

Please mind that the FacetrackingManager and SpeechRecognitionManager components, and hence the scenes that use them, will not work with the current version of the K2-UWP interface.

How to run the projector-demo scene (v2.13 and later)

To run the KinectProjectorDemo-scene, you need to calibrate the projector to the Kinect sensor first. To do it, please follow these steps:

1. To do the needed sensor-projector calibration, you first need to download RoomAliveToolkit, and then open and build the ProCamCalibration-project in Microsoft Visual Studio 2015 or later. For your convenience, here is a ready-made build of the needed executables, made with VS-2015.
2. Then open the ProCamCalibration-page and follow carefully the instructions in ‘Tutorial: Calibrating One Camera and One Projector’, from ‘Room setup’ to ‘Inspect the results’.
3. After the ProCamCalibration finishes successfully, copy the generated calibration xml-file to the KinectDemos/ProjectorDemo/Resources-folder of the K2-asset.
4. Open the KinectProjectorDemo-scene in Unity editor, select the MainCamera-game object in Hierarchy, and drag the calibration xml-file generated by ProCamCalibrationTool to the ‘Calibration Xml’-setting of its ProjectorCamera-component. Please also check, if the value of ‘Proj name in config’-setting is the same as the projector name set in the calibration xml-file (usually ‘0’).
5. Set the projector to duplicate the main screen, enable ‘Maximize on play’ in Editor (or build the scene), and run the scene in full-screen mode. Walk in front of the sensor, to check if the projected skeleton overlays correctly the user’s body. You can also try to enable ‘U_Character’ game object in the scene, to see how a virtual 3D-model can overlay the user’s body at runtime.

How to render background and the background-removal image on the scene background

First off, if you want to replace the color-camera background in the FittingRoom-demo scene with the background-removal image, please see and follow these steps.

For all other demo-scenes: You can replace the color-camera image on scene background with the background-removal image, by following these (rather complex) steps:

1. Create an empty game object in the scene, name it BackgroundImage1, and add ‘GUI Texture’-component to it (this will change after the release of Unity 2017.2, because it deprecates GUI-Textures). Set its Transform position to (0.5, 0.5, 0) to center it on the screen. This object will be used to render the scene background, so you can select a suitable picture for the Texture-setting of its GUITexture-component. If you leave its Texture-setting to None, a skybox or solid color will be rendered as scene background.

2. In a similar way, create a BackgroundImage2-game object. This object will be used to render the detected users, so leave the Texture-setting of its GUITexture-component to None (it will be set at runtime by a script), and set the Y-scale of the object to -1. This is needed to flip the rendered texture vertically. The reason: Unity textures are rendered bottom to top, while the Kinect images are top to bottom.

3. Add KinectScripts/BackgroundRemovalManager-script as component to the KinectController-game object in the scene (if it is not there yet). This is needed to provide the background removal functionality to the scene.

4. Add KinectDemos/BackgroundRemovalDemo/Scripts/ForegroundToImage-script as component to the BackgroundImage2-game object. This component will set the foreground texture, created at runtime by the BackgroundRemovalManager-component, as Texture of the GUI-Texture component (see p2 above).

Now the tricky part: two more cameras are needed to display the user image over the scene background – one to render the background picture, a 2nd one to render the user image on top of it, and finally the main camera, to render the 3D objects on top of the background cameras. Cameras in Unity have a setting called ‘Culling Mask’, where you can set the layers rendered by each camera. There are also two more settings, ‘Depth’ and ‘Clear flags’, that may be used to change the cameras’ rendering order.

5. In our case, two extra layers will be needed for the correct rendering of background cameras. Select ‘Add layer’ from the Layer-dropdown in the top-right corner of the Inspector and add 2 layers – ‘BackgroundLayer1’ and ‘BackgroundLayer2’, as shown below. Unfortunately, when Unity exports the K2-package, it doesn’t export the extra layers too. That’s why the extra layers are missing in the demo-scenes.

6. After you have added the extra layers, select the BackgroundImage1-object in Hierarchy and set its layer to ‘BackgroundLayer1’. Then select the BackgroundImage2 and set its layer to ‘BackgroundLayer2’.

7. Create a camera-object in the scene and name it BackgroundCamera1. Set its CullingMask to ‘BackgroundLayer1’ only. Then set its ‘Depth’-setting to (-2) and its ‘Clear flags’-setting to ‘Skybox’ or ‘Solid color’. This means this camera will render first, will clear the output and then render the texture of BackgroundImage1. Don’t forget to disable its AudioListener-component, too. Otherwise, expect endless warnings in the console, regarding multiple audio listeners in the scene.

8. Create a 2nd camera-object and name it BackgroundCamera2. Set its CullingMask to ‘BackgroundLayer2’ only, its ‘Depth’ to (-1) and its ‘Clear flags’ to ‘Depth only’. This means this camera will render 2nd (because -1 > -2), will not clear the previous camera rendering, but instead render the BackgroundImage2 texture on top of it. Again, don’t forget to disable its AudioListener-component.

9. Finally, select the ‘Main Camera’ in the scene. Set its ‘Depth’ to 0 and ‘Clear flags’ to ‘Depth only’. In its ‘Culling mask’ disable ‘BackgroundLayer1’ and ‘BackgroundLayer2’, because they are already rendered by the background cameras. This way the main camera will render all other layers in the scene, on top of the background cameras (depth: 0 > -1 > -2).

If you need a practical example of the above setup, please look at the objects, layers and cameras of the KinectDemos/BackgroundRemovalDemo/KinectBackgroundRemoval1-demo scene.

How to run the demo scenes on non-Windows platforms

Starting with v2.14 of the K2-asset you can run and build many of the demo-scenes on non-Windows platform. In this case you can utilize the KinectDataServer and KinectDataClient components, to transfer the Kinect body and interaction data over the network. The same approach is used by the K2VR-asset. Here is what to do:

1. Add KinectScripts/KinectDataClient.cs as component to KinectController-game object in the client scene. It will replace the direct connection to the sensor with connection to the KinectDataServer-app over the network.
2. On the machine, where the Kinect-sensor is connected, run KinectDemos/KinectDataServer/KinectDataServer-scene or download the ready-built KinectDataServer-app for the same version of Unity editor, as the one running the client scene. The ready-built KinectDataServer-app can be found on this page.
3. Make sure the KinectDataServer and the client scene run in the same subnet. This is needed, if you’d like the client to discover automatically the running instance of KinectDataServer. Otherwise you would need to set manually the ‘Server host’ and ‘Server port’-settings of the KinectDataClient-component.
4. Run the client scene to make sure it connects to the server. If it doesn’t, check the console for error messages.
5. If the connection between the client and server is OK, and the client scene works as expected, build it for the target platform and test it there too.

How to work around the user tracking issue, when the user is turned back

Starting with v2.14 of the K2-asset you can (at least roughly) work around the user tracking issue, when the user is turned back. Here is what to do:

1. Add FacetrackingManager-component to your scene, if there isn’t one there already. The face-tracking is needed for front & back user detection.
2. Enable the ‘Allow turn arounds’-setting of KinectManager. The KinectManager is component of KinectController-game object in all demo scenes.
3. Run the scene to test it. Keep in mind this feature is only a workaround (not a solution) for an issue in Kinect SDK. The issue is that by design Kinect tracks correctly only users who face the sensor. The side tracking is not smooth, as well. And finally, this workaround is experimental and may not work in all cases.

How to get the full scene depth image as texture

If you’d like to get the full scene depth image, instead of user-only depth image, please follow these steps:

1. Open Resources/DepthShader.shader and uncomment the commented-out else-part of the ‘if’ near the end of the shader. Save the shader and go back to the Unity editor.
2. Make sure the ‘Compute user map’-setting of the KinectManager is set to ‘User texture’. KinectManager is component of the KinectController-game object in all demo scenes.
3. Optionally enable the ‘Display user map’-setting of KinectManager, if you want to see the depth texture on screen.
4. You can also get the depth texture by calling ‘KinectManager.Instance.GetUsersLblTex()’ in your scripts, and then use it the way you want.
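The last step could look like this minimal sketch – a script that displays the depth texture on a UI RawImage (the GetUsersLblTex()-method is part of the K2-asset’s KinectManager; the RawImage target is just an assumption for the example):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class DepthTextureExample : MonoBehaviour
{
    [Tooltip("UI element to display the depth texture on.")]
    public RawImage targetImage;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        // get the users' (or full-scene, after the shader change above) depth texture
        Texture2D depthTex = manager.GetUsersLblTex();
        if (targetImage != null && depthTex != null)
            targetImage.texture = depthTex;
    }
}
```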

Some useful hints regarding AvatarController and AvatarScaler

The AvatarController-component moves the joints of the humanoid model it is attached to, according to the user’s movements in front of the Kinect-sensor. The AvatarScaler-component (used mainly in the fitting-room scenes) scales the model to match the user in means of height, arms length, etc. Here are some useful hints regarding these components:

1. If you need the avatar to move around its initial position, make sure the ‘Pos relative to camera’-setting of its AvatarController is set to ‘None’.
2. If ‘Pos relative to camera’ references a camera instead, the avatar’s position with respect to that camera will be the same as the user’s position with respect to the Kinect sensor.
3. If ‘Pos relative to camera’ references a camera and ‘Pos rel overlay color’-setting is enabled too, the 3d position of avatar is adjusted to overlay the user on color camera feed.
4. In this last case, if the model has AvatarScaler component too, you should set the ‘Foreground camera’-setting of AvatarScaler to the same camera. Then scaling calculations will be based on the adjusted (overlayed) joint positions, instead of on the joint positions in space.
5. The ‘Continuous scaling’-setting of AvatarScaler determines whether the model scaling should take place only once when the user is detected (when the setting is disabled), or continuously – on each update (when the setting is enabled).

If you need the avatar to obey physics and gravity, disable the ‘Vertical movement’-setting of the AvatarController-component. Disable the ‘Grounded feet’-setting too, if it is enabled. Then enable the ‘Freeze rotation’-setting of its Rigidbody-component for all axes (X, Y & Z). Make sure the ‘Is Kinematic’-setting is disabled as well, to make the physics control the avatar’s rigid body.

If you want to stop the sensor control of the humanoid model in the scene, you can remove the AvatarController-component of the model. If you want to resume the sensor control of the model, add the AvatarController-component to the humanoid model again. After you remove or add this component, don’t forget to call ‘KinectManager.Instance.refreshAvatarControllers();’ to update the list of avatars the KinectManager keeps track of.
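The remove/add procedure above could be sketched like this (a minimal example; note that Destroy() takes effect at the end of the frame, so in practice you may prefer to call refreshAvatarControllers() in the next frame after removal):

```csharp
using UnityEngine;

public class AvatarControlToggle : MonoBehaviour
{
    [Tooltip("The humanoid model in the scene.")]
    public GameObject avatarModel;

    // stop the sensor control of the model
    public void StopSensorControl()
    {
        AvatarController ac = avatarModel.GetComponent<AvatarController>();
        if (ac != null)
        {
            Destroy(ac);
            KinectManager.Instance.refreshAvatarControllers();
        }
    }

    // resume the sensor control of the model
    public void ResumeSensorControl()
    {
        if (avatarModel.GetComponent<AvatarController>() == null)
        {
            avatarModel.AddComponent<AvatarController>();
            KinectManager.Instance.refreshAvatarControllers();
        }
    }
}
```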

How to setup the K2-package (v2.16 or later) to work with Orbbec Astra sensors (deprecated – use Nuitrack)

1. Go to the Orbbec download page and click on ‘Download Astra Driver and OpenNI 2’.
2. Unzip the downloaded file, go to ‘Sensor Driver’-folder and run SensorDriver_V4.3.0.4.exe to install the Orbbec Astra driver.
3. Connect the Orbbec Astra sensor. If the driver is installed correctly, you should see it in the Device Manager, under ‘Orbbec’.
4. If you have ‘Kinect SDK 2.0’ installed on the same machine, please open KinectScripts/Interfaces/Kinect2Interface.cs and change ‘sensorAlwaysAvailable = true;’ at the beginning of the class to ‘sensorAlwaysAvailable = false;’. More information about this action can be found here.
5. Run one of the avatar-demo scenes to check, if the Orbbec Astra interface works. The sensor should light up and the user(s) should be detected.

How to setup the K2-asset (v2.17 or later) to work with Nuitrack body tracking SDK (updated 11.Jun.2018)

1. To install Nuitrack SDK, follow the instructions on this page, for your respective platform. Nuitrack installation archives can be found here.
2. Connect the sensor, go to [NUITRACK_HOME]/activation_tool-folder and run the Nuitrack-executable. Press the Test-button at the top. You should see the depth stream coming from the sensor. And if you move in front of the sensor, you should see how Nuitrack SDK tracks your body and joints.
3. If you can’t see the depth image and body tracking when the sensor is connected, this would mean Nuitrack SDK is not working properly. Close the Nuitrack-executable, go to [NUITRACK_HOME]/bin/OpenNI2/Drivers and delete (or move somewhere else) the SenDuck-driver and its ini-file. Then go back to step 2 above, and try again.
4. If you have ‘Kinect SDK 2.0‘ installed on the same machine, look at this tip, to see how to turn off the K2-sensor always-available flag.
5. Please mind, you can expect crashes, while using Nuitrack SDK with Unity. The two most common crash-causes are: a) the sensor is not connected when you start the scene; b) you’re using Nuitrack trial version, which will stop after 3 minutes of scene run, and will cause Unity crash as side effect.
6. If you buy a Nuitrack license, don’t forget to import it into Nuitrack’s activation tool. On Windows this is: <nuitrack-home>\activation_tool\Nuitrack.exe. You can use the same app to test the currently connected sensor, as well. If everything works, you are ready to test the Nuitrack interface in Unity.
7. Run one of the avatar-demo scenes to check, if the Nuitrack interface, the sensor depth stream and Nuitrack body tracking works. Run the color-collider demo scene, to check if the color stream works, as well.
8. Please mind: The scenes that rely on color image overlays may or may not work correctly. This may be fixed in future K2-asset updates.

How to control Keijiro’s Skinner-avatars with the Avatar-Controller component

1. Download Keijiro’s Skinner project from its GitHub-repository.
2. Import the K2-asset from Unity asset store into the same project. Delete K2Examples/KinectDemos-folder. The demo scenes are not needed here.
3. Open Assets/Test/Test-scene. Disable Neo-game object in Hierarchy. It is not really needed.
4. Create an empty game object in Hierarchy and name it KinectController, to be consistent with the other demo scenes. Add K2Examples/KinectScripts/KinectManager.cs as component to this object. The KinectManager-component is needed by all other Kinect-related components.
5. Select ‘Neo (Skinner Source)’-game object in Hierarchy. Delete ‘Mocaps’ from the Controller-setting of its Animator-component, to prevent playing the recorded mo-cap animation, when the scene starts.
6. Press ‘Select’ below the object’s name, to find model’s asset in the project. Disable ‘Optimize game objects’-setting on its Rig-tab, and make sure its rig is Humanoid. Otherwise the AvatarController will not find the model’s joints it needs to control.
7. Add K2Examples/KinectScripts/AvatarController-component to ‘Neo (Skinner Source)’-game object in the scene, and enable its ‘Mirrored movement’ and ‘Vertical movement’-settings. Make sure the object’s transform rotation is (0, 180, 0).
8. Optionally, disable the script components of ‘Camera tracker’, ‘Rotation’, ‘Distance’ & ‘Shake’-parent game objects of the main camera in the scene, if you’d like to prevent the camera’s own animated movements.
9. Run the scene and start moving in front of the sensor, to see the effect. Try the other skinner renderers as well. They are children of ‘Skinner Renderers’-game object in the scene.

How to track a ball hitting a wall (hints)

This is a question I was asked quite a lot recently, because there are many possibilities for interactive playgrounds out there. For instance: virtual football or basketball shooting, kids throwing balls at projected animals on the wall, people stepping on virtual floor, etc. Here are some hints how to achieve it:

1. The only thing you need in this case, is to process the raw depth image coming from the sensor. You can get it by calling KinectManager.Instance.GetRawDepthMap(). It is an array of short-integers (DepthW x DepthH in size), representing the distance to the detected objects for each point of the depth image, in mm.
2. You know the distance from the sensor to the wall in meters, hence in mm too. It is a constant, so you can filter out all depth points that are more than 1-2 meters (or less) away from the wall. They are of no interest here, because they are too far from the wall. You need to experiment a bit to find the exact filtering distance.
3. Use some CV algorithm to locate the centers of the blobs of remaining, unfiltered depth points. There may be only one blob in case of one ball, or many blobs in case of many balls, or people walking on the floor.
4. When these blobs (and their respective centers) are at maximum distance, close to the fixed distance to the wall, this would mean the ball(s) have hit the wall.
5. Map the depth coordinates of the blob centers to color camera coordinates, by using KinectManager.Instance.MapDepthPointToColorCoords(), and you will have the screen point of impact, or KinectManager.Instance.MapDepthPointToSpaceCoords(), if you prefer to get the 3D position of the ball at the moment of impact. If you are not sure how to do the sensor to projector calibration, look at this tip.
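The filtering part of the hints above could be sketched like this (a minimal example; GetRawDepthMap(), GetDepthImageWidth/Height() and MapDepthPointToColorCoords() are K2-asset APIs, while the wall distance and offset values are assumptions you need to measure and tune for your setup):

```csharp
using UnityEngine;

public class WallHitDetector : MonoBehaviour
{
    [Tooltip("Measured distance from the sensor to the wall, in mm (assumption).")]
    public ushort wallDistanceMm = 4000;
    [Tooltip("How close to the wall a depth point must be, to count as a hit candidate.")]
    public ushort maxOffsetMm = 300;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        ushort[] depthMap = manager.GetRawDepthMap();
        int depthW = manager.GetDepthImageWidth();
        int depthH = manager.GetDepthImageHeight();

        // keep only the depth points close to the wall; a real implementation
        // would then cluster them into blobs and track the blob centers
        int numCandidates = 0;
        for (int y = 0, i = 0; y < depthH; y++)
        {
            for (int x = 0; x < depthW; x++, i++)
            {
                ushort depth = depthMap[i];
                if (depth != 0 && depth >= (wallDistanceMm - maxOffsetMm))
                {
                    numCandidates++;
                    // e.g. map a blob center to color-camera coords later on:
                    // Vector2 colorPos = manager.MapDepthPointToColorCoords(
                    //     new Vector2(x, y), depth);
                }
            }
        }

        if (numCandidates > 0)
            Debug.Log("Depth points near the wall: " + numCandidates);
    }
}
```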

How to create your own programmatic gestures

The programmatic gestures are implemented in KinectScripts/KinectGestures.cs or a class that extends it. The detection of a gesture consists of checking for gesture-specific poses in the different gesture states. Look below for more information.

1. Open KinectScripts/KinectGestures.cs and add the name of your gesture(s) to the Gestures-enum. As you probably know, the enums in C# cannot be extended. This is the reason you should modify it to add your unique gesture names here. Alternatively, you can use the predefined UserGestureX-names for your gestures, if you prefer not to modify KinectGestures.cs.
2. Find CheckForGesture()-method in the opened class and add case(s) for the new gesture(s), at the end of its internal switch. It will contain the code that detects the gesture.
3. In the gesture-case, add an internal switch that will check for the user pose in the respective state. See the code of other simple gestures (like RaiseLeftHand, RaiseRightHand, SwipeLeft or SwipeRight), if you need an example.

In CheckForGesture() you have access to the jointsPos-array, containing all joint positions, and the jointsTracked-array, showing whether the respective joints are currently tracked or not. The joint positions are in world coordinates, in meters.

The gesture detection code usually consists of checking for specific user poses in the current gesture state. The gesture detection always starts with the initial state 0. At this state you should check if the gesture has started. For instance, if the tracked joint (hand, foot or knee) is positioned properly relative to some other joint (like body center, hip or shoulder). If it is, this means the gesture has started. Save the position of the tracked joint, the current time and increment the state to 1. All this may be done by calling SetGestureJoint().

Then, at the next state (1), you should check if the gesture continues successfully or not. For instance, if the tracked joint has moved as expected relative to the other joint, and within the expected time frame in seconds. If it’s not, cancel the gesture and start over from state 0. This could be done by calling SetGestureCancelled()-method.

Otherwise, if this is the last expected state, consider the gesture completed and call CheckPoseComplete() with last parameter 0 (i.e. don’t wait), to mark the gesture as complete. In case of gesture cancellation or completion, the gesture listeners will be notified.

If the gesture is successful so far, but not yet completed, call SetGestureJoint() again to save the current joint position and timestamp, as well as increment the gesture state again. Then go on with the next gesture-state processing, until the gesture gets completed. It would also be good to set the progress of the gesture in the gestureData-structure, when the gesture consists of more than two states.
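To put the state descriptions above together, here is roughly how a simple two-state pose gesture looks inside the switch of CheckForGesture(). It is modeled on the existing RaiseRightHand-gesture; please check KinectGestures.cs in your copy of the asset for the exact helper signatures and joint-index variables:

```csharp
// inside the switch(gesture) of CheckForGesture() in KinectGestures.cs
case Gestures.RaiseRightHand:
    switch (gestureData.state)
    {
        case 0:  // initial state - check if the gesture has started
            if (jointsTracked[rightHandIndex] && jointsTracked[rightShoulderIndex] &&
                (jointsPos[rightHandIndex].y - jointsPos[rightShoulderIndex].y) > 0.1f)
            {
                // save the joint position and timestamp, and go to state 1
                SetGestureJoint(ref gestureData, timestamp, rightHandIndex,
                    jointsPos[rightHandIndex]);
            }
            break;

        case 1:  // check if the pose is still held, and complete the gesture
            bool isInPose = jointsTracked[rightHandIndex] && jointsTracked[rightShoulderIndex] &&
                (jointsPos[rightHandIndex].y - jointsPos[rightShoulderIndex].y) > 0.1f;

            Vector3 jointPos = jointsPos[gestureData.joint];
            CheckPoseComplete(ref gestureData, timestamp, jointPos, isInPose, 0f);
            break;
    }
    break;
```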

The demo scenes related to checking for programmatic gestures are located in the KinectDemos/GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about the continuous gestures.

More tips regarding listening for discrete and continuous programmatic gestures in Unity scenes can be found above.

What is the file-format used by the KinectRecorderPlayer-component (KinectRecorderDemo)

The KinectRecorderPlayer-component can record or replay body-recording files. These are text files, where each line represents a body-frame at a specific moment in time. You can use it to replay or analyze the body-frame recordings in your own tools. Here is the format of each line. See the sample body-frames below, for reference.

0. time in seconds, since the start of recording, followed by ‘|’. All other field separators are ‘,’.
This value is used by the KinectRecorderPlayer-component for time-sync, when it needs to replay the body recording.

1. body-frame identifier; should be ‘kb’.
2. body-frame timestamp, coming from the Kinect SDK (ignored by the KinectManager)
3. number of max tracked bodies (6).
4. number of max tracked body joints (25).

Then follows the data for each body (6 times):
6. body tracking flag – 1 if the body is tracked, 0 if it is not tracked (the 5 zeros at the end of the lines below are for the 5 missing bodies)

if the body is tracked, then the bodyId and the data for all body joints follow. if it is not tracked – the bodyId and joint data (7-9) are skipped
7. body ID

body joint data (25 times, for all body joints – ordered by JointType (see KinectScripts/KinectInterop.cs)
8. joint tracking state – 0 means not-tracked; 1 – inferred; 2 – tracked

if the joint is inferred or tracked, the joint position data follows. if it is not-tracked, the joint position data (9) is skipped.
9. joint position data – X, Y & Z.
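As an illustration of the format above, here is a minimal sketch of parsing the header part of a body-frame line (plain C#, independent of Unity; the class and method names are just examples):

```csharp
using System;
using System.Globalization;

public static class BodyFrameParser
{
    // parses only the header fields of a body-frame line, as described above
    public static void ParseHeader(string line)
    {
        // the time is separated by '|', all other fields by ','
        string[] timeSplit = line.Split('|');
        float time = float.Parse(timeSplit[0], CultureInfo.InvariantCulture);

        string[] fields = timeSplit[1].Split(',');
        string frameId = fields[0];            // should be "kb"
        long timestamp = long.Parse(fields[1]); // Kinect SDK body-frame timestamp
        int maxBodies = int.Parse(fields[2]);   // 6
        int maxJoints = int.Parse(fields[3]);   // 25

        Console.WriteLine("t={0}s, id={1}, bodies={2}, joints={3}",
            time, frameId, maxBodies, maxJoints);
        // the per-body and per-joint data would follow in fields[4..],
        // skipping joint positions for not-tracked bodies and joints
    }
}
```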

And here are two body-frame samples, for reference:



How to enable user gender and age detection in KinectFittingRoom1-demo scene

You can utilize the cloud face detection in KinectFittingRoom1-demo scene, if you’d like to detect the user’s gender and age, and properly adjust the model categories for him or her. The CloudFaceDetector-component uses Azure Cognitive Services for user-face detection and analysis. These services are free of charge, if you don’t exceed a certain limit (30000 requests per month and 20 per minute). Here is how to do it:

1. Go to this page and press the big blue ‘Get API Key’-button next to ‘Face API’. See this screenshot, if you need more details.
2. You will be asked to sign-in with your Microsoft account and select your Azure subscription. At the end you should land on this page.
3. Press the ‘Create a resource’-button at the upper left part of the dashboard, then select ‘AI + Machine Learning’ and then ‘Face’. You need to give the Face-service a name & resource group and select endpoint (server address) near you. Select the free payment tier, if you don’t plan a bulk of requests. Then create the service. See this screenshot, if you need more details.
4. After the Face-service is created and deployed, select ‘All resources’ at the left side of the dashboard, then the name of the created Face-service from the list of services. Then select ‘Quick start’ from the service menu, if not already selected. Once you are there, press the ‘Keys’-link and copy one of the provided subscription keys. Don’t forget to write down the first part of the endpoint address, as well. These parameters will be needed in the next step. See this screenshot, if you need more details.
5. Go back to Unity, open KinectFittingRoom1-demo scene and select the CloudFaceController-game object in Hierarchy. Type in the (written down in 4. above) first part of the endpoint address to ‘Face service location’, and paste the copied subscription key to ‘Face subscription key’. See this screenshot, if you need more details.
6. Finally, select the KinectController-game object in Hierarchy, find its ‘Category Selector’-component and enable the ‘Detect gender age’-setting. See this screenshot, if you need more details.
7. That’s it. Save the scene and run it to try it out. Now, after the T-pose detection, the user’s face will be analyzed and you will get information regarding the user’s gender and age at the lower left part of the screen.
8. If everything is OK, you can set up the model selectors in the scene to be available only to users of a specific gender and age range, as needed. See the ‘Model gender’, ‘Minimum age’ and ‘Maximum age’-settings of the available ModelSelector-components.
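For reference, under the hood the detection boils down to a plain REST request to the Azure Face API. Here is a hedged Unity C# sketch of such a request – the class and method names are placeholders for illustration, and the CloudFaceDetector-component in the K2-asset already does this for you:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class FaceDetectExample : MonoBehaviour
{
    // Sketch only – 'faceServiceLocation' corresponds to the first part
    // of the endpoint address you wrote down in step 4 above.
    public IEnumerator DetectGenderAge(byte[] jpegBytes, string faceServiceLocation, string subscriptionKey)
    {
        string url = "https://" + faceServiceLocation +
            ".api.cognitive.microsoft.com/face/v1.0/detect?returnFaceAttributes=age,gender";

        UnityWebRequest request = new UnityWebRequest(url, "POST");
        request.uploadHandler = new UploadHandlerRaw(jpegBytes);
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/octet-stream");
        request.SetRequestHeader("Ocp-Apim-Subscription-Key", subscriptionKey);

        yield return request.SendWebRequest();

        if (!request.isNetworkError && !request.isHttpError)
        {
            // a JSON array with one entry per detected face,
            // incl. faceAttributes.age and faceAttributes.gender
            Debug.Log(request.downloadHandler.text);
        }
        else
        {
            Debug.LogError(request.error);
        }
    }
}
```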



1,063 thoughts on “Kinect v2 Tips, Tricks and Examples”

  1. Hi, I tried to load an avatar prefab (with AvatarController and AvatarScaler, or adding them later) into the scene at runtime, but the model wouldn’t change its position according to the user (the scripts didn’t work), although it was in the Hierarchy and set active. However, after the user is lost and then tracked again, it works. So, can I make the scripts work right after adding the model to the scene? Thanks in advance!

    • Hi there,

      If you create the avatars dynamically, please use the code below after instantiating the avatar(s). It is from LocateAvatarsAndGestureListeners.cs-script:

      KinectManager manager = KinectManager.Instance;

      MonoBehaviour[] monoScripts = FindObjectsOfType(typeof(MonoBehaviour)) as MonoBehaviour[];

      foreach(MonoBehaviour monoScript in monoScripts)
      {
          if((monoScript is AvatarController) && monoScript.enabled)
          {
              AvatarController avatar = (AvatarController)monoScript;
              manager.avatarControllers.Add(avatar);  // register the avatar with the KinectManager
          }
      }

  2. Hi, when I try to run it, Unity shows 17 errors like: “The type or namespace name `Kinect’ does not exist in the namespace `Microsoft’. Are you missing an assembly reference?”. I installed the SDK from Microsoft and read all the pages, but it doesn’t work. I use Kinect v2.

    • Just import the K2-asset into new Unity project, open one of the demo scenes and run it. You need to have Kinect SDK 2.0 installed, as well.

  3. Hi Rumen. I have an issue for loading new models and category into Resources folder dynamically. This dynamic means with no build and run again the project to recognize. For instance, the model data can be queried from pre-stored place like database. Do I need every time to perform rigging, avatar instantiate and others for every new model and category update in Resource folder?

  4. Following up on the color-displaying issue: the player settings have been switched to Gamma already.
    Even if I disable the UserBodyBlender-component, the problem still exists. However, as I am running in portrait mode, if I disable the UserBodyBlender-component, the clothing model cannot fit the body.

  5. If I need to build my own 3D model (avatar), how can I add the skeleton to animate my model with Kinect? Thank you.

    • What you need is just a standard humanoid rigging for the model that (as far as I know) could be done with Maya or 3dsMax. Many customers use Mixamo humanoid rigs, as well. But, as I said multiple times, I’m not a model designer and just use ready-made models in my demo scenes.

  6. Could you please provide an example of dynamically updating the models at run time with AssetBundles, as you suggested? We can consider the new models along with the existing ones, for example (Resources\Clothing\0000 and 0001).

    • Such an example would require some research and coding efforts. I wouldn’t do it for free, and also don’t have the needed free time at the moment.

      • Hi Rumen, please let me exchange contacts with @Vistana, because we’re working on the same problem, so that we can help each other.

  7. Hi Rumen, I have just bought your K2 samples from the Unity Asset Store. That helps me a lot.

    Basically, I was trying to load a pants model. It uploaded correctly, but the pants are not showing at run time. I mean, everything was done just as you have said in the description in “Kinect v2 Tips and Tricks”.

    Where am I making the mistake? I don’t know.

    If you could just make a short video about adding a model, it would be very helpful for us.

    My other question is this:

    If we are going to use a high-poly cloth model, is it possible to minimize the distance between the cloth and the body? I mean, can we increase our accuracy to the level shown in the following YouTube video, after increasing the polygons of the cloth models, like in this video? In the following link ———>

    • Hi, as you probably know, I don’t do any videos. This is because I don’t have experience in making videos and because of missing time. Coding, updating the packages and responding to customer requests eats the major portion of my time.

      The procedure of adding new models is exactly as described here. It was tested many times already. Please check again whether you have a separate folder, number and ModelSelector for ‘Pants’. If you still can’t locate the issue, please zip your project (along with ‘Pants’) and send it over to me via WeTransfer. Then I’ll take a look at it, to find out what exactly went wrong. Don’t forget to mention your invoice number, as well. This is to prove your eligibility for support.

      Regarding the distance between the body and the model: There is a setting of the UserBodyBlender-component of MainCamera, called ‘Depth threshold’. You can experiment with it. According to my experience, when the value is too low and the model surface is curved because of some joint orientations, it may happen that user body penetrates the model, and this may look weird. That’s why the girl in the video barely moves, as to me.

  8. Hi Rumen, voice recognition is not working in the K2-asset. I have followed all the steps according to the description you have given in the Tips and Tricks.

  9. HI Rumen thanks for the response I have overcome the issue related to uploading model.

    Is it possible to change the calibration-pose mode of the user in the fitting room? Please let me know where it can be changed. By default it is the T-pose; I have tried but can’t find it. Where can it be changed?

    The other thing is that the hand-interaction system is not working properly after switching the fitting-room scene orientation to portrait mode. Is it possible to change the interaction logic by using the “Microsoft.Kinect.dll” in this program? If it is not possible, then how can we improve the interaction system in the fitting room? I don’t want to operate the fitting room or change the models with gestures. Please let me know how I can do this in a better way.

    Here is the my invoice image link and number 18966678920773

    • The calibration pose is a setting of KinectManager-component in the scene. It is called ‘Player calibration pose’. You can change it to something else, or set it to ‘None’, as in the FR-demo2.

      The interaction system works quite well in the portrait mode, according to my tests. See the tip above on how to properly set up the portrait mode.

      To turn off the interaction system, disable the InteractionManager-component of KinectController-game object, and the InteractionInputModule-component of EventSystem-game object in the scene. To turn off the gesture recognition, disable ‘Swipe to change models’ & ‘Raise hand to change categories’-settings of the CategorySelector-component of KinectController. Feel free to replace it with the interaction system that fits best your needs.
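If you prefer to flip these switches from code instead of the Inspector, a minimal sketch could look like the one below (assuming the default components from the fitting-room demo scenes are present):

```csharp
using UnityEngine;

public class DisableKinectInteraction : MonoBehaviour
{
    void Start()
    {
        // turn off the hand-cursor interaction system
        InteractionManager im = InteractionManager.Instance;
        if (im != null)
            im.enabled = false;

        // turn off the Kinect-driven UI input module of the EventSystem
        InteractionInputModule iim = FindObjectOfType<InteractionInputModule>();
        if (iim != null)
            iim.enabled = false;
    }
}
```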

  10. Hi Rumen,

    I’m having an issue when I’m switching scenes between a scene that tracks a user’s hand movements with a real life camera view and a scene that displays the game view while tracking a user’s full body movements and vice versa. For example, in my menu screen the user can see himself on the screen and can move his hand to a position to move into the first level. Then in the first level the Kinect does not track the user’s movements. This happens in the reverse as well. If the user starts in the level the Kinect picks up his movements and the avatar moves with him, but when he dies and moves onto the end scene, the Kinect does not display him in real life, and instead just shows a white screen, but his movements are still being tracked.

    • Hi, do you follow the instructions in ‘Howto-Use-KinectManager-Across-Multiple-Scenes.pdf’ in the _Readme-folder of the K2-asset? They are illustrated by the demo scenes, located in KinectDemos/MultiSceneDemo-folder, as well.

  11. Hi Rumen, I need to add my own photo to the OverlayDemo > Sprites folder, to use as the source image of the photo button in the Fitting Room demo. I made folders like Superman next to the other folders and files located in the Sprites folder, but the added folder containing the png file could not be assigned to the Image (component) > Source Image. Please guide me on how I can add my own image for this purpose.

    • I’m not sure I understand your issue. You can set the photo-button image directly in the PhotoBtn/Image-component (as you did, I think). What’s the problem with that?
      Keep in mind that before that your image asset needs to have its ‘Texture type’ set as ‘Sprite (2D and UI)’. That’s all.

  12. Hi Rumen, first of all, thanks for all your replies. I have a question about the InteractionInputModule and InteractionManager components. Firstly, how do I add or change the hover effect when the hand cursor moves or hovers over an item, for example over the dressing menu items in KinectFittingRoom1? Secondly, how do I improve the grip-and-release process when selecting an item? Currently the cursor jumps when clicking on an item.

    • Hi, Regarding hand cursor hovering the UI, this should be enabled somewhere in the InteractionInputModule. I think it is related to m_framePressState and module activation, when the hand cursor is moving, but cannot research it deeper at the moment, because of missing time. I have a project deadline in two weeks.

      Regarding the jumping hand cursor, this is caused by the change in hand joint position, when you close and open your hand. You can see what I mean in OverlayDemos/KinectOverlayDemo1-scene. I don’t think this is a major issue though, because the offset is not big. A good workaround would be to make the underlying UI bigger, i.e. tolerate some offset.

    • Well, I’m not an artist or game-creation consultant 🙂 I would suggest to try all demo scenes and think a bit what to use, combine or extend, in order to achieve what you need.

    • Not yet, because only Windows 64-bit native library is currently supplied in the K2-package. But as next (and final) step, I plan to integrate the latest Astra SDK with body-tracking included, and remove my native wrappers and libraries. Then, if Android is supported internally, you will be able to build for it, as well.

  13. Hi Rumen,

    I want to stick the Kinect face onto my animated avatar. Therefore I edited your UpdateFaceModelMesh() method. Within a loop under “if (moveModelMesh)” you assign a position to the face mesh via “faceModelMesh.transform.position = …”

    So, for testing purposes, I added a Quaternion rotation and set pos and rot to (0, 0, 0) and (0, 0, 0, 0).
    It seems that the head geometry has some kind of offset, since the sphere is also set to (0, 0, 0).
    Can you elaborate on where this comes from?

    • Hi, I think the difference between the primitives and rigged model would be in parenting of the face mesh to the head joint of the model. In this case, the face mesh should move along with the skeleton, i.e. not have its own movement. This means you’d need to disable the ‘Move model mesh’-setting of FacetrackingManager, and use its ‘Vertical mesh offset’ and ‘Model mesh scale’-settings to adjust the face mesh to the head of the model. The face mesh model is built around the head joint position. I hope I understood your issue and question correctly.

  14. Hi Rumen, I am working on a project to put a 3D model behind my body when I am in front of the Kinect sensor. Is there any scene/script in your package that I can use to achieve that? I don’t want to overlay the 3D model or use background removal. (Example: if you are in the living room, you will see a 3D model of a ghost passing behind you.) I was wondering if the visualizer could help me, but I can’t put 3D models there.

    • 1. You can try to combine 3d models with background removal. See 5th background-removal demo for an example.
      2. You can try to use the 2nd fitting-room demo, enable the UserBodyBlender-component of the MainCamera, and experiment a bit with negative values of its ‘Depth threshold’-settings to put the model behind the user instead of in front of it.

      I still cannot imagine how your scene would look like with this ghost behind you, but suppose it’s beyond my creativity 🙂

  15. Hi rfilkov!

    Hope you are doing well. In my scenario, I want the user to stand in a T-pose to start the app. After going to the fitting-room menu, I want the user to stand in a T-pose again to activate the model. How is this possible the 2nd time?

    And if I don’t want to show the scroll bar of clothes, what should I do?

    And can you tell me what minimum PC specs you would recommend to run the fitting room smoothly?

    Thanks a lot!

    • Hi, everything is possible, but you would need to do some coding. You should modify the ModelSelector.cs-script to fit your scenario, and make (or combine it with) a gesture listener that listens for the T-pose. OnDressingItemSelected() should only store the selected model index, but the loading of the model (i.e. the invocation of LoadDressingModel()) should be done by the GestureCompleted()-function of the gesture listener, i.e. when the T-pose is detected. Look at the KinectGesturesDemo1-scene and its CubeGestureListener.cs-component, if you need a gesture-listener example.

      The menu, scroll-bars, etc. are pure Unity UI components. You should either put fewer items in the menu than the visible area allows, or get rid of the automatic layout (i.e. ContentSizeFitter) and lay out the menu content in your script. But the question in this case would be, how would the user see or manipulate the items that are out of the visible region.

      Regarding minimum spec: I don’t have any. Various combinations are possible here – with or without background removal, with or without body blending, etc. Experiment a bit and you will find out. Generally, you would need a good graphics card, because there is a lot of graphics and shaders, a good USB-3 port because of the Kinect streams, and a good CPU to bind everything together.
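As a rough illustration of the first suggestion above, a minimal T-pose-gated loader might look like the sketch below. The interface method signatures follow the K2-asset's gesture-listener pattern – compare with CubeGestureListener.cs, as they may differ slightly between versions – and pendingModelIndex as well as the public LoadDressingModel() call are assumptions about how you would wire it to your modified ModelSelector:

```csharp
using UnityEngine;

public class TposeModelLoader : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public ModelSelector modelSelector;  // the (modified) selector in your scene
    public int pendingModelIndex = -1;   // set by your OnDressingItemSelected() replacement

    public void UserDetected(long userId, int userIndex)
    {
        // start tracking the T-pose gesture for the detected user
        KinectManager.Instance.DetectGesture(userId, KinectGestures.Gestures.Tpose);
    }

    public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        if (gesture == KinectGestures.Gestures.Tpose && pendingModelIndex >= 0)
        {
            modelSelector.LoadDressingModel(pendingModelIndex);  // load the stored selection
            pendingModelIndex = -1;
        }
        return true;
    }

    // the remaining interface methods are not needed in this scenario
    public void UserLost(long userId, int userIndex) { }
    public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos) { }
    public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint) { return true; }
}
```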

      • Thanks a lot for your reply Rumen!

        One more thing. Can you please tell me: if I have 3 sizes of each clothing model, like small, medium and large, is it possible to detect the user’s body size and overlay the right one accordingly?

        Thanks! Waiting for reply!

      • I’m not sure how exactly the S,M,L size is determined, but you can always calculate some user body sizes, for instance the distances between the shoulders or from spine base to neck.

        The scaling is done by the AvatarScaler-component attached to each model. It has some ScaleFactor-settings, the most important of them is BodyScaleFactor, because it scales the whole model. In the 1st fitting room demo, the scale factors are settings of the ModelSelector-component. I suppose you could use this BodyScaleFactor-setting for the different sized models. If not – feel free to modify the code of AvatarScaler accordingly.
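As a sketch of the simple measurements mentioned above, using KinectManager's joint queries (the S/M/L thresholds below are arbitrary placeholders you would need to calibrate against your own models):

```csharp
using UnityEngine;

public class BodySizeEstimator : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();

        // joint positions in world coordinates (meters)
        Vector3 shoulderL = manager.GetJointPosition(userId, (int)KinectInterop.JointType.ShoulderLeft);
        Vector3 shoulderR = manager.GetJointPosition(userId, (int)KinectInterop.JointType.ShoulderRight);
        Vector3 spineBase = manager.GetJointPosition(userId, (int)KinectInterop.JointType.SpineBase);
        Vector3 neck      = manager.GetJointPosition(userId, (int)KinectInterop.JointType.Neck);

        float shoulderWidth = Vector3.Distance(shoulderL, shoulderR);
        float torsoHeight   = Vector3.Distance(spineBase, neck);

        // placeholder thresholds – calibrate these against your S/M/L models
        string size = shoulderWidth < 0.34f ? "S" : (shoulderWidth < 0.40f ? "M" : "L");
        Debug.Log("Shoulder width: " + shoulderWidth + "m, torso: " + torsoHeight + "m => " + size);
    }
}
```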

  16. Hi Rumen!

    Can you tell me how to add cloth effect in Fitting Room Clothes. It’s not allowing me to add cloth effect because of the continuous scaling being done in every frame.

  17. Hi Rumen,

    These days I am struggling with the issue of how to play back the exact frames of motion. Thanks to your AvatarController script, I can make the avatar follow my motion in front of the Kinect. What I want to do is play back the avatar’s motion. For example, I make a “kick” motion, so the avatar also does this motion. At the same time, I need to play back the motion in the same game scenario, and also to adjust some joint positions of the avatar, in order to make it similar to the standard posture (an expert’s standard kick motion). Is it possible to do this just with coding (1. play back the motion; 2. adjust some joint positions)?

    Thank you for your help.

  18. Hi Rumen,

    Last time I may not have explained well what I want to do with your asset. Actually, there is a humanoid avatar (not the Cubeman from your Recorder Demo) in my scene. After I attach the “Avatar Controller” script, I can control the avatar in the scene in real time. But I want the avatar to “memorize” the motion and replay the motion I did before, in the same scene. For example, I did a kick motion and the avatar followed the motion in real time. After finishing the kick motion, I want the avatar to replay the kick motion I did before, again in the same scene.
    Is there a way to save the motion of the avatar and replay the motion in the same scene again? I tried to save the joint-position data of the avatar in the Unity coordinate system, but I don’t know how to use the data to drive the avatar again to replay the motion. Could you give me some hints on how to do that? Thank you for your kind help.

    • The KinectRecorderDemo saves or replays all Kinect-detected user motions, not the motion of the Cubeman or avatar in the scene. When you re-play the recorded file, it replaces the current Kinect-tracked body data with the recorded data. So, I would suggest to put the KinectRecorderPlayer-component in your scene, and in your script:

      1. To call KinectRecorderPlayer.Instance.StartRecording() & KinectRecorderPlayer.Instance.StopRecordingOrPlaying() to record some motion. Then
      2. Call KinectRecorderPlayer.Instance.StartPlaying() & KinectRecorderPlayer.Instance.StopRecordingOrPlaying() to replay the same motion in the same scene. Do it, and you will see what I mean. Hope it is what you need, as well.

      See the code of KinectPlayerController-component in KinectRecorderDemo-scene, if you need an example.
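For instance, a minimal wrapper around these calls might look like this (the key bindings are arbitrary; the KinectPlayerController-component in the demo scene does essentially the same with its own keys):

```csharp
using UnityEngine;

public class RecordReplayExample : MonoBehaviour
{
    void Update()
    {
        KinectRecorderPlayer player = KinectRecorderPlayer.Instance;
        if (player == null)
            return;

        if (Input.GetKeyDown(KeyCode.R))
            player.StartRecording();          // record the Kinect-tracked body data to file

        if (Input.GetKeyDown(KeyCode.P))
            player.StartPlaying();            // replay the recorded motion in the same scene

        if (Input.GetKeyDown(KeyCode.S))
            player.StopRecordingOrPlaying();  // stop whichever is currently running
    }
}
```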

  19. Hi Rumen, I have been trying Kinect v2 MS-SDK together Unity and it works very well, but I have a problem when I made the exe.

    Once I have the build and execute it, I see the following message in the console:
    DllNotFoundException: KinectUnityAddin. Obviously I cannot use my application, so can you help me or advise me?

    Thank you for your help.

  20. Hello Rumen,

    Thanks for replying to me. I still haven’t bought it. After your assurance, I will go for it.

    Thanks again.

      • It should work, why not. Please only mind that the K2-asset supports Astra & Astra-Pro sensors only. If you have a PerSee or something else, it will not work.

  21. Hello Rumen,
    In HeightEstimator.cs you define smoothFactor as 5f. Is there any reason for that?
    I want to calculate the exact height of a body, or get nearly accurate body measurements.

    • The smooth factor is used just to smooth the updates of the measured values, so they don’t vary too much. You can set the smoothing factor higher, if you want faster updates or lower (even below 1) to get slower updates. The body height is estimated from the depth image. The highest to the lowest point of the respective user are found, and then calculated in meters. In this regard, make sure your body is completely in the field of view of the sensor – from head to feet.

      • Thanks for your reply.

        One more query:

        I noticed that the mesh generated by the user mesh visualizer only contains the portion of the user that is in front of the camera. This is as expected, as the camera can only capture one side of the user at a time, but I was wondering if there was any feature to generate a full body mesh (as opposed to only the visible side) by multiple captures from different angles?

        One method might be to capture and store meshes from different angles and stitch them together, but is there any feature in the SDK that does this?

      • The K2-asset works with one sensor only, while a full body would require 2 or 3 sensors to capture the same body from different POVs, and then a fusing algorithm to bond these captures together. There is no such feature in the K2-package.

    • “The K2-asset works with one sensor only, while a full body would require 2 or 3 sensors to capture the same body from different POVs, and then a fusing algorithm to bond these captures together. There is no such feature in the K2-package.” —

      Ok understood. If you plan to add such a feature in the future, that would be awesome.

      Other than that, this is a great plugin. Many thanks for all the help!

  22. This is an awesome plugin!
    However, I encountered a problem when I ran the KinectAvatarsDemo4 scene. When the first player goes into the scene and the second player enters afterwards, and then the first player goes out of the scene, the second player’s avatar gets stuck. Please help me! Thanks very much!

  24. Hi Rumen
    I know the Kinect v2 device does not run on Mac or Linux. How about the Astra sensor? Can I develop apps for Mac & Linux with your new updates?

    • The latest update (v2.17) includes Nuitrack support. As far as I’ve read, Nuitrack can work on Linux, as well. I don’t have Linux here, hence no way to test it, but you could try. Please only mind, Nuitrack supports the Orbbec Astra sensor, but not Astra-Pro.

      • Hey Rumen, Astra pro works wonderfully well with Nuitrack. Will check back on the Linux support and let you know if/how it works. Intel D415 also works for that matter. Just one quick tip, Unity crashes during play after a certain time when using the Nuitrack SDK, I think it is because of the trial limit. Would be good to handle that properly as new developers won’t know the reason for it.

        Thanks again for the wonderful SDK! I’m glad you’re updating the plugin wonderfully well, but I feel it would be much better if you could spin off the future Orbbec SDK/Nuitrack as a separate plugin. I don’t see why Kinect libraries have to be included if I’m using the orbbec. Just my thoughts.

      • Hi, and thank you for the feedback.

        Yes, the crashes are due to Nuitrack licensing and unfortunately these ‘Memory access violation errors’ cannot be prevented in C#. If you buy a license the crashes will stop. This is what I got from the Nuitrack support regarding these crashes: Q: “Are the crashes when a trial version is used “by design” so? This is against the positive user experience.”, A: “In the next releases we will fix the crash for a correct completion, and also we will add the possibility to receive an exception if there are problems with the license.”. By the way, my Astra-Pro doesn’t work with Nuitrack SDK, that’s why your message sounds a bit surprising 🙂

        Regarding the support for Orbbec SDK 2.x: I intentionally postponed the interface to Orbbec SDK 2.x, until they manage to provide some means for stream synchronization.

  25. Hi, I am a beginner with Kinect.
    I am amazed by your SDK.
    But I don’t know why my result is different from your demo (FittingRoom2).
    My result picture looks like this.
    I have tried everything I could think of:
    I checked the model rig.
    Added the same scripts to the game object.
    Set all the same parameters on the scripts.
    Changed the camera position, created a new background camera and set the same near and far clipping planes.
    Except that some GUITextures were changed to RawImages.
    But I can’t resolve the bug and I don’t know what else to do.
    Thank you very much!

    • Hi, please correct me, if I’m wrong: Your issue is that when you move ModelMF from FittingRoomDemo2-scene to the hat-scene (with some modifications), it is scaled wrong?

      If this is correct, please copy the SkeletonOverlayer-component from FittingRoomDemo2 to the KinectController-object of your scene, enable it and set its ‘Foreground camera’-setting to reference the MainCamera. Then run the scene to see, if the green balls overlay correctly the user’s body parts. The AvatarScaler-component of ModelMF uses the same overlay positions to scale the model.

      By the way, I just tried to do the same – copied ModelMF-model to the hat-overlay scene, and it worked as expected, even without copying the OverlayController-component, as well. Its purpose is to setup the cameras in the scene, to have the same POV as the Kinect-sensor (i.e. same height and angle).

      • I found the error: currentUserId in AvatarScaler is 0 in my scene.
        Getting the ID by player index can fix that. Thanks!
        And I want to ask another question.
        I tried to get a 1080 x 1920 (vertical) color camera texture, but I changed the RawImage’s size.
        The image is out of shape.
        I thought the resolution could be matched to the size using SetClipRect.
        But the sensor detects the user outside of the rect.
        I will research this. Thank you very much.

  26. Hi,

    First of all, this plugin is great! I’ve been using it with a Kinect v2 and it’s been working well for me so far. I’m now trying to use it with an Intel RealSense D415 with the latest 2.17.0 update, but have been running into some issues getting it working… I followed the directions here, but get the following error when I try running one of the avatar demos:

    System.DllNotFoundException: libnuitrack
    at (wrapper managed-to-native) nuitrack.NativeImporter:nuitrack_InitializeFromConfig (string)
    at nuitrack.NativeNuitrack.Init (System.String config) [0x00000] in :0
    at nuitrack.Nuitrack.Init (System.String config, NuitrackMode mode) [0x00000] in :0
    at NuitrackInterface.NuitrackInit () [0x0002e] in C:\Documents\Unity\RealSenseTest\Assets\K2Examples\KinectScripts\Interfaces\NuitrackInterface.cs:188

    When I put a copy of all the dlls found in the nuitrack\bin folder in, I then get this error:

    nuitrack.TerminateException: Exception of type ‘nuitrack.TerminateException’ was thrown.
    at nuitrack.NativeImporter.throwException (ExceptionType type) [0x00000] in :0
    at nuitrack.NativeNuitrack.Init (System.String config) [0x00000] in :0
    at nuitrack.Nuitrack.Init (System.String config, NuitrackMode mode) [0x00000] in :0
    at NuitrackInterface.NuitrackInit () [0x0002e] in C:\Documents\Unity\RealSenseTest\Assets\K2Examples\KinectScripts\Interfaces\NuitrackInterface.cs:188

    I noticed somebody mentioned above that they’ve had success with the D415, so I’m wondering what I might have missed… I’m using Unity 2017.2.0f3 (64bit) on Windows 10.


    • Hi, ‘libnuitrack.dll’ is in \bin-folder, and this folder should be in your PATH. See p.3 in the link above. I suppose you have not run ‘set_env.bat’, or have opened Unity editor before you did that. Please open the command prompt (Run ‘cmd’), and type ‘path’ in the console. You should see the path to Nuitrack’s bin-folder in there. If you don’t, run ‘set_env.bat’ again, as administrator.

  27. Hi, I have a problem with the fitting-room demo. When I use my own 3D models, I get squares on my 3D model, and at these edges we see the background video. Do you have a solution? I tried changing the ‘Depth threshold’ float, but the artifact stays.

    • Hi, what squares do you mean? Please e-mail me a screenshot of your issue, so I can understand it better, and mention your invoice number, as well.

  28. hi Rumen
    I am a student. I am trying to find out how to remove the arms of the 3D model in AvatarsDemo. What should I do?

    • Why would you remove the avatar’s arms?! Maybe I don’t understand your issue correctly, so could you be a bit more specific?

      • I am so sorry; I am a Chinese student and can’t speak English fluently. My issue is that I need to combine Unity, Kinect and a Magic glove, and I don’t need the data for the avatar’s arms from Kinect. My purpose is to reject the arm data coming from Kinect, and to use the Magic glove’s data to control the avatar’s whole arms instead. I’m so sorry to disturb you. Thank you very much.

      • I understand now. And your English is pretty good, by the way.
        You can use the AvatarControllerClassic instead of AvatarController as component of the humanoid model in the scene. Then, assign only the body joints you need to control by the sensor to the respective component settings, and control the rest with your own script.

      • Oh yeah,
        now I know how to solve my question. Thank you very much. ヽ( ̄▽ ̄)ノ

  29. hi Rumen

    There are some issues occurring for me; tell me if it’s a limitation or if it’s just me.
    Sometimes the model lays on my body properly but doesn’t overlay another body, and sometimes it doesn’t detect other people at all.

    If I turn on the automatic height-angle update on the KinectController, none of the clothes in the Resources folder overlay the body like before.

    • Hm, bodies are not equal in real life. For instance, some clothing may fit OK on you, but not on your wife. And vice versa. That’s why you could provide model-selectors for different categories of people – boys, girls, men, women, young, old, xxl, etc. Use the scaling factor-settings of each model-selector if needed, to adjust the models to the respective user type.

      Regarding user detection: There are several settings of the KinectManager that control the user detection – User detection order (look above for the tip), Min user distance, Max user distance, Max left right distance, Max tracked users. You could use them to tune the user-detection process. When someone is not detected, or is lost unexpectedly, look at the console (or log-file) to see what has happened with the user detection. This may give you some hints as to what exactly went wrong.

      Regarding the sensor height & angle auto-update: It’s not a good practice to use auto-update in a production environment. In this case, the SDK updates the sensor height and tilt angle whenever it detects users. But this height and angle estimation is not always correct and reliable. I suppose you don’t move the sensor too much. You could use AutoUpdateAndShowInfo only to find out the auto-detected values, when you set up the sensor in a new place for the first time. And if the values look correct, put them in as values of the Sensor-height & Sensor-angle settings of the KinectManager. Then set the AutoHeightAngle-setting back to DontUse.

      • Thanks for your reply

        One question: how am I going to detect people’s categories and sizes, like width/height, and assign a specific model accordingly? And what do you recommend – how should we design a 3D model of XXL size?

      • You can allow them select the category from a menu (or sequence of options) at the start. You can also detect their gender and age automatically, with the CloudFaceManager and CloudFaceDetector-components, and ‘Detect gender age’-setting. Each ModelSelector has its own filters for gender and age. Look at the last few of its settings.

        I’m not a model designer, so it would be better not to give expert opinions regarding 3D model designs.

      • Do CloudFaceManager and CloudFaceDetector determine the user’s body size? E.g. an old or young man might have a slim or fat body.

        If not, I’m thinking I could somehow detect the user’s height and width, assign him to a specific model category (like a fat-body or slim-body model) and overlay it on the user. Does that make any sense?

        Sorry for disturbing you again and again!

      • No, the face detector determines the gender and age only. If you need the body measures, look at KinectHeightEstimator-demo scene in KinectDemos/VariousDemos-folder, and its HeightEstimator & BodySlicer-components.

  30. hello Rumen

    I use an “Astra Pro”, but its color camera doesn’t run at 30 FPS.
    It looks like 15 FPS.
    What can I do?
    Thank you for reading my poor English.

      • Thank you Rumen!
        but I didn’t solve the problem.
        In the Orbbec Unity SDK, we can see the webcam working at 30 FPS.
        And now I have another problem:
        if Orbbec detects two or more people and one person leaves the screen, the remaining person loses his bones and joints.
        How can I solve it?
        I’m sorry to bother you 🙁

      • I don’t currently use Orbbec SDK 2.0.x in the K2-asset, because of some issues with it. Instead, I read the depth stream from OpenNI2, the color stream from the webcam, and the body stream from their old, standalone body-tracking SDK. To be honest, I’m considering phasing out the direct support for Astra sensors in future releases.

        Regarding the lost bones and joints: I'm not sure I understand what you mean. Could you please e-mail me the Unity log from when this happened (here is where to find the log), explain a bit more what exactly happens, and tell me your invoice number.

  31. Hello Rumen,

    In your KinectPoseDetector demo, there are some textual instructions (the angle between two corresponding bone vectors), which are very informative but not explicit enough. I want to add some graphical instructions at the same time. What I want to do is to highlight a body part of the two avatars. For example, if the lower arm doesn’t perform well, I want to use the red color to highlight the lower arm of both avatars in the scene. Is there a way to do that? If this is very difficult, is there a way to highlight the specific bone in the skeleton lines (after I enable Display User Map and Display Skeleton Lines of the KinectManager script, there is a small window at the bottom right of the scene), which would also be very helpful to me. Thank you a lot!

    • I would parent some primitives (say capsules) to the joints of the avatar, and adjust them to cover the respective bones only. Then I would replace their material with my own material, transparent by default. Then, in a script, set the color of the respective primitive and make it semi-transparent. It will look like an outline.

      If you prefer to color the skeleton lines in the user-map instead, open KinectScripts/KinectManager.cs, find DrawSkeleton()-method and modify it, as to your needs.

      • Thanks for your helpful reply. I already successfully colored the skeleton lines by modifying the DrawSkeleton() method. However, I really want to color the respective bones in the scene. Following your suggestion, I found out in the Hierarchy panel of Unity that the humanoid avatar is based on a joint hierarchy rather than a bone hierarchy. For example, “joint knee left” is the parent node, and it has a child node called “joint foot left”. Therefore, I don’t know how to “parent some primitives (say capsules) to the joints of the avatar, and adjust them to cover the respective bones only”, so that the primitives (say capsules) could follow the motion of the respective bones. Did I misunderstand your point?
        And by the way, I really appreciate your wonderful work on this asset and your massive help. In my master thesis I want to cite your work – is there a preferred form to cite it, or should I just cite your website? Thanks for your help.

      • I mean something like this:
        (screenshot: a capsule parented to the shoulder joint)
        Regarding citation: Yes, please cite this website or the online documentation website.

      • I exactly followed your suggestion and put the capsule under the left-elbow joint (as parent node), since I want the capsule to outline the avatar’s left elbow. But the problem is that after I position the capsule to cover the respective bone in the Hierarchy, when the animation starts the capsule still remains in the same place; it does not follow the animation of the avatar. Should I change more settings, so that the capsule follows the animation of the respective bone? Many thanks!

      • The capsule in my example was parented to the left shoulder. If it doesn’t move together with the joint, then maybe it is not parented correctly.

        I just made a 2nd experiment – with two capsules this time – parented to the shoulder and to the elbow, with different materials/colors, positioned and scaled appropriately to cover the respective child bones. Then I tried it with Kinect capture, in real time. Here is the result. I suppose you should get something similar:
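        The parenting approach above can be sketched like this (a hypothetical helper, not part of the K2-asset; the joint transform, bone length and exact sizes come from your own avatar and need adjusting):

        ```csharp
        // Hypothetical helper (not part of the K2-asset): parents a capsule to a joint
        // transform, so it covers the child bone and follows the avatar's motion.
        GameObject AddBoneHighlight(Transform joint, float boneLength, Color color)
        {
            GameObject capsule = GameObject.CreatePrimitive(PrimitiveType.Capsule);
            Object.Destroy(capsule.GetComponent<Collider>());  // visual only, no physics

            // worldPositionStays=false keeps the capsule in joint-local space,
            // so it moves together with the joint
            capsule.transform.SetParent(joint, false);
            capsule.transform.localPosition = new Vector3(0f, boneLength * 0.5f, 0f);
            capsule.transform.localScale = new Vector3(0.1f, boneLength * 0.5f, 0.1f);

            // semi-transparent color, so the highlight looks like an outline
            Renderer rend = capsule.GetComponent<Renderer>();
            rend.material.color = new Color(color.r, color.g, color.b, 0.4f);
            return capsule;
        }
        ```

        Note that the capsule’s material needs a transparent shader (e.g. the Standard shader in Transparent rendering mode) for the alpha to take effect, and the local offset and scale above assume the bone extends along the joint’s local Y-axis – adjust them if your rig differs.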


  32. Hi Rumen,

    First I want to say thanks for this amazing K2-asset – lots of useful stuff in there.

    I’m building an experience with VR and the K2 sensor, and everything was working fine until I ran into a dilemma with the AvatarController. Maybe you can point me in the right direction:

    The AvatarController works fine if you use it for characters with human proportions, but in my case I’m using cartoony characters, where the head is a LOT bigger than the body. The problem is that the avatar’s hands clip inside the head when I try to touch it. I need to restrict the hand movement, so the hands don’t clip inside the head.

    I tried the muscle limits, but with no luck. Now my approach is to attach a collider to the head and detect collisions with the hands. The problem is I can’t find a way to limit the hand/arm movement to the surface of the collider without stopping all motion in the joint on a collision.

    Greetings, and any help or pointers on how I can avoid this clipping are kindly appreciated.

    • Please try to enable the ‘Apply muscle limits’-setting of the AvatarController-component. If this doesn’t help, try to use colliders – maybe a sphere for the head and capsules for the arm bones. Don’t forget to add a Rigidbody to the avatar model as well, with the needed constraints.
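      A minimal sketch of that collider setup, where `headBone`, `lowerArmBone` and `avatarRoot` are hypothetical references to your model’s transforms, and all sizes are assumptions to tune per model:

      ```csharp
      // Sketch only – bone references and collider sizes must be adjusted per model.
      SphereCollider headCol = headBone.gameObject.AddComponent<SphereCollider>();
      headCol.radius = 0.5f;                      // match the oversized cartoon head

      CapsuleCollider armCol = lowerArmBone.gameObject.AddComponent<CapsuleCollider>();
      armCol.direction = 1;                       // capsule runs along the bone's Y-axis
      armCol.height = 0.3f;
      armCol.radius = 0.05f;

      // a Rigidbody is needed on the avatar, so Unity processes the collisions
      Rigidbody rb = avatarRoot.AddComponent<Rigidbody>();
      rb.useGravity = false;                      // the AvatarController moves the model
      rb.constraints = RigidbodyConstraints.FreezeRotation;  // keep the avatar upright
      ```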

  33. Hi Rumen,

    I want to show a 2D image of the webcam stream in front of the 3D model. How is that possible?

    Please help!

  34. Hi Rumen,

    While I am writing my thesis, I have some questions about the KinectManager script and the AvatarController script.
    First, are the values of sensorHeight and sensorAngle important? I mean, do I need to replace the default values with the exact real values? For my purpose – capturing people’s whole-body motion – I need to put the Kinect much higher and tilt the sensor. I’m just afraid that if I keep the default values, the accuracy of my algorithm will decrease. In my algorithm I use GetBoneTransform() in the PoseHelper script to calculate the joint angles of the avatar, so I need to make sure the accuracy of the joint angles is acceptable. Also, how do you define whether sensorAngle is positive or negative?
    Second, how do you convert the Kinect coordinate system to the Unity coordinate system, so that the AvatarController script can drive the human avatar’s motion? You just need to tell me the rough idea. Thank you for your kind help.

    • To your questions:
      1. Yes, sensorHeight and sensorAngle are used to transform positions from the Kinect coordinate system to the world coordinate system. If you keep the default values, the avatar may be displayed differently in the scene. Just experiment a bit and you will see what I mean. As far as I remember, the angle is positive when the sensor is tilted up, and negative when it is tilted down. But again, experiment a bit with ‘Auto height angle’ set to ‘Auto update and show info’, and you will find out.
      2. See 1. above.

      • Is the world coordinate system you mentioned above the world coordinate system in Unity? If yes, how do you transfer the real person’s joint positions to the humanoid avatar in the game scene via the AvatarController script? I tried to understand your code by myself, but it is somewhat difficult for me. You can just tell me the rough idea, so that I can understand your code more easily later. Thank you for your help.
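      The rough idea of that transform can be illustrated like this (a simplified sketch, not the asset’s actual code; the numbers are arbitrary examples): a joint position reported by Kinect in sensor/camera space gets the sensor tilt rotated out and the sensor height added, producing a Unity world-space position.

      ```csharp
      // Simplified sketch of the Kinect-to-Unity transform (not the asset's actual code).
      float sensorHeight = 1.0f;   // meters above the floor
      float sensorAngle  = -10f;   // degrees; negative when the sensor is tilted down

      // compensate the sensor tilt, then lift by the sensor height
      Quaternion tiltCompensation = Quaternion.Euler(-sensorAngle, 0f, 0f);
      Vector3 jointCameraSpace = new Vector3(0.2f, -0.3f, 2.5f);  // as reported by the sensor
      Vector3 jointWorldSpace  = tiltCompensation * jointCameraSpace
                               + new Vector3(0f, sensorHeight, 0f);
      ```

      The AvatarController then applies the resulting joint positions and orientations to the Mecanim bones of the humanoid avatar.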

  35. Hello!
    I have a question again.
    In portrait mode, I want my RawImage to be smaller than the screen, and positioned not at the screen center, but with its Y-position a bit below the center.
    Because of the RawImage, the result of GetJointPosColorOverlay is offset.
    I tried to research PortraitBackground, but I don’t understand what fFactorDW means, and why backgroundRect is different from the RawImage rect.
    Can you explain how to calculate, or how to fix, the offset position in the color camera image?
    Thank you very very much.

    • fFactorDW, as far as I remember, is the part of the color-image width that is beyond the screen. The RawImage uses shaderUvRect in the code to estimate the visible part of the image. Try to modify its parameters to match your needs. Experiment a bit, and you will find the correct values.
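      The idea can be sketched like this (my assumption about the computation, not the actual PortraitBackground code):

      ```csharp
      // Assumed sketch: in portrait mode the landscape color image fills the screen
      // height, and a fraction of its width falls off-screen on the left and right.
      float imageAspect  = 1920f / 1080f;                       // Kinect-v2 color frame
      float screenAspect = (float)Screen.width / Screen.height; // < 1 in portrait mode

      float visibleWidth = screenAspect / imageAspect;  // visible fraction of image width
      float fFactorDW    = 1f - visibleWidth;           // fraction that is off-screen

      // a centered crop, expressed as the uv-rect of the RawImage
      Rect shaderUvRect = new Rect(fFactorDW * 0.5f, 0f, visibleWidth, 1f);
      ```

      If your RawImage is additionally scaled down or offset from the screen center, the same offset (converted to uv-space) would need to be applied to the overlay positions returned by GetJointPosColorOverlay().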

  36. Hello Rumen!
    Can I ask why the parameter “depthValue” of the function MapDepthPointToColorCoords is required?
    Thank you.
    Best Regards,

    • Hi YG, this parameter is required by the respective SDK function too, to provide the correct coordinate mapping. You can get the value by calling GetDepthForPixel() with the same depth point coordinates.

    • Hi, do you mean the black dots on the dress?
      By the way, which version of the K2-asset do you use? See Whats-New pdf-file in the K2Examples/_Readme-folder.

      • OK. I’ll check it again today and will try to reproduce your issue. Could you please make a short video of the issue in the FittingRoom1-scene and e-mail it to me? I need to see what exactly you do to get these white points in the model.
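      In code, the combination mentioned above would look roughly like this (GetDepthForPixel() and MapDepthPointToColorCoords() are named in the replies above, but the exact signatures here are my assumption):

      ```csharp
      // Sketch: read the depth at the given depth-image point first, then use it
      // for the depth-to-color coordinate mapping (signatures are assumptions).
      KinectManager manager = KinectManager.Instance;

      Vector2 depthPos = new Vector2(256f, 212f);  // a point in depth-image coordinates
      ushort depthValue = manager.GetDepthForPixel((int)depthPos.x, (int)depthPos.y);

      // the SDK needs the depth value to resolve the parallax between the two cameras
      Vector2 colorPos = manager.MapDepthPointToColorCoords(depthPos, depthValue);
      ```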

  37. Hi Rumen, I have just bought your K2-asset from the Unity Asset Store. It has helped me a lot.

    I am having an issue with my 3D model’s shape. When several of the user’s bone joints are out of tracking, the 3D model gets out of shape. Please check the capture below.

    I tried to use “Apply Muscle Limits” of the AvatarController, but the movement of the 3D model became awkward, so I gave up on activating it.

    Is there any solution to this issue?

    If I can’t solve this issue, there is only a backup plan: track all the user’s bone joints every frame and, if several bone joints are inferred, hide the user’s avatar.
    I don’t think that’s a smart way to solve this…

    Can you help me with this?

    • Hi, please enable the ‘Ignore inferred joints’-setting of the KinectManager-component in the scene, and then try again. What happens when you enable ‘Apply muscle limits’? This should be the cleanest solution, but because of the lacking Unity documentation, samples and support, it’s very difficult to implement.

    • Apart from Kinect v1 & v2, it supports the Orbbec Astra & Astra Pro sensors, as well as Intel’s RealSense D400-series. Or do you mean anything else?

  38. Hi Rumen,

    Do you know if it is possible to export to Android devices using Nuitrack?
    It may be a silly question, but I have never tested it (I actually don’t have a Nuitrack-supported sensor, but I am advising a friend of mine).


    • Hi Marcos, theoretically yes, according to their own documentation. As a matter of fact, I’ve never tried this myself, because 1. I don’t like installing 3rd-party APKs on my devices, and 2. I don’t think a phone has enough processing power to process that much sensor data. Nevertheless, if your friend likes to experiment, I’m sure he will find a full or partial solution.

  39. Hi Rumen! I’ve got a positive result in the Avatar demo scene with the RealSense D415 and Nuitrack.

    But one problem still remains. I’ve upgraded my Nuitrack to the Pro version,
    but the asset still throws the message ‘nuitrack.LicenseNotAcquiredException’.
    Is this a known problem?

    • Have you imported your license in Nuitrack, after its installation? I don’t remember all the details, but as far as I remember, you should run \nuitrack\activation_tool\Nuitrack.exe and import the license you got from Nuitrack there.

      • Yeah, I’ve run Nuitrack.exe and pressed ‘get license available’, and it shows the activated state.
        But the demo scene still crashes after 3 minutes with the ‘nuitrack.LicenseNotAcquiredException’ message.

      • I have activated my Nuitrack license, and my system environment variables are all properly set.

        I don’t know why the demo scene still crashes.

  40. Hi Rumen!
    I’m really sorry to bother you,
    but I have a critical problem.
    Is it possible to use the Orbbec Astra and the RealSense (with Nuitrack) at the same time?

    After I connect the Orbbec Astra, the Nuitrack interface initialization causes a crash.

    How do I fix it?

    • And I have one more question.

      In the case of the UserMeshVisualizer,
      the user texture fits within the background-removal outline with the Orbbec Astra,
      but falls slightly outside it with the RealSense (Nuitrack).

      Do you know why this happens?

      • Yes, I know. This happens because the Nuitrack SDK doesn’t provide coordinate mapping between the depth and color frames. That’s why I use a mapping I estimated myself for the Astra, but I have not solved this yet for the RealSense.

    • Nuitrack should work with both the Astra and the RealSense. You can test them both in Nuitrack’s activation tool, or by running \bin\nuitrack_c11_sample.exe. If they work there, they should work in Unity as well. I have used them both many times so far, and it was OK. Of course, you can’t connect two sensors to the same machine at the same time. Stop the Unity player, disconnect the 1st one, connect the 2nd one, then restart the Unity player.

      • Thank you for the response.
        How about the XTion 2 not working with the asset?

        When I connect my XTion 2 and run Nuitrack.exe, it works fine.
        But when I start a demo scene with the XTion 2, it crashes with a Unity error dialog box.
        I verified that the XTion 2’s webcam image displays correctly, and Nuitrack.exe and nuitrack_c11_sample.exe also work well.

        I guess it’s related to the OpenNI version… but I’m not sure.
        Of course, I registered OpenNI with the Program Files/OpenNI/Bin64/niReg64.exe file.

      • No idea about the XTion 2. I don’t have the sensor and have never used or tested it. Find the Unity log and look for the error that caused the crash (it should be at the end). It may give you some hints.

  41. Hi Rumen, I have a problem with Kinect sensing. Please give me some advice.

    I have been developing a dance-game application for a nightclub.
    I use your fantastic Unity asset to build this application.

    In a dark place, the Kinect’s frame rate drops. Because of this, the 3D model gets jumpy.
    I now have two options to avoid this issue: the first is to light the users with a spotlight;
    the second is to light the area near the Kinect with a point light.
    But I would rather avoid this approach as much as possible, because I would like to keep the room dark.

    Is there any way to solve this problem?

    • Hi, what sensor are you using – Kinect v1 or v2? Kinect v2 should work in low-light conditions, too. If you use v2, what Kinect-related components are you using (only avatars + some effects, or background removal, etc.)? In my experience it’s always good to have a light source over people’s heads, because (as many user experiences have shown) Kinect doesn’t track people in black clothes well without enough light.

  42. Hi Rumen! Thanks again for the great improvements in your useful asset!
    I have a question: I want to use the background-removal demo in vertical orientation, with better quality of the RGB image. Is it possible to replace the RGB image from the Kinect with the RGB image of a better-quality webcam (in vertical orientation)?

    Thanks in advance!


    • Hi Chris, theoretically this is possible, but it will require a great deal of code modifications, plus calibration between the depth sensor and the webcam image. Unless you know how to do the calibration and code modifications, and are 101% sure you really need it, I would not advise you to do it 🙂

  43. Hi Rumen! I asked 3DiVi about the coordinate mapping between the color and depth frames.

    3DiVi said to uncomment the line ‘;Registration=1’ in the [depth] section of %NUITRACK_HOME%/bin/OpenNI2/Drivers/orbbec.ini,
    and, in %NUITRACK_HOME%/bin/OpenNI2/Drivers/SenDuck.ini, to add the line ‘Registration=1’ to the [depth] section and add an [Image] section with the line ‘Resolution=1’.
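    Put together, the edits they suggest would look roughly like this (sketched from their description; the rest of each file stays unchanged):

    ```ini
    ; %NUITRACK_HOME%/bin/OpenNI2/Drivers/orbbec.ini
    [depth]
    Registration=1

    ; %NUITRACK_HOME%/bin/OpenNI2/Drivers/SenDuck.ini
    [depth]
    Registration=1

    [Image]
    Resolution=1
    ```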

    And they said the latest RealSense D415 firmware provides automatic alignment of the image and depth.

    Have you ever tried to modify the .ini files before?
    If you have, was it effective?

    • Hi, not really. I try not to change the ini-configuration files, unless absolutely necessary. What you say about the RealSense alignment sounds interesting (and unusual). I need to do some research on the topic to see what comes out, but this week I’m going to the Unite event in Berlin. Please remind me again (better: e-mail me) next week, so I don’t forget.

Leave a Reply