Kinect v2 Tips, Tricks and Examples

After answering so many different questions about how to use various parts and components of the “Kinect v2 with MS-SDK”-package, I thought it would be easier to share some general tips, tricks and examples. I’m going to add more tips and tricks to this article over time. Feel free to drop by, from time to time, to check out what’s new.

And here is a link to the Online documentation of the K2-asset.

Table of Contents:

What is the purpose of all managers in the KinectScripts-folder
How to use the Kinect v2-Package functionality in your own Unity project
How to use your own model with the AvatarController
How to make the avatar hands twist around the bone
How to utilize Kinect to interact with GUI buttons and components
How to get the depth- or color-camera textures
How to get the position of a body joint
How to make a game object rotate as the user
How to make a game object follow user’s head position and rotation
How to get the face-points’ coordinates
How to mix Kinect-captured movement with Mecanim animation
How to add your models to the FittingRoom-demo
How to set up the sensor height and angle
Are there any events, when a user is detected or lost
How to process discrete gestures like swipes and poses like hand-raises
How to process continuous gestures, like ZoomIn, ZoomOut and Wheel
How to utilize visual (VGB) gestures in the K2-asset
How to change the language or grammar for speech recognition
How to run the fitting-room or overlay demo in portrait mode
How to build an exe from ‘Kinect-v2 with MS-SDK’ project
How to make the Kinect-v2 package work with Kinect-v1
What do the options of ‘Compute user map’-setting mean
How to set up the user detection order
How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS
How to build Windows-Store (UWP-8.1) application
How to work with multiple users
How to use the FacetrackingManager
How to add background image to the FittingRoom-demo
How to move the FPS-avatars of positionally tracked users in VR environment
How to create your own gestures
How to enable or disable the tracking of inferred joints
How to build exe with the Kinect-v2 plugins provided by Microsoft
How to build Windows-Store (UWP-10) application
How to run the projector-demo scene
How to render background and the background-removal image on the scene background
How to run the demo scenes on non-Windows platforms
How to workaround the user tracking issue, when the user is turned back
How to get the full scene depth image as texture
Some useful hints regarding AvatarController and AvatarScaler
How to setup the K2-package to work with Orbbec Astra sensors (deprecated)
How to setup the K2-asset to work with Nuitrack body tracking SDK
How to control Keijiro’s Skinner-avatars with the Avatar-Controller component
How to track a ball hitting a wall (hints)
How to create your own programmatic gestures
What is the file-format used by the KinectRecorderPlayer-component (KinectRecorderDemo)
How to enable user gender and age detection in KinectFittingRoom1-demo scene

What is the purpose of all managers in the KinectScripts-folder:

The managers in the KinectScripts-folder are components. You can utilize them in your projects, depending on the features you need. The KinectManager is the most general component, needed to interact with the sensor and to get basic data from it, like the color and depth streams, and the bodies and joints’ positions in meters, in Kinect space. The purpose of the AvatarController is to transfer the detected joint positions and orientations to a rigged skeleton. The CubemanController is similar, but it works with transforms and lines to represent the joints and bones, in order to make locating the tracking issues easier. The FacetrackingManager deals with the face points and head/neck orientation. It is used internally by the KinectManager (if available at the same time) to get the precise position and orientation of the head and neck. The InteractionManager is used to control the hand cursor and to detect hand grips, releases and clicks. And finally, the SpeechManager is used for recognition of speech commands. Pay also attention to the Samples-folder. It contains several simple examples (some of them cited below) you can learn from, use directly or copy parts of the code into your scripts.

How to use the Kinect v2-Package functionality in your own Unity project:

1. Copy folder ‘KinectScripts’ from the Assets/K2Examples-folder of the package to your project. This folder contains the package scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the Assets/K2Examples-folder of the package to your project. This folder contains all needed libraries and resources. You can skip copying the libraries you don’t plan to use, in order to save space.
3. Copy folder ‘Standard Assets’ from the Assets/K2Examples-folder of the package to your project. It contains the wrapper classes for Kinect-v2 SDK.
4. Wait until Unity detects and compiles the newly copied resources, folders and scripts.
See this tip as well, if you’d like to build your project with the Kinect-v2 plugins provided by Microsoft.

How to use your own model with the AvatarController:

1. (Optional) Make sure your model is in T-pose. This is the zero-pose of Kinect joint orientations.
2. Select the model-asset in Assets-folder. Select the Rig-tab in Inspector window.
3. Set the AnimationType to ‘Humanoid’ and AvatarDefinition – to ‘Create from this model’.
4. Press the Apply-button. Then press the Configure-button to make sure the joints are correctly assigned. After that exit the configuration window.
5. Put the model into the scene.
6. Add the KinectScript/AvatarController-script as component to the model’s game object in the scene.
7. Make sure your model also has an Animator-component, that it is enabled, and that its Avatar-setting is set correctly.
8. Enable or disable (as needed) the MirroredMovement and VerticalMovement-settings of the AvatarController-component. Keep in mind that when mirrored movement is enabled, the model’s transform should have a Y-rotation of 180 degrees.
9. Run the scene to test the avatar model. If needed, tweak some settings of AvatarController and try again.

How to make the avatar hands twist around the bone:

To do it, you need to set ‘Allowed Hand Rotations’-setting of the KinectManager to ‘All’. KinectManager is a component of the MainCamera in the example scenes. This setting has three options: None – turns off all hand rotations, Default – turns on the hand rotations, except the twists around the bone, All – turns on all hand rotations.

How to utilize Kinect to interact with GUI buttons and components:

1. Add the InteractionManager to the main camera or to another persistent object in the scene. It is used to control the hand cursor and to detect hand grips, releases and clicks. Grip means a closed hand (thumb over the other fingers), Release means an opened hand, and a hand Click is generated when the user’s hand doesn’t move (stays still) for about 2 seconds.
2. Enable the ‘Control Mouse Cursor’-setting of the InteractionManager-component. This setting transfers the position and clicks of the hand cursor to the mouse cursor, this way enabling interaction with the GUI buttons, toggles and other components.
3. If you need drag-and-drop functionality for interaction with the GUI, enable the ‘Control Mouse Drag’-setting of the InteractionManager-component. This setting starts mouse dragging, as soon as it detects hand grip and continues the dragging until hand release is detected. If you enable this setting, you can also click on GUI buttons with a hand grip, instead of the usual hand click (i.e. staying in place, over the button, for about 2 seconds).

How to get the depth- or color-camera textures:

First off, make sure that ‘Compute User Map’-setting of the KinectManager-component is enabled, if you need the depth texture, or ‘Compute Color Map’-setting of the KinectManager-component is enabled, if you need the color camera texture. Then write something like this in the Update()-method of your script:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    // requires 'Compute User Map' to be enabled
    Texture2D depthTexture = manager.GetUsersLblTex();
    // requires 'Compute Color Map' to be enabled
    Texture2D colorTexture = manager.GetUsersClrTex();
    // do something with the textures
}

How to get the position of a body joint:

This is demonstrated in the KinectScripts/Samples/GetJointPositionDemo-script. You can add it as a component to a game object in your scene to see it in action. Just select the needed joint and optionally enable saving to a csv-file. Do not forget to add the KinectManager as a component to a game object in your scene. It is usually a component of the MainCamera in the example scenes. Here is the main part of the demo-script that retrieves the position of the selected joint:

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    long userId = manager.GetPrimaryUserID();

    if(manager.IsJointTracked(userId, (int)joint))
    {
        Vector3 jointPos = manager.GetJointPosition(userId, (int)joint);
        // do something with the joint position
    }
}

How to make a game object rotate as the user:

This is similar to the previous example and is demonstrated in KinectScripts/Samples/FollowUserRotation-script. To see it in action, you can create a cube in your scene and add the script as a component to it. Do not forget to add the KinectManager as component to a game object in your scene. It is usually a component of the MainCamera in the example scenes.

How to make a game object follow user’s head position and rotation:

You need the KinectManager and FacetrackingManager added as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene. Then, to get the position of the head and orientation of the neck, you need code like this in your script:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    long userId = manager.GetPrimaryUserID();

    if(manager.IsJointTracked(userId, (int)KinectInterop.JointType.Head))
    {
        Vector3 headPosition = manager.GetJointPosition(userId, (int)KinectInterop.JointType.Head);
        Quaternion neckRotation = manager.GetJointOrientation(userId, (int)KinectInterop.JointType.Neck);
        // do something with the head position and neck orientation
    }
}

How to get the face-points’ coordinates:

You need a reference to the respective FaceFrameResult-object. This is demonstrated in the KinectScripts/Samples/GetFacePointsDemo-script. You can add it as a component to a game object in your scene, to see it in action. To get a face point’s coordinates in your script, you need to invoke its public GetFacePoint()-function. Do not forget to add the KinectManager and FacetrackingManager as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene.

How to mix Kinect-captured movement with Mecanim animation

1. Use the AvatarControllerClassic instead of the AvatarController-component. Assign only those joints that should be animated by the sensor.
2. Set the SmoothFactor-setting of AvatarControllerClassic to 0, to apply the detected bone orientations instantly.
3. Create an avatar-body-mask and apply it to the Mecanim animation layer. In this mask, disable Mecanim animations of the Kinect-animated joints mentioned above. Do not disable the root-joint!
4. Enable the ‘Late Update Avatars’-setting of KinectManager (component of MainCamera in the example scenes).
5. Run the scene to check the setup. When a player gets recognized by the sensor, part of the joints will be animated by the AvatarControllerClassic-component, and the rest – by the Animator-component.

How to add your models to the FittingRoom-demo

1. For each of your fbx-models, import the model and select it in the Assets-view in Unity editor.
2. Select the Rig-tab in Inspector. Set the AnimationType to ‘Humanoid’ and the AvatarDefinition to ‘Create from this model’.
3. Press the Apply-button. Then press the Configure-button to check if all required joints are correctly assigned. The clothing models usually don’t use all joints, which can make the avatar definition invalid. In this case you can manually assign the missing joints (shown in red).
4. Keep in mind: The joint positions in the model must match the structure of the Kinect-joints. You can see them, for instance in the KinectOverlayDemo2. Otherwise the model may not overlay the user’s body properly.
5. Create a sub-folder for your model category (Shirts, Pants, Skirts, etc.) in the FittingRoomDemo/Resources-folder.
6. In the model-category folder, create sub-folders with consecutive numbers (0000, 0001, 0002, etc.) – one for each model imported in step 1.
7. Move your models into these numerical folders, one model per folder, along with the needed materials and textures. Rename the model’s fbx-file to ‘model.fbx’.
8. You can put a preview image for each model in jpeg-format (100 x 143px, 24bpp) in the respective model folder. Then rename it to ‘preview.jpg.bytes’. If you don’t put a preview image, the fitting-room demo will display ‘No preview’ in the model-selection menu.
9. Open the FittingRoomDemo1-scene.
10. Add a ModelSelector-component for each model category to the KinectController game object. Set its ‘Model category’-setting to the name of the sub-folder created in step 5 above. Set the ‘Number of models’-setting to reflect the number of sub-folders created in step 6 above.
11. The other settings of your ModelSelector-component must be similar to the existing ModelSelector in the demo. I.e. ‘Model relative to camera’ must be set to ‘BackgroundCamera’, ‘Foreground camera’ must be set to ‘MainCamera’, ‘Continuous scaling’ – enabled. The scale-factor settings may be set initially to 1 and the ‘Vertical offset’-setting to 0. Later you can adjust them slightly to provide the best model-to-body overlay.
12. Enable the ‘Keep selected model’-setting of the ModelSelector-component, if you want the selected model to continue overlaying the user’s body after the model category changes. This is useful if there are several categories (i.e. ModelSelectors), for instance for shirts, pants, skirts, etc. In this case the selected shirt model will still overlay the user’s body, when the category changes and the user starts selecting pants, for instance.
13. The CategorySelector-component provides gesture control for changing models and categories, and takes care of switching model categories (e.g. for shirts, pants, ties, etc.) for the same user. There is already a CategorySelector for the 1st user (player-index 0) in the scene, so you don’t need to add more.
14. If you plan for multi-user fitting-room, add one CategorySelector-component for each other user. You may also need to add the respective ModelSelector-components for model categories that will be used by these users, too.
15. Run the scene to ensure that your models can be selected in the list and they overlay the user’s body correctly. Experiment a bit if needed, to find the values of scale-factors and vertical-offset settings that provide the best model-to-body overlay.
16. If you want to turn off the cursor interaction in the scene, disable the InteractionManager-component of KinectController-game object. If you want to turn off the gestures (swipes for changing models & hand raises for changing categories), disable the respective settings of the CategorySelector-component. If you want to turn off or change the T-pose calibration, change the ‘Player calibration pose’-setting of KinectManager-component.
17. You can use the FittingRoomDemo2 scene, to utilize or experiment with a single overlay model. Adjust the scale-factor settings of AvatarScaler to fine tune the scale of the whole body, arm- or leg-bones of the model, if needed. Enable the ‘Continuous Scaling’ setting, if you want the model to rescale on each Update.
18. If the clothing/overlay model uses the Standard shader, set its ‘Rendering mode’ to ‘Cutout’. See this comment below for more information.
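To illustrate steps 5–8 above, here is how the Resources-folder structure could look for a hypothetical ‘Shirts’ category with two models (the category name and the number of folders are just examples; only ‘model.fbx’ and ‘preview.jpg.bytes’ are fixed names):

FittingRoomDemo/
    Resources/
        Shirts/
            0000/
                model.fbx
                preview.jpg.bytes
                (materials & textures)
            0001/
                model.fbx
                preview.jpg.bytes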

How to set up the sensor height and angle

There are two very important settings of the KinectManager-component that influence the calculation of users’ and joints’ space coordinates, hence almost all user-related visualizations in the demo scenes. Here is how to set them correctly:

1. Set the ‘Sensor height’-setting to how high above the ground the sensor is, in meters. The default value is 1, i.e. 1.0 meter above the ground, which may not be your case.
2. Set the ‘Sensor angle’-setting to the tilt angle of the sensor, in degrees. Use positive degrees if the sensor is tilted up, negative degrees – if it is tilted down. The default value is 0, which means 0 degrees, i.e. the sensor is not tilted at all.
3. Because it is not so easy to estimate the sensor angle manually, you can use the ‘Auto height angle’-setting to find out this value. Select ‘Show info only’-option and run the demo-scene. Then stand in front of the sensor. The information on screen will show you the rough height and angle-settings, as estimated by the sensor itself. Repeat this 2-3 times and write down the values you see.
4. Finally, set the ‘Sensor height’ and ‘Sensor angle’ to the estimated values you find best. Set the ‘Auto height angle’-setting back to ‘Dont use’.
5. If you find the height and angle values estimated by the sensor good enough, or if your sensor setup is not fixed, you can set the ‘Auto height angle’-setting to ‘Auto update’. It will update the ‘Sensor height’ and ‘Sensor angle’-settings continuously, when there are users in the field of view of the sensor.

Are there any events, when a user is detected or lost

There are no special event handlers for user-detected/user-lost events, but there are two other options you can use:

1. In the Update()-method of your script, invoke the GetUsersCount()-function of KinectManager and compare the returned value to a previously saved value, like this:

// usersSaved is a field of your script, e.g.: private int usersSaved = 0;
KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    int usersNow = manager.GetUsersCount();

    if(usersNow > usersSaved)
    {
        // new user detected
    }
    else if(usersNow < usersSaved)
    {
        // user lost
    }

    usersSaved = usersNow;
}

2. Create a class that implements KinectGestures.GestureListenerInterface and add it as component to a game object in the scene. It has methods UserDetected() and UserLost(), which you can use as user-event handlers. The other methods could be left empty or return the default value (true). See the SimpleGestureListener or GestureListener-classes, if you need an example.

How to process discrete gestures like swipes and poses like hand-raises

Most of the gestures, like SwipeLeft, SwipeRight, Jump, Squat, etc. are discrete. All poses, like RaiseLeftHand, RaiseRightHand, etc. are also considered as discrete gestures. This means these gestures may report progress or not, but all of them get completed or cancelled at the end. Processing these gestures in a gesture-listener script is relatively easy. You need to do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureCompleted() add code to process the discrete gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
    // gesture is detected - process it (for instance, set a flag or execute an action)

3. In the GestureCancelled()-function, add code to process the cancellation of the discrete gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
    // gesture is cancelled - process it (for instance, clear the flag)

If you need code samples, see the SimpleGestureListener.cs or CubeGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is not any more a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as component to the KinectController-game object, if you need gesture or pose detection in the scene.
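Putting the steps above together, a minimal discrete-gesture listener could look like the sketch below. It tracks the SwipeLeft-gesture of the primary user and sets a flag when the gesture completes. The method signatures follow the ones used by SimpleGestureListener; please check the scripts in your version of the asset, as the exact parameters may differ:

using UnityEngine;

public class MySwipeListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    // set when the tracked gesture completes; cleared on cancel or user loss
    public bool swipeLeftDetected = false;

    public void UserDetected(long userId, int userIndex)
    {
        // step 1: register the gestures you want to track for this user
        KinectManager manager = KinectManager.Instance;
        manager.DetectGesture(userId, KinectGestures.Gestures.SwipeLeft);
    }

    public void UserLost(long userId, int userIndex)
    {
        swipeLeftDetected = false;
    }

    public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos)
    {
        // not needed for discrete gestures
    }

    public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        // step 2: the gesture is detected - process it (here: set a flag)
        if(gesture == KinectGestures.Gestures.SwipeLeft)
            swipeLeftDetected = true;

        return true;
    }

    public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint)
    {
        // step 3: the gesture is cancelled - process it (here: clear the flag)
        if(gesture == KinectGestures.Gestures.SwipeLeft)
            swipeLeftDetected = false;

        return true;
    }
}

Add the script as a component to a game object in the scene, and don’t forget the KinectGestures-component on the KinectController-object (see p.4 above).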

How to process continuous gestures, like ZoomIn, ZoomOut and Wheel

Some of the gestures, like ZoomIn, ZoomOut and Wheel, are continuous. This means these gestures never get fully completed, but only report progress greater than 50%, as long as the gesture is detected. To process them in a gesture-listener script, do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureInProgress() add code to process the continuous gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    if(progress > 0.5f)
    {
        // gesture is detected - process it (for instance, set a flag, get zoom factor or angle)
    }
    else
    {
        // gesture is no more detected - process it (for instance, clear the flag)
    }
}

3. In the GestureCancelled()-function, add code to process the end of the continuous gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
    // gesture is cancelled - process it (for instance, clear the flag)

If you need code samples, see the SimpleGestureListener.cs or ModelGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is not any more a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to utilize visual (VGB) gestures in the K2-asset

The visual gestures, created by the Visual Gesture Builder (VGB) can be used in the K2-asset, too. To do it, follow these steps (and see the VisualGestures-game object and its components in the KinectGesturesDemo-scene):

1. Copy the gestures’ database (xxxxx.gbd) to the Resources-folder and rename it to ‘xxxxx.gbd.bytes’.
2. Add the VisualGestureManager-script as a component to a game object in the scene (see VisualGestures-game object).
3. Set the ‘Gesture Database’-setting of VisualGestureManager-component to the name of the gestures’ database, used in step 1 (‘xxxxx.gbd’).
4. Create a visual-gesture-listener to process the gestures, and add it as a component to a game object in the scene (see the SimpleVisualGestureListener-script).
5. In the GestureInProgress()-function of the gesture-listener add code to process the detected continuous gestures and in the GestureCompleted() add code to process the detected discrete gestures.

How to change the language or grammar for speech recognition

1. Make sure you have installed the needed language pack from here.
2. Set the ‘Language code’-setting of SpeechManager-component, as to the grammar language you need to use. The list of language codes can be found here (see ‘LCID Decimal’).
3. Make sure the ‘Grammar file name’-setting of SpeechManager-component corresponds to the name of the grxml.txt-file in Assets/Resources.
4. Open the grxml.txt-grammar file in Assets/Resources and set its ‘xml:lang’-attribute to the language that corresponds to the language code in step 2.
5. Make the other needed modifications in the grammar file and save it.
6. (Optional since v2.7) Delete the grxml-file with the same name in the root-folder of your Unity project (the parent folder of Assets-folder).
7. Run the scene to check, if speech recognition works correctly.

How to run the fitting-room or overlay demo in portrait mode

1. First off, add 9:16 (or 3:4) aspect-ratio to the Game view’s list of resolutions, if it is missing.
2. Select the 9:16 (or 3:4) aspect ratio of Game view, to set the main-camera output in portrait mode.
3. Open the fitting-room or overlay-demo scene and select each of the BackgroundImage(X)-game object(s). If it has a child object called RawImage, select this sub-object instead.
4. Enable the PortraitBackground-component of each of the selected BackgroundImage object(s). When finished, save the scene.
5. Run the scene and test it in portrait mode.

How to build an exe from ‘Kinect-v2 with MS-SDK’ project

By default Unity builds the exe (and the respective xxx_Data-folder) in the root folder of your Unity project. It is recommended to use another, empty folder instead. The reason is that building the exe in the folder of your Unity project may cause conflicts between the native libraries used by the editor and the ones used by the exe, if they have different architectures (for instance the editor is 64-bit, but the exe is 32-bit).

Also, before building the exe, make sure you’ve copied the Assets/Resources-folder from the K2-asset to your Unity project. It contains the needed native libraries and custom shaders. Optionally you can remove the unneeded zip.bytes-files from the Resources-folder. This will save a lot of space in the build. For instance, if you target Kinect-v2 only, you can remove the Kinect-v1 and OpenNi2-related zipped libraries. The exe won’t need them anyway.

How to make the Kinect-v2 package work with Kinect-v1

If you have only Kinect v2 SDK or Kinect v1 SDK installed on your machine, the KinectManager should detect the installed SDK and sensor correctly. But in case you have both Kinect SDK 2.0 and SDK 1.8 installed simultaneously, the KinectManager will prefer the Kinect v2 SDK, and your Kinect v1 will not be detected. The reason for this is that SDK 2.0 can also be used in offline mode, i.e. without a sensor attached. In this case you can emulate the sensor by playing recorded files in Kinect Studio 2.0.

If you want to make the KinectManager utilize the appropriate interface, depending on the currently attached sensor, open KinectScripts/Interfaces/Kinect2Interface.cs and at its start change the value of ‘sensorAlwaysAvailable’ from ‘true’ to ‘false’. After this, close and reopen the Unity editor. Then, on each start, the KinectManager will try to detect which sensor is currently attached to your machine and use the respective sensor interface. This way you could switch the sensors (Kinect v2 or v1), as to your preference, but will not be able to use the offline mode for Kinect v2. To utilize the Kinect v2 offline mode again, you need to switch ‘sensorAlwaysAvailable’ back to true.

What do the options of ‘Compute user map’-setting mean

Here are one-line descriptions of the available options:

  • RawUserDepth means that only the raw depth image values, coming from the sensor will be available, via the GetRawDepthMap()-function for instance;
  • BodyTexture means that GetUsersLblTex()-function will return the white image of the tracked users;
  • UserTexture will cause GetUsersLblTex() to return the tracked users’ histogram image;
  • CutOutTexture, combined with enabled ‘Compute color map‘-setting, means that GetUsersLblTex() will return the cut-out image of the users.

All these options (except RawUserDepth) can be tested instantly, if you enable the ‘Display user map‘-setting of KinectManager-component, too.

How to set up the user detection order

There is a ‘User detection order’-setting of the KinectManager-component. You can use it to determine how the user detection should be done, depending on your requirements. Here are short descriptions of the available options:

  • Appearance is selected by default. It means that the player indices are assigned in order of user appearance. The first detected user gets player index 0, the next one gets index 1, etc. If user 0 gets lost, the remaining users are not reordered. The next newly detected user will take its place;
  • Distance means that player indices are assigned depending on distance of the detected users to the sensor. The closest one will get player index 0, the next closest one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the distances to the remaining users;
  • Left to right means that player indices are assigned depending on the X-position of the detected users. The leftmost one will get player index 0, the next leftmost one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the X-positions of the remaining users;

The user-detection area can be further limited with the ‘Min user distance’, ‘Max user distance’ and ‘Max left right distance’-settings, in meters from the sensor. The maximum number of detected users can be limited by lowering the value of the ‘Max tracked user’-setting.

How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS

If you select the MainCamera in the KinectFittingRoom1-demo scene (in v2.10 or above), you will see a component called UserBodyBlender. It is responsible for mixing the clothing model (overlaying the user) with the real-world objects (including the user’s body parts), depending on the distance to the camera. For instance, if your arms or other real-world objects are in front of the model, you will see them overlaying the model, as expected.

You can enable the component, to turn on the user’s body-blending functionality. The ‘Depth threshold’-setting may be used to adjust the minimum distance to the front of the model (in meters). It determines when a real-world object will become visible. It is set by default to 0.1m, but you could experiment a bit to see if any other value works better for your models. If the scene performance (in terms of FPS) is not sufficient, and body-blending is not important, you can disable the UserBodyBlender-component to increase performance.

How to build Windows-Store (UWP-8.1) application

To do it, you need at least v2.10.1 of the K2-asset. To build for ‘Windows store’, first select ‘Windows store’ as platform in ‘Build settings’, and press the ‘Switch platform’-button. Then do as follows:

1. Unzip the respective zip-archive in the Assets-folder. This will create the Assets/Plugins-Metro-folder.
2. Delete the KinectScripts/SharpZipLib-folder.
3. Optionally, delete all zip.bytes-files in Assets/Resources. You won’t need these libraries in a Windows-Store build. All Kinect-v2 libraries reside in the Plugins-Metro-folder.
4. Select ‘File / Build Settings’ from the menu. Add the scenes you want to build. Select ‘Windows Store’ as platform. Select ‘8.1’ as target SDK. Then click the Build-button. Select an empty folder for the Windows-Store project and wait for the build to complete.
5. Go to the build-folder and open the generated solution (.sln-file) with Visual studio.
6. Change the default ARM processor target to ‘x86’. The Kinect sensor is not compatible with ARM processors.
7. Right-click ‘References’ in the Project-window and select ‘Add reference’. Select ‘Extensions’ and then the WindowsPreview.Kinect and Microsoft.Kinect.Face libraries. Then press OK.
8. Open the solution’s manifest-file ‘Package.appxmanifest’, go to the ‘Capabilities’-tab and enable ‘Microphone’ and ‘Webcam’ in the left panel. Save the manifest. This is needed to enable the sensor, when the UWP app starts up. Thanks to Yanis Lukes (aka Pendrokar) for providing this info!
9. Build the project. Run it, to test it locally. Don’t forget to turn on Windows developer mode on your machine.

How to work with multiple users

Kinect-v2 can fully track up to 6 users simultaneously. That’s why many of the Kinect-related components, like AvatarController, InteractionManager, model & category-selectors, gesture & interaction listeners, etc. have a setting called ‘Player index’. If set to 0, the respective component will track the 1st detected user. If set to 1, the component will track the 2nd detected user. If set to 2 – the 3rd user, etc. The order of user detection may be specified with the ‘User detection order’-setting of the KinectManager (component of the KinectController game object).

How to use the FacetrackingManager

The FacetrackingManager-component may be used for several purposes. First, adding it as a component of KinectController will provide more precise neck and head tracking, when there are avatars in the scene (humanoid models utilizing the AvatarController-component). If HD face tracking is needed, you can enable the ‘Get face model data’-setting of the FacetrackingManager-component. Keep in mind that using HD face tracking will lower performance and may cause memory leaks, which can make Unity crash after multiple scene restarts. Please use this feature carefully.

If ‘Get face model data’ is enabled, don’t forget to assign a mesh object (e.g. Quad) to the ‘Face model mesh’-setting. Also pay attention to the ‘Textured model mesh’-setting. The available options are: ‘None’ – the mesh will not be textured; ‘Color map’ – the mesh will get its texture from the color-camera image, i.e. it will reproduce the user’s face; ‘Face rectangle’ – the face mesh will be textured with its material’s Albedo texture, whereas the UV coordinates will match the detected face rectangle.

Finally, you can use the FacetrackingManager public API to get a lot of face-tracking data, like the user’s head position and rotation, animation units, shape units, face model vertices, etc.
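For example, a minimal sketch of polling the head pose via the FacetrackingManager API could look like this. The method names (IsFaceTrackingInitialized, GetHeadPosition, GetHeadRotation) are assumed from the K2-asset; please verify the exact signatures in KinectScripts/FacetrackingManager.cs:

```csharp
using UnityEngine;

public class HeadPoseLogger : MonoBehaviour
{
    void Update()
    {
        FacetrackingManager faceManager = FacetrackingManager.Instance;

        if (faceManager && faceManager.IsFaceTrackingInitialized())
        {
            // head position & rotation of the primary user (non-mirrored movement)
            Vector3 headPos = faceManager.GetHeadPosition(false);
            Quaternion headRot = faceManager.GetHeadRotation(false);

            Debug.Log("Head pos: " + headPos + ", rot: " + headRot.eulerAngles);
        }
    }
}
```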

How to add background image to the FittingRoom-demo (updated for v2.14 and later)

To replace the color-camera background in the FittingRoom-scene with a background image of your choice, please do as follows:

1. Enable the BackgroundRemovalManager-component of the KinectController-game object in the scene.
2. Make sure the ‘Compute user map’-setting of KinectManager (component of the KinectController, too) is set to ‘Body texture’, and the ‘Compute color map’-setting is enabled.
3. Set the needed background image as texture of the RawImage-component of BackgroundImage1-game object in the scene.
4. Run the scene to check, if it works as expected.

How to move the FPS-avatars of positionally tracked users in VR environment

There are two options for moving first-person avatars in VR-environment (the 1st avatar-demo scene in K2VR-asset):

1. If you use the Kinect’s positional tracking, turn off the Oculus/Vive positional tracking, because their coordinates are different from the Kinect’s.
2. If you prefer to use the Oculus/Vive positional tracking:
– enable the ‘External root motion’-setting of the AvatarController-component of the avatar’s game object. This will stop the avatar from being moved according to the Kinect’s spatial coordinates.
– enable the HeadMover-component of avatar’s game object, and assign the MainCamera as ‘Target transform’, to follow the Oculus/Vive position.

Now try to run the scene. If there are issues with the MainCamera used as positional target, do as follows:
– add an empty game object to the scene. It will be used to follow the Oculus/Vive positions.
– assign the newly created game object to the ‘Target transform’-setting of the HeadMover-component.
– add a script to the newly created game object, and in that script’s Update()-function set programmatically the object’s transform position to the current Oculus/Vive position.
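Such a follower script could be as simple as this sketch (the class and field names are just illustrative – assign the Oculus/Vive-tracked camera transform in the Inspector):

```csharp
using UnityEngine;

public class CameraPosFollower : MonoBehaviour
{
    public Transform vrCamera;  // the positionally tracked camera transform

    void Update()
    {
        if (vrCamera)
        {
            // copy the current Oculus/Vive position to this object,
            // so the HeadMover-component can follow it
            transform.position = vrCamera.position;
        }
    }
}
```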

How to create your own gestures

For gesture recognition there are two options – visual gestures (created with the Visual Gesture Builder, part of Kinect SDK 2.0) and programmatic gestures, coded in KinectGestures.cs or a class that extends it. The programmatic gestures detection consists mainly of tracking the position and movement of specific joints, relative to some other joints. For more info regarding how to create your own programmatic gestures look at this tip below.

The scenes demonstrating the detection of programmatic gestures are located in the KinectDemos/GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about continuous gestures.

And here is a video on creating and checking for visual gestures. Please check KinectDemos/GesturesDemo/VisualGesturesDemo-scene too, to see how to use visual gestures in Unity. A major issue with the visual gestures is that they usually work in the 32-bit builds only.

How to enable or disable the tracking of inferred joints

First, keep in mind that:
1. There is an ‘Ignore inferred joints’-setting of the KinectManager. The KinectManager is usually a component of the KinectController-game object in the demo scenes.
2. There is a public API method of KinectManager, called IsJointTracked(). This method is utilized by various scripts & components in the demo scenes.

Here is how it works:
The Kinect SDK tracks the positions of all body joints, together with their respective tracking states. These states can be Tracked, NotTracked or Inferred. When the ‘Ignore inferred joints’-setting is enabled, the IsJointTracked()-method returns true, when the tracking state is Tracked, and false when the state is NotTracked or Inferred. I.e. only the really tracked joints are considered valid. When the setting is disabled, the IsJointTracked()-method returns true, when the tracking state is Tracked or Inferred, and false when the state is NotTracked. I.e. both tracked and inferred joints are considered valid.
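Here is a short, hedged example of how IsJointTracked() is typically used in the demo scripts. The method names follow the KinectManager API of the K2-asset; please verify the signatures in KinectScripts/KinectManager.cs:

```csharp
using UnityEngine;

public class RightHandChecker : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;

        if (manager && manager.IsUserDetected())
        {
            long userId = manager.GetPrimaryUserID();
            int jointIndex = (int)KinectInterop.JointType.HandRight;

            // consume the joint position only if the joint is considered valid,
            // according to the 'Ignore inferred joints'-setting
            if (manager.IsJointTracked(userId, jointIndex))
            {
                Vector3 handPos = manager.GetJointPosition(userId, jointIndex);
                Debug.Log("Right hand at: " + handPos);
            }
        }
    }
}
```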

How to build exe with the Kinect-v2 plugins provided by Microsoft

In case you’re targeting Kinect-v2 sensor only, and would like to avoid packing all native libraries that come with the K2-asset in the build, as well as unpacking them into the working directory of the executable afterwards, do as follows:

1. Download and unzip the Kinect-v2 Unity Plugins from here.
2. Open your Unity project. Select ‘Assets / Import Package / Custom Package’ from the menu and import only the Plugins-folder from ‘Kinect.2.0.1410.19000.unitypackage’. You can find it in the unzipped package from p.1 above. Please don’t import anything from the ‘Standard Assets’-folder of the unitypackage. All needed standard assets are already present in the K2-asset.
3. If you are using the FacetrackingManager in your scenes, import only the Plugins-folder from ‘Kinect.Face.2.0.1410.19000.unitypackage’ as well. If you are using visual gestures (i.e. VisualGestureManager in your scenes), import only the Plugins-folder from ‘Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage’, too. Again, please don’t import anything from the ‘Standard Assets’-folders of these unitypackages. All needed standard assets are already present in the K2-asset.
4. Delete all zipped libraries from the K2Examples/Resources-folder. You can see them as .zip-files in the Assets-window, or as .zip.bytes-files in the Windows explorer. You are going to use the Kinect-v2 sensor only, so these zipped libraries are not needed any more.
5. Delete all dlls in the root-folder of your Unity project. The root-folder is the parent folder of the Assets-folder of your project and is not visible in the Editor. You may need to stop the Unity editor first. Delete the NuiDatabase- and vgbtechs-folders in the root-folder, as well. These dlls and folders are no longer needed, because they are part of the project’s Plugins-folder now.
6. Open Unity editor again, load the project and try to run the demo scenes in the project, to make sure they work as expected.
7. If everything is OK, build the executable again. This should work for both x86 and x86_64-architectures, as well as for Windows-Store, SDK 8.1.

How to build Windows-Store (UWP-10) application

To do it, you need at least v2.12.2 of the K2-asset. Then follow these steps:

1. (optional, as of v2.14.1) Delete the KinectScripts/SharpZipLib-folder. It is not needed for UWP. If you leave it, it may cause syntax errors later.
2. Open ‘File / Build Settings’ in Unity editor, switch to ‘Windows store’ platform and select ‘Universal 10’ as SDK. Make sure ‘.Net’ is selected as scripting backend. Optionally enable the ‘Unity C# Project’ and ‘Development build’-settings, if you’d like to edit the Unity scripts in Visual studio later.
3. Press the ‘Build’-button, select output folder and wait for Unity to finish exporting the UWP-Visual studio solution.
4. Close or minimize the Unity editor, then open the exported UWP solution in Visual studio.
5. Select x86 or x64 as target platform in Visual studio.
6. Open ‘Package.appxmanifest’ of the main project, and on the ‘Capabilities’-tab enable ‘Microphone’ & ‘Webcam’. These may be enabled in the Windows-store’s Player settings in Unity, too.
7. If you have enabled the ‘Unity C# Project’-setting in p.2 above, right click on ‘Assembly-CSharp’-project in the Solution explorer, select ‘Properties’ from the context menu, and then select ‘Windows 10 Anniversary Edition (10.0; Build 14393)’ as ‘Target platform’. Otherwise you will get compilation errors.
8. Build and run the solution, on the local or remote machine. It should work now.

Please mind that the FacetrackingManager- and SpeechRecognitionManager-components, and hence the scenes that use them, will not work with the current version of the K2-UWP interface.

How to run the projector-demo scene (v2.13 and later)

To run the KinectProjectorDemo-scene, you need to calibrate the projector to the Kinect sensor first. To do it, please follow these steps:

1. For the needed sensor-projector calibration, first download the RoomAliveToolkit, then open and build the ProCamCalibration-project in Microsoft Visual Studio 2015 or later. For your convenience, here is a ready-made build of the needed executables, made with VS-2015.
2. Then open the ProCamCalibration-page and follow carefully the instructions in ‘Tutorial: Calibrating One Camera and One Projector’, from ‘Room setup’ to ‘Inspect the results’.
3. After the ProCamCalibration finishes successfully, copy the generated calibration xml-file to the KinectDemos/ProjectorDemo/Resources-folder of the K2-asset.
4. Open the KinectProjectorDemo-scene in Unity editor, select the MainCamera-game object in Hierarchy, and drag the calibration xml-file generated by ProCamCalibrationTool to the ‘Calibration Xml’-setting of its ProjectorCamera-component. Please also check, if the value of ‘Proj name in config’-setting is the same as the projector name set in the calibration xml-file (usually ‘0’).
5. Set the projector to duplicate the main screen, enable ‘Maximize on play’ in Editor (or build the scene), and run the scene in full-screen mode. Walk in front of the sensor, to check if the projected skeleton overlays correctly the user’s body. You can also try to enable ‘U_Character’ game object in the scene, to see how a virtual 3D-model can overlay the user’s body at runtime.

How to render background and the background-removal image on the scene background

First off, if you want to replace the color-camera background in the FittingRoom-demo scene with the background-removal image, please see and follow these steps.

For all other demo-scenes: You can replace the color-camera image on scene background with the background-removal image, by following these (rather complex) steps:

1. Create an empty game object in the scene, name it BackgroundImage1, and add ‘GUI Texture’-component to it (this will change after the release of Unity 2017.2, because it deprecates GUI-Textures). Set its Transform position to (0.5, 0.5, 0) to center it on the screen. This object will be used to render the scene background, so you can select a suitable picture for the Texture-setting of its GUITexture-component. If you leave its Texture-setting to None, a skybox or solid color will be rendered as scene background.

2. In a similar way, create a BackgroundImage2-game object. This object will be used to render the detected users, so leave the Texture-setting of its GUITexture-component to None (it will be set at runtime by a script), and set the Y-scale of the object to -1. This is needed to flip the rendered texture vertically. The reason: Unity textures are rendered bottom to top, while the Kinect images are top to bottom.

3. Add KinectScripts/BackgroundRemovalManager-script as component to the KinectController-game object in the scene (if it is not there yet). This is needed to provide the background removal functionality to the scene.

4. Add KinectDemos/BackgroundRemovalDemo/Scripts/ForegroundToImage-script as component to the BackgroundImage2-game object. This component will set the foreground texture, created at runtime by the BackgroundRemovalManager-component, as Texture of the GUI-Texture component (see p2 above).

Now the tricky part: two more cameras are needed to display the user image over the scene background – one to render the background picture, a 2nd one to render the user image on top of it, and finally the main camera, to render the 3D objects on top of the background cameras. Cameras in Unity have a setting called ‘Culling Mask’, where you can set the layers rendered by each camera. There are also two more settings – ‘Depth’ and ‘Clear flags’ – that may be used to change the cameras’ rendering order.

5. In our case, two extra layers will be needed for the correct rendering of background cameras. Select ‘Add layer’ from the Layer-dropdown in the top-right corner of the Inspector and add 2 layers – ‘BackgroundLayer1’ and ‘BackgroundLayer2’, as shown below. Unfortunately, when Unity exports the K2-package, it doesn’t export the extra layers too. That’s why the extra layers are missing in the demo-scenes.

6. After you have added the extra layers, select the BackgroundImage1-object in Hierarchy and set its layer to ‘BackgroundLayer1’. Then select the BackgroundImage2 and set its layer to ‘BackgroundLayer2’.

7. Create a camera-object in the scene and name it BackgroundCamera1. Set its CullingMask to ‘BackgroundLayer1’ only. Then set its ‘Depth’-setting to (-2) and its ‘Clear flags’-setting to ‘Skybox’ or ‘Solid color’. This means this camera will render first, will clear the output and then render the texture of BackgroundImage1. Don’t forget to disable its AudioListener-component, too. Otherwise, expect endless warnings in the console, regarding multiple audio listeners in the scene.

8. Create a 2nd camera-object and name it BackgroundCamera2. Set its CullingMask to ‘BackgroundLayer2’ only, its ‘Depth’ to (-1) and its ‘Clear flags’ to ‘Depth only’. This means this camera will render 2nd (because -1 > -2), will not clear the previous camera rendering, but instead render the BackgroundImage2 texture on top of it. Again, don’t forget to disable its AudioListener-component.

9. Finally, select the ‘Main Camera’ in the scene. Set its ‘Depth’ to 0 and ‘Clear flags’ to ‘Depth only’. In its ‘Culling mask’ disable ‘BackgroundLayer1’ and ‘BackgroundLayer2’, because they are already rendered by the background cameras. This way the main camera will render all other layers in the scene, on top of the background cameras (depth: 0 > -1 > -2).

If you need a practical example of the above setup, please look at the objects, layers and cameras of the KinectDemos/BackgroundRemovalDemo/KinectBackgroundRemoval1-demo scene.

How to run the demo scenes on non-Windows platforms

Starting with v2.14 of the K2-asset you can run and build many of the demo-scenes on non-Windows platforms. In this case you can utilize the KinectDataServer and KinectDataClient components to transfer the Kinect body and interaction data over the network. The same approach is used by the K2VR-asset. Here is what to do:

1. Add KinectScripts/KinectDataClient.cs as component to KinectController-game object in the client scene. It will replace the direct connection to the sensor with connection to the KinectDataServer-app over the network.
2. On the machine, where the Kinect-sensor is connected, run KinectDemos/KinectDataServer/KinectDataServer-scene or download the ready-built KinectDataServer-app for the same version of Unity editor, as the one running the client scene. The ready-built KinectDataServer-app can be found on this page.
3. Make sure the KinectDataServer and the client scene run in the same subnet. This is needed, if you’d like the client to discover automatically the running instance of KinectDataServer. Otherwise you would need to set manually the ‘Server host’ and ‘Server port’-settings of the KinectDataClient-component.
4. Run the client scene to make sure it connects to the server. If it doesn’t, check the console for error messages.
5. If the connection between the client and server is OK, and the client scene works as expected, build it for the target platform and test it there too.

How to workaround the user tracking issue, when the user is turned back

Starting with v2.14 of the K2-asset you can (at least roughly) work around the user tracking issue, when the user is turned back. Here is what to do:

1. Add FacetrackingManager-component to your scene, if there isn’t one there already. The face-tracking is needed for front & back user detection.
2. Enable the ‘Allow turn arounds’-setting of KinectManager. The KinectManager is component of KinectController-game object in all demo scenes.
3. Run the scene to test it. Keep in mind this feature is only a workaround (not a solution) for an issue in Kinect SDK. The issue is that by design Kinect tracks correctly only users who face the sensor. The side tracking is not smooth, as well. And finally, this workaround is experimental and may not work in all cases.

How to get the full scene depth image as texture

If you’d like to get the full scene depth image, instead of user-only depth image, please follow these steps:

1. Open Resources/DepthShader.shader and uncomment the commented-out else-part of the ‘if’-statement near the end of the shader. Save the shader and go back to the Unity editor.
2. Make sure the ‘Compute user map’-setting of the KinectManager is set to ‘User texture’. KinectManager is component of the KinectController-game object in all demo scenes.
3. Optionally enable the ‘Display user map’-setting of KinectManager, if you want to see the depth texture on screen.
4. You can also get the depth texture by calling ‘KinectManager.Instance.GetUsersLblTex()’ in your scripts, and then use it the way you want.
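As a sketch, here is how the depth texture could be displayed on a UI RawImage at runtime. The ‘targetImage’-field is an assumption – assign any RawImage of your scene in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class DepthImageViewer : MonoBehaviour
{
    public RawImage targetImage;  // assumed UI RawImage to display the texture

    void Update()
    {
        KinectManager manager = KinectManager.Instance;

        if (manager && manager.IsInitialized() && targetImage)
        {
            // the users' label texture contains the full scene depth image,
            // when the DepthShader is modified as described in step 1
            targetImage.texture = manager.GetUsersLblTex();
        }
    }
}
```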

Some useful hints regarding AvatarController and AvatarScaler

The AvatarController-component moves the joints of the humanoid model it is attached to, according to the user’s movements in front of the Kinect-sensor. The AvatarScaler-component (used mainly in the fitting-room scenes) scales the model to match the user in terms of height, arm length, etc. Here are some useful hints regarding these components:

1. If you need the avatar to move around its initial position, make sure the ‘Pos relative to camera’-setting of its AvatarController is set to ‘None’.
2. If ‘Pos relative to camera’ references a camera instead, the avatar’s position with respect to that camera will be the same as the user’s position with respect to the Kinect sensor.
3. If ‘Pos relative to camera’ references a camera and the ‘Pos rel overlay color’-setting is enabled too, the 3D position of the avatar is adjusted to overlay the user on the color-camera feed.
4. In this last case, if the model has AvatarScaler component too, you should set the ‘Foreground camera’-setting of AvatarScaler to the same camera. Then scaling calculations will be based on the adjusted (overlayed) joint positions, instead of on the joint positions in space.
5. The ‘Continuous scaling’-setting of AvatarScaler determines whether the model scaling should take place only once when the user is detected (when the setting is disabled), or continuously – on each update (when the setting is enabled).

If you need the avatar to obey physics and gravity, disable the ‘Vertical movement’-setting of the AvatarController-component. Disable the ‘Grounded feet’-setting too, if it is enabled. Then enable the ‘Freeze rotation’-setting of its Rigidbody-component for all axes (X, Y & Z). Make sure the ‘Is Kinematic’-setting is disabled as well, to make the physics control the avatar’s rigid body.

If you want to stop the sensor control of the humanoid model in the scene, you can remove the AvatarController-component of the model. If you want to resume the sensor control of the model, add the AvatarController-component to the humanoid model again. After you remove or add this component, don’t forget to call ‘KinectManager.Instance.refreshAvatarControllers();’, to update the list of avatars KinectManager keeps track of.
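A minimal sketch of stopping and resuming the sensor control at runtime could look like this (the class name and the ‘model’-field are illustrative only; mind that Destroy() takes effect at the end of the frame):

```csharp
using UnityEngine;

public class AvatarToggle : MonoBehaviour
{
    public GameObject model;  // your humanoid model's game object

    public void SetSensorControl(bool enableControl)
    {
        AvatarController ac = model.GetComponent<AvatarController>();

        if (!enableControl && ac != null)
        {
            Destroy(ac);  // stop the sensor control of the model
        }
        else if (enableControl && ac == null)
        {
            ac = model.AddComponent<AvatarController>();  // resume the sensor control
        }

        // update the list of avatars the KinectManager keeps track of
        KinectManager.Instance.refreshAvatarControllers();
    }
}
```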

How to setup the K2-package (v2.16 or later) to work with Orbbec Astra sensors (deprecated – use Nuitrack)

1. Go to and click on ‘Download Astra Driver and OpenNI 2’. Here is the shortcut:
2. Unzip the downloaded file, go to ‘Sensor Driver’-folder and run SensorDriver_V4.3.0.4.exe to install the Orbbec Astra driver.
3. Connect the Orbbec Astra sensor. If the driver is installed correctly, you should see it in the Device Manager, under ‘Orbbec’.
4. If you have ‘Kinect SDK 2.0’ installed on the same machine, please open KinectScripts/Interfaces/Kinect2Interface.cs and change ‘sensorAlwaysAvailable = true;’ at the beginning of the class to ‘sensorAlwaysAvailable = false;’. More information about this action can be found here.
5. Run one of the avatar-demo scenes to check, if the Orbbec Astra interface works. The sensor should light up and the user(s) should be detected.

How to setup the K2-asset (v2.17 or later) to work with Nuitrack body tracking SDK (updated 11.Jun.2018)

1. To install Nuitrack SDK, follow the instructions on this page, for your respective platform. Nuitrack installation archives can be found here.
2. Connect the sensor, go to [NUITRACK_HOME]/activation_tool-folder and run the Nuitrack-executable. Press the Test-button at the top. You should see the depth stream coming from the sensor. And if you move in front of the sensor, you should see how Nuitrack SDK tracks your body and joints.
3. If you can’t see the depth image and body tracking when the sensor is connected, this would mean Nuitrack SDK is not working properly. Close the Nuitrack-executable, go to [NUITRACK_HOME]/bin/OpenNI2/Drivers and delete (or move somewhere else) the SenDuck-driver and its ini-file. Then go back to step 2 above and try again.
4. If you have ‘Kinect SDK 2.0‘ installed on the same machine, look at this tip, to see how to turn off the K2-sensor always-available flag.
5. Please mind, you can expect crashes, while using Nuitrack SDK with Unity. The two most common crash-causes are: a) the sensor is not connected when you start the scene; b) you’re using Nuitrack trial version, which will stop after 3 minutes of scene run, and will cause Unity crash as side effect.
6. If you buy a Nuitrack license, don’t forget to import it into Nuitrack’s activation tool. On Windows this is: <nuitrack-home>\activation_tool\Nuitrack.exe. You can use the same app to test the currently connected sensor, as well. If everything works, you are ready to test the Nuitrack interface in Unity.
7. Run one of the avatar-demo scenes to check, if the Nuitrack interface, the sensor depth stream and Nuitrack body tracking works. Run the color-collider demo scene, to check if the color stream works, as well.
8. Please mind: The scenes that rely on color image overlays may or may not work correctly. This may be fixed in future K2-asset updates.

How to control Keijiro’s Skinner-avatars with the Avatar-Controller component

1. Download Keijiro’s Skinner project from its GitHub-repository.
2. Import the K2-asset from Unity asset store into the same project. Delete K2Examples/KinectDemos-folder. The demo scenes are not needed here.
3. Open Assets/Test/Test-scene. Disable Neo-game object in Hierarchy. It is not really needed.
4. Create an empty game object in Hierarchy and name it KinectController, to be consistent with the other demo scenes. Add K2Examples/KinectScripts/KinectManager.cs as component to this object. The KinectManager-component is needed by all other Kinect-related components.
5. Select ‘Neo (Skinner Source)’-game object in Hierarchy. Delete ‘Mocaps’ from the Controller-setting of its Animator-component, to prevent playing the recorded mo-cap animation, when the scene starts.
6. Press ‘Select’ below the object’s name, to find the model’s asset in the project. Disable the ‘Optimize game objects’-setting on its Rig-tab, and make sure its rig is Humanoid. Otherwise the AvatarController will not find the model’s joints it needs to control.
7. Add K2Examples/KinectScripts/AvatarController-component to ‘Neo (Skinner Source)’-game object in the scene, and enable its ‘Mirrored movement’ and ‘Vertical movement’-settings. Make sure the object’s transform rotation is (0, 180, 0).
8. Optionally, disable the script components of ‘Camera tracker’, ‘Rotation’, ‘Distance’ & ‘Shake’-parent game objects of the main camera in the scene, if you’d like to prevent the camera’s own animated movements.
9. Run the scene and start moving in front of the sensor, to see the effect. Try the other skinner renderers as well. They are children of ‘Skinner Renderers’-game object in the scene.

How to track a ball hitting a wall (hints)

This is a question I was asked quite a lot recently, because there are many possibilities for interactive playgrounds out there. For instance: virtual football or basketball shooting, kids throwing balls at projected animals on the wall, people stepping on a virtual floor, etc. Here are some hints on how to achieve it:

1. The only thing you need in this case, is to process the raw depth image coming from the sensor. You can get it by calling KinectManager.Instance.GetRawDepthMap(). It is an array of short-integers (DepthW x DepthH in size), representing the distance to the detected objects for each point of the depth image, in mm.
2. You know the distance from the sensor to the wall in meters, hence in mm too. It is a constant, so you can filter out all depth points that are more than 1-2 meters (or less, as needed) away from the wall. They are of no interest here, because they are too far from the wall. You need to experiment a bit to find the exact filtering distance.
3. Use some CV algorithm to locate the centers of the blobs of remaining, unfiltered depth points. There may be only one blob in case of one ball, or many blobs in case of many balls, or people walking on the floor.
4. When these blobs (and their respective centers) are at maximum distance, close to the fixed distance to the wall, this would mean the ball(s) have hit the wall.
5. Map the depth coordinates of the blob centers to color camera coordinates, by using KinectManager.Instance.MapDepthPointToColorCoords(), and you will have the screen point of impact, or KinectManager.Instance.MapDepthPointToSpaceCoords(), if you prefer to get the 3D position of the ball at the moment of impact. If you are not sure how to do the sensor to projector calibration, look at this tip.
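Hints 1 and 2 above could be sketched like this. The threshold values and the class name are illustrative only, and GetRawDepthMap() is assumed to return one depth value in mm per depth point, as described above:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class WallHitFilter : MonoBehaviour
{
    public int wallDistanceMm = 3000;  // measured sensor-to-wall distance, in mm (illustrative)
    public int maxOffWallMm = 300;     // keep only points within 0.3m of the wall (illustrative)

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        ushort[] rawDepth = manager.GetRawDepthMap();
        var candidatePoints = new List<int>();

        for (int i = 0; i < rawDepth.Length; i++)
        {
            int depthMm = rawDepth[i];

            // 0 means no depth data; keep only the points close to the wall
            if (depthMm > 0 && depthMm < wallDistanceMm &&
                (wallDistanceMm - depthMm) <= maxOffWallMm)
            {
                // feed these depth-image indices to your blob-detection algorithm
                candidatePoints.Add(i);
            }
        }
    }
}
```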

How to create your own programmatic gestures

The programmatic gestures are implemented in KinectScripts/KinectGestures.cs, or in a class that extends it. The detection of a gesture consists of checking for gesture-specific poses in the different gesture states. Look below for more information.

1. Open KinectScripts/KinectGestures.cs and add the name of your gesture(s) to the Gestures-enum. As you probably know, the enums in C# cannot be extended. This is the reason you should modify it to add your unique gesture names here. Alternatively, you can use the predefined UserGestureX-names for your gestures, if you prefer not to modify KinectGestures.cs.
2. Find CheckForGesture()-method in the opened class and add case(s) for the new gesture(s), at the end of its internal switch. It will contain the code that detects the gesture.
3. In the gesture-case, add an internal switch that will check for user pose in the respective state. See the code of other simple gesture (like RaiseLeftHand, RaiseRightHand, SwipeLeft or SwipeRight), if you need example.

In CheckForGesture() you have access to the jointsPos-array, containing all joint positions, and the jointsTracked-array, showing whether the respective joints are currently tracked or not. The joint positions are in world coordinates, in meters.

The gesture detection code usually consists of checking for specific user poses in the current gesture state. The gesture detection always starts with the initial state 0. At this state you should check if the gesture has started. For instance, if the tracked joint (hand, foot or knee) is positioned properly relative to some other joint (like body center, hip or shoulder). If it is, this means the gesture has started. Save the position of the tracked joint, the current time and increment the state to 1. All this may be done by calling SetGestureJoint().

Then, at the next state (1), you should check if the gesture continues successfully or not. For instance, if the tracked joint has moved as expected relative to the other joint, and within the expected time frame in seconds. If it’s not, cancel the gesture and start over from state 0. This could be done by calling SetGestureCancelled()-method.

Otherwise, if this is the last expected state, consider the gesture completed and call CheckPoseComplete() with last parameter 0 (i.e. don’t wait), to mark the gesture as complete. In case of gesture cancellation or completion, the gesture listeners will get notified.

If the gesture is successful so far, but not yet completed, call SetGestureJoint() again to save the current joint position and timestamp, as well as to increment the gesture state. Then go on with the next gesture-state processing, until the gesture gets completed. It would also be good to set the progress of the gesture in the gestureData-structure, when the gesture consists of more than two states.
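To illustrate the flow described above, here is a shortened, hypothetical gesture-case, modeled after the simple gestures in KinectGestures.cs. The joint indices (rightHandIndex, rightShoulderIndex) are assumed to be computed earlier in the method, and the exact signatures of SetGestureJoint() and CheckPoseComplete() should be verified in the source:

```csharp
// fragment of the switch inside CheckForGesture() - not a complete file
case Gestures.RaiseRightHand:
    switch (gestureData.state)
    {
        case 0:  // initial state - check if the right hand got above the shoulder
            if (jointsTracked[rightHandIndex] && jointsTracked[rightShoulderIndex] &&
               (jointsPos[rightHandIndex].y - jointsPos[rightShoulderIndex].y) > 0.1f)
            {
                // save the joint position & time, and switch to state 1
                SetGestureJoint(ref gestureData, timestamp, rightHandIndex, jointsPos[rightHandIndex]);
            }
            break;

        case 1:  // check if the hand is still raised, and complete the gesture
            bool isInPose = jointsTracked[rightHandIndex] && jointsTracked[rightShoulderIndex] &&
               (jointsPos[rightHandIndex].y - jointsPos[rightShoulderIndex].y) > 0.1f;

            // last parameter 0 means: don't wait, complete or cancel the gesture now
            Vector3 jointPos = jointsPos[gestureData.joint];
            CheckPoseComplete(ref gestureData, timestamp, jointPos, isInPose, 0f);
            break;
    }
    break;
```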

The demo scenes related to checking for programmatic gestures are located in the KinectDemos/GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about the continuous gestures.

More tips regarding listening for discrete and continuous programmatic gestures in Unity scenes can be found above.

What is the file-format used by the KinectRecorderPlayer-component (KinectRecorderDemo)

The KinectRecorderPlayer-component can record or replay body-recording files. These are text files, where each line represents a body-frame at a specific moment in time. You can use this format to replay or analyze the body-frame recordings in your own tools. Here is the format of each line:

0. Time in seconds since the start of recording, followed by ‘|’. All other field separators are ‘,’.
This value is used by the KinectRecorderPlayer-component for time-sync, when it needs to replay the body recording.

1. Body-frame identifier. It should be ‘kb’.
2. Body-frame timestamp, coming from the Kinect SDK (ignored by the KinectManager).
3. Number of max tracked bodies (6).
4. Number of max tracked body joints (25).

Then follows the data for each body (6 times):
6. Body tracking flag – 1 if the body is tracked, 0 if it is not tracked (an untracked body is represented by this single zero-flag).

If the body is tracked, the body ID and the data for all body joints follow. If it is not tracked, the body ID and joint data (7-9) are skipped.
7. Body ID.

Then follows the body-joint data (25 times, for all body joints, ordered by JointType – see KinectScripts/KinectInterop.cs):
8. Joint tracking state – 0 means not-tracked; 1 – inferred; 2 – tracked.

If the joint is inferred or tracked, the joint position data follows. If it is not-tracked, the joint position data (9) is skipped.
9. Joint position data – X, Y & Z.
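Based on the format above, a hedged sketch of parsing the header fields of one body-frame line could look like this (the class and method names are illustrative only):

```csharp
using System;

public static class BodyFrameLineParser
{
    // parses the header fields of a single body-frame line
    public static void ParseHeader(string line)
    {
        // field 0: time in seconds since the start of recording, terminated by '|'
        int delimPos = line.IndexOf('|');
        float timeSecs = float.Parse(line.Substring(0, delimPos));

        // all remaining fields are separated by ','
        string[] fields = line.Substring(delimPos + 1).Split(',');

        string frameId = fields[0];              // should be "kb"
        long sdkTimestamp = long.Parse(fields[1]);  // ignored by the KinectManager
        int maxBodies = int.Parse(fields[2]);    // 6
        int maxJoints = int.Parse(fields[3]);    // 25

        Console.WriteLine("t={0}s, id={1}, bodies={2}, joints={3}",
            timeSecs, frameId, maxBodies, maxJoints);
    }
}
```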


How to enable user gender and age detection in KinectFittingRoom1-demo scene

You can utilize the cloud face detection in KinectFittingRoom1-demo scene, if you’d like to detect the user’s gender and age, and properly adjust the model categories for him or her. The CloudFaceDetector-component uses Azure Cognitive Services for user-face detection and analysis. These services are free of charge, if you don’t exceed a certain limit (30000 requests per month and 20 per minute). Here is how to do it:

1. Go to this page and press the big blue ‘Get API Key’-button next to ‘Face API’. See this screenshot, if you need more details.
2. You will be asked to sign-in with your Microsoft account and select your Azure subscription. At the end you should land on this page.
3. Press the ‘Create a resource’-button at the upper left part of the dashboard, then select ‘AI + Machine Learning’ and then ‘Face’. You need to give the Face-service a name & resource group and select endpoint (server address) near you. Select the free payment tier, if you don’t plan a bulk of requests. Then create the service. See this screenshot, if you need more details.
4. After the Face-service is created and deployed, select ‘All resources’ at the left side of the dashboard, then the name of the created Face-service from the list of services. Then select ‘Quick start’ from the service menu, if not already selected. Once you are there, press the ‘Keys’-link and copy one of the provided subscription keys. Don’t forget to write down the first part of the endpoint address, as well. These parameters will be needed in the next step. See this screenshot, if you need more details.
5. Go back to Unity, open KinectFittingRoom1-demo scene and select the CloudFaceController-game object in Hierarchy. Type in the (written down in 4. above) first part of the endpoint address to ‘Face service location’, and paste the copied subscription key to ‘Face subscription key’. See this screenshot, if you need more details.
6. Finally, select the KinectController-game object in Hierarchy, find its ‘Category Selector’-component and enable the ‘Detect gender age’-setting. See this screenshot, if you need more details.
7. That’s it. Save the scene and run it to try it out. Now, after the T-pose detection, the user’s face will be analyzed and you will get information regarding the user’s gender and age at the lower left part of the screen.
8. If everything is OK, you can set up the model selectors in the scene to be available only for users with a specific gender and age range, as needed. See the ‘Model gender’, ‘Minimum age’ and ‘Maximum age’-settings of the available ModelSelector-components.
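For reference, the CloudFaceDetector presumably calls the classic Face API v1.0 ‘detect’ endpoint with the age and gender attributes requested. Here is a hedged sketch of how such a request URL could be composed from the two settings above (the exact endpoint and parameters used by the component may differ):

```csharp
// Builds a Face API v1.0 'detect' request URL from the service location
// entered in the 'Face service location'-setting. The request itself would
// be an HTTP POST with the 'Ocp-Apim-Subscription-Key' header set to the
// subscription key, and the JPEG bytes of the camera image as body.
public static class FaceApiRequest
{
    public static string BuildDetectUrl(string serviceLocation)
    {
        return string.Format(
            "https://{0}.api.cognitive.microsoft.com/face/v1.0/detect" +
            "?returnFaceAttributes=age,gender", serviceLocation);
    }
}
```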



1,048 thoughts on “Kinect v2 Tips, Tricks and Examples”

    • Hello Vadim, well, the Kinect is by design made to detect moving people facing the sensor. In my experience it also detects people walking sideways, only not as precisely as the front-facing ones. Anyway, if that doesn’t fit the scenarios you need, try to work around the issue by reorienting the sensor (or include a 2nd, and optionally a 3rd one), in order to face the directions you expect people to come from. If you use multiple sensors, their outputs for body world position/orientation need to be sent over the network and synchronized afterwards, which is not crazy difficult, but also not a trivial task.

      • Thanks for the reply. I think we’ll need to use OpenCV to detect and track people.
        At the moment we can’t find another way :(

      • Hello, quick question. First, sorry – I didn’t find the normal post section, so I used this one.

        I have a few questions and wonder if you could help me with them:
        – I noticed we have a sample for the point cloud, but I can’t seem to find the distance of each point. I am looking for floor detection. I saw the FloorClippingPlane, but I don’t understand how it works.
        – I noticed the multi-sensor option. Is this available with multiple Kinect-v2 sensors, or only for version 1?

        Thank you

      • Hi there, to your questions:
        – KinectManager.Instance.GetRawDepthMap() returns array of distances for each pixel (shorts, in mm). In the point-cloud sample, the GetAvg() function of PointCloudView.cs returns the distance.
        – The floor clipping plane is not directly available, but you can make use of it, if you set the ‘Auto height angle’-setting of KinectManager-component to ‘Auto update’.
        – I suppose you mean the ‘Use multi source reader’-setting of KinectManager. It is about stream synchronization of the same sensor – the color, depth and body streams – which is needed in overlay or background removal demos, for instance. Connecting multiple sensors on the same machine is not supported by the SDK.
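To illustrate the first answer above: the raw depth map is an array of 16-bit values in millimeters. A GetAvg()-style helper (a hypothetical re-sketch, not the actual PointCloudView.cs code) could average a small window around a pixel and return the distance in meters:

```csharp
// Hypothetical helper in the spirit of GetAvg() in PointCloudView.cs:
// averages the raw depth (in mm) over a small window and returns meters.
// 'rawDepth' would come from KinectManager.Instance.GetRawDepthMap().
public static class DepthUtils
{
    public static float AvgDepthMeters(ushort[] rawDepth, int width, int height,
                                       int cx, int cy, int radius)
    {
        long sum = 0; int count = 0;
        for (int y = cy - radius; y <= cy + radius; y++)
        {
            if (y < 0 || y >= height) continue;
            for (int x = cx - radius; x <= cx + radius; x++)
            {
                if (x < 0 || x >= width) continue;
                ushort d = rawDepth[y * width + x];
                if (d != 0) { sum += d; count++; }  // 0 means no depth reading
            }
        }
        return count > 0 ? (sum / (float)count) / 1000f : 0f;
    }
}
```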

      • Thank you, Rumen. I was impressed by how fast you responded – this asset has been a really good purchase.
        I am very new to this. I see where you pointed me, and that is what I am looking for, but I guess I need to learn more about point clouds, as I am not sure how to proceed. Also, regarding the FloorClippingPlane: I understand it’s a variable with 4 values, but I have no idea how to assign it to a plane. My goal is to create a mesh of the room, so I can place 3D objects around, like you see on HoloLens and Tango devices.

        Another quick question: I was playing with the recorder and it worked great, but when I included the UserMesh, that one did not get recorded. Looking at the code, only body tracking is saved. Is this a limitation, or is it something that can be included?

        In regards to the multi-Kinect question: what about the server-client setup? Is it possible to use multiple Kinects on different boxes to accomplish 360-degree tracking?

      • Well, I don’t think the floor clipping plane is the right origin point for what you need. It is available only when there is at least one user in front of the sensor. Here is more information about its meaning: A better origin point would be the Kinect camera itself, if you could estimate its location in space. Moreover, all other data, incl. the color, depth & coordinate streams, is relative to the sensor. You can see that the main camera in all demo scenes shows what the Kinect sensor “sees” in the real world, mostly in its own coordinate system. In this regard, see the KinectSceneVisualizer-demo and its respective SceneMeshVisualizer-component.

        I’m not sure which recorder you mean. The KinectRecorderDemo-scene saves and replays the user-body data only. If you need to save/play all Kinect streams, you can use the Kinect studio-app. It is part of the Kinect SDK 2.0. If you’d like to save custom data, you can use the KinectRecorderPlayer.cs-script and the underlying functions as example (the full source code is there, but feel free to ask me for details if needed), to save and replay the mesh and other custom data you need.

        Regarding multiple Kinects, this is the last topic in my todo-list for KinectDataServer & Client, but I’m quite busy and cannot say at the moment if or when it will be available.

      • Hehe, I still don’t understand how to start or continue a thread on this blog 😀
        I have spent some time reviewing what you said about the point cloud and the depth.
        I reviewed GetDepthForPixel(), GetDepthForIndex(), GetAvg() and the SceneVisualizer script.
        With those functions I am able to get the depth as a float. What I am trying to do is some sort of spatial mapping, like on HoloLens or Tango, but I am too far away from that, so I will start with something simpler, like placing a 3D object in real space.

        – This is what I have done:
        I am using OverlayDemo2, which is a color background with your dots avatar mapped to color. If you look at the editor, you can see that the avatar is also using its depth. Based on that, I was able to manually place 3D objects in 3D space, based on my avatar’s position, but this is a lot of manual work – I need it to be automatic.

        I have been reading a lot about point clouds and computer vision, but I am not at that level yet. So instead of spatial mapping I would like to start by clicking on the screen (scene), capturing the mouse coordinates, getting the depth for the point where I clicked, and instantiating a 3D object there. I thought this was going to be a really easy task, but I can’t sort it out.

        I managed to map the color background and the SceneVisualizer; it just needs a bit of offset to look good.
        Sample: (no offset)

        If I use simple code to get the mouse input, I get float values, while all the functions to get the depth require integers. I can convert them to int, but I am not sure if that is the proper path. Also, the screen is probably at 1080p, as OverlayDemo2 uses that to build the rectangle, so I guess I need some sort of coordinate mapping.
        Looking at the SceneVisualizer, I can see we use this to build the mesh:

        Vector3 vSpacePos = sensorData.depth2SpaceCoords[xyIndex];

        And then we convert it to world coordinates like this:

        vSpacePos = kinectToWorld.MultiplyPoint3x4(vSpacePos);

        What I don’t get is how to work with the X, Y values of my mouse location.

        Any guidance is welcome 🙂
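One possible starting point is to convert the mouse position into an index in the 512x424 depth frame. This is a rough sketch with simple proportional scaling, which is only an approximation – for accurate results the sensor’s color-to-depth coordinate mapping should be used instead:

```csharp
using System;

// Rough sketch: scale a screen click (pixels, origin at the bottom-left,
// as Unity's Input.mousePosition reports it) to an index into the 512x424
// depth frame. Simple proportional scaling only approximates the real
// color-to-depth coordinate mapping of the sensor.
public static class ScreenToDepth
{
    public const int DepthWidth = 512, DepthHeight = 424;

    public static int DepthIndexFromScreen(float mouseX, float mouseY,
                                           int screenWidth, int screenHeight)
    {
        int dx = (int)(mouseX * DepthWidth / screenWidth);
        int dy = (int)((screenHeight - mouseY) * DepthHeight / screenHeight); // flip Y
        dx = Math.Min(Math.Max(dx, 0), DepthWidth - 1);
        dy = Math.Min(Math.Max(dy, 0), DepthHeight - 1);
        return dy * DepthWidth + dx;
    }
}
```

The returned index can then be used to look up `sensorData.depth2SpaceCoords[xyIndex]` as in the mesh-building code above.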

      • Hi, I’m working on a project where I have to control the movement of the legs of the avatar with animations, and use the Kinect to move the arms, hands, etc. What should I do? I am using your Kinect-v2 with MS-SDK example.

      • I am working on a project which uses multiple Kinects. The problem is, I still didn’t get to the multiple-Kinect part yet (very sarcastic, ha.. :P).

        I guess you are aware of the library whose authors claim they can get raw data from multiple Kinects on the same PC (on some USB drivers only). [I haven’t tested this yet, since my other Kinect is still shipping.]

        Is there any possibility that I could get the skeleton tracking working from this library?

        Or should I try networking two PCs and getting the thing done instead?

      • The K2-asset does not have an interface to this library yet. I may add one in the future, if the library proves to be usable and easy to integrate. But if you need it now, you can try to add this interface yourself. The full source code is there, so anything should be possible. Feel free to email me to ask for more info, if needed.

      • One more thing: does the Unity library for Kinect v2 (the one you wrote) support OpenNI2? libfreenect2 supports OpenNI2.

        This is the thing: say I could get the image streams from two Kinects by using the libfreenect2 library. I could then send those data to OpenNI2 (could I send two data streams to OpenNI2?). Using NiTE 2.2, would I be able to do the skeleton tracking for the two data streams separately? (Everything should be done on the same PC.)

        Please let me know if the problem is not clear.

        PS. This is for my final year project (multi-Kinect data fusion) and only a few months are left. I don’t want to go down a path with a dead end. That’s why I asked which path is easier.

        Thank you.

      • Well, it’s up to you to do your final year project in the remaining months. I would not rely on someone else’s words and would check everything by myself.

        Theoretically it should work. The K2-asset has an interface to OpenNI2 as well, although I have not tested it for years. The problems I see are: 1. How would multiple Kinects work on the same PC – for instance, SDK 2.0 supports a single Kinect only; 2. Whether NiTE2 (used by the K2U-library) works with libfreenect2. Unfortunately, NiTE2 is no longer in development (as far as I know), since Apple bought and closed PrimeSense.

        One alternative path (to check too) would be to use Kinects on multiple PCs and fuse their data streams over the network into one server. The best implementation of this (that I’m aware of) is the RoomAlive project of prof. Andy Wilson. Check it out, too.

  1. Pingback: Kinect v2 VR Examples | - Technology, Health and More

  2. Hello Rumen F,
    I’m a student from Vietnam. I’m using your Kinect v2 project, but I have an issue in the Fitting Room demo: when I wear a shirt and then select a pair of pants, the shirt does not move anymore. The shirt and the pants have different joints. So how can I have both a shirt and pants in one joint system? Sorry for my English. Hope you reply to me. Thanks.

    • Hi, I think the shirts and pants should be different models that overlay and move with the user simultaneously. This would require two model selectors (for shirt and pant models), as well as some changes in ModelSelector.cs-script. By the way, thank you for this feedback! Working with several clothing categories simultaneously is in my list for the next round of fitting-room demo updates.

  3. Hello Rumen, I’m working with your “UserMeshVisualizer” demo. It fits the needs of our project, but the mesh is not smooth. I tried to modify your code to get smooth edges, but it didn’t work. How can we get smooth edges in this demo scene (like we have in the BackgroundRemoval scene)?

    • Hi LeHung, to get smooth edges you need to use the BackgroundManager component and its color texture, instead of the standard one. Please contact me next week by e-mail and I’ll tell you exactly what to do.

      • Hi Rumen,
        I am trying to develop a virtual trial room, but I am not able to rotate my models more than 90 degrees. Please tell me how to get 180-degree rotation using a single Kinect-v2 sensor.

      • This is due to a design limitation of the Kinect SDK. The user must face the sensor in order to be properly recognized. I have not solved this issue yet. Please consider it a limitation, and advise the users not to turn their backs to the sensor.

  4. Question about using the UserBodyBlender script (to cut out the hands and put them in front of the 3D model) in portrait mode: when we use this script, the camera image becomes stretched (narrowed).
    What should be changed to use that functionality in portrait mode?

  5. Hi Rumen,

    Does the package still work with Kinect for Xbox One? I read that Microsoft had already phased out the Kinect V2 back in November and that they are pushing for developers to use the newer Kinect sensor with the adapter.


  6. Hello Rumen,

    I really like your package, but I’m having trouble trying to get the fitting-room and overlay demos to work in portrait mode. I get a very deformed picture, and the 3D elements aren’t placed correctly; see this example:

    How can I get a non-deformed image and accurate tracking?

    Thanks in advance for your help!

    • Yeah, there is a bug in a shader, causing this behavior in portrait mode. A quick workaround would be to disable the UserBodyBlender-component of the MainCamera. Please e-mail me next week and I’ll send you the bug-fix.

  7. Hello Rumen,

    I am new to Unity and C#. I would like to fly the avatar like an airplane, but I don’t know which part of the code to modify or extend. I want the avatar to fly when I raise both of my hands.

    Thank you in advance.

    • Hi, I don’t think you need to modify the code. If I were you, I would add a script, where I would check if the Y-position of both hands of the user is above the Y-position of user’s head or neck, i.e. raised. And if so, I would “fly” the avatar. I’m not sure how exactly you’d like to fly it, but you could enable ‘External root motion’-setting of AvatarController and move the avatar from your script.
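Such a check can be sketched as pure logic (a hypothetical helper; in the scene, the Y values would come from the joint positions returned by KinectManager.Instance.GetJointPosition()):

```csharp
// Hypothetical pose check: both hands raised above the head.
// headY, leftHandY and rightHandY would be the Y components of the
// head and hand joint positions of the tracked user.
public static class PoseChecks
{
    public static bool AreBothHandsRaised(float headY, float leftHandY,
                                          float rightHandY, float margin = 0.1f)
    {
        // require both hands at least 'margin' meters above the head
        return leftHandY > headY + margin && rightHandY > headY + margin;
    }
}
```

When the check returns true, the script could enable ‘External root motion’ on the AvatarController and move the avatar from its own Update().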

      • Hello Rumen,

        I tried to move the avatar when I raise my hands. The code that checks the joint positions is in Update(). I used force to move the avatar, but I read that force should be applied in FixedUpdate(). What should I do about it? The flying avatar is not smooth. Would you give me some advice, please?

        Thank you in advance.

      • Yes, if you use forces, better do it in FixedUpdate(), although I’m not sure why you use forces instead of moving the avatar as a kinematic body directly from your script. Don’t forget to enable the ‘External root motion’-setting, too.

  8. Hi Rumen! =)

    Thanks for your amazing work.

    I would like to simulate that the user can “touch” the water that is falling down from the top of the scene (from a particle system). So, to reach my goal, I started with the KinectOverlay2Demo and added the “HandColliderLeft” and “HandColliderRight” prefabs to the hierarchy (with the correct values set in the inspector for each one), plus a particle system. The particle system has “Collision” enabled.

    With this configuration, I can’t “touch” the water… The particle system does not collide with my hands.

    Any idea?

    Thanks in advance

    • Hi, I suppose you also added a Rigidbody component to at least one object in the scene. Unity requires at least one rigidbody plus colliders for the physics to work. If it still doesn’t work, please try to emit regular objects in the scene that would fall and collide with your hands. If this works, but the particle system doesn’t, that would mean the problem is in the particle system.

      • Thanks so much! =) Now it’s working.

        However, I have another little problem… The Kinect camera is at 2.5 meters from the user, but the user appears too big on the screen… Is there any way to make the user smaller than by default on the screen?

      • You can use (or not) the ‘Pos relative to camera’-setting of the AvatarController. If you set it to the main camera, the avatar always moves to a position relative to this camera, matching your position relative to the sensor. If you leave it as None, you can place the avatar wherever you like in the 3D scene, and it will move with respect to this initial position. Hope I understood your question correctly…

      • Thanks for the answer! and sorry for my english =(

        In my scene (based on the KinectOverlay2Demo) there is no avatar, and I can’t use one for the project =(
        I’m using background removal, and the image of the user is over a custom background. The problem is that the user is too big relative to the background, so I would like to make the user smaller on the screen.

        Check this image as an example:

        Thanks for all Rumen!

      • In this case, you should have the ‘Foreground camera’-setting of the BackgroundRemovalManager set to None, the foreground image rendered by a separate camera (like BackgroundCamera2 with BackgroundImage2 and its ForegroundToImage-component in the 2nd BR-demo), and finally shrink that camera’s viewport rect to make it render only on part of the screen. Hope this tip helps.

  9. Hi again Rumen,

    with this new configuration, I have a new little problem =(

    As you told me, the viewport rect of the new BackgroundCamera has been set to W=0.69 and H=0.69… With this configuration, the user image on the screen is reduced =) Correct.

    However, the joint colliders are still positioned as in the previous configuration of W=1 and H=1, not the new one. So when I try to touch any GameObject with my hands, for example, the colliders of the GameObject and my hands do not collide, because my hand colliders are above my hands’ image on the screen.

    Hope you understand me and thanks for your help.

    • Sorry, I don’t quite understand – which colliders do you mean? I suppose, if the objects they’re attached to are rendered with the same camera, the screen coordinates should match.

      • I’ll check it later… For the moment, I haven’t got any success with this problem.

        Returning to the above problem: modifying the viewport rect as you told me, the user appears smaller on the screen, but now the Kinect image seems to be of a different size than the canvas. This means that the image from the Kinect is clipped beyond some limits. For example, with the configuration from the previous message (W=0.69 and H=0.69), the right and top areas of the image are clipped.

        Check this image as an example:

        Thanks for all Rumen.

      • There are two things to keep in mind: 1. Two cameras are used in background removal – the background camera renders the image behind the user, and the foreground camera renders the user’s foreground image. Both cameras should be in sync, i.e. have equal viewport rectangles; 2. The BR procedure uses coordinate mapping, which does not fully cover the color-camera image (because of the different resolutions of the depth and color cameras). That’s why the user’s foreground image gets clipped at the border areas of the color image.

  10. Hello

    I want to do a simple thing, but I cannot manage to do it correctly. I need a magic wand to follow the hand of a Kinect user and move accordingly. Can you provide ideas or a sample of how to do this?

      • Yes, I saw it, but the wand’s movement is not accurate – it randomly jumps in different directions.

        Can this be fixed?

      • I tested it with the GreenStick-prefab instead of the GreenBall. I just dragged it into the scene and set it as ‘Overlay object’. Then I set the stick’s Y-rotation to 180 degrees (because we need mirrored movement), and changed its scale to (0.5, 0.1, 0.1), because it needs to follow the hand (pointing right in T-pose). Then I started the scene, and the stick followed my hand pretty well. If you need additional smoothness, open the JointOverlayer.cs-script, find the ‘Quaternion.Slerp(‘-invocation near the end, and change the number in its last parameter from 20f to a smaller number. I used 10f instead, and the smoothness was OK. If you still have major issues, please e-mail me with your invoice number and attach the wand you use, so I can take a closer look at the issue.
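For reference, the smoothing mentioned above is an exponential follow: each frame the rotation moves a fraction (factor * deltaTime) of the way toward the target. Here is the same idea illustrated in 1D (an illustrative helper, not the asset’s actual code):

```csharp
using System;

public static class SmoothUtil
{
    // Moves 'current' toward 'target' by a fraction of the gap per frame.
    // A smaller smoothFactor (e.g. 10 instead of 20) closes the gap more
    // slowly and therefore looks smoother.
    public static float Smooth(float current, float target,
                               float smoothFactor, float deltaTime)
    {
        float t = Math.Min(smoothFactor * deltaTime, 1f);
        return current + (target - current) * t;
    }
}
```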

      • Hi, I tried your sample and it moves very well, but the GreenStick starts in the middle of the arm. How can I make it start at the hand position instead?

      • Hi, I have installed everything (speech runtime, language packs, etc.) and I am still getting the error… How can we add that missing DLL in Unity? Please help.

      • The Speech Runtime v11 system requirements do NOT include Windows 10 (which is the OS I am using)… that is probably the issue, but I don’t know the workaround.

      • If you mean the UWP platform, I have some clues, but have not researched or implemented the speech recognition yet. That’s why the K2-UWP interface was announced as ‘initial support’ for this platform.

      • Ok, thanks… looks like I am stuck 🙂 … but for your info, it’s failing on:

        // Initialize the speech recognizer
        string sCriteria = String.Format("Language={0:X};Kinect=True", languageCode);
        int rc = sensorData.sensorInterface.InitSpeechRecognition(sCriteria, true, false);
        if (rc < 0)
        {
            string sErrorMessage = (new SpeechErrorHandler()).GetSapiErrorMessage(rc);
            throw new Exception(String.Format("Error initializing Kinect/SAPI: " + sErrorMessage));
        }

        // DLL imports for the speech wrapper functions
        [DllImport("Kinect2SpeechWrapper", EntryPoint = "InitSpeechRecognizer")]
        private static extern int InitSpeechRecognizerNative([MarshalAs(UnmanagedType.LPWStr)]string sRecoCriteria, bool bUseKinect, bool bAdaptationOff);

        I am getting the error "Error initializing…".


      • Just switch to ‘PC, Mac & Linux Standalone’ – Windows platform in the Build settings, and it should work without errors. You should get back the SharpZipLib-library, too. By the way, in the error message above, what is important is the error code, not the message 😉

      • Sorry, I am not sure what the SharpZipLib library is… Otherwise, it has always been on the same build settings as mentioned in the docs and… not working… The error is always there: “Error initializing Kinect/SAPI: The requested data item (file, data key, value, etc.) was not found. at SpeechManager.Start() [0x00133]”

        If there is a mail address, I can send you error snapshots if required 🙂


      • Ah, I thought you were asking about UWP. Yes, please send me the console log (or a screenshot). My e-mail is on the About-page. Also make screenshots of the Unity build settings & the installed software packages related to the issue (Kinect SDK, Speech Platform SDK & language packs).

  11. Hi Rumen! Thanks for your amazing work, and for your help!
    I need to develop a simple photo booth app that can show the “snapchat filters” effect, like a “dog face”, just like this picture:

    Can I use your Kinect V2 asset to add 3D accessories to a human face near the camera?
    Does the Kinect V2 recognize faces at close range?
    Can you please give some guidance on how to do this with your Kinect V2 asset?

    Thanks in advance!

    Best regards,


    • Hi Cris, I don’t think it will work in such a short distance. The problem is that Kinect assigns recognized faces to users, and in order to detect a user, at least a distance of 0.5m from the sensor is needed. Anyway you could try the 1st face-tracking demo to see at what distance your face gets recognized (look at the face-bounding rectangle). There is a 2D photo-booth demo in the K2-asset too, but it is not so much face-related.

  12. Hi Rumen, I want to add some new gestures in KinectGesturesDemo2. I added a gesture in KinectGestures > switch(gestureData.gesture) and typed in the new gesture’s detection code. Then I run Unity, and the Kinect can’t catch my gesture.
    Is it that the demo program can’t add new gestures, or am I doing something wrong?
    Sorry, my English isn’t good.

  13. Hi,
    nice to meet you!
    My name is Samson Wang.
    I like your package very much.
    I would like to know how to distinguish multiple users’ gestures, like a right-hand raise or a left-hand raise.

    • Hi, nice to meet you, too. See KinectScripts/Samples/SimpleGestureListener.cs as an example. It is a component of the KinectController-game object in the 1st avatar-demo scene, so you can test it there. If you want to distinguish between gestures of multiple users, change the ‘Player index’-setting of the component – 0 meaning the 1st user, 1 – the second, etc. If you want to detect multiple gestures, made by the same user, add lines like this: ‘manager.DetectGesture(userId, KinectGestures.Gestures.xxxx);’ to the UserDetected()-function in the script, and then process them in GestureCompleted(). For more information about processing gestures, see the tip above.

      • Hi,
        thanks for your reply.
        I can distinguish the multiple users.
        Now I want to get the left-hand-raise gesture of the 1st user, the right-hand-raise gesture of the 2nd user, and so on.
        How can I get that?
        I can distinguish the multiple users, but I can’t get the gesture of each user – at the moment I only get the gesture of the 1st user.
        Best Regards

      • Add one gesture-listener component for the 1st user (with player-index 0), and another gesture-listener component for the 2nd user (with player-index 1).

  14. Hi, we’re having a little trouble getting the camera footage to display in the GUI.
    We have the KinectManager script attached, along with the SimpleBackgroundRemoval script, but we don’t seem to be getting any footage.

    We’ve also tried writing our own script, following your tip “How to get the depth- or color-camera textures:” above however we keep getting a null reference to “GetUsersClrTex”.

    Our overall goal is to use the camera as a substitution for using green screens (we just want to show the player on the screen with no background).

    Can you give us any hints or tips as to what might be going wrong?

  15. Hi Rumen, I want to put the backGroundRemovalImage into a 3D animal scene, and I want the backGroundRemovalImage to be able to resize, re-position and have layers (such as a pig walking from in front of me to behind me). It looks like the KinectUserVisualizer demo, but I want to use the backGroundRemovalImage instead of the point clouds there. Please tell me how to do this.

    • What I want to do is put the backGroundRemovalImage on a mesh plane, or a cube, etc. Is it possible? I went through all your demos and documents and researched for a couple of days, but didn’t find any solution. Please help me.

      • Yes, it is. I’m surprised you didn’t find it. Look again at the 2nd background-removal demo, and the ForegroundToImage-component of its BackgroundImage2-object. The foreground image may be referenced as texture like this: ‘BackgroundRemovalManager.Instance.GetForegroundTex();’ provided that BackgroundRemovalManager-component is in the scene, too. Then you can apply the texture on any object you like.

    • Not sure what exactly you want to do. You can use the background-removal image as texture and apply it to any object you want. See the 2nd background-removal demo scene, if you need an example. You can also use the user-visualizer approach, which utilizes the depth info, hence is a 3d mesh that may be used as full featured 3d-object in the scene. Hope this information helps.

      • (1) Yeah, I have gone through demos 1-3 these days, and demo 3 is very close to what I need – I can walk behind different balls/cubes, which is what I want. But in demo 3, the image is kind of in the “visualizer style”; I want it to be in the backGroundRemovalImage style. (If I could send you an image, it would be very easy to describe, so I added you on Skype.) (2) About ‘BackgroundRemovalManager.Instance.GetForegroundTex();’ – do I need to put this into ForegroundToImage.cs?

      • (1) I meant KinectBackgroundRemoval2. Yes, please send me the image by e-mail.
        (2) No, the script is just an example, showing how to use GetForegroundTex().

  16. Hello,

    Is it possible to have the plugin detect a user based on distance to the camera, and then stay locked on that person until they are lost?

    We are using the plugin for a game in which someone stands in front of the camera to start the game, then they play it over the course of a minute. It could be that other people will walk behind them or to the side of them, and in some cases they may be closer to the camera than the user who is playing. The problem is that the 3 choices given for user detection in the plugin won’t work.

    We wrote our own Kinect manager before purchasing this plugin, and in that case we would detect the user closest to the camera and then store that ID. As long as the body ID was in the frame, we locked on to that user, regardless of who wandered in and out of the camera’s detection distance.


    • Hi, there are 3 functions in the KinectManager – GetEmptyUserSlot() and FreeEmptyUserSlot(), called when a new user gets detected and when a user gets lost, and RearrangeUserIndices(), called by the previous two functions before the user is assigned to a free index and after an index is freed, respectively. In all cases the userId is stored in the slot. When ‘User detection order’ is set to ‘Appearance’, there is no rearrangement, so the indices are kept as they are until the user gets lost. You are free to modify or override these functions, as to your needs. I think you could even re-detect a user with respect to his last known position, if needed. Hope this information is clear enough. Feel free to contact me, if you need more info.
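The “lock on the closest user” idea can be sketched in isolation like this (a hypothetical helper with plain types; in the scene, the IDs and distances would come from the KinectManager’s user list and joint positions):

```csharp
using System.Collections.Generic;

// Hypothetical helper: choose the user closest to the sensor and keep
// that ID locked until it disappears from the current frame.
// Assumes user IDs are nonzero; 0 means "no user locked".
public class ClosestUserLock
{
    private ulong lockedId = 0;

    // users: map of userId -> distance to the sensor (meters)
    public ulong Update(IDictionary<ulong, float> users)
    {
        if (lockedId != 0 && users.ContainsKey(lockedId))
            return lockedId;               // keep the locked user while present

        lockedId = 0;
        float best = float.MaxValue;
        foreach (var kv in users)          // otherwise, lock on the closest one
        {
            if (kv.Value < best) { best = kv.Value; lockedId = kv.Key; }
        }
        return lockedId;
    }
}
```

Calling Update() once per frame keeps the same user selected even when a closer person walks through the frame, which is the behavior described above.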

  17. Hi, Runmen

    My name is Herbert Bridge.

    I am a Unity developer who loves your Unity package very much, and I need your help.

    I am working on a dressing app, and I need to control the dress model’s scale according to the user’s height.

    I would like to hear your answer asap.

    Kind Regards

    • Hi, there are two fitting-room demo scenes in the K2-asset. They both use the AvatarScaler-script as component of the dress model object to do the scaling. Take a look at it, if you are interested.

  18. Hi Rumen, I have a question.
    I want to make a camera like in FPS games using the Kinect (for example, the angle of the camera changes with the angle of the face, and the position of the camera changes with the position of the avatar, etc.).
    Is it possible to make this using the asset? If so, how can I do it?


    • Hi, as the Kinect-controlled avatar reproduces your movements, the only thing you need to make it FPS, is to fix the camera to the head or neck joint of the avatar, in order to emulate eye sight. See the 2nd avatar demo, if you need an example.

      • Thanks for the nice advice – I could make what I expected.
        I have one more question: how can I use multiple VGB gestures?
        Is it possible to set multiple gesture databases for the Visual Gesture Manager?
        Please let me know if there is any way.

      • The VisualGestureManager currently works with one database file. But if you need this, you could try to extend the InitVisualGestures()-function, in order to import gestures from multiple VG-databases. You would need to replace the single ‘gestureDatabase’ string with an array or list of strings, as well. I have never tried this, but theoretically it could work.

  19. Hi, the KinectGestures.cs was updated to be a MonoBehaviour with protected members – I’m assuming so that it can be inherited, e.g.

    public class KG2 : KinectGestures {
    public new enum Gestures {
    public override void CheckForGestures(………..

    So I can easily allow multiple developers to work off the same template without git conflicts etc

    But the KinectManager doesn’t support more than one KinectGestures:
    (line 114) public KinectGestures gestureManager;

    Will there be a future update to allow more than one KinectGestures-based MonoBehaviour to exist in the same scene? Or maybe some other way to easily allow KinectGestures to be extended? (.NET enums are just the worst.)

    Thanks for your work on this asset

    • Hi Nathan, thank you for this interesting suggestion! Until now I have never considered having more than one gesture manager in the scene. May I ask you to explain to me in an e-mail how you see the use-case of having multiple gesture managers, and how to replace the gesture enums with something better? The implementation of such an idea would not be too difficult, I think. I’d need to replace the single gestureManager in KinectManager with an array, and use all of them for checking gestures. You can also try it by yourself.

      • Hi

        Sorry about this super late reply, I thought this comment was deleted, now I realise I was checking the wrong post for your reply, and also forgot to check the “Notify by email” option. Oops.

        My objective is to preserve a base KinectGestures script with your built in gestures, and have subclasses inherited from KinectGestures for new gestures. I am trying to do this with all Kinect-related scripts so that I can update the Kinect plugin safely without losing my amendments to your scripts. It will also make it easier for me to manage several games that each involve different gestures.

        I did not come up with a way to replace enums, as it is the only member type in Unity to display a dropdown in the inspector, which is important for easy developing.

        But C# enums are pretty rigid; I can’t simply inherit KinectGestures and add to the enum. The only way I can think of is to initialise a new enum with new choices in the subclass.

        I played around with generic types for a while, but it didn’t work out. Since you have updated your plugin to handle multiple AvatarControllers in a list, I thought it might be easy to handle multiple KinectGestures as well.

        Thanks for your time

      • Hi there, having multiple KinectGesture-components sounds cool to me, and would be a real solution as well. I’ll research this option and may implement it for the next release. Please remind me again, in case I forget.

        By the way, because the enums are so rigid, the current workaround was to add UserGestureX-elements to the Gestures-enum, as an option to have custom gestures preserved across the script updates.

  20. Hi Rumen,

    Is it possible to apply a shader or a depth mask on some part of the output of sensorData.color2DepthTexture? Right now, if you call color2DepthTexture, it outputs the whole body of the detected user. What I want to do is output the whole body, but hide parts of it at runtime (like making the limbs invisible, so the head is the only part shown, but at the same position as color2DepthTexture detected). Similar to SetFaceTexture, but I want to make my output more like 3D. Thanks.

    • Yes, I think it is possible. The full source code, including the shaders, is there at your disposal. For instance, you could modify or replace Color2BodyShader.shader, which currently creates the body mask used for background removal. You could add the depth- or color-coordinates of the head as a shader property, as well as the distance (in depth or color coordinates) between the head and neck. Then use these parameters in the shader’s code to filter out all pixels whose distance to the head joint is greater than the distance between head and neck. Hope this is clear enough.

  21. Hi Rumen,

    Your package is very useful. Can I extend the interaction demo to have two or more users move objects at the same time? I tried to add another InteractionManager with a different player index. The user’s hand cursor appears, but cannot move objects. Also, since the InteractionManager implements the singleton pattern, does that mean it will only work for one user at a time? How can I make it work for two or more users?


    • Hi Jing, I think the problem is that GrabDropScript uses the singleton instance of InteractionManager, i.e. it will always get the 1st IM instance. In your case this script needs some modifications, so one could set the needed instance of InteractionManager as a setting of the script component. If you can’t do these modifications alone, please send me an e-mail after the weekend, and I’ll try to provide you with the modified script.

  22. Hi Rumen,
    I hope to solve one simple problem.
    I want to switch to the background-removal mode from the common camera mode at runtime.
    That is, users see the real environment, and they can then switch to another mode that has a special background, by using a gesture at runtime.
    How can I solve this problem?

    • Hi, this is more of a trivial task than a problem. If you look at the KinectBackgroundRemoval1-demo scene, the background image is actually the texture of the BackgroundImage-game object. So, to change it to the real camera image, you need to do something like this: ‘backgroundImage.texture = KinectManager.Instance.GetUsersClrTex();’. To set it to a fixed texture (picture), make a similar assignment. To do this with a gesture, your script should implement the gesture listener interface. If you need an example, see the SimpleGestureListener.cs or CubeGestureListener.cs-scripts.
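      To illustrate, here is a hedged sketch of such a gesture listener. It is loosely modeled on SimpleGestureListener.cs; the exact interface signatures may differ between K2-asset versions, so please check that script in your copy. The class name, the RawImage field and the choice of the SwipeLeft-gesture are all assumptions made for the example:

      ```csharp
      using UnityEngine;
      using UnityEngine.UI;

      // Hypothetical sketch: toggles the background between the live color-camera
      // feed and a fixed picture, whenever the user completes a swipe-left gesture.
      public class BackgroundSwitcher : MonoBehaviour, KinectGestures.GestureListenerInterface
      {
          public RawImage backgroundImage;   // the BackgroundImage object's RawImage component
          public Texture fixedBackground;    // the special background picture
          private bool showCameraFeed = true;

          public void UserDetected(long userId, int userIndex)
          {
              // tell the KinectManager we are interested in the swipe-left gesture
              KinectManager.Instance.DetectGesture(userId, KinectGestures.Gestures.SwipeLeft);
          }

          public void UserLost(long userId, int userIndex) { }

          public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                        float progress, KinectInterop.JointType joint, Vector3 screenPos) { }

          public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                       KinectInterop.JointType joint, Vector3 screenPos)
          {
              // toggle between the live camera image and the fixed picture
              showCameraFeed = !showCameraFeed;
              backgroundImage.texture = showCameraFeed ?
                  (Texture)KinectManager.Instance.GetUsersClrTex() : fixedBackground;
              return true;
          }

          public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                       KinectInterop.JointType joint)
          {
              return true;
          }
      }
      ```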

  23. Thanks for your reply.
    It works very well.
    But now I can’t find the mouse cursor.
    I need to see the cursor.
    How can I do it?


    • You don’t need to have a full-HD monitor. The background image will resize itself to fit the resolution of the screen. What you could do is change the aspect ratio of the Game view in the Unity editor (or the supported aspect ratios in Build settings) to 16:9, in order to match the full-HD aspect ratio. Otherwise the background image may look distorted.

  24. Sorry for my English. How can I take a photograph in high resolution? I tried with the PhotoBooth script, but it is of low quality. Any ideas?

    • Open PhotoBoothController.cs and find the TakePicture()-method. As you see, the textures at the beginning are initialized with the current screen width & height. You can use 1920 x 1080 instead, or another resolution if you like. It would be good to keep the aspect ratio at 16:9 anyway.
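      If you prefer a snapshot that is independent of the screen resolution altogether, a common Unity approach is to render the camera into an off-screen RenderTexture of fixed size. The helper below is a hedged sketch (the class name and file path are made up), not the actual PhotoBoothController code:

      ```csharp
      using System.IO;
      using UnityEngine;

      public static class HiResSnapshot
      {
          // Renders 'cam' at the given resolution and saves the result as a PNG file.
          public static void Capture(Camera cam, int width, int height, string filePath)
          {
              // render the camera into an off-screen texture at the desired resolution
              RenderTexture rt = new RenderTexture(width, height, 24);
              cam.targetTexture = rt;
              cam.Render();

              // copy the rendered pixels into a readable Texture2D
              RenderTexture.active = rt;
              Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);
              tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
              tex.Apply();

              // clean up and save as PNG
              cam.targetTexture = null;
              RenderTexture.active = null;
              Object.Destroy(rt);
              File.WriteAllBytes(filePath, tex.EncodeToPNG());
          }
      }

      // usage, e.g. from a button handler:
      // HiResSnapshot.Capture(Camera.main, 1920, 1080, "photo.png");
      ```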

  25. Hey! Whenever I start the game, my character goes into T-pose. How do I set my own pose for my character on start, just like your Unity character does?

    • Which demo scene do you mean? I suppose the character goes into T-pose, because it has an AvatarController-component attached, and the T-pose is the zero-pose for the Kinect orientations, when the user is not detected by the sensor. I would put the character at some invisible position (for instance at (0,0,0)), when the user is not detected. To do it, open AvatarController.cs, find the ResetToInitialPosition()-method, and near its end, replace ‘transform.position = initialPosition;’ with ‘transform.position = Vector3.zero;’. I think this should be enough.

  26. Hi Rumen,

    I’m Vignesh from Singapore. I’m using this package in a tele-rehabilitation research project involving Unity and Kinect, and I face a performance issue. In my Unity project, I have a menu scene where the user would see 5 options. All the 5 options will load the same scene, but each with specific parameters as per the selected option.

    In the scene’s Update() method, I’m reading the joint positions using the code below.

    manager = KinectManager.Instance;
    userID = manager.GetPrimaryUserID();
    manager.GetJointPosition(userID, 0);
    .. and so on for all joints

    I don’t face any issue during the first execution of the scene. However, when I return to the menu and load the scene again for the second test, the animation of the avatar is very slow. I see a lot of lag between the actual human joint movements and the avatar’s joint movement on the screen. The same happens for the third time when the scene is loaded from the menu scene.

    What do you think the issue would be? It would be great if you can help me solve this.

    Thanks in advance.


    • Hi Vignesh, I think you are experiencing a “multi-scene” issue. When you use the Kinect sensor in multiple scenes, keep in mind that the object containing the KinectManager-component is not destroyed between scenes, and this could cause 2 or more KM components to co-exist at the same time, causing bigger and bigger delay in data processing. There are 2 possible solutions:
      1. Put the KinectManager in a startup-scene, as described in ‘Howto-Use-KinectManager-Across-Multiple-Scenes.pdf’. The same is demonstrated in the MultiSceneDemo, or
      2. Comment out this line at the beginning of KinectManager.cs: ‘#define USE_SINGLE_KM_IN_MULTIPLE_SCENES’. This way you can have a KinectManager-component only in the scenes that use Kinect, which would be useful when you use the Kinect-sensor in a few scenes only. Of course, this will turn the sensor on and off when you enter or leave the scene.

      • Thanks a ton for the solution! It works like a charm now :). I wasted close to three days trying different things to solve this issue. Thanks again.


      • Hi Rumen,

        I am having the same issue as Vignesh was having. However, the only solution in my case is to comment out USE_SINGLE_KM_IN_MULTIPLE_SCENES in KinectManager, as I have only one scene and I have to load everything again. The issue is: the first time the scene is played, the Kinect sensor is always on, but when I reload the level, sometimes the sensor is on and sometimes it is off. This causes trouble, as I am using the overlay controller, and if the sensor is off I can’t see anything.

      • I’m not quite sure what happens, when the same scene gets reloaded. Please add some Debug.Log() lines to the StartKinect() and Destroy()-methods of KinectManager, to check if they are invoked properly each time. If not, use a separate startup scene and put the KinectManager in there as non-destroyed component, as shown in the multi-scene demo.

  27. Hi Roumenf,

    Great plugin, thanks for your great work!! you rock 🙂

    Only one issue so far. I am using it with some Fuse characters, and it’s a really nice and easy-to-use system you have created. The only problem is, when the character closes the hands, the thumb rotations look correct locally to the thumb itself, but the overall angle of the thumb is incorrect. It looks like the same rotation is being applied to all the fingers and the thumbs. Is that the case?
    If so, can you consider adding a line or two to just angle the thumb, so it makes a proper fist? Also, after using some other Kinect-v2 tools, I see sensing of a finger point is possible. Is this something you might consider adding to this toolset? I’m still fairly beginner at coding, and had a look at the code. I found the area relating to this, but didn’t want to break things.. 🙂 If not, any tips on how to fix this would be great!



    • Hi Alex, the function that emulates the fist is TransformSpecialBoneFist() in AvatarController.cs. Don’t be afraid to modify it; you can always get back to the original code. Yes, you’re right about the same rotation applied to all finger and thumb joints. I’m not much of a model designer, and this was the best I figured out when I wrote it 🙂 What exactly is your idea about making a proper fist?

      I’m also not sure what exactly you meant by ‘consider adding the finger point to this toolset’. If you enable the ‘Finger orientations’-setting of AvatarController, and also set ‘Allowed hand rotations’ to ‘All’, you should get the avatar fingers as the sensor tracks them. Or did you mean anything else?

  28. Hi Rumen,

    Ok, thanks, I will have a look at it 🙂 Oh really, all I meant is that the thumb could just use an additional transform applied to it, so the thumb sits across the fingers. At the moment it looks like the thumb should be causing a lot of pain to the avatar 🙂 hehehe..

    Regarding the pointing: by default, when animating via Kinect, it’s just hands open and closed fist. Could we get a finger point too? I will check those settings you mentioned..

    Anyway thanks for the reply, great work!


  29. Sir, can you kindly attach pants to the human body in the fitting-room demo? It would be really great if you did that. Also, I tried to import a custom model in Unity and rig it, but as there were missing joints, I was not able to see any option for manually assigning a joint, as you have mentioned in your article. Kindly help in this regard.

    Thanks in advance

    • Hi, I don’t have pants models, but I have done this before. If there are missing joints, you cannot set the rig to Humanoid. In this case, you can create a prefab of the model with AvatarControllerClassic attached to it, instead of AvatarController, and set only the lower-body joints, and optionally the root. Experiment a bit and you will see what I mean.

      • Hello. Yes, we have used AvatarControllerClassic, but it requires game objects for the different parts. We are stuck at the stage where we have a single static mesh of the garment, and want to divide it into multiple game objects for each part of the body. It would be great if you could help us understand. Also, why are there 2 cameras in the fitting-room scene? Can you elaborate on the use of the main camera? Thank you, your package is extremely good (Y).

      • AvatarControllerClassic doesn’t require the model to consist of multiple meshes, but to be rigged, i.e. to have joints that control the model’s skin. As I said above, you need to assign the respective thigh, knee & foot joints to the respective settings of AvatarControllerClassic. They will be used to control the appearance of the meshes (the skin). The only difference between this component and AvatarController is that AvatarController uses the Humanoid rigging to find the joints of the model automatically, while when AvatarControllerClassic is used, the joints need to be set manually. All joints that are not assigned will be considered missing, and not used at all.

  30. Hi Rumen,

    Thank you for this amazing package! It really was of great help to me.

    There is one thing I need help with. I was wondering whether it was possible to change the aesthetics of the body while using the KinectRecorderDemo-scene. I simply want to change the color of the lines and change the sphere radius of the joints. I’ve been going through the various scripts but am so far unable to pinpoint which piece of code controls this.

    Thanks in advance,


  31. Hi Rumen,

    Thank you for the work you put in this package! Was of great help to me.

    I have one question though, is it possible to change the looks of the body in the KinectRecorderDemo-scene? I have been going through the various scripts but am unable to pinpoint which code controls this. I would simply like to change the color from green to an RGB color code. And perhaps change the size of the joint spheres.

    Thank you in advance.


    • Hi Rik, the bodies in this scene are represented by Cubeman0 to Cubeman5 (for Kinect tracked bodies from 0 to 5) in the Hierarchy-window, and are controlled by their respective CubemanController-components. You are free to change the material of joints (child objects of CubemanX) or appearance of the bone-lines (setting of CubemanController). You can even replace them with avatar models, like those used in the avatar demo-scenes, if you like.

  32. Hi Rumen,
    Thanks for your great work.
    I just bought your asset from the Unity Asset Store.

    I wanna ask you something.
    Do you have any example or method for handling missing joints, for better display? I want my 3D animation to keep looking smooth, even though a joint is missed for some seconds.

    Sorry for my English.

    • Hi, if the joint is missing this means it is not tracked. There is a setting of KinectManager-component called ‘Ignore inferred joints’. Please check first if this setting is enabled. If it is, disable it to make KinectManager consider the inferred joints as tracked, too. If this is not enough, open KinectScripts/KinectInterop.cs, find CalcBodyFrameBoneDirs()-method and add some code there, to process the not-tracked joints and make them inferred or tracked.

  33. Hi Rumen,

    I just wanna ask you something.
    I am trying to move a game object forward when I am in a certain position (e.g. leaning forward), by using transform.forward in Update(), but I am not able to move it. However, when I press a key, I am able to move the game object. I am referring to your cube/model-presentation script. Am I missing something?

    BTW Thanks for your great work.

    • Hi there, transform.forward is the forward-direction of a game object, not its position or movement. You should use transform.position or transform.Translate() to move the object.

      Regarding gesture detection, there are 4 important components in the cube-presentation scene:
      – KinectManager – needed by all Kinect-related scenes.
      – KinectGestures – detection of all programmatic gestures is implemented in there.
      – CubePresentationListener – gesture listener, who specifies the gestures it is interested in, and gets invoked when these gestures are detected.
      – CubePresentationScript – polls the gesture listener, and acts accordingly. In this scene it rotates the cube and changes the pictures on its sides.

      Hope this helps you find out, if you are missing something…
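      To illustrate the transform.forward vs. Translate() point above, here is a minimal, hypothetical sketch. IsLeaningForward() is a placeholder for your own gesture check (e.g. polling a gesture listener, as CubePresentationScript does):

      ```csharp
      using UnityEngine;

      public class MoveWhileLeaning : MonoBehaviour
      {
          public float speed = 1.5f;  // movement speed in units per second

          void Update()
          {
              if (IsLeaningForward())
              {
                  // transform.forward is only a direction; Translate() actually moves the object
                  transform.Translate(Vector3.forward * speed * Time.deltaTime);
              }
          }

          private bool IsLeaningForward()
          {
              // poll your gesture listener here (see the cube-presentation scene for the pattern)
              return false;  // placeholder
          }
      }
      ```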

  34. Hi Rumen! Thanks again for your asset, your improvements make it always better for use!!

    I want to develop a “face mask” Windows application, that replaces the face of someone in a video with the face of the user in front of the Kinect, in real time.
    Something like this image, but with the face mask of a user in real time:

    That’s why I would like some help from you, please.

    – I think I can track the face in the video with OpenCV, but how can I make the 3D face of the Kinect user track the movement of the replaced face in the video?

    – Do I have to put the 3D Face Mesh behind the video texture, setting the face of the video as transparent? Or, would it be better to put the 3D Face Mesh over the video texture?

    – How can I “blur” the edges of the 3D Face model of the kinect?

    – If some user has a larger or smaller face, does the 3D face mesh change, or does the user’s face texture adapt to the 3D Face Mesh?

  35. Hi Rumen,
    I’ve found in the KinectManager-script a function named TurnAround that always returns ‘false’.
    In a related issue, the avatar doesn’t turn around when I turn around. Is this problem related to the TurnAround-function?
    If it is, how can I implement it or solve the problem, in order to make the avatar turn around?

    • Hi, I don’t see such a function in KM. But there is a setting called ‘Allow turn arounds’ that is meant to provide some kind of software workaround for this SDK issue. The problem is that when the user turns around, the SDK switches the left and right joints. My idea back then was to use a ML algorithm to detect when the user is turned, and to fix the switched body joints. But my experiments back then were not very successful, and the issue still stays on my todo list. Because of a large set of more urgent tasks and requests at the moment, I cannot tell you when exactly I’ll get back to this issue…

  36. Hi Rumen, I would like to thank you for your work and efforts.
    I am working on a project where multiple servers (each one with its own Kinect) send the skeleton data (for the same person) to a client, which will mix them and animate an avatar.
    So, does your KinectDataClient respond to my need? If not, what do you suggest?
    Thanks a lot.

    • Hi, unfortunately the KinectDataClient is not yet developed that far. I have this on my todo-list, but because of many more urgent issues and requests, I can’t say when this will be available. I would suggest you look at the RoomAlive project of prof. Andy Wilson. It supported multiple Kinect servers, as far as I remember.

  37. Hi Rumen!
    Hey, thanks for updating the Kinect-v2 asset again! I can’t believe how many new things I can do with the Kinect!!

    I want to do a body-with-face realtime avatar. I mean, having the functionalities of the AvatarDemo1-scene and the FaceTrackingDemo4-scene running simultaneously. How can I do that?

    I want to develop a virtual character that follows not only my body’s movements, but also my face gestures.

    Can I do this simultaneously?

    Thanks in advance!

    As always, thanks for your useful tips and help! 🙂

    Best regards,


    • Hi Cris, you would need a model with a rigged body and a rigged face. Then import it into your project, add AvatarController and ModelFaceController as components, in a similar way as in the demo scenes you mentioned, and finally adjust the settings of the ModelFaceController to match the ‘face bones’ of your model. That should do the job. The alternative would be to have separate models for body & face, but this would not look as good, as to me.

      • Hi Rumen!

        Thanks for your answer! Do you know what is the necessary face rigging to be compatible with the kinect hd face tracking?
        How many rigging points does the Kinect need for the hd face tracking?
        Do you know an instruction guide to do the face rigging mapping between the 3D model and the Kinect?

        Thanks again for your help!

        Best regards,


      • Hi Rumen!

        Sorry that I insist, but I need to develop a full body avatar.
        Do you know what is the necessary face rigging to be compatible with the kinect hd face tracking?
        How many rigging points does the Kinect need for the hd face tracking?
        Do you know an instruction guide to do the face rigging mapping between the 3D model and the Kinect?

        Thanks for any guide about this!

        Thanks again for your help!

        Best regards,


      • Hi, and sorry. I’ve probably missed your previous comment 🙁

        I’m not a designer and cannot tell you how many rigging points to use. As far as I remember, Kinect-v2 uses a fixed number of HD face points. Some of them are even named. I’m on the road this week and cannot check this right now, but you can open FacetrackingManager.cs, and in CreateFaceModelMesh() & UpdateFaceModelMesh() add a Debug.Log() to print the value of iNumVertices.

        By the way, in the 4th face-tracking demo scene, the ModelFaceController-script uses the face-tracking AUs returned by the HD face tracker, together with the FaceShapeAnimations-enum, to animate face-expression specific points only. They could be enough for your avatar, I think.

  38. Hi Rumen,

    How are you?

    I am working on some head tracking.

    The current user should select their favorite hat and put it on.

    So I am using JointOverlayer.cs.

    But I can’t control the scale of the hat model.

    I hope you can help me with this.

    Best Regards

    • Hi, what do you mean? What’s wrong with the scale of the model?

      By the way, see the KinectFaceTrackingDemo2-scene. It demonstrates how to achieve a similar case to yours.

  39. Hello Rumen,

    Wow, so impressive, another update!! Congrats and thanks for that!
    I also saw this update comes with the UWP option.
    I did some diving into the code, just to see how you accomplished that. I was trying to use MultiK2 as well. You answered my question there on GitHub.
    The point is, I noticed that we could only build against .NET Core, while I was trying to build it using IL2CPP.
    Is that because of MultiK2? Or could I maybe get the same result with IL2CPP?

    If you could only give me some direction on what I need to search for, it would be good.

    Thanks again!

  40. Hi! Two questions!
    1: How can I edit the stored animations?
    2: Is it possible to also record finger movement?

  41. Hi
    How are you ?
    I want to know about this gesture: “KinectGestures.Gestures.None”.
    I need to know the initial gesture now.
    If it is possible, could you show me images or videos of all the KinectGestures.Gestures? I would like to see them all.
    Thank you.

    • ‘KinectGestures.Gestures.None’ means no gesture/pose needs to be detected. This is utilized by some settings, like the ‘Player calibration pose’-setting of KinectManager, where None means no specific calibration pose is needed, while Tpose means a T-pose is needed to start user tracking.

      Sorry, but I have not recorded myself doing all the gestures 🙂 I think their names are quite descriptive. There is also a short description of each gesture in ‘Howto-Use-Gestures-or-Create-Your-Own-Ones.pdf’-file in the _Readme-folder of the package.

  42. Hi Rumen,
    I want to add a new gesture for rotating and moving of the wrist and elbow.
    How can I do it?

  43. I think I need to define a new gesture for it, e.g. KinectGestures.Gestures.UserGesture1.
    I want to know the method to do it.

    • You can set the ‘Auto height angle’-setting of the KinectManager-component to ‘Auto update’. In this case the sensor will detect its height and angle automatically, when there is a user in front of it. If you’d like to set the height and angle manually at runtime, you should also update the kinectToWorld-matrix in KinectManager.cs. A good way to do it would be to add a method you could invoke after updating H&A. Look near the end of the StartKinect()-method, to see what this function should contain.

  44. And one more question regarding the Kinect positioning. OK, the angle transforms the coordinate system of the Kinect, so that the avatar stands upright, but what does kinectHeight do?
    No matter if I set it to 1 or 3, no matter if I check or uncheck ‘Grounded feet’, and no matter if I use positioning relative to a camera or not, my avatar always stands with its feet on the ground.
    (I did restart the project after every change.)

    • As I said in the previous comment, the height and angle form the kinectToWorld-matrix, which transforms the Kinect coordinates of all detected users and joints to Unity coordinates. How these coordinates get processed by the other components is a different question. The AvatarController normally uses the initial position of the model and moves around it. Relative-to-camera means that the model moves in relation to the main camera, as you move in relation to the sensor.

Leave a Reply