Kinect v2 Tips, Tricks and Examples

After answering so many different questions about how to use various parts and components of the “Kinect v2 with MS-SDK”-package, I think it would be easier if I shared some general tips, tricks and examples. I’m going to add more tips and tricks to this article over time. Feel free to drop by, from time to time, to check out what’s new.

And here is a link to the Online documentation of the K2-asset.

Table of Contents:

What is the purpose of all managers in the KinectScripts-folder
How to use the Kinect v2-Package functionality in your own Unity project
How to use your own model with the AvatarController
How to make the avatar hands twist around the bone
How to utilize Kinect to interact with GUI buttons and components
How to get the depth- or color-camera textures
How to get the position of a body joint
How to make a game object rotate as the user
How to make a game object follow user’s head position and rotation
How to get the face-points’ coordinates
How to mix Kinect-captured movement with Mecanim animation
How to add your models to the FittingRoom-demo
How to set up the sensor height and angle
Are there any events, when a user is detected or lost
How to process discrete gestures like swipes and poses like hand-raises
How to process continuous gestures, like ZoomIn, ZoomOut and Wheel
How to utilize visual (VGB) gestures in the K2-asset
How to change the language or grammar for speech recognition
How to run the fitting-room or overlay demo in portrait mode
How to build an exe from ‘Kinect-v2 with MS-SDK’ project
How to make the Kinect-v2 package work with Kinect-v1
What do the options of ‘Compute user map’-setting mean
How to set up the user detection order
How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS
How to build Windows-Store (UWP-8.1) application
How to work with multiple users
How to use the FacetrackingManager
How to add background image to the FittingRoom-demo
How to move the FPS-avatars of positionally tracked users in VR environment
How to create your own gestures
How to enable or disable the tracking of inferred joints
How to build exe with the Kinect-v2 plugins provided by Microsoft
How to build Windows-Store (UWP-10) application
How to run the projector-demo scene
How to render background and the background-removal image on the scene background
How to run the demo scenes on non-Windows platforms
How to workaround the user tracking issue, when the user is turned back
How to get the full scene depth image as texture
Some useful hints regarding AvatarController and AvatarScaler
How to setup the K2-package to work with Orbbec Astra sensors (deprecated)
How to setup the K2-asset to work with Nuitrack body tracking SDK
How to control Keijiro’s Skinner-avatars with the Avatar-Controller component
How to track a ball hitting a wall (hints)
How to create your own programmatic gestures
What is the file-format used by the KinectRecorderPlayer-component (KinectRecorderDemo)
How to enable user gender and age detection in KinectFittingRoom1-demo scene

What is the purpose of all managers in the KinectScripts-folder:

The managers in the KinectScripts-folder are components. You can utilize them in your projects, depending on the features you need. The KinectManager is the most general component, needed to interact with the sensor and to get basic data from it, like the color and depth streams, and the bodies and joints’ positions in meters, in Kinect space. The purpose of the AvatarController is to transfer the detected joint positions and orientations to a rigged skeleton. The CubemanController is similar, but it works with transforms and lines to represent the joints and bones, in order to make locating the tracking issues easier. The FacetrackingManager deals with the face points and head/neck orientation. It is used internally by the KinectManager (if available at the same time) to get the precise position and orientation of the head and neck. The InteractionManager is used to control the hand cursor and to detect hand grips, releases and clicks. And finally, the SpeechManager is used for recognition of speech commands. Pay also attention to the Samples-folder. It contains several simple examples (some of them cited below) you can learn from, use directly or copy parts of the code into your scripts.

How to use the Kinect v2-Package functionality in your own Unity project:

1. Copy folder ‘KinectScripts’ from the Assets/K2Examples-folder of the package to your project. This folder contains the package scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the Assets/K2Examples-folder of the package to your project. This folder contains all needed libraries and resources. You can skip copying the libraries you don’t plan to use, in order to save space.
3. Copy folder ‘Standard Assets’ from the Assets/K2Examples-folder of the package to your project. It contains the wrapper classes for Kinect-v2 SDK.
4. Wait until Unity detects and compiles the newly copied resources, folders and scripts.
See this tip as well, if you’d like to build your project with the Kinect-v2 plugins provided by Microsoft.

How to use your own model with the AvatarController:

1. (Optional) Make sure your model is in T-pose. This is the zero-pose of Kinect joint orientations.
2. Select the model-asset in Assets-folder. Select the Rig-tab in Inspector window.
3. Set the AnimationType to ‘Humanoid’ and AvatarDefinition – to ‘Create from this model’.
4. Press the Apply-button. Then press the Configure-button to make sure the joints are correctly assigned. After that exit the configuration window.
5. Put the model into the scene.
6. Add the KinectScript/AvatarController-script as component to the model’s game object in the scene.
7. Make sure your model also has an Animator-component, that it is enabled, and that its Avatar-setting is set correctly.
8. Enable or disable (as needed) the MirroredMovement and VerticalMovement-settings of the AvatarController-component. Do mind that when mirrored movement is enabled, the model’s transform should have a Y-rotation of 180 degrees.
9. Run the scene to test the avatar model. If needed, tweak some settings of AvatarController and try again.
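
If you prefer to set up the avatar from code instead of in the Inspector, a minimal sketch could look like the one below. It assumes the AvatarController’s public fields correspond to its Inspector settings (‘Player index’, ‘Mirrored movement’, ‘Vertical movement’), and ‘MyAvatarModel’ is just a placeholder name for your humanoid model in the scene:

// hypothetical runtime setup, mirroring the manual steps above
GameObject avatarObject = GameObject.Find("MyAvatarModel");  // your humanoid model in the scene

if(avatarObject != null && avatarObject.GetComponent<Animator>() != null)
{
    AvatarController avatarCtrl = avatarObject.AddComponent<AvatarController>();

    avatarCtrl.playerIndex = 0;           // track the 1st detected user
    avatarCtrl.mirroredMovement = true;   // remember the 180-degree Y-rotation of the model
    avatarCtrl.verticalMovement = true;   // allow up & down movement (e.g. jumps, squats)
}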

How to make the avatar hands twist around the bone:

To do it, you need to set the ‘Allowed Hand Rotations’-setting of the KinectManager to ‘All’. The KinectManager is a component of the MainCamera in the example scenes. This setting has three options: None turns off all hand rotations, Default turns on the hand rotations except the twists around the bone, and All turns on all hand rotations.

How to utilize Kinect to interact with GUI buttons and components:

1. Add the InteractionManager to the main camera or to another persistent object in the scene. It is used to control the hand cursor and to detect hand grips, releases and clicks. A grip means a closed hand (thumb over the other fingers), a release means an opened hand, and a hand click is generated when the user’s hand doesn’t move (stays still) for about 2 seconds.
2. Enable the ‘Control Mouse Cursor’-setting of the InteractionManager-component. This setting transfers the position and clicks of the hand cursor to the mouse cursor, this way enabling interaction with the GUI buttons, toggles and other components.
3. If you need drag-and-drop functionality for interaction with the GUI, enable the ‘Control Mouse Drag’-setting of the InteractionManager-component. This setting starts mouse dragging, as soon as it detects hand grip and continues the dragging until hand release is detected. If you enable this setting, you can also click on GUI buttons with a hand grip, instead of the usual hand click (i.e. staying in place, over the button, for about 2 seconds).

How to get the depth- or color-camera textures:

First off, make sure that ‘Compute User Map’-setting of the KinectManager-component is enabled, if you need the depth texture, or ‘Compute Color Map’-setting of the KinectManager-component is enabled, if you need the color camera texture. Then write something like this in the Update()-method of your script:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    Texture2D depthTexture = manager.GetUsersLblTex();
    Texture2D colorTexture = manager.GetUsersClrTex();
    // do something with the textures
}
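
As a usage example, here is a minimal sketch of a script that displays the color-camera texture on a UI RawImage. The ‘colorImage’-field is an assumption, so assign there whatever RawImage you have on your canvas:

using UnityEngine;
using UnityEngine.UI;

public class ShowColorCameraImage : MonoBehaviour
{
    public RawImage colorImage;  // assign a RawImage from your canvas in the Inspector

    void Update()
    {
        KinectManager manager = KinectManager.Instance;

        if(manager && manager.IsInitialized() && colorImage != null)
        {
            // display the color-camera feed; use GetUsersLblTex() for the depth texture instead
            colorImage.texture = manager.GetUsersClrTex();
        }
    }
}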

How to get the position of a body joint:

This is demonstrated in the KinectScripts/Samples/GetJointPositionDemo-script. You can add it as a component to a game object in your scene to see it in action. Just select the needed joint and optionally enable saving to a csv-file. Do not forget to add the KinectManager as a component to a game object in your scene. It is usually a component of the MainCamera in the example scenes. Here is the main part of the demo-script that retrieves the position of the selected joint:

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    if(manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        if(manager.IsJointTracked(userId, (int)joint))
        {
            Vector3 jointPos = manager.GetJointPosition(userId, (int)joint);
            // do something with the joint position
        }
    }
}

How to make a game object rotate as the user:

This is similar to the previous example and is demonstrated in KinectScripts/Samples/FollowUserRotation-script. To see it in action, you can create a cube in your scene and add the script as a component to it. Do not forget to add the KinectManager as component to a game object in your scene. It is usually a component of the MainCamera in the example scenes.
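
If you’d rather not copy the sample, here is a rough sketch of the idea, placed in the Update()-method of a script attached to the game object. It approximates the user’s rotation with the orientation of the spine-base joint; the actual FollowUserRotation-script may do this differently:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized() && manager.IsUserDetected())
{
    long userId = manager.GetPrimaryUserID();

    if(manager.IsJointTracked(userId, (int)KinectInterop.JointType.SpineBase))
    {
        // rotate this game object the same way as the user's torso
        transform.rotation = manager.GetJointOrientation(userId, (int)KinectInterop.JointType.SpineBase);
    }
}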

How to make a game object follow user’s head position and rotation:

You need the KinectManager and FacetrackingManager added as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene. Then, to get the position of the head and orientation of the neck, you need code like this in your script:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    if(manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        if(manager.IsJointTracked(userId, (int)KinectInterop.JointType.Head))
        {
            Vector3 headPosition = manager.GetJointPosition(userId, (int)KinectInterop.JointType.Head);
            Quaternion neckRotation = manager.GetJointOrientation(userId, (int)KinectInterop.JointType.Neck);
            // do something with the head position and neck orientation
        }
    }
}

How to get the face-points’ coordinates:

You need a reference to the respective FaceFrameResult-object. This is demonstrated in the KinectScripts/Samples/GetFacePointsDemo-script. You can add it as a component to a game object in your scene, to see it in action. To get the coordinates of a face point in your script, you need to invoke its public GetFacePoint()-function. Do not forget to add the KinectManager and FacetrackingManager as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene.

How to mix Kinect-captured movement with Mecanim animation

1. Use the AvatarControllerClassic instead of the AvatarController-component. Assign only those joints that should be animated by the sensor.
2. Set the SmoothFactor-setting of AvatarControllerClassic to 0, to apply the detected bone orientations instantly.
3. Create an avatar-body-mask and apply it to the Mecanim animation layer. In this mask, disable Mecanim animations of the Kinect-animated joints mentioned above. Do not disable the root-joint!
4. Enable the ‘Late Update Avatars’-setting of KinectManager (component of MainCamera in the example scenes).
5. Run the scene to check the setup. When a player gets recognized by the sensor, part of his joints will be animated by the AvatarControllerClassic component, and the other part – by the Animator component.

How to add your models to the FittingRoom-demo

1. For each of your fbx-models, import the model and select it in the Assets-view in Unity editor.
2. Select the Rig-tab in Inspector. Set the AnimationType to ‘Humanoid’ and the AvatarDefinition to ‘Create from this model’.
3. Press the Apply-button. Then press the Configure-button to check if all required joints are correctly assigned. The clothing models usually don’t use all joints, which can make the avatar definition invalid. In this case you can manually assign the missing joints (shown in red).
4. Keep in mind: The joint positions in the model must match the structure of the Kinect-joints. You can see them, for instance in the KinectOverlayDemo2. Otherwise the model may not overlay the user’s body properly.
5. Create a sub-folder for your model category (Shirts, Pants, Skirts, etc.) in the FittingRoomDemo/Resources-folder.
6. In the model-category folder, create sub-folders with subsequent numbers (0000, 0001, 0002, etc.), one for each of the models imported in p.1.
7. Move your models into these numerical folders, one model per folder, along with the needed materials and textures. Rename the model’s fbx-file to ‘model.fbx’.
8. You can put a preview image for each model in jpeg-format (100 x 143px, 24bpp) in the respective model folder. Then rename it to ‘preview.jpg.bytes’. If you don’t put a preview image, the fitting-room demo will display ‘No preview’ in the model-selection menu.
9. Open the FittingRoomDemo1-scene.
10. Add a ModelSelector-component for each model category to the KinectController game object. Set its ‘Model category’-setting to be the same as the name of sub-folder created in p.5 above. Set the ‘Number of models’-setting to reflect the number of sub-folders created in p.6 above.
11. The other settings of your ModelSelector-component must be similar to the existing ModelSelector in the demo. I.e. ‘Model relative to camera’ must be set to ‘BackgroundCamera’, ‘Foreground camera’ must be set to ‘MainCamera’, ‘Continuous scaling’ – enabled. The scale-factor settings may be set initially to 1 and the ‘Vertical offset’-setting to 0. Later you can adjust them slightly to provide the best model-to-body overlay.
12. Enable the ‘Keep selected model’-setting of the ModelSelector-component, if you want the selected model to continue overlaying the user’s body after the model category changes. This is useful if there are several categories (i.e. ModelSelectors), for instance for shirts, pants, skirts, etc. In this case the selected shirt model will still overlay the user’s body, when the category changes and the user starts selecting pants, for instance.
13. The CategorySelector-component provides gesture control for changing models and categories, and takes care of switching model categories (e.g. for shirts, pants, ties, etc.) for the same user. There is already a CategorySelector for the 1st user (player-index 0) in the scene, so you don’t need to add more.
14. If you plan for multi-user fitting-room, add one CategorySelector-component for each other user. You may also need to add the respective ModelSelector-components for model categories that will be used by these users, too.
15. Run the scene to ensure that your models can be selected in the list and they overlay the user’s body correctly. Experiment a bit if needed, to find the values of scale-factors and vertical-offset settings that provide the best model-to-body overlay.
16. If you want to turn off the cursor interaction in the scene, disable the InteractionManager-component of KinectController-game object. If you want to turn off the gestures (swipes for changing models & hand raises for changing categories), disable the respective settings of the CategorySelector-component. If you want to turn off or change the T-pose calibration, change the ‘Player calibration pose’-setting of KinectManager-component.
17. You can use the FittingRoomDemo2 scene, to utilize or experiment with a single overlay model. Adjust the scale-factor settings of AvatarScaler to fine tune the scale of the whole body, arm- or leg-bones of the model, if needed. Enable the ‘Continuous Scaling’ setting, if you want the model to rescale on each Update.
18. If the clothing/overlay model uses the Standard shader, set its ‘Rendering mode’ to ‘Cutout’. See this comment below for more information.

How to set up the sensor height and angle

There are two very important settings of the KinectManager-component that influence the calculation of users’ and joints’ space coordinates, hence almost all user-related visualizations in the demo scenes. Here is how to set them correctly:

1. Set the ‘Sensor height’-setting to how high above the ground the sensor is, in meters. The default value is 1, i.e. 1.0 meter above the ground, which may not be your case.
2. Set the ‘Sensor angle’-setting to the tilt angle of the sensor, in degrees. Use positive degrees if the sensor is tilted up, negative degrees if it is tilted down. The default value is 0, i.e. the sensor is not tilted at all.
3. Because it is not so easy to estimate the sensor angle manually, you can use the ‘Auto height angle’-setting to find out this value. Select the ‘Show info only’-option and run the demo-scene. Then stand in front of the sensor. The information on screen will show you the rough height and angle-settings, as estimated by the sensor itself. Repeat this 2-3 times and write down the values you see.
4. Finally, set the ‘Sensor height’ and ‘Sensor angle’ to the estimated values you find best. Set the ‘Auto height angle’-setting back to ‘Dont use’.
5. If you find the height and angle values estimated by the sensor good enough, or if your sensor setup is not fixed, you can set the ‘Auto height angle’-setting to ‘Auto update’. It will update the ‘Sensor height’ and ‘Sensor angle’-settings continuously, when there are users in the field of view of the sensor.
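
If you ever need to adjust these values from a script at runtime (for instance from a settings screen), something like the sketch below should work. It assumes the public sensorHeight and sensorAngle fields of the KinectManager correspond to the Inspector settings mentioned above:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    // assumed public fields, matching the 'Sensor height' & 'Sensor angle'-settings
    manager.sensorHeight = 1.2f;   // sensor is 1.2 meters above the ground
    manager.sensorAngle = -10f;    // sensor is tilted 10 degrees down
}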

Are there any events, when a user is detected or lost

There are no special event handlers for user-detected/user-lost events, but there are two other options you can use:

1. In the Update()-method of your script, invoke the GetUsersCount()-function of the KinectManager and compare the returned value to a previously saved one (e.g. a private int field called usersSaved), like this:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    int usersNow = manager.GetUsersCount();

    if(usersNow > usersSaved)
    {
        // new user detected
    }
    if(usersNow < usersSaved)
    {
        // user lost
    }

    usersSaved = usersNow;
}

2. Create a class that implements KinectGestures.GestureListenerInterface and add it as component to a game object in the scene. It has methods UserDetected() and UserLost(), which you can use as user-event handlers. The other methods could be left empty or return the default value (true). See the SimpleGestureListener or GestureListener-classes, if you need an example.
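
Here is a rough skeleton of such a listener, as a starting point. The method signatures below are an assumption based on the interface at the time of writing; if they don’t compile in your version of the K2-asset, please copy the exact signatures from SimpleGestureListener.cs:

using UnityEngine;

public class UserEventListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public void UserDetected(long userId, int userIndex)
    {
        Debug.Log("User detected: " + userId + ", index: " + userIndex);
    }

    public void UserLost(long userId, int userIndex)
    {
        Debug.Log("User lost: " + userId + ", index: " + userIndex);
    }

    // the gesture-related methods may stay empty or return the default value (true)
    public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos)
    {
    }

    public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        return true;
    }

    public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint)
    {
        return true;
    }
}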

How to process discrete gestures like swipes and poses like hand-raises

Most of the gestures, like SwipeLeft, SwipeRight, Jump, Squat, etc. are discrete. All poses, like RaiseLeftHand, RaiseRightHand, etc. are also considered as discrete gestures. This means these gestures may report progress or not, but all of them get completed or cancelled at the end. Processing these gestures in a gesture-listener script is relatively easy. You need to do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureCompleted() add code to process the discrete gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is detected - process it (for instance, set a flag or execute an action)
}

3. In the GestureCancelled()-function, add code to process the cancellation of the gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is cancelled - process it (for instance, clear the flag)
}

If you need code samples, see the SimpleGestureListener.cs or CubeGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as a component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to process continuous gestures, like ZoomIn, ZoomOut and Wheel

Some of the gestures, like ZoomIn, ZoomOut and Wheel, are continuous. This means these gestures never get fully completed, but only report progress greater than 50%, as long as the gesture is detected. To process them in a gesture-listener script, do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureInProgress() add code to process the continuous gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    if(progress > 0.5f)
    {
        // gesture is detected - process it (for instance, set a flag, get zoom factor or angle)
    }
    else
    {
        // gesture is no more detected - process it (for instance, clear the flag)
    }
}

3. In the GestureCancelled()-function, add code to process the end of the continuous gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is cancelled - process it (for instance, clear the flag)
}

If you need code samples, see the SimpleGestureListener.cs or ModelGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as a component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to utilize visual (VGB) gestures in the K2-asset

The visual gestures, created by the Visual Gesture Builder (VGB) can be used in the K2-asset, too. To do it, follow these steps (and see the VisualGestures-game object and its components in the KinectGesturesDemo-scene):

1. Copy the gestures’ database (xxxxx.gbd) to the Resources-folder and rename it to ‘xxxxx.gbd.bytes’.
2. Add the VisualGestureManager-script as a component to a game object in the scene (see VisualGestures-game object).
3. Set the ‘Gesture Database’-setting of VisualGestureManager-component to the name of the gestures’ database, used in step 1 (‘xxxxx.gbd’).
4. Create a visual-gesture-listener to process the gestures, and add it as a component to a game object in the scene (see the SimpleVisualGestureListener-script).
5. In the GestureInProgress()-function of the gesture-listener add code to process the detected continuous gestures and in the GestureCompleted() add code to process the detected discrete gestures.

How to change the language or grammar for speech recognition

1. Make sure you have installed the needed language pack from here.
2. Set the ‘Language code’-setting of SpeechManager-component, as to the grammar language you need to use. The list of language codes can be found here (see ‘LCID Decimal’).
3. Make sure the ‘Grammar file name’-setting of SpeechManager-component corresponds to the name of the grxml.txt-file in Assets/Resources.
4. Open the grxml.txt-grammar file in Assets/Resources and set its ‘xml:lang’-attribute to the language that corresponds to the language code in step 2.
5. Make the other needed modifications in the grammar file and save it.
6. (Optional since v2.7) Delete the grxml-file with the same name in the root-folder of your Unity project (the parent folder of Assets-folder).
7. Run the scene to check, if speech recognition works correctly.

How to run the fitting-room or overlay demo in portrait mode

1. First off, add 9:16 (or 3:4) aspect-ratio to the Game view’s list of resolutions, if it is missing.
2. Select the 9:16 (or 3:4) aspect ratio of Game view, to set the main-camera output in portrait mode.
3. Open the fitting-room or overlay-demo scene and select each of the BackgroundImage(X)-game object(s). If it has a child object called RawImage, select this sub-object instead.
4. Enable the PortraitBackground-component of each of the selected BackgroundImage object(s). When finished, save the scene.
5. Run the scene and test it in portrait mode.

How to build an exe from ‘Kinect-v2 with MS-SDK’ project

By default, Unity builds the exe (and the respective xxx_Data-folder) in the root folder of your Unity project. It is recommended to use another, empty folder instead. The reason is that building the exe in the folder of your Unity project may cause conflicts between the native libraries used by the editor and the ones used by the exe, if they have different architectures (for instance if the editor is 64-bit, but the exe is 32-bit).

Also, before building the exe, make sure you’ve copied the Assets/Resources-folder from the K2-asset to your Unity project. It contains the needed native libraries and custom shaders. Optionally you can remove the unneeded zip.bytes-files from the Resources-folder. This will save a lot of space in the build. For instance, if you target Kinect-v2 only, you can remove the Kinect-v1 and OpenNi2-related zipped libraries. The exe won’t need them anyway.

How to make the Kinect-v2 package work with Kinect-v1

If you have only the Kinect v2 SDK or the Kinect v1 SDK installed on your machine, the KinectManager should detect the installed SDK and sensor correctly. But in case you have both Kinect SDK 2.0 and SDK 1.8 installed simultaneously, the KinectManager will give preference to the Kinect v2 SDK and your Kinect v1 will not be detected. The reason for this is that SDK 2.0 can also be used in offline mode, i.e. without a sensor attached. In this case you can emulate the sensor by playing recorded files in Kinect Studio 2.0.

If you want to make the KinectManager utilize the appropriate interface, depending on the currently attached sensor, open KinectScripts/Interfaces/Kinect2Interface.cs and at its start change the value of ‘sensorAlwaysAvailable’ from ‘true’ to ‘false’. After this, close and reopen the Unity editor. Then, on each start, the KinectManager will try to detect which sensor is currently attached to your machine and use the respective sensor interface. This way you could switch the sensors (Kinect v2 or v1), as to your preference, but will not be able to use the offline mode for Kinect v2. To utilize the Kinect v2 offline mode again, you need to switch ‘sensorAlwaysAvailable’ back to true.

What do the options of ‘Compute user map’-setting mean

Here are one-line descriptions of the available options:

  • RawUserDepth means that only the raw depth image values, coming from the sensor will be available, via the GetRawDepthMap()-function for instance;
  • BodyTexture means that GetUsersLblTex()-function will return the white image of the tracked users;
  • UserTexture will cause GetUsersLblTex() to return the tracked users’ histogram image;
  • CutOutTexture, combined with enabled ‘Compute color map‘-setting, means that GetUsersLblTex() will return the cut-out image of the users.

All these options (except RawUserDepth) can be tested instantly, if you enable the ‘Display user map‘-setting of KinectManager-component, too.

How to set up the user detection order

There is a ‘User detection order’-setting of the KinectManager-component. You can use it to determine how the user detection should be done, depending on your requirements. Here are short descriptions of the available options:

  • Appearance is selected by default. It means that the player indices are assigned in order of user appearance. The first detected user gets player index 0, the next one gets index 1, etc. If user 0 gets lost, the remaining users are not reordered. The next newly detected user will take its place;
  • Distance means that player indices are assigned depending on distance of the detected users to the sensor. The closest one will get player index 0, the next closest one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the distances to the remaining users;
  • Left to right means that player indices are assigned depending on the X-position of the detected users. The leftmost one will get player index 0, the next leftmost one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the X-positions of the remaining users;

The user-detection area can be further limited with the ‘Min user distance’, ‘Max user distance’ and ‘Max left right distance’-settings, in meters from the sensor. The maximum number of detected users can be limited by lowering the value of the ‘Max tracked users’-setting.

How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS

If you select the MainCamera in the KinectFittingRoom1-demo scene (in v2.10 or above), you will see a component called UserBodyBlender. It is responsible for mixing the clothing model (overlaying the user) with the real-world objects (including the user’s body parts), depending on the distance to the camera. For instance, if your arms or other real-world objects are in front of the model, you will see them overlaying the model, as expected.

You can enable the component to turn on the user’s body-blending functionality. The ‘Depth threshold’-setting may be used to adjust the minimum distance in front of the model (in meters). It determines when a real-world object becomes visible. It is set by default to 0.1m, but you could experiment a bit to see if any other value works better for your models. If the scene performance (in terms of FPS) is not sufficient, and body-blending is not important, you can disable the UserBodyBlender-component to increase performance.

How to build Windows-Store (UWP-8.1) application

To do it, you need at least v2.10.1 of the K2-asset. To build for ‘Windows store’, first select ‘Windows store’ as platform in ‘Build settings’, and press the ‘Switch platform’-button. Then do as follows:

1. Unzip Assets/Plugins-Metro.zip. This will create Assets/Plugins-Metro-folder.
2. Delete the KinectScripts/SharpZipLib-folder.
3. Optionally, delete all zip.bytes-files in Assets/Resources. You won’t need these libraries in Windows-Store builds. All Kinect-v2 libraries reside in the Plugins-Metro-folder.
4. Select ‘File / Build Settings’ from the menu. Add the scenes you want to build. Select ‘Windows Store’ as platform. Select ‘8.1’ as target SDK. Then click the Build-button. Select an empty folder for the Windows-store project and wait for the build to complete.
5. Go to the build-folder and open the generated solution (.sln-file) with Visual studio.
6. Change the default ARM-processor target to ‘x86’. The Kinect sensor is not compatible with ARM processors.
7. Right click ‘References’ in the Project-window and select ‘Add reference’. Select ‘Extensions’ and then the WindowsPreview.Kinect and Microsoft.Kinect.Face libraries. Then press OK.
8. Open the solution’s manifest-file ‘Package.appxmanifest’, go to the ‘Capabilities’-tab and enable ‘Microphone’ and ‘Webcam’ in the left panel. Save the manifest. This is needed to enable the sensor, when the UWP app starts up. Thanks to Yanis Lukes (aka Pendrokar) for providing this info!
9. Build the project. Run it, to test it locally. Don’t forget to turn on Windows developer mode on your machine.

How to work with multiple users

Kinect-v2 can fully track up to 6 users simultaneously. That’s why many of the Kinect-related components, like AvatarController, InteractionManager, model & category-selectors, gesture & interaction listeners, etc. have a setting called ‘Player index’. If set to 0, the respective component will track the 1st detected user. If set to 1, the component will track the 2nd detected user. If set to 2, the 3rd user, etc. The order of user detection may be specified with the ‘User detection order’-setting of the KinectManager (component of the KinectController game object).
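
For instance, to work with the data of a specific user in your own script, you can get his or her user-ID by player index. The GetUserIdByIndex()-call below is an assumption based on the KinectManager API; please check the API reference for the exact function name in your version:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    // get the user-ID of the 2nd detected user (player index 1)
    long userId = manager.GetUserIdByIndex(1);

    if(userId != 0 && manager.IsJointTracked(userId, (int)KinectInterop.JointType.HandRight))
    {
        Vector3 rightHandPos = manager.GetJointPosition(userId, (int)KinectInterop.JointType.HandRight);
        // do something with the 2nd user's right-hand position
    }
}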

How to use the FacetrackingManager

The FacetrackingManager-component may be used for several purposes. First, adding it as a component of the KinectController will provide more precise neck and head tracking, when there are avatars in the scene (humanoid models utilizing the AvatarController-component). If HD face tracking is needed, you can enable the ‘Get face model data’-setting of the FacetrackingManager-component. Keep in mind that using HD face tracking will lower performance and may cause memory leaks, which can make Unity crash after multiple scene restarts. Please use this feature carefully.

In case ‘Get face model data’ is enabled, don’t forget to assign a mesh object (e.g. a Quad) to the ‘Face model mesh’-setting. Pay also attention to the ‘Textured model mesh’-setting. The available options are: ‘None’ means the mesh will not be textured; ‘Color map’ means the mesh will get its texture from the color camera image, i.e. it will reproduce the user’s face; ‘Face rectangle’ means the face mesh will be textured with its material’s Albedo texture, whereas the UV coordinates will match the detected face rectangle.

Finally, you can use the FacetrackingManager public API to get a lot of face-tracking data, like the user’s head position and rotation, animation units, shape units, face model vertices, etc.

How to add background image to the FittingRoom-demo (updated for v2.14 and later)

To replace the color-camera background in the FittingRoom-scene with a background image of your choice, please do as follows:

1. Enable the BackgroundRemovalManager-component of the KinectController-game object in the scene.
2. Make sure the ‘Compute user map’-setting of KinectManager (component of the KinectController, too) is set to ‘Body texture’, and the ‘Compute color map’-setting is enabled.
3. Set the needed background image as texture of the RawImage-component of BackgroundImage1-game object in the scene.
4. Run the scene to check, if it works as expected.

How to move the FPS-avatars of positionally tracked users in VR environment

There are two options for moving first-person avatars in VR-environment (the 1st avatar-demo scene in K2VR-asset):

1. If you use the Kinect’s positional tracking, turn off the Oculus/Vive positional tracking, because their coordinates are different to Kinect’s.
2. If you prefer to use the Oculus/Vive positional tracking:
– enable the ‘External root motion’-setting of the AvatarController-component of the avatar’s game object. This will disable the avatar’s motion according to the Kinect space coordinates.
– enable the HeadMover-component of the avatar’s game object, and assign the MainCamera as ‘Target transform’, to follow the Oculus/Vive position.

Now try to run the scene. If there are issues with the MainCamera used as positional target, do as follows:
– add an empty game object to the scene. It will be used to follow the Oculus/Vive positions.
– assign the newly created game object to the ‘Target transform’-setting of the HeadMover-component.
– add a script to the newly created game object, and in that script’s Update()-function set the object’s transform position programmatically to the current Oculus/Vive position, as shown in the sketch below.
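
Here is a minimal sketch of such a script, under the assumption that you assign the Oculus/Vive-tracked camera to its ‘Vr Camera Transform’-field in the Inspector:

using UnityEngine;

// attach this to the empty game object used as 'Target transform' of the HeadMover
public class FollowVrCameraPosition : MonoBehaviour
{
    public Transform vrCameraTransform;  // assign the Oculus/Vive-tracked camera here

    void Update()
    {
        if(vrCameraTransform != null)
        {
            // follow the positionally tracked VR-camera
            transform.position = vrCameraTransform.position;
        }
    }
}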

How to create your own gestures

For gesture recognition there are two options – visual gestures (created with the Visual Gesture Builder, part of Kinect SDK 2.0) and programmatic gestures, coded in KinectGestures.cs or a class that extends it. The programmatic gestures detection consists mainly of tracking the position and movement of specific joints, relative to some other joints. For more info regarding how to create your own programmatic gestures look at this tip below.

The scenes demonstrating the detection of programmatic gestures are located in the KinectDemos/GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about continuous gestures.

And here is a video on creating and checking for visual gestures. Please check KinectDemos/GesturesDemo/VisualGesturesDemo-scene too, to see how to use visual gestures in Unity. A major issue with the visual gestures is that they usually work in the 32-bit builds only.

How to enable or disable the tracking of inferred joints

First, keep in mind that:
1. There is an ‘Ignore inferred joints’-setting of the KinectManager. The KinectManager is usually a component of the KinectController-game object in the demo scenes.
2. There is a public API method of KinectManager, called IsJointTracked(). This method is utilized by various scripts & components in the demo scenes.

Here is how it works:
The Kinect SDK tracks the positions of all body joints, together with their respective tracking states. These states can be Tracked, NotTracked or Inferred. When the ‘Ignore inferred joints’-setting is enabled, the IsJointTracked()-method returns true when the tracking state is Tracked, and false when the state is NotTracked or Inferred. I.e. only the really tracked joints are considered valid. When the setting is disabled, the IsJointTracked()-method returns true when the tracking state is Tracked or Inferred, and false when the state is NotTracked. I.e. both tracked and inferred joints are considered valid.

How to build exe with the Kinect-v2 plugins provided by Microsoft

In case you’re targeting Kinect-v2 sensor only, and would like to avoid packing all native libraries that come with the K2-asset in the build, as well as unpacking them into the working directory of the executable afterwards, do as follows:

1. Download and unzip the Kinect-v2 Unity Plugins from here.
2. Open your Unity project. Select ‘Assets / Import Package / Custom Package’ from the menu and import only the Plugins-folder from ‘Kinect.2.0.1410.19000.unitypackage’. You can find it in the unzipped package from p.1 above. Please don’t import anything from the ‘Standard Assets’-folder of the unitypackage. All needed standard assets are already present in the K2-asset.
3. If you are using the FacetrackingManager in your scenes, import only the Plugins-folder from ‘Kinect.Face.2.0.1410.19000.unitypackage’ as well. If you are using visual gestures (i.e. VisualGestureManager in your scenes), import only the Plugins-folder from ‘Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage’, too. Again, please don’t import anything from the ‘Standard Assets’-folders of these unitypackages. All needed standard assets are already present in the K2-asset.
4. Delete KinectV2UnityAddin.x64.zip & NuiDatabase.zip (or all zipped libraries) from the K2Examples/Resources-folder. You can see them as .zip-files in the Assets-window, or as .zip.bytes-files in the Windows explorer. You are going to use the Kinect-v2 sensor only, so all these zipped libraries are not needed any more.
5. Delete all dlls in the root-folder of your Unity project. The root-folder is the parent-folder of the Assets-folder of your project, and is not visible in the Editor. You may need to stop the Unity editor first. Delete the NuiDatabase- and vgbtechs-folders in the root-folder, as well. These dlls and folders are no longer needed, because they are now part of the project’s Plugins-folder.
6. Open Unity editor again, load the project and try to run the demo scenes in the project, to make sure they work as expected.
7. If everything is OK, build the executable again. This should work for both x86 and x86_64-architectures, as well as for Windows-Store, SDK 8.1.

How to build Windows-Store (UWP-10) application

To do it, you need at least v2.12.2 of the K2-asset. Then follow these steps:

1. (optional, as of v2.14.1) Delete the KinectScripts/SharpZipLib-folder. It is not needed for UWP. If you leave it, it may cause syntax errors later.
2. Open ‘File / Build Settings’ in Unity editor, switch to ‘Windows store’ platform and select ‘Universal 10’ as SDK. Make sure ‘.Net’ is selected as scripting backend. Optionally enable the ‘Unity C# Project’ and ‘Development build’-settings, if you’d like to edit the Unity scripts in Visual studio later.
3. Press the ‘Build’-button, select output folder and wait for Unity to finish exporting the UWP-Visual studio solution.
4. Close or minimize the Unity editor, then open the exported UWP solution in Visual studio.
5. Select x86 or x64 as target platform in Visual studio.
6. Open ‘Package.appmanifest’ of the main project, and on tab ‘Capabilities’ enable ‘Microphone’ & ‘Webcam’. These may be enabled in the Windows-store’s Player settings in Unity, too.
7. If you have enabled the ‘Unity C# Project’-setting in p.2 above, right click on ‘Assembly-CSharp’-project in the Solution explorer, select ‘Properties’ from the context menu, and then select ‘Windows 10 Anniversary Edition (10.0; Build 14393)’ as ‘Target platform’. Otherwise you will get compilation errors.
8. Build and run the solution, on the local or remote machine. It should work now.

Please mind that the FacetrackingManager and SpeechRecognitionManager-components, and hence the scenes that use them, will not work with the current version of the K2-UWP interface.

How to run the projector-demo scene (v2.13 and later)

To run the KinectProjectorDemo-scene, you need to calibrate the projector to the Kinect sensor first. To do it, please follow these steps:

1. To do the needed sensor-projector calibration, you first need to download RoomAliveToolkit, and then open and build the ProCamCalibration-project in Microsoft Visual Studio 2015 or later. For your convenience, here is a ready-made build of the needed executables, made with VS-2015.
2. Then open the ProCamCalibration-page and follow carefully the instructions in ‘Tutorial: Calibrating One Camera and One Projector’, from ‘Room setup’ to ‘Inspect the results’.
3. After the ProCamCalibration finishes successfully, copy the generated calibration xml-file to the KinectDemos/ProjectorDemo/Resources-folder of the K2-asset.
4. Open the KinectProjectorDemo-scene in Unity editor, select the MainCamera-game object in Hierarchy, and drag the calibration xml-file generated by ProCamCalibrationTool to the ‘Calibration Xml’-setting of its ProjectorCamera-component. Please also check, if the value of ‘Proj name in config’-setting is the same as the projector name set in the calibration xml-file (usually ‘0’).
5. Set the projector to duplicate the main screen, enable ‘Maximize on play’ in Editor (or build the scene), and run the scene in full-screen mode. Walk in front of the sensor, to check if the projected skeleton overlays correctly the user’s body. You can also try to enable ‘U_Character’ game object in the scene, to see how a virtual 3D-model can overlay the user’s body at runtime.

How to render background and the background-removal image on the scene background

First off, if you want to replace the color-camera background in the FittingRoom-demo scene with the background-removal image, please see and follow these steps.

For all other demo-scenes: You can replace the color-camera image on scene background with the background-removal image, by following these (rather complex) steps:

1. Create an empty game object in the scene, name it BackgroundImage1, and add ‘GUI Texture’-component to it (this will change after the release of Unity 2017.2, because it deprecates GUI-Textures). Set its Transform position to (0.5, 0.5, 0) to center it on the screen. This object will be used to render the scene background, so you can select a suitable picture for the Texture-setting of its GUITexture-component. If you leave its Texture-setting to None, a skybox or solid color will be rendered as scene background.

2. In a similar way, create a BackgroundImage2-game object. This object will be used to render the detected users, so leave the Texture-setting of its GUITexture-component to None (it will be set at runtime by a script), and set the Y-scale of the object to -1. This is needed to flip the rendered texture vertically. The reason: Unity textures are rendered bottom to top, while the Kinect images are top to bottom.

3. Add KinectScripts/BackgroundRemovalManager-script as component to the KinectController-game object in the scene (if it is not there yet). This is needed to provide the background removal functionality to the scene.

4. Add KinectDemos/BackgroundRemovalDemo/Scripts/ForegroundToImage-script as component to the BackgroundImage2-game object. This component will set the foreground texture, created at runtime by the BackgroundRemovalManager-component, as Texture of the GUI-Texture component (see p2 above).

Now the tricky part: Two more cameras are needed to display the user image over the scene background – one to render the background picture, and a 2nd one to render the user image on top of it. Finally, the main camera renders the 3D objects on top of the background cameras. Cameras in Unity have a setting called ‘Culling Mask’, where you can set the layers rendered by each camera. There are also two more settings, Depth and ‘Clear flags’, that may be used to change the cameras’ rendering order.

5. In our case, two extra layers will be needed for the correct rendering of background cameras. Select ‘Add layer’ from the Layer-dropdown in the top-right corner of the Inspector and add 2 layers – ‘BackgroundLayer1’ and ‘BackgroundLayer2’, as shown below. Unfortunately, when Unity exports the K2-package, it doesn’t export the extra layers too. That’s why the extra layers are missing in the demo-scenes.

6. After you have added the extra layers, select the BackgroundImage1-object in Hierarchy and set its layer to ‘BackgroundLayer1’. Then select the BackgroundImage2 and set its layer to ‘BackgroundLayer2’.

7. Create a camera-object in the scene and name it BackgroundCamera1. Set its CullingMask to ‘BackgroundLayer1’ only. Then set its ‘Depth’-setting to (-2) and its ‘Clear flags’-setting to ‘Skybox’ or ‘Solid color’. This means this camera will render first, will clear the output and then render the texture of BackgroundImage1. Don’t forget to disable its AudioListener-component, too. Otherwise, expect endless warnings in the console, regarding multiple audio listeners in the scene.

8. Create a 2nd camera-object and name it BackgroundCamera2. Set its CullingMask to ‘BackgroundLayer2’ only, its ‘Depth’ to (-1) and its ‘Clear flags’ to ‘Depth only’. This means this camera will render 2nd (because -1 > -2), will not clear the previous camera rendering, but instead render the BackgroundImage2 texture on top of it. Again, don’t forget to disable its AudioListener-component.

9. Finally, select the ‘Main Camera’ in the scene. Set its ‘Depth’ to 0 and ‘Clear flags’ to ‘Depth only’. In its ‘Culling mask’ disable ‘BackgroundLayer1’ and ‘BackgroundLayer2’, because they are already rendered by the background cameras. This way the main camera will render all other layers in the scene, on top of the background cameras (depth: 0 > -1 > -2).

If you need a practical example of the above setup, please look at the objects, layers and cameras of the KinectDemos/BackgroundRemovalDemo/KinectBackgroundRemoval1-demo scene.

How to run the demo scenes on non-Windows platforms

Starting with v2.14 of the K2-asset, you can run and build many of the demo-scenes on non-Windows platforms. In this case you can utilize the KinectDataServer and KinectDataClient components, to transfer the Kinect body and interaction data over the network. The same approach is used by the K2VR-asset. Here is what to do:

1. Add KinectScripts/KinectDataClient.cs as component to KinectController-game object in the client scene. It will replace the direct connection to the sensor with connection to the KinectDataServer-app over the network.
2. On the machine, where the Kinect-sensor is connected, run KinectDemos/KinectDataServer/KinectDataServer-scene or download the ready-built KinectDataServer-app for the same version of Unity editor, as the one running the client scene. The ready-built KinectDataServer-app can be found on this page.
3. Make sure the KinectDataServer and the client scene run in the same subnet. This is needed, if you’d like the client to discover automatically the running instance of KinectDataServer. Otherwise you would need to set manually the ‘Server host’ and ‘Server port’-settings of the KinectDataClient-component.
4. Run the client scene to make sure it connects to the server. If it doesn’t, check the console for error messages.
5. If the connection between the client and server is OK, and the client scene works as expected, build it for the target platform and test it there too.

How to workaround the user tracking issue, when the user is turned back

Starting with v2.14 of the K2-asset you can (at least roughly) work around the user tracking issue, when the user is turned back. Here is what to do:

1. Add FacetrackingManager-component to your scene, if there isn’t one there already. The face-tracking is needed for front & back user detection.
2. Enable the ‘Allow turn arounds’-setting of KinectManager. The KinectManager is component of KinectController-game object in all demo scenes.
3. Run the scene to test it. Keep in mind this feature is only a workaround (not a solution) for an issue in the Kinect SDK. The issue is that, by design, Kinect tracks correctly only users who face the sensor. Side tracking is not smooth either. And finally, this workaround is experimental and may not work in all cases.

How to get the full scene depth image as texture

If you’d like to get the full scene depth image, instead of user-only depth image, please follow these steps:

1. Open Resources/DepthShader.shader and uncomment the commented-out else-part of the ‘if’-statement near the end of the shader. Save the shader and go back to the Unity editor.
2. Make sure the ‘Compute user map’-setting of the KinectManager is set to ‘User texture’. KinectManager is component of the KinectController-game object in all demo scenes.
3. Optionally enable the ‘Display user map’-setting of KinectManager, if you want to see the depth texture on screen.
4. You can also get the depth texture by calling ‘KinectManager.Instance.GetUsersLblTex()’ in your scripts, and then use it the way you want.

Some useful hints regarding AvatarController and AvatarScaler

The AvatarController-component moves the joints of the humanoid model it is attached to, according to the user’s movements in front of the Kinect-sensor. The AvatarScaler-component (used mainly in the fitting-room scenes) scales the model to match the user in terms of height, arm length, etc. Here are some useful hints regarding these components:

1. If you need the avatar to move around its initial position, make sure the ‘Pos relative to camera’-setting of its AvatarController is set to ‘None’.
2. If ‘Pos relative to camera’ references a camera instead, the avatar’s position with respect to that camera will be the same as the user’s position with respect to the Kinect sensor.
3. If ‘Pos relative to camera’ references a camera and ‘Pos rel overlay color’-setting is enabled too, the 3d position of avatar is adjusted to overlay the user on color camera feed.
4. In this last case, if the model has AvatarScaler component too, you should set the ‘Foreground camera’-setting of AvatarScaler to the same camera. Then scaling calculations will be based on the adjusted (overlayed) joint positions, instead of on the joint positions in space.
5. The ‘Continuous scaling’-setting of AvatarScaler determines whether the model scaling should take place only once when the user is detected (when the setting is disabled), or continuously – on each update (when the setting is enabled).

If you need the avatar to obey physics and gravity, disable the ‘Vertical movement’-setting of the AvatarController-component. Disable the ‘Grounded feet’-setting too, if it is enabled. Then enable the ‘Freeze rotation’-setting of its Rigidbody-component for all axes (X, Y & Z). Make sure the ‘Is Kinematic’-setting is disabled as well, to make the physics control the avatar’s rigid body.

If you want to stop the sensor control of the humanoid model in the scene, you can remove the AvatarController-component of the model. If you want to resume the sensor control of the model, add the AvatarController-component to the humanoid model again. After you remove or add this component, don’t forget to call ‘KinectManager.Instance.refreshAvatarControllers();’, to update the list of avatars KinectManager keeps track of.
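
In code, stopping and resuming the sensor control could look like this minimal sketch, where ‘modelObject’ is assumed to be a reference to the humanoid model in your scene:

// stop the sensor control of the humanoid model
AvatarController avatarCtrl = modelObject.GetComponent<AvatarController>();
if(avatarCtrl != null)
{
    Destroy(avatarCtrl);
    // note: Destroy() takes effect at the end of the frame, so you may prefer
    // to call refreshAvatarControllers() in the next frame
    KinectManager.Instance.refreshAvatarControllers();
}

// ... and later, to resume the sensor control of the model
modelObject.AddComponent<AvatarController>();
KinectManager.Instance.refreshAvatarControllers();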

How to setup the K2-package (v2.16 or later) to work with Orbbec Astra sensors (deprecated – use Nuitrack)

1. Go to https://orbbec3d.com/develop/ and click on ‘Download Astra Driver and OpenNI 2’. Here is the shortcut: http://www.orbbec3d.net/Tools_SDK_OpenNI/3-Windows.zip
2. Unzip the downloaded file, go to ‘Sensor Driver’-folder and run SensorDriver_V4.3.0.4.exe to install the Orbbec Astra driver.
3. Connect the Orbbec Astra sensor. If the driver is installed correctly, you should see it in the Device Manager, under ‘Orbbec’.
4. If you have ‘Kinect SDK 2.0’ installed on the same machine, open KinectScripts/Interfaces/Kinect2Interface.cs and change ‘sensorAlwaysAvailable = true;’ at the beginning of the class to ‘sensorAlwaysAvailable = false;’. More information about turning off the K2-sensor-always-available flag can be found in this tip above.
5. Run one of the avatar-demo scenes to check if the Orbbec Astra interface works. The sensor should light up and the user(s) should be detected.

How to setup the K2-asset (v2.17 or later) to work with Nuitrack body tracking SDK (updated 11.Jun.2018)

1. To install Nuitrack SDK, follow the instructions on this page, for your respective platform. Nuitrack installation archives can be found here.
2. Connect the sensor, go to [NUITRACK_HOME]/activation_tool-folder and run the Nuitrack-executable. Press the Test-button at the top. You should see the depth stream coming from the sensor. And if you move in front of the sensor, you should see how Nuitrack SDK tracks your body and joints.
3. If you can’t see the depth image and body tracking, when the sensor is connected, this would mean Nuitrack SDK is not working properly. Close the Nuitrack-executable, go to [NUITRACK_HOME]/bin/OpenNI2/Drivers and delete (or move somewhere else) the SenDuck-driver and its ini-file. Then go back to step 2 above, and try again.
4. If you have ‘Kinect SDK 2.0‘ installed on the same machine, look at this tip, to see how to turn off the K2-sensor always-available flag.
5. Please mind that you can expect crashes while using the Nuitrack SDK with Unity. The two most common crash-causes are: a) the sensor is not connected when you start the scene; b) you’re using the Nuitrack trial version, which stops after 3 minutes of scene run and causes a Unity crash as a side effect.
6. If you buy a Nuitrack license, don’t forget to import it into Nuitrack’s activation tool. On Windows this is: <nuitrack-home>\activation_tool\Nuitrack.exe. You can use the same app to test the currently connected sensor, as well. If everything works, you are ready to test the Nuitrack interface in Unity.
7. Run one of the avatar-demo scenes to check if the Nuitrack interface, the sensor depth stream and the Nuitrack body tracking work. Run the color-collider demo scene to check if the color stream works, as well.
8. Please mind: The scenes that rely on color image overlays may or may not work correctly. This may be fixed in future K2-asset updates.

How to control Keijiro’s Skinner-avatars with the Avatar-Controller component

1. Download Keijiro’s Skinner project from its GitHub-repository.
2. Import the K2-asset from Unity asset store into the same project. Delete K2Examples/KinectDemos-folder. The demo scenes are not needed here.
3. Open Assets/Test/Test-scene. Disable Neo-game object in Hierarchy. It is not really needed.
4. Create an empty game object in Hierarchy and name it KinectController, to be consistent with the other demo scenes. Add K2Examples/KinectScripts/KinectManager.cs as component to this object. The KinectManager-component is needed by all other Kinect-related components.
5. Select ‘Neo (Skinner Source)’-game object in Hierarchy. Delete ‘Mocaps’ from the Controller-setting of its Animator-component, to prevent playing the recorded mo-cap animation, when the scene starts.
6. Press ‘Select’ below the object’s name, to find the model’s asset in the project. Disable the ‘Optimize game objects’-setting on its Rig-tab, and make sure its rig is Humanoid. Otherwise the AvatarController will not find the model’s joints it needs to control.
7. Add K2Examples/KinectScripts/AvatarController-component to ‘Neo (Skinner Source)’-game object in the scene, and enable its ‘Mirrored movement’ and ‘Vertical movement’-settings. Make sure the object’s transform rotation is (0, 180, 0).
8. Optionally, disable the script components of ‘Camera tracker’, ‘Rotation’, ‘Distance’ & ‘Shake’-parent game objects of the main camera in the scene, if you’d like to prevent the camera’s own animated movements.
9. Run the scene and start moving in front of the sensor, to see the effect. Try the other skinner renderers as well. They are children of the ‘Skinner Renderers’-game object in the scene. If you prefer to set up the components from a script instead of the editor, see the sketch after this list.
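
In case you prefer to do steps 4-7 from a script instead of the editor (for example, when the Skinner source object is instantiated at runtime), here is a rough sketch. It assumes the AvatarController exposes public mirroredMovement- and verticalMovement-fields that correspond to the inspector settings mentioned above – please verify the field names in your K2-asset version before using it.

using UnityEngine;

public class SkinnerAvatarSetup : MonoBehaviour
{
    public GameObject skinnerSource;    // drag the 'Neo (Skinner Source)' object here

    void Start()
    {
        // step 4: make sure there is a KinectManager in the scene
        if (KinectManager.Instance == null)
        {
            gameObject.AddComponent<KinectManager>();
        }

        // step 7: add the AvatarController to the humanoid model
        AvatarController avatar = skinnerSource.AddComponent<AvatarController>();
        avatar.mirroredMovement = true;     // assumed field name of the 'Mirrored movement'-setting
        avatar.verticalMovement = true;     // assumed field name of the 'Vertical movement'-setting

        // the model should face the camera, as in the demo scenes
        skinnerSource.transform.rotation = Quaternion.Euler(0f, 180f, 0f);
    }
}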

How to track a ball hitting a wall (hints)

This is a question I was asked quite a lot recently, because there are many possibilities for interactive playgrounds out there. For instance: virtual football or basketball shooting, kids throwing balls at projected animals on the wall, people stepping on virtual floor, etc. Here are some hints how to achieve it:

1. The only thing you need in this case, is to process the raw depth image coming from the sensor. You can get it by calling KinectManager.Instance.GetRawDepthMap(). It is an array of short-integers (DepthW x DepthH in size), representing the distance to the detected objects for each point of the depth image, in mm.
2. You know the distance from the sensor to the wall in meters, hence in mm too. It is a constant, so you can filter out all depth points that are more than 1-2 meters (or less, depending on your setup) away from the wall, i.e. closer to the sensor. They are of no interest here, because they are too far from the wall. You need to experiment a bit to find the exact filtering distance.
3. Use some CV algorithm to locate the centers of the blobs of remaining, unfiltered depth points. There may be only one blob in case of one ball, or many blobs in case of many balls, or people walking on the floor.
4. When these blobs (and their respective centers) are at maximum distance, close to the fixed distance to the wall, this would mean the ball(s) have hit the wall.
5. Map the depth coordinates of the blob centers to color-camera coordinates by using KinectManager.Instance.MapDepthPointToColorCoords(), and you will have the screen point of impact, or use KinectManager.Instance.MapDepthPointToSpaceCoords(), if you prefer to get the 3D position of the ball at the moment of impact. If you are not sure how to do the sensor-to-projector calibration, look at this tip. A minimal sketch of steps 1-4 is shown after this list.
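
And here is a minimal sketch of steps 1-4, assuming a single ball and a known, fixed sensor-to-wall distance (the wallDistanceMm-, hitToleranceMm- and filterBandMm-values below are hypothetical and need tuning for your setup). For simplicity it takes the average of the unfiltered depth points as the blob center; for multiple balls you would need a real blob-detection (connected components) pass instead.

using UnityEngine;

public class BallHitDetector : MonoBehaviour
{
    public float wallDistanceMm = 3000f;    // measured sensor-to-wall distance, in mm (hypothetical value)
    public float hitToleranceMm = 100f;     // a blob this close to the wall counts as a hit
    public float filterBandMm = 1500f;      // ignore points farther than ~1.5 m in front of the wall
    public int depthWidth = 512;            // Kinect-v2 depth image width; adjust for other sensors

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        var depthMap = manager.GetRawDepthMap();    // distance in mm for each depth-image point
        if (depthMap == null)
            return;

        // step 2: keep only the points within the band close to the wall
        float minDepth = wallDistanceMm - filterBandMm;
        long sumX = 0, sumY = 0, sumDepth = 0, count = 0;

        for (int i = 0; i < depthMap.Length; i++)
        {
            int depth = depthMap[i];
            if (depth == 0 || depth < minDepth || depth >= wallDistanceMm)
                continue;    // invalid, too far from the wall, or the wall itself

            sumX += i % depthWidth;
            sumY += i / depthWidth;
            sumDepth += depth;
            count++;
        }

        if (count < 20)    // too few points - probably just noise
            return;

        // step 3 (simplified): the blob center is the average of the remaining points
        Vector2 blobCenter = new Vector2(sumX / (float)count, sumY / (float)count);
        float blobDepth = sumDepth / (float)count;

        // step 4: the blob is (almost) at the wall distance, i.e. the ball has hit the wall
        if (wallDistanceMm - blobDepth <= hitToleranceMm)
        {
            Debug.Log("Ball hit near depth point " + blobCenter + " at " + blobDepth + " mm");
            // step 5: map blobCenter to color or space coordinates here, as described above
        }
    }
}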

How to create your own programmatic gestures

The programmatic gestures are implemented in KinectScripts/KinectGestures.cs, or in a class that extends it. The detection of a gesture consists of checking for gesture-specific poses in the different gesture states. Look below for more information.

1. Open KinectScripts/KinectGestures.cs and add the name of your gesture(s) to the Gestures-enum. As you probably know, the enums in C# cannot be extended. This is why you need to modify the enum to add your unique gesture names. Alternatively, you can use the predefined UserGestureX-names for your gestures, if you prefer not to modify KinectGestures.cs.
2. Find the CheckForGesture()-method in the opened class and add case(s) for the new gesture(s) at the end of its internal switch. It will contain the code that detects the gesture.
3. In the gesture-case, add an internal switch that checks for the user pose in the respective state. See the code of the other simple gestures (like RaiseLeftHand, RaiseRightHand, SwipeLeft or SwipeRight), if you need an example.

In CheckForGesture() you have access to the jointsPos-array, containing all joint positions, and the jointsTracked-array, showing whether the respective joints are currently tracked or not. The joint positions are in world coordinates, in meters.

The gesture detection code usually consists of checking for specific user poses in the current gesture state. The gesture detection always starts with the initial state 0. At this state you should check if the gesture has started. For instance, if the tracked joint (hand, foot or knee) is positioned properly relative to some other joint (like body center, hip or shoulder). If it is, this means the gesture has started. Save the position of the tracked joint, the current time and increment the state to 1. All this may be done by calling SetGestureJoint().

Then, at the next state (1), you should check if the gesture continues successfully or not. For instance, if the tracked joint has moved as expected relative to the other joint, and within the expected time frame in seconds. If it’s not, cancel the gesture and start over from state 0. This could be done by calling SetGestureCancelled()-method.

Otherwise, if this is the last expected state, consider the gesture completed and call CheckPoseComplete() with 0 as its last parameter (i.e. don’t wait), to mark the gesture as complete. In case of gesture cancellation or completion, the gesture listeners will be notified.

If the gesture is successful so far, but not yet completed, call SetGestureJoint() again to save the current joint position and timestamp, as well as to increment the gesture state. Then go on with the next gesture-state processing, until the gesture gets completed. It would also be good to set the progress of the gesture in the gestureData-structure, when the gesture consists of more than two states.
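
Here is a rough sketch of such a gesture case – a hypothetical ‘raise both hands’-style pose, using one of the predefined UserGestureX-names and following the state pattern described above. The jointsPos/jointsTracked-arrays and the SetGestureJoint()/CheckPoseComplete()-helpers are the ones mentioned in this section, but the joint-index variable names and the exact helper parameters may differ between K2-asset versions, so compare it with the existing simple gesture cases before copying.

// inside the switch(gesture) of CheckForGesture(), assuming the joint-index variables
// (leftHandIndex, rightHandIndex, leftShoulderIndex, rightShoulderIndex) are defined
// near the top of the method, as they are for the existing gestures
case Gestures.UserGesture1:    // "raise both hands above the shoulders"
    switch (gestureData.state)
    {
        case 0:    // initial state - check if the gesture has started
            if (jointsTracked[leftHandIndex] && jointsTracked[leftShoulderIndex] &&
                jointsTracked[rightHandIndex] && jointsTracked[rightShoulderIndex] &&
                (jointsPos[leftHandIndex].y - jointsPos[leftShoulderIndex].y) > 0.1f &&
                (jointsPos[rightHandIndex].y - jointsPos[rightShoulderIndex].y) > 0.1f)
            {
                // save the tracked joint and the timestamp, and switch to state 1
                SetGestureJoint(ref gestureData, timestamp, rightHandIndex, jointsPos[rightHandIndex]);
            }
            break;

        case 1:    // check if the pose is still held
            bool isInPose = jointsTracked[leftHandIndex] && jointsTracked[leftShoulderIndex] &&
                jointsTracked[rightHandIndex] && jointsTracked[rightShoulderIndex] &&
                (jointsPos[leftHandIndex].y - jointsPos[leftShoulderIndex].y) > 0.1f &&
                (jointsPos[rightHandIndex].y - jointsPos[rightShoulderIndex].y) > 0.1f;

            // last parameter 0 means: don't wait, mark the gesture as complete as soon as the pose is detected
            Vector3 jointPos = jointsPos[gestureData.joint];
            CheckPoseComplete(ref gestureData, timestamp, jointPos, isInPose, 0f);
            break;
    }
    break;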

The demo scenes related to checking for programmatic gestures are located in the KinectDemos/GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about the continuous gestures.

More tips regarding listening for discrete and continuous programmatic gestures in Unity scenes can be found above.

What is the file-format used by the KinectRecorderPlayer-component (KinectRecorderDemo)

The KinectRecorderPlayer-component can record or replay body-recording files. These are text files, where each line represents a body frame at a specific moment in time. You can use this format to replay or analyze the body-frame recordings in your own tools. Here is the format of each line. See the sample body frames below, for reference.

0. time in seconds, since the start of recording, followed by ‘|’. All other fields are separated by ‘,’.
This value is used by the KinectRecorderPlayer-component for time-sync, when it needs to replay the body recording.

1. body frame identifier – should be ‘kb’.
2. body-frame timestamp, coming from the Kinect SDK (ignored by the KinectManager).
3. maximum number of tracked bodies (6).
4. maximum number of tracked body joints (25).

Then the data for each body follows (6 times):
5. body tracking flag – 1 if the body is tracked, 0 if it is not tracked (the 5 zeros at the end of the sample lines below stand for the 5 missing bodies).

If the body is tracked, the body ID and the data for all its joints follow. If it is not tracked, the body ID and the joint data (6-8) are skipped.
6. body ID

Then the body-joint data follows (25 times, for all body joints, ordered by JointType – see KinectScripts/KinectInterop.cs):
7. joint tracking state – 0 means not-tracked; 1 – inferred; 2 – tracked.

If the joint is inferred or tracked, the joint position data follows. If it is not tracked, the joint position data (8) is skipped.
8. joint position data – X, Y & Z.

And here are two body-frame samples, for reference:

0.774|kb,101856122898130000,6,25,1,72057594037928806,2,-0.415,-0.351,1.922,2,-0.453,-0.058,1.971,2,-0.488,0.223,2.008,2,-0.450,0.342,2.032,2,-0.548,0.115,1.886,1,-0.555,-0.047,1.747,1,-0.374,-0.104,1.760,1,-0.364,-0.105,1.828,2,-0.330,0.103,2.065,2,-0.262,-0.100,1.963,2,-0.363,-0.068,1.798,1,-0.416,-0.078,1.789,2,-0.457,-0.334,1.847,2,-0.478,-0.757,1.915,2,-0.467,-1.048,1.943,2,-0.365,-1.043,1.839,2,-0.361,-0.356,1.929,2,-0.402,-0.663,1.795,1,-0.294,-1.098,1.806,1,-0.218,-1.081,1.710,2,-0.480,0.154,2.001,2,-0.335,-0.109,1.840,2,-0.338,-0.062,1.804,2,-0.450,-0.067,1.736,2,-0.435,-0.031,1.800,0,0,0,0,0

1.710|kb,101856132898750000,6,25,1,72057594037928806,2,-0.416,-0.351,1.922,2,-0.453,-0.059,1.972,2,-0.487,0.223,2.008,2,-0.449,0.342,2.032,2,-0.542,0.116,1.881,1,-0.555,-0.047,1.748,1,-0.374,-0.102,1.760,1,-0.364,-0.104,1.826,2,-0.327,0.102,2.063,2,-0.262,-0.100,1.963,2,-0.363,-0.065,1.799,2,-0.415,-0.071,1.785,2,-0.458,-0.334,1.848,2,-0.477,-0.757,1.914,1,-0.483,-1.116,2.008,1,-0.406,-1.127,1.917,2,-0.361,-0.356,1.928,2,-0.402,-0.670,1.796,1,-0.295,-1.100,1.805,1,-0.218,-1.083,1.710,2,-0.480,0.154,2.001,2,-0.334,-0.106,1.840,2,-0.339,-0.061,1.799,2,-0.453,-0.062,1.731,2,-0.435,-0.020,1.798,0,0,0,0,0
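
And here is a minimal sketch of parsing such a line in C#, following the field layout above. It is a simplified stand-alone parser for your own tools (not the SetBodyFrameFromCsv()-code used by the K2-asset itself) and extracts only the frame time and the joint positions of the tracked bodies.

using System.Collections.Generic;
using System.Globalization;
using UnityEngine;

public static class BodyFrameParser
{
    // parses one body-recording line; returns the joint positions per tracked body ID
    public static Dictionary<string, Vector3[]> ParseLine(string line, out float time)
    {
        var bodies = new Dictionary<string, Vector3[]>();
        time = 0f;

        string[] parts = line.Split('|');    // the time field is separated by '|'
        if (parts.Length < 2)
            return bodies;
        time = float.Parse(parts[0], CultureInfo.InvariantCulture);

        string[] fields = parts[1].Split(',');    // all other fields are separated by ','
        int i = 0;

        if (fields[i++] != "kb")    // body frame identifier
            return bodies;

        i++;    // skip the Kinect SDK timestamp
        int bodyCount = int.Parse(fields[i++]);     // max tracked bodies (6)
        int jointCount = int.Parse(fields[i++]);    // max tracked body joints (25)

        for (int b = 0; b < bodyCount; b++)
        {
            int bodyTracked = int.Parse(fields[i++]);    // body tracking flag
            if (bodyTracked != 1)
                continue;    // untracked body - only the flag is present

            string bodyId = fields[i++];
            Vector3[] joints = new Vector3[jointCount];

            for (int j = 0; j < jointCount; j++)
            {
                int jointState = int.Parse(fields[i++]);    // 0 - not tracked, 1 - inferred, 2 - tracked
                if (jointState == 0)
                    continue;    // no position data for not-tracked joints

                float x = float.Parse(fields[i++], CultureInfo.InvariantCulture);
                float y = float.Parse(fields[i++], CultureInfo.InvariantCulture);
                float z = float.Parse(fields[i++], CultureInfo.InvariantCulture);
                joints[j] = new Vector3(x, y, z);
            }

            bodies[bodyId] = joints;
        }

        return bodies;
    }
}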

How to enable user gender and age detection in KinectFittingRoom1-demo scene

You can utilize the cloud face detection in KinectFittingRoom1-demo scene, if you’d like to detect the user’s gender and age, and properly adjust the model categories for him or her. The CloudFaceDetector-component uses Azure Cognitive Services for user-face detection and analysis. These services are free of charge, if you don’t exceed a certain limit (30000 requests per month and 20 per minute). Here is how to do it:

1. Go to this page and press the big blue ‘Get API Key’-button next to ‘Face API’. See this screenshot, if you need more details.
2. You will be asked to sign-in with your Microsoft account and select your Azure subscription. At the end you should land on this page.
3. Press the ‘Create a resource’-button at the upper left part of the dashboard, then select ‘AI + Machine Learning’ and then ‘Face’. You need to give the Face-service a name & resource group and select endpoint (server address) near you. Select the free payment tier, if you don’t plan a bulk of requests. Then create the service. See this screenshot, if you need more details.
4. After the Face-service is created and deployed, select ‘All resources’ at the left side of the dashboard, then the name of the created Face-service from the list of services. Then select ‘Quick start’ from the service menu, if not already selected. Once you are there, press the ‘Keys’-link and copy one of the provided subscription keys. Don’t forget to write down the first part of the endpoint address, as well. These parameters will be needed in the next step. See this screenshot, if you need more details.
5. Go back to Unity, open the KinectFittingRoom1-demo scene and select the CloudFaceController-game object in Hierarchy. Enter the first part of the endpoint address (written down in step 4 above) into ‘Face service location’, and paste the copied subscription key into ‘Face subscription key’. See this screenshot, if you need more details.
6. Finally, select the KinectController-game object in Hierarchy, find its ‘Category Selector’-component and enable the ‘Detect gender age’-setting. See this screenshot, if you need more details.
7. That’s it. Save the scene and run it to try it out. Now, after the T-pose detection, the user’s face will be analyzed and you will get information regarding the user’s gender and age at the lower left part of the screen.
8. If everything is OK, you can setup the model selectors in the scene to be available only for users with specific gender and age range, as needed. See the ‘Model gender’, ‘Minimum age’ and ‘Maximum age’-settings of the available ModelSelector-components.

 

 

1,098 thoughts on “Kinect v2 Tips, Tricks and Examples”

  1. Hi Rumen

    is it possible to place Kinect in a vertical position and get it work with this project? If its possible, what are the steps?

    • Hi, you can get the positions of the left and right shoulders from KinectManager, then calculate the distance between them and add some offset (because the joints are not at the body edges). See the Torso4-value in KinectHeightEstimator-scene (folder: KinectDemos/VariousDemos) as well, but it has some intrinsic issues and is not optimized for shoulders.

      • okay thanks for your reply. Actually i need to take estimated measurement of user, not perfect, just for estimation. When i used your height estimator scene, while standing in tpose the slice of torso 4 calculated the whole arm depending on what clothes are you wearing.

        the question is what is the best way i can get estimated measurement of body?

      • Yes, I said in my previous answer there are intrinsic issues in the height-estimator demo. It uses the user pixels in the depth image to calculate the measures. The best way, as to me, is to use the positions of the respective (tracked) joints and estimate the distances between them. For instance shoulder to shoulder, shoulder to elbow + elbow to wrist = arm length, head to ankle + offset = height, etc.

  2. Hi Rumen!
    I’ll give you one good and one bad.

    You can find “Depth2ColorRegistration” Tap in \nuitrack\data\nuitrack.config,
    It’s default value is false, but you can change it into ‘true’ to take registration between color and depth.
    It’s good news.

    Bad one is same thing. performance is dramatically dropped.

    • Hi, thank you for the info! Are you using Astra or RealSense? If what you said before is true (about the RealSense automatic alignment), this would simplify greatly the coordinate mapping for these cameras. After the conference last week I have some urgent meetings today and tomorrow, but after that I’m going to research this a bit deeper.

      • Yes I have used Realsense D415 and Orbbec Astra Pro, and Persee.
        But I guess It also can be used for Realsense D435, too.
        (Sorry for that I can’t test it with d435 because I don’t have it)

        Orbbec Series and XTion has their own restriction attribute in their .ini configuration file.
        Only Realsense’s restriction is in nuitrack’s configuration file.

      • I think Orbbec cameras are going nowhere, so only RealSense really matters to me. But Intel just said they are relying on middleware software like Nuitrack for body tracking. I’m quite disappointed. Hope Kinect-v4 comes out soon.

      • Additionally, there is so many useful attribute that we can modify in nuitrack.config file,
        for instance, resolution on depth and RGB frame, or distance restriction, et cetera.

        I hope this information helps you to develop and upgrade quality of your asset!

      • Thank you for all the info so far! Feel free to email me at any time, with more info, questions or suggestions.

    • Hi Hyeon Park, I have read that you were able to configure Rumen Plugin in your orbbec persee sensor + unity3d, could you share how you did it?

  3. Hi rumen

    is there any gesture where you can detect if the person is standing straight and still, like hands down.

    there is a stop gesture but I do not know in what post you can work this gesture

      • 🙂 As far as I remember, the stop gesture was similar to the Xbox stop-gesture. Please see Howto-Use-Gestures-or-Create-Your-Own-Ones.pdf in the _Readme-folder for gesture descriptions.

  4. Rumen can you tell me if there are any limitations/issues in using realsense d series cameras?

    Are there any particular steps on working this project on realsense camera?

      • Thanks a lot rumen for your reply!

        One last question, is RealSense quality in sense of tracking and using computer resources is better than kinect? if we are using it in this project?

        Thanks a lot!

      • It depends. It has a good depth and color cameras, but doesn’t provide basic features like body and hand tracking. You should buy a middleware SDK like Nuitrack only for this. And really, I’m not delighted by the quality of Nuitrack tracking. Kinect-v2 remains the best sensor so far, if you can still find it and afford it.

  5. Hi Rumen,

    A cubemap is displayed at the bottom right of the screen when the build target is set to Windows but when I switch the platform to Android, it just stops displaying the cubemap. Any particular reason? Also, the background segmentation doesn’t seem to work as well when the build platform is set to Android.

    I’ve used D345, Astra pro, and Astra with Nuitrack but I get the same error. Any help would be greatly appreciated.

    Thanks!

    • Hi, I suppose this is due to the DirectX-11 shaders used by the background-removal subsystem. Sorry, but I’m currently on holidays and can’t check the Android issue more deeply.

  6. Hi Rumen,
    thanks for the great help from the K2-Asset.
    Here is the situation:
    I have 2 avatars in my scene and I have recorded a 5-second motion with the KinectRecorderPlayer. Now, I want to move the first avatar (the Player Index would be 0) in real-time while the second avatar (the Player Index would be 1) play the recorded motion through BodyRecording.txt at the same time?

    In addition, the data format inside the BodyRecording.txt is a little bit different from your example, please see below:
    0.000|kb,10236968473000000,6,25,0,0,0,0,0,1,72057594037930576,2,0.000,-0.515,1.603,2,-0.014,-0.276,1.673,2,-0.027,-0.046,1.727,2,-0.004,0.100,1.722,2,-0.189,-0.163,1.649,2,-0.261,-0.315,1.539,2,-0.154,-0.384,1.303,2,-0.112,-0.410,1.221,2,0.167,-0.161,1.691,2,0.386,-0.259,1.574,2,0.550,-0.263,1.406,2,0.596,-0.261,1.335,2,-0.080,-0.503,1.553,2,-0.093,-0.442,1.165,1,-0.004,-0.926,0.968,1,0.000,-0.874,0.878,2,0.081,-0.505,1.586,2,0.146,-0.473,1.256,1,0.140,-0.840,0.782,1,0.151,-0.793,0.746,2,-0.024,-0.102,1.716,2,-0.081,-0.448,1.167,2,-0.081,-0.390,1.191,2,0.646,-0.258,1.280,2,0.657,-0.254,1.326

    The 5 zeros which represent non-tracking body is in the very beginning.

    Thank you in advance!

    • To be more precise, I would need to compare the joint position with the real-time and the recorded motion, so would you please give me some hints about this part.(maybe adjust the recorded body index to 1?)
      Thank you.

      • Hi David, it’s not possible to replay a recording and capture user’s real-time motion at the same time. These will be two conflicting body frames. How would you merge them?

        I would recommend to do as follows instead:
        1. Read the whole body-recording file, parse the body frames and store them in some memory structures internally, along with the respective times. See SetBodyFrameFromCsv()-function in KinectScripts/KinectInterop.cs, if you need body frame parsing example.
2. Start real-time capture and compare the current positions of joints with the recorded joint positions (or the angles between the joints), with respect to the time since the capture start.
        3. If you need to show the recorded data as avatar motion on the scene, it would be better to create an animation out of the recording, with the help of the Mocap animator. Please e-mail me, if you don’t have it.

        Regarding the starting 0’s in the body frame: That’s correct. In your case the first 5 body indices (bodies in the body frame, not player indices) are not tracked. Only the 6th one is tracked and its data follows in the frame. That’s why the 5 zeros are not at the end, but at the beginning.

  7. Hi Rumen,
    thanks for the rapid reply!! I will try these methods and see how they work.
    Thank you!

  8. Hi Rumen F.,
    It is sooo nice to have your asset, and save me a lot of time developing solutions.
    Recently, there are some OpenCV AI solution which can distinguish skeleton from color image.

    I am thinking of mapping those joint position in color-map coordination to depthimage-map coordination.
    Those works are currently operated post-process since the OpenCV AI computation takes some time, we cannot do it in real time.

    Do you have any idea how to do it, and many thanks for your answering.

    Regards,
    Rammondo C.

    • Hi! Yes, it is encouraging to see these rgb-camera body tracking solutions. Some of them look quite promising.

      To your question: You can record (and replay) the Kinect-tracked bodies along with the depth and optionally color frames with the help of Kinect studio 2.0 (part of Kinect SDK 2.0). If you only need the spatial positions of the tracked body joints, use the KinectRecorderPlayer-component of the K2-asset (see KinectDemos/RecorderDemo/KinectRecorderDemo-scene). I would recommend to compare the spatial coordinates, if applicable in both cases. If not, you’d need some kind of mapping between the coordinate systems.

      Hope I understood your question correctly.

      • thank for your nicely reply.
        Allow me to clarify details.

        An mapping between colorimage-space and depthimage-space is required.

        the new calculation return 2d skeleton information in colorimage-space. correspondence Z information is recorded on depth image(depthimage-space).

        Therefore, I am thinking about record both raw color image and depth image. Though addtional mapping matrix is required. Do you have any idea to implement it using your asset perfectly.

        Kindly Regards
        R.

In this case, use Kinect studio 2.0 to record all needed streams and then replay the recording, while running the K2-asset in Unity.

      • Please e-mail me your request from your university e-mail address. This is to prove you really study there. Then I’ll send you back the requested Kinect Mocap animator, free of charge for academic use.

      • thanks for the answering, I understand that kinect studio can record each stream perfectly. They maps nicely there.
        Do you think anyway this mapping can be done in your unity asset?
        thanks for the answering.

      • I’m not sure I understand your question. When you replay the recording in Kinect studio, what the K2-asset sees should be similar as from a live Kinect-sensor. Have you tried it at all?

      • You can record all you want, but keep in mind that a single color image is about 2MB, and there are 30 per second coming from the sensor.

  9. Hey Rumen,
    i am using the your amazing Kinect v2 asset for my studies at university. I am having only one issue so far, i have been googling it, but no solution found so i thought maybe you could help me… I am trying to change the scenes with a collision of avatars hand, which works fine… but then when the new scene begins the tracking stops, i lose the control of my character, they cant be activated… the scene begins but not my player..even if i disappear and reappear the kinect wouldnt detect me… i have checked everything many times, like avatar indexs are both set to 0 in both scenes.. Would be great if you could help me out

    • Please make sure you follow the steps for the KinectManager setup in multi-scene setup, described in K2Examples/_Readme/Howto-Use-KinectManager-Across-Multiple-Scenes.pdf. Also, look how this is implemented in the multi-scene demo in KinectDemos/MultiSceneDemo-folder. You need to add the 3 scenes there to the ScenesInBuild-list of the Build settings.

      • Hey Rumen ! You are the best ! thank you for your quick respond. I was completely not aware of the multu scene setup at all. I thought i was doing a mistake in my script… I will quickly check it out and let you know. Thank you so much !

  10. Hello Rumen,
    Do you think nuitrack is appropriate for tracking a large number of people?
    If it doesn’t, why it is?
    Thanks in advance.

  11. Hi Rumen,

    Thanks for developing this amazing demo! Can I ask you some questions? I would like to get the user’s depth info without the background. If I enable “compute user map” as “UserTexture”, the GetUsersLblTex() function returns the user’s histogram image. From my understanding, histograme image means that pixels whose depth appears more often in the frame will get lower/darker pixel values (since you use hist = 1 – (_HistBuffer[depth] / _TotalPoints) in DepthShader.shader). Thus, the user texture doesn’t indicate the true depth of the body. Since I need the user’s true depth info (that indicates the distance of the user’s body parts to the camer), should I choose “RawUserDepth” instead? If so, the GetRawDepthMap()-function would return the depth of the entire image, what can I do if I need only the user’s depth and want to set the background as black? Besides, since the raw depth image is an array of short-integers, what is the good format (e.g, jpg, txt) if I want to save it as a file?

    Thanks in advance!

    • Hi, following up from my previous question, now I was able to get the raw depth map using GetRawDepthMap() and save it as a binary file. But I need only the user’s depth info and want to remove the background. Is there a function to get whether a pixel in the depth map is the user or the background? Thanks!

      • Hi, apart from GetRawDepthMap() that returns the array of depths in mm (depthW x depth in size), you can call GetRawBodyIndexMap() too. It returns an array of bytes with the same dimension (depthW x depthH). Each byte represents the body index for the same depth point (values between 0 to 5) or 255, if there is no body detected in that point. You can match the two arrays to get only the depth points representing the user of interest. The body index for each userId can be found by calling the GetBodyIndexByUserId(userId)-method of KinectManager.

        On the other hand, if you open DepthShader.shader, you will see some commented out code at the end of the shader. Uncomment it and you will get the background too (represented as depth, as you need it) in the user map texture. I suppose, if you replace the ‘hist’ calculation in the if-part with the hist calculation from else-part, you will get what you need. Don’t be afraid to experiment a bit.

  12. Hi,

    I am looking for a way to recognize a user when he is lying down, both on the floor and on a stretcher, but I have problems with the recognition of the joints. Is there any reliable way for these joints to be correctly recognized?

    In the same sense, can it be recognized in any way that the user is in contact with the ground? I am really looking for a way to recognize it when the user is lying down, but when the user is standing I am also interested.

    • I would put the sensor near the ceiling, turned 40-50 degrees down, so it can see the bed, floor, etc. I hope it could detect this way all standing, lying and fallen people in the room. But please check this suggestion on site first.

      There is automatic floor detection (see the ‘Auto height angle’-setting of KinectManager), but I’m not sure it is always reliable. Turn it on, check if the detected height & angle seem correct. Then turn it off and set ‘Sensor height’ & ‘Sensor angle’ manually. After that you can just compare the Y-positions of the detected body joints to 0.0, to see how far are they from the floor. Don’t be afraid to experiment.

  13. Hi,
    Is there any way to detect that the Kinect has shut down unexpectedly during execution?

    Thanks you so much!

    • This should normally never happen. Anyway, you can check the sensor availability. First get a reference to the Kinect sensor like this:

      Kinect2Interface k2interface = (Kinect2Interface)KinectManager.Instance.GetSensorData().sensorInterface;
      KinectSensor k2sensor = k2interface.kinectSensor;

      Then check ‘k2sensor.IsAvailable’-property or use its IsAvailableChanged-event. I’m not sure how reliably they work though. Please experiment a bit.

  14. Hi Again,
    i am using speech recognition in scene A, which works fine in itself if i start the game in scene A.
    As i am switching in different scenes, if i come to scene A from any other scene, the speech recognition does not work anymore in scene A. Do you have any ideas why it would happen ?

    Thank you

I suppose you follow the pattern of the multi-scene demo, as described in ‘Howto-Use-KinectManager-Across-Multiple-Scenes.pdf’. If this is the case, try adding the SpeechManager-component to the startup scene, instead of to a real scene.

  15. Hi Rumen,

    I already used your package for various projects and it worked very well. Now I’ve a specific need, namely I should be able to intercept any problem/failure/freeze the Kinect could have, so to take the right countermeasures. Until now, I was just checking the KinectManager::IsInitialized() method, so is it the right way to manage that? Is there some more precise method to check the current Kinect working state in a deeper detail?

    Thank you for your attention.

    • It may, for non-commercial apps, if OpenPose provides enough performance. For commercial apps, the $25K annual fee will be the 1st stopper, as to me.

      By the way, I started recently a pose-experiments project to evaluate different open pose-detection models for 2d & 3d: https://github.com/rfilkov/PoseModelExperiments It currently includes the PoseNet model only, but more (incl. OpenPose) will come in time.

      • What about kinect v4?
        Its performance seems better than kinect v2 but I don’t know it has any other advantage.

      • From what I’ve seen so far it looks pretty good, in terms of stability, depth & color frames. But until I can put my hands on the K4, and experiment with it, I cannot say more. Currently the best 3D sensor on the market (except what’s left of Kinect-v2) is RealSense D415, as to me. The only drawback there is they don’t provide their own body tracking SDK and rely on Nuitrack only.

  16. Hi Rumen, I am using your script(KinectAvatarDemo1) to create an Obstacle Race, so I need users jump to avoid obstacle like rolling balls…but the height of Jump is really low. Question: How could I increase the height of user’s jump?

    • You could look for Jump-gesture (see AvatarsDemo/KinectAvatarsDemo1 & GesturesDemo/KinectGesturesDemo1-scenes), and if user jump gets detected, just move the avatar higher. Enable ‘External root motion’-setting of AvatarController, when you’d like to control the avatar position, then disable it again to allow the sensor regain control of the avatar movement.

  17. Hi rumen

    im getting the DllNotFoundException: libnuitrack while trying the nuitrack sample project. I’ve followed all the steps on their site.

    I’m migrating this project to intel real sense d415.

    • Hi, you probably forgot to add /bin-folder to the system PATH-environment variable. libnuitrack.dll is in there. If you installed Nuitrack after Unity was already open, just close and restart the Unity editor.

      • Thanks for your reply Rumen F.

        I’ve already done that but still getting the same issue.
        VariableName: NUITRACK_HOME
        Variable Value: C:\Users\abcxyz\Documents\nuitrack\bin

        I’ve tried on both Unity 2017.3.0f3 and Unity 2018.2.8f1 version.

      • Got it. Solved it by setting /bin-folder to the system PATH-environment variable

        Thanks rumen

      • Yes, in your case NUITRACK_HOME should be ‘C:\Users\abcxyz\Documents\nuitrack’ and ‘C:\Users\abcxyz\Documents\nuitrack\bin’ should be in PATH. Well done!

  18. Plz guide me if i am wrong ,
    when we add one more category in FR1 and run demo is it going to show all the models from all the categories(Shirts,Pants,etc)..or to show the models of a category by raising the hands?
    It show me all all models from all categories….which I don’t want..

    • I think you are right. My original idea was different ModelSelector-components (i.e. categories) to use different menus, not to reuse the same menu. A workaround to your issue would be to duplicate the DressingMenu1-object under Canvas in Hierarchy, and assign different menus to the different model selectors. And I’ll provide a solution in the next K2-asset release, to reuse the same menu for all categories.

      • Sir Can you please help me now i tried to solve it in different ways but nothing go right…Plz its urgent..Plz Sir..

      • Hey I tried by adding a button and activating the Duplicate DressingMenu1-object on button click It works but the problem remain with the model, previous and the new one remain on the body even by disabling the keep selected model. This problem also remain with your same code.
        can you help me out there?

      • This is what I meant:
        Two menus for two model selectors.

        If you prefer to reuse the same menu for all model selectors, please e-mail me, mention your invoice number and I’ll send you the modified ModelSelector-script.

    • KinectManager.Instance.GetSensorData() returns the data-holder class (SensorData). There are two fields in there, called ‘lastColorFrameTime’ for the color frame, and ‘lastDepthFrameTime’ for the depth frame. You can use them to check, if a new color or depth frame has come from the sensor in the meantime.

  19. Hello Rumen, I am planning to buy and test Orbbec Persee sensor, I am already using your plugin KinectV2 + Kinect xBoxV2 Sensor… I wonder if your plugin works with hardware sensor Orbbec Persee as well.

    • Hi, Persee is actually Astra-Pro, packed in a standalone device. I don’t have it and didn’t test it with the K2-asset. I only remember a customer, who once told me he tried it and it works.

  20. Hi Rumen

    Can you please tell me What a designer should follow to design a model for fitting room which perfectly fits in the project?
    one more question, if i want every dress with unique setting of modelSelector component like every dress has different body scale factor, body width factor, leg scale factor..

    what should i do?

    • Hi, I know one designer, who has done this in the past, but I’m not sure he’s free now. As I said many times, I’m not a model designer, but I think the most of the rigged models should be OK. What’s the problem with your models?

      To your “unique settings” question: Just put AvatarController & AvatarScaler-components to each of the model prefabs, similarly to what you can see in the 2nd FR-demo scene. If the model-selector finds these components in the instantiated model, it uses them instead of creating new components.

      • Thanks for your reply Rumen!

        I’ve worked a lot in this project and stuck in model designing. I don’t understand if the error is on my side or the Modeler side. Like arms of the dress get twisted on the hand side and bottom part of the shirt don’t seem right.

        I want to place full sleeves dress on a person and i have different sizes of dress.

        Can i send you the model with my asset invoice no you can tell me if it looks right to you?

        Waiting for your reply and thanks a lot!

      • No problem. Feel free to send me one or more problematic models, and I’ll try to find out what may have caused the described issues. Please only mind that I’m having some holidays at the moment and will be away till the end of next week. But as soon as I get back home, I’ll check your models, for sure.

      • Hi rumen

        Arms issue are fixed, there is another issue please check mail when you have time.

        Thanks!

  21. Hello Rumen,
    The max x distance changes according to z distance.
    How can I get its value?
    Additionally, did you see nuitrack face tracking?
    Its some parts seem better than kinect.

    • What do you mean by “the max x distance changes according to z distance”? Feel free to e-mail me some screenshots or short video, if needed.
      Not checked the newest release of Nuitrack yet, but will do it soon. What exactly is better than Kinect-v2?

  22. I have seen ur kinect v2 package How can we apply cloth physics to our cloth in FR demo scene ? Is there a requirement to rig the model some other way or can it be done using the rigging that has been done in the cloth model already present ?

    • Sorry, I’m not a model designer. Anyway, I think there shouldn’t be any special requirements. If you like, you can e-mail me one representative model, so that I can take a closer look here. If you do, please mention your invoice number as well, so I can prove you are eligible for support.

  23. Hi there, Thank you for your perfect asset. Could you please let me know how can I disable BackgroundRemovalManager on runtime and restart the Kinect manager? I want to have a level with different backgrounds, and a choice for no-background.

    • Hi, the KinectManager should be there all the time. It is the most basic Kinect-related component. You don’t need to disable the BackgroundRemovalManager too, as to me. To change the backgrounds, just change the texture of the RawImage-component of BackgroundImage1-object in the scene.

      And, I suppose that under “no background” you mean to render the RGB camera picture, instead of users only. In this case, you can set the texture mentioned above to ‘KinectManager.Instance.GetUsersClrTex()’, and disable the RawImage-object under BackgroundImage2 (or whatever object renders the foreground image).

      • Hi Rumen,
        Thanks a million mate! Second paragraph solved my problem. You saved the day.

        Cheers,
        MJ

  24. Hi Rumen… I see that Park,has tested succesfully Orbbec Persee and K2-Kinnect… I am trying Orbbec Persee which is a depth camera with an operating system built in(Android) + Windows with Unity + Kinnect V2… I connect persee using USB to Windows Machine to deploy apk installation as a normal android device. then run Nuitrack trial version in Persee in order to activate 3 minutes functionality license.. I changed alwaysavailable to False under Kinect2interface.. I run nuitrack SDK and unity samples and it works fine, but I can not using Kinnect V2 plugin.. am I missing something?….I import Nutrack SDK for Unity into my Windows Unity Project, then star Nuitrack application inside Persee sensor in order to activate 3 minutes license and then deploy Unity Nuitrack apk and it works… But I want to run your plug in inside Persee…any other idea? should I only change to false under Kinect2Interface? is this the only change to your plugin if I want to run Persee sensor instead Xbox ONe V2 sensor?

    • Hi Oscar, I don’t have or plan to buy/use Persee in my work, hence everything I say here could be only a suggestion. Setting ‘sensorAlwaysAvailable’ to ‘false’ allows KinectManager to look for other sensor interfaces (in case you have Kinect SDK 2.0 installed on your machine). After that, I suggest the Nuitrack-interface gets selected as sensor interface. You can see that in the console or in the Unity log. From there on, it should work similarly to the Nuitrack Unity samples. Please check if you have Nuitrack installed, the needed Nuitrack APK installed on Persee, and whether in this case the platform in the Unity build settings should not be set to Android.

      As you mentioned, I also remember someone commenting here on my website, who had success running Persee with the Nuitrack interface of the K2-asset. I would suggest to contact him and ask for details as well, if your issues persist.

  25. Hi Rumen.. I am trying to use the gesturedemo with the presentation cube, but when I rotate the cubes, the cubes images are opposite by default. Is there any way I can fix that such that when I rotate the side wouldn’t change? I wan it to be static no matter how I turn it around. Please and thanks for your help!

Hi, please open CubePresentationScript.cs in KinectGestures/GestureDemo/Scripts-folder. I suppose everywhere, where the side texture is set (i.e. where ‘cubeSides[side].GetComponent<Renderer>().material.mainTexture = ‘ is used) the cube-side scaling should be changed as well (in means of: cubeSides[side].transform.localScale), in a way to make the texture look correct. I’m a bit in a hurry now, so please think a bit by yourself how to set the local-scale. If you can’t find it out, feel free to contact me again.

  26. Hi Rumen
    I recently bought intel realsense D415
    is it possible to use the assets to track the 3D model ?
    (i was working with kinect 2 and it work fine)
    thanks

      • Thank you Rumen for your reply
        I already did all the tips above,but my question is what i am suppose to modify in ordre to work with the real sense D415 ?
        Thanks

      • The only thing in the K2-asset you would need to modify is the value of sensorAlwaysAvailable (from ‘true’ to ‘false’) in KinectScripts/Interfaces/Kinect2Interface.cs, if you have Kinect SDK 2.0 installed. Isn’t Realsense D415 working in your case? What exactly happens?

      • Thank you again for your reply Rumen
        I already bought the K2-asset is it necessair to bought nuitrack pro in ordre to track more than 3 minutes ?
        Thanks

      • Yes. Unfortunately, at the moment you need to buy Nuitrack as well, in order to utilize the Realsense-camera via Nuitrack-interface. In the future this should change, but right now it is needed. Sorry for this extra expense!

    • It should be renamed to ‘preview.jpg.bytes’ in Windows Explorer (with ‘Hide extensions of known file types’-option turned off). Unity hides the file extensions, so it will be shown as ‘preview.jpg’ in the Project-view. See how the demo preview images are set in Resources/Clothing/000X-folder. The bytes-extension is required by the Unity resource-handling system.

  27. Thanks for this really nice article on Kinect options.
    Do you know if it’s possible to mirror the Kinect Mesh in Unity.
    In a project, the kinect will be in front of the stage, and the projection screen behind the 2 artists. As the kinect is usually used in mirror mode, I would like to unmirror it, but I can’t figure out how in the KinectMesh script.

    • I’m not sure what KinectMesh-script you mean. There is UserMeshVisualizer-script used by the KinectUserVisualizer-scene in the K2-asset, and it has a ‘Mirrored movement’-setting for your convenience. Doesn’t it work as expected?

  28. Hello Rumen,

    First of all thank you for the great asset and the time you spend on answering all our support requests.

    Since Nuitrack now supports the KinectV2 (https://nuitrack.com/blog#kinect2), I am trying to use it as the plugin interface. I followed all the instructions in this section and was able to check that it works using the activation tool. It also works well in Unity using the provided samples (https://nuitrack.com/#rec54893368).

    But when I try to use it with your plugin, it doesn’t work. I set the flag sensorAlwaysAvailable to false but it still loads the Kinect2Interface. To investiguate a bit further, I tried to change the order of the interfaces within the KinectInterop class but then Unity crashes when I hit the play button.

    Crash report doesn’t say much:
    KERNELBASE.dll caused an Unknown exception type (0x20474343) in module KERNELBASE.dll at 0033:290fa388.

    Unity version: 2017.4.10f1
    Path variable to Nuitrack bin: OK
    NUITRACK_HOME variable: OK
    Nuitrack version: 0.24.0
    OpenNI version: 1.5.7
    K2-asset version: 2.18.1
    Scene: KinectAvatarsDemo4

    Any clue on what could be wrong ?

    • Hi David, you are actually the first one, who prefers the Nuitrack SDK to Kinect SDK 2.0 🙂

      To your issue: Setting sensorAlwaysAvailable to false means that Kinect2Interface should check the sensor presence instead of considering it “always available”. When you have both Kinect SDK & Nuitrack SDK installed and the sensor is connected, Kinect2Interface is checked before NuitrackInterface. It responds positively to the sensor check and then takes control over the sensor.

      Regarding the error, may I ask you to e-mail me the Unity log-file after the crash. Here is where to find the log-file: https://docs.unity3d.com/Manual/LogFiles.html

  29. Hi. can you please tell me some reference that provide some methods to improve the model in the nuitrack SDK (or any skeleton tracking). Including reduce noise in the skeleton or Smooth motion track

    • Hi, I don’t think the Nuitrack body tracking is so noisy (maybe it depends on the sensor used), but generally you could utilize these settings:
      1. There is a setting of the KinectManager-component in the scene called ‘Smoothing’. Try to use medium or aggressive smoothing instead of the default one.
      2. One other setting of the KinectManager is ‘Ignore inferred joints’. Enable it to prevent inferring the joint positions, when the joints are not visible to the camera.
      3. The AvatarController-component (as well as the most other joint-tracking components) have a ‘Smoothing factor’-setting. Try different values, starting from 10 and decreasing it down to 1. FYI: The bigger value means less smoothing. A value of 0 means no smoothing at all.

  30. Hi Rumen, I am using the PhotoBooth scene and it works fine for 1 user, but when I try more users it doesnt work properly. I have added under KinectController object a 2nd component for all components that have Player Index definition: JointOverlayer, interaction manager and PhotoBootController. All of them with PLayer index 1 (For a 2nd user).. When I try the scene with 2 people, Objects are replaced only on one person at a time, it detects swipeleft/right gestures and objects jump to the other user, but overlay objects doesnt appear both at the same time …am I forgetting some additional components? I also duplicate object on scene for every player index(SupermanLogo, batman, Medusa,etc)

    • Hi Oscar, I just tried it and (as far as I see) it works as expected. Here is what I did (see the image below):
      1. I duplicated the KinectController-object in the scene and deleted the KinectManager- and KinectGestures-components from it. This is due to the requirement to have only one KinectManager-component in the scene.
      2. On KinectController (1)-object (that will be used by the 2nd user) I changed the ‘Player index’-setting of InteractionManager- and PhotoBoothController-components from 0 to 1. You could also change the PlayerIndex of JointOverlayer-components as well, but this is not needed. It should be done automatically by the PhotoBoothController, when the 2nd user gets detected.
      3. I duplicated all sprite objects in the scene under Medusa, Zeus, etc. and then referenced them in the ‘Head masks’, ‘Left hand masks’, ‘Chest masks’-settings of PhotoBoothController-component of KinectController (1). This way the 1st user will use the original sprite objects, while the 2nd user will use the duplicated copy of the sprite objects.

      Please check your setup again.

      photo booth objects, components and settings

      • Rumen, as always great support!!!… based in your recommendations, now it works perfectly…Thanks for your quick response.

  31. A question about the Interaction Manager:
    How can i allow hand clicks for UI buttons only?
    The progress bar for the click begin all the time if the cursor stay more than ~2sec.

    • There are 3 ways to do a “click” – hand-grips, staying in place and hand-pushes. If you want to disable considering staying in place as click, please disable the ‘Allow hand clicks’-setting of InteractionManager. If you want to disable considering hand-pushes as clicks, please disable ‘Allow push to click’-setting of InteractionManager. If you want more complex behavior, like allowing hand clicks for buttons, but not for other UI elements, you’d need to modify the code of InteractionManager.cs and probably InteractionInputModule.cs, as well.

      • I addad pointer enter and exit events to each button now, and change a static bool for hand over buttons. This seems to work.
        Also edited the InteractionManager “//check for left/right hand click” section and check the new bool value.

        But thanks for all the examples. They are very helpful.

  32. Hi Rumen! Just got the plugin from the asset store(It’s awesome btw). Have you tried mapping/tracking individual fingers? I’d like to do some peace signs and such. Thanks!

    • Hi, the fingers tracked by Kinect-v2 (aka Kinect-for-Xbox-One) are the thumb and index finger per hand. The other individual fingers are not tracked. According to my experience, the tracking is not 100% reliable. Sometimes these joints get swapped by the sensor.

      • Ahh, I thought that would be the case, Thanks though! I did try to comment out the Index and middle finger in the hand dictionary so they’ll stay upright when the hands are closed LOL. In case I missed it, does the plugin have something that can override certain bones to make a pose

      • Like when the hands are closed, a fixed pose like a Peace/rock sign would override the hands instead. I think the fingers’ rotations are hard-coded right now?

      • Yes, the finger rotations are hard coded to 60 degrees in the TransformSpecialBoneFingers()-method of AvatarController.cs-script component. Feel free to modify the constant or code, if needed. There is also a ‘Finger orientations’-setting of the AvatarController-component. If you disable it, you could set the finger rotations in the scene or with your own script.

  33. Hello Rumen! Just recently bought this awesome asset! I have one question. I see that you have collision for hands but I am unable to see any prebas or collision for legs! I am trying to build a penalty application and I want to add collision when the user “kicks”. I tried looking between all your examples but there are not examples for Leg collision.

    • Hi, I don’t remember either to have a demo scene with colliders on the legs. But I think you could add them to the model’s leg joints, the same way I added them to the hands.

  34. Hello Rumen! Just recently bought this awesome asset! I have one question. I see that you have collision for hands but I am unable to see any prebas or collision for legs! I am trying to build a penalty application and I want to add collision when the user “kicks”. I tried looking between all your examples but there are not examples for Leg collision.

    • Hi, I don’t think Nuitrack SDK will work on UWP. Instead, please try to check, whether RealSense doesn’t work directly on UWP (the same way as Kinect-v2). First, with the UWP sensor-related samples and then with the K2-asset.

  35. Hi rumen… I am currently using the KinectFittingRoom1, I am trying to get 2 photos captured, 1 is the default one implemented and also take a photo of the “real-life” image on the bottom right window when you enable the display user map.
    Is it possible to do that? Your help would be much appreciated !

    • I suppose you mean the color-camera image. You can get it anytime by calling ‘Texture2D texColor = (Texture2D)KinectManager.Instance.GetUsersClrTex();’ and then encode it and save it as JPG-file. Look at the code of MakePhoto(bool openIt)-method in PhotoShooter.cs, if you are not sure how to encode and save the texture.

      • Hi remem, I can now take picture with the color-camera image at the bottom right, but is it possible to get it stand alone? I tried replacing the original code of Texture2D screenShot with texColor the code u provided for me. And it works where the color-camera window is at the bottom right. Please and thanks for the help!

      • Image is upside down, because the images from Kinect come with Y-axis downwards, while the Unity textures are with Y-axis going upwards. I don’t understand the other issue above. Please e-mail me, if you need code for texture flipping or to better describe your other issue. You can include some screenshots, if needed. Please also mention your invoice number in the e-mail, so I can check your eligibility for support.

      • Thanks for all the help so far! May I further trouble you to teach me how to flip the image to be normal when taken? I am not sure how to work around the Y axis problem. And may I know whats your email so that I can email you regarding my other problem? Really thanks alot for all the help!

  36. Hi Rumen!
    First of all – thanks you a lot for the K2-asset.
    I want to use couple Kinect2 sensors in one scene. Is it possible to use it both or switch it?
    Thanks!

  37. Hi Rumen…
    I am trying to create the pose detector scene such that only when user is detected, the model would show. Thus, I tried setting it to active(true) under ( false by default )
    protected virtual void CalibrateUser(Int64 userId, int bodyIndex)
    in the kinectManager. But when it shows, it doesn’t follow the user movements anymore. Can you help me please and thanks!

    • Hm, your approach should work, as to me, but I wouldn’t modify the CalibrateUser()-method of KM. Instead, look at the Update()-method of PoseDetectorScript.cs. There is an ‘if..else’ in there. The if-part is executed, when there is user tracked. You could activate the model(s) in there, if they aren’t active. The else-part is executed, when there is no user tracked. So, you could deactivate the models in there, if they are still active.

  38. Hi rumen, sorry to trouble you again…
    May i ask if is it possible to embed a image into the photo when taking a image in fittingroomDemo1 ? I would like to place an icon in the real life view when the image is taken.

  39. Hi, Rumen I was stuck I couldn’t add new gesture in for example simplegeture.cs, first of all, I add new gesture but I don’t know where I add code in completegeture and cancelgeture,
    Thanks a lot

      • I add Exactly manager.DeleteGesture(userId, KinectGestures.Gestures.RaiseLeftHand); like instruction in the UserDetected() method then I add three end lines in
        public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
        KinectInterop.JointType joint, Vector3 screenPos)
        {
        if (userIndex != playerIndex)
        return false;

        if(progressDisplayed)
        return true;

        string sGestureText = gesture + ” detected”;
        if(gestureInfo != null)
        {
        gestureInfo.text = sGestureText;
        }
        if (gesture == KinectGestures.Gestures.RaiseRightHand)
        {
        Debug.Log(“It is work”);
        }
        return true;
        }
        gestures before true as you told but it doesnt work,
        Thanks a lot

      • It’s not DeleteGesture(), it should be DetectGesture() in the UserDetected()-method, as per instruction. It instructs the KinectManager to start detecting the specified gesture for the given user.

  40. Hi, Rumen. I’m working on project that uses the Kinect V2 placed in the back of the user with an avatar. Originally the Kinect was detecting the user as facing forward, so the avatar was moving it’s arms and legs opposite to the actual direction. I managed to make an workaround using part of the “AllowTurnAround” code to report the user as always turned around, but that behavior is also rotating the avatar 180 degrees. How can I prevent that rotation?

    • Hi, I’m not sure I understand your issue in details. Could you please e-mail me some more details (and maybe some screenshots), so I could understand the issue better.

  41. Hi Rumen
    I am trying to apply cloth physics on your cloth model in unity And need your help

    This is what I have done so far regarding physics.

    go = selModel.transform.GetChild (1).gameObject; //gets the clothing component
Cloth cloth = go.AddComponent<Cloth>();
    Vector3[] vertices = cloth.vertices;
    ClothSkinningCoefficient[] newCoeff = cloth.coefficients;

    for (int i = 0; i<=(newCoeff.Length-1);i++)
    {

    newCoeff [i].maxDistance = 0.000001f; // here max distance is very less because i had to scale down the models to 0.0001 in blender

    }
    cloth.coefficients = newCoeff;
    cloth.useGravity = true;
    cloth.useTethers = true;

    I also tried removing the weights of bone(around waist and below) in blender using weight paint and then applying cloth physics.

    Result:

    https://drive.google.com/file/d/1pmj7SeohkFJmBsNkxcsFBG6HaEyfWugN/view

    Kindly help me with this.

      • It will be of great help for me if you can take out some time and look into it and would save me from a lots of trouble.

        I want an effect similar as mentioned in the link below (0:35 sec and further….. realistic cloth movements).Kindly suggest how can I achieve this , any suggestion wrt this will be of great help.

      • Please e-mail me your a short description of what I need to do, to apply the cloth physics to the models in the fitting-room-demo scene (a script maybe), along with some more images of what you get. Please mention your invoice number, as well.

  42. Hi Rumen I have a question about the FittingRoom1 is that possible. Multiple users can wear the virtual dresses on screen at same time or simultaneously. If it is possible how could we do that please let me know best Regrads Moeez Raja
