Kinect v2 Tips, Tricks and Examples

After answering so many different questions about how to use various parts and components of the “Kinect v2 with MS-SDK”-package, I think it would be easier to share some general tips, tricks and examples here. I’m going to add more tips and tricks to this article over time. Feel free to drop by from time to time, to check out what’s new.

And here is a link to the Online documentation of the K2-asset.

Table of Contents:

What is the purpose of all managers in the KinectScripts-folder
How to use the Kinect v2-Package functionality in your own Unity project
How to use your own model with the AvatarController
How to make the avatar hands twist around the bone
How to utilize Kinect to interact with GUI buttons and components
How to get the depth- or color-camera textures
How to get the position of a body joint
How to make a game object rotate as the user
How to make a game object follow user’s head position and rotation
How to get the face-points’ coordinates
How to mix Kinect-captured movement with Mecanim animation
How to add your models to the FittingRoom-demo
How to set up the sensor height and angle
Are there any events, when a user is detected or lost
How to process discrete gestures like swipes and poses like hand-raises
How to process continuous gestures, like ZoomIn, ZoomOut and Wheel
How to utilize visual (VGB) gestures in the K2-asset
How to change the language or grammar for speech recognition
How to run the fitting-room or overlay demo in portrait mode
How to build an exe from ‘Kinect-v2 with MS-SDK’ project
How to make the Kinect-v2 package work with Kinect-v1
What do the options of ‘Compute user map’-setting mean
How to set up the user detection order
How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS
How to build Windows-Store (UWP-8.1) application
How to work with multiple users
How to use the FacetrackingManager
How to use the background-removal image in FittingRoom-demo
How to move the FPS-avatars of positionally tracked users in VR environment
How to create your own gestures
How to enable or disable the tracking of inferred joints
How to build exe with the Kinect-v2 plugins provided by Microsoft
How to build Windows-Store (UWP-10) application
How to run the projector-demo scene
How to render the background-removal image or the color-camera image on scene background

What is the purpose of all managers in the KinectScripts-folder:

The managers in the KinectScripts-folder are components. You can utilize them in your projects, depending on the features you need:

  • KinectManager is the most general component, needed to interact with the sensor and to get basic data from it, like the color and depth streams, and the bodies and joints’ positions in meters, in Kinect space.
  • AvatarController transfers the detected joint positions and orientations to a rigged skeleton.
  • CubemanController is similar, but it works with transforms and lines to represent the joints and bones, in order to make locating tracking issues easier.
  • FacetrackingManager deals with the face points and the head/neck orientation. It is used internally by the KinectManager (when both are present in the scene) to get the precise position and orientation of the head and neck.
  • InteractionManager is used to control the hand cursor and to detect hand grips, releases and clicks.
  • SpeechManager is used for recognition of speech commands.

Pay attention to the Samples-folder as well. It contains several simple examples (some of them cited below) you can learn from, use directly, or copy parts of their code into your own scripts.

How to use the Kinect v2-Package functionality in your own Unity project:

1. Copy folder ‘KinectScripts’ from the Assets-folder of the package to the Assets-folder of your project. This folder contains the package scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the Assets-folder of the package to the Assets-folder of your project. This folder contains all needed libraries and resources. You can skip copying the libraries you don’t plan to use (for instance the 64-bit libraries or KinectV1-libraries), to save space.
3. Copy folder ‘Standard Assets’ from the Assets-folder of the package to the Assets-folder of your project. It contains the MS-wrapper classes for Kinect v2. Wait until Unity detects and compiles the newly copied resources and scripts.
4. Add the KinectManager-script as a component to the main camera or to another persistent object in the scene. The KinectManager is the most general Kinect component, needed to interact with the sensor and to get basic data from it.
5. Utilize the other Kinect-related components you would like to use in your scene. All these components rely on the KinectManager internally. A minimal sanity-check script is sketched below.
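
As a quick sanity check of the setup, you could add a small script like the one below to any object in the scene. It is only a sketch – it uses the KinectManager-calls mentioned in this article (Instance, IsInitialized(), GetUsersCount()), and the script name is arbitrary.

using UnityEngine;

public class KinectSetupCheck : MonoBehaviour
{
    void Update()
    {
        // the KinectManager is the central component - get its instance, if initialized
        KinectManager manager = KinectManager.Instance;

        if(manager && manager.IsInitialized())
        {
            // log the number of currently tracked users (once per frame, for testing only)
            Debug.Log("Kinect initialized. Currently tracked users: " + manager.GetUsersCount());
        }
    }
}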

How to use your own model with the AvatarController:

1. (Optional) Make sure your model is in T-pose. This is the zero-pose of Kinect joint orientations.
2. Select the model-asset in Assets-folder. Select the Rig-tab in Inspector window.
3. Set the AnimationType to ‘Humanoid’ and AvatarDefinition – to ‘Create from this model’.
4. Press the Apply-button. Then press the Configure-button to make sure the joints are correctly assigned. After that exit the configuration window.
5. Put the model into the scene.
6. Add the KinectScript/AvatarController-script as component to the model’s game object in the scene.
7. Make sure your model also has an Animator-component, that it is enabled, and that its Avatar-setting is set correctly.
8. Enable or disable (as needed) the MirroredMovement and VerticalMovement-settings of the AvatarController-component. Keep in mind that when mirrored movement is enabled, the model’s transform should have a Y-rotation of 180 degrees.
9. Run the scene to test the avatar model. If needed, tweak some settings of AvatarController and try again.

How to make the avatar hands twist around the bone:

To do this, set the ‘Allowed Hand Rotations’-setting of the KinectManager to ‘All’. The KinectManager is a component of the MainCamera in the example scenes. This setting has three options: ‘None’ turns off all hand rotations, ‘Default’ turns on the hand rotations except the twists around the bone, and ‘All’ turns on all hand rotations.

How to utilize Kinect to interact with GUI buttons and components:

1. Add the InteractionManager to the main camera or to another persistent object in the scene. It is used to control the hand cursor and to detect hand grips, releases and clicks. A grip means a closed hand (thumb over the other fingers), a release means an opened hand, and a hand click is generated when the user’s hand doesn’t move (stays still) for about 2 seconds.
2. Enable the ‘Control Mouse Cursor’-setting of the InteractionManager-component. This setting transfers the position and clicks of the hand cursor to the mouse cursor, thus enabling interaction with GUI buttons, toggles and other components (see the sketch after these steps).
3. If you need drag-and-drop functionality for interaction with the GUI, enable the ‘Control Mouse Drag’-setting of the InteractionManager-component. This setting starts mouse dragging as soon as a hand grip is detected, and continues the dragging until a hand release is detected. If you enable this setting, you can also click on GUI buttons with a hand grip, instead of the usual hand click (i.e. staying in place, over the button, for about 2 seconds).
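
Because the ‘Control Mouse Cursor’-setting maps the hand cursor to the mouse cursor, the GUI side needs no Kinect-specific code at all. The sketch below is plain Unity UI (the button and handler names are made up), just to show that a hand click or grip arrives as a normal onClick-event:

using UnityEngine;
using UnityEngine.UI;

public class StartButtonHandler : MonoBehaviour
{
    public Button startButton;  // assign a UI button in the Inspector

    void Start()
    {
        // hand clicks (or grips, if 'Control Mouse Drag' is enabled) arrive as normal mouse clicks,
        // so the usual onClick-listener is all that is needed
        startButton.onClick.AddListener(() => Debug.Log("Start-button clicked via the hand cursor."));
    }
}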

How to get the depth- or color-camera textures:

First off, make sure that ‘Compute User Map’-setting of the KinectManager-component is enabled, if you need the depth texture, or ‘Compute Color Map’-setting of the KinectManager-component is enabled, if you need the color camera texture. Then write something like this in the Update()-method of your script:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    Texture2D depthTexture = manager.GetUsersLblTex();
    Texture2D colorTexture = manager.GetUsersClrTex();
    // do something with the textures
}
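
As a follow-up, here is one way to display the returned texture – for instance on a UI RawImage. This is only a sketch; the rawImage-field is an assumption and should be assigned in the Inspector:

using UnityEngine;
using UnityEngine.UI;

public class UserMapDisplay : MonoBehaviour
{
    public RawImage rawImage;  // assign a UI RawImage in the Inspector

    void Update()
    {
        KinectManager manager = KinectManager.Instance;

        if(manager && manager.IsInitialized() && rawImage)
        {
            // show the users' body texture; use GetUsersClrTex() for the color-camera texture instead
            rawImage.texture = manager.GetUsersLblTex();
        }
    }
}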

How to get the position of a body joint:

This is demonstrated in the KinectScripts/Samples/GetJointPositionDemo-script. You can add it as a component to a game object in your scene to see it in action. Just select the needed joint and optionally enable saving to a csv-file. Do not forget to add the KinectManager as a component to a game object in your scene. It is usually a component of the MainCamera in the example scenes. Here is the main part of the demo-script that retrieves the position of the selected joint:

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    if(manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        if(manager.IsJointTracked(userId, (int)joint))
        {
            Vector3 jointPos = manager.GetJointPosition(userId, (int)joint);
            // do something with the joint position
        }
    }
}
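
For instance, to make a scene object follow the user’s right hand, you could assign the retrieved joint position to the object’s transform. This is only a minimal sketch built on the calls above – smoothing and any coordinate conversion (e.g. for screen overlays) are left out:

using UnityEngine;

public class FollowRightHand : MonoBehaviour
{
    void Update()
    {
        KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
        KinectManager manager = KinectManager.Instance;

        if(manager && manager.IsInitialized() && manager.IsUserDetected())
        {
            long userId = manager.GetPrimaryUserID();

            if(manager.IsJointTracked(userId, (int)joint))
            {
                // move this game object to the detected hand position (in Kinect space, in meters)
                transform.position = manager.GetJointPosition(userId, (int)joint);
            }
        }
    }
}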

How to make a game object rotate as the user:

This is similar to the previous example and is demonstrated in the KinectScripts/Samples/FollowUserRotation-script. To see it in action, you can create a cube in your scene and add the script as a component to it. Do not forget to add the KinectManager as a component to a game object in your scene. It is usually a component of the MainCamera in the example scenes.
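
The sample script does this in its own way, but a minimal sketch could look like the one below. It applies the spine-base joint orientation, retrieved via the GetJointOrientation()-function of the KinectManager, to the object’s transform; the choice of joint and the lack of mirroring/smoothing are simplifications:

using UnityEngine;

public class RotateAsUser : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;

        if(manager && manager.IsInitialized() && manager.IsUserDetected())
        {
            long userId = manager.GetPrimaryUserID();
            int joint = (int)KinectInterop.JointType.SpineBase;

            if(manager.IsJointTracked(userId, joint))
            {
                // apply the user's spine-base orientation to this object (no mirroring or smoothing)
                transform.rotation = manager.GetJointOrientation(userId, joint);
            }
        }
    }
}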

How to make a game object follow user’s head position and rotation:

You need the KinectManager and FacetrackingManager added as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene. Then, to get the position of the head and orientation of the neck, you need code like this in your script:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    if(manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        if(manager.IsJointTracked(userId, (int)KinectInterop.JointType.Head))
        {
            Vector3 headPosition = manager.GetJointPosition(userId, (int)KinectInterop.JointType.Head);
            Quaternion neckRotation = manager.GetJointOrientation(userId, (int)KinectInterop.JointType.Neck);
            // do something with the head position and neck orientation
        }
    }
}
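
As a usage example, inside the innermost if-block above you could simply apply the retrieved values to the transform of the object that should follow the head (e.g. a hat model):

// inside the innermost if-block above:
transform.position = headPosition;   // place this object at the detected head position
transform.rotation = neckRotation;   // rotate it as the detected neck orientation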

How to get the face-points’ coordinates:

You need a reference to the respective FaceFrameResult-object. This is demonstrated in the KinectScripts/Samples/GetFacePointsDemo-script. You can add it as a component to a game object in your scene to see it in action. To get the coordinates of a face point in your script, invoke its public GetFacePoint()-function. Do not forget to add the KinectManager and FacetrackingManager as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene.

How to mix Kinect-captured movement with Mecanim animation

1. Use the AvatarControllerClassic instead of the AvatarController-component. Assign only those joints that should be animated by the sensor.
2. Set the SmoothFactor-setting of AvatarControllerClassic to 0, to apply the detected bone orientations instantly.
3. Create an avatar-body-mask and apply it to the Mecanim animation layer. In this mask, disable Mecanim animations of the Kinect-animated joints mentioned above. Do not disable the root-joint!
4. Enable the ‘Late Update Avatars’-setting of KinectManager (component of MainCamera in the example scenes).
5. Run the scene to check the setup. When a player gets recognized by the sensor, some of his joints will be animated by the AvatarControllerClassic-component, and the rest by the Animator-component.

How to add your models to the FittingRoom-demo

1. For each of your fbx-models, import the model and select it in the Assets-view in Unity editor.
2. Select the Rig-tab in Inspector. Set the AnimationType to ‘Humanoid’ and the AvatarDefinition to ‘Create from this model’.
3. Press the Apply-button. Then press the Configure-button to check if all required joints are correctly assigned. The clothing models usually don’t use all joints, which can make the avatar definition invalid. In this case you can manually assign the missing joints (shown in red).
4. Keep in mind: The joint positions in the model must match the structure of the Kinect-joints. You can see them, for instance in the KinectOverlayDemo2. Otherwise the model may not overlay the user’s body properly.
5. Create a sub-folder for your model category (Shirts, Pants, Skirts, etc.) in the FittingRoomDemo/Resources-folder.
6. In the model-category folder, create sub-folders with consecutive numbers (0000, 0001, 0002, etc.) – one for each model imported in p.1.
7. Move your models into these numerical folders, one model per folder, along with the needed materials and textures. Rename the model’s fbx-file to ‘model.fbx’.
8. You can put a preview image for each model in jpeg-format (100 x 143px, 24bpp) in the respective model folder. Then rename it to ‘preview.jpg.bytes’. If you don’t put a preview image, the fitting-room demo will display ‘No preview’ in the model-selection menu.
9. Open the FittingRoomDemo1-scene.
10. Add ModelSelector-component for your model category to the KinectController game object. Set its ‘Model category’-setting to be the same as the name of sub-folder created in p.5 above. Set the ‘Number of models’-setting to reflect the number of sub-folders created in p.6 above.
11. The other settings of your ModelSelector-component must be similar to the existing ModelSelector in the demo. I.e. ‘Model relative to camera’ must be set to ‘BackgroundCamera’, ‘Foreground camera’ must be set to ‘MainCamera’, ‘Continuous scaling’ – enabled. The scale-factor settings may be set initially to 1 and the ‘Vertical offset’-setting to 0. Later you can adjust them slightly to provide the best model-to-body overlay.
12. Enable the ‘Keep selected model’-setting of the ModelSelector-component, if you want the selected model to keep overlaying the user’s body after the model category changes. This is useful, if there are several categories (i.e. ModelSelectors), for instance for shirts, pants, skirts, etc. In this case the selected shirt model will still overlay the user’s body, when the category changes and the user starts selecting pants, for instance.
13. The CategorySelector-component provides gesture control for changing models and categories, and takes care of switching model categories (e.g. for shirts, pants, ties, etc.) for the same user. There is already a CategorySelector for the 1st user (player-index 0) in the scene, so you don’t need to add more.
14. If you plan for multi-user fitting-room, add one CategorySelector-component for each other user. You may also need to add the respective ModelSelector-components for model categories that will be used by these users, too.
15. Run the scene to ensure that your models can be selected in the list and they overlay the user’s body correctly. Experiment a bit if needed, to find the values of scale-factors and vertical-offset settings that provide the best model-to-body overlay.
16. If you want to turn off the cursor interaction in the scene, disable the InteractionManager-component of KinectController-game object. If you want to turn off the gestures (swipes for changing models & hand raises for changing categories), disable the respective settings of the CategorySelector-component. If you want to turn off or change the T-pose calibration, change the ‘Player calibration pose’-setting of KinectManager-component.
17. Last, but not least: You can use the FittingRoomDemo2 scene, to utilize or experiment with a single overlay model. Adjust the scale-factor settings of AvatarScaler to fine tune the scale of the whole body, arm- or leg-bones of the model, if needed. Enable the ‘Continuous Scaling’ setting, if you want the model to rescale on each Update.

How to set up the sensor height and angle

There are two very important settings of the KinectManager-component that influence the calculation of users’ and joints’ space coordinates, hence almost all user-related visualizations in the demo scenes. Here is how to set them correctly:

1. Set the ‘Sensor height’-setting to how high above the ground the sensor is, in meters. The default value is 1, i.e. 1.0 meter above the ground, which may not match your setup.
2. Set the ‘Sensor angle’-setting to the tilt angle of the sensor, in degrees. Use positive degrees if the sensor is tilted up, and negative degrees if it is tilted down. The default value is 0, i.e. the sensor is not tilted at all.
3. Because it is not so easy to estimate the sensor angle manually, you can use the ‘Auto height angle’-setting to find out this value. Select ‘Show info only’-option and run the demo-scene. Then stand in front of the sensor. The information on screen will show you the rough height and angle-settings, as estimated by the sensor itself. Repeat this 2-3 times and write down the values you see.
4. Finally, set the ‘Sensor height’ and ‘Sensor angle’ to the estimated values you find best. Set the ‘Auto height angle’-setting back to ‘Dont use’.
5. If you find the height and angle values estimated by the sensor good enough, or if your sensor setup is not fixed, you can set the ‘Auto height angle’-setting to ‘Auto update’. It will update the ‘Sensor height’ and ‘Sensor angle’-settings continuously, when there are users in the field of view of the sensor.

Are there any events, when a user is detected or lost

There are no special event handlers for user-detected/user-lost events, but there are two other options you can use:

1. In the Update()-method of your script, invoke the GetUsersCount()-function of KinectManager and compare the returned value to a previously saved value, like this:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    int usersNow = manager.GetUsersCount();

    if(usersNow > usersSaved)
    {
        // new user detected
    }
    if(usersNow < usersSaved)
    {
        // user lost
    }

    usersSaved = usersNow;
}

2. Create a class that implements KinectGestures.GestureListenerInterface and add it as a component to a game object in the scene. It has the methods UserDetected() and UserLost(), which you can use as user-event handlers. The other methods could be left empty or return the default value (true). See the SimpleGestureListener or GestureListener-classes, if you need an example.
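
For reference, a minimal user-event listener could look like the sketch below. The method set follows KinectGestures.GestureListenerInterface, as implemented by SimpleGestureListener; please double-check the exact parameter lists against your version of the K2-asset, as they may differ between releases.

using UnityEngine;

public class UserEventListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public void UserDetected(long userId, int userIndex)
    {
        // new user detected - handle it here (e.g. start detecting gestures for this user)
        Debug.Log("User detected: " + userId);
    }

    public void UserLost(long userId, int userIndex)
    {
        // user lost - handle it here (e.g. clean up user-related state)
        Debug.Log("User lost: " + userId);
    }

    // the gesture-related methods may stay empty or return the default value (true)
    public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos)
    {
    }

    public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        return true;
    }

    public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint)
    {
        return true;
    }
}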

How to process discrete gestures like swipes and poses like hand-raises

Most of the gestures, like SwipeLeft, SwipeRight, Jump, Squat, etc. are discrete. All poses, like RaiseLeftHand, RaiseRightHand, etc. are also considered as discrete gestures. This means these gestures may report progress or not, but all of them get completed or cancelled at the end. Processing these gestures in a gesture-listener script is relatively easy. You need to do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureCompleted() add code to process the discrete gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is detected - process it (for instance, set a flag or execute an action)
}

3. In the GestureCancelled()-function, add code to process the cancellation of the gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is cancelled - process it (for instance, clear the flag)
}

If you need code samples, see the SimpleGestureListener.cs or CubeGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as a component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to process continuous gestures, like ZoomIn, ZoomOut and Wheel

Some of the gestures, like ZoomIn, ZoomOut and Wheel, are continuous. This means these gestures never get fully completed, but only report progress greater than 50%, as long as the gesture is detected. To process them in a gesture-listener script, do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureInProgress() add code to process the continuous gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    if(progress > 0.5f)
    {
        // gesture is detected - process it (for instance, set a flag, get zoom factor or angle)
    }
    else
    {
        // gesture is no more detected - process it (for instance, clear the flag)
    }
}

3. In the GestureCancelled()-function, add code to process the end of the continuous gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is cancelled - process it (for instance, clear the flag)
}

If you need code samples, see the SimpleGestureListener.cs or ModelGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as a component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to utilize visual (VGB) gestures in the K2-asset

The visual gestures, created by the Visual Gesture Builder (VGB) can be used in the K2-asset, too. To do it, follow these steps (and see the VisualGestures-game object and its components in the KinectGesturesDemo-scene):

1. Copy the gestures’ database (xxxxx.gbd) to the Resources-folder and rename it to ‘xxxxx.gbd.bytes’.
2. Add the VisualGestureManager-script as a component to a game object in the scene (see VisualGestures-game object).
3. Set the ‘Gesture Database’-setting of VisualGestureManager-component to the name of the gestures’ database, used in step 1 (‘xxxxx.gbd’).
4. Create a visual-gesture-listener to process the gestures, and add it as a component to a game object in the scene (see the SimpleVisualGestureListener-script).
5. In the GestureInProgress()-function of the gesture-listener add code to process the detected continuous gestures and in the GestureCompleted() add code to process the detected discrete gestures.

How to change the language or grammar for speech recognition

1. Make sure you have installed the needed language pack from here.
2. Set the ‘Language code’-setting of SpeechManager-component, as to the grammar language you need to use. The list of language codes can be found here (see ‘LCID Decimal’).
3. Make sure the ‘Grammar file name’-setting of SpeechManager-component corresponds to the name of the grxml.txt-file in Assets/Resources.
4. Open the grxml.txt-grammar file in Assets/Resources and set its ‘xml:lang’-attribute to the language that corresponds to the language code in step 2.
5. Make the other needed modifications in the grammar file and save it.
6. (Optional since v2.7) Delete the grxml-file with the same name in the root-folder of your Unity project (the parent folder of Assets-folder).
7. Run the scene to check, if speech recognition works correctly.

How to run the fitting-room or overlay demo in portrait mode

1. First off, add 9:16 (or 3:4) aspect-ratio to the Game view’s list of resolutions, if it is missing.
2. Select the 9:16 (or 3:4) aspect ratio of Game view, to set the main-camera output in portrait mode.
3. Open the fitting-room or overlay-demo scene and select the BackgroundImage-game object.
4. Enable its PortraitBackground-component (available since v2.7) and save the scene.
5. Run the scene to try it out in portrait mode.

How to build an exe from ‘Kinect-v2 with MS-SDK’ project

By default Unity builds the exe (and the respective xxx_Data-folder) in the root folder of your Unity project. It is recommended to use another, empty folder instead. The reason is that building the exe in the folder of your Unity project may cause conflicts between the native libraries used by the editor and the ones used by the exe, if they have different architectures (for instance, if the editor is 64-bit but the exe is 32-bit).

Also, before building the exe, make sure you’ve copied the Assets/Resources-folder from the K2-asset to your Unity project. It contains the needed native libraries and custom shaders. Optionally you can remove the unneeded zip.bytes-files from the Resources-folder. This will save a lot of space in the build. For instance, if you target Kinect-v2 only, you can remove the Kinect-v1 and OpenNi2-related zipped libraries. The exe won’t need them anyway.

How to make the Kinect-v2 package work with Kinect-v1

If you have only the Kinect v2 SDK or only the Kinect v1 SDK installed on your machine, the KinectManager should detect the installed SDK and sensor correctly. But in case you have both Kinect SDK 2.0 and SDK 1.8 installed simultaneously, the KinectManager will give preference to the Kinect v2 SDK and your Kinect v1 will not be detected. The reason for this is that SDK 2.0 can also be used in offline mode, i.e. without a sensor attached. In this case you can emulate the sensor by playing recorded files in Kinect Studio 2.0.

If you want to make the KinectManager utilize the appropriate interface, depending on the currently attached sensor, open KinectScripts/Interfaces/Kinect2Interface.cs and at its start change the value of ‘sensorAlwaysAvailable’ from ‘true’ to ‘false’. After this, close and reopen the Unity editor. Then, on each start, the KinectManager will try to detect which sensor is currently attached to your machine and use the respective sensor interface. This way you could switch the sensors (Kinect v2 or v1), as to your preference, but will not be able to use the offline mode for Kinect v2. To utilize the Kinect v2 offline mode again, you need to switch ‘sensorAlwaysAvailable’ back to true.

What do the options of ‘Compute user map’-setting mean

Here are one-line descriptions of the available options:

  • RawUserDepth means that only the raw depth-image values coming from the sensor will be available, via the GetRawDepthMap()-function for instance (see the sketch below);
  • BodyTexture means that GetUsersLblTex()-function will return the white image of the tracked users;
  • UserTexture will cause GetUsersLblTex() to return the tracked users’ histogram image;
  • CutOutTexture, combined with enabled ‘Compute color map‘-setting, means that GetUsersLblTex() will return the cut-out image of the users.

All these options (except RawUserDepth) can be tested instantly, if you enable the ‘Display user map‘-setting of KinectManager-component, too.
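
For the RawUserDepth-option, a minimal sketch of reading the raw depth map could look like this (it is assumed here that GetRawDepthMap() returns one ushort value per depth pixel, in millimeters):

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    // raw depth values - assumed to be one ushort per depth pixel, in millimeters
    ushort[] rawDepth = manager.GetRawDepthMap();

    if(rawDepth != null && rawDepth.Length > 0)
    {
        // e.g. look at the depth value in the middle of the depth frame
        ushort midDepth = rawDepth[rawDepth.Length / 2];
        // do something with the raw depth values
    }
}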

How to set up the user detection order

There is a ‘User detection order’-setting of the KinectManager-component. You can use it to determine how the user detection should be done, depending on your requirements. Here are short descriptions of the available options:

  • Appearance is selected by default. It means that the player indices are assigned in order of user appearance. The first detected user gets player index 0, the next one gets index 1, etc. If user 0 gets lost, the remaining users are not reordered. The next newly detected user will take its place;
  • Distance means that player indices are assigned depending on distance of the detected users to the sensor. The closest one will get player index 0, the next closest one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the distances to the remaining users;
  • Left to right means that player indices are assigned depending on the X-position of the detected users. The leftmost one will get player index 0, the next leftmost one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the X-positions of the remaining users;

The user-detection area can be further limited with the ‘Min user distance’, ‘Max user distance’ and ‘Max left right distance’-settings, in meters from the sensor. The maximum number of detected users can be limited by lowering the value of the ‘Max tracked users’-setting.

How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS

If you select the MainCamera in the KinectFittingRoom1-demo scene (in v2.10 or above), you will see a component called UserBodyBlender. It is responsible for mixing the clothing model (overlaying the user) with the real-world objects (including the user’s body parts), depending on the distance to the camera. For instance, if your arms or other real-world objects are in front of the model, you will see them overlaying the model, as expected.

You can enable the component to turn on the user’s body-blending functionality. The ‘Depth threshold’-setting may be used to adjust the minimum distance in front of the model (in meters). It determines when a real-world object will become visible. It is set by default to 0.1m, but you could experiment a bit to see if another value works better for your models. If the scene performance (in terms of FPS) is not sufficient, and body-blending is not important, you can disable the UserBodyBlender-component to increase performance.

How to build Windows-Store (UWP-8.1) application

To do it, you need at least v2.10.1 of the K2-asset. To build for ‘Windows store’, first select ‘Windows store’ as platform in ‘Build settings’, and press the ‘Switch platform’-button. Then do as follows:

1. Unzip Assets/Plugins-Metro.zip. This will create Assets/Plugins-Metro-folder.
2. Delete the KinectScripts/SharpZipLib-folder.
3. Optionally, delete all zip.bytes-files in Assets/Resources. You won’t need these libraries in the Windows-Store build. All Kinect-v2 libraries reside in the Plugins-Metro-folder.
4. Select ‘File / Build Settings’ from the menu. Add the scenes you want to build. Select ‘Windows Store’ as platform. Select ‘8.1’ as target SDK. Then click the Build-button. Select an empty folder for the Windows-store project and wait for the build to complete.
5. Go to the build-folder and open the generated solution (.sln-file) with Visual studio.
6. Change the default ARM-processor target to ‘x86’. The Kinect sensor is not compatible with ARM processors.
7. Right click ‘References’ in the Project-windows and select ‘Add reference’. Select ‘Extensions’ and then WindowsPreview.Kinect and Microsoft.Kinect.Face libraries. Then press OK.
8. Open the solution’s manifest-file ‘Package.appxmanifest’, go to the ‘Capabilities’-tab and enable ‘Microphone’ and ‘Webcam’ in the left panel. Save the manifest. This is needed to enable the sensor, when the UWP app starts up. Thanks to Yanis Lukes (aka Pendrokar) for providing this info!
9. Build the project. Run it, to test it locally. Don’t forget to turn on Windows developer mode on your machine.

How to work with multiple users

Kinect-v2 can fully track up to 6 users simultaneously. That’s why many of the Kinect-related components, like AvatarController, InteractionManager, model & category-selectors, gesture & interaction listeners, etc. have a setting called ‘Player index’. If set to 0, the respective component will track the 1st detected user. If set to 1, the component will track the 2nd detected user. If set to 2 – the 3rd user, etc. The order of user detection may be specified with the ‘User detection order’-setting of the KinectManager (component of the KinectController-game object).
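
In your own scripts you can work with a specific user in a similar way. The sketch below assumes the KinectManager provides a GetUserIdByIndex()-function (used internally by components like the AvatarController) to map a player index to a user ID – please verify the exact function name in your version of the asset:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    int playerIndex = 1;  // 0 - 1st detected user, 1 - 2nd user, etc.

    // assumed API: maps the player index to the respective user ID (0 if no such user)
    long userId = manager.GetUserIdByIndex(playerIndex);

    if(userId != 0 && manager.IsJointTracked(userId, (int)KinectInterop.JointType.Head))
    {
        Vector3 headPos = manager.GetJointPosition(userId, (int)KinectInterop.JointType.Head);
        // do something with the 2nd user's head position
    }
}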

How to use the FacetrackingManager

The FacetrackingManager-component may be used for several purposes. First, adding it as a component of the KinectController will provide more precise neck and head tracking, when there are avatars in the scene (humanoid models utilizing the AvatarController-component). If HD face tracking is needed, you can enable the ‘Get face model data’-setting of the FacetrackingManager-component. Keep in mind that using HD face tracking will lower performance and may cause memory leaks, which can make Unity crash after multiple scene restarts. Please use this feature carefully.

If ‘Get face model data’ is enabled, don’t forget to assign a mesh object (e.g. a Quad) to the ‘Face model mesh’-setting. Pay attention to the ‘Textured model mesh’-setting as well. The available options are: ‘None’ – the mesh will not be textured; ‘Color map’ – the mesh will get its texture from the color-camera image, i.e. it will reproduce the user’s face; ‘Face rectangle’ – the face mesh will be textured with its material’s Albedo texture, whereas the UV coordinates will match the detected face rectangle.

Finally, you can use the FacetrackingManager public API to get a lot of face-tracking data, like the user’s head position and rotation, animation units, shape units, face model vertices, etc.
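
As an illustration only – the sketch below assumes the FacetrackingManager exposes an Instance-property and GetHeadPosition()/GetHeadRotation()-functions; these names are assumptions, so please verify them against the component’s source or the online documentation:

// illustrative sketch - the method names below are assumptions, check the FacetrackingManager API
FacetrackingManager faceManager = FacetrackingManager.Instance;

if(faceManager && faceManager.IsFaceTrackingInitialized())
{
    Vector3 headPos = faceManager.GetHeadPosition(false);     // assumed: head position (not mirrored)
    Quaternion headRot = faceManager.GetHeadRotation(false);  // assumed: head rotation (not mirrored)
    // do something with the head position and rotation
}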

How to use the background-removal image in FittingRoom-demo

To replace the color-camera background in the FittingRoom-demo scene with the background-removal image, please do as follows:

1. Add the KinectScripts/BackgroundRemovalManager.cs-script as component to the KinectController-game object in the scene.
2. Make sure the ‘Compute user map’-setting of KinectManager (component of the KinectController, too) is set to ‘Body texture’, and the ‘Compute color map’-setting is enabled.
3. Open the OverlayController-script (also a component of KinectController-game object), and near the end of script replace: ‘backgroundImage.texture = manager.GetUsersClrTex();’ with: ‘backgroundImage.texture = BackgroundRemovalManager.Instance.GetForegroundTex();’.
4. Select the MainCamera in Hierarchy, and disable its UserBodyBlender-component.

How to move the FPS-avatars of positionally tracked users in VR environment

There are two options for moving first-person avatars in VR-environment (the 1st avatar-demo scene in K2VR-asset):

1. If you use the Kinect’s positional tracking, turn off the Oculus/Vive positional tracking, because their coordinate systems are different from Kinect’s.
2. If you prefer to use the Oculus/Vive positional tracking:
– enable the ‘External root motion’-setting of the AvatarController-component of the avatar’s game object. This will disable the avatar motion with respect to the Kinect space coordinates.
– enable the HeadMover-component of avatar’s game object, and assign the MainCamera as ‘Target transform’, to follow the Oculus/Vive position.

Now try to run the scene. If there are issues with the MainCamera used as positional target, do as follows:
– add an empty game object to the scene. It will be used to follow the Oculus/Vive positions.
– assign the newly created game object to the ‘Target transform’-setting of the HeadMover-component.
– add a script to the newly created game object, and in that script’s Update()-function programmatically set the object’s transform position to the current Oculus/Vive position, as sketched below.
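
Here is a minimal sketch of such a script. It assumes the MainCamera (or another transform you assign) is the one driven by the Oculus/Vive HMD, which is the usual Unity VR setup:

using UnityEngine;

public class FollowHmdPosition : MonoBehaviour
{
    public Transform hmdTransform;  // e.g. the MainCamera transform, driven by the Oculus/Vive HMD

    void Update()
    {
        if(hmdTransform)
        {
            // copy the current HMD position to this object (used as 'Target transform' of HeadMover)
            transform.position = hmdTransform.position;
        }
    }
}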

How to create your own gestures

For gesture recognition there are two options – visual gestures (created with the Visual Gesture Builder, part of Kinect SDK 2.0) and programmatic gestures, implemented in code in KinectGestures.cs. The latter are based mainly on the positions of the different joints and how they relate to each other at different moments in time.

Here is a video on creating and checking for visual gestures. Please also check GesturesDemo/VisualGesturesDemo-scene, to see how to use visual gestures in Unity. One issue with the visual gestures is that they usually work in 32-bit builds only.

The programmatic gestures should be coded in C#, in KinectGestures.cs (or in a class that extends it). To get started with coding programmatic gestures, first read the ‘How to use gestures…’-pdf document in the _Readme-folder of the K2-asset. It may seem difficult at first, but it’s only a matter of time and experience to become an expert in coding gestures. You have direct access to the jointsPos-array, containing all joint positions, and the jointsTracked-array, containing the respective joint-tracking states. Keep in mind that all joint positions are in world coordinates, in meters. Some helper functions are also at your disposal, like SetGestureJoint(), SetGestureCancelled(), CheckPoseComplete(), etc. Maybe I’ll write a separate tutorial about gesture coding in the near future.
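
To give an idea of the coding style, here is a rough sketch of a hand-raise-like pose, modeled after the existing cases in KinectGestures.cs. The gesture name, the joint-index variables and the exact helper signatures are approximations – copy an existing case from your version of the script and adapt it, rather than pasting this one as-is:

// sketch of a new pose case inside the gesture switch-statement of KinectGestures.cs
case Gestures.MyRaiseRightHand:  // hypothetical gesture, added to the Gestures-enum first
    switch(gestureData.state)
    {
        case 0:  // waiting for the pose to appear
            if(jointsTracked[rightHandIndex] && jointsTracked[rightShoulderIndex] &&
               (jointsPos[rightHandIndex].y - jointsPos[rightShoulderIndex].y) > 0.1f)
            {
                // helper call - signature approximated, see the existing cases
                SetGestureJoint(ref gestureData, timestamp, rightHandIndex, jointsPos[rightHandIndex]);
            }
            break;

        case 1:  // pose detected - check whether it is still held, to complete the gesture
            bool isInPose = jointsTracked[rightHandIndex] && jointsTracked[rightShoulderIndex] &&
                            (jointsPos[rightHandIndex].y - jointsPos[rightShoulderIndex].y) > 0.1f;

            // helper call - signature approximated; the last parameter is the time to hold the pose
            CheckPoseComplete(ref gestureData, timestamp, jointsPos[gestureData.joint], isInPose, 0f);
            break;
    }
    break;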

The demo scenes related to utilizing programmatic gestures are again in the GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about continuous gestures.

More tips regarding listening for gestures in Unity scenes can be found above. See the tips for discrete, continuous and visual gestures (which could be discrete and continuous, as well).

How to enable or disable the tracking of inferred joints

First, keep in mind that:
1. There is an ‘Ignore inferred joints’-setting of the KinectManager. The KinectManager is usually a component of the KinectController-game object in the demo scenes.
2. There is a public API method of KinectManager, called IsJointTracked(). This method is utilized by various scripts & components in the demo scenes.

Here is how it works:
The Kinect SDK tracks the positions of all body joints together with their respective tracking states. These states can be Tracked, NotTracked or Inferred. When the ‘Ignore inferred joints’-setting is enabled, the IsJointTracked()-method returns true when the tracking state is Tracked or Inferred, and false when the state is NotTracked, i.e. both tracked and inferred joints are considered valid. When the setting is disabled, the IsJointTracked()-method returns true when the tracking state is Tracked, and false when the state is NotTracked or Inferred, i.e. only the really tracked joints are considered valid.

How to build exe with the Kinect-v2 plugins provided by Microsoft

In case you’re targeting Kinect-v2 sensor only, and would like to skip packing all native libraries that come with the K2-asset in the build, as well as unpacking them into the working directory of the executable afterwards, do as follows:

1. Download and unzip the Kinect-v2 Unity Plugins from here.
2. Open your Unity project. Select ‘Assets / Import Package / Custom Package’ from the menu and import only the Plugins-folder from ‘Kinect.2.0.1410.19000.unitypackage’. You can find it in the unzipped package from p.1 above. Please don’t import anything from the ‘Standard Assets’-folder of unitypackage. All needed standard assets are already present in the K2-asset.
3. If you are using the FacetrackingManager in your scenes, import the Plugins-folder from ‘Kinect.Face.2.0.1410.19000.unitypackage’ as well. If you are using visual gestures (i.e. VisualGestureManager in your scenes), import the Plugins-folder from ‘Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage’, too. Again, please don’t import anything from the ‘Standard Assets’-folder of unitypackages. All needed standard assets are already present in the K2-asset.
4. Delete all zipped libraries in the Assets/Resources-folder. You can see them as .zip-files in the Assets-window, or as .zip.bytes-files in the Windows explorer. Delete the Plugins-Metro (zip-file) in the Assets-folder, too. All these zipped libraries are no longer needed at run-time.
5. Delete all dlls in the root-folder of your Unity project. The root-folder is the parent folder of the Assets-folder of your project and is not visible in the Editor. Delete the NuiDatabase- and vgbtechs-folders in the root-folder, too. These dlls and folders are no longer needed, because they are part of the project’s Plugins-folder now.
6. Try to run the Kinect-v2 related scenes in the project, to make sure they still work as expected.
7. If everything is OK, build the executable again. This should work for both x86 and x86_64-architectures, as well as for Windows-Store, SDK 8.1.

How to build Windows-Store (UWP-10) application

To do it, you need at least v2.12.2 of the K2-asset. Then follow these steps:

1. Delete the KinectScripts/SharpZipLib-folder. It is not needed for UWP. If you leave it, it will cause syntax errors later.
2. Open ‘File / Build Settings’ in Unity editor, switch to ‘Windows store’ platform and select ‘Universal 10’ as SDK. Optionally enable the ‘Unity C# Project’ and ‘Development build’-settings, if you’d like to edit the Unity scripts in Visual studio later.
3. Press the ‘Build’-button, select output folder and wait for Unity to finish exporting the UWP-Visual studio solution.
4. Close or minimize the Unity editor, then open the exported UWP solution in Visual studio.
5. Select x86 or x64 as target platform in Visual studio.
6. Open the ‘Package.appxmanifest’ of the main project, and on the ‘Capabilities’-tab enable ‘Microphone’ & ‘Webcam’. These may also be enabled in the Windows-store Player settings in Unity.
7. If you have enabled the ‘Unity C# Project’-setting in p.2 above, right click on ‘Assembly-CSharp’-project in the Solution explorer, select ‘Properties’ from the context menu, and then select ‘Windows 10 Anniversary Edition (10.0; Build 14393)’ as ‘Target platform’. Otherwise you will get compilation errors.
8. Build and run the solution, on the local or remote machine. It should work now.

Please mind that the FacetrackingManager and SpeechRecognitionManager-components, and hence the scenes that use them, will not work with the current version of the K2-UWP interface.

How to run the projector-demo scene (v2.13 and later)

To run the KinectProjectorDemo-scene, you need to calibrate the projector to the Kinect sensor first. To do it, please follow these steps:

1. Go to the RoomAliveToolkit-GitHub page and follow the instructions of ‘ProCamCalibration README’ there.
2. To do the calibration, you need first to build the ProCamCalibration-project with Microsoft Visual Studio 2015 or later. For your convenience, here is a ready-made build of the needed executables, made with VS-2015.
3. After the ProCamCalibration finishes, copy the generated calibration xml-file to KinectDemos/ProjectorDemo/Resources.
4. Open the KinectProjectorDemo-scene in the Unity editor, select the MainCamera-game object in Hierarchy, and drag the calibration xml-file generated by ProCamCalibrationTool to the ‘Calibration Xml’-setting of its ProjectorCamera-component. Please also check if the ‘Proj name in config’-setting is the same as the projector name set in the calibration xml-file.
5. Run the scene and walk in front of the Kinect sensor, to check if the skeleton projection gets overlaid correctly by the projector onto your body.

How to render the background-removal image or the color-camera image on scene background

First off, to replace the color-camera background in the FittingRoom-demo scene with the background-removal image, please follow the steps of the FittingRoom-related tip above.
In all other demo-scenes you can put the background-removal image or the color-camera image on the scene background, by following this (rather complex) procedure:

1. Make sure there is a BackgroundImage-object in the scene. If there isn’t any, create an empty game object, name it BackgroundImage, and add a ‘GUI Texture’-component to it. Set its Transform position to (0.5, 0.5, 0) to center it on the screen, then set its Y-scale to -1, to flip the texture vertically (Unity textures have their origin at the bottom-left corner, while Kinect’s are at the top-left).

2. Add the ForegroundToImage-script as a component to the BackgroundImage-game object, as well as the BackgroundRemovalManager-component to the KinectController-game object. ForegroundToImage will use the foreground texture, created by the BackgroundRemovalManager, and set it as texture of the GUI-Texture component. If you want to render the color-camera image instead, add the BackgroundColorImage-component to the KinectController-game object and set its ‘Background Image’-setting to reference the BackgroundImage-game object from p.1.

3. (The tricky part) Two cameras are needed to display an image on the scene background – one to render the background, and a second one to render all objects on top of it. Cameras in Unity have a setting called CullingMask, where you can set the layers rendered by each camera. In this regard, an extra layer is needed for the background. Select ‘Add layer’ from the Layer-dropdown in the top-right corner of the Inspector and add ‘Background Layer’. Then go back to the BackgroundImage and set its layer to ‘Background Layer’. Unfortunately, when Unity exports packages, it doesn’t export the extra layers, which is why the extra layers are missing in the demo-scenes.

4. Make sure there is a BackgroundCamera-object in the scene. If there isn’t any, add a camera-object to the scene and name it BackgroundCamera. Set its CullingMask to ‘Background Layer’ only. Other important settings are ‘Depth’, which determines the order of rendering, and ‘Clear flags’ that determines if the camera clears the output texture or not. Set the ‘Clear flags’ of BackgroundCamera to ‘Skybox’ or ‘Solid color’, and ‘Depth’ to 0. This means this camera will render first, and it will clear the output texture before that. Also, don’t forget to disable its AudioListener-component. Otherwise expect endless warnings regarding the multiple audio listeners, when the scene is run.

5. Select the ‘Main Camera’-object in the scene. Set its ‘Clear flags’ to ‘Depth only’ and its ‘Depth’ to 1. This means it will not clear the output image and will render second. In the ‘Culling mask’-setting, un-select the background layer and leave all other layers selected. This way the BackgroundCamera renders first and draws only the background layer, while the MainCamera renders second and draws everything else on top of it.

 

489 thoughts on “Kinect v2 Tips, Tricks and Examples”

  1. Hello,
    Just to say that you’ve got a great package.

    I am trying to control a 2D puppet-like character with Kinect. How can I constrain the joint movements, so that the arms and legs don’t get into strange positions?

    Tomislav

    • Hi, there is a setting of KinectManager called ‘Ignore Z-Coordinates’. You can enable it to set 2D mode for the detected movements and joint orientations. KinectManager is component of the KinectController-game object in all demo scenes.

    • The BackgroundRemovalManager component has a setting called ‘Color camera resolution’. Make sure this setting is enabled in your scene, to get the maximum user-image coverage. The non-tracked strips left and right are unfortunately normal, even in this case, because the Kinect depth image is smaller than the color one, and obviously doesn’t cover these areas.

  2. Hi Rumen!

    I am trying to dynamically load rigged models into my scene. I followed your guide to add new models. (rigging – unity mecanim – avatar controller) It works perfectly if the object is in the scene during startup.

    But if I load the model from the resources folder with
    GameObject model = Instantiate(Resources.Load("riggedGirl", typeof(GameObject))) as GameObject;
    and use AddComponent to attach the AvatarController, the model won’t move.

    does the kinectController check for avatar controllers during startup?
    do I need to tell the kinectController which avatar controller it should control?

    cheers

    • Hi Achim, see the LoadDressingModel()-method of the ModelSelector.cs-script. It is the component in the 1st fitting-room demo that instantiates the selected model. I suppose you have forgotten to add the instantiated avatar to the list of the KinectManager’s avatar controllers.

  3. Hi Rumen, can you explain to me how to enable Kinect to detect the user when he turns around? I saw the Kinect controller already handles it, but it’s not working. I used it in the fitting-room demo and the dress doesn’t rotate 360 degrees either. Thanks.

  4. Hi Rumen, can you explain to me how to detect the user when he turns around? I saw the Kinect controller already handles it, but in the fitting-room demo I enabled the flag to allow turning around, and I also print the calibration text to show me whether it is FACE or BACK – and it always prints FACE, so the dress doesn’t rotate 360 degrees either. Thanks.

    • Hi, this setting was only experimental. Unfortunately, it’s not working correctly (yet). Its purpose was to overcome the SDK “feature” of tracking the users correctly only when they are facing the sensor. For the time being, you’d need to warn (or not allow) the users to turn more than ~80 degrees left or right. Sorry for this limitation.

  5. Hello, I have problems using more than one user in PhotoBooth. I have added 6 JointOverlayers, 6 InteractionManagers and 6 PhotoBoothControllers – one for each user – and configured them. But I can’t get every user to have his own model (Medusa, Batman, etc…). Only one user can have a mask at a time.
    How can I make 6 users have their own independent masks?
    Sorry for my English.

    • The PhotoBoothController references static mask-models in the scene, as configured in Inspector. In your case you should instead make the available models as prefabs, and then instantiate them at run-time to fill the respective mask-arrays (headMasks, leftHandMasks, chestMasks) of PhotoBoothController.cs. I’m also not sure why would you need six InteractionManagers. Usually, only one user would be allowed to control the photo-shooting.

  6. Hey Rumen !

    In my application I use the KinectRecorderPlayer-script. But while the script is playing, I don’t get the UserDetected or UserLost events from KinectGestures.GestureListenerInterface (which I implemented in my script) anymore. My plan was to play recorded avatar movements until a user is detected, then stop the KinectRecorderPlayer, and when the user is lost, start the recorded movements again. Is this possible without making big changes in KinectInterop and/or KinectManager?
    thx !

    • Hi, sorry, I cannot test your issue right now, because I’m at a conference this week. But I remember I had requests for similar use cases before. If you look in the KinectDemos/RecorderDemo/Scripts-folder, you will see there is another script component called PlayerDetectorController.cs in there, which you can use in combination with KinectRecorderPlayer, to do what you need. Here is some more info regarding this component: https://ratemt.com/k2docs/PlayerDetectorController.html

  7. Hi,

    I just bought the Kinect v2 Examples on the Unity Asset store, and I like them so far 🙂

    I was wondering how to combine the Avatars Demo with the fourth Face Tracking Demo. I’ve tried adding the Model Face Controller and the Facetracking Manager to the Avatar object and playing around with the settings but it doesn’t seem to work. Would you have an example/how-to for this?

    • Hi, first you need an avatar with a rigged head – in means of eyebrows, eyelids, lips and jaw. See the FaceRigged-model in the 4th FT-demo for reference. In this regard, the avatar-model in the AvatarsDemo has only the jaw rigged, as far as I see. So, you could animate only its jaw-bone with the ModelFaceController. The FacetrackingManager may be a component of the KinectController-game object, as in the demo, not of the avatar’s object. Then you need to assign the jaw-bone to the respective setting of the MFC, and then adjust the rotation axis and limits, if needed. Hope this is enough information for a start.

      • I have a similar problem to Christian’s. We want to combine the Avatar Demo with the first Face-Tracking Demo, so that the avatar gets the face of the user. Should I also take the approach of assigning the jaw-bone and then adjusting the rotation axis and limits? As the ModelFaceController is not part of the 1st Face-Tracking Demo, it should be easier? My main problem is to get the correct size and coordinates of the avatar’s face, and to map the user’s face onto this position.

      • In your case I would suggest to parent the FaceModelMesh-quad to the neck or head-joint of the avatar, and to disable the ‘Move model mesh’-setting of the FacetrackingManager-component. Then experiment a bit to adjust the quad to fit the avatar’s head, as good as possible.

  8. Hi,
    When I connect a Logitech webcam to the PC and use a WebCamTexture with it to show the image (because I must put the camera far away), the Kinect starts dropping frames seriously… When I stop the WebCamTexture, the problem still exists…
    How can I solve this problem?
    Thanks.

    • I made a mistake – now when I stop the WebCamTexture, it becomes correct again.
      What can I do about the first problem?

      • Kinect-v2 needs a dedicated USB-3 port, and has its own color camera as well. Why do you need a second color camera?
        Nevertheless, you could try to stop the unneeded streams and textures, for instance disable ‘Compute color map’-setting, and set ‘Compute user map’ to ‘Raw user depth’.

  9. Hi.
    I want to limit the z-axis movement of the avatar in KinectAvatarsDemo1.
    So I want to make the avatars move only in the x-axis.
    Is there a simple setting?
    Or what script should I change?

    • Oh, I found it in your older answer!
      It was in the Kinect2AvatarPos()-function inside AvatarController.cs.

      • You can also use the ‘Ignore Z-Coordinates’-setting of the KinectManager (usually component of KinectController-game object in the scene).
