Kinect v2 Tips, Tricks and Examples

After answering so many different questions about how to use various parts and components of the “Kinect v2 with MS-SDK”-package, I think it would be easier if I share some general tips, tricks and examples. I’m going to add more tips and tricks to this article over time. Feel free to drop by from time to time, to check out what’s new.

And here is a link to the Online documentation of the K2-asset.

Table of Contents:

What is the purpose of all managers in the KinectScripts-folder
How to use the Kinect v2-Package functionality in your own Unity project
How to use your own model with the AvatarController
How to make the avatar hands twist around the bone
How to utilize Kinect to interact with GUI buttons and components
How to get the depth- or color-camera textures
How to get the position of a body joint
How to make a game object rotate as the user
How to make a game object follow user’s head position and rotation
How to get the face-points’ coordinates
How to mix Kinect-captured movement with Mecanim animation
How to add your models to the FittingRoom-demo
How to set up the sensor height and angle
Are there any events, when a user is detected or lost
How to process discrete gestures like swipes and poses like hand-raises
How to process continuous gestures, like ZoomIn, ZoomOut and Wheel
How to utilize visual (VGB) gestures in the K2-asset
How to change the language or grammar for speech recognition
How to run the fitting-room or overlay demo in portrait mode
How to build an exe from ‘Kinect-v2 with MS-SDK’ project
How to make the Kinect-v2 package work with Kinect-v1
What do the options of ‘Compute user map’-setting mean
How to set up the user detection order
How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS
How to build Windows-Store (UWP-8.1) application
How to work with multiple users
How to use the FacetrackingManager
How to add background image to the FittingRoom-demo
How to move the FPS-avatars of positionally tracked users in VR environment
How to create your own gestures
How to enable or disable the tracking of inferred joints
How to build exe with the Kinect-v2 plugins provided by Microsoft
How to build Windows-Store (UWP-10) application
How to run the projector-demo scene
How to render background and the background-removal image on the scene background
How to run the demo scenes on non-Windows platforms
How to workaround the user tracking issue, when the user is turned back
How to get the full scene depth image as texture
Some useful hints regarding AvatarController and AvatarScaler

What is the purpose of all managers in the KinectScripts-folder:

The managers in the KinectScripts-folder are components. You can utilize them in your projects, depending on the features you need:

  • KinectManager is the most general component, needed to interact with the sensor and to get basic data from it, like the color and depth streams, and the bodies and joints’ positions in meters, in Kinect space;
  • AvatarController transfers the detected joint positions and orientations to a rigged skeleton;
  • CubemanController is similar, but it works with transforms and lines to represent the joints and bones, in order to make locating tracking issues easier;
  • FacetrackingManager deals with the face points and head/neck orientation. It is used internally by the KinectManager (if available at the same time) to get the precise position and orientation of the head and neck;
  • InteractionManager is used to control the hand cursor and to detect hand grips, releases and clicks;
  • SpeechManager is used for recognition of speech commands.

Pay also attention to the Samples-folder. It contains several simple examples (some of them cited below) you can learn from, use directly or copy parts of the code into your scripts.

How to use the Kinect v2-Package functionality in your own Unity project:

1. Copy folder ‘KinectScripts’ from the Assets-folder of the package to the Assets-folder of your project. This folder contains the package scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the Assets-folder of the package to the Assets-folder of your project. This folder contains all needed libraries and resources. You can skip copying the libraries you don’t plan to use (for instance the 64-bit libraries or KinectV1-libraries), to save space.
3. Copy folder ‘Standard Assets’ from the Assets-folder of the package to the Assets-folder of your project. It contains the MS-wrapper classes for Kinect v2. Wait until Unity detects and compiles the newly copied resources and scripts.
4. See this tip as well, if you’d like to build your project with the Kinect-v2 plugins provided by Microsoft.

How to use your own model with the AvatarController:

1. (Optional) Make sure your model is in T-pose. This is the zero-pose of Kinect joint orientations.
2. Select the model-asset in Assets-folder. Select the Rig-tab in Inspector window.
3. Set the AnimationType to ‘Humanoid’ and AvatarDefinition – to ‘Create from this model’.
4. Press the Apply-button. Then press the Configure-button to make sure the joints are correctly assigned. After that exit the configuration window.
5. Put the model into the scene.
6. Add the KinectScript/AvatarController-script as component to the model’s game object in the scene.
7. Make sure your model also has Animator-component, it is enabled and its Avatar-setting is set correctly.
8. Enable or disable (as needed) the MirroredMovement and VerticalMovement-settings of the AvatarController-component. Mind that when mirrored movement is enabled, the model’s transform should have a Y-rotation of 180 degrees.
9. Run the scene to test the avatar model. If needed, tweak some settings of AvatarController and try again.
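If you prefer to set up the avatar from code instead of in the Inspector, here is a minimal sketch that adds and configures the AvatarController at runtime. It assumes the public settings of the component are named playerIndex, mirroredMovement and verticalMovement, as in the K2-asset scripts; check the component source, if the names differ in your version:

```csharp
using UnityEngine;

public class AvatarSetup : MonoBehaviour
{
    void Start()
    {
        // add the AvatarController to the humanoid model at runtime
        AvatarController avatar = gameObject.AddComponent<AvatarController>();

        avatar.playerIndex = 0;          // track the 1st detected user
        avatar.mirroredMovement = true;  // mirror the user's movement
        avatar.verticalMovement = true;  // allow up/down motion (e.g. jumps)

        // mirrored movement requires a Y-rotation of 180 degrees
        if (avatar.mirroredMovement)
        {
            transform.rotation = Quaternion.Euler(0f, 180f, 0f);
        }
    }
}
```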

How to make the avatar hands twist around the bone:

To do it, you need to set ‘Allowed Hand Rotations’-setting of the KinectManager to ‘All’. KinectManager is a component of the MainCamera in the example scenes. This setting has three options: None – turns off all hand rotations, Default – turns on the hand rotations, except the twists around the bone, All – turns on all hand rotations.

How to utilize Kinect to interact with GUI buttons and components:

1. Add the InteractionManager to the main camera or to another persistent object in the scene. It is used to control the hand cursor and to detect hand grips, releases and clicks. A Grip means a closed hand (thumb over the other fingers), a Release – an opened hand, and a hand Click is generated when the user’s hand doesn’t move (stays still) for about 2 seconds.
2. Enable the ‘Control Mouse Cursor’-setting of the InteractionManager-component. This setting transfers the position and clicks of the hand cursor to the mouse cursor, this way enabling interaction with the GUI buttons, toggles and other components.
3. If you need drag-and-drop functionality for interaction with the GUI, enable the ‘Control Mouse Drag’-setting of the InteractionManager-component. This setting starts mouse dragging, as soon as it detects hand grip and continues the dragging until hand release is detected. If you enable this setting, you can also click on GUI buttons with a hand grip, instead of the usual hand click (i.e. staying in place, over the button, for about 2 seconds).
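If you want to react to grips and releases directly in your own script, instead of (or in addition to) the mouse-cursor control, you can also poll the InteractionManager each frame. The sketch below assumes the manager exposes IsInteractionInited(), GetLastRightHandEvent() and GetRightHandScreenPos(); please verify these method names against the InteractionManager source in your copy of the asset:

```csharp
using UnityEngine;

public class HandEventPoller : MonoBehaviour
{
    private InteractionManager.HandEventType lastEvent = InteractionManager.HandEventType.None;

    void Update()
    {
        InteractionManager im = InteractionManager.Instance;
        if (im == null || !im.IsInteractionInited())
            return;

        // last detected event of the right hand (None, Grip or Release)
        InteractionManager.HandEventType rightEvent = im.GetLastRightHandEvent();

        if (rightEvent != lastEvent)
        {
            // normalized screen position of the right-hand cursor
            Vector3 screenPos = im.GetRightHandScreenPos();
            Debug.Log("Right hand " + rightEvent + " at " + screenPos);
            lastEvent = rightEvent;
        }
    }
}
```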

How to get the depth- or color-camera textures:

First off, make sure that ‘Compute User Map’-setting of the KinectManager-component is enabled, if you need the depth texture, or ‘Compute Color Map’-setting of the KinectManager-component is enabled, if you need the color camera texture. Then write something like this in the Update()-method of your script:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    Texture2D depthTexture = manager.GetUsersLblTex();
    Texture2D colorTexture = manager.GetUsersClrTex();
    // do something with the textures
}

How to get the position of a body joint:

This is demonstrated in the KinectScripts/Samples/GetJointPositionDemo-script. You can add it as a component to a game object in your scene to see it in action. Just select the needed joint and optionally enable saving to a csv-file. Do not forget to add the KinectManager as component to a game object in your scene. It is usually a component of the MainCamera in the example scenes. Here is the main part of the demo-script that retrieves the position of the selected joint:

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    if(manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        if(manager.IsJointTracked(userId, (int)joint))
        {
            Vector3 jointPos = manager.GetJointPosition(userId, (int)joint);
            // do something with the joint position
        }
    }
}

How to make a game object rotate as the user:

This is similar to the previous example and is demonstrated in KinectScripts/Samples/FollowUserRotation-script. To see it in action, you can create a cube in your scene and add the script as a component to it. Do not forget to add the KinectManager as component to a game object in your scene. It is usually a component of the MainCamera in the example scenes.
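The core of the FollowUserRotation-script boils down to a few lines. The sketch below uses the GetUserOrientation()-function of the KinectManager, assuming its 2nd parameter indicates whether the rotation should be mirrored:

```csharp
KinectManager manager = KinectManager.Instance;

if (manager && manager.IsInitialized() && manager.IsUserDetected())
{
    long userId = manager.GetPrimaryUserID();

    // overall body orientation of the user (not mirrored)
    Quaternion userRotation = manager.GetUserOrientation(userId, false);
    transform.rotation = userRotation;
}
```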

How to make a game object follow user’s head position and rotation:

You need the KinectManager and FacetrackingManager added as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene. Then, to get the position of the head and orientation of the neck, you need code like this in your script:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    if(manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        if(manager.IsJointTracked(userId, (int)KinectInterop.JointType.Head))
        {
            Vector3 headPosition = manager.GetJointPosition(userId, (int)KinectInterop.JointType.Head);
            Quaternion neckRotation = manager.GetJointOrientation(userId, (int)KinectInterop.JointType.Neck);
            // do something with the head position and neck orientation
        }
    }
}

How to get the face-points’ coordinates:

You need a reference to the respective FaceFrameResult-object. This is demonstrated in the KinectScripts/Samples/GetFacePointsDemo-script. You can add it as a component to a game object in your scene, to see it in action. To get the coordinates of a face point in your script, invoke its public GetFacePoint()-function. Do not forget to add the KinectManager and FacetrackingManager as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene.

How to mix Kinect-captured movement with Mecanim animation

1. Use the AvatarControllerClassic instead of the AvatarController-component. Assign only those joints that should be animated by the sensor.
2. Set the SmoothFactor-setting of AvatarControllerClassic to 0, to apply the detected bone orientations instantly.
3. Create an avatar-body-mask and apply it to the Mecanim animation layer. In this mask, disable Mecanim animations of the Kinect-animated joints mentioned above. Do not disable the root-joint!
4. Enable the ‘Late Update Avatars’-setting of KinectManager (component of MainCamera in the example scenes).
5. Run the scene to check the setup. When a player gets recognized by the sensor, part of the joints will be animated by the AvatarControllerClassic-component, and the rest – by the Animator-component.

How to add your models to the FittingRoom-demo

1. For each of your fbx-models, import the model and select it in the Assets-view in Unity editor.
2. Select the Rig-tab in Inspector. Set the AnimationType to ‘Humanoid’ and the AvatarDefinition to ‘Create from this model’.
3. Press the Apply-button. Then press the Configure-button to check if all required joints are correctly assigned. Clothing models usually don’t use all joints, which can make the avatar definition invalid. In this case you can manually assign the missing joints (shown in red).
4. Keep in mind: The joint positions in the model must match the structure of the Kinect-joints. You can see them, for instance in the KinectOverlayDemo2. Otherwise the model may not overlay the user’s body properly.
5. Create a sub-folder for your model category (Shirts, Pants, Skirts, etc.) in the FittingRoomDemo/Resources-folder.
6. In the model-category folder, create sub-folders with subsequent numbers (0000, 0001, 0002, etc.) – one for each model imported in step 1.
7. Move your models into these numerical folders, one model per folder, along with the needed materials and textures. Rename the model’s fbx-file to ‘model.fbx’.
8. You can put a preview image for each model in jpeg-format (100 x 143px, 24bpp) in the respective model folder. Then rename it to ‘preview.jpg.bytes’. If you don’t put a preview image, the fitting-room demo will display ‘No preview’ in the model-selection menu.
9. Open the FittingRoomDemo1-scene.
10. Add a ModelSelector-component for your model category to the KinectController game object. Set its ‘Model category’-setting to be the same as the name of the sub-folder created in step 5 above. Set the ‘Number of models’-setting to reflect the number of sub-folders created in step 6 above.
11. The other settings of your ModelSelector-component must be similar to the existing ModelSelector in the demo. I.e. ‘Model relative to camera’ must be set to ‘BackgroundCamera’, ‘Foreground camera’ must be set to ‘MainCamera’, ‘Continuous scaling’ – enabled. The scale-factor settings may be set initially to 1 and the ‘Vertical offset’-setting to 0. Later you can adjust them slightly to provide the best model-to-body overlay.
12. Enable the ‘Keep selected model’-setting of the ModelSelector-component, if you want the selected model to continue overlaying the user’s body after the model category changes. This is useful if there are several categories (i.e. ModelSelectors), for instance for shirts, pants, skirts, etc. In this case the selected shirt model will still overlay the user’s body, when the category changes and the user starts selecting pants, for instance.
13. The CategorySelector-component provides gesture control for changing models and categories, and takes care of switching model categories (e.g. for shirts, pants, ties, etc.) for the same user. There is already a CategorySelector for the 1st user (player-index 0) in the scene, so you don’t need to add more.
14. If you plan a multi-user fitting room, add one CategorySelector-component for each additional user. You may also need to add the respective ModelSelector-components for the model categories that will be used by these users, too.
15. Run the scene to ensure that your models can be selected in the list and they overlay the user’s body correctly. Experiment a bit if needed, to find the values of scale-factors and vertical-offset settings that provide the best model-to-body overlay.
16. If you want to turn off the cursor interaction in the scene, disable the InteractionManager-component of KinectController-game object. If you want to turn off the gestures (swipes for changing models & hand raises for changing categories), disable the respective settings of the CategorySelector-component. If you want to turn off or change the T-pose calibration, change the ‘Player calibration pose’-setting of KinectManager-component.
17. Last, but not least: You can use the FittingRoomDemo2 scene, to utilize or experiment with a single overlay model. Adjust the scale-factor settings of AvatarScaler to fine tune the scale of the whole body, arm- or leg-bones of the model, if needed. Enable the ‘Continuous Scaling’ setting, if you want the model to rescale on each Update.

How to set up the sensor height and angle

There are two very important settings of the KinectManager-component that influence the calculation of users’ and joints’ space coordinates, hence almost all user-related visualizations in the demo scenes. Here is how to set them correctly:

1. Set the ‘Sensor height’-setting to how high above the ground the sensor is, in meters. The default value is 1, i.e. 1.0 meter above the ground, which may not match your setup.
2. Set the ‘Sensor angle’-setting to the tilt angle of the sensor, in degrees – positive if the sensor is tilted up, negative if it is tilted down. The default value is 0, i.e. the sensor is not tilted at all.
3. Because it is not so easy to estimate the sensor angle manually, you can use the ‘Auto height angle’-setting to find out this value. Select ‘Show info only’-option and run the demo-scene. Then stand in front of the sensor. The information on screen will show you the rough height and angle-settings, as estimated by the sensor itself. Repeat this 2-3 times and write down the values you see.
4. Finally, set the ‘Sensor height’ and ‘Sensor angle’ to the estimated values you find best. Set the ‘Auto height angle’-setting back to ‘Dont use’.
5. If you find the height and angle values estimated by the sensor good enough, or if your sensor setup is not fixed, you can set the ‘Auto height angle’-setting to ‘Auto update’. It will update the ‘Sensor height’ and ‘Sensor angle’-settings continuously, when there are users in the field of view of the sensor.

Are there any events, when a user is detected or lost

There are no special event handlers for user-detected/user-lost events, but there are two other options you can use:

1. In the Update()-method of your script, invoke the GetUsersCount()-function of KinectManager and compare the returned value to a previously saved value, like this:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    int usersNow = manager.GetUsersCount();

    if(usersNow > usersSaved)
    {
        // new user detected
    }
    if(usersNow < usersSaved)
    {
        // user lost
    }

    usersSaved = usersNow;
}

2. Create a class that implements KinectGestures.GestureListenerInterface and add it as component to a game object in the scene. It has methods UserDetected() and UserLost(), which you can use as user-event handlers. The other methods could be left empty or return the default value (true). See the SimpleGestureListener or GestureListener-classes, if you need an example.
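Here is a minimal sketch of such a gesture-listener class. The method signatures below follow the SimpleGestureListener-script of the K2-asset; if KinectGestures.GestureListenerInterface differs in your version, adjust them accordingly:

```csharp
using UnityEngine;

public class UserEventListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public void UserDetected(long userId, int userIndex)
    {
        // invoked when a new user gets detected
        Debug.Log("User detected: " + userId + ", index: " + userIndex);
    }

    public void UserLost(long userId, int userIndex)
    {
        // invoked when a tracked user gets lost
        Debug.Log("User lost: " + userId + ", index: " + userIndex);
    }

    // the gesture-related methods may be left empty or return the default value (true)
    public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos)
    {
    }

    public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        return true;
    }

    public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint)
    {
        return true;
    }
}
```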

How to process discrete gestures like swipes and poses like hand-raises

Most of the gestures, like SwipeLeft, SwipeRight, Jump, Squat, etc. are discrete. All poses, like RaiseLeftHand, RaiseRightHand, etc. are also considered as discrete gestures. This means these gestures may report progress or not, but all of them get completed or cancelled at the end. Processing these gestures in a gesture-listener script is relatively easy. You need to do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureCompleted() add code to process the discrete gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is detected - process it (for instance, set a flag or execute an action)
}

3. In the GestureCancelled()-function, add code to process the cancellation of the discrete gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is cancelled - process it (for instance, clear the flag)
}

If you need code samples, see the SimpleGestureListener.cs or CubeGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as a component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to process continuous gestures, like ZoomIn, ZoomOut and Wheel

Some of the gestures, like ZoomIn, ZoomOut and Wheel, are continuous. This means these gestures never get fully completed, but only report progress greater than 50%, as long as the gesture is detected. To process them in a gesture-listener script, do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureInProgress() add code to process the continuous gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    if(progress > 0.5f)
    {
        // gesture is detected - process it (for instance, set a flag, get zoom factor or angle)
    }
    else
    {
        // gesture is no more detected - process it (for instance, clear the flag)
    }
}

3. In the GestureCancelled()-function, add code to process the end of the continuous gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is cancelled - process it (for instance, clear the flag)
}

If you need code samples, see the SimpleGestureListener.cs or ModelGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as a component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to utilize visual (VGB) gestures in the K2-asset

The visual gestures, created by the Visual Gesture Builder (VGB) can be used in the K2-asset, too. To do it, follow these steps (and see the VisualGestures-game object and its components in the KinectGesturesDemo-scene):

1. Copy the gestures’ database (xxxxx.gbd) to the Resources-folder and rename it to ‘xxxxx.gbd.bytes’.
2. Add the VisualGestureManager-script as a component to a game object in the scene (see VisualGestures-game object).
3. Set the ‘Gesture Database’-setting of VisualGestureManager-component to the name of the gestures’ database, used in step 1 (‘xxxxx.gbd’).
4. Create a visual-gesture-listener to process the gestures, and add it as a component to a game object in the scene (see the SimpleVisualGestureListener-script).
5. In the GestureInProgress()-function of the gesture-listener add code to process the detected continuous gestures and in the GestureCompleted() add code to process the detected discrete gestures.

How to change the language or grammar for speech recognition

1. Make sure you have installed the needed language pack from here.
2. Set the ‘Language code’-setting of SpeechManager-component, as to the grammar language you need to use. The list of language codes can be found here (see ‘LCID Decimal’).
3. Make sure the ‘Grammar file name’-setting of SpeechManager-component corresponds to the name of the grxml.txt-file in Assets/Resources.
4. Open the grxml.txt-grammar file in Assets/Resources and set its ‘xml:lang’-attribute to the language that corresponds to the language code in step 2.
5. Make the other needed modifications in the grammar file and save it.
6. (Optional since v2.7) Delete the grxml-file with the same name in the root-folder of your Unity project (the parent folder of Assets-folder).
7. Run the scene to check, if speech recognition works correctly.

How to run the fitting-room or overlay demo in portrait mode

1. First off, add 9:16 (or 3:4) aspect-ratio to the Game view’s list of resolutions, if it is missing.
2. Select the 9:16 (or 3:4) aspect ratio of Game view, to set the main-camera output in portrait mode.
3. Open the fitting-room or overlay-demo scene and select the BackgroundImage-game object.
4. Enable its PortraitBackground-component (available since v2.7) and save the scene.
5. Run the scene to try it out in portrait mode.

How to build an exe from ‘Kinect-v2 with MS-SDK’ project

By default, Unity builds the exe (and the respective xxx_Data-folder) in the root folder of your Unity project. It is recommended that you use another, empty folder instead. The reason: building the exe in the folder of your Unity project may cause conflicts between the native libraries used by the editor and the ones used by the exe, if they have different architectures (for instance, the editor is 64-bit, but the exe is 32-bit).

Also, before building the exe, make sure you’ve copied the Assets/Resources-folder from the K2-asset to your Unity project. It contains the needed native libraries and custom shaders. Optionally you can remove the unneeded zip.bytes-files from the Resources-folder. This will save a lot of space in the build. For instance, if you target Kinect-v2 only, you can remove the Kinect-v1 and OpenNi2-related zipped libraries. The exe won’t need them anyway.

How to make the Kinect-v2 package work with Kinect-v1

If you have only the Kinect v2 SDK or the Kinect v1 SDK installed on your machine, the KinectManager should detect the installed SDK and sensor correctly. But in case you have both Kinect SDK 2.0 and SDK 1.8 installed simultaneously, the KinectManager will give preference to the Kinect v2 SDK and your Kinect v1 will not be detected. The reason is that SDK 2.0 can also be used in offline mode, i.e. without a sensor attached. In this case you can emulate the sensor by playing recorded files in Kinect Studio 2.0.

If you want to make the KinectManager utilize the appropriate interface, depending on the currently attached sensor, open KinectScripts/Interfaces/Kinect2Interface.cs and at its start change the value of ‘sensorAlwaysAvailable’ from ‘true’ to ‘false’. After this, close and reopen the Unity editor. Then, on each start, the KinectManager will try to detect which sensor is currently attached to your machine and use the respective sensor interface. This way you can switch the sensors (Kinect v2 or v1) according to your preference, but you will not be able to use the offline mode for Kinect v2. To utilize the Kinect v2 offline mode again, switch ‘sensorAlwaysAvailable’ back to true.
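For reference, the change amounts to a single line near the start of Kinect2Interface.cs (the exact declaration may differ slightly between versions of the asset):

```csharp
// in KinectScripts/Interfaces/Kinect2Interface.cs:
public static bool sensorAlwaysAvailable = false;  // was: true
```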

What do the options of ‘Compute user map’-setting mean

Here are one-line descriptions of the available options:

  • RawUserDepth means that only the raw depth image values, coming from the sensor will be available, via the GetRawDepthMap()-function for instance;
  • BodyTexture means that GetUsersLblTex()-function will return the white image of the tracked users;
  • UserTexture will cause GetUsersLblTex() to return the tracked users’ histogram image;
  • CutOutTexture, combined with enabled ‘Compute color map‘-setting, means that GetUsersLblTex() will return the cut-out image of the users.

All these options (except RawUserDepth) can be tested instantly, if you enable the ‘Display user map‘-setting of KinectManager-component, too.
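Even with ‘Compute user map’ set to RawUserDepth, the raw depth values remain accessible in your scripts. The sketch below assumes GetRawDepthMap() returns the latest depth frame as an array of ushort-values, one per depth-camera pixel, in millimeters:

```csharp
KinectManager manager = KinectManager.Instance;

if (manager && manager.IsInitialized())
{
    // raw depth values, one per depth-camera pixel
    ushort[] rawDepth = manager.GetRawDepthMap();

    if (rawDepth != null && rawDepth.Length > 0)
    {
        // e.g. the depth of the top-left pixel, in mm
        ushort depthTopLeft = rawDepth[0];
        // do something with the depth values
    }
}
```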

How to set up the user detection order

There is a ‘User detection order’-setting of the KinectManager-component. You can use it to determine how the user detection should be done, depending on your requirements. Here are short descriptions of the available options:

  • Appearance is selected by default. It means that the player indices are assigned in order of user appearance. The first detected user gets player index 0, the next one gets index 1, etc. If user 0 gets lost, the remaining users are not reordered. The next newly detected user will take its place;
  • Distance means that player indices are assigned depending on distance of the detected users to the sensor. The closest one will get player index 0, the next closest one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the distances to the remaining users;
  • Left to right means that player indices are assigned depending on the X-position of the detected users. The leftmost one will get player index 0, the next leftmost one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the X-positions of the remaining users;

The user-detection area can be further limited with the ‘Min user distance’, ‘Max user distance’ and ‘Max left right distance’-settings, in meters from the sensor. The maximum number of detected users can be limited by lowering the value of the ‘Max tracked users’-setting.

How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS

If you select the MainCamera in the KinectFittingRoom1-demo scene (in v2.10 or above), you will see a component called UserBodyBlender. It is responsible for mixing the clothing model (overlaying the user) with the real-world objects (including the user’s body parts), depending on the distance to the camera. For instance, if your arms or other real-world objects are in front of the model, you will see them overlaying the model, as expected.

You can enable the component to turn on the user’s body-blending functionality. The ‘Depth threshold’-setting may be used to adjust the minimum distance to the front of the model (in meters). It determines when a real-world object becomes visible. It is set by default to 0.1m, but you could experiment a bit to see if another value works better for your models. If the scene performance (in terms of FPS) is not sufficient and body-blending is not important, you can disable the UserBodyBlender-component to increase performance.

How to build Windows-Store (UWP-8.1) application

To do it, you need at least v2.10.1 of the K2-asset. To build for ‘Windows store’, first select ‘Windows store’ as platform in ‘Build settings’, and press the ‘Switch platform’-button. Then do as follows:

1. Unzip Assets/Plugins-Metro.zip. This will create Assets/Plugins-Metro-folder.
2. Delete the KinectScripts/SharpZipLib-folder.
3. Optionally, delete all zip.bytes-files in Assets/Resources. You won’t need these libraries in Windows/Store. All Kinect-v2 libraries reside in Plugins-Metro-folder.
4. Select ‘File / Build Settings’ from the menu. Add the scenes you want to build. Select ‘Windows Store’ as platform. Select ‘8.1’ as target SDK. Then click the Build-button. Select an empty folder for the Windows-store project and wait for the build to complete.
5. Go to the build-folder and open the generated solution (.sln-file) with Visual studio.
6. Change the default ARM processor target to ‘x86’. The Kinect sensor is not compatible with ARM processors.
7. Right click ‘References’ in the Project-windows and select ‘Add reference’. Select ‘Extensions’ and then WindowsPreview.Kinect and Microsoft.Kinect.Face libraries. Then press OK.
8. Open the solution’s manifest-file ‘Package.appxmanifest’, go to the ‘Capabilities’-tab and enable ‘Microphone’ and ‘Webcam’ in the left panel. Save the manifest. This is needed to enable the sensor, when the UWP app starts up. Thanks to Yanis Lukes (aka Pendrokar) for providing this info!
9. Build the project. Run it, to test it locally. Don’t forget to turn on Windows developer mode on your machine.

How to work with multiple users

Kinect-v2 can fully track up to 6 users simultaneously. That’s why many of the Kinect-related components, like AvatarController, InteractionManager, model & category-selectors, gesture & interaction listeners, etc. have a setting called ‘Player index’. If set to 0, the respective component will track the 1st detected user. If set to 1, the component will track the 2nd detected user. If set to 2 – the 3rd user, etc. The order of user detection may be specified with the ‘User detection order’-setting of the KinectManager (component of the KinectController game object).
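If you need to iterate over all currently detected users in your own script, you can combine the GetUsersCount() and GetUserIdByIndex()-functions of the KinectManager, like this (GetUserPosition() is assumed to return the user position in meters, in Kinect space):

```csharp
KinectManager manager = KinectManager.Instance;

if (manager && manager.IsInitialized())
{
    int userCount = manager.GetUsersCount();

    for (int i = 0; i < userCount; i++)
    {
        // userId of the user with the respective player index
        long userId = manager.GetUserIdByIndex(i);

        if (userId != 0)
        {
            Vector3 userPos = manager.GetUserPosition(userId);
            // do something with the user position
        }
    }
}
```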

How to use the FacetrackingManager

The FacetrackingManager-component may be used for several purposes. First, adding it as component of KinectController will provide more precise neck and head tracking, when there are avatars in the scene (humanoid models utilizing the AvatarController-component). If HD face tracking is needed, you can enable the ‘Get face model data’-setting of FacetrackingManager-component. Keep in mind that using HD face tracking will lower performance and may cause memory leaks, which can cause Unity crash after multiple scene restarts. Please use this feature carefully.

In case ‘Get face model data’ is enabled, don’t forget to assign a mesh object (e.g. a Quad) to the ‘Face model mesh’-setting. Pay attention to the ‘Textured model mesh’-setting, too. The available options are: ‘None’ – the mesh will not be textured; ‘Color map’ – the mesh will get its texture from the color camera image, i.e. it will reproduce the user’s face; ‘Face rectangle’ – the face mesh will be textured with its material’s Albedo texture, whereas the UV coordinates will match the detected face rectangle.

Finally, you can use the FacetrackingManager public API to get a lot of face-tracking data, like the user’s head position and rotation, animation units, shape units, face model vertices, etc.
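As a rough sketch of polling that API each frame – the method names below are assumptions based on the K2-asset naming conventions, not verified signatures, so check the FacetrackingManager source before relying on them:

```csharp
using UnityEngine;

// Hedged sketch: reading head pose data from the FacetrackingManager.
// IsTrackingFace(), GetHeadPosition() and GetHeadRotation() are assumed names.
public class FaceDataReader : MonoBehaviour
{
    void Update()
    {
        FacetrackingManager ftm = FacetrackingManager.Instance;
        if (ftm != null && ftm.IsTrackingFace())
        {
            Vector3 headPos = ftm.GetHeadPosition(false);     // head position, in meters
            Quaternion headRot = ftm.GetHeadRotation(false);  // head rotation
            Debug.Log("Head at " + headPos + ", rotation " + headRot.eulerAngles);
        }
    }
}
```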

How to add background image to the FittingRoom-demo (updated for v2.14 and later)

To replace the color-camera background in the FittingRoom-scene with a background image of your choice, please do as follows:

1. Enable the BackgroundRemovalManager-component of the KinectController-game object in the scene.
2. Make sure the ‘Compute user map’-setting of KinectManager (component of the KinectController, too) is set to ‘Body texture’, and the ‘Compute color map’-setting is enabled.
3. Set the background image as texture of the GUITexture-component of the BackgroundImage1-game object in the scene.

How to move the FPS-avatars of positionally tracked users in VR environment

There are two options for moving first-person avatars in VR-environment (the 1st avatar-demo scene in K2VR-asset):

1. If you use the Kinect’s positional tracking, turn off the Oculus/Vive positional tracking, because their coordinate systems are different from Kinect’s.
2. If you prefer to use the Oculus/Vive positional tracking:
– enable the ‘External root motion’-setting of the AvatarController-component of the avatar’s game object. This will disable the avatar’s motion in Kinect spatial coordinates.
– enable the HeadMover-component of avatar’s game object, and assign the MainCamera as ‘Target transform’, to follow the Oculus/Vive position.

Now try to run the scene. If there are issues with the MainCamera used as positional target, do as follows:
– add an empty game object to the scene. It will be used to follow the Oculus/Vive positions.
– assign the newly created game object to the ‘Target transform’-setting of the HeadMover-component.
– add a script to the newly created game object, and in that script’s Update()-function programmatically set the object’s transform position to the current Oculus/Vive position.
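The script mentioned in the last step could look roughly like this, assuming Unity 2017.2 or later where the namespace is UnityEngine.XR (in Unity 5.x it was UnityEngine.VR):

```csharp
using UnityEngine;
using UnityEngine.XR;

// Attach to the empty target game object. Each frame it copies the
// HMD (Oculus/Vive) position reported by Unity's XR input tracking.
public class FollowHmdPosition : MonoBehaviour
{
    void Update()
    {
        transform.position = InputTracking.GetLocalPosition(XRNode.Head);
    }
}
```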

How to create your own gestures

For gesture recognition there are two options – visual gestures (created with the Visual Gesture Builder, part of Kinect SDK 2.0) and programmatic gestures that are programmatically implemented in KinectGestures.cs. The latter are based mainly on the positions of the different joints, and how they stand to each other in different moments of time.

Here is a video on creating and checking for visual gestures. Please also check GesturesDemo/VisualGesturesDemo-scene, to see how to use visual gestures in Unity. One issue with the visual gestures is that they usually work in 32-bit builds only.

The programmatic gestures should be coded in C#, in KinectGestures.cs (or a class that extends it). To get started with coding programmatic gestures, first read the ‘How to use gestures…’-pdf document in the _Readme-folder of the K2-asset. It may seem difficult at first, but it’s only a matter of time and experience to become an expert in coding gestures. You have direct access to the jointsPos-array, containing all joint positions, and the jointsTracked-array, containing the respective joint-tracking states. Keep in mind that all joint positions are in world coordinates, in meters. Some helper functions are also at your disposal, like SetGestureJoint(), SetGestureCancelled(), CheckPoseComplete(), etc. Maybe I’ll write a separate tutorial about gesture coding in the near future.
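To give an idea of the kind of checks involved, here is a simplified, standalone sketch of a ‘right hand raised’ pose check over the jointsPos- and jointsTracked-arrays described above. The joint indices are hypothetical parameters; this is not the actual KinectGestures.cs code:

```csharp
using UnityEngine;

public static class GestureSketch
{
    // Returns true when both joints are tracked and the right hand is
    // at least 10 cm above the head. Positions are in world coordinates,
    // in meters, as noted above.
    public static bool IsRightHandRaised(Vector3[] jointsPos, bool[] jointsTracked,
                                         int rightHandIndex, int headIndex)
    {
        return jointsTracked[rightHandIndex] && jointsTracked[headIndex] &&
               (jointsPos[rightHandIndex].y > jointsPos[headIndex].y + 0.1f);
    }
}
```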

The demo scenes related to utilizing programmatic gestures are again in the GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about continuous gestures.

More tips regarding listening for gestures in Unity scenes can be found above. See the tips for discrete, continuous and visual gestures (which could be discrete and continuous, as well).

How to enable or disable the tracking of inferred joints

First, keep in mind that:
1. There is ‘Ignore inferred joints’-setting of the KinectManager. KinectManager is usually a component of the KinectController-game object in demo scenes.
2. There is a public API method of KinectManager, called IsJointTracked(). This method is utilized by various scripts & components in the demo scenes.

Here is how it works:
The Kinect SDK tracks the positions of all body joints, together with their respective tracking states. These states can be Tracked, NotTracked or Inferred. When the ‘Ignore inferred joints’-setting is enabled, the IsJointTracked()-method returns true when the tracking state is Tracked or Inferred, and false when the state is NotTracked. I.e. both tracked and inferred joints are considered valid. When the setting is disabled, the IsJointTracked()-method returns true when the tracking state is Tracked, and false when the state is NotTracked or Inferred. I.e. only the really tracked joints are considered valid.
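The described behavior can be sketched as follows. This is a simplified illustration of the logic above, not the actual KinectManager source:

```csharp
public enum TrackingState { NotTracked, Inferred, Tracked }

public static class JointTrackingSketch
{
    public static bool IsJointTracked(bool ignoreInferredJoints, TrackingState state)
    {
        // Setting enabled: both tracked and inferred joints count as valid.
        // Setting disabled: only the really tracked joints count as valid.
        return ignoreInferredJoints
            ? (state == TrackingState.Tracked || state == TrackingState.Inferred)
            : (state == TrackingState.Tracked);
    }
}
```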

How to build exe with the Kinect-v2 plugins provided by Microsoft

In case you’re targeting Kinect-v2 sensor only, and would like to skip packing all native libraries that come with the K2-asset in the build, as well as unpacking them into the working directory of the executable afterwards, do as follows:

1. Download and unzip the Kinect-v2 Unity Plugins from here.
2. Open your Unity project. Select ‘Assets / Import Package / Custom Package’ from the menu and import only the Plugins-folder from ‘Kinect.2.0.1410.19000.unitypackage’. You can find it in the unzipped package from p.1 above. Please don’t import anything from the ‘Standard Assets’-folder of unitypackage. All needed standard assets are already present in the K2-asset.
3. If you are using the FacetrackingManager in your scenes, import the Plugins-folder from ‘Kinect.Face.2.0.1410.19000.unitypackage’ as well. If you are using visual gestures (i.e. VisualGestureManager in your scenes), import the Plugins-folder from ‘Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage’, too. Again, please don’t import anything from the ‘Standard Assets’-folder of unitypackages. All needed standard assets are already present in the K2-asset.
4. Delete all zipped libraries in the Assets/Resources-folder. You can see them as .zip-files in the Assets-window, or as .zip.bytes-files in the Windows explorer. Delete the Plugins-Metro (zip-file) in the Assets-folder, too. All these zipped libraries are no longer needed at run-time.
5. Delete all dlls in the root-folder of your Unity project. The root-folder is the parent-folder of the Assets-folder of your project, and is not visible in the Editor. Delete the NuiDatabase- and vgbtechs-folders in the root-folder, too. These dlls and folders are no longer needed, because they are part of the project’s Plugins-folder now.
6. Try to run the Kinect-v2 related scenes in the project, to make sure they still work as expected.
7. If everything is OK, build the executable again. This should work for both x86 and x86_64-architectures, as well as for Windows-Store, SDK 8.1.

How to build Windows-Store (UWP-10) application

To do it, you need at least v2.12.2 of the K2-asset. Then follow these steps:

1. (optional, as of v2.14.1) Delete the KinectScripts/SharpZipLib-folder. It is not needed for UWP. If you leave it, it may cause syntax errors later.
2. Open ‘File / Build Settings’ in Unity editor, switch to ‘Windows store’ platform and select ‘Universal 10’ as SDK. Make sure ‘.Net’ is selected as scripting backend. Optionally enable the ‘Unity C# Project’ and ‘Development build’-settings, if you’d like to edit the Unity scripts in Visual studio later.
3. Press the ‘Build’-button, select output folder and wait for Unity to finish exporting the UWP-Visual studio solution.
4. Close or minimize the Unity editor, then open the exported UWP solution in Visual studio.
5. Select x86 or x64 as target platform in Visual studio.
6. Open ‘Package.appxmanifest’ of the main project, and on the ‘Capabilities’-tab enable ‘Microphone’ & ‘Webcam’. These may be enabled in the Windows-Store Player settings in Unity, too.
7. If you have enabled the ‘Unity C# Project’-setting in p.2 above, right click on ‘Assembly-CSharp’-project in the Solution explorer, select ‘Properties’ from the context menu, and then select ‘Windows 10 Anniversary Edition (10.0; Build 14393)’ as ‘Target platform’. Otherwise you will get compilation errors.
8. Build and run the solution, on the local or remote machine. It should work now.

Please mind that the FacetrackingManager- and SpeechRecognitionManager-components, and hence the scenes that use them, will not work with the current version of the K2-UWP interface.

How to run the projector-demo scene (v2.13 and later)

To run the KinectProjectorDemo-scene, you need to calibrate the projector to the Kinect sensor first. To do it, please follow these steps:

1. Go to the RoomAliveToolkit-GitHub page and follow the instructions of ‘ProCamCalibration README’ there.
2. To do the calibration, you need first to build the ProCamCalibration-project with Microsoft Visual Studio 2015 or later. For your convenience, here is a ready-made build of the needed executables, made with VS-2015.
3. After the ProCamCalibration finishes, copy the generated calibration xml-file to KinectDemos/ProjectorDemo/Resources.
4. Open the KinectProjectorDemo-scene in the Unity editor, select the MainCamera-game object in Hierarchy, and drag the calibration xml-file generated by the ProCamCalibrationTool to the ‘Calibration Xml’-setting of its ProjectorCamera-component. Please also check whether ‘Proj name in config’ is the same as the projector name set in the calibration xml-file.
5. Run the scene and walk in front of the Kinect sensor, to check if the skeleton projection gets overlaid correctly over your body by the projector.

How to render background and the background-removal image on the scene background

First off, if you want to replace the color-camera background in the FittingRoom-demo scene with the background-removal image, please see and follow these steps.

For all other demo-scenes: You can replace the color-camera image on scene background with the background-removal image, by following these (rather complex) steps:

1. Create an empty game object in the scene, name it BackgroundImage1, and add ‘GUI Texture’-component to it (this will change after the release of Unity 2017.2, because it deprecates GUI-Textures). Set its Transform position to (0.5, 0.5, 0) to center it on the screen. This object will be used to render the scene background, so you can select a suitable picture for the Texture-setting of its GUITexture-component. If you leave its Texture-setting to None, a skybox or solid color will be rendered as scene background.

2. In a similar way, create a BackgroundImage2-game object. This object will be used to render the detected users, so leave the Texture-setting of its GUITexture-component to None (it will be set at runtime by a script), and set the Y-scale of the object to -1. This is needed to flip the rendered texture vertically. The reason: Unity textures are rendered bottom to top, while the Kinect images are top to bottom.

3. Add KinectScripts/BackgroundRemovalManager-script as component to the KinectController-game object in the scene (if it is not there yet). This is needed to provide the background removal functionality to the scene.

4. Add KinectDemos/BackgroundRemovalDemo/Scripts/ForegroundToImage-script as component to the BackgroundImage2-game object. This component will set the foreground texture, created at runtime by the BackgroundRemovalManager-component, as Texture of the GUI-Texture component (see p2 above).
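For reference, the ForegroundToImage-component does essentially the following. The method names below are assumptions based on the K2-asset naming conventions, so check the actual script before relying on them:

```csharp
using UnityEngine;

// Hedged sketch of a ForegroundToImage-like component: assigns the run-time
// foreground texture of the BackgroundRemovalManager to the GUITexture.
// IsBackgroundRemovalInitialized() and GetForegroundTex() are assumed names.
public class ForegroundToImageSketch : MonoBehaviour
{
    void Update()
    {
        BackgroundRemovalManager brManager = BackgroundRemovalManager.Instance;
        if (brManager != null && brManager.IsBackgroundRemovalInitialized())
        {
            GetComponent<GUITexture>().texture = brManager.GetForegroundTex();
        }
    }
}
```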

Now the tricky part: Two more cameras are needed to display the user image over the scene background – one to render the background picture, a 2nd one to render the user image on top of it, and finally the main camera to render the 3D objects on top of the background cameras. Cameras in Unity have a setting called ‘Culling Mask’, where you can set the layers rendered by each camera. There are also two more settings, ‘Depth’ and ‘Clear flags’, that may be used to change the cameras’ rendering order.

5. In our case, two extra layers will be needed for the correct rendering of background cameras. Select ‘Add layer’ from the Layer-dropdown in the top-right corner of the Inspector and add 2 layers – ‘BackgroundLayer1’ and ‘BackgroundLayer2’, as shown below. Unfortunately, when Unity exports the K2-package, it doesn’t export the extra layers too. That’s why the extra layers are missing in the demo-scenes.

6. After you have added the extra layers, select the BackgroundImage1-object in Hierarchy and set its layer to ‘BackgroundLayer1’. Then select the BackgroundImage2 and set its layer to ‘BackgroundLayer2’.

7. Create a camera-object in the scene and name it BackgroundCamera1. Set its CullingMask to ‘BackgroundLayer1’ only. Then set its ‘Depth’-setting to (-2) and its ‘Clear flags’-setting to ‘Skybox’ or ‘Solid color’. This means this camera will render first, will clear the output and then render the texture of BackgroundImage1. Don’t forget to disable its AudioListener-component, too. Otherwise, expect endless warnings in the console, regarding multiple audio listeners in the scene.

8. Create a 2nd camera-object and name it BackgroundCamera2. Set its CullingMask to ‘BackgroundLayer2’ only, its ‘Depth’ to (-1) and its ‘Clear flags’ to ‘Depth only’. This means this camera will render 2nd (because -1 > -2), will not clear the previous camera rendering, but instead render the BackgroundImage2 texture on top of it. Again, don’t forget to disable its AudioListener-component.

9. Finally, select the ‘Main Camera’ in the scene. Set its ‘Depth’ to 0 and ‘Clear flags’ to ‘Depth only’. In its ‘Culling mask’ disable ‘BackgroundLayer1’ and ‘BackgroundLayer2’, because they are already rendered by the background cameras. This way the main camera will render all other layers in the scene, on top of the background cameras (depth: 0 > -1 > -2).
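Steps 7-9 above could also be done from code instead of the Inspector. This is a hypothetical sketch, assuming the two extra layers from step 5 already exist, and it omits the AudioListener cleanup mentioned above:

```csharp
using UnityEngine;

// Sketch: creating and configuring the two background cameras from code.
public class BackgroundCameraSetup : MonoBehaviour
{
    void Start()
    {
        // renders first, clears the output, draws BackgroundImage1
        Camera bgCam1 = new GameObject("BackgroundCamera1").AddComponent<Camera>();
        bgCam1.cullingMask = 1 << LayerMask.NameToLayer("BackgroundLayer1");
        bgCam1.depth = -2f;
        bgCam1.clearFlags = CameraClearFlags.Skybox;  // or SolidColor

        // renders 2nd, keeps the previous rendering, draws BackgroundImage2 on top
        Camera bgCam2 = new GameObject("BackgroundCamera2").AddComponent<Camera>();
        bgCam2.cullingMask = 1 << LayerMask.NameToLayer("BackgroundLayer2");
        bgCam2.depth = -1f;
        bgCam2.clearFlags = CameraClearFlags.Depth;

        // main camera renders last, skipping the two background layers
        Camera mainCam = Camera.main;
        mainCam.depth = 0f;
        mainCam.clearFlags = CameraClearFlags.Depth;
        mainCam.cullingMask &= ~((1 << LayerMask.NameToLayer("BackgroundLayer1")) |
                                 (1 << LayerMask.NameToLayer("BackgroundLayer2")));
    }
}
```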

If you need a practical example of the above setup, please look at the objects, layers and cameras of the KinectDemos/BackgroundRemovalDemo/KinectBackgroundRemoval1-demo scene.

How to run the demo scenes on non-Windows platforms

Starting with v2.14 of the K2-asset you can run and build many of the demo-scenes on non-Windows platforms. In this case you can utilize the KinectDataServer and KinectDataClient components, to transfer the Kinect body and interaction data over the network. The same approach is used by the K2VR-asset. Here is what to do:

1. Add KinectScripts/KinectDataClient.cs as component to KinectController-game object in the client scene. It will replace the direct connection to the sensor with connection to the KinectDataServer-app over the network.
2. On the machine, where the Kinect-sensor is connected, run KinectDemos/KinectDataServer/KinectDataServer-scene or download the ready-built KinectDataServer-app for the same version of Unity editor, as the one running the client scene. The ready-built KinectDataServer-app can be found on this page.
3. Make sure the KinectDataServer and the client scene run in the same subnet. This is needed, if you’d like the client to discover automatically the running instance of KinectDataServer. Otherwise you would need to set manually the ‘Server host’ and ‘Server port’-settings of the KinectDataClient-component.
4. Run the client scene to make sure it connects to the server. If it doesn’t, check the console for error messages.
5. If the connection between the client and server is OK, and the client scene works as expected, build it for the target platform and test it there too.
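If automatic discovery is not possible, the manual setup from step 3 might look like this from code. The ‘serverHost’ and ‘serverPort’ field names are assumptions derived from the setting labels above, and the address and port values are purely hypothetical placeholders:

```csharp
using UnityEngine;

// Hedged sketch: pointing the KinectDataClient at a server outside the
// local subnet. Field names and values are assumptions, not verified API.
public class DataClientSetup : MonoBehaviour
{
    void Awake()
    {
        KinectDataClient client = GetComponent<KinectDataClient>();
        client.serverHost = "192.168.1.100";  // hypothetical server address
        client.serverPort = 8888;             // hypothetical server port
    }
}
```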

How to workaround the user tracking issue, when the user is turned back

Starting with v2.14 of the K2-asset you can (at least roughly) work around the user tracking issue, when the user is turned back. Here is what to do:

1. Add FacetrackingManager-component to your scene, if there isn’t one there already. The face-tracking is needed for front & back user detection.
2. Enable the ‘Allow turn arounds’-setting of KinectManager. The KinectManager is component of KinectController-game object in all demo scenes.
3. Run the scene to test it. Keep in mind this feature is only a workaround (not a solution) for an issue in the Kinect SDK. The issue is that, by design, Kinect tracks correctly only users who face the sensor. Side tracking is not smooth either. And finally, this workaround is experimental and may not work in all cases.

How to get the full scene depth image as texture

If you’d like to get the full scene depth image, instead of user-only depth image, please follow these steps:

1. Open Resources/DepthShader.shader and uncomment the commented-out else-part of the ‘if’-statement near the end of the shader. Save the shader and go back to the Unity editor.
2. Make sure the ‘Compute user map’-setting of the KinectManager is set to ‘User texture’. KinectManager is component of the KinectController-game object in all demo scenes.
3. Optionally enable the ‘Display user map’-setting of KinectManager, if you want to see the depth texture on screen.
4. You can also get the depth texture by calling ‘KinectManager.Instance.GetUsersLblTex()’ in your scripts, and then use it the way you want.
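As a minimal sketch of step 4, assuming a UI RawImage in the scene – GetUsersLblTex() is quoted from the text above, while IsInitialized() is an assumed helper, so verify it against the KinectManager source:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: displaying the depth texture on a UI RawImage each frame.
public class DepthTextureDisplay : MonoBehaviour
{
    public RawImage targetImage;  // assign a RawImage in the Inspector

    void Update()
    {
        KinectManager kinectManager = KinectManager.Instance;
        if (kinectManager != null && kinectManager.IsInitialized())
        {
            targetImage.texture = kinectManager.GetUsersLblTex();
        }
    }
}
```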

Some useful hints regarding AvatarController and AvatarScaler

The AvatarController-component moves the joints of the humanoid model it is attached to, according to the user’s movements in front of the Kinect sensor. The AvatarScaler-component (used mainly in the fitting-room scenes) scales the model to match the user in terms of height, arm length, etc. Here are some useful hints regarding these components:

1. If you need the avatar to move around its initial position, make sure the ‘Pos relative to camera’-setting of its AvatarController is set to ‘None’.
2. If ‘Pos relative to camera’ references a camera instead, the avatar’s position with respect to that camera will be the same as the user’s position with respect to the Kinect sensor.
3. If ‘Pos relative to camera’ references a camera and the ‘Pos rel overlay color’-setting is enabled too, the 3D position of the avatar is adjusted to overlay the user on the color camera feed.
4. In this last case, if the model has AvatarScaler component too, you should set the ‘Foreground camera’-setting of AvatarScaler to the same camera. Then scaling calculations will be based on the adjusted (overlayed) joint positions, instead of on the joint positions in space.
5. The ‘Continuous scaling’-setting of AvatarScaler determines whether the model scaling should take place only once when the user is detected (when the setting is disabled), or continuously – on each update (when the setting is enabled).

If you need the avatar to obey physics and gravity, disable the ‘Vertical movement’-setting of the AvatarController-component. Disable the ‘Grounded feet’-setting too, if it is enabled. Then enable the ‘Freeze rotation’-setting of its Rigidbody-component for all axes (X, Y & Z). Make sure the ‘Is Kinematic’-setting is disabled as well, to make the physics control the avatar’s rigid body.

If you want to stop the sensor control of the humanoid model in the scene, you can remove the AvatarController-component of the model. If you want to resume the sensor control of the model, add the AvatarController-component to the humanoid model again. After you remove or add this component, don’t forget to call ‘KinectManager.Instance.refreshAvatarControllers();’, to update the list of avatars KinectManager keeps track of.
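A hedged sketch of the remove-and-resume sequence described above. Only refreshAvatarControllers() is quoted from the text; the rest is standard Unity API, and note that Destroy() takes effect at the end of the frame, so in practice the refresh may need to be delayed by a frame:

```csharp
using UnityEngine;

// Sketch: stopping and resuming the sensor control of a humanoid model.
public class AvatarControlSwitch : MonoBehaviour
{
    public void StopSensorControl(GameObject model)
    {
        AvatarController ac = model.GetComponent<AvatarController>();
        if (ac != null)
        {
            Destroy(ac);
            KinectManager.Instance.refreshAvatarControllers();
        }
    }

    public void ResumeSensorControl(GameObject model)
    {
        model.AddComponent<AvatarController>();
        KinectManager.Instance.refreshAvatarControllers();
    }
}
```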


547 thoughts on “Kinect v2 Tips, Tricks and Examples”

  1. Hello,
    Just to say that you got great package.

    I am trying to control 2D puppet like character with Kinect, how can I constrain joint movements, so that arms and legs are not getting in strange positions.

    Tomislav

    • Hi, there is a setting of KinectManager called ‘Ignore Z-Coordinates’. You can enable it to set 2D mode for the detected movements and joint orientations. KinectManager is component of the KinectController-game object in all demo scenes.

    • The BackgroundRemovalManager component has a setting called ‘Color camera resolution’. Make sure this setting is enabled in your scene, to get the maximum user-image coverage. The non-tracked strips left and right are unfortunately normal, even in this case, because the Kinect depth image is smaller than the color one, and obviously doesn’t cover these areas.

  2. Hi Rumen!

    I am trying to dynamically load rigged models into my scene. I followed your guide to add new models. (rigging – unity mecanim – avatar controller) It works perfectly if the object is in the scene during startup.

    But if I load the model from the Resources folder with
    GameObject model = Instantiate(Resources.Load("riggedGirl", typeof(GameObject))) as GameObject;
    and use AddComponent to attach the AvatarController, the model won’t move.

    does the kinectController check for avatar controllers during startup?
    do I need to tell the kinectController which avatar controller it should control?

    cheers

    • Hi Achim, see the LoadDressingModel()-method of the ModelSelector.cs-script. It is the component in the 1st fitting-room demo that instantiates the selected model. I suppose you have forgotten to add the instantiated avatar to the list of the KinectManager’s avatar controllers.

  3. Hi Rumen, can you explain how to enable Kinect to detect the user when turned around? I saw the KinectController already handles it, but it’s not working. I used it in the fitting-room demo, and the dress doesn’t rotate 360 degrees either. Thanks.

  4. Hi Rumen, can you explain how to detect the user when he turns around? I saw the KinectController already handles it, but in the fitting-room demo I enabled the flag to allow turn-arounds and also printed the calibration text to show FACE or BACK, and it always prints FACE. The dress doesn’t rotate 360 degrees either. Thanks.

    • Hi, this setting is only experimental. Unfortunately, it’s not working correctly (yet). Its purpose was to overcome the SDK “feature” of tracking users correctly only when they are facing the sensor. For the time being, you’d need to warn (or not allow) the users to turn more than ~80 degrees left or right. Sorry for this limitation..

  5. Hello, I have problems using more than one user in PhotoBooth. I have added 6 JointOverlayers, 6 InteractionManagers and 6 PhotoBoothControllers – one for each user – and configured them. But I can’t get every user to have his own model (Medusa, Batman, etc…); only one user can have a mask at the same time.
    How can I have 6 users with their own independent masks?
    Sorry for my English.

    • The PhotoBoothController references static mask-models in the scene, as configured in Inspector. In your case you should instead make the available models as prefabs, and then instantiate them at run-time to fill the respective mask-arrays (headMasks, leftHandMasks, chestMasks) of PhotoBoothController.cs. I’m also not sure why would you need six InteractionManagers. Usually, only one user would be allowed to control the photo-shooting.

  6. Hey Rumen !

    In my application I use the KinectRecorderPlayer-script. But while the script is playing, I don’t get the UserDetected or UserLost events from KinectGestures.GestureListenerInterface (which I implemented in my script) anymore. My plan was to play recorded avatar movements till a user is detected, then stop the KinectRecorderPlayer, and when the user is lost, start the recorded movements again. Is this possible without making big changes in KinectInterop and/or KinectManager?
    thx !

    • Hi, sorry, I cannot test your issue right now, because I’m at a conference this week. But I remember I had requests for similar use cases before. If you look in the KinectDemos/RecorderDemo/Scripts-folder, you will see there is another script component called PlayerDetectorController.cs in there, which you can use in combination with KinectRecorderPlayer, to do what you need. Here is some more info regarding this component: https://ratemt.com/k2docs/PlayerDetectorController.html

  7. Hi,

    I just bought the Kinect v2 Examples on the Unity Asset store, and I like them so far 🙂

    I was wondering how to combine the Avatars Demo with the fourth Face Tracking Demo. I’ve tried adding the Model Face Controller and the Facetracking Manager to the Avatar object and playing around with the settings but it doesn’t seem to work. Would you have an example/how-to for this?

    • Hi, first you need an avatar with rigged head, in means of eyebrows, eyelids, lips and jaw. See the FaceRigged-model in the 4th FT-demo for reference. In this regard, the avatar-model in AvatarsDemo has only its jaw rigged, as far as I see. So, you could animate only its jaw-bone with the ModelFaceController. The FacetrackingManager may be a component of the KinectController-game object, as in the demo, not of the avatar’s object. Then you need to assign the jaw-bone to the respective setting of MFC, and then adjust the rotation axis and limits, if needed. Hope this is enough information for a start.

      • I have a similar problem as Christian: we want to combine the Avatars demo with the 1st Face Tracking demo, so that the avatar gets the face of the user. Should I also take the approach of assigning the jaw-bone and then adjusting the rotation axis and limits? As the ModelFaceController is not part of the 1st Face Tracking demo, it should be easier? My main problem is to get the correct size and coordinates of the avatar’s face, and map the user’s face to this position.

      • In your case I would suggest to parent the FaceModelMesh-quad to the neck or head-joint of the avatar, and to disable the ‘Move model mesh’-setting of the FacetrackingManager-component. Then experiment a bit to adjust the quad to fit the avatar’s head, as good as possible.

  8. Hi,
    When I connect a Logitech webcam to the PC, and use WebCamTexture with it to show the image (because I must put the camera far away), the Kinect starts dropping frames seriously… When I stop the WebCamTexture, the problem still exists…
    How to solve this problem?
    Thanks.

    • I made a mistake, now when I stop the WebCamTexture, it becomes correct.
      How can I do about the first problem?

      • Kinect-v2 needs a dedicated USB-3 port, and has its own color camera as well. Why do you need a second color camera?
        Nevertheless, you could try to stop the unneeded streams and textures, for instance disable ‘Compute color map’-setting, and set ‘Compute user map’ to ‘Raw user depth’.

  9. Hi.
    I want to limit the z-axis movement of the avatar in KinectAvatarsDemo1.
    So I want to make the avatars move only in the x-axis.
    Is there a simple setting?
    Or what script should I change?

    • Oh, I find it! in your older answer.
      It was in the Kinect2AvatarPos() function inside the AvatarController.cs

      • You can also use the ‘Ignore Z-Coordinates’-setting of the KinectManager (usually component of KinectController-game object in the scene).

      • I’ve been using a modified version of Avatars Demo 1 for a VR project by attaching the main camera to one of the avatar’s character controller. It works great in all versions of Unity 5 (like being inside one body and having your actions mirrored by another), but in Unity 2017 it is like looking out of two cameras simultaneously.

      • I just checked the 2nd avatar demo scene in Unity 2017.1 editor. It shows first person camera view. The camera is parented to avatar’s neck, like in your case. The scene runs as expected, and the camera view is OK, as far as I see. So, maybe there is something else in your case (a component, setting, etc.) that causes the issue.

      • Two-camera problem resolved, but there is another disconcerting and persistent issue. There is a sense of stepping in to and out of the character, as well as a sense of “body lag”, where the camera (and the user’s virtual POV) lags behind the user as they move through physical space. This causes two major problems:

        1) The user steps into and out of the avatar body as they move through virtual and physical space. Additionally, tall or short people do not have a naturalistic experience.

        2) This also causes the user’s vision to move out of sync, resulting in serious vertigo. This sensation is worse when the camera is parented to the neck or head, as in Avatar Demo 2.

        This had not been a problem previously, but is now an issue in Unity 2017 and all Unity 5 versions. The issue occurs in my project, as well as the Avatar Demo 1 scenes in fresh projects. Avatar Demo 2 in a fresh project is even worse. I’m using an HTC vive.

        Thanks for your excellent package! Any advice would be greatly appreciated.

      • There is a setting of AvatarController called ‘External root motion’. You can enable it to stop the avatar body movement based on the Kinect body estimations, and instead move the avatar based on the headset position reported by Vive. This will prevent your issues, I think. There was a HeadMover-script component as well, as far as I remember, that could help you estimate the body position from the head position and spine orientation.

      • Thanks for your quick follow-up, Rumen. I’ve enabled External Root Motion, which helps slightly with the vertigo, but hasn’t changed the problem of stepping in and out of the body. Even though the mirrored avatar moves just fine, the first person avatar seems to be stuck in one place.

        There is no HeadMover-component of Cubeman (the avatar’s gameobject?), or of the avatar. I am unable to find it using the Add Component button, either. Is the HeadMover-component somewhere else?

        The avatar does move based on motion picked up from the Kinect. The Kinect and the Vive have been calibrated together at Room Scale. The problem is repeatable in Unity 5.6 and 2017. If it will help, I can send you screenshots.

      • The idea of ‘External root motion’-setting is that the avatar will be moved by some other script or component. That’s why it doesn’t move anymore and stays in place. But you are right – HeadMover was part of the K2VR-asset before, and I forgot to move it to the core K2-asset scripts. Please e-mail me some screenshots. It will be good to see what you mean on a picture. And I’ll send you back the HeadMover-component.

  10. I bought your sdk2 for kinect2, I have two doubts regarding fiiting room, first is it possible to have 2 or 3 persons in the fitting room? how to do that?
    second, is it possible for the dress to also be displayed on the back, or at least to display a full model? I need to display a SUPERMAN costume, and I don’t know how to display the Superman cape on the back

    thanks

    • To your questions:
      1. Add CategorySelector & ModelSelector-components for users with player-index 0, 1, 2…
      2. Not sure what you mean, but if you need the user to turn around, there is an ‘Allow turn arounds’-setting of the KinectManager-component in the scene. If you enable it, please add a FacetrackingManager-component to the KinectController too, because the feature utilizes face-tracking. Keep in mind this setting is only a workaround for a bug in the Kinect tracking, not a full-featured back-tracking.

  11. Hey Rumen!

    I work on a Unity project with different scenes and, opposite to your MultiSceneDemo, I need a separate KinectManager in every scene.
    Jumping from the 1st scene to the 2nd scene is no problem:
    GameObject.Find("MainScene").SetActive(false);
    SceneManager.LoadScene(1, LoadSceneMode.Additive);

    But after jumping back to the 1st scene, the Kinect turns off its lights…:
    mainScene.SetActive(true);
    SceneManager.UnloadScene(1);

    The line "//#define USE_SINGLE_KM_IN_MULTIPLE_SCENES" is commented out.

    Maybe you or someone here has a hint for this problem.
    Thx!

    • If it were possible to change these public settings of the KinectManager at runtime:

      Sensor Height
      Sensor Angle
      Auto Height Angle
      Compute User Map
      Compute Color Map
      Display User Map
      Use Bone Orientation Constraints

      Then it should work with only one KinectManager for all scenes, too.
      But somehow I remember that not all of these settings can be changed at runtime. Is that right?

      • Yes, you are right: there is an issue when the Kinect is turned off and then back on in subsequent scenes. Another customer told me recently there should be a “middle scene” between the two scenes that use the KinectManager, for this approach to work. Anyway, I would recommend using a single KinectManager across the scenes, as shown in KinectDemos/MultiSceneDemo.

        Regarding the Height & Angle settings: after you change the sensor height and/or angle, call KinectManager.Instance.UpdateKinectToWorldMatrix() to apply the new values to the transformation matrix. ‘Auto height angle’ sets these values automatically from the data returned by the sensor, when there is a user around.
        Regarding ‘Compute user map’ and ‘Compute color map’: better enable both.
        Regarding ‘Display user map’: I think you can enable or disable it at runtime, but you could also use your own raw-image panel. Just set its texture to KinectManager.Instance.GetUsersLblTex().
        Regarding ‘Use bone orientation constraints’: enable it at the beginning. Then you can disable or enable it at runtime, I think.
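
        The runtime updates above could be sketched like this. A minimal sketch, assuming the KinectManager exposes public sensorHeight and sensorAngle fields along with the UpdateKinectToWorldMatrix(), IsInitialized() and GetUsersLblTex() methods mentioned in this thread:

        ```csharp
        using UnityEngine;
        using UnityEngine.UI;

        // Sketch: change sensor pose at runtime & show the user map on our own panel.
        public class KinectRuntimeSettings : MonoBehaviour
        {
            public RawImage userMapPanel;  // your own raw-image panel in the UI

            public void SetSensorPose(float heightMeters, float angleDegrees)
            {
                KinectManager manager = KinectManager.Instance;
                if (manager == null || !manager.IsInitialized())
                    return;

                manager.sensorHeight = heightMeters;
                manager.sensorAngle = angleDegrees;

                // apply the new values to the kinect-to-world transformation matrix
                manager.UpdateKinectToWorldMatrix();
            }

            void Update()
            {
                // alternative to the 'Display user map'-setting
                KinectManager manager = KinectManager.Instance;
                if (manager != null && manager.IsInitialized() && userMapPanel != null)
                {
                    userMapPanel.texture = manager.GetUsersLblTex();
                }
            }
        }
        ```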

  12. Hi Rumen!
    Thanks again for your continuous work, and for the last update of your asset!
    I want to add a “Camera Filter” (from this asset: https://www.assetstore.unity3d.com/en/#!/content/18433) to the “Color Map” of the user in the “BackgroundRemoval1” scene, to do something like this:

    But applying the “filter effect” to the real-time color-map camera, instead of applying the filter to the background image (like in the “BackgroundRemoval 3” scene).
    How can I do something like that?

    Thanks in advance!

    Best regards,

    Chris.

    • Hi, open KinectScripts/KinectInterop.cs, find the UpdateBackgroundRemoval()-function, then look for this line: ‘sensorData.color2DepthMaterial.SetTexture(“_ColorTex”, sensorData.colorImageTexture);’. You can insert the color-image processing before this line and then replace ‘sensorData.colorImageTexture’ in it with the processed texture. Hope I understood your issue correctly.
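
      The suggested insertion point could look roughly like this. A sketch only, abbreviated; ProcessColorImage() is a hypothetical stand-in for whatever filter code you apply:

      ```csharp
      // Inside KinectInterop.UpdateBackgroundRemoval(), sketched & abbreviated.
      // ProcessColorImage() is a hypothetical placeholder for your own filter.
      Texture processedTex = ProcessColorImage(sensorData.colorImageTexture);

      // the original line, with the processed texture swapped in:
      sensorData.color2DepthMaterial.SetTexture("_ColorTex", processedTex);
      ```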

      • Hi again Rumen,

        I’ve done what you said, but I have some questions.

        I’ve seen that ‘sensorData.colorImageTexture’ is a Texture2D object.
        The Camera Filters I want to use are applied in the “OnRenderImage()” event of a camera.

        https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnRenderImage.html

        I need to have the “user-only color texture” in one layer, rendered by one camera.
        Is this the “BackgroundImage2”?

        And in another layer I need to have the “background without the user color texture” from the Kinect.
        Is this the “BackgroundImage1”?

        So I can apply different Camera Filters to each layer, and then render all the layers with the main camera.

        Do I have to work with the ‘sensorData.colorImageTexture’ object, that you said?

        Best regards,

        Cris

      • Hi Rumen,

        Don’t worry, I’ve solved it =) You were right. I had to apply the Camera Filter to “BackgroundRemovalManager.GetForegroundTex()”. I used your “ForegroundToImage.cs” script, and then applied the texture to a “RawImage” of the UI. After getting the foreground texture, I applied the Camera Filter and then everything was OK!

        Thanks for your useful tip about the “KinectInterop.cs” script! That was my starting point to get to the solution!

        Best regards,

        Cris.

      • Hi Rumen,

        Thanks again for this useful tip.
        I will render it using multiple cameras and the layers you suggested.

        Best regards,

        Cris.

  13. Hi Rumen,
    I want to ask about avatar control with the Kinect. My 3D humanoid mimics my gestures, but I also want the humanoid to jump from a building to the ground. Please help!

    • As far as I understand, you want the avatar to be controlled by the Kinect, then animated and moved somewhere else, and then controlled by the Kinect again. If this is your case, you can remove the AvatarController-component (in your script) before the animation/movement to the new position. When it lands there, add the AvatarController-component again and the avatar will be controlled by the Kinect again. After you remove or add this component, don’t forget to call KinectManager.Instance.refreshAvatarControllers(), to update the KinectManager’s list of avatars.
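
      The remove/re-add sequence above could be sketched like this. A minimal sketch only: the jump animation is left as a placeholder, and a freshly added AvatarController would normally need its bone setup initialized as well, depending on your asset version:

      ```csharp
      using System.Collections;
      using UnityEngine;

      // Sketch: detach the avatar from Kinect control, play a scripted jump,
      // then hand control back to the Kinect.
      public class AvatarJumpController : MonoBehaviour
      {
          public IEnumerator JumpToGround(Vector3 landingPosition)
          {
              // 1. stop Kinect control of this avatar
              AvatarController avatarCtrl = GetComponent<AvatarController>();
              if (avatarCtrl != null)
              {
                  Destroy(avatarCtrl);
                  KinectManager.Instance.refreshAvatarControllers();
              }

              // 2. play your jump animation / move the avatar (placeholder)
              transform.position = landingPosition;
              yield return new WaitForSeconds(1f);  // wait for the animation to finish

              // 3. re-attach the AvatarController, so the Kinect controls it again
              gameObject.AddComponent<AvatarController>();
              KinectManager.Instance.refreshAvatarControllers();
          }
      }
      ```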

  14. Hello there!
    I’m trying to make a portrait version of an app, utilizing the ColorColliderDemo. It seems this example is not properly made for portrait mode. Despite enabling the PortraitBackground-script on the BackgroundImage object and setting the Game view to 1080 x 1920, the hand colliders appear at the wrong positions on the X axis.

    I managed to make them follow the hands properly but the way I did it is not very elegant, plus I’m not even sure why this works.

    There is this line in the HandColorOverlayer.cs:

    //float xNorm = (float)posColor.x / manager.GetColorImageWidth();

    manager.GetColorImageWidth() still returned 1920 instead of, say, 608. But hard-coding “608” in there still didn’t work properly, so I also added a small offset, and eventually I got what I wanted:

    float xNorm = -1.06f + (float)posColor.x / 608;

    Weird, right? I’m sure it’s perfectly reasonable, for reasons I don’t quite understand.

    • Hi, yes – you are right. Please e-mail me and I’ll send you the fixed HandColorOverlayer-script that works in both portrait and landscape mode, if you still need it.

  15. Hi Rumen! Thank you for still answering our questions.
    I want to ask how, or whether, it is possible to have two or more UserMeshVisualizers at the same time. I tried to duplicate the object and change the ‘Player index’-parameter, but only one of the users is shown.

    • Hi Aldo, please open UserMeshVisualizer.cs, then search for and comment out this line: ‘sensorData.spaceCoordsBufferReady = false;’. This is a workaround. I’ll try to find a better solution for the next release.

  16. Dear Rumen, what’s the trick for putting 3D objects behind or in front of the user in “BackgroundRemovalDemo / KinectBackgroundRemoval2”?
    Thanks.

  17. Hi again Rumen!

    I have to do another effect with the Kinect: a “wall of hands”, imitating the back side of a “Pin Art / Pinscreen”.

    Here are some pictures of a pinscreen:
    -> https://images-eu.ssl-images-amazon.com/images/I/51gHhlPOchL.jpg
    -> https://cdn-all.coolstuff.com/autogen/preset/aspectThumb/1200×900/0f1396b458d22d53c00e8088c7fbf563.jpg

    I need to have in Unity the back side of a “Pinscreen”, but instead of “pins” I will have animated arms with hands.
    The idea is to use the “KinectUserVisualizerDemo”, so the user can stay over this “wall of hands (inverse pinscreen)” and many hands can “hug” the user.

    “Wall of hands” like this:
    -> https://goo.gl/photos/zUoDAcEDH269fuUVA

    I want to use a mesh collider on the user mesh and on the hands of the wall. The idea is: when the user collides with a hand of the wall, the user’s mesh pushes the arm backwards according to the user’s depth. When the user doesn’t collide with a hand anymore, it moves forward to its original position. It’s just like the back side of a pinscreen.

    Can you please give me some help to do that?

    As always, any guide will be very appreciated.

    Best regards!

    Cris.

    • How can I help you? There is the ‘Update mesh collider’-setting of the UserMeshVisualizer, but keep in mind mesh colliders are slow, as far as I remember. Maybe the collider should not be updated at every mesh update, but more rarely, for instance every 0.1 seconds, if the user’s motions are not too fast.

      • Hi Rumen, ok I understand…
        Do you know a way to do the same, maybe using the alpha texture, showing the 3D hands only where the user is not?

      • I don’t know. Maybe a shader would not be a good idea in your use case. Think and experiment a bit more, before you start implementing it.

      • Hi Rumen!

        Thanks for your guidance. I think the “KinectBackgroundRemoval5” demo scene and the “DepthColliderDemo2D” are very useful for me, because I want to use the 2D user mesh (with the texture).

        I see you’re using simple 2D colliders in the “DepthColliderDemo2D”, but I need to adjust them to the real body width and height at runtime..

        Is it possible to have the ‘Update mesh collider’-setting applied to the 2D user mesh, as in the “KinectBackgroundRemoval5” scene or the “DepthColliderDemo2D” scene, instead of applying it to the 3D mesh of the user (as in the “KinectUserVisualizer” scene)?

        Best regards,

        Chris.

      • Hi again Rumen!

        I want to make a raycast to different points of the “UserImage” from the “KinectBackgroundRemoval5” demo scene, and see if the alpha channel at the hit point is 1 or 0…
        How can I get the alpha value at the raycast hit?

        I wanted to apply this solution:

        http://answers.unity3d.com/questions/189998/2d-collisions-on-a-texture2d-with-transparent-area.html

        But I’m stuck, because I’m getting a RenderTexture when I use “hitRender.material.mainTexture”, and I need a Texture2D to apply the “GetPixel()” function..

        Can you please give me some help?

        Thanks,

        Chris.

      • You can use the KinectInterop.RenderTex2Tex2D()-method to convert a render texture to a Texture2D. Then it is a matter of texture-coordinate calculation and getting the pixel value.
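
        The conversion and alpha lookup could be sketched like this. A sketch only, assuming RenderTex2Tex2D() takes a (RenderTexture, ref Texture2D)-pair; check the exact signature in your version of KinectInterop.cs:

        ```csharp
        using UnityEngine;

        // Sketch: test whether a UV point of the foreground texture is opaque,
        // i.e. part of the user's silhouette.
        public class AlphaHitTester : MonoBehaviour
        {
            private Texture2D copyTex = null;

            public bool IsUserAtUV(RenderTexture foregroundTex, Vector2 uv)
            {
                // convert the render texture to a readable Texture2D
                KinectInterop.RenderTex2Tex2D(foregroundTex, ref copyTex);

                // map the normalized UV coordinates to pixel coordinates
                int x = Mathf.RoundToInt(uv.x * (copyTex.width - 1));
                int y = Mathf.RoundToInt(uv.y * (copyTex.height - 1));

                // opaque pixel => part of the user's silhouette
                return copyTex.GetPixel(x, y).a > 0.5f;
            }
        }
        ```

        The UV coordinates can come from RaycastHit.textureCoord of a physics raycast against the image quad.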

      • Ok, good idea Rumen, thanks!

        One last question.. Sorry to insist with my questions…
        I’m using the “DepthColliderDemo2D”, where the user’s 2D colliders trigger against about fifty 2D colliders that fill the wall, so I can hide the objects that are colliding with the user’s colliders.

        How can I automatically adapt the width of every user collider, for example the “SpineMidCollider”, to approximately cover the real user width? That way I could roughly fit the 2D colliders to the user texture.

        I will actually appreciate any help about that.

        As always, thanks for your help!

        Best Regards,

        Chris.

      • Hi Rumen! Please, I’m stuck on a silly thing… I’m developing a multi-display app (a video wall), and I’m using the Collider2D demo scene. I’m using the class “DepthSpriteViewerMod” to generate the colliders and the sprite, but I see that when the app runs at different screen resolutions, the generated colliders lose their correct positions… How can I generate the 2D user colliders according to the user texture (sprite), independently of the display resolution used?

        Please, please I just have to solve this to finish…

        Thanks!

        Best regards,

        Chris.

      • Hi Chris. Sorry for the delay, but I’m busy with another project at the moment and can’t answer too many questions. Regarding the 2D collider, I think this is a pure 2D-image question. There should be a way to get the rough outline of the user in 2D (the opaque points in the image), or at least some kind of convex polygon of the user shape. For instance, find the outermost user joints on the image (min or max x, min or max y), then connect them and add them to a PolygonCollider2D. Keep in mind to use the overlay 2D-positions of the joints, as in the demo scenes. I don’t know what this DepthSpriteViewerMod-class does, but it’s probably something similar.
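
        The outline idea could be sketched like this. A rough sketch, assuming the KinectManager methods GetJointPosColorOverlay(userId, joint, camera, rect), IsUserDetected(), GetPrimaryUserID() and IsJointTracked() as used in the overlay demo scenes; verify the exact signatures in your asset version, and pick outline joints that suit your use case:

        ```csharp
        using System.Collections.Generic;
        using UnityEngine;

        // Sketch: build a rough PolygonCollider2D from the user's overlay joints.
        public class UserOutlineCollider : MonoBehaviour
        {
            public Camera overlayCamera;
            public Rect backgroundRect;

            // outer joints, connected in order to approximate the user's shape
            private readonly int[] outlineJoints =
            {
                (int)KinectInterop.JointType.Head,
                (int)KinectInterop.JointType.HandRight,
                (int)KinectInterop.JointType.FootRight,
                (int)KinectInterop.JointType.FootLeft,
                (int)KinectInterop.JointType.HandLeft
            };

            void Update()
            {
                KinectManager manager = KinectManager.Instance;
                if (manager == null || !manager.IsUserDetected())
                    return;

                long userId = manager.GetPrimaryUserID();
                List<Vector2> points = new List<Vector2>();

                foreach (int joint in outlineJoints)
                {
                    if (!manager.IsJointTracked(userId, joint))
                        continue;

                    // overlay 2D-position of the joint, as in the demo scenes
                    Vector3 overlayPos = manager.GetJointPosColorOverlay(
                        userId, joint, overlayCamera, backgroundRect);
                    points.Add(transform.InverseTransformPoint(overlayPos));
                }

                if (points.Count >= 3)
                {
                    PolygonCollider2D poly = GetComponent<PolygonCollider2D>();
                    poly.SetPath(0, points.ToArray());
                }
            }
        }
        ```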

  18. Hi, Rumen! I am wondering how to display a prefab model more realistically when a user is detected, to get an AR (i.e. augmented-reality) experience, in a scene where the model may walk in front of the detected user. To be more exact, how to make the model look like it is walking on the real ground.

    • Hi, I’m not sure if you need a model or background-removal image of the user, to make it more realistic. If it is a model, look at the front-facing model in the 1st avatar-demo. If it is the foreground image of the user, look at the 5th background-removal demo.

  19. Hi Rumen,

    How do I get both the depth and the color textures at the same time (in one game scene)? I need both of them, and then use OpenCV to process the images.

    • Hi, set the ‘Compute user map’-setting of KinectManager in the scene to ‘User texture’ or ‘Body texture’, and enable its ‘Compute color map’-setting as well. Then, you can get the depth texture in your script by calling ‘KinectManager.Instance.GetUsersLblTex()’, and the color camera texture – with ‘KinectManager.Instance.GetUsersClrTex()’. Hope I understood your question correctly.
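
      The two calls above could be combined like this. A minimal sketch, assuming the KinectManager settings are configured as described in the reply:

      ```csharp
      using UnityEngine;

      // Sketch: grab both textures each frame, e.g. to hand over to OpenCV.
      // Requires 'Compute user map' set to a texture mode and
      // 'Compute color map' enabled on the KinectManager in the scene.
      public class DepthAndColorGrabber : MonoBehaviour
      {
          void Update()
          {
              KinectManager manager = KinectManager.Instance;
              if (manager == null || !manager.IsInitialized())
                  return;

              Texture2D depthTex = manager.GetUsersLblTex();  // depth/user texture
              Texture2D colorTex = manager.GetUsersClrTex();  // color-camera texture

              // pass depthTex & colorTex on for processing (e.g. to an OpenCV plugin)
          }
      }
      ```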

  20. Hey bro, I just purchased the Unity asset. Firstly, thank you for making it, since it helps a lot with the project I’m working on.

    But I have a question: let’s say I have a scene already made (with a mirror in the center). How do I use it with the fitting-room demo scene, so that the real-life camera feed and the fitting-room items appear in the created scene’s mirror?

    I really hope I explained it well, and sorry for my bad English, mate.

  21. I’m sorry if I posted twice, I’m still pretty new to WordPress. But thanks for the package, first of all, since it helped a lot with my project.

    My problem is that I made a scene with a mirror, where I want the fitting-room interaction to happen as well. How do I make the camera feed show up only in the mirror, and not in the full game-scene view?

    Thanks in advance, and I’m sorry if I make any grammar mistakes!

      • If, let’s say, I make it already in a 3D environment, is it possible to do the UI as an overlay layer? I’ll take a picture to further explain what I mean.

        https://imgur.com/MXf3plN

        I want the Kinect changing-room interaction to be just on the mirror, so you can still see the space around the user. Hopefully that’s understandable on my end. And also, thanks for the early reply!

      • I forgot to add that the mirror is a 3D model. What do I need it to be, if a 3D model doesn’t work? I apologize, as I just started learning about the Kinect in Unity, so I have a lot of questions about how to implement it.

      • Well, I can’t answer all possible questions. You would need to research a bit by yourself. I remember seeing a Unity demo with a mirror in a dressing room (called character-demo or something like this), but this was a very long time ago, maybe on Unity 3. I think they used a shader for the mirror reflection back then, but I’m not sure. Sorry, I can’t help you more than this.

      • I see, thanks for that. I’m sorry if I got a little carried away there and was a bit confusing. I’ll make a mock-up first, to better explain what I mean.

        My idea is something like this (just a mock-up) to show in pictures what I meant in the beginning. Again, thank you for taking the time to reply.

        https://imgur.com/a/gY57e

  22. Hi Rumen F
    I’m a developer from Thailand. I use the Kinect for Xbox One.
    I have a problem with FittingRoomDemo1.
    I followed your tips and tricks:
    I added a new model to my files and put a preview image for each model in JPEG format (100 x 143 px, 24bpp) in the respective model folder, renamed to ‘preview.jpg.bytes’, but my preview image displays as ‘No preview’ in the model-selection menu.

    How can I fix it?

    *** I see the type of the preview-image file in the FittingRoomDemo/Resources-folder (original demo files) is not a JPG file, it is a BYTES file.

    Thanks in advance!

    Best regards,

    KRICH

    • Hi, look at the ‘preview.jpg.bytes’-preview files in FittingRoomDemo/Resources/Clothing/000x-folders. Your preview jpg-files need to be placed in a similar way in the respective model-folders of the clothing category. These preview files must be still in JPEG format. Renaming them to ‘.jpg.bytes’-extension is needed by the Unity resource-loader, in order to consider them as binary ones.
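
      The way such preview files get loaded can be sketched as follows. A minimal sketch, with a hypothetical resource path ("Clothing/0001/preview.jpg") as an example; note that Unity’s Resources.Load drops the ‘.bytes’-extension of binary assets:

      ```csharp
      using UnityEngine;
      using UnityEngine.UI;

      // Sketch: load a 'preview.jpg.bytes' file from Resources & show it in the UI.
      public class PreviewLoader : MonoBehaviour
      {
          public RawImage previewImage;

          void Start()
          {
              // 'preview.jpg.bytes' on disk is addressed as 'preview.jpg' here
              TextAsset previewAsset = Resources.Load<TextAsset>("Clothing/0001/preview.jpg");
              if (previewAsset != null)
              {
                  Texture2D tex = new Texture2D(2, 2);
                  tex.LoadImage(previewAsset.bytes);  // decode the JPEG bytes
                  previewImage.texture = tex;
              }
              else
              {
                  Debug.Log("No preview found at the given resource path.");
              }
          }
      }
      ```

      If the asset doesn’t load at all, check that the file really ends in ‘.jpg.bytes’ and sits under a Resources-folder, since both are required for this loading path to work.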

      • Thanks for your guidance.
        I followed your guide, but my preview image still displays ‘No preview’ in the model-selection menu.
        I see something different.
        I recorded my desktop to show you.
        Please look at this:

        preview image file in original FittingRoomDemo1
        https://ibb.co/g4givm

        preview image file in My Project
        https://ibb.co/cMSOvm

        Can you please give me some help?

        I really hope I explained it well, and sorry for my bad English, mate.

        Thanks.

        Krich
