Kinect v2 Tips, Tricks and Examples

After answering so many different questions about how to use various parts and components of the “Kinect v2 with MS-SDK”-package, I think it would be easier if I share some general tips, tricks and examples. I’m going to add more tips and tricks to this article over time. Feel free to drop by, from time to time, to check out what’s new.

And here is a link to the Online documentation of the K2-asset.

Table of Contents:

What is the purpose of all managers in the KinectScripts-folder
How to use the Kinect v2-Package functionality in your own Unity project
How to use your own model with the AvatarController
How to make the avatar hands twist around the bone
How to utilize Kinect to interact with GUI buttons and components
How to get the depth- or color-camera textures
How to get the position of a body joint
How to make a game object rotate as the user
How to make a game object follow user’s head position and rotation
How to get the face-points’ coordinates
How to mix Kinect-captured movement with Mecanim animation
How to add your models to the FittingRoom-demo
How to set up the sensor height and angle
Are there any events, when a user is detected or lost
How to process discrete gestures like swipes and poses like hand-raises
How to process continuous gestures, like ZoomIn, ZoomOut and Wheel
How to utilize visual (VGB) gestures in the K2-asset
How to change the language or grammar for speech recognition
How to run the fitting-room or overlay demo in portrait mode
How to build an exe from ‘Kinect-v2 with MS-SDK’ project
How to make the Kinect-v2 package work with Kinect-v1
What do the options of ‘Compute user map’-setting mean
How to set up the user detection order
How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS
How to build Windows-Store (UWP-8.1) application
How to work with multiple users
How to use the FacetrackingManager
How to add background image to the FittingRoom-demo
How to move the FPS-avatars of positionally tracked users in VR environment
How to create your own gestures
How to enable or disable the tracking of inferred joints
How to build exe with the Kinect-v2 plugins provided by Microsoft
How to build Windows-Store (UWP-10) application
How to run the projector-demo scene
How to render background and the background-removal image on the scene background
How to run the demo scenes on non-Windows platforms
How to workaround the user tracking issue, when the user is turned back
How to get the full scene depth image as texture
Some useful hints regarding AvatarController and AvatarScaler
How to setup the K2-package to work with Orbbec Astra sensors (deprecated)
How to setup the K2-asset to work with Nuitrack body tracking SDK
How to control Keijiro’s Skinner-avatars with the Avatar-Controller component
How to track a ball hitting a wall (hints)
How to create your own programmatic gestures
What is the file-format used by the KinectRecorderPlayer-component (KinectRecorderDemo)
How to enable user gender and age detection in KinectFittingRoom1-demo scene

What is the purpose of all managers in the KinectScripts-folder:

The managers in the KinectScripts-folder are components. You can utilize them in your projects, depending on the features you need. The KinectManager is the most general component, needed to interact with the sensor and to get basic data from it, like the color and depth streams, and the bodies and joints’ positions in meters, in Kinect space. The purpose of the AvatarController is to transfer the detected joint positions and orientations to a rigged skeleton. The CubemanController is similar, but it works with transforms and lines to represent the joints and bones, in order to make locating tracking issues easier. The FacetrackingManager deals with the face points and head/neck orientation. It is used internally by the KinectManager (if available in the scene) to get a more precise position and orientation of the head and neck. The InteractionManager is used to control the hand cursor and to detect hand grips, releases and clicks. And finally, the SpeechManager is used for recognition of speech commands. Pay attention to the Samples-folder as well. It contains several simple examples (some of them cited below) you can learn from, use directly or copy parts of the code into your scripts.

How to use the Kinect v2-Package functionality in your own Unity project:

1. Copy folder ‘KinectScripts’ from the Assets/K2Examples-folder of the package to your project. This folder contains the package scripts, filters and interfaces.
2. Copy folder ‘Resources’ from the Assets/K2Examples-folder of the package to your project. This folder contains all needed libraries and resources. You can skip copying the libraries you don’t plan to use, in order to save space.
3. Copy folder ‘Standard Assets’ from the Assets/K2Examples-folder of the package to your project. It contains the wrapper classes for Kinect-v2 SDK.
4. Wait until Unity detects and compiles the newly copied resources, folders and scripts.
See this tip as well, if you’d like to build your project with the Kinect-v2 plugins provided by Microsoft.

How to use your own model with the AvatarController:

1. (Optional) Make sure your model is in T-pose. This is the zero-pose of Kinect joint orientations.
2. Select the model-asset in Assets-folder. Select the Rig-tab in Inspector window.
3. Set the AnimationType to ‘Humanoid’ and AvatarDefinition – to ‘Create from this model’.
4. Press the Apply-button. Then press the Configure-button to make sure the joints are correctly assigned. After that exit the configuration window.
5. Put the model into the scene.
6. Add the KinectScript/AvatarController-script as component to the model’s game object in the scene.
7. Make sure your model also has an Animator-component, that it is enabled, and that its Avatar-setting is set correctly.
8. Enable or disable (as needed) the MirroredMovement and VerticalMovement-settings of the AvatarController-component. Do mind that when mirrored movement is enabled, the model’s transform should have a Y-rotation of 180 degrees.
9. Run the scene to test the avatar model. If needed, tweak some settings of AvatarController and try again.
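
If you prefer to do the setup from code (for instance when the model is instantiated at runtime), here is a minimal sketch. The field names playerIndex, mirroredMovement and verticalMovement are assumed to correspond to the Inspector settings mentioned above; please verify the exact names against the AvatarController source.

using UnityEngine;

// Sketch: attaches and configures an AvatarController at runtime.
// The field names below (playerIndex, mirroredMovement, verticalMovement) are assumed to match
// the Inspector settings described above - check the AvatarController script for the exact names.
public class AvatarSetupExample : MonoBehaviour
{
    public GameObject avatarModel;  // the humanoid model placed in the scene

    void Start()
    {
        if (avatarModel == null)
            return;

        AvatarController ctrl = avatarModel.GetComponent<AvatarController>();
        if (ctrl == null)
        {
            ctrl = avatarModel.AddComponent<AvatarController>();
        }

        ctrl.playerIndex = 0;          // track the 1st detected user
        ctrl.mirroredMovement = true;  // mirrored movement => model Y-rotation should be 180 degrees
        ctrl.verticalMovement = true;  // allow up & down movement (jumps, squats, etc.)

        // let the KinectManager rebuild its list of avatars
        if (KinectManager.Instance != null)
        {
            KinectManager.Instance.refreshAvatarControllers();
        }
    }
}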

How to make the avatar hands twist around the bone:

To do it, you need to set ‘Allowed Hand Rotations’-setting of the KinectManager to ‘All’. KinectManager is a component of the MainCamera in the example scenes. This setting has three options: None – turns off all hand rotations, Default – turns on the hand rotations, except the twists around the bone, All – turns on all hand rotations.

How to utilize Kinect to interact with GUI buttons and components:

1. Add the InteractionManager to the main camera or to another persistent object in the scene. It is used to control the hand cursor and to detect hand grips, releases and clicks. A Grip means a closed hand (thumb over the other fingers), a Release – an opened hand, and a hand Click is generated when the user’s hand doesn’t move (stays still) for about 2 seconds.
2. Enable the ‘Control Mouse Cursor’-setting of the InteractionManager-component. This setting transfers the position and clicks of the hand cursor to the mouse cursor, this way enabling interaction with the GUI buttons, toggles and other components.
3. If you need drag-and-drop functionality for interaction with the GUI, enable the ‘Control Mouse Drag’-setting of the InteractionManager-component. This setting starts mouse dragging, as soon as it detects hand grip and continues the dragging until hand release is detected. If you enable this setting, you can also click on GUI buttons with a hand grip, instead of the usual hand click (i.e. staying in place, over the button, for about 2 seconds).

How to get the depth- or color-camera textures:

First off, make sure that ‘Compute User Map’-setting of the KinectManager-component is enabled, if you need the depth texture, or ‘Compute Color Map’-setting of the KinectManager-component is enabled, if you need the color camera texture. Then write something like this in the Update()-method of your script:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    Texture2D depthTexture = manager.GetUsersLblTex();
    Texture2D colorTexture = manager.GetUsersClrTex();
    // do something with the textures
}

How to get the position of a body joint:

This is demonstrated in the KinectScripts/Samples/GetJointPositionDemo-script. You can add it as a component to a game object in your scene to see it in action. Just select the needed joint and optionally enable saving to a csv-file. Do not forget to add the KinectManager as a component to a game object in your scene. It is usually a component of the MainCamera in the example scenes. Here is the main part of the demo-script that retrieves the position of the selected joint:

KinectInterop.JointType joint = KinectInterop.JointType.HandRight;
KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    if(manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        if(manager.IsJointTracked(userId, (int)joint))
        {
            Vector3 jointPos = manager.GetJointPosition(userId, (int)joint);
            // do something with the joint position
        }
    }
}

How to make a game object rotate as the user:

This is similar to the previous example and is demonstrated in KinectScripts/Samples/FollowUserRotation-script. To see it in action, you can create a cube in your scene and add the script as a component to it. Do not forget to add the KinectManager as component to a game object in your scene. It is usually a component of the MainCamera in the example scenes.
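
For reference, here is a minimal sketch of the same idea, assuming the KinectManager exposes a GetUserOrientation()-method (as used by the FollowUserRotation-sample); please check the sample script for the exact API.

using UnityEngine;

// Sketch: rotates the game object it is attached to, following the primary user's rotation.
// GetUserOrientation(userId, flipped) is assumed here - see the FollowUserRotation-sample for the exact API.
public class RotateWithUserExample : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;

        if (manager && manager.IsInitialized() && manager.IsUserDetected())
        {
            long userId = manager.GetPrimaryUserID();

            // apply the user's orientation (not flipped) to this object
            transform.rotation = manager.GetUserOrientation(userId, false);
        }
    }
}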

How to make a game object follow user’s head position and rotation:

You need the KinectManager and FacetrackingManager added as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene. Then, to get the position of the head and orientation of the neck, you need code like this in your script:

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    if(manager.IsUserDetected())
    {
        long userId = manager.GetPrimaryUserID();

        if(manager.IsJointTracked(userId, (int)KinectInterop.JointType.Head))
        {
            Vector3 headPosition = manager.GetJointPosition(userId, (int)KinectInterop.JointType.Head);
            Quaternion neckRotation = manager.GetJointOrientation(userId, (int)KinectInterop.JointType.Neck);
            // do something with the head position and neck orientation
        }
    }
}

How to get the face-points’ coordinates:

You need a reference to the respective FaceFrameResult-object. This is demonstrated in the KinectScripts/Samples/GetFacePointsDemo-script. You can add it as a component to a game object in your scene, to see it in action. To get a face point’s coordinates in your script, you need to invoke its public GetFacePoint()-function. Do not forget to add the KinectManager and FacetrackingManager as components to a game object in your scene. For example, they are components of the MainCamera in the KinectAvatarsDemo-scene.

How to mix Kinect-captured movement with Mecanim animation

1. Use the AvatarControllerClassic instead of the AvatarController-component. Assign only those joints that have to be animated by the sensor.
2. Set the SmoothFactor-setting of AvatarControllerClassic to 0, to apply the detected bone orientations instantly.
3. Create an avatar-body-mask and apply it to the Mecanim animation layer. In this mask, disable Mecanim animations of the Kinect-animated joints mentioned above. Do not disable the root-joint!
4. Enable the ‘Late Update Avatars’-setting of KinectManager (component of MainCamera in the example scenes).
5. Run the scene to check the setup. When a player gets recognized by the sensor, some of the joints will be animated by the AvatarControllerClassic-component, and the others – by the Animator-component.

How to add your models to the FittingRoom-demo

1. For each of your fbx-models, import the model and select it in the Assets-view in Unity editor.
2. Select the Rig-tab in Inspector. Set the AnimationType to ‘Humanoid’ and the AvatarDefinition to ‘Create from this model’.
3. Press the Apply-button. Then press the Configure-button to check if all required joints are correctly assigned. The clothing models usually don’t use all joints, which can make the avatar definition invalid. In this case you can manually assign the missing joints (shown in red).
4. Keep in mind: The joint positions in the model must match the structure of the Kinect-joints. You can see them, for instance in the KinectOverlayDemo2. Otherwise the model may not overlay the user’s body properly.
5. Create a sub-folder for your model category (Shirts, Pants, Skirts, etc.) in the FittingRoomDemo/Resources-folder.
6. Create sub-folders with consecutive numbers (0000, 0001, 0002, etc.) in the model-category folder – one for each model imported in step 1.
7. Move your models into these numerical folders, one model per folder, along with the needed materials and textures. Rename the model’s fbx-file to ‘model.fbx’.
8. You can put a preview image for each model in jpeg-format (100 x 143px, 24bpp) in the respective model folder. Then rename it to ‘preview.jpg.bytes’. If you don’t put a preview image, the fitting-room demo will display ‘No preview’ in the model-selection menu.
9. Open the FittingRoomDemo1-scene.
10. Add a ModelSelector-component for each model category to the KinectController game object. Set its ‘Model category’-setting to be the same as the name of sub-folder created in p.5 above. Set the ‘Number of models’-setting to reflect the number of sub-folders created in p.6 above.
11. The other settings of your ModelSelector-component must be similar to the existing ModelSelector in the demo. I.e. ‘Model relative to camera’ must be set to ‘BackgroundCamera’, ‘Foreground camera’ must be set to ‘MainCamera’, ‘Continuous scaling’ – enabled. The scale-factor settings may be set initially to 1 and the ‘Vertical offset’-setting to 0. Later you can adjust them slightly to provide the best model-to-body overlay.
12. Enable the ‘Keep selected model’-setting of the ModelSelector-component, if you want the selected model to continue overlaying the user’s body after the model category changes. This is useful if there are several categories (i.e. ModelSelectors), for instance for shirts, pants, skirts, etc. In this case the selected shirt model will still overlay the user’s body, when the category changes and the user starts selecting pants, for instance.
13. The CategorySelector-component provides gesture control for changing models and categories, and takes care of switching model categories (e.g for shirts, pants, ties, etc.) for the same user. There is already a CategorySelector for the 1st user (player-index 0) in the scene, so you don’t need to add more.
14. If you plan for multi-user fitting-room, add one CategorySelector-component for each other user. You may also need to add the respective ModelSelector-components for model categories that will be used by these users, too.
15. Run the scene to ensure that your models can be selected in the list and they overlay the user’s body correctly. Experiment a bit if needed, to find the values of scale-factors and vertical-offset settings that provide the best model-to-body overlay.
16. If you want to turn off the cursor interaction in the scene, disable the InteractionManager-component of KinectController-game object. If you want to turn off the gestures (swipes for changing models & hand raises for changing categories), disable the respective settings of the CategorySelector-component. If you want to turn off or change the T-pose calibration, change the ‘Player calibration pose’-setting of KinectManager-component.
17. You can use the FittingRoomDemo2 scene, to utilize or experiment with a single overlay model. Adjust the scale-factor settings of AvatarScaler to fine tune the scale of the whole body, arm- or leg-bones of the model, if needed. Enable the ‘Continuous Scaling’ setting, if you want the model to rescale on each Update.
18. If the clothing/overlay model uses the Standard shader, set its ‘Rendering mode’ to ‘Cutout’. See this comment below for more information.

How to set up the sensor height and angle

There are two very important settings of the KinectManager-component that influence the calculation of users’ and joints’ space coordinates, hence almost all user-related visualizations in the demo scenes. Here is how to set them correctly:

1. Set the ‘Sensor height’-setting to how high above the ground the sensor is, in meters. The default value is 1, i.e. 1.0 meter above the ground, which may not be your case.
2. Set the ‘Sensor angle’-setting to the tilt angle of the sensor, in degrees. Use positive degrees if the sensor is tilted up, negative degrees – if it is tilted down. The default value is 0, which means 0 degrees, i.e. the sensor is not tilted at all.
3. Because it is not so easy to estimate the sensor angle manually, you can use the ‘Auto height angle’-setting to find out this value. Select ‘Show info only’-option and run the demo-scene. Then stand in front of the sensor. The information on screen will show you the rough height and angle-settings, as estimated by the sensor itself. Repeat this 2-3 times and write down the values you see.
4. Finally, set the ‘Sensor height’ and ‘Sensor angle’ to the estimated values you find best. Set the ‘Auto height angle’-setting back to ‘Dont use’.
5. If you find the height and angle values estimated by the sensor good enough, or if your sensor setup is not fixed, you can set the ‘Auto height angle’-setting to ‘Auto update’. It will update the ‘Sensor height’ and ‘Sensor angle’-settings continuously, when there are users in the field of view of the sensor.

Are there any events, when a user is detected or lost

There are no special event handlers for user-detected/user-lost events, but there are two other options you can use:

1. In the Update()-method of your script, invoke the GetUsersCount()-function of KinectManager and compare the returned value to a previously saved value, like this:

KinectManager manager = KinectManager.Instance;
if(manager && manager.IsInitialized())
{
    int usersNow = manager.GetUsersCount();

    if(usersNow > usersSaved)
    {
        // new user detected
    }
    if(usersNow < usersSaved)
    {
        // user lost
    }

    usersSaved = usersNow;
}

2. Create a class that implements KinectGestures.GestureListenerInterface and add it as component to a game object in the scene. It has methods UserDetected() and UserLost(), which you can use as user-event handlers. The other methods could be left empty or return the default value (true). See the SimpleGestureListener or GestureListener-classes, if you need an example.
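
For reference, below is a minimal sketch of such a listener, modeled after the SimpleGestureListener-script. The method signatures are written as used in the demo scenes; if they differ in your version of the K2-asset, adjust them accordingly.

using UnityEngine;

// Sketch of a gesture listener, used here only for the user-detected and user-lost events.
// The method signatures follow KinectGestures.GestureListenerInterface, as used by the
// SimpleGestureListener-script - verify them against your version of the K2-asset.
public class UserEventsListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public void UserDetected(long userId, int userIndex)
    {
        Debug.Log("User detected: " + userId + ", index: " + userIndex);
        // this is the place to start gesture tracking, if needed
    }

    public void UserLost(long userId, int userIndex)
    {
        Debug.Log("User lost: " + userId + ", index: " + userIndex);
    }

    public void GestureInProgress(long userId, int userIndex, KinectGestures.Gestures gesture,
                                  float progress, KinectInterop.JointType joint, Vector3 screenPos)
    {
        // not used in this example
    }

    public bool GestureCompleted(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint, Vector3 screenPos)
    {
        return true;  // not used in this example
    }

    public bool GestureCancelled(long userId, int userIndex, KinectGestures.Gestures gesture,
                                 KinectInterop.JointType joint)
    {
        return true;  // not used in this example
    }
}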

How to process discrete gestures like swipes and poses like hand-raises

Most of the gestures, like SwipeLeft, SwipeRight, Jump, Squat, etc. are discrete. All poses, like RaiseLeftHand, RaiseRightHand, etc. are also considered as discrete gestures. This means these gestures may report progress or not, but all of them get completed or cancelled at the end. Processing these gestures in a gesture-listener script is relatively easy. You need to do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureCompleted() add code to process the discrete gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is detected - process it (for instance, set a flag or execute an action)
}

3. In the GestureCancelled()-function, add code to process the cancellation of the discrete gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is cancelled - process it (for instance, clear the flag)
}

If you need code samples, see the SimpleGestureListener.cs or CubeGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as a component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to process continuous gestures, like ZoomIn, ZoomOut and Wheel

Some of the gestures, like ZoomIn, ZoomOut and Wheel, are continuous. This means these gestures never get fully completed, but only report progress greater than 50%, as long as the gesture is detected. To process them in a gesture-listener script, do as follows:

1. In the UserDetected()-function of the script add the following line for each gesture you need to track:

manager.DetectGesture(userId, KinectGestures.Gestures.xxxxx);

2. In GestureInProgress() add code to process the continuous gesture, like this:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    if(progress > 0.5f)
    {
        // gesture is detected - process it (for instance, set a flag, get zoom factor or angle)
    }
    else
    {
        // gesture is no more detected - process it (for instance, clear the flag)
    }
}

3. In the GestureCancelled()-function, add code to process the end of the continuous gesture:

if(gesture == KinectGestures.Gestures.xxxxx)
{
    // gesture is cancelled - process it (for instance, clear the flag)
}

If you need code samples, see the SimpleGestureListener.cs or ModelGestureListener.cs-scripts.

4. From v2.8 on, KinectGestures.cs is no longer a static class, but a component that may be extended, for instance with the detection of new gestures or poses. You need to add it as a component to the KinectController-game object, if you need gesture or pose detection in the scene.

How to utilize visual (VGB) gestures in the K2-asset

The visual gestures, created by the Visual Gesture Builder (VGB) can be used in the K2-asset, too. To do it, follow these steps (and see the VisualGestures-game object and its components in the KinectGesturesDemo-scene):

1. Copy the gestures’ database (xxxxx.gbd) to the Resources-folder and rename it to ‘xxxxx.gbd.bytes’.
2. Add the VisualGestureManager-script as a component to a game object in the scene (see VisualGestures-game object).
3. Set the ‘Gesture Database’-setting of VisualGestureManager-component to the name of the gestures’ database, used in step 1 (‘xxxxx.gbd’).
4. Create a visual-gesture-listener to process the gestures, and add it as a component to a game object in the scene (see the SimpleVisualGestureListener-script).
5. In the GestureInProgress()-function of the gesture-listener add code to process the detected continuous gestures and in the GestureCompleted() add code to process the detected discrete gestures.

How to change the language or grammar for speech recognition

1. Make sure you have installed the needed language pack from here.
2. Set the ‘Language code’-setting of SpeechManager-component, as to the grammar language you need to use. The list of language codes can be found here (see ‘LCID Decimal’).
3. Make sure the ‘Grammar file name’-setting of SpeechManager-component corresponds to the name of the grxml.txt-file in Assets/Resources.
4. Open the grxml.txt-grammar file in Assets/Resources and set its ‘xml:lang’-attribute to the language that corresponds to the language code in step 2.
5. Make the other needed modifications in the grammar file and save it.
6. (Optional since v2.7) Delete the grxml-file with the same name in the root-folder of your Unity project (the parent folder of Assets-folder).
7. Run the scene to check, if speech recognition works correctly.

How to run the fitting-room or overlay demo in portrait mode

1. First off, add 9:16 (or 3:4) aspect-ratio to the Game view’s list of resolutions, if it is missing.
2. Select the 9:16 (or 3:4) aspect ratio of Game view, to set the main-camera output in portrait mode.
3. Open the fitting-room or overlay-demo scene and select each of the BackgroundImage(X)-game object(s). If it has a child object called RawImage, select this sub-object instead.
4. Enable the PortraitBackground-component of each of the selected BackgroundImage object(s). When finished, save the scene.
5. Run the scene and test it in portrait mode.

How to build an exe from ‘Kinect-v2 with MS-SDK’ project

By default Unity builds the exe (and the respective xxx_Data-folder) in the root folder of your Unity project. It is recommended to use another, empty folder instead. The reason is that building the exe in the folder of your Unity project may cause conflicts between the native libraries used by the editor and the ones used by the exe, if they have different architectures (for instance the editor is 64-bit, but the exe is 32-bit).

Also, before building the exe, make sure you’ve copied the Assets/Resources-folder from the K2-asset to your Unity project. It contains the needed native libraries and custom shaders. Optionally you can remove the unneeded zip.bytes-files from the Resources-folder. This will save a lot of space in the build. For instance, if you target Kinect-v2 only, you can remove the Kinect-v1 and OpenNi2-related zipped libraries. The exe won’t need them anyway.

How to make the Kinect-v2 package work with Kinect-v1

If you have only Kinect v2 SDK or Kinect v1 SDK installed on your machine, the KinectManager should detect the installed SDK and sensor correctly. But in case you have both Kinect SDK 2.0 and SDK 1.8 installed simultaneously, the KinectManager will put preference on Kinect v2 SDK and your Kinect v1 will not be detected. The reason for this is that you can use SDK 2.0 in offline mode as well, i.e. without sensor attached. In this case you can emulate the sensor by playing recorded files in Kinect Studio 2.0.

If you want to make the KinectManager utilize the appropriate interface, depending on the currently attached sensor, open KinectScripts/Interfaces/Kinect2Interface.cs and at its start change the value of ‘sensorAlwaysAvailable’ from ‘true’ to ‘false’. After this, close and reopen the Unity editor. Then, on each start, the KinectManager will try to detect which sensor is currently attached to your machine and use the respective sensor interface. This way you could switch the sensors (Kinect v2 or v1), as to your preference, but will not be able to use the offline mode for Kinect v2. To utilize the Kinect v2 offline mode again, you need to switch ‘sensorAlwaysAvailable’ back to true.

What do the options of ‘Compute user map’-setting mean

Here are one-line descriptions of the available options:

  • RawUserDepth means that only the raw depth image values, coming from the sensor will be available, via the GetRawDepthMap()-function for instance;
  • BodyTexture means that GetUsersLblTex()-function will return the white image of the tracked users;
  • UserTexture will cause GetUsersLblTex() to return the tracked users’ histogram image;
  • CutOutTexture, combined with enabled ‘Compute color map‘-setting, means that GetUsersLblTex() will return the cut-out image of the users.

All these options (except RawUserDepth) can be tested instantly, if you enable the ‘Display user map‘-setting of KinectManager-component, too.
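
As a small illustration of the RawUserDepth-option, the sketch below reads the raw depth frame and logs the distance at the center of the depth image. The 512 x 424 resolution used here is specific to the Kinect-v2 sensor.

using UnityEngine;

// Sketch, assuming a Kinect-v2 sensor: reads the raw depth frame and logs the distance
// at the center of the depth image. Each element of the raw depth map is a distance in mm.
public class RawDepthExample : MonoBehaviour
{
    private const int DepthW = 512;  // Kinect-v2 depth image width
    private const int DepthH = 424;  // Kinect-v2 depth image height

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        var rawDepth = manager.GetRawDepthMap();  // one value per depth pixel, in mm
        if (rawDepth == null || rawDepth.Length < DepthW * DepthH)
            return;

        int centerIndex = (DepthH / 2) * DepthW + (DepthW / 2);
        int centerDepthMm = rawDepth[centerIndex];

        Debug.Log("Distance at the depth-image center: " + (centerDepthMm / 1000f) + " m");
    }
}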

How to set up the user detection order

There is a ‘User detection order’-setting of the KinectManager-component. You can use it to determine how the user detection should be done, depending on your requirements. Here are short descriptions of the available options:

  • Appearance is selected by default. It means that the player indices are assigned in the order of user appearance. The first detected user gets player index 0, the next one gets index 1, etc. If user 0 gets lost, the remaining users are not reordered. The next newly detected user will take its place;
  • Distance means that player indices are assigned depending on distance of the detected users to the sensor. The closest one will get player index 0, the next closest one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the distances to the remaining users;
  • Left to right means that player indices are assigned depending on the X-position of the detected users. The leftmost one will get player index 0, the next leftmost one – index 1, etc. If a user gets lost, the player indices are reordered, depending on the X-positions of the remaining users;

The user-detection area can be further limited with the ‘Min user distance’, ‘Max user distance’ and ‘Max left right distance’-settings, in meters from the sensor. The maximum number of detected users can be limited by lowering the value of the ‘Max tracked user’-setting.

How to enable body-blending in the FittingRoom-demo, or disable it to increase FPS

If you select the MainCamera in the KinectFittingRoom1-demo scene (in v2.10 or above), you will see a component called UserBodyBlender. It is responsible for mixing the clothing model (overlaying the user) with the real-world objects (including the user’s body parts), depending on the distance to the camera. For instance, if your arms or other real-world objects are in front of the model, you will see them overlaying the model, as expected.

You can enable the component, to turn on the user’s body-blending functionality. The ‘Depth threshold’-setting may be used to adjust the minimum distance to the front of the model (in meters). It determines when a real-world object becomes visible. It is set by default to 0.1m, but you could experiment a bit to see if any other value works better for your models. If the scene performance (in terms of FPS) is not sufficient and body-blending is not important, you can disable the UserBodyBlender-component to increase performance.

How to build Windows-Store (UWP-8.1) application

To do it, you need at least v2.10.1 of the K2-asset. To build for ‘Windows store’, first select ‘Windows store’ as platform in ‘Build settings’, and press the ‘Switch platform’-button. Then do as follows:

1. Unzip Assets/Plugins-Metro.zip. This will create Assets/Plugins-Metro-folder.
2. Delete the KinectScripts/SharpZipLib-folder.
3. Optionally, delete all zip.bytes-files in Assets/Resources. You won’t need these libraries in Windows/Store. All Kinect-v2 libraries reside in Plugins-Metro-folder.
4. Select ‘File / Build Settings’ from the menu. Add the scenes you want to build. Select ‘Windows Store’ as platform. Select ‘8.1’ as target SDK. Then click the Build-button. Select an empty folder for the Windows-store project and wait for the build to complete.
5. Go to the build-folder and open the generated solution (.sln-file) with Visual studio.
6. Change the default ARM-processor target to ‘x86’. The Kinect sensor is not compatible with ARM processors.
7. Right click ‘References’ in the Project-window and select ‘Add reference’. Select ‘Extensions’ and then the WindowsPreview.Kinect and Microsoft.Kinect.Face libraries. Then press OK.
8. Open the solution’s manifest-file ‘Package.appxmanifest’, go to the ‘Capabilities’-tab and enable ‘Microphone’ and ‘Webcam’ in the left panel. Save the manifest. This is needed to enable the sensor, when the UWP app starts up. Thanks to Yanis Lukes (aka Pendrokar) for providing this info!
9. Build the project. Run it, to test it locally. Don’t forget to turn on Windows developer mode on your machine.

How to work with multiple users

Kinect-v2 can fully track up to 6 users simultaneously. That’s why many of the Kinect-related components, like AvatarController, InteractionManager, model & category-selectors, gesture & interaction listeners, etc. have a setting called ‘Player index’. If set to 0, the respective component will track the 1st detected user. If set to 1, the component will track the 2nd detected user. If set to 2 – the 3rd user, etc. The order of user detection may be specified with the ‘User detection order’-setting of the KinectManager (component of the KinectController game object).
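
To enumerate the currently tracked users from a script, you can do something like the snippet below. GetUserIdByIndex() and GetUserPosition() are assumed to be part of the KinectManager public API; please verify the exact names in your version of the asset.

KinectManager manager = KinectManager.Instance;

if(manager && manager.IsInitialized())
{
    int userCount = manager.GetUsersCount();

    for(int i = 0; i < userCount; i++)
    {
        long userId = manager.GetUserIdByIndex(i);  // user ID for player index i

        if(userId != 0)
        {
            // position of the user in Kinect space, in meters
            Vector3 userPos = manager.GetUserPosition(userId);
            // do something with this user's data
        }
    }
}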

How to use the FacetrackingManager

The FacetrackingManager-component may be used for several purposes. First, adding it as a component of the KinectController will provide more precise neck and head tracking, when there are avatars in the scene (humanoid models utilizing the AvatarController-component). If HD face tracking is needed, you can enable the ‘Get face model data’-setting of the FacetrackingManager-component. Keep in mind that using HD face tracking will lower performance and may cause memory leaks, which can cause Unity to crash after multiple scene restarts. Please use this feature carefully.

In case ‘Get face model data’ is enabled, don’t forget to assign a mesh object (e.g. Quad) to the ‘Face model mesh’-setting. Pay also attention to the ‘Textured model mesh’-setting. The available options are: ‘None’ – means the mesh will not be textured; ‘Color map’ – the mesh will get its texture from the color camera image, i.e. it will reproduce the user’s face; ‘Face rectangle’ – the face mesh will be textured with its material’s Albedo texture, whereas the UV coordinates will match the detected face rectangle.

Finally, you can use the FacetrackingManager public API to get a lot of face-tracking data, like the user’s head position and rotation, animation units, shape units, face model vertices, etc.
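
As an illustration, the sketch below reads the head position and rotation of the tracked user. The method names used here (IsFaceTrackingInitialized, GetHeadPosition, GetHeadRotation) are assumptions modeled on the FacetrackingManager public API; check the component’s source for the exact signatures.

FacetrackingManager faceManager = FacetrackingManager.Instance;

if(faceManager && faceManager.IsFaceTrackingInitialized())
{
    // the method names here are illustrative - check the FacetrackingManager script for the exact API
    Vector3 headPos = faceManager.GetHeadPosition(false);     // head position of the primary user (not mirrored)
    Quaternion headRot = faceManager.GetHeadRotation(false);  // head rotation of the primary user (not mirrored)

    // do something with the head position and rotation
}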

How to add background image to the FittingRoom-demo (updated for v2.14 and later)

To replace the color-camera background in the FittingRoom-scene with a background image of your choice, please do as follows:

1. Enable the BackgroundRemovalManager-component of the KinectController-game object in the scene.
2. Make sure the ‘Compute user map’-setting of KinectManager (component of the KinectController, too) is set to ‘Body texture’, and the ‘Compute color map’-setting is enabled.
3. Set the needed background image as texture of the RawImage-component of BackgroundImage1-game object in the scene.
4. Run the scene to check, if it works as expected.

How to move the FPS-avatars of positionally tracked users in VR environment

There are two options for moving first-person avatars in VR-environment (the 1st avatar-demo scene in K2VR-asset):

1. If you use the Kinect’s positional tracking, turn off the Oculus/Vive positional tracking, because their coordinates are different to Kinect’s.
2. If you prefer to use the Oculus/Vive positional tracking:
– enable the ‘External root motion’-setting of the AvatarController-component of the avatar’s game object. This will disable the avatar motion according to the Kinect space coordinates.
– enable the HeadMover-component of avatar’s game object, and assign the MainCamera as ‘Target transform’, to follow the Oculus/Vive position.

Now try to run the scene. If there are issues with the MainCamera used as positional target, do as follows:
– add an empty game object to the scene. It will be used to follow the Oculus/Vive positions.
– assign the newly created game object to the ‘Target transform’-setting of the HeadMover-component.
– add a script to the newly created game object, and in that script’s Update()-function set programmatically the object’s transform position to be the current Oculus/Vive position.
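
A minimal sketch of such a script is shown below. It assumes the main camera is the VR camera, i.e. that its transform follows the current Oculus/Vive head position.

using UnityEngine;

// Sketch: copies the current VR-headset position to this game object on each update.
// It assumes the main camera is the VR camera, i.e. its transform follows the Oculus/Vive head position.
public class FollowHmdPosition : MonoBehaviour
{
    void Update()
    {
        Camera vrCamera = Camera.main;

        if (vrCamera != null)
        {
            transform.position = vrCamera.transform.position;
        }
    }
}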

How to create your own gestures

For gesture recognition there are two options – visual gestures (created with the Visual Gesture Builder, part of Kinect SDK 2.0) and programmatic gestures, coded in KinectGestures.cs or a class that extends it. The programmatic gestures detection consists mainly of tracking the position and movement of specific joints, relative to some other joints. For more info regarding how to create your own programmatic gestures look at this tip below.

The scenes demonstrating the detection of programmatic gestures are located in the KinectDemos/GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about continuous gestures.

And here is a video on creating and checking for visual gestures. Please check KinectDemos/GesturesDemo/VisualGesturesDemo-scene too, to see how to use visual gestures in Unity. A major issue with the visual gestures is that they usually work in the 32-bit builds only.

How to enable or disable the tracking of inferred joints

First, keep in mind that:
1. There is an ‘Ignore inferred joints’-setting of the KinectManager. The KinectManager is usually a component of the KinectController-game object in the demo scenes.
2. There is a public API method of KinectManager, called IsJointTracked(). This method is utilized by various scripts & components in the demo scenes.

Here is how it works:
The Kinect SDK tracks the positions of all body joints, together with their respective tracking states. These states can be Tracked, NotTracked or Inferred. When the ‘Ignore inferred joints’-setting is enabled, the IsJointTracked()-method returns true only when the tracking state is Tracked, and false when the state is Inferred or NotTracked. I.e. only the really tracked joints are considered valid. When the setting is disabled, the IsJointTracked()-method returns true when the tracking state is Tracked or Inferred, and false only when the state is NotTracked. I.e. both tracked and inferred joints are considered valid.

How to build exe with the Kinect-v2 plugins provided by Microsoft

In case you’re targeting Kinect-v2 sensor only, and would like to avoid packing all native libraries that come with the K2-asset in the build, as well as unpacking them into the working directory of the executable afterwards, do as follows:

1. Download and unzip the Kinect-v2 Unity Plugins from here.
2. Open your Unity project. Select ‘Assets / Import Package / Custom Package’ from the menu and import only the Plugins-folder from ‘Kinect.2.0.1410.19000.unitypackage’. You can find it in the unzipped package from p.1 above. Please don’t import anything from the ‘Standard Assets’-folder of the unitypackage. All needed standard assets are already present in the K2-asset.
3. If you are using the FacetrackingManager in your scenes, import only the Plugins-folder from ‘Kinect.Face.2.0.1410.19000.unitypackage’ as well. If you are using visual gestures (i.e. VisualGestureManager in your scenes), import only the Plugins-folder from ‘Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage’, too. Again, please don’t import anything from the ‘Standard Assets’-folders of these unitypackages. All needed standard assets are already present in the K2-asset.
4. Delete KinectV2UnityAddin.x64.zip & NuiDatabase.zip (or all zipped libraries) from the K2Examples/Resources-folder. You can see them as .zip-files in the Assets-window, or as .zip.bytes-files in the Windows explorer. You are going to use the Kinect-v2 sensor only, so all these zipped libraries are not needed any more.
5. Delete all dlls in the root-folder of your Unity project. The root-folder is the parent-folder of the Assets-folder of your project, and is not visible in the Editor. You may need to stop the Unity editor first. Delete the NuiDatabase- and vgbtechs-folders in the root-folder, as well. These dlls and folders are no longer needed, because they are part of the project’s Plugins-folder now.
6. Open Unity editor again, load the project and try to run the demo scenes in the project, to make sure they work as expected.
7. If everything is OK, build the executable again. This should work for both x86 and x86_64-architectures, as well as for Windows-Store, SDK 8.1.

How to build Windows-Store (UWP-10) application

To do it, you need at least v2.12.2 of the K2-asset. Then follow these steps:

1. (optional, as of v2.14.1) Delete the KinectScripts/SharpZipLib-folder. It is not needed for UWP. If you leave it, it may cause syntax errors later.
2. Open ‘File / Build Settings’ in Unity editor, switch to ‘Windows store’ platform and select ‘Universal 10’ as SDK. Make sure ‘.Net’ is selected as scripting backend. Optionally enable the ‘Unity C# Project’ and ‘Development build’-settings, if you’d like to edit the Unity scripts in Visual studio later.
3. Press the ‘Build’-button, select output folder and wait for Unity to finish exporting the UWP-Visual studio solution.
4. Close or minimize the Unity editor, then open the exported UWP solution in Visual studio.
5. Select x86 or x64 as target platform in Visual studio.
6. Open ‘Package.appmanifest’ of the main project, and on tab ‘Capabilities’ enable ‘Microphone’ & ‘Webcam’. These may be enabled in the Windows-store’s Player settings in Unity, too.
7. If you have enabled the ‘Unity C# Project’-setting in p.2 above, right click on ‘Assembly-CSharp’-project in the Solution explorer, select ‘Properties’ from the context menu, and then select ‘Windows 10 Anniversary Edition (10.0; Build 14393)’ as ‘Target platform’. Otherwise you will get compilation errors.
8. Build and run the solution, on the local or remote machine. It should work now.

Please mind that the FacetrackingManager and SpeechRecognitionManager-components, and hence the scenes that use them, will not work with the current version of the K2-UWP interface.

How to run the projector-demo scene (v2.13 and later)

To run the KinectProjectorDemo-scene, you need to calibrate the projector to the Kinect sensor first. To do it, please follow these steps:

1. To do the needed sensor-projector calibration, you first need to download RoomAliveToolkit, and then open and build the ProCamCalibration-project in Microsoft Visual Studio 2015 or later. For your convenience, here is a ready-made build of the needed executables, made with VS-2015.
2. Then open the ProCamCalibration-page and follow carefully the instructions in ‘Tutorial: Calibrating One Camera and One Projector’, from ‘Room setup’ to ‘Inspect the results’.
3. After the ProCamCalibration finishes successfully, copy the generated calibration xml-file to the KinectDemos/ProjectorDemo/Resources-folder of the K2-asset.
4. Open the KinectProjectorDemo-scene in Unity editor, select the MainCamera-game object in Hierarchy, and drag the calibration xml-file generated by ProCamCalibrationTool to the ‘Calibration Xml’-setting of its ProjectorCamera-component. Please also check, if the value of ‘Proj name in config’-setting is the same as the projector name set in the calibration xml-file (usually ‘0’).
5. Set the projector to duplicate the main screen, enable ‘Maximize on play’ in Editor (or build the scene), and run the scene in full-screen mode. Walk in front of the sensor, to check if the projected skeleton overlays correctly the user’s body. You can also try to enable ‘U_Character’ game object in the scene, to see how a virtual 3D-model can overlay the user’s body at runtime.

How to render background and the background-removal image on the scene background

First off, if you want to replace the color-camera background in the FittingRoom-demo scene with the background-removal image, please see and follow these steps.

For all other demo-scenes: You can replace the color-camera image on scene background with the background-removal image, by following these (rather complex) steps:

1. Create an empty game object in the scene, name it BackgroundImage1, and add ‘GUI Texture’-component to it (this will change after the release of Unity 2017.2, because it deprecates GUI-Textures). Set its Transform position to (0.5, 0.5, 0) to center it on the screen. This object will be used to render the scene background, so you can select a suitable picture for the Texture-setting of its GUITexture-component. If you leave its Texture-setting to None, a skybox or solid color will be rendered as scene background.

2. In a similar way, create a BackgroundImage2-game object. This object will be used to render the detected users, so leave the Texture-setting of its GUITexture-component to None (it will be set at runtime by a script), and set the Y-scale of the object to -1. This is needed to flip the rendered texture vertically. The reason: Unity textures are rendered bottom to top, while the Kinect images are top to bottom.

3. Add KinectScripts/BackgroundRemovalManager-script as component to the KinectController-game object in the scene (if it is not there yet). This is needed to provide the background removal functionality to the scene.

4. Add KinectDemos/BackgroundRemovalDemo/Scripts/ForegroundToImage-script as component to the BackgroundImage2-game object. This component will set the foreground texture, created at runtime by the BackgroundRemovalManager-component, as Texture of the GUI-Texture component (see p2 above).

Now the tricky part: Two more cameras are needed to display the user image over the scene background – one to render the background picture, a 2nd one to render the user image on top of it, and finally the main camera, to render the 3D objects on top of the background cameras. Cameras in Unity have a setting called ‘Culling Mask’, where you can set the layers rendered by each camera. There are also two more settings – Depth and ‘Clear flags’ – that may be used to change the cameras’ rendering order.

5. In our case, two extra layers will be needed for the correct rendering of background cameras. Select ‘Add layer’ from the Layer-dropdown in the top-right corner of the Inspector and add 2 layers – ‘BackgroundLayer1’ and ‘BackgroundLayer2’, as shown below. Unfortunately, when Unity exports the K2-package, it doesn’t export the extra layers too. That’s why the extra layers are missing in the demo-scenes.

6. After you have added the extra layers, select the BackgroundImage1-object in Hierarchy and set its layer to ‘BackgroundLayer1’. Then select the BackgroundImage2 and set its layer to ‘BackgroundLayer2’.

7. Create a camera-object in the scene and name it BackgroundCamera1. Set its CullingMask to ‘BackgroundLayer1’ only. Then set its ‘Depth’-setting to (-2) and its ‘Clear flags’-setting to ‘Skybox’ or ‘Solid color’. This means this camera will render first, will clear the output and then render the texture of BackgroundImage1. Don’t forget to disable its AudioListener-component, too. Otherwise, expect endless warnings in the console, regarding multiple audio listeners in the scene.

8. Create a 2nd camera-object and name it BackgroundCamera2. Set its CullingMask to ‘BackgroundLayer2’ only, its ‘Depth’ to (-1) and its ‘Clear flags’ to ‘Depth only’. This means this camera will render 2nd (because -1 > -2), will not clear the previous camera rendering, but instead render the BackgroundImage2 texture on top of it. Again, don’t forget to disable its AudioListener-component.

9. Finally, select the ‘Main Camera’ in the scene. Set its ‘Depth’ to 0 and ‘Clear flags’ to ‘Depth only’. In its ‘Culling mask’ disable ‘BackgroundLayer1’ and ‘BackgroundLayer2’, because they are already rendered by the background cameras. This way the main camera will render all other layers in the scene, on top of the background cameras (depth: 0 > -1 > -2).

If you need a practical example of the above setup, please look at the objects, layers and cameras of the KinectDemos/BackgroundRemovalDemo/KinectBackgroundRemoval1-demo scene.

How to run the demo scenes on non-Windows platforms

Starting with v2.14 of the K2-asset you can run and build many of the demo-scenes on non-Windows platforms. In this case you can utilize the KinectDataServer and KinectDataClient components, to transfer the Kinect body and interaction data over the network. The same approach is used by the K2VR-asset. Here is what to do:

1. Add KinectScripts/KinectDataClient.cs as component to KinectController-game object in the client scene. It will replace the direct connection to the sensor with connection to the KinectDataServer-app over the network.
2. On the machine, where the Kinect-sensor is connected, run KinectDemos/KinectDataServer/KinectDataServer-scene or download the ready-built KinectDataServer-app for the same version of Unity editor, as the one running the client scene. The ready-built KinectDataServer-app can be found on this page.
3. Make sure the KinectDataServer and the client scene run in the same subnet. This is needed, if you’d like the client to discover automatically the running instance of KinectDataServer. Otherwise you would need to set manually the ‘Server host’ and ‘Server port’-settings of the KinectDataClient-component.
4. Run the client scene to make sure it connects to the server. If it doesn’t, check the console for error messages.
5. If the connection between the client and server is OK, and the client scene works as expected, build it for the target platform and test it there too.

How to workaround the user tracking issue, when the user is turned back

Starting with v2.14 of the K2-asset you can (at least roughly) work around the user tracking issue, when the user is turned back. Here is what to do:

1. Add FacetrackingManager-component to your scene, if there isn’t one there already. The face-tracking is needed for front & back user detection.
2. Enable the ‘Allow turn arounds’-setting of KinectManager. The KinectManager is component of KinectController-game object in all demo scenes.
3. Run the scene to test it. Keep in mind this feature is only a workaround (not a solution) for an issue in Kinect SDK. The issue is that by design Kinect tracks correctly only users who face the sensor. The side tracking is not smooth, as well. And finally, this workaround is experimental and may not work in all cases.

How to get the full scene depth image as texture

If you’d like to get the full scene depth image, instead of user-only depth image, please follow these steps:

1. Open Resources/DepthShader.shader and uncomment the commented-out else-part of the ‘if’-statement near the end of the shader. Save the shader and go back to the Unity editor.
2. Make sure the ‘Compute user map’-setting of the KinectManager is set to ‘User texture’. KinectManager is component of the KinectController-game object in all demo scenes.
3. Optionally enable the ‘Display user map’-setting of KinectManager, if you want to see the depth texture on screen.
4. You can also get the depth texture by calling ‘KinectManager.Instance.GetUsersLblTex()’ in your scripts, and then use it the way you want.
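
For instance, a minimal sketch that displays this texture on a UI RawImage could look like this (assign a RawImage from your canvas to the rawImage-field):

using UnityEngine;
using UnityEngine.UI;

// Sketch: displays the texture returned by GetUsersLblTex() on a UI RawImage component.
public class ShowDepthTexture : MonoBehaviour
{
    public RawImage rawImage;  // assign a RawImage from your canvas in the Inspector

    void Update()
    {
        KinectManager manager = KinectManager.Instance;

        if (rawImage != null && manager && manager.IsInitialized())
        {
            rawImage.texture = manager.GetUsersLblTex();
        }
    }
}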

Some useful hints regarding AvatarController and AvatarScaler

The AvatarController-component moves the joints of the humanoid model it is attached to, according to the user’s movements in front of the Kinect-sensor. The AvatarScaler-component (used mainly in the fitting-room scenes) scales the model to match the user in means of height, arms length, etc. Here are some useful hints regarding these components:

1. If you need the avatar to move around its initial position, make sure the ‘Pos relative to camera’-setting of its AvatarController is set to ‘None’.
2. If ‘Pos relative to camera’ references a camera instead, the avatar’s position with respect to that camera will be the same as the user’s position with respect to the Kinect sensor.
3. If ‘Pos relative to camera’ references a camera and the ‘Pos rel overlay color’-setting is enabled too, the 3D position of the avatar is adjusted to overlay the user on the color camera feed.
4. In this last case, if the model has AvatarScaler component too, you should set the ‘Foreground camera’-setting of AvatarScaler to the same camera. Then scaling calculations will be based on the adjusted (overlayed) joint positions, instead of on the joint positions in space.
5. The ‘Continuous scaling’-setting of AvatarScaler determines whether the model scaling should take place only once when the user is detected (when the setting is disabled), or continuously – on each update (when the setting is enabled).

If you need the avatar to obey physics and gravity, disable the ‘Vertical movement’-setting of the AvatarController-component. Disable the ‘Grounded feet’-setting too, if it is enabled. Then enable the ‘Freeze rotation’-setting of its Rigidbody-component for all axes (X, Y & Z). Make sure the ‘Is Kinematic’-setting is disabled as well, to make the physics control the avatar’s rigid body.

If you want to stop the sensor control of the humanoid model in the scene, you can remove the AvatarController-component of the model. If you want to resume the sensor control of the model, add the AvatarController-component to the humanoid model again. After you remove or add this component, don’t forget to call ‘KinectManager.Instance.refreshAvatarControllers();’, to update the list of avatars KinectManager keeps track of.
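
In code, stopping and resuming the sensor control of a model could look like the sketch below. The avatarModel-reference is hypothetical – assign your humanoid model to it in the Inspector.

using UnityEngine;
using System.Collections;

// Sketch: stops or resumes the Kinect control of a humanoid model at runtime,
// by removing or adding its AvatarController-component and refreshing the KinectManager's avatar list.
public class AvatarControlToggle : MonoBehaviour
{
    public GameObject avatarModel;  // hypothetical reference - assign your humanoid model here

    public void StopSensorControl()
    {
        AvatarController ctrl = avatarModel ? avatarModel.GetComponent<AvatarController>() : null;

        if (ctrl != null)
        {
            Destroy(ctrl);
            StartCoroutine(RefreshNextFrame());  // Destroy() takes effect at the end of the frame
        }
    }

    public void ResumeSensorControl()
    {
        if (avatarModel != null && avatarModel.GetComponent<AvatarController>() == null)
        {
            avatarModel.AddComponent<AvatarController>();
            KinectManager.Instance.refreshAvatarControllers();
        }
    }

    private IEnumerator RefreshNextFrame()
    {
        yield return null;  // wait one frame, so the destroyed component is really gone
        KinectManager.Instance.refreshAvatarControllers();
    }
}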

How to setup the K2-package (v2.16 or later) to work with Orbbec Astra sensors (deprecated – use Nuitrack)

1. Go to https://orbbec3d.com/develop/ and click on ‘Download Astra Driver and OpenNI 2’. Here is the shortcut: http://www.orbbec3d.net/Tools_SDK_OpenNI/3-Windows.zip
2. Unzip the downloaded file, go to ‘Sensor Driver’-folder and run SensorDriver_V4.3.0.4.exe to install the Orbbec Astra driver.
3. Connect the Orbbec Astra sensor. If the driver is installed correctly, you should see it in the Device Manager, under ‘Orbbec’.
4. If you have Kinect SDK 2.0 installed, please open KinectScripts/Interfaces/Kinect2Interface.cs and change ‘sensorAlwaysAvailable = true;’ at the beginning of the class to ‘sensorAlwaysAvailable = false;’. More information about this action can be found here.
5. Run one of the avatar-demo scenes to check, if the Orbbec Astra interface works. The sensor should light up and the user(s) should be detected.

How to setup the K2-asset (v2.17 or later) to work with Nuitrack body tracking SDK (updated 11.Jun.2018)

1. To install Nuitrack SDK, follow the instructions on this page, for your respective platform. Nuitrack installation archives can be found here.
2. Connect the sensor, go to [NUITRACK_HOME]/activation_tool-folder and run the Nuitrack-executable. Press the Test-button at the top. You should see the depth stream coming from the sensor. And if you move in front of the sensor, you should see how Nuitrack SDK tracks your body and joints.
3. If you can’t see the depth image and body tracking when the sensor is connected, this would mean Nuitrack SDK is not working properly. Close the Nuitrack-executable, go to [NUITRACK_HOME]/bin/OpenNI2/Drivers and delete (or move somewhere else) the SenDuck-driver and its ini-file. Then go back to step 2 above, and try again.
4. If you have ‘Kinect SDK 2.0‘ installed on the same machine, look at this tip, to see how to turn off the K2-sensor always-available flag.
5. Please mind, you can expect crashes while using Nuitrack SDK with Unity. The two most common crash-causes are: a) the sensor is not connected when you start the scene; b) you’re using the Nuitrack trial version, which stops after 3 minutes of scene run and causes a Unity crash as a side effect.
6. If you buy a Nuitrack license, don’t forget to import it into Nuitrack’s activation tool. On Windows this is: <nuitrack-home>\activation_tool\Nuitrack.exe. You can use the same app to test the currently connected sensor, as well. If everything works, you are ready to test the Nuitrack interface in Unity.
7. Run one of the avatar-demo scenes to check, if the Nuitrack interface, the sensor depth stream and Nuitrack body tracking works. Run the color-collider demo scene, to check if the color stream works, as well.
8. Please mind: The scenes that rely on color image overlays may or may not work correctly. This may be fixed in future K2-asset updates.

How to control Keijiro’s Skinner-avatars with the Avatar-Controller component

1. Download Keijiro’s Skinner project from its GitHub-repository.
2. Import the K2-asset from Unity asset store into the same project. Delete K2Examples/KinectDemos-folder. The demo scenes are not needed here.
3. Open Assets/Test/Test-scene. Disable Neo-game object in Hierarchy. It is not really needed.
4. Create an empty game object in Hierarchy and name it KinectController, to be consistent with the other demo scenes. Add K2Examples/KinectScripts/KinectManager.cs as component to this object. The KinectManager-component is needed by all other Kinect-related components.
5. Select ‘Neo (Skinner Source)’-game object in Hierarchy. Delete ‘Mocaps’ from the Controller-setting of its Animator-component, to prevent playing the recorded mo-cap animation, when the scene starts.
6. Press ‘Select’ below the object’s name, to find model’s asset in the project. Disable ‘Optimize game objects’-setting on its Rig-tab, and make sure its rig is Humanoid. Otherwise the AvatarController will not find the model’s joints it needs to control.
7. Add K2Examples/KinectScripts/AvatarController-component to ‘Neo (Skinner Source)’-game object in the scene, and enable its ‘Mirrored movement’ and ‘Vertical movement’-settings. Make sure the object’s transform rotation is (0, 180, 0).
8. Optionally, disable the script components of ‘Camera tracker’, ‘Rotation’, ‘Distance’ & ‘Shake’-parent game objects of the main camera in the scene, if you’d like to prevent the camera’s own animated movements.
9. Run the scene and start moving in front of the sensor, to see the effect. Try the other skinner renderers as well. They are children of ‘Skinner Renderers’-game object in the scene.

How to track a ball hitting a wall (hints)

This is a question I have been asked quite a lot recently, because there are many possibilities for interactive playgrounds out there. For instance: virtual football or basketball shooting, kids throwing balls at projected animals on a wall, people stepping on a virtual floor, etc. Here are some hints on how to achieve it:

1. The only thing you need in this case is to process the raw depth image coming from the sensor. You can get it by calling KinectManager.Instance.GetRawDepthMap(). It is an array of short integers (DepthW x DepthH in size), representing the distance to the detected objects for each point of the depth image, in mm.
2. You know the distance from the sensor to the wall in meters, hence in mm too. It is a constant, so you can filter out all depth points that are more than 1-2 meters (or less) in front of the wall. They are of no interest here, because they are too far from the wall. You will need to experiment a bit to find the exact filtering distance.
3. Use some CV algorithm to locate the centers of the blobs of the remaining, unfiltered depth points. There may be only one blob in case of one ball, or many blobs in case of many balls or people walking on the floor.
4. When these blobs (and their respective centers) are at their maximum distance, i.e. close to the fixed distance to the wall, this means the ball(s) have hit the wall.
5. Map the depth coordinates of the blob centers to color-camera coordinates by using KinectManager.Instance.MapDepthPointToColorCoords(), and you will have the screen point of impact, or use KinectManager.Instance.MapDepthPointToSpaceCoords(), if you prefer to get the 3D position of the ball at the moment of impact. If you are not sure how to do the sensor-to-projector calibration, look at this tip. A rough code sketch of these steps follows below.
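Here is a minimal code sketch of how these hints could fit together. It assumes GetRawDepthMap() returns an array of ushort values, that GetDepthImageWidth()/GetDepthImageHeight() helpers exist, and uses example distances you would replace with your own measurements; the blob detection itself is left to your CV library of choice.

    using UnityEngine;

    // Rough sketch of the ball-vs-wall hints above. The distances and some
    // helper names are assumptions - adjust them to your own setup and CV library.
    public class WallHitDetector : MonoBehaviour
    {
        public ushort wallDistanceMm = 3000;   // measured sensor-to-wall distance in mm (example value)
        public ushort trackingBandMm = 1500;   // keep only points within this band in front of the wall
        public ushort hitThresholdMm = 100;    // blob centers this close to the wall count as a hit

        void Update()
        {
            KinectManager manager = KinectManager.Instance;
            if (manager == null || !manager.IsInitialized())
                return;

            // raw depth map - one distance value in mm per depth pixel
            ushort[] rawDepth = manager.GetRawDepthMap();
            int depthW = manager.GetDepthImageWidth();   // assumed helper names
            int depthH = manager.GetDepthImageHeight();

            for (int y = 0; y < depthH; y++)
            {
                for (int x = 0; x < depthW; x++)
                {
                    ushort depth = rawDepth[y * depthW + x];

                    // step 2: drop invalid points and everything too far in front of the wall
                    if (depth == 0 || depth < (wallDistanceMm - trackingBandMm) || depth > wallDistanceMm)
                        continue;

                    // step 3: feed the remaining points into your blob detector (e.g. OpenCV).
                    // step 4: a blob whose center depth is >= (wallDistanceMm - hitThresholdMm) has hit the wall.
                    // step 5: map the hit point, for instance:
                    // Vector2 colorPos = manager.MapDepthPointToColorCoords(new Vector2(x, y), depth);
                }
            }
        }
    }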

How to create your own programmatic gestures

The programmatic gestures are implemented in KinectScripts/KinectGestures.cs, or in a class that extends it. The detection of a gesture consists of checking for gesture-specific poses in the different gesture states. Look below for more information.

1. Open KinectScripts/KinectGestures.cs and add the name of your gesture(s) to the Gestures-enum. As you probably know, the enums in C# cannot be extended. This is the reason you should modify it to add your unique gesture names here. Alternatively, you can use the predefined UserGestureX-names for your gestures, if you prefer not to modify KinectGestures.cs.
2. Find the CheckForGesture()-method in the opened class and add case(s) for the new gesture(s) at the end of its internal switch. It will contain the code that detects the gesture.
3. In the gesture-case, add an internal switch that will check for the user pose in the respective state. See the code of the other simple gestures (like RaiseLeftHand, RaiseRightHand, SwipeLeft or SwipeRight), if you need an example.

In CheckForGesture() you have access to the jointsPos-array, containing all joint positions, and the jointsTracked-array, showing whether the respective joints are currently tracked or not. The joint positions are in world coordinates, in meters.

The gesture detection code usually consists of checking for specific user poses in the current gesture state. The gesture detection always starts with the initial state 0. At this state you should check if the gesture has started. For instance, if the tracked joint (hand, foot or knee) is positioned properly relative to some other joint (like body center, hip or shoulder). If it is, this means the gesture has started. Save the position of the tracked joint, the current time and increment the state to 1. All this may be done by calling SetGestureJoint().

Then, at the next state (1), you should check if the gesture continues successfully or not. For instance, if the tracked joint has moved as expected relative to the other joint, and within the expected time frame in seconds. If it’s not, cancel the gesture and start over from state 0. This could be done by calling SetGestureCancelled()-method.

Otherwise, if this is the last expected state, consider the gesture completed and call CheckPoseComplete() with a last parameter of 0 (i.e. don’t wait), to mark the gesture as complete. In case of gesture cancellation or completion, the gesture listeners get notified.

If the gesture is successful so far, but not yet completed, call SetGestureJoint() again to save the current joint position and timestamp, as well as to increment the gesture state. Then go on with the next gesture-state processing, until the gesture gets completed. It would also be good to set the progress of the gesture in the gestureData-structure, when the gesture consists of more than two states. A rough sketch of such a gesture case is shown below.
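For illustration, here is a rough sketch of what such a case could look like, modeled after the existing simple gestures. The gesture name (RaiseRightKnee), the joint-index variables and the thresholds are placeholders, and the helper-method calls should be checked against their actual signatures in your copy of KinectGestures.cs:

    // hypothetical new case, added at the end of the switch in CheckForGesture();
    // rightKneeIndex and rightHipIndex stand for the respective joint indices
    case Gestures.RaiseRightKnee:
        switch (gestureData.state)
        {
            case 0:  // initial state - check whether the gesture has started
                if (jointsTracked[rightKneeIndex] && jointsTracked[rightHipIndex] &&
                    (jointsPos[rightKneeIndex].y - jointsPos[rightHipIndex].y) > 0.1f)
                {
                    // save the joint position and timestamp, and switch to state 1
                    SetGestureJoint(ref gestureData, timestamp, rightKneeIndex, jointsPos[rightKneeIndex]);
                }
                break;

            case 1:  // the gesture has started - check whether it completes or gets cancelled
                bool isInPose = jointsTracked[rightKneeIndex] && jointsTracked[rightHipIndex] &&
                    (jointsPos[rightKneeIndex].y - jointsPos[rightHipIndex].y) > 0.1f;

                if ((timestamp - gestureData.timestamp) <= 1.5f)
                {
                    // last parameter 0 - complete the gesture immediately, if the pose is held
                    CheckPoseComplete(ref gestureData, timestamp, jointsPos[rightKneeIndex], isInPose, 0f);
                }
                else
                {
                    // the expected time frame has passed - cancel and start over from state 0
                    SetGestureCancelled(ref gestureData);
                }
                break;
        }
        break;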

The demo scenes related to checking for programmatic gestures are located in the KinectDemos/GesturesDemo-folder. The KinectGesturesDemo1-scene shows how to utilize discrete gestures, and the KinectGesturesDemo2-scene is about the continuous gestures.

More tips regarding listening for discrete and continuous programmatic gestures in Unity scenes can be found above.

What is the file-format used by the KinectRecorderPlayer-component (KinectRecorderDemo)

The KinectRecorderPlayer-component can record or replay body-recording files. These are text files, where each line represents a body-frame at a specific moment in time. You can use it to replay or analyze the body-frame recordings in your own tools. Here is the format of each line. See the sample body-frames below, for reference.

0. Time in seconds since the start of recording, followed by ‘|’. All other field separators are ‘,’.
This value is used by the KinectRecorderPlayer-component for time-sync, when it needs to replay the body recording.

1. Body-frame identifier; it should be ‘kb’.
2. Body-frame timestamp, coming from the Kinect SDK (ignored by the KinectManager).
3. Number of max tracked bodies (6).
4. Number of max tracked body joints (25).

Then follows the data for each body (6 times):
6. Body tracking flag – 1 if the body is tracked, 0 if it is not tracked (the 5 zeros at the end of the lines below are for the 5 missing bodies).

If the body is tracked, the bodyId and the data for all body joints follow. If it is not tracked, the bodyId and joint data (7-9) are skipped.
7. Body ID.

Body joint data follows (25 times, for all body joints, ordered by JointType – see KinectScripts/KinectInterop.cs):
8. Joint tracking state – 0 means not-tracked; 1 – inferred; 2 – tracked.

If the joint is inferred or tracked, the joint position data follows. If it is not-tracked, the joint position data (9) is skipped.
9. Joint position data – X, Y & Z.

And here are two body-frame samples, for reference:

0.774|kb,101856122898130000,6,25,1,72057594037928806,2,-0.415,-0.351,1.922,2,-0.453,-0.058,1.971,2,-0.488,0.223,2.008,2,-0.450,0.342,2.032,2,-0.548,0.115,1.886,1,-0.555,-0.047,1.747,1,-0.374,-0.104,1.760,1,-0.364,-0.105,1.828,2,-0.330,0.103,2.065,2,-0.262,-0.100,1.963,2,-0.363,-0.068,1.798,1,-0.416,-0.078,1.789,2,-0.457,-0.334,1.847,2,-0.478,-0.757,1.915,2,-0.467,-1.048,1.943,2,-0.365,-1.043,1.839,2,-0.361,-0.356,1.929,2,-0.402,-0.663,1.795,1,-0.294,-1.098,1.806,1,-0.218,-1.081,1.710,2,-0.480,0.154,2.001,2,-0.335,-0.109,1.840,2,-0.338,-0.062,1.804,2,-0.450,-0.067,1.736,2,-0.435,-0.031,1.800,0,0,0,0,0

1.710|kb,101856132898750000,6,25,1,72057594037928806,2,-0.416,-0.351,1.922,2,-0.453,-0.059,1.972,2,-0.487,0.223,2.008,2,-0.449,0.342,2.032,2,-0.542,0.116,1.881,1,-0.555,-0.047,1.748,1,-0.374,-0.102,1.760,1,-0.364,-0.104,1.826,2,-0.327,0.102,2.063,2,-0.262,-0.100,1.963,2,-0.363,-0.065,1.799,2,-0.415,-0.071,1.785,2,-0.458,-0.334,1.848,2,-0.477,-0.757,1.914,1,-0.483,-1.116,2.008,1,-0.406,-1.127,1.917,2,-0.361,-0.356,1.928,2,-0.402,-0.670,1.796,1,-0.295,-1.100,1.805,1,-0.218,-1.083,1.710,2,-0.480,0.154,2.001,2,-0.334,-0.106,1.840,2,-0.339,-0.061,1.799,2,-0.453,-0.062,1.731,2,-0.435,-0.020,1.798,0,0,0,0,0
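If you want to read these files in your own tools, here is a minimal C# parsing sketch that follows the field layout described above (error handling omitted for brevity):

    using System.Globalization;
    using UnityEngine;

    // Minimal sketch of parsing one line of a body-recording file,
    // following the field layout described above.
    public static class BodyFrameParser
    {
        public static void ParseLine(string line)
        {
            // field 0: relative time in seconds, separated by '|'
            string[] timeSplit = line.Split('|');
            float relTime = float.Parse(timeSplit[0], CultureInfo.InvariantCulture);

            // all remaining fields are comma-separated
            string[] f = timeSplit[1].Split(',');
            int i = 0;

            string frameId = f[i++];                    // 1: must be "kb"
            long kinectTimestamp = long.Parse(f[i++]);  // 2: Kinect SDK timestamp
            int maxBodies = int.Parse(f[i++]);          // 3: usually 6
            int maxJoints = int.Parse(f[i++]);          // 4: usually 25

            for (int body = 0; body < maxBodies; body++)
            {
                int bodyTracked = int.Parse(f[i++]);    // 6: 1 - tracked, 0 - not tracked
                if (bodyTracked == 0)
                    continue;                           // bodyId and joint data are skipped

                long bodyId = long.Parse(f[i++]);       // 7: body ID

                for (int joint = 0; joint < maxJoints; joint++)
                {
                    int jointState = int.Parse(f[i++]); // 8: 0 not-tracked, 1 inferred, 2 tracked
                    if (jointState == 0)
                        continue;                       // position data is skipped

                    // 9: joint position - X, Y & Z, in meters
                    Vector3 jointPos = new Vector3(
                        float.Parse(f[i++], CultureInfo.InvariantCulture),
                        float.Parse(f[i++], CultureInfo.InvariantCulture),
                        float.Parse(f[i++], CultureInfo.InvariantCulture));

                    Debug.Log("t=" + relTime + "s body=" + bodyId + " joint=" + joint + " pos=" + jointPos);
                }
            }
        }
    }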

How to enable user gender and age detection in KinectFittingRoom1-demo scene

You can utilize the cloud face detection in the KinectFittingRoom1-demo scene, if you’d like to detect the user’s gender and age, and properly adjust the model categories for him or her. The CloudFaceDetector-component uses Azure Cognitive Services for user-face detection and analysis. These services are free of charge, if you don’t exceed a certain limit (30,000 requests per month and 20 per minute). Here is how to do it:

1. Go to this page and press the big blue ‘Get API Key’-button next to ‘Face API’. See this screenshot, if you need more details.
2. You will be asked to sign in with your Microsoft account and select your Azure subscription. At the end you should land on this page.
3. Press the ‘Create a resource’-button at the upper left part of the dashboard, then select ‘AI + Machine Learning’ and then ‘Face’. You need to give the Face-service a name and resource group, and select an endpoint (server address) near you. Select the free pricing tier, if you don’t plan a bulk of requests. Then create the service. See this screenshot, if you need more details.
4. After the Face-service is created and deployed, select ‘All resources’ at the left side of the dashboard, then the name of the created Face-service from the list of services. Then select ‘Quick start’ from the service menu, if not already selected. Once you are there, press the ‘Keys’-link and copy one of the provided subscription keys. Don’t forget to write down the first part of the endpoint address, as well. These parameters will be needed in the next step. See this screenshot, if you need more details.
5. Go back to Unity, open the KinectFittingRoom1-demo scene and select the CloudFaceController-game object in Hierarchy. Type the first part of the endpoint address (written down in step 4 above) into ‘Face service location’, and paste the copied subscription key into ‘Face subscription key’. See this screenshot, if you need more details.
6. Finally, select the KinectController-game object in Hierarchy, find its ‘Category Selector’-component and enable the ‘Detect gender age’-setting. See this screenshot, if you need more details.
7. That’s it. Save the scene and run it to try it out. Now, after the T-pose detection, the user’s face will be analyzed and you will get information regarding the user’s gender and age at the lower left part of the screen.
8. If everything is OK, you can set up the model selectors in the scene to be available only for users with a specific gender and age range, as needed. See the ‘Model gender’, ‘Minimum age’ and ‘Maximum age’-settings of the available ModelSelector-components.

1,098 thoughts on “Kinect v2 Tips, Tricks and Examples”

  1. Hello,
    Just to say that you’ve got a great package.

    I am trying to control a 2D puppet-like character with Kinect. How can I constrain the joint movements, so that the arms and legs don’t get into strange positions?

    Tomislav

    • Hi, there is a setting of KinectManager called ‘Ignore Z-Coordinates’. You can enable it to set 2D mode for the detected movements and joint orientations. KinectManager is component of the KinectController-game object in all demo scenes.

    • The BackgroundRemovalManager component has a setting called ‘Color camera resolution’. Make sure this setting is enabled in your scene, to get the maximum user-image coverage. The non-tracked strips left and right are unfortunately normal, even in this case, because the Kinect depth image is smaller than the color one, and obviously doesn’t cover these areas.

  2. Hi Rumen!

    I am trying to dynamically load rigged models into my scene. I followed your guide to add new models. (rigging – unity mecanim – avatar controller) It works perfectly if the object is in the scene during startup.

    But if I load the model from the resources folder with
    GameObject model= Instantiate(Resources.Load(“riggedGirl”, typeof(GameObject))) as GameObject;
    and use AddComponent() to attach the AvatarController, the model won’t move.

    does the kinectController check for avatar controllers during startup?
    do I need to tell the kinectController which avatar controller it should control?

    cheers

    • Hi Achim, see the LoadDressingModel()-method of the ModelSelector.cs-script. It is the component in the 1st fitting-room demo that instantiates the selected model. I suppose you have forgotten to add the instantiated avatar to the list of the KinectManager’s avatar controllers. A rough sketch of how this could look is shown below.
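      Roughly, the run-time setup could look like the sketch below (the resource name comes from your question; the AvatarController field names are taken from the Inspector and may differ slightly in your version):

      using UnityEngine;

      // rough sketch of instantiating a rigged model at run-time and registering it with the KinectManager
      public class RuntimeAvatarLoader : MonoBehaviour
      {
          void Start()
          {
              GameObject model = Instantiate(Resources.Load("riggedGirl", typeof(GameObject))) as GameObject;
              model.transform.position = Vector3.zero;
              model.transform.rotation = Quaternion.Euler(0f, 180f, 0f);  // face the camera, as in the demos

              AvatarController ac = model.AddComponent<AvatarController>();
              ac.playerIndex = 0;          // track the 1st detected user (assumed field names)
              ac.mirroredMovement = true;
              ac.verticalMovement = true;

              // let the KinectManager rebuild its list of avatar controllers
              KinectManager.Instance.refreshAvatarControllers();
          }
      }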

  3. Hi Rumen, maybe you know how to make a point cloud with color work in Unity, like this

  4. Hi Rumen, can you explain to me how to enable Kinect to detect the user when they turn around? I saw the Kinect controller already handles it, but it’s not working. I used it in the fitting-room demo and the dress doesn’t rotate 360 degrees either. Thanks.

  5. Hi Rumen, can you explain to me how to detect the user when they turn around? I saw the Kinect controller already handles it, but in the fitting-room demo I enabled the flag to allow turn-arounds and I also printed the calibration text to show me whether it is FACE or BACK, and it always prints FACE. So the dress doesn’t rotate 360 degrees either. Thanks.

    • Hi, this setting was only experimental. Unfortunately, it’s not working correctly (yet). Its purpose was to overcome the SDK “feature” of tracking the users correctly only when they are facing the sensor. For the time being, you’d need to warn (or not allow) the users to turn more than ~80 degrees left or right. Sorry for this limitation.

  6. Hello, I have problems using more than one user in the PhotoBooth scene. I have added 6 JointOverlayers, 6 InteractionManagers and 6 PhotoBoothControllers, one for each user, and configured them. But I can’t get every user to have his own model (Medusa, Batman, etc.). Only one user can have a mask at a time.
    How can I have 6 users with their own independent masks?
    Sorry for my English.

    • The PhotoBoothController references static mask-models in the scene, as configured in Inspector. In your case you should instead make the available models as prefabs, and then instantiate them at run-time to fill the respective mask-arrays (headMasks, leftHandMasks, chestMasks) of PhotoBoothController.cs. I’m also not sure why would you need six InteractionManagers. Usually, only one user would be allowed to control the photo-shooting.

  7. Hey Rumen !

    In my application I use the KinectRecorderPlayer script. But while the script is playing, I don’t get the UserDetected or UserLost events from KinectGestures.GestureListenerInterface (which I implemented in my script) anymore. My plan was to play recorded avatar movements until a user is detected, then stop the KinectRecorderPlayer, and when the user is lost, start the recorded movements again. Is this possible without making big changes in KinectInterop and/or KinectManager?
    Thanks!

    • Hi, sorry, I cannot test your issue right now, because I’m at a conference this week. But I remember I had requests for similar use cases before. If you look in the KinectDemos/RecorderDemo/Scripts-folder, you will see there is another script component called PlayerDetectorController.cs in there, which you can use in combination with KinectRecorderPlayer, to do what you need. Here is some more info regarding this component: https://ratemt.com/k2docs/PlayerDetectorController.html

  8. Hi,

    I just bought the Kinect v2 Examples on the Unity Asset store, and I like them so far 🙂

    I was wondering how to combine the Avatars Demo with the fourth Face Tracking Demo. I’ve tried adding the Model Face Controller and the Facetracking Manager to the Avatar object and playing around with the settings but it doesn’t seem to work. Would you have an example/how-to for this?

    • Hi, first you need an avatar with a rigged head, in terms of eyebrows, eyelids, lips and jaw. See the FaceRigged-model in the 4th FT-demo for reference. In this regard, the avatar-model in AvatarsDemo has only the jaw rigged, as far as I see. So, you could animate only its jaw-bone with the ModelFaceController. The FacetrackingManager may be a component of the KinectController-game object, as in the demo, not of the avatar’s object. Then you need to assign the jaw-bone to the respective setting of MFC, and then adjust the rotation axis and limits, if needed. Hope this is enough information for a start.

      • I have a similar problem to Christian’s: we want to combine the Avatar Demo with the first Face Tracking Demo, so that the avatar gets the face of the user. Should I also take the approach of assigning the jaw-bone and then adjusting the rotation axis and limits? As the ModelFaceController is not part of the 1st Face Tracking Demo, it should be easier? My main problem is to get the correct size and coordinates of the avatar’s face and map the user’s face onto this position.

      • In your case I would suggest to parent the FaceModelMesh-quad to the neck or head-joint of the avatar, and to disable the ‘Move model mesh’-setting of the FacetrackingManager-component. Then experiment a bit to adjust the quad to fit the avatar’s head, as good as possible.

  9. Hi,
    When I connect a Logitech webcam to the PC and use WebCamTexture with it to show the image (because I must put the camera far away), the Kinect starts dropping frames seriously… When I stop the WebCamTexture, the problem still exists…
    How can I solve this problem?
    Thanks.

    • I made a mistake, now when I stop the WebCamTexture, it becomes correct.
      How can I do about the first problem?

      • Kinect-v2 needs a dedicated USB-3 port, and has its own color camera as well. Why do you need a second color camera?
        Nevertheless, you could try to stop the unneeded streams and textures, for instance disable ‘Compute color map’-setting, and set ‘Compute user map’ to ‘Raw user depth’.

  10. Hi.
    I want to limit the z-axis movement of the avatar in KinectAvatarsDemo1.
    So I want to make the avatars move only in the x-axis.
    Is there a simple setting?
    Or what script should I change?

    • Oh, I found it in your older answer!
      It was in the Kinect2AvatarPos() function inside AvatarController.cs.

      • You can also use the ‘Ignore Z-Coordinates’-setting of the KinectManager (usually component of KinectController-game object in the scene).

      • I’ve been using a modified version of Avatars Demo 1 for a VR project by attaching the main camera to one of the avatar’s character controller. It works great in all versions of Unity 5 (like being inside one body and having your actions mirrored by another), but in Unity 2017 it is like looking out of two cameras simultaneously.

      • I just checked the 2nd avatar demo scene in Unity 2017.1 editor. It shows first person camera view. The camera is parented to avatar’s neck, like in your case. The scene runs as expected, and the camera view is OK, as far as I see. So, maybe there is something else in your case (a component, setting, etc.) that causes the issue.

      • Two-camera problem resolved, but there is another disconcerting and persistent issue. There is a sense of stepping in to and out of the character, as well as a sense of “body lag”, where the camera (and the user’s virtual POV) lags behind the user as they move through physical space. This causes two major problems:

        1) The user steps into and out of the avatar body as they move through virtual and physical space. Additionally, tall or short people do not have a naturalistic experience.

        2) This also causes the user’s vision to move out of sync, resulting in serious vertigo. This sensation is worse when the camera is parented to the neck or head, as in Avatar Demo 2.

        This had not been a problem previously, but is now an issue in Unity 2017 and all Unity 5 versions. The issue occurs in my project, as well as the Avatar Demo 1 scenes in fresh projects. Avatar Demo 2 in a fresh project is even worse. I’m using an HTC vive.

        Thanks for your excellent package! Any advice would be greatly appreciated.

      • There is a setting of AvatarController called ‘External root motion’. You can enable it to stop the avatar body movement based on the Kinect body estimations, and instead move the avatar based on the headset position reported by Vive. This will prevent your issues, I think. There was a HeadMover-script component as well, as far as I remember, that could help you estimate the body position from the head position and spine orientation.

      • Thanks for your quick follow-up, Rumen. I’ve enabled External Root Motion, which helps slightly with the vertigo, but hasn’t changed the problem of stepping in and out of the body. Even though the mirrored avatar moves just fine, the first person avatar seems to be stuck in one place.

        There is no HeadMover-component of Cubeman (the avatar’s gameobject?), or of the avatar. I am unable to find it using the Add Component button, either. Is the HeadMover-component somewhere else?

        The avatar does move based on motion picked up from the Kinect. The Kinect and the Vive have been calibrated together at Room Scale. The problem is repeatable in Unity 5.6 and 2017. If it will help, I can send you screenshots.

      • The idea of ‘External root motion’-setting is that the avatar will be moved by some other script or component. That’s why it doesn’t move anymore and stays in place. But you are right – HeadMover was part of the K2VR-asset before, and I forgot to move it to the core K2-asset scripts. Please e-mail me some screenshots. It will be good to see what you mean on a picture. And I’ll send you back the HeadMover-component.

  11. I bought your SDK2 for Kinect2. I have two doubts regarding the fitting room. First, is it possible to have 2 or 3 persons in the fitting room? How to do that?
    Second, is it possible that the dress is also displayed at the back, or at least to display a full model? I need to display a SUPERMAN costume and I don’t know how to display the Superman cape at the back.

    thanks

    • To your questions:
      1. Add CategorySelector & ModelSelector-components for users with player-index 0, 1, 2…
      2. Not sure what you mean, but if you need the user to turn around, there is ‘Allow turn arounds’-setting of KinectManager-component in the scene. If you enable it, please add FacetrackingManager-component to KinectController too, because it utilizes the user face-tracking. Keep in mind this setting is only workaround for a bug in Kinect tracking, not full featured back-tracking.

      • No, there isn’t a predefined object for them. The only requirement, as far as I remember, was that the CategorySelector and all its ModelSelectors (for the same player-index) should be components of the same object. I’ve put them all on KinectController-game object in the demo scene, to make them easier to be found by the new users of the K2-asset.

  12. hey rumen !

    I work on a Unity project with different scenes and, opposite to your MultipleSceneDemo, I need a different KinectManager in every scene.
    Jumping from the 1st scene to the 2nd scene is no problem:
    GameObject.Find(“MainScene”).SetActive(false);
    SceneManager.LoadScene(1, LoadSceneMode.Additive);

    But after jumping back to the 1st scene, the Kinect turns off its lights…:
    mainScene.SetActive(true);
    SceneManager.UnloadScene(1);

    The line “//#define USE_SINGLE_KM_IN_MULTIPLE_SCENES” is commented out.

    Maybe you or someone here has a hint to this problem.
    Thx !

    • If it were possible to change these public settings of the KinectManager at runtime:

      Sensor Height
      Sensor Angle
      Auto Height Angle
      Compute User Map
      Compute Color Map
      Display User Map
      Use Bone Orientation Constraints

      Then it should work with only one KinectManager for all scenes too.
      But somehow I remember that not all of these settings can be changed at runtime, is this right?

      • Yes, you are right, there is some issue when the Kinect is turned off and then back on in subsequent scenes. One other customer told me recently there should be a “middle scene” between the two scenes that use the KinectManager, in order for this approach to work. Anyway, I would recommend using a single KinectManager across the scenes, as shown in the KinectDemos/MultiSceneDemo.

        Regarding the Height & Angle-settings: After you change the sensor height and/or angle, call KinectManager.Instance.UpdateKinectToWorldMatrix() to apply the new values to the transformation matrix. AutoHeightAngle automates these settings with the values returned by the sensor, when there is a user around. See the small sketch below.
        Regarding ComputeUserMap & ComputeColorMap – better enable both.
        Regarding DisplayUserMap – I think you can enable/disable it at runtime, but you could also use your own raw-image panel. Just set its texture to KinectManager.Instance.GetUsersLblTex().
        Regarding UseBoneOrientationConstraints – enable it at the beginning. Then you can disable or enable it at runtime, I think.
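        Just as an illustration, a runtime height/angle change could look like this (the sensorHeight/sensorAngle field names are assumptions – check the KinectManager inspector and source):

        using UnityEngine;

        // rough sketch of changing the sensor pose at run-time and re-applying the Kinect-to-world matrix
        public class SensorPoseUpdater : MonoBehaviour
        {
            public void SetSensorPose(float heightMeters, float angleDegrees)
            {
                KinectManager manager = KinectManager.Instance;
                if (manager == null)
                    return;

                manager.sensorHeight = heightMeters;  // assumed public field names
                manager.sensorAngle = angleDegrees;

                // apply the new values to the Kinect-to-world transformation matrix
                manager.UpdateKinectToWorldMatrix();
            }
        }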

  13. Hi Rumen!
    Thanks again for your continous work, and for the last update of your asset!
    I want to add a “Camera Filter” (from this asset: https://www.assetstore.unity3d.com/en/#!/content/18433) to the “Color Map” of the user in the “BackgroundRemoval1” scene, to do something like this:
    https://www.youtube.com/watch?v=xw-7R1tRvdM
    But applying the “filter effect” to the real-time Color Map camera, instead of applying the filter to the background image (like in the “BackgroundRemoval 3” scene).
    How can I do something like that?

    Thanks in advance!

    Best regards,

    Chris.

    • Hi, open KinectScripts/KinectInterop.cs, find the UpdateBackgroundRemoval()-function, then look for this line: ‘sensorData.color2DepthMaterial.SetTexture(“_ColorTex”, sensorData.colorImageTexture);’. You can insert the color image processing before this line and then replace ‘sensorData.colorImageTexture’ in it with the processed texture. Hope I understood your issue correctly.

      • OK Rumen,
        Thanks for your help! I will try to do what you suggest.

        Best regards,

        Cris.

      • Hi again Rumen,

        I’ve done what you said, but I have some questions.

        I’ve seen that the ‘sensorData.colorImageTexture’ is a Texture 2D object.
        The Camera Filters I want to use, are applied on the “OnRenderImage ()” event of a camera.

        https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnRenderImage.html

        I need to have the “Only the User Color Texture” in one layer that one camera shows it.
        Is this the “BackgroundImage2”?

        And in another layer I need to have the “Background without the User Color Texture” from the Kinect.
        Is this the “BackgroundImage1”?

        So I can apply different Camera Filters to each layer, and then render all the layers with the main camera.

        Do I have to work with the ‘sensorData.colorImageTexture’ object, that you said?

        Best regards,

        Cris

      • Hi Rumen,

        Don’t worry, I’ve solved it =) You were right. I had to apply the Camera Filter to the “BackgroundRemovalManager.GetForegroundTex()”. I used your “ForegroundToImage.cs” Script, and then I applied the texture to a “RawImage” of the UI. After getting the ForegroundTexture I applied the Camera Filter and then ok!

        Thanks for your useful tip about the “KinectInterop.cs” script! That was my starting point for getting to the solution!

        Best regards,

        Cris.

      • Hi Rumen,

        Thanks again for this useful tip.
        I will render it using multiple cameras and the Layers that you suggest.

        Best regards,

        Cris.

  14. Hi Rumen,
    I want to ask you about avatar control with Kinect. My 3D humanoid mimics my gestures, but I want the humanoid to jump from a building to the ground. Please help.

    • As far as I understand, you want the avatar to be controlled by Kinect, then animated and moved somewhere else, and then to be controlled by Kinect again. If this is your case, you can remove the AvatarController-component (in your script) before the animation/movement to the new position. When it lands there, add the AvatarController-component again and it will be controlled by Kinect again. After you remove or add this component, don’t forget to call KinectManager.Instance.refreshAvatarControllers(), to update the KM list of avatars. A rough sketch of this approach is shown below.
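      Here is a rough sketch of that approach (only AvatarController and refreshAvatarControllers() are taken from the K2-asset; the rest, including the playerIndex field name, is an assumption):

      using UnityEngine;

      // rough sketch: detach the avatar from Kinect control, play the jump animation, then re-attach it
      public class AvatarJumpSwitcher : MonoBehaviour
      {
          public void StartJump()
          {
              AvatarController ac = GetComponent<AvatarController>();
              if (ac != null)
              {
                  // Destroy() removes the component at the end of the current frame;
                  // if needed, call refreshAvatarControllers() again on the next frame
                  Destroy(ac);
                  KinectManager.Instance.refreshAvatarControllers();
              }

              // ... trigger the jump/fall animation here, e.g. via the Animator ...
          }

          public void OnLanded()
          {
              // give the control back to Kinect, after the avatar has landed
              AvatarController ac = gameObject.AddComponent<AvatarController>();
              ac.playerIndex = 0;  // assumed field name - track the 1st user

              KinectManager.Instance.refreshAvatarControllers();
          }
      }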

  15. Hello there!
    I am trying to make a portrait version of an app, utilizing the ColorColliderDemo. It seems this example is not properly made for portrait mode. Despite enabling the Portrait Background script on the BackgroundImage object and setting the Game display to 1080 x 1920, the Hand Colliders seem to appear at the wrong positions on the X axis.

    I managed to make them follow the hands properly but the way I did it is not very elegant, plus I’m not even sure why this works.

    There is this line in the HandColorOverlayer.cs:

    //float xNorm = (float)posColor.x / manager.GetColorImageWidth();

    manager.GetColorImageWidth() still returned 1920 instead of, say, 608. But hard-coding “608” in there still didn’t properly work, so I also added a small offset, and eventually I got what I wanted:

    float xNorm = -1.06f + (float)posColor.x / 608;

    Weird, right? I’m sure it’s perfectly reasonable for reasons I’m not quite sure I understand.

    • Hi, yes – you are right. Please e-mail me and I’ll send you the fixed HandColorOverlayer-script that works in both portrait and landscape mode, if you still need it.

      • Hi, I see the original question is from 2017. So the aforementioned fix should have been included in the subsequent 2017 update. If your K2-asset is older, just download the latest version. If you have a more recent version and still experiencing the issue, please e-mail me with more details on how to reproduce it. Please also mention your invoice number in the e-mail.

  16. Hi Rumen! Thank you for still answering our questions.
    I want to ask how, or whether it is possible, to have two or more User Mesh Visualizers at the same time. I tried to duplicate the component and change the Player Index parameter, but only one of the users is shown.
    http://imgur.com/a/JUjJ7

    • Hi Aldo, please open UserMeshVisualizer.cs, then search for and comment out this line: ‘sensorData.spaceCoordsBufferReady = false;’. This is a workaround. I’ll try to find a better solution for the next release.

  17. Dear Rumen, what’s the trick for putting 3D objects behind or in front of the user in “BackgroundRemovalDemo / KinectBackgroundRemoval2”?
    Thanks.

  18. Hi again Rumen!

    I have to do another effect with the Kinect: a “wall of hands”, imitating the back side of a “Pin Art / Pinscreen”.

    Here are some picures of a pinscreen:
    -> https://images-eu.ssl-images-amazon.com/images/I/51gHhlPOchL.jpg
    -> https://cdn-all.coolstuff.com/autogen/preset/aspectThumb/1200×900/0f1396b458d22d53c00e8088c7fbf563.jpg

    I need to have in Unity the back side of a “Pinscreen”, but instead of “pins” I will have animated arms with hands.
    The idea is to use the “KinectUserVisualizerDemo”, so the user can stay over this “wall of hands (inverse pinscreen)” and many hands can “hug” the user.

    “Wall of hands” like this:
    -> https://goo.gl/photos/zUoDAcEDH269fuUVA

    I want to use a mesh collider on the user mesh and on the hands of the wall. The idea is, when the user collides with a hand on the wall, the user’s mesh pushes the arm backwards according to the user’s depth. And when the user doesn’t collide with a hand, it moves forward to its original position. It’s just like the back side of a “pinscreen”.

    Can you please give me some help to do that?

    As always, any guide will be very appreciated.

    Best regards!

    Cris.

    • How can I help you? There is ‘Update mesh collider’-setting of UserMeshVisualizer, but keep in mind mesh colliders are slow, as far as I remember. Maybe the collider should not be updated at every mesh update, but more rarely – for instance at 0.1 seconds, if the user’s motions are not so fast.

      • Hi Rumen, ok I understand…
        Do you know a way to do the same, maybe using the alpha-texture, showing the 3d-hands just where the user is not?

      • I don’t know. Maybe a shader would not be a good idea in your use case. Think and experiment a bit more, before start implementing it.

      • Hi Rumen!

        Thanks for your guidance. I think the “KinectBackgroundRemoval5” demo scene and the “DepthColliderDemo2D” are very useful for me, because I want to use the 2D user mesh (with the texture).

        I see you’re using simple 2D colliders in the “DepthColliderDemo2D”, but I need to adjust them to the real body width and height at runtime.

        Is it possible to have the “Update mesh collider”-setting applied to the 2D user mesh, like in the “KinectBackgroundRemoval5” scene or in the “DepthColliderDemo2D” scene, instead of applying it to the 3D mesh of the user (as in the “KinectUserVisualizer” scene)?

        Best regards,

        Chris.

      • Hi again Rumen!

        I want to make a Raycast to different points of the “UserImage” from the “KinectBackgroundRemoval5” demo scene, and see if the alphachannel of the hit point is 1 or 0…
        How can I get the alpha value of the Raycast?

        I wanted to apply this solution:

        http://answers.unity3d.com/questions/189998/2d-collisions-on-a-texture2d-with-transparent-area.html

        But I’m stuck, because I’m getting a RenderTexture when I use the “hitRender.material.mainTexture”, and I need a Texture2D to apply the “GetPixel()” function..

        Can you please give me some help?

        Thanks,

        Chris.

      • You can use KinectInterop.RenderTex2Tex2D()-method to convert a render texture to texture2d. Then it is a matter of texture-coordinate calculation and getting the pixel value.

      • Ok, good idea Rumen, thanks!

        Last question.. Sorry that I insist with my questions…
        I’m using the “DepthColliderDemo2D”, with the user’s 2D colliders triggering against about 50 2D colliders that fill the wall, so I can hide the objects that are colliding with the user’s collider.

        How can I automatically adapt the width of every user collider, for example the “SpineMidCollider”, to approximately cover the real user width? That way I could approximately adapt the 2D colliders to the user texture.

        I will actually appreciate any help about that.

        As always, thanks for your help!

        Best Regards,

        Chris.

      • Hi Rumen! Please, I’m stuck on a stupid thing… I’m developing a “MultiDisplay” app (a video wall), and I’m using the Collider2D demo scene. I’m using the class “DepthSpriteViewerMod” to generate the colliders and the sprite, but I see that when I run the app at different screen resolutions, the generated colliders lose their correct positions… How can I generate the 2D user colliders according to the user texture (sprite), independently of the display resolution used?

        Please, please I just have to solve this to finish…

        Thanks!

        Best regards,

        Chris.

      • Hi Chris. Sorry for the delay, but I’m busy with another project at the moment and can’t answer too many questions. Regarding the 2D-collider, I think this is a pure 2D-image question. There should be a way to get the rough outline of the user in 2D (opaque points in the image), or at least some kind of convex polygon of the user shape. For instance, find the outer user joints on the image (with min x or y, max x or y), then connect them and add them to a Polygon2D-collider. Keep in mind to get the overlayed 2D-positions of the joints, as in the demo scenes. I don’t know what this DepthSpriteViewerMod-class does, but it’s probably something similar.

  19. Hi, Rumen! I am wondering how to display a prefab model more realistically when detecting a user, to get an AR (i.e. augmented reality) experience, in a scene where the model may walk in front of the detected user. To be more exact, how to make the model look like it is walking on the real ground.

    • Hi, I’m not sure if you need a model or background-removal image of the user, to make it more realistic. If it is a model, look at the front-facing model in the 1st avatar-demo. If it is the foreground image of the user, look at the 5th background-removal demo.

  20. Hi Rumen,

    How do I get both the depth and the color textures at the same time (in one game scene)? I need both of them then use opencv to process the images.

    • Hi, set the ‘Compute user map’-setting of the KinectManager in the scene to ‘User texture’ or ‘Body texture’, and enable its ‘Compute color map’-setting as well. Then you can get the depth texture in your script by calling ‘KinectManager.Instance.GetUsersLblTex()’, and the color-camera texture with ‘KinectManager.Instance.GetUsersClrTex()’. Hope I understood your question correctly. See the sketch below.
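      A minimal sketch of grabbing both textures each frame could look like this (assuming both getters return Texture2D objects):

      using UnityEngine;

      // minimal sketch: grab the depth (user label) and color-camera textures each frame
      public class DepthColorGrabber : MonoBehaviour
      {
          void Update()
          {
              KinectManager manager = KinectManager.Instance;
              if (manager == null || !manager.IsInitialized())
                  return;

              Texture2D depthTex = manager.GetUsersLblTex();  // needs 'Compute user map' enabled
              Texture2D colorTex = manager.GetUsersClrTex();  // needs 'Compute color map' enabled

              // ... hand the textures over to OpenCV or your own processing here ...
          }
      }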

  21. Hello Rumen
    On the FittingRoom2 scene, the clothes are not sitting right on the model. Can you help me with this?

  22. Hey bro, I just purchased the Unity asset. Firstly, thank you for making it, since it helps a lot with the project I’m working on.

    But I have a question regarding this. Let’s say I have a scene already made (with a mirror in the center). How do I use it with the changing-room scene demo, where the real-life camera and the changing-room items will be in the created scene’s mirror?

    I really hope I explained it well, and sorry for my bad English, mate.

  23. I’m sorry if I posted twice; I’m still pretty new to WordPress. But thanks for the package, as it helped a lot with my project.

    My problem is that I made a scene with a mirror, where I want the fitting-room interaction to be as well. How do I make the camera show up only on the mirror and not in the full game scene view?

    Thanks in advance, and I’m sorry if I make any grammar mistakes!

      • If let’s say I make it in already in a 3d environment, is it possible to do the UI overall layer. I’ll take a picture to further explain what I mean.

        https://imgur.com/MXf3plN

        Like i want the Kinect changing room interaction to just be on the mirror and you can see the space around him. Hopefully that’s explainable on my end. And also, thanks for the early reply too !

      • I forgot to add that the mirror is a 3D model. What do I need it to be, if a 3D model doesn’t work? I apologize, as I just started learning about Kinect in Unity, so I have a lot of questions on how to implement it.

      • Well, I can’t answer all possible questions. You would need to research a bit by yourself. I remember seeing Unity demo with a mirror in a dressing room (called character-demo or something like this), but this was very long time ago, maybe on Unity3. I think they used a shader for the mirror reflection back then, but I’m not sure. Sorry, can’t help you more than this.

      • I see thanks for that, Im sorry if i got a little bit too carried away there and a bit too confusing. But ill make a mock up first to better explain what I mean.

        My idea is something like this although just a mockup to show it in pictures of what I mean in the beginning. Again thank you for taking your time to reply.

        https://imgur.com/a/gY57e

  24. Hi Rumen F
    I’m a developer from Thailand. I use Kinect for XBOX ONE.
    I have a problem about FittingRoomDemo1.
    I do follow your tips and tricks
    I added a new model to my project and put a preview image for each model in JPEG format (100 x 143 px, 24 bpp) in the respective model folder, and renamed it to ‘preview.jpg.bytes’, but my preview image displays ‘No preview’ in the model-selection menu.

    How can I do for it?

    *** I see the type of the preview image file in the FittingRoomDemo/Resources-folder (original demo file) is not a JPG file, it is a BYTES file.

    Thanks in advance!

    Best regards,

    KRICH

    • Hi, look at the ‘preview.jpg.bytes’-preview files in FittingRoomDemo/Resources/Clothing/000x-folders. Your preview jpg-files need to be placed in a similar way in the respective model-folders of the clothing category. These preview files must be still in JPEG format. Renaming them to ‘.jpg.bytes’-extension is needed by the Unity resource-loader, in order to consider them as binary ones.

      • Thanks for your guide.
        I followed your guide but my preview image display ‘No preview’ in the model-selection menu.
        I see something different.
        I recorded my desktop image to describe you
        please look at this

        preview image file in original FittingRoomDemo1
        https://ibb.co/g4givm

        preview image file in My Project
        https://ibb.co/cMSOvm

        Can you please give me some help?

        I really hope i explain it well, and sorry for my bad english mate.

        Thank.

        Krich

      • Look again at Type-column in these 2 images. In my setup, the type is BYTES, while in your setup the type is ‘ACDSee Photo…’. Open the Explorer options, and on the ‘View’-tab disable ‘Hide extensions for known file types’-option. This way, you will see the real file names, along with their extensions.

  25. Thank for your help.
    I can do it !!

    then I would like to ask you a little more
    Can I switch model categories (e.g. for shirts, pants, ties, etc.) for the same user?

    i see and followed your guide ” How to add your models to the FittingRoom-demo ”
    STEP 12. Enable the ‘Keep selected model’-setting of the ………………………
    STEP 13. The CategorySelector-component provides …………………….
    and I raise my hand in Play mode, but my model categories don’t change.

    How can I do for it?
    Do I have to edit Script?

    Thanks in advance!

    Best regards,

    KRICH

    • To switch categories, you need first to set up several ModelSelector-components on the KinectController-game object – one ModelSelector-component for each category. The ‘Model category’ and ‘Number of models’ need to be set according to the folder-name and the number of models in that folder. The models need to be placed in subfolders 0000, 0001, … etc. Look at the Clothing-folder, if you need an example. After the model-selectors are ready, make sure that the ‘Raise hand to change category’-setting of the CategorySelector is enabled. Then run the scene and raise your left or right hand to change the category. This should change the models in the model-menu.

  26. first MODEL CATEGORY
    main folder name is Cloths and sub-folder name start from 0000, 0001, …….
    https://ibb.co/e80M1R

    second MODEL CATEGORY
    main folder name is Pants and sub-folder name start from 0000, 0001, …….
    I can switch to the second model category, but no items show in this menu.
    https://ibb.co/gvFX86

    this is my set up
    I followed from your guide
    and create a new GameObject – DressingMenu2 in UI-Canvas
    https://ibb.co/nGRuMR

    what do i do wrong?

    • I think I would do the wrong way.
      You guide me that …”To switch categories, you need first to set up several ModelSelector-components on KinectController-game object. One ModelSelector-component for each category.”

      This is my step
      – I copy the first category ‘ModelSelector – component’ and paste it for the second category >>> first category is ‘Cloths’

      – I change the name of ‘Model Category’ and the number of ‘Number of Models’ according to my second category folder-name >>> second category is ‘Pants’

      *** I’m not sure how to set up new ‘Component’
      It is necessary to create new script ( ….MenuSelector2…. MenuSelector3 …..etc ) , new prefab ( DressingMenu2, DressingMenuItem2 ) or not.

      Guide me please
      Thanks in advance!

      Best regards,

      KRICH

      • Well, I’m not a model designer and don’t create models myself. I have many models from other customers, but as you understand I cannot share them. Generally, the clothing models are just normal Unity humanoid models (bipedal models with a Humanoid rig set in Unity). The only requirement (or wish) is for them to have bone lengths proportional to human bones, as detected by the Kinect. Otherwise the model could not cover the human body very well. Experiment a bit and you will find out. And when you have the model, experiment with the scale factors of the AvatarScaler-component, as well.

  27. I set it up following your guide.

    this’s my result.
    first model category ‘Cloths’
    https://ibb.co/eBzWO6

    second model category ‘Pants’
    https://ibb.co/jaxUAm

    The 3D models in the second menu show up, but the preview images don’t.

    *** first model category ‘Cloths’ is a new model category created by me.
    *** second model category ‘Pants’ is the same model category from your demo.

    Thanks in advance!

    Best regards,

    KRICH

    • Please zip your project and send it over to me via WeTransfer.com, so I could take a closer look. Don’t forget to mention the invoice number you got from Unity asset store, as well.

    • Hm, there is obviously some issue of the dressing menu with the Unity UI. If you save the scene, close and restart the Unity editor and then run the scene again, the menu will be OK.

      • yeahhhhh!! I can do it now

        This is my step
        – I rename My Project (My Scene)
        – I copy ModelSelector.cs and CategorySelector.cs from the original demo and paste to my project
        – ” GOOD JOB!! ” Everything Complete!!

        Thank you very much for the good advice from you.
        See you again if I have new project, new problem.

        Thank you again and again.

        Best regards,

        KRICH

  28. Hi Rumen F
    I am a Japanese developer.

    I am developing an application in which there are several avatars; the user chooses an avatar and can transform it.
    I assume it will be played by three people.

    Therefore I want to change the limit angles of “BoneOrientationConstraint” for every avatar.

    Is there any good method?

    I’m sorry in my strange English.
    Thank you very much for your help.

    • Hi, I’m not sure what exactly you want to do. If you want to limit the space for user detection, see ‘Min user distance’, ‘Max user distance’, ‘Max left-right distance’, ‘Max tracked users’ and ‘User detection order’-settings of the KinectManager-component in the scene. More information is available here: https://ratemt.com/k2docs/KinectManager.html

      • Sorry, it’s not that I want to limit the space for user detection.
        I want two things:
        1.
        How can I switch the KinectManager’s UseBoneOrientationConstraints for each model?
        2.
        When multiple people used SetFaceTexture at the same time, the face of the first one was applied to everyone. How can SetFaceTexture be used by multiple people simultaneously?

      • To your questions:
        1. You can’t. BoneOrientationConstraints is applied to all detected users. You could turn it off, and then try to use the ‘Apply muscle limits’-setting of the AvatarController-component of the respective models instead. This setting is experimental though. Please check first, if it works appropriately with the humanoid models you’re using.
        2. SetFaceTexture has a setting called ‘Player index’, used to set the tracked user. Duplicate the component and change its player index accordingly.

  29. Hi,

    I can’t seem to find an answer to this problem that works well, although I feel it is very basic. I am having issues with the jitteriness of the Kinect data. I’m moving a camera in Unity based on a person’s head position and rotation, so any flaws are very noticeable. I am basing a lot of my code on the ModelHatController, which does have a smooth filter. I am using Vector3.Lerp and Quaternion.Lerp accordingly, but am still getting fairly shaky results, or a lot of lag (depending on the smoothing factor). Even when I zoom in on the hat within KinectFaceTrackingDemo2, it is also shaky. I saw your JointPositionsFilter script but don’t see any implementation examples for it anywhere. Do you have any suggestions on how to solve this issue?

    Thank you,
    Tegan

    • There is a setting of KinectManager-component in the scene, called ‘Smoothing’. It uses the JointPositionsFilter-class internally. Try to experiment a bit with its different options.
      By the way, the 2nd avatars-demo scene uses a similar setup to yours, in terms of a first-person camera. And I haven’t noticed such major jitteriness so far.

      • Hi, thank you Rumen for your response.

        Changing the ‘smoothing’ factor in Kinect Manager doesn’t seem to have an effect on the scene. I’ve fixed it using several other methods, it makes it slow but it is okay for now.

        My other question may tie into my previous one: my Kinect is at an angle. I’ve followed your steps as stated here to set the angle; mine is -34. The Kinect however seems to ignore this angle input when getting the head’s position, which means my ‘Y’ axis has a great effect on the ‘Z’ axis.

        Is this something I have to manually offset in the position? If so, I don’t see any examples of how to do this. Or could it be that my Kinect Manager isn’t being read correctly?

        Thank you for your help,
        Tegan

  30. Hi Rumen ! First thank you for your amazing work, you literally saved me tons of time with this plugin.
    I’m successfully running the fitting-room demo with custom models, one of them covering almost the full body (except the head), which is difficult to calibrate for multiple users with various body shapes.
    So my idea was the following: I would like to mask the unnecessary user body parts to leave only the head and neck visible, as if the user’s head was “plugged into” the 3D model. Any recommendations to achieve this? Thank you.

  31. Hi Rumen,

    For some reason I cannot comment on your latest response to me, so I will continue here.

    I’ve tried using the KinectAvatarDemo2, and changing the sensor angle has no effect on the head rotation. It seems to vary by scene. AvatarDemo1 and AvatarDemo3 were okay, but there is no head rotation on the model, so it is difficult to say how well it actually works. KinectProjectorDemo was okay as well, but again that was just position-related.

    Changing the smoothing factor on KinectAvatarDemo2 also doesn’t seem to have a noticeable effect on the scene.
    Thanks for your help,

    Tegan

    • Yes, I see. Please open KinectScript/KinectManager.cs and look for ‘Vector3 rotAngles = headRotation.eulerAngles;’. After the block of 3 lines add this line: ‘rotAngles.x += sensorAngle;’. See the screenshot below. Please tell me then, if this resolves the head rotation issue you are having.

      head-rotation-sensor-angle-fix

      • That fixes the rotation, so that fixes half the issue! However, the head position is still off.

        The faceManager.GetHeadPosition(userId, true) is still ignoring the sensorAngle.
        Thanks for your help!

      • Look, the head position and head rotation, as returned by the FacetrackingManager, are intentionally in Kinect’s coordinate system. This position is fine, if it is used to overlay the color camera image. As far as I see, in the demo scenes it is used only for this. If you need to convert it into world coordinates, get the world transformation matrix like this: ‘Matrix4x4 k2wMatrix = KinectManager.Instance.GetKinectToWorldMatrix();’ and then transform the position like this: ‘Vector3 posHeadWorld = k2wMatrix.MultiplyPoint3x4(posHead);’.
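        Put together, it could look like this (GetPrimaryUserID() is assumed to be the getter for the 1st detected user):

        using UnityEngine;

        // sketch: convert the head position from Kinect coordinates to Unity world coordinates
        public class HeadWorldPosition : MonoBehaviour
        {
            void Update()
            {
                KinectManager manager = KinectManager.Instance;
                FacetrackingManager faceManager = FacetrackingManager.Instance;
                if (manager == null || faceManager == null)
                    return;

                long userId = manager.GetPrimaryUserID();  // assumed getter for the 1st user
                Vector3 posHead = faceManager.GetHeadPosition(userId, true);

                // Kinect space -> world space
                Matrix4x4 k2wMatrix = manager.GetKinectToWorldMatrix();
                Vector3 posHeadWorld = k2wMatrix.MultiplyPoint3x4(posHead);

                transform.position = posHeadWorld;
            }
        }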

  32. Didn’t realize that was intentional. Converting the position into world coordinates did the trick! Thanks again for your help.


  34. Hello Rumen, thanks for the amazing plug-in.

    I’m using your multi-scene demo to make sure one KinectManager is re-used throughout different scenes. I’ve set up the Player Calibration Pose as T-pose in the KinectManager. This makes sure that the right user is being tracked at all times. However, when I try to load the next scene, the tracked player seems to be lost and I end up having to do the T-pose again to register a tracked player. Is there any way I can carry over the tracked player across multiple scenes?
    Thanks!

    • I seemed to have missed the code in the LocateKinectController script that clears the tracked users. Commenting it out fixed it

    • Hi, see KinectDemos/VisualizerDemo/KinectUserVisualizer-demo scene. It scans the user only frontally, but I think it may be a good starting point. To have a full body scan, 1-3 more user meshes would be needed (from behind and/or from the sides), and then stick them all together in a single mesh.

      • Not sure I understand what exactly you mean. Also, not sure if you already found this, but if you enable the BackgroundRemovalManager-component in the fitting-room demo scene, you will get the background you see in the scene view when you run the scene. Moreover, you are free to modify the background-removal functionality the way you need. See the UpdateBackgroundRemoval()-function in KinectScripts/KinectInterop.cs and the materials/shaders it uses. The full source code, including the used shaders, is available in the package.

  35. Do you know if there are any tools to generate clothes from 2D image files? My vendor is just providing me with some 2D graphics, but I have no clue how to convert them into 3D models that can be used in the examples. Thanks in advance.

  36. Sir, is there any suggestion for the occlusion problem in the Fitting Room Demo now? Imagine an open shirt on a man: the back side of the shirt will occlude the front side of the human body, which looks weird. I tried the forward offset, but it failed. Should I do something about the shader for this? Thanks in advance.

    • Is the back of the shirt too close to the front? The user body blender tries to respect the depth of the virtual surfaces (as stored in the depth buffer), and the depth of the user himself. If adjusting the ‘Depth threshold’-parameter of UserBodyBlender doesn’t help, then try to work around the issue with a shader for the clothing, and disable the UserBodyBlender-component.

    • Nope. The models must be in the Resources-folder, and you need to rebuild the scene to embed the new model(s). Don’t forget to change the ‘Number of models’-setting of the respective ModelSelector-component, too.

  37. I’ve some issues with UserBodyBlender – it works fine in the editor, but in the build – there’s no image from kinect camera (only the blue screen, the cloth model and some dots). After disabling UserBodyBlender build works fine. I’m working now in Unity 5.6.2f1. Do you know this issue and maybe solution?

    • Hm, I just built the 1st fitting-room scene in the latest version with Unity 2017.2 and it works as expected, both in the build and in the editor. What is the version of the K2-asset you are using, and have you tried to build the 1st fitting room scene, after you import the package into a new Unity project?

  38. Hello Rumen!

    Thank you very much for the latest asset update!

    I have one question: I have to develop a project that need to keep Kinect active for 12 hours straight. But I think Kinect tends to fail once a certain amount of inactive time has passed.
    What do you recommend? Keep Kinect active for a while (e. g. 5 minutes), then via code set it off for 2 minutes, and then repeat this process continuously? Or always leave the Kinect active for 12 hours straight?

    Thank you very much!

    Best regards,

    Cris.

    • Just leave it to work for 12 hours, to see if it is OK. If there is a problem with that setup, then restart the app every hour or every few hours.

  39. Hi Rumen!

    Is there a way to simulate a mouse to click the Button component(e.g. click when a normal hand changing to grip)? I have tried a Interaction Manager(Script) with the settings Control Mouse Cursor and Control Mouse Drag enabled, but it didn’t work. And can I let the Control Mouse Cursor settings be disabled when it is outside the Game Window ? Because it will influence the mouse use for Game-else situation.

    Thanks in advance!

  40. is it possible to dynamically load the fbx in fitting room samples by downloading the model files? and how? Thanks.

    • Not sure where you are located, but there is holiday season in Europe now and I’m out of office.

      Regarding your issue, here is what I just tried:
      1. I copied the model you sent in the KinectDemos/FittingRoomDemo/Resources/Models-folder.
      2. Just like you, I selected the model in Unity and set its rig to Humanoid. Had to manually configure the leg joints in the process.
      3. Opened KinectFittingRoom2-demo scene and dragged FuseFemale6-model into the scene.
      4. Set the position to (0,0,0) and rotation to (0,180,0).
      5. Copied the AvatarController & AvatarScaler-components from ModelMF-object to FuseFemale6-object, then disabled the ModelMF-object in the scene.
      6. You can see the component settings in the attached screenshot.
      7. I ran the scene. The model was displayed over my body, and it looked quite OK.
      8. Maybe there are better scaling factors than those on the screenshot, but I just wanted to check if the model shows up OK or not.

      That said, I have not tried the model in the KinectFittingRoom1-scene yet, because it requires specific folder, file naming and ModelSelector component. If this is your issue, please e-mail me screenshots of:
      1. The folder and file-name of the model in Resources;
      2. The settings of the ModelSelector-component in the scene used to display the model.

      Do mind all ModelSelector-components and the CategorySelector-component should be on one object.

      FuseFemale6 Settings

      • I followed your instructions and the model can be displayed in FittingRoom2, but it cannot be displayed in FittingRoom1. I just added one more folder in “Clothing” named “0003” and put the files into that folder.

      • I was able to reproduce your issue in the 1st fitting-room scene. To work around it you have two options:

        1. Disable the UserBodyBlender-component of the MainCamera-object in the scene, or
        2. Select model/Tops in the model folder in Project-view, unfold the material (FuseFemale6_Top_Diffuse), and change the ‘Rendering mode’ from Transparent to Cutout. The same rendering mode is used by ModelMF & all clothing models in the fitting-room demo scenes.

      • The difference is the user body blender, as you can see on the pictures. It is disabled in FR2, but enabled in FR1. Please check if you use Linear color space (in Player settings). If you do, try to switch to Gamma instead. If this doesn’t help, you can disable the UserBodyBlender-component of the MainCamera in the FR1-scene.
