Kinect v2 Examples with MS-SDK

'Kinect v2 Examples with MS-SDK' v2.24 is a set of Kinect-v2 (aka 'Kinect for Xbox One') examples that use several major scripts, grouped in one folder. The package contains over thirty demo scenes.

Please also look at the Azure Kinect Examples for Unity asset. It works with Azure Kinect and Femto Bolt & Mega sensors, as well as with Kinect-v2 sensors.

The avatar-demo scenes show how to utilize Kinect-controlled avatars in your scenes, the gesture demos show how to use programmatic or visual gestures, the fitting-room demos show how to create your own dressing room, the overlay demos show how to overlay body parts with virtual objects, and so on. You can find short descriptions of all demo scenes in the K2-asset online documentation.

This package works with Kinect-v2 sensors (aka Kinect for Xbox One) and Kinect-v1 sensors (aka Kinect for Xbox 360). It can be used with all versions of Unity – Free, Plus & Pro.

If you need a package with similar functionality, components and demo scenes that works with a regular camera, please look at Computer Vision Examples for Unity.

One request:
Please don’t share this package or its demo scenes in source form with others, or as part of public repositories, without my explicit consent.

Customer support:
* First, please check if you can find the answer you're looking for on the Tips, tricks and examples page, as well as on the K2-asset Online Documentation page. See also the comments below the articles here, or in the Unity forums.
* If you e-mail me, please include your invoice number. You can find more information regarding e-mail support here.
* Please note that you can always upgrade your K2-asset free of charge, from the Unity Asset Store.

How to run the demo scenes:
1. (Kinect-v2) Download and install Kinect for Windows SDK v2.0. The download link is below.
2. (Kinect-v2) If you want to use Kinect speech recognition, download and install the Speech Platform Runtime, as well as EN-US (and other needed) language packs. The download links are below.
3. (Kinect-v1) If you want to work with Kinect-v1 sensor, please download and install Kinect for Windows SDK v1.8. The download link is below. Please also look at this tip.
4. Import this package into a new Unity project.
5. Open 'File / Build Settings' and make sure that 'Windows' is the current active platform, and the architecture is set to 'Intel 64 bit'.
6. Make sure that ‘Direct3D11’ is the first option in the ‘Auto Graphics API for Windows’-list setting, in ‘Player Settings / Other Settings / Rendering’.
7. Open and run a demo scene of your choice, from a subfolder of ‘K2Examples/KinectDemos’-folder. Short descriptions of all demo scenes are available here.

* Kinect for Windows SDK v2.0 (Windows-only) can be found here.
* MS Speech Platform Runtime v11 can be downloaded here. Please install both x86 and x64 versions, to be on the safe side.
* Kinect for Windows SDK 2.0 language packs can be downloaded here. The language codes are listed here.
* Kinect for Windows SDK v1.8 (Windows only) can be found here.

Documentation:
* The online documentation of the K2-asset is available here, and as a PDF file as well.

Downloads:
* The official release of the ‘Kinect v2 with MS-SDK’ is available at Unity Asset Store. All updates are free of charge.

Troubleshooting:
* If you get errors like 'Texture2D does not contain a definition for LoadImage' or 'Texture2D does not contain a definition for EncodeToJPG', please open the Package Manager, select 'Built-in packages' and enable the 'Image conversion' and 'Physics 2D' packages.
* If you get compilation errors like “Type `System.IO.FileInfo’ does not contain a definition for `Length’”, you need to set the build platform to ‘Windows standalone’. For more information look at this tip.
* If the Unity editor crashes, when you start demo-scenes with face-tracking components, please look at this workaround tip.
* If the demo scene reports errors or remains in ‘Waiting for users’-state, make sure you have installed Kinect SDK 2.0, the other needed components, and check if the sensor is connected.

* Here is a link to the project’s Unity forum: http://forum.unity3d.com/threads/kinect-v2-with-ms-sdk.260106/
* Many Kinect-related tips, tricks and examples are available here.
* The official online documentation of the K2-asset is available here.

Known Issues:
* If you get compilation errors like "Type `System.IO.FileInfo' does not contain a definition for `Length'", open the project's Build Settings (menu File / Build Settings) and make sure that 'PC, Mac & Linux Standalone' is selected as 'Platform' on the left. On the right side, 'Windows' must be the 'Target platform', and 'x86' or 'x86_64' the 'Architecture'. If Windows Standalone was not the default platform, make these changes and then click the 'Switch Platform' button, to set the new build platform. If the Windows Standalone platform is not installed at all, run UnityDownloadAssistant again, then select and install the 'Windows Build Support' component.
* If you experience Unity crashes, when you start the Avatar-demos, Face-tracking-demos or Fitting-room-demos, this is probably due to a known bug in the Kinect face-tracking subsystem. In this case, first try to update the NVidia drivers on your machine to their latest version from NVidia website. If this doesn’t help and the Unity editor still crashes, please disable or remove the FacetrackingManager-component of KinectController-game object. This will provide a quick workaround for many demo-scenes. The Face-tracking component and demos will still not work, of course.
* Unity 5.1.0 and 5.1.1 introduced an issue which causes some of the shaders in the K2-asset to stop working. They worked fine in 5.0.0 and 5.0.1, though. The issue is most visible if you run the background-removal demo: in Unity 5.1.0 or 5.1.1 the scene doesn't show any users over the background image. The workaround is to update to Unity 5.1.2 or later, where the shader issue was fixed.
* If you update an existing project to K2-asset v2.18 or later, you may get various syntax errors in the console, like this one: "error CS1502: The best overloaded method match for `Microsoft.Kinect.VisualGestureBuilder.VisualGestureBuilderFrameSource.Create(Windows.Kinect.KinectSensor, ulong)' has some invalid arguments". This may be caused by the 'Standard Assets'-folder having moved from the Assets-folder to the Assets/K2Examples-folder, due to the latest Asset-store requirements. The workaround is to delete the 'Assets/Standard Assets'-folder. Be careful though: the 'Assets/Standard Assets'-folder may contain scripts and files from other imported packages, too. In this case, see what files and folders the 'K2Examples/Standard Assets'-folder contains, and delete only those files and folders from 'Assets/Standard Assets', to prevent duplications.
* If you want to release a Windows 32 build of your project, please download this library (for Kinect-v2) and/or this library (for Kinect-v1), and put them into K2Examples/Resources-folder of your project. These 32-bit libraries are stripped out of the latest releases of K2-asset, in order to reduce its package size.

What’s New in Version 2.24:
1. Upgraded the ‘Kinect-v2 Examples with MS-SDK’ package to Unity 6.0.
2. Fixed the incorrect-face-rectangle issue in the ShowFaceImage-component (KinectFaceDemo3).
3. Cosmetic changes in various components and demo scenes.

Videos worth more than 1000 words:
Here is a mixed-reality interactive experience that takes the visitors to beautiful jungles, landscapes, safari and other imaginary worlds, created by ‘YOYO Events’ and ‘AnimatedStudio’:

Here is a mixed-reality interactive experience that takes the visitors to different worlds, different times and dimensions, created by ‘YOYO Events’ and ‘AnimatedStudio’:

Here is a video by Ricardo Salazar, created with Unity5, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.3:

…a video by Ranek Runthal, created with Unity4, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.3:

…and a video by Brandon Tay, created with Unity4, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.0:

811 thoughts on “Kinect v2 Examples with MS-SDK”

  1. Hi Rumen,
    I'd like to use pointcloudview.cs in the kinectscripts/samples/ folder,
    but I have no idea how to use it.
    Please let me know how to use it, or could you send me a sample?

    Best,
    Woo Lee

  2. Some examples of a few tests, this time with sound, that you can make with this amazing Unity package, and with the very good support from Rumen.
    Buy it, share it!!


    • Hello salazarxesign, how did you detect when the hand is open or closed? Thanks

      • It detects it automatically.
        Rumen's Unity package does it all by itself!!
        At first it activated the click after 2 seconds with your hand in the same place. But then Rumen changed the code in the 'InteractionManager' script, and he added a public function 'allow hand clicks'.
        If you insert the interaction script and choose the images of the hands, it works right away!!
        If you want, I can send you the Unity file of this game, if Rumen doesn't mind.
        Did you understand what I said?

  3. Hey Rumen! Thanks for your very good work with this Unity asset!! It’s awesome!!
    When will you release a smoother background removal?

    Best regards!

    Chris

  4. Hello, this is an awesome package, really useful. But I'd like to know: in the InteractionDemo, is it possible to create 3D objects, as if the Kinect could capture the depth of the movement? Thanks

    • I can't quite understand what you mean. Can you please give me an example? Send me an e-mail, if you don't want to talk about it publicly.

  5. Hey Rumen,
    I just checked the new update and it's amazing, thank you.
    I just have one problem: I'm getting a missing OpenCV dll with the background-removal demo. I checked, and the files exist. Here is the error:
    DllNotFoundException: opencv_core2410
    OpenCvSharp.Utilities.PInvokeHelper.TryPInvoke ()
    Rethrow as OpenCvSharpException: opencv_core2410
    *** An exception has occurred because of P/Invoke. ***
    Any advice?
    Thank you in advance

    • Hi Wissam, try to delete the opencv-dlls in the root folder of your Unity project (this is the folder that contains the Assets-folder). Then try to run the BR-scene again. This should copy the dlls there again. If the problem persists, fall back to the simple background removal: disable the BackgroundRemovalManager-component of the MainCamera and enable the SimpleBackgroundRemoval. Actually, the OpenCV processing in color-camera resolution does more harm than good. I'm going to replace it with shaders, soon.

      • I am also having this issue. It works fine on some computers but not others. I found that if I make the build x64 instead of x86 it works, although that breaks other plugins I am using, so I need it to work on x86. Any other ideas on what could be causing this? I tried to manually copy over the OpenCV dlls, but haven't been able to get it to work.

      • Yes, I had some luck 😉 This will be one of the features in the next release, but if you’re in a hurry, drop me an e-mail and I’ll send you something to try out.

  6. Hi Rumen,
    I am trying out the new 'late update' feature you have,
    to animate my avatar with Mecanim and Kinect together.

    The animation takes over from the Kinect (I can barely see any movement on the avatar when I move in front of the Kinect).

    Do you have any idea regarding this?

    Thanks in advance Rumen :).

    Firas.

  7. Hello Rumen, thanks again for this amazing plugin. I am able to get the confidence level for each hand. Is there a way I can get the tracking confidence for the other joints of the body as well?

    Thanks in advance. And a big round of applause for the new video you posted. We all here loved the way you presented it.

    Regards,
    Munish

    • Hello Munish, which video do you mean? To your question: there is no tracking confidence for the joints, but a tracking state instead (not-tracked, inferred or tracked), which you can get with the GetJointTrackingState()-public function of KinectManager.
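
      For illustration, a minimal sketch of reading a joint's tracking state this way (the joint choice is arbitrary, and the exact return type of GetJointTrackingState() is left to 'var' here, as it may be an enum or an int):

      KinectManager manager = KinectManager.Instance;
      if (manager && manager.IsInitialized() && manager.IsUserDetected())
      {
          long userId = manager.GetPrimaryUserID();
          int jointIndex = (int)KinectInterop.JointType.ElbowRight;
          // not-tracked, inferred or tracked, as described above
          var trackingState = manager.GetJointTrackingState(userId, jointIndex);
      }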

  8. Hello Rumen,
    first of all, thank you for your great package. I've been working with it for the last weeks and I am really astonished. I've been using the drag-and-drop function lately, to include it in my project.
    I want to let dragged objects fall on the ground. However, the gravity doesn't seem to work after an object has been dragged once. I already checked the code, but couldn't find any reason for this.
    Do you have an idea why this happens?

    Thanks in advance!

    Regards,
    Tim

    • Hello Tim, the objects in KinectInteractionDemo don’t fall, because the ‘Use Gravity’-settings of their Rigidbody-components are disabled. Maybe, just enabling the gravity once the object is dropped would do the job.

      • Hi Rumen, thanks for that quick reply. I have already enabled the 'Use Gravity' setting and it clearly works before the objects are dragged once. I also added a line to the DragDropScript in order to enable gravity once an object is released, but that won't work either. Maybe there is a code line which fixes the Vector3 position after an object is dragged once?

        Thanks for your help!

      • Hm, I just checked it and added a line too, here in GrabDropScript.cs:

        // restore the object's material and stop dragging the object
        draggedObject.GetComponent<Renderer>().material = draggedObjectMaterial;
        // here comes the new line – add gravity to the released object
        draggedObject.GetComponent<Rigidbody>().useGravity = true;

        and the released object started falling as expected. Please contact me by e-mail, if you need the updated script.

    • Hi, I suppose you mean the face-tracking animation units. If this is the case – yes, it does. Use the public functions of FacetrackingManager.Instance to get the ones you need.

    • U_Character_REF is the model asset. There is a scale factor there. But you can also scale the game objects (for instance U_CharacterBack or U_CharacterFront in the KinectAvatarsDemo scene). Is there a problem with scaling?

  9. Hey man, I have an issue with your V2.

    I used your V1 to make a small game with the DepthViewerDemo for 2 players:
    I was able to make a DepthImageViewer2 and then specify that it had to follow Player 2…

    But I'm unable to do this with your V2, since it uses the 'primary player' and I can't specify otherwise…

    HELP ME! I'M DESPERATE

    • Hi Max, no need to get desperate 🙂 Just add ‘public int playerIndex = 0;’ at the start of DepthImageViewer.cs-script and then, in Update(), replace ‘manager.GetPrimaryUserID()’ with ‘manager.GetUserIdByIndex(playerIndex)’. After that you can reuse the same script for more than one player (set playerIndex=0 for the first one, 1 for the second one and so on).
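
      In code, the suggested change would look roughly like this (a sketch based on the description above, not the full script):

      // at the top of DepthImageViewer.cs – the player to follow (0 = 1st, 1 = 2nd, etc.)
      public int playerIndex = 0;

      // in Update(), instead of manager.GetPrimaryUserID():
      long userId = manager.GetUserIdByIndex(playerIndex);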

  10. Hey Man,

    I’ve been working with your code and managed to make a fun game,
    I got another issue but I dunno how to solve it, maybe you can help me.

    With the DepthViewer I have everything set up.
    I've been working on a small game where you 'push' objects into a container to get points.

    But I'm trying to get the 'colliders' that are created within DepthImageViewer.cs
    to add force to falling objects…

    But I seem to be stuck at that. I've tried working with physics materials, but then the objects seem to just bounce off the colliders, which is not actually what I want to achieve.

    However, what does happen is that if the 'player' pushes one item into a second one, the second item just flies off screen. That's kinda what I want to have, but then with the colliders that are made on the hands of the player (HandLeft-collider and HandRight-collider).

    In short:
    Is there a way so that the colliders can push objects further away if they ‘ram’ into the falling objects?
    Maybe you’ve got some tips for me?

    You can see an image here:
    http://i.imgur.com/M9FG0AO.jpg

    • I'm not a big expert in physics but, as far as I remember, the impact force is a result of the mass and velocity of the colliding objects. As only the eggs have mass/rigid-bodies in this scene, first try to increase their mass in the EggPrefab. Then try to add rigid-bodies to the hand joints and set their mass accordingly. They shouldn't use gravity, and should maybe be kinematic objects. Experiment a bit and you will find it out.
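
      A rough sketch of that setup (the object lookup below is an assumption; tune the mass value experimentally):

      // give a hand-joint object a kinematic rigidbody, so its collider can push dynamic objects
      GameObject handObject = GameObject.Find("HandLeft-collider");  // collider object, named as above
      Rigidbody handBody = handObject.AddComponent<Rigidbody>();
      handBody.mass = 5f;            // experiment with this value
      handBody.useGravity = false;   // the hand follows the tracked joint, not gravity
      handBody.isKinematic = true;   // moved by the tracking, yet still pushes rigidbodies on contact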

  11. Hi, Rumen
    I want to make a game using your asset, where you can control a flying bird. In this game the Earth is a sphere that rotates at a certain speed. The bird is a game object that can move only up and down. To achieve that I use the AvatarController from your sample. I attached the bird to the left hand of the avatar. But there is a problem with the bird: it rotates. It would be great if you could help me to solve this problem.

    • Hi, why have you attached the bird to the hand, instead of the hips, shoulder center or neck, which don't change their orientations so much? If you prefer the hand anyway, you should add some kind of script to the bird that resets the bird's rotation in its Update(), like this: 'transform.rotation = Quaternion.identity;'. This will stop all the bird's rotations. You can experiment a bit with the rotation of the bird and avatar, to fine-tune it to your needs.
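
      As a minimal sketch, such a reset component could look like this (the class name is made up; LateUpdate() may be needed instead of Update(), if the avatar gets updated late):

      using UnityEngine;

      public class ResetRotation : MonoBehaviour
      {
          void Update()
          {
              // cancel whatever rotation the parent hand joint applied this frame
              transform.rotation = Quaternion.identity;
          }
      }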

      • Thanks very much!)) I spent 3 hours, but couldn't work out how to achieve that. Everything I tried brought me sad results. Your advice magically did what I wanted)))

  12. Hi Rumen! I want to congratulate you for your Kinect plug-in, it's awesome!
    I want to know if I can do a virtual 3D fitting-room app with your plug-in?
    And how does it work with a virtual shoe fitting room? Does it work fine with the feet?
    Thanks!

    • Yes, there is a FittingRoomDemo-scene, though I'm not sure about the feet; I have never tried that. If you contact me by e-mail, I'll send you the asset to try it out.

      • Thanks Rumen for your answer! I’ve seen that in some cases you have to use an extra Kinect to track the feet.
        For example, here they use 2 Kinects v1 only to track the foot: https://www.youtube.com/watch?v=uSn7c1uw1_A
        And here he uses 1 Kinect v2 only to track the foot: https://www.youtube.com/watch?v=mWUiO0tOlfA

        So, my question is: can I work with 2 Kinect’s v2 with your plugin, to do the virtual dressing room including a Virtual Shoe Fitting?
        Thanks!

      • You can’t use more than one Kinect v2 on a single computer. This is a requirement of Kinect SDK 2.0. You would need to transfer and sync the sensor data between computers. To be honest, I’m not sure what you try to do is directly feasible with my plugin.

      • OK Rumen, I understand what you said. Thanks again for your answer.
        I just want to have a virtual fitting room including the shoes.
        Do you think that I could use your Kinect v1 (for the feet) with your Kinect v2 plug-ins simultaneously?

        Thanks! 😀

      • This will require some script renaming, which is not impossible to do, but how do you plan to use them together? If one sensor looks at the feet only, it probably won't recognize the whole body and joints. Hence, you would need to do some ML-based analysis of the depth image, to recognize where exactly the feet are and how they look. It won't be as simple as putting two blocks together. If I were you, I would contact this guy, Tango Chen, to ask about his algorithm.

      • Good idea, I will contact this guy.
        As always, thanks for your advice Rumen!

  13. I bought the $20 asset in the Unity3D Asset Store,
    but I cannot use it properly.
    I need advice.

    I have a question.

    I use the Kinect-v2 version in Unity3D.

    I have been using the KinectManager, DepthImageView, KinectOverlayer and AvatarControllerClassic scripts from the downloaded package.

    I try to use the Kinect video and the avatar at the same time.

    The KinectOverlayer script draws the Kinect picture on a connected GUITexture object.

    The AvatarControllerClassic script has the avatar connected.

    I want the avatar to move so as to coincide with the movement of the people in the GUITexture video.

    But the positions of the people in the GUITexture video and the avatar do not match.

    What should I do?

    I await your answer.

    whitetear4@gmail.com

    • Well, maybe you better take a look at the FittingRoomDemo. It does what you try to achieve (maybe with a little modification). Just run the scene and look at the model and components it creates after the T-pose. Please contact me by e-mail, if you have questions about it.

      • Because I do not know your e-mail, I am leaving the question here.
        Please understand, I am not good at English.
        I opened the 'KinectFittingRoom' scene.
        I tried to use another model.
        It was an FBX file, exported from 3DMax.
        After importing the model, I changed these settings:

        Model – Scale Factor = 1 -> 0.01
        Rig – Animation Type = Generic -> Humanoid

        But the model size does not fit, or it does not move.
        Why is that?
        If I want to use a different model, is another condition required?

      • For example, although the size does not fit, the model keeps moving at the wrong scale at first.

        And if the ratio is not correct, an error occurs. (I guess the model does not fit the estimated T-pose, but I am not sure.)

        NullReferenceException: Object reference not set to an instance of an object
        AvatarController.MoveAvatar (Int64 UserID) (at Assets/KinectScripts/AvatarController.cs:300)
        AvatarController.UpdateAvatar (Int64 UserID) (at Assets/KinectScripts/AvatarController.cs:134)
        KinectManager.Update () (at Assets/KinectScripts/KinectManager.cs:1602)

        KinectManager.cs:1602
        Update()
        {
            if (!lateUpdateAvatars)
            {
                foreach (AvatarController controller in avatarControllers)
                {
                    int userIndex = controller ? controller.playerIndex : -1;

                    if ((userIndex >= 0) && (userIndex < alUserIds.Count))
                    {
                        Int64 userId = alUserIds[userIndex];
                        controller.UpdateAvatar(userId); // line 1602
                    }
                }
            }
        }

        AvatarController.cs:134
        public void UpdateAvatar(Int64 UserID)
        {
            MoveAvatar(UserID); // line 134
        }

        AvatarController.cs:300
        protected void MoveAvatar(Int64 UserID)
        {
            // line 300, where the exception occurs
            Vector3 hipCenterPos = bodyRoot != null ? bodyRoot.position : bones[0].position;
        }

      • The model doesn't move because of this exception. Find out why the avatar doesn't have valid Hips. Go again to the Rig-tab of the asset and check if the avatar is correctly rigged. Click on Configure, if needed, and check if all mandatory bones are assigned to the model's joints. If not, assign them manually. With the fitting-room models it is common that not all joints are used during the model skinning, which causes errors at rig time.

      • I'll soon add a tip to the 'Tips and tricks'-page, about how to test your own model in the FittingRoomDemo.

  14. Hi Rumen
    I am trying to do this project and I need some help or some guidance!
    I am making these fire wings. The wings are a child of a GameObject that is attached to the neck.
    How can I give the wings a movement based on the shoulder or elbow, with some offset from the center of the plane?
    Here is the test video:

    • Hi Adrian, it already looks cool 🙂 Why don't you parent the wings to the left/right shoulders, instead of to the neck? Otherwise you can also use some vector math to calculate the positions of the wings, but that will be a little bit more complex.

  15. Hi, Rumen
    It's me again, with a question about the flying-bird game.
    I want to make a game using your asset, where you can control a flying bird. In this game the Earth is a sphere that rotates at a certain speed. The bird is a game object that can move only up and down. To achieve that I use the AvatarController from your sample. I attached the bird to the left hand of the avatar. It works well.
    Now I want to make it a trainer for the hands))) Is it possible to make it so that less movement of the hand creates more movement of the bird and, conversely, more movement of the hand creates less movement of the bird? I want to get a certain coefficient to control this…

      • For example, in the first level of the game, moving the hand by each 0.1 meter moves the bird's position.y by 10.0f. In the second level, moving the hand by each 0.1 meter moves the bird's position.y by 5.0f, etc…

      • Well, you can get the hand position in meters at any time. For instance, you can check: (pos2.y – pos1.y) > 0.1. If you need the distance between the hand positions in different levels, use '(pos2 – pos1).magnitude' instead.
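
        For illustration, a sketch of reading the hand position and comparing two readings (this assumes the KinectManager's GetJointPosition()-function, which returns joint positions in meters; the variable names are made up):

        KinectManager manager = KinectManager.Instance;
        long userId = manager.GetPrimaryUserID();
        int handIndex = (int)KinectInterop.JointType.HandRight;

        Vector3 pos1 = manager.GetJointPosition(userId, handIndex);  // earlier reading
        // … later, e.g. in a following frame:
        Vector3 pos2 = manager.GetJointPosition(userId, handIndex);

        if ((pos2.y - pos1.y) > 0.1f) { /* the hand moved up more than 10 cm */ }
        float distance = (pos2 - pos1).magnitude;  // full 3D distance between the two readings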

  16. Thank you) Maybe it's a noob question, but how can I get the hand position in meters?

  17. Hi Rumen,

    I just have a short question regarding the KinectGestures script: can the floats in the CheckForGesture method be interpreted as metres? For instance, in the code

    (jointsPos[rightHandIndex].y - jointsPos[leftShoulderIndex].y) > 0.1f

    does it mean that the difference in the y-direction should be more than 0.1 metres?
    Would it not have the consequence that the gestures are different for people with different heights, like children and adults?

    Thanks in advance!

    Greetings,
    Mihawk

    • Hi Mihawk,

      Good question! Yes, all distances in KinectGestures.cs are in meters. In this regard, 0.1 means 0.1m = 10cm. Of course, if the required distance is too big, it could be more difficult for children to reach it. I've tried to keep the required distances small enough (10-15cm), which is within the reach of children too. If there is a problem with them anyway, you can modify the constants in the script to your needs, or make them dependent on the user's height. It is all a matter of testing and fine-tuning.

      • Thanks a lot! I will consider that in my project.

        However, I have one more question. I examined your InteractionManager, but I do not quite get how the closing and opening movements of the hand are detected. As far as I know, the Kinect v2 has only 2 joints for the hand, and those are not used in the script. So how is the hand detection technically possible?

        Greetings,
        Mihawk

      • Opened and closed hands are detected directly by the Kinect SDK and returned as hand states. Something similar was done by KinectInteraction in the Kinect v1 SDK, hence the name 'interaction manager'.

  18. Hi Rumen,

    First, I want to thank you for your great job!

    I'm currently developing my graduation project with Unity and Kinect v2.

    As I have to reproduce a kind of X-ray wall, I would like to control multiple avatars (because we should be able to see multiple skeletons through the screen). Is this possible, starting with your code?

    In addition, I have to consider the observer's eye position for the projection (custom asymmetric frustum). I've already calibrated my system without the avateering and your asset, and I was assuming the sensor to be at the origin of the scene. Is there any trick to put the avatar at the Kinect position, considering the Kinect sensor at the origin?

    Sorry for my bad English.

    Greetings,
    Kilian

    • Hi Kilian, to your questions: 1. Yes, the playerIndex-setting of AvatarController determines the tracked player (0 – 1st one, 1 – 2nd one, etc.); 2. There is another setting of AvatarController, called ‘PosRelativeToCamera’, which should be set to the camera representing the sensor. I suppose you should also set this camera at the origin in the scene as well, i.e. at (0,0,0).

      • Yes, I saw this parameter called PosRelativeToCamera and tried it, but the problem is that my camera represents the user's eyes and cannot be at (0,0,0). In fact, the camera should look through a plane object that represents a screen in the real world. Can I trick it with a second camera object (just for the relative position), or is there any other proper solution?

        Anyway, thank you for the multiple avatars; I see now how to deal with them.

        Is there any explanation on how to set up a custom 3D model as the avatar (it's all new for me)?

        Thank you very much for your quick response and your help !

        Best regards

      • Once again thank you very much !

        My setup is quite difficult to explain, but I found a workaround yesterday. I didn't even see you were talking about how to set up a custom avatar on your website; excuse me.

        Have a nice day.
        Kilian

  19. Hi Rumen,

    Do you know if there is a good way to have the background remover show only the person closest to the Kinect? I was looking through the KinectManager class and found two variables that looked like they could help: maxUserDistance and maxTrackedUsers. They seemed to have no effect on the background remover. Are they just for tracking skeletons?

    Thanks

    • Yes, you're right. To do what you need, please do as follows: 1. Use the GetBodyIndexByUserId()-function of KinectManager to get the body index of the closest user (this is usually the primary user); 2. Modify a bit the PollDepthFrame()-function in KinectInterop.cs, to keep only those elements of bodyIndexImage that match the body index of the required user. All the rest should be set to 255. The required user index may be passed as a parameter to PollDepthFrame().
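
      A sketch of such a filter inside PollDepthFrame() (the variable and parameter names here are assumptions about the actual code):

      // keep only the pixels belonging to the wanted body index; 255 means 'no user'
      for (int i = 0; i < sensorData.bodyIndexImage.Length; i++)
      {
          if (sensorData.bodyIndexImage[i] != wantedBodyIndex)
              sensorData.bodyIndexImage[i] = 255;
      }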

  20. What do I have to do if I have Kinect v1? How do I specify the type of joint that I want?

    • If you have Kinect v1 but use it with the K2-asset, use (int)KinectInterop.JointType.xxx. If you use the K1-asset, use (int)KinectWrapper.NuiSkeletonPositionIndex.xxx instead.

  21. Hi Rumen,

    First, please understand that my English comes from a translator, so it may be lacking.
    I have been using the Kinect v2 with MS-SDK.
    While working on a report about the FittingRoomDemo, I have one question.
    In the FittingRoomDemo you can see that the clothes appear after the T-pose.
    What I want is to select the clothes by pushing objects with the hand, after the initial T-pose.
    But the clothes seem to appear only after measuring the T-pose again, after each push.
    How can I show the clothes without the T-pose measurement after a push?
    Is there a way to do the T-pose only once, the first time, and not require it on every selection?

    • Well, I understand English is not your language, but you could at least edit the message to make it more understandable, after using the online translator. If you want to skip or change the T-pose at the start, change the 'Player Calibration Pose'-setting of the KinectManager, which is a component of the MainCamera in the scene. Keep in mind that the AvatarScaler-component prefers this pose, to measure the lengths of the user's bones correctly.

    • If you mean continuous VGB gestures – then yes. Add your code to the GestureInProgress()-function of the listener. The continuous gestures report only progress.

      • But is it possible to use only a *.vgb file with progressive gestures, without writing code, the way you use seated.gbd in the GestureDemo's VisualGestures game object?

      • The SimpleVisualGestureListener in the GesturesDemo should tell you about the progress of your continuous gesture, without writing any additional code. I meant: if you want to process the gesture by yourself, you need to add the processing code to the GestureInProgress()-function of the listener.

  22. Rumen, it got solved when I updated to version 5.1.1… 🙂
    But I got another problem: the BackgroundRemovalDemo is not working.

    When I stand in front of the Kinect, the top-left text ('waiting for users') becomes invisible,
    which means the Kinect detects the user.

    But I couldn't see anything except the background image. The DepthColliderDemo as well.
    The other demos, AvatarsDemo and FaceTrackingDemo, are working great.

    So is there any option or something that I have to check?

    • Yes, I know. Unity 5.1 introduced a new issue, which causes some of the shaders used in the K2-asset to stop working. They worked fine in 5.0 and 5.0.1, but not any more. I'm researching this issue and possible workarounds, but it is a bit weird and will take a while. In the meantime, a workaround would be to open KinectScripts/KinectInterop.cs, find the IsDirectX11Available()-function, comment out the current return-line and add 'return false;' below it. Then save the script and return to the Unity editor. After that, the K2-asset will use the OpenCV-functions instead of shaders to produce the needed textures. Of course, as this is CPU-intensive work, you can expect a substantial decrease in performance, but at least all scenes will work again.
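
      Sketched, the workaround looks like this (the commented-out return-line is only an assumption about the original check):

      public static bool IsDirectX11Available()
      {
          // original check, commented out – something like:
          // return (SystemInfo.graphicsShaderLevel >= 50);
          return false;  // force the OpenCV code path instead of the shaders
      }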

      • Thank you for quick reply…and I’m looking forward to working again soon…:-)

  23. Hello,
    Great library!

    I'd like to compute and display the skeleton on the new UI in Unity.

    I have set the ComputeUserMap, DisplaySkeletonLines and DisplayUserMap variables to true, but I don't see any image drawn. Do you have an example for this?

    Thanks

    • Hi there,
      These are two different questions:
      1. To display the skeletons-image wherever you want, you need to enable only ‘Compute User Map’ (and optionally DisplaySkeletonLines). Then use the GetUsersLblTex()-function of KinectManager, to get the texture and apply it anywhere. See the tip on ‘Tips and tricks’-page.
      2. If you don't see the user silhouettes on the screen, make sure you're not using Unity 5.1.0 or 5.1.1, which cause a shader issue. See the 'Known Issues' section above.
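
      A minimal sketch of point 1, drawing the texture with the legacy GUI (the placement rectangle is arbitrary):

      void OnGUI()
      {
          KinectManager manager = KinectManager.Instance;
          if (manager && manager.IsInitialized())
          {
              Texture usersTex = manager.GetUsersLblTex();
              if (usersTex != null)
                  GUI.DrawTexture(new Rect(0, 0, Screen.width / 2, Screen.height / 2), usersTex);
          }
      }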

  24. Hello Rumen
    My name is Colin. I am a student from Taiwan,
    and I am now using Kinect v2 for my graduation project.
    You say that if we want the Kinect v2 with MS-SDK, we should contact you by e-mail,
    but I can't find your e-mail on this website.
    If you can, please contact me.
    Thank you!

  25. Hello, I'm currently trying to track how many bodies are on screen and assign each an avatar. But when there is no one standing in front of the Kinect, it seems to read 6 in sensorData.BodyCount. Is there a reason for this?

    • KinectManager.Instance.GetUsersCount() is the function you seek. sensorData.BodyCount is the maximum number of tracked bodies, used internally for array initialization, etc.

  26. Hi Rumen!

    I see that you have updated all your Kinect Unity assets! That's amazing! Thanks for your continuous work!

    I want to develop a kinect game that simulates the user running, similar to this video:

    https://www.youtube.com/watch?v=DgfC4dpa-qA

    Do you know how I can manage the speed of the camera (in the game), depending on the speed of the real running user?

    Best regards!

    Cris

    • Hi Chris, I suppose you would need to measure the speed of running, i.e. how often the user moves his legs (or arms). This is just a suggestion. By the way, Kinect Sports 1 is a great game. My kids love it. Me too.

      • OK Rumen, I understand.
        Could you please give me some guide to measure how often the user moves his legs please? 😀
        Do you recommend me a scene/script of your plug-in to begin the develop?
        Thanks! 😀 😀 😀

      • Look in the mirror when you run. I would look at the Y-positions of the left and right knees. If, let's say, the left knee is higher than the right knee (i.e. Ylk > Yrk), and within 1-2 seconds the right knee gets above the left (Yrk > Ylk), and then the left gets above the right again within the time set, and so on, this means the user is running. If you divide the number of these knee detections by the period of time in seconds, you will get the speed you need. As an example, see the implementation of the KickLeft/KickRight gestures in the KinectGestures.cs-script, although the Z-coordinate is important there. Hope this info helps.
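
        A rough sketch of the counting idea (all names and the 2-second window are illustrative assumptions):

        float windowStart;         // set to Time.time in Start()
        int swapCount = 0;
        bool leftWasHigher = false;

        void CheckRunning(KinectManager manager, long userId)
        {
            float yLeft = manager.GetJointPosition(userId, (int)KinectInterop.JointType.KneeLeft).y;
            float yRight = manager.GetJointPosition(userId, (int)KinectInterop.JointType.KneeRight).y;

            bool leftIsHigher = yLeft > yRight;
            if (leftIsHigher != leftWasHigher)  // the knees swapped since the last check
            {
                swapCount++;
                leftWasHigher = leftIsHigher;
            }

            float elapsed = Time.time - windowStart;
            if (elapsed >= 2f)  // evaluate every ~2 seconds
            {
                float runSpeed = swapCount / elapsed;  // knee swaps per second
                swapCount = 0;
                windowStart = Time.time;
            }
        }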

      • Good idea Rumen!! As always, thanks for your good tips and tricks! 😀 😀 😀
        I will program the condition watching the y-coordinates of the knees!

        You’re a master!!

        Best regards!

        Cris

  27. Hi Rumen, I'm trying to update an old project from last year, and am having trouble figuring out how to get access to the coordinate mapper in the latest version of your plugin…

    In the old version (works great):

    cameraSpacePoints = new CameraSpacePoint[sensorData.depthImageWidth * sensorData.depthImageHeight];

    sensorData.coordMapper.MapDepthFrameToCameraSpace(sensorData.depthImage, cameraSpacePoints);

    But now sensorData no longer contains coordMapper, and I’ve been digging through most of the scripts and cannot find it..

    Any insight? Thanks!

    • Hi, the CoordMapper is SDK-2.0-specific; that's why I moved it to the Kinect2Interface-class a long time ago. You can access it like this, I think: ((Kinect2Interface)sensorData.sensorInterface).coordMapper. There are also mapping-functions in the KinectManager.
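
      Combined with the older snippet above, the updated code would then look roughly like this:

      CameraSpacePoint[] cameraSpacePoints = new CameraSpacePoint[sensorData.depthImageWidth * sensorData.depthImageHeight];

      // the coordinate mapper now lives in the SDK-2.0-specific interface class
      Kinect2Interface k2Interface = (Kinect2Interface)sensorData.sensorInterface;
      k2Interface.coordMapper.MapDepthFrameToCameraSpace(sensorData.depthImage, cameraSpacePoints);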

  28. Hi Rumen, this Kinect Unity asset is amazing!

    I'm looking for a speech language-code list (Spanish Spain and Spanish MX).

    Greetings

      • Onsterion: Hi Rumen, I have installed the packages (Kinect for Windows speech recognition language packs) en-US, es-ES and es-MX. I created the xml 'SpeechGrammarSpanish.grxml' in Resources. With the English code 1033 it works fine, but if I use the code 1034 (es-ES) or 2058 (es-MX), I get the error: "Error loading grammar file SpeechGrammarSpanish.grxml: An attempt to load a CFG grammar with a LANGID different than other loaded grammars."

        Grammar File:

        Hola

        hola

        Greetings

        P.S.: Please delete the duplicate post below, sorry.

      • Hi again, I suppose you changed the grammar file name-setting of the SpeechManager-component too. Please contact me by e-mail and send me this grammar file, so I could check what exactly is not OK.

  29. Hi Rumen, I've found that there is a problem with the new update 2.6: the same projects using version 2.5 run at ~60fps, but with the updated asset 2.6 they run at ~12fps.
    This only occurs when using the Kinect1Interface, even when there is only a game object with the KinectManager in the scene, and only if 'Compute Color Map' AND 'Compute User Map' are activated.
    I've profiled the scenes, and it seems that the method MapDepthFrameToColorCoords() from the Kinect1Interface eats up the processor. I've compared this method in v2.5 and v2.6 and found this:

    v2.5: the method always returns false
    v2.6: the method does what seems to be a heavy operation, with nested for-loops over depthImage.

    It seems that commenting out that bunch of code and simply returning false fixes the problem, but I don't know if it creates another problem or incompatibility.

    Thanks!

    • Hi, as far as I remember, this bunch of code has one purpose – to support mixing of the color and depth images, when both compute-color-map and compute-user-map are enabled. If you don't need this feature, there is no problem in commenting the code out. Well done! Maybe I should also think about commenting it out altogether, or avoiding it in most cases.

      • Hi, thanks for answering. So I will comment it out, and if a problem arises, I'll keep that workaround in mind. I'm still wondering what this code improves, or why it suddenly appeared between v2.5 and v2.6, as it seems that the results in the background-removal demos are the same in both versions, with no visible improvements. If I find something interesting, I will let you know.

        Cheers!

      • It is not part of the background removal. Enable the DisplayUserMap-setting of the KinectManager and you will see what I mean, with and without this code. It came to life between 2.5 and 2.6, to fix an issue in generating the user-map texture when the custom shaders are used in Direct3D-11 mode.

  30. Hi Rumen,
    Hope you are doing well.
    In the background removal, why are the shoes removed from the cut-out?
    Any idea on this?

  31. Onsterion, could you load the Spanish language? I had the same problem… "Error loading grammar file SpeechGrammarSpanish.grxml: An attempt to load a CFG grammar with a LANGID different than other loaded grammars."

    • Hi Miguel,

      Yep, you need to put your "SpeechGrammarSpanish.grxml" in the root folder of your project, i.e. the folder containing the Assets folder.

      • There are 3 things to watch (I'll add a tip on that soon): 1. The 'Language code'-setting of the SpeechManager has to be 1034 (Spanish); 2. The xml:lang attribute in the grammar file needs to be 'es-ES' or 'es'; 3. If you modify the grxml-file in Resources, you need to delete the copied grxml-file in the root folder (the parent folder of the Assets-folder, not visible in the Unity editor), before you run the scene.

      • Thank you Onsterion and Rumen F., the problem was that I had only edited the *.grxml in Resources and had not removed the other file in the root folder.

  32. “This package is free for educational purposes (i.e. for schools, universities, students and teachers). If you match this criterion, please contact me by e-mail to get the Kinect-v2 asset directly from me.”
    Hi, I am a university student. At the moment I'm new to the Unity software, and I need to create a position database for a game with Kinect v2.
    The idea consists of:
    1. Create a position database of '3 or 4 positions' (only the first time).
    To start the game:
    2. Select the position to recognize.
    3. Recognize the position of a person and rate it (compared with the perfect position in the database).
    4. Generate a result.
    I need help with it.
    Thanks 🙂

    • First make sure that the 'Compute color map'-setting of the KinectManager-component is enabled. Then, in your script, do something like this (File here is System.IO.File):

      Texture2D texColor = KinectManager.Instance.GetUsersClrTex();
      byte[] btTexEncoded = texColor.EncodeToPNG();
      File.WriteAllBytes("my_photo.png", btTexEncoded);

  33. I'm new to rigging and 3D animation, but I'm attempting to control a rig with your AvatarController, which works when I place the object in the scene originally. But I'm trying to have it spawn when a user enters, via a character manager, and it doesn't seem to be registering. I can only assume it's not making a connection with the KinectManager? What calls the controller? What does it need access to, to connect properly to the KinectManager?

    • See the LoadModel()-function of the FittingRoomDemo/Scripts/ModelSelector.cs-script, for how the AvatarController-component of the instantiated model is initialized, calibrated and added to the KinectManager's list. Maybe only the posRelativeToCamera-setting does not apply to your case. Hope this information helps, and wish you a nice weekend. Same to me 😉

  34. Hi Rumen!

    Thanks for your update in the Kinect v2 asset! As always, we appreciate your work!
    I want to ask you if you know how to implement gender and age recognition using the Kinect?
    Do you recommend an SDK for doing this (without using an Internet API)?

    Thanks!

    Best regards,

    Cris

    • Hi Chris, I have no idea how to do gender or age recognition. As far as I can tell, this would require some image analysis, plus a machine-learning tool on top of that. Maybe some sort of image classifier, like the ones for detecting faces. Look at the OpenCV classifiers. More than this I cannot tell at the moment.

      • Hi Rumen!
        Thanks for your quick answer! I understand what you said. I will search for gender and age recognition in the field of image analysis, instead of Kinect.
        A last question about this: how can I get the image (texture or png) of only the face of someone recognized by the face-tracking of Kinect? Then I could pass this image to another script that recognizes the gender and age…

        Thanks!

        Best regards!

        Cris

      • I can't check it right now, but I think the last face-tracking demo scene shows how to cut out the face rectangle only. I'm not really sure you need only the face, though. Maybe the cut-out of the whole person (or the respective depth image) would be better, to train and/or test your gender/age classifiers.

      • OK Rumen, thanks for your quick answer. As always I appreciate your help.

        Best regards,

        Cris.
