Kinect v2 Examples with MS-SDK

‘Kinect v2 Examples with MS-SDK’ is a set of Kinect-v2 (aka ‘Kinect for Xbox One’) examples built around several major scripts, grouped in one folder. The package contains over thirty demo scenes.

Please also look at the Azure Kinect Examples for Unity asset. It works with Azure Kinect sensors (aka Kinect-for-Azure, K4A), as well as with Kinect-v2 sensors (aka Kinect-for-Xbox-One).

The avatar demos show how to utilize Kinect-controlled avatars in your scenes, the gesture demos show how to use programmatic or visual gestures, the fitting-room demos show how to create your own dressing room, the overlay demos show how to overlay body parts with virtual objects, and so on. You can find short descriptions of all demo scenes in the K2-asset online documentation.

This package works with Kinect-v2 sensors (aka Kinect for Xbox One) and Kinect-v1 sensors (aka Kinect for Xbox 360). It can be used with all versions of Unity – Free, Plus & Pro.

Free for education:
The package is free for academic use. If you are a student, lecturer or academic researcher, please e-mail me to get the K2-asset directly from me.

One request:
Please don’t share this package or its demo scenes in source form with others, or as part of public repositories, without my explicit consent.

Customer support:
* First, please check if you can find the answer you’re looking for on the ‘Tips, tricks and examples’ page, as well as on the K2-asset online documentation page. See also the comments below the articles here, or in the Unity forums. If the answer is not there, you may contact me, but please note that I don’t provide support on weekends or holidays.
* If you e-mail me, please include your invoice number. You can find more information regarding e-mail support here.
* Please note that you can always upgrade your K2-asset free of charge from the Unity Asset store.

How to run the demo scenes:
1. (Kinect-v2) Download and install Kinect for Windows SDK v2.0. The download link is below.
2. (Kinect-v2) If you want to use Kinect speech recognition, download and install the Speech Platform Runtime, as well as EN-US (and other needed) language packs. The download links are below.
3. (Nuitrack-Deprecated) If you want to work with Nuitrack body tracking SDK, please look at this tip.
4. Import this package into a new Unity project.
5. Open ‘File / Build settings’ and switch to ‘PC, Mac & Linux Standalone’. The target platform should be ‘Windows’ and the architecture ‘Intel 64-bit’.
6. Make sure that ‘Direct3D11’ is the first option in the ‘Auto Graphics API for Windows’ list, in ‘Player Settings / Other Settings / Rendering’.
7. Open and run a demo scene of your choice from a subfolder of the ‘K2Examples/KinectDemos’ folder. Short descriptions of all demo scenes are available here.

* Kinect for Windows SDK v2.0 (Windows-only) can be found here.
* MS Speech Platform Runtime v11 can be downloaded here. Please install both x86 and x64 versions, to be on the safe side.
* Kinect for Windows SDK 2.0 language packs can be downloaded here. The language codes are listed here.

Documentation:
* The online documentation of the K2-asset is available here and as a pdf-file, as well.

Downloads:
* The official release of the ‘Kinect v2 with MS-SDK’ is available at Unity Asset Store. All updates are free of charge.

Troubleshooting:
* If you get errors like “‘Texture2D’ does not contain a definition for ‘LoadImage’” or “‘Texture2D’ does not contain a definition for ‘EncodeToJPG’”, please open the Package Manager, select ‘Built-in packages’ and enable the ‘Image conversion’ and ‘Physics 2D’ packages.
* If you get compilation errors like “Type `System.IO.FileInfo’ does not contain a definition for `Length’”, you need to set the build platform to ‘Windows standalone’. For more information, look at this tip.
* If the Unity editor crashes, when you start demo-scenes with face-tracking components, please look at this workaround tip.
* If the demo scene reports errors or remains in ‘Waiting for users’-state, make sure you have installed Kinect SDK 2.0, the other needed components, and check if the sensor is connected.

* Here is a link to the project’s Unity forum: http://forum.unity3d.com/threads/kinect-v2-with-ms-sdk.260106/
* Many Kinect-related tips, tricks and examples are available here.
* The official online documentation of the K2-asset is available here.

Known Issues:
* If you get compilation errors like “Type `System.IO.FileInfo’ does not contain a definition for `Length’”, open the project’s Build Settings (menu ‘File / Build Settings’) and make sure that ‘PC, Mac & Linux Standalone’ is selected as ‘Platform’ on the left. On the right side, ‘Windows’ must be the ‘Target platform’, and ‘x86’ or ‘x86_64’ the ‘Architecture’. If Windows Standalone was not the default platform, make these changes and then click the ‘Switch Platform’ button, to set the new build platform. If the Windows Standalone platform is not installed at all, run the UnityDownloadAssistant again, then select and install the ‘Windows Build Support’ component.
* If you experience Unity crashes when you start the avatar demos, face-tracking demos or fitting-room demos, this is probably due to a known bug in the Kinect face-tracking subsystem. In this case, first try to update the NVidia drivers on your machine to their latest version from the NVidia website. If this doesn’t help and the Unity editor still crashes, please disable or remove the FacetrackingManager-component of the KinectController-game object. This provides a quick workaround for many demo scenes. The face-tracking component and demos will still not work, of course.
* Unity 5.1.0 and 5.1.1 introduced an issue that causes some of the shaders in the K2-asset to stop working (they worked fine in 5.0.0 and 5.0.1). The issue is most visible if you run the background-removal demo: in Unity 5.1.0 or 5.1.1 the scene doesn’t show any users over the background image. The workaround is to update to Unity 5.1.2 or later, where the shader issue was fixed.
* If you update an existing project to K2-asset v2.18 or later, you may get various syntax errors in the console, like this one: “error CS1502: The best overloaded method match for `Microsoft.Kinect.VisualGestureBuilder.VisualGestureBuilderFrameSource.Create(Windows.Kinect.KinectSensor, ulong)’ has some invalid arguments”. This may be caused by the ‘Standard Assets’-folder having moved from the Assets-folder to the Assets/K2Examples-folder, due to the latest Asset store requirements. The workaround is to delete the ‘Assets/Standard Assets’-folder. Be careful though: the ‘Assets/Standard Assets’-folder may contain scripts and files from other imported packages, too. In this case, check what files and folders the ‘K2Examples/Standard Assets’-folder contains, and delete only those files and folders from ‘Assets/Standard Assets’, to prevent duplications.
* If you want to release a Windows 32 build of your project, please download this library (for Kinect-v2) and/or this library (for Kinect-v1), and put them into K2Examples/Resources-folder of your project. These 32-bit libraries are stripped out of the latest releases of K2-asset, in order to reduce its package size.

What’s New in Version 2.21:
1. Added ‘Fixed step indices’ to the user detection orders, to allow user detection in adjacent areas.
2. Added ‘Central position’-setting to the KinectManager, to allow user detection by distance, according to the given central position.
3. Added ‘Users face backwards’-setting to the KM, to ease the backward-facing setups.
4. Updated Nuitrack-sensor interface, to support the newer Nuitrack SDKs (thanks to Renjith P K).
5. Cosmetic changes in several scenes and components.

Videos worth more than 1000 words:
Here is a video by Ricardo Salazar, created with Unity5, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.3:

…a video by Ranek Runthal, created with Unity4, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.3:

…and a video by Brandon Tay, created with Unity4, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.0:

806 thoughts on “Kinect v2 Examples with MS-SDK”

  1. Thank you for the great asset. How can I put 3D objects in front of the background-removal image, so that the user can “hide” behind an object?

  2. Hi Rumen, is it possible to use the Playmaker actions with Kinect v2? I have a complex project for Kinect v1 that uses this asset, and I need to update it to run with Kinect v2. Thanks

    • Hi, there are some example PM actions in the K2-asset. You can see them in PlaymakerKinectActions.zip, located in the KinectScripts/Playmaker-folder. They may be unzipped when the Playmaker-package is also imported. More actions could be added when needed; just use the current ones as examples. Please tell me if you need more information.

    • Send me an e-mail from your university e-mail address, to prove you’re eligible for a free copy. See the About-section above for my contact info.

    • Not sure, but I suppose you need to map the Kinect color camera image (1920 x 1080) to your quality camera image. By the way, do you mean 1080p is poor quality?

  3. Hi,
    can you please tell me how to control the cursor using WristLeft or WristRight, instead of HandLeft and HandRight.

  4. Hello
    Your package has been a blessing. Kinect is not easy; thanks for your work.
    I just want to share some of my creations with your package:

    Mixed with Leap Motion + cardboard for full body tracking
    https://www.youtube.com/watch?v=XTtS5_yzkY4

    3D hair style:
    https://www.youtube.com/watch?v=xX2rxBKTu1M

    Aura:
    https://www.youtube.com/watch?v=3XLrrP8Sh4Q

    Overlays:
    https://www.youtube.com/watch?v=lOm1-iWJiYI

    8bit avatar:
    https://www.youtube.com/watch?v=aUt1fOoVaOI

    Attraction Aura:

    • Thank you, Dan! There is no limit for the creative minds 🙂 Would you mind, if I share some of your videos on Twitter or on this blog’s pages?

    • Thanks to Rumen, Dan

      It is pretty amazing to see your work, guys. @Dan, as a newbie I am so curious to see the combined action of the Leap Motion with the Kinect. It would be great if you could share the details of how you did that.

      • I cannot respond for Dan, but it’s not so complicated. You need to enable the ‘External hand rotations’-setting of the AvatarController, to allow the model’s hands to be controlled by the LeapMotion-sensor. You can find a simple implementation of this in the ‘Kinect Mocap Animator’-asset, if you have it. Here is more information about the LM-integration: https://ratemt.com/k2mocap/LeapMotionFingerTracking.html

  5. Hello Rumen, I just purchased the ‘Kinect v2 with MS-SDK’ examples from the Asset store, but I have some questions about KinectAvatarDemo2.unity in the AvatarsDemo-folder. When I run the scene, it always reports a runtime error. I don’t know how to deal with that; I run the example in Unity 5.3.0f4. Besides, I want to use the Kinect v2 examples with an avatar character on the web player – how can I do that? I found there is a KinectDataServer scene in the KinectDataServer-folder; how does it work?

    • Hi Leon, sorry for the delayed response. I don’t work at weekends and holidays. To your questions:

      The K2-asset can work in Windows standalone mode only, because it calls the Kinect SDK 2.0 API. Please open ‘File / Build Settings’ and make sure ‘PC, Mac & Linux Standalone’ is selected as ‘Platform’ on the left and ‘Windows’ as target on the right.

      The KinectDataServer is the server-app for ‘Kinect v2 VR Examples’-asset (K2VR for short). K2VR is a set of lightweight Kinect-scenes, which get their data over the network instead. It is aimed to work on mobile platforms, such as Android and iOS, and VR-platforms, such as Oculus, GearVR or Vive. If you would like to try it out, please e-mail me your invoice number from Unity asset store and I’ll send it to you. I’ve not tested it on the WebGL-platform so far (i.e. issues are possible), but I’ll try to do it this week. I’m also not sure from security point of view, if a web app should have rights to get private information, such as the movement of human bodies in the room.

  6. Hi Rumen, first of all many thanks for your amazing plugin, it’s working really well!
    For my game I’d like to use two hand cursors (one for each hand), which can be moved separately and are fully functional on their own. My current approach is to duplicate the InteractionManager and restrict each one to a specific hand. This way I managed to get two cursors, but they’re both not really functional and quite buggy. Do you have an idea for a better approach, without the use of two separate scripts?

    And what is it about the “lasso” state for the hand cursor? I’ve never seen it in use, and the “Normal Hand Texture” is also never used.

    • Hi Jonas, the InteractionManager tracks both hands all the time. The code you need to modify is near the end of its OnGUI()-method and instead of displaying the cursor on cursorScreenPos, use leftHandScreenPos & rightHandScreenPos to display 2 cursor textures.

      Regarding the Lasso-state, it is currently considered as closed hand too, but you are free to modify this in HandStateToEvent()-function of IM. The normal-cursor texture is used only when the user or his hands are not tracked, for instance before the user ever gets detected or gets lost.
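
      To illustrate the idea, here is a minimal, self-contained sketch of drawing one cursor per hand in OnGUI(). The class and field names are hypothetical stand-ins; in the real InteractionManager you would reuse its own leftHandScreenPos/rightHandScreenPos values and hand-state textures:

```csharp
using UnityEngine;

// Minimal two-cursor sketch, mirroring the change suggested above: instead of
// one cursor at cursorScreenPos, draw one texture per hand. All names here are
// hypothetical stand-ins for the respective InteractionManager fields.
public class TwoHandCursors : MonoBehaviour
{
    public Texture leftHandTexture;     // texture for the left-hand cursor
    public Texture rightHandTexture;    // texture for the right-hand cursor
    public float cursorSize = 64f;      // cursor size in pixels

    // normalized (0..1) screen positions, updated elsewhere from the Kinect data
    public Vector2 leftHandScreenPos;
    public Vector2 rightHandScreenPos;

    void OnGUI()
    {
        DrawCursor(leftHandScreenPos, leftHandTexture);
        DrawCursor(rightHandScreenPos, rightHandTexture);
    }

    private void DrawCursor(Vector2 normPos, Texture tex)
    {
        if (tex == null)
            return;

        // convert the normalized position to GUI coordinates (y grows downwards)
        Rect rect = new Rect(normPos.x * Screen.width - cursorSize / 2f,
                             (1f - normPos.y) * Screen.height - cursorSize / 2f,
                             cursorSize, cursorSize);
        GUI.DrawTexture(rect, tex);
    }
}
```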

      • Hi Rumen, thanks for your reply! I’ve now managed to properly display a hand cursor for each hand with the method you told me. But now I’m struggling with the primaries for the hands. Only one hand at a time can be primary right now. That’s bad, because I want to use both hands for grabbing/activating stuff. Any tips on how I can set both hands to be primary all the time?

        Thank you in advance!

      • I am using InteractionDemo1 as a testing environment. In the InteractionManager-script I tried to set both the isLeftHandPrimary and isRightHandPrimary bools to true, but that didn’t work. After that I tried to modify any code related to the primary bools, but it still accepts only one hand as primary. On the infoGuiText you can see the primaries switching (the * after L.Hand/R.Hand changes to the other hand, if I hide one hand from the Kinect). Also, the cursor of the “non-primary” hand is not changing its sprite to the grab cursor, but stays in the lasso cursor shape, which is used right now if the hand is off-screen/not used.

        E.g. I can grab stuff with the left hand. If I hide my left hand, the right hand becomes primary and changes from the lasso cursor to the release-hand cursor. Now I can grab stuff with my right hand. The left hand still shows up as a cursor, but is not usable for interaction. The GrabDropScript only responds to the primary hand. And it also seems like the movement of one hand is affecting the movement of the other hand (just a bit).

      • There are probably 1-2 small mistakes in your code changes. You can debug the code, to find out what is wrong. If you get really stuck, e-mail me next week, mention your invoice number and I’ll try to help you. This week I’m quite busy, and can’t do it. By the way, there will be some changes in the interaction manager, coming with the K2-asset update later this week.

      • Hi Rumen,

        Did you ever follow up on this request, by any chance?
        I would like to implement the same functionality (two cursors); the problem isn’t displaying them, but making them work at the same time.

      • Yes, we made it work back then, together with Jonas. The problem was that this required the duplication of many code parts in the InteractionManager. That’s why I never added it to the IM-updates. You can debug the code in the IM related to the left/right-hand cursor control a bit, and you will find out what duplications or changes are needed to make it work.

      • Hi, Rumen. I want to say thank you first for the nice plugin you made! And I’m sorry, but I’d like to ask you a favor.
        While I was working on my graduation project, I got stuck in the middle of making two cursors available… I am a Unity beginner, and this process is too difficult for me.
        What I want to do is make right hand click to draw lines, and left hand to move the camera by click and drag.

        I tried to solve this problem alone, but I can’t come up with any ideas, so I’m asking you for help. Could you give me a brief description of the process for solving this problem? It’s hard to understand if you just give me the answers. I really want to solve this problem, and I don’t have much time left. Sorry to bother you, but I’d appreciate it if you could help me solve it.

      • Hi, for hand drawing see KinectDemos/OverlayDemo/KinectOverlayDemo3-scene and its script component HandOverlayer. For dragging/dropping objects see KinectDemos/InteractionDemo/KinectInteractionDemo1-scene and its script component GrabDropScript. It uses the InteractionListenerInterface to detect hand events (grip and releases), and the hand’s normalized position in Update() to move the object. I suppose you need something like this in your project, too. More information regarding the components is available in the online documentation: https://ratemt.com/k2docs/HandOverlayer.html and https://ratemt.com/k2docs/GrabDropScript.html
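
        As a rough illustration, a minimal listener along those lines could look like the sketch below. The InteractionListenerInterface signatures and the InteractionManager accessors follow the K2-asset documentation, but may differ slightly between asset versions, so check them against your copy:

```csharp
using UnityEngine;

// Rough sketch of an interaction listener, modeled on what GrabDropScript does.
// The interface method signatures below may differ between K2-asset versions -
// compare them with KinectScripts/InteractionManager.cs in your copy.
public class SimpleHandListener : MonoBehaviour, InteractionListenerInterface
{
    private bool isGripping = false;   // true while the right hand is closed

    public void HandGripDetected(long userId, int userIndex, bool isRightHand,
                                 bool isHandInteracting, Vector3 handScreenPos)
    {
        if (isRightHand)
            isGripping = true;         // this sketch tracks the right hand only
    }

    public void HandReleaseDetected(long userId, int userIndex, bool isRightHand,
                                    bool isHandInteracting, Vector3 handScreenPos)
    {
        if (isRightHand)
            isGripping = false;
    }

    public bool HandClickDetected(long userId, int userIndex, bool isRightHand,
                                  Vector3 handScreenPos)
    {
        return true;                   // consume the click event
    }

    void Update()
    {
        InteractionManager im = InteractionManager.Instance;
        if (im == null || !isGripping)
            return;

        // normalized (0..1) right-hand position, polled each frame as in GrabDropScript
        Vector3 handNormPos = im.GetRightHandScreenPos();

        // move the dragged object here, e.g. via
        // Camera.main.ViewportToWorldPoint(new Vector3(handNormPos.x, handNormPos.y, distanceToCamera))
    }
}
```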

  7. Hi,
    my final-year project is a running game.
    I am using the Kinect v2 SDK with the examples, and with the avatar demo I’m able to control the avatar. My remaining issue is how to make the running avatar also collect coins with its hands.

    • Hi, this is a matter of physics and colliders, as to me. Put, for instance, sphere colliders around the avatar’s hands and give them rigidbodies, as in the KinectAvatarsDemo1-scene. Then add trigger colliders around the coins, and finally check for collisions in your coin-collecting script.
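
      For illustration, a minimal sketch of such a coin-collecting script; the ‘PlayerHand’ tag is a hypothetical choice for this example, not something defined by the asset:

```csharp
using UnityEngine;

// Minimal coin-collecting sketch, following the collider approach described
// above. It assumes the avatar's hands carry sphere colliders plus a Rigidbody
// and are tagged "PlayerHand" (a hypothetical tag chosen for this example).
// The coin itself needs a collider with 'Is Trigger' enabled.
public class CoinCollector : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("PlayerHand"))
        {
            // add score, play a sound, etc., then remove the collected coin
            Destroy(gameObject);
        }
    }
}
```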

  8. Hello Rumen!

    I am making a VR self defense game with kinect in unity for my class project. I have a kinect for xbox 360. Will it run kinect mocap and kinect v2 ms-sdk? If so can I please get kinect v2 with ms-sdk and mocap animator please?

    Regards,
    Omkar Manjrekar

  9. Hi, I bought this asset package to make games, and it has been working out pretty well. However, I am trying to shift to the Universal Windows Platform so that I can target the xbox market. Do you have any plans to support the UWP platform? (On Unity it is labeled as Windows Store – Universal)

  10. Hey, I’m making a game that requires keyboard input to write an email address, but it seems the InteractionManager somehow interferes. I’m using the UI InputField. Do you have any suggestions of what could be causing the trouble?

    Thanks

    • The Unity event system uses only one input module – the one that is enabled and first in the list. That said, the InteractionInputModule probably hides the standard one (which processes keyboard and mouse input). I’ll try to reproduce and research this issue during the weekend, and look for a solution. Please email me next week to check if I have found a solution or workaround.
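
      In the meantime, here is a possible (untested) workaround sketch: temporarily disable the Kinect-specific input module while an InputField has focus, so the standard module can process the keyboard. The module reference is something you would assign in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.EventSystems;

// Untested workaround sketch: Unity's event system uses only the first enabled
// input module, so a Kinect-specific module can shadow the StandaloneInputModule
// that handles the keyboard. This script disables the custom module (assigned
// in the Inspector) while an InputField has focus, and re-enables it afterwards.
public class InputModuleSwitcher : MonoBehaviour
{
    public Behaviour kinectInputModule;  // e.g. the InteractionInputModule component

    void Update()
    {
        if (kinectInputModule == null || EventSystem.current == null)
            return;

        GameObject selected = EventSystem.current.currentSelectedGameObject;
        InputField field = selected != null ? selected.GetComponent<InputField>() : null;
        bool typing = field != null && field.isFocused;

        // while typing, let the standard input module take over
        kinectInputModule.enabled = !typing;
    }
}
```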

  11. Hello Mr.Rumen!

    I’m making a senior project in Unity with the Kinect, about an AR fitting room. I have a Kinect v2. Do you have any suggestions about using your package for my project? Thanks for noticing this. Sorry for my bad English.

    Best Regards,
    Pakkanan Satha

  12. Hello Rumen,

    I am trying to get the audio source angle from Kinect. How could I implement it? I couldn’t find a public KinectSensor variable in KinectManager.

    Thank you!
    Jing

    • Hi Jing, you can get the Kinect-v2 SDK-specific KinectSensor like this: ‘Windows.Kinect.KinectSensor kinectSensor = (KinectManager.Instance.GetSensorData().sensorInterface as Kinect2Interface).kinectSensor;’

      But there were some little tricks, when you need to get the audio beam angle from the audio source, as far as I remember. If you get stuck, please e-mail me and I’ll try to find the audio-beam tracking class I have written some time ago.
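
      Formatted and with some defensive checks added, the one-liner above would look like this (a sketch; IsInitialized() and the public kinectSensor field follow the K2-asset API, but check them against your version):

```csharp
using UnityEngine;

// The one-liner above, expanded with some defensive checks. Kinect2Interface is
// the K2-asset's wrapper around the Kinect SDK 2.0, and kinectSensor is the
// native Windows.Kinect.KinectSensor instance it holds.
public class SensorAccessExample : MonoBehaviour
{
    void Start()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        Kinect2Interface k2Int = manager.GetSensorData().sensorInterface as Kinect2Interface;
        if (k2Int != null)
        {
            Windows.Kinect.KinectSensor kinectSensor = k2Int.kinectSensor;
            Debug.Log("Kinect sensor open: " + kinectSensor.IsOpen);
        }
    }
}
```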

      • Hi Rumen,

        I am also trying to use the speech manager. Microsoft suggests that, for long recognition sessions, the SpeechRecognitionEngine be recycled (destroyed and recreated) periodically, say every 2 minutes, based on your resource constraints. I also want to turn off the AdaptationOn flag in UpdateRecognizerSetting. How could I do those?

        Thank you!
        Jing

      • To recycle the speech manager, I suppose you need to put it in a prefab, then instantiate it and destroy the instance from time to time.

        Regarding the adaptation-on/off flag, see the last parameter of sensorData.sensorInterface.InitSpeechRecognition()-invocation in the Start()-method of SpeechManager.
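
        As an illustration of the recycling idea, here is a small (untested) sketch; the prefab reference and the 120-second interval are assumptions, based on Microsoft’s suggestion above:

```csharp
using System.Collections;
using UnityEngine;

// Untested sketch of the recycling approach described above: instantiate the
// SpeechManager from a prefab and replace the instance periodically. The prefab
// reference and the 120-second interval are assumptions for this example.
public class SpeechManagerRecycler : MonoBehaviour
{
    public GameObject speechManagerPrefab;  // prefab containing the configured SpeechManager
    public float recycleInterval = 120f;    // seconds between recycles

    private GameObject smInstance;

    IEnumerator Start()
    {
        while (true)
        {
            smInstance = Instantiate(speechManagerPrefab);
            yield return new WaitForSeconds(recycleInterval);

            Destroy(smInstance);
            // give the old instance a frame to shut down, before recreating it
            yield return null;
        }
    }
}
```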

      • I put the SpeechManager in a prefab, instantiate a clone in the Start()-function, then destroy the clone and instantiate a new one every 60 seconds in the Update()-function. It seems that the clones are destroyed and recreated; however, the SpeechManager script only runs once. After 60s I can create a clone, but there is no speech-recognition functionality. Do I need to specifically tell the script to start running?

      • I meant that part:

        I am trying to get the audio source angle from Kinect. How could I implement it? I couldn’t find a public KinectSensor variable in KinectManager.

    • It’s a matter of layers rendered by different cameras, as to me. See the 2nd background removal demo. It shows background + BR-foreground image + 3d objects rendered behind and in front of the BR-image.

      • I opened the KinectBackgroundRemoval1-scene, but I can’t see myself, and Unity didn’t print any error. Why?

      • 1. log: K2-sensor opened, available: True
        2. log: Interface used: Kinect2Interface
        3. log: Shader level: 50
        4. log: Waiting for users.
        5. log: Adding user 0, ID: 72057594037941507, Body: 4

        but the demo (background removal) didn’t work. Why?

      • Which version of the K2-asset are you using? If it is not the latest one, please download the latest and try again.

    • Model asset means the fbx-file of your rigged model, placed somewhere under the Assets-folder of your Unity project. It should be placed there, in order to be visible to the Unity editor.

  13. Hello,
    like you mentioned, I am currently doing my bachelor thesis, and my project is based on Kinect body tracking, so I urgently need this package, to enable me to start my project and get the data from it. But I cannot find your e-mail address to contact you; can you help me with this, please?
    Thank you.

  14. Hello, Rumen. Fantastic work. Really helpful.

    Is it possible to customize the grip event? I mean, HandState = Grip when the hand is 100% closed. Is it possible to customize that and detect HandState = Grip when the hand is, for example, 70% closed? I can’t find the class to do that in the project.

    Thanks in advance

    • Hi, the Kinect SDK provides only 3 discrete hand states – Release, Grip and Lasso. In this regard, there is no percentage available. These states are processed as hand interactions by the InteractionManager (script component of the KinectController-game object) in the demo scenes.

  15. Is it possible to use your package to create an Xbox One app in Unity, and then run the app on the Xbox One device with a connected Kinect 2?

  16. Hello,

    Is it possible to run your examples in an Xbox One app made in Unity?
    I.e. I export the app from Unity as an Xbox One app and then run the example on the actual Xbox One device, with the Kinect connected.

    Regards

    • Hi Aldo, I’m not sure if it is the same, but take a look at the 3rd and 4th background-removal demo scenes. They do something similar, I think.

  17. Hi Rumen,

    I purchased this asset on the assetstore and it worked great for me.

    I have a question regarding “seated” tracking mode, I have an app where the lower half of the body of the user (legs and torso) is not visible to the Kinect, and according to this: https://msdn.microsoft.com/en-us/library/hh973077.aspx , seated mode is more suitable for my case.

    I tried current tracking mode and it seems fine when parts of the legs are in range, but once legs are out of range everything becomes a mess, even hands, which are the only thing I really want to track.

    However, it seems that this functionality isn’t accessible via this asset as of now, is there a workaround so I can access “kinect.SkeletonStream.TrackingMode”?

    Best Regards,

    • Hi, there is no seated mode anymore in the Kinect SDK 2.0. It is a Kinect-v1-specific mode, as far as I know. But even then, I never utilized this mode in my Unity assets, because it changes the body hierarchy.

      I would recommend to use the AvatarControllerClassic-component instead of the AvatarController. Then assign to its settings only the joints belonging to the upper part of the avatar’s body. See the 3rd avatar-demo, if you need an example.

      If this solution is not enough, feel free to e-mail me your invoice number and paypal account, and I’ll refund you the money.

      • Thanks for the reply,

        I didn’t notice that seated mode is v1-only. I will try the AvatarControllerClassic demo and see if it gives better results, but I don’t think it will, because the problem is apparent in the user-map display even with no avatar controllers at all. That’s why I sought a different algorithm and thought seated mode was the solution.

        I will try a few different things, maybe we can redesign the project so it can rely on gestures or something.

        And don’t worry about the refund, I already used it on a different project and it worked fine.

        Thanks again and best regards,

  18. Hello Rumen!
    I am new to Kinect and Unity! Is it necessary to buy this asset to control an avatar in Unity? I am a student, and it is expensive for me to purchase. Is there another solution for controlling an avatar in Unity with Kinect v2?

  19. Hi Rumen,

    I have an issue with the KinectManager script’s ‘Player calibration pose’-setting. When I set it to anything other than ‘None’, my model doesn’t appear. How do I set it to calibrate via the T-pose? Thanks!

    • Hi, make sure the KinectGestures-component is added to the KinectController-object in the scene, as well. The processing of all poses and gestures is in there.

  20. Hi Rumen
    What can be done so that the avatar’s hands do not go through its body? We have seen that there are some colliders in your examples, but we do not know if they are aimed at avoiding this situation.
    Thanks

    • Hi Mario, the current colliders are more aimed at preventing going through other objects, but I think preventing the hands from going through the body would require something similar. One other option would be to make proper muscle adjustments in the avatar definitions, and then enable the ‘Apply muscle limits’-setting of the AvatarController. Please experiment a bit.

  21. Hi Rumen,

    First, I’d like to thank you for a great asset! Works wonderfully well except for one little problem which I hope you can help me resolve.

    I’ve been using an older version of the package for nearly a year now and only recently have I updated it to the latest version. After the update, I was not able to move the position of the avatar so I compared the avatarcontroller script between the two versions and I noticed that you’re now setting the global position of the gameobject instead of the local position. Is there a way for me to manipulate the avatar’s position globally like how it was previously possible?

    Thanks in advance,
    Sid

    • Hi Sid, why don’t you just copy back the older version of AvatarController, in case you find it more useful? I suppose you would need to fix some syntax errors in the process (like method invocation parameters, etc.), but it should be possible. In this regard, feel free to email me, if you have questions or get stuck anywhere in the process.

  22. Hello, I am a student at the Instituto Metropolitano de Diseño in Ecuador, and I am working on my degree thesis, for which I need the ‘Kinect v2 Examples with MS-SDK’ asset. I would like to know if you could help me with it, since the project will not be commercialized or replicated. Thank you in advance for your prompt response.

    • Sorry, my Spanish is not good at all. If you want to get a student’s copy of the K2-asset, please send me your e-mail request FROM your university e-mail address.

  23. Hi Rumen

    Will it be possible to display the joint’s orientation(position) on the scene display?

    Thanks.

    • Sure. Actually, if you enable the Cubeman-object in KinectAvatarsDemo1-scene you should see the orientations of all tracked joints as orientations of the respective cubes. Scale them up, if needed.

  24. Hi, Rumen. I’m a student who sent you e-mail last week.
    Thanks to your support, our research is going well (for now). However, an unexpected problem has come up. We’re now using two Unity packages that you made (‘Kinect v2 Examples with MS-SDK’ [I downloaded it for free from the Unity Asset Store, but the free version has been removed from the store…] and ‘Kinect MoCap Animator’), but we have somehow run into a script error…

    • Hi, the K2-asset was never free on Unity asset store. Maybe you downloaded the K1-asset, which is free from the start. And I don’t understand why you combined it with the mocap animator, which is a separate tool, and what script error you faced in the end.

      • Ah… was it impossible to combine these two tools? I didn’t know that 🙁
        It seems that I cannot attach an image file here. The error was: ‘Assets/KinectMocapFbx/Scripts/KinectFbxRecorder.cs(268,3): error CS1525: Unexpected symbol ‘throw’’

      • It is possible to combine them, if you avoid duplicating the scripts they both use, but I don’t see much sense in combining the two packages. By the way, ‘throw’ is a C# keyword, so maybe you messed up something generally, if you get this error. Make sure the platform in ‘Build settings’ is set to ‘PC, Mac & Linux Standalone’, and the ‘Target platform’ is set to ‘Windows’.

  25. Hi!

    Is it possible to use your examples on a Mac with MacOS/OSX without Kinect?

    What I would like to achieve is to record skeleton movements on Windows and be able to play it back on Unity for Mac. This way I could program most of my business logic without needing an actual Kinect. Can I do this using your KinectManager and related classes?

    Thanks,

    Balazs

    • Hi, I think you can. Try to do as follows:
      1. Run KinectDemos/RecorderDemo/KinectRecorderDemo-scene and record the needed movements. You can make several recordings by changing the name of the output file (‘File path’-setting) in KinectRecorderPlayer-component. Here is more info regarding this component: https://ratemt.com/k2docs/KinectRecorderPlayer.html
      2. Copy the recorded files to the project on the Mac-machine.
      3. Open the KinectScripts/KinectInterop.cs-script in the Mac project. Near the beginning of the script there is a definition of an array called SensorInterfaceOrder. In this array, before ‘new Kinect2Interface()’ add ‘new DummyK2Interface(), ‘. This will make the package use a dummy sensor interface instead of an interface to a real Kinect sensor (see the sketch after these steps).
      4. Add KinectScripts/KinectRecorderPlayer-script as component to the scene. I usually add these components to the KinectController-game object. Then set its ‘File path’ to point to the copied recording-file, and don’t forget to enable its ‘Play at start’-setting, too. It will start the replay right after the scene starts.
      5. This should do the job. Run the scene to check if it works or not.
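
      For reference, after the edit in step 3 the array could look like the following sketch. The exact array type and the other entries depend on the K2-asset version, so treat this only as an orientation:

```csharp
// Inside KinectScripts/KinectInterop.cs - the SensorInterfaceOrder array with
// the dummy interface added in front of the real ones. The exact array type
// and the other entries may differ between K2-asset versions.
public static DepthSensorInterface[] SensorInterfaceOrder = new DepthSensorInterface[] {
    new DummyK2Interface(),   // added: replays recorded data, no sensor required
    new Kinect2Interface(),   // the real Kinect-v2 interface
    new Kinect1Interface()    // the real Kinect-v1 interface
};
```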

  26. Hello there
    Can I make my own outfit using the fitting-room scene? If it can be done, I will buy the package.

  27. Hi Rumen,

    Package still works like a charm. One question regarding the BackgroundRemovalManager: I’m creating a white silhouette by setting ComputeBodyTexOnly to true, but the silhouette is only truly white when I set the dilate/erode iterations to something other than zero. When both are zero, which is my preferred setting, it becomes slightly greyish.

    I could probably hack around it, but I was hoping for a clean solution. Do you have any suggestions?

    Thank you, j

    • Yes. Please open Resources/BodyShader.shader, and near its end replace ‘float clrPlayer = (240 + player) / 255;’ with ‘float clrPlayer = 1.0;’. Hope this will make the body-texture completely B/W.

  28. Dear Mr. Filkov,
    I am using your tool and I find it amazing, thank you!
    I only have one issue: I project the scene on the ground with a projector, and I want to map my position as seen by the Kinect (which is perpendicular to me, in a “normal” position) into the virtual environment, which has a camera from above.
    I am using the first avatar scene as an example, but I don’t understand where and when in the scripts the position of the user is taken and sent to the avatar, to map the position.
    If I only import your assets and scripts into my scene, obviously my position on the ground does not match the position the avatar has on the ground.
    Thank you very much

    • Hi, as I understand the sensor is in front of you (as expected), while the projector is on the ceiling. Please take a look at KinectDemos/ProjectorDemo/KinectProjectorDemo-scene (enable the U_Character-game object in the scene) and this tip here: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t34 I suppose it could be re-configured to correspond to your setup.

      Regarding the position of the avatar model in the scene: This is done in the MoveAvatar()-method of KinectScripts/AvatarController.cs (component of the model in the scene), and depends on whether ‘Pos relative to camera’-setting references a camera in the scene or not. Here are some hints regarding the avatar-controller settings: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t39 and here is the AC documentation, as well: https://ratemt.com/k2docs/AvatarController.html Do mind, you are also free to modify or extend the AvatarController’s code, to match best your needs.

      • Dear Mr. Filkov,
        Thank you for your answer. The projector is not on the ceiling but above the Kinect, at about 3 meters from the ground, so the mapping between the virtual world and the real world is not direct, due to the fact that the projector is not perpendicular to the ground.
        I have inspected the projector demo scene, but I don’t think it will be feasible to adapt it.
        I think I will make my own controller, using yours as a reference, as I only need to have the user followed by a light aura in the scene and to recognize some movements, without actually having the avatar mimic them.

        Only one other question: how do you suggest managing the mapping between the real and the virtual world with the Kinect?
        Meaning, one side of the projection (i.e. of the virtual environment) starts exactly where the Kinect is.
        If I am standing 1.5 meters from the Kinect, how could I have Unity understand that I am standing 1.5 meters from the side of the virtual world, and consequently have the light aura drawn there? I need a sort of mapping of the real projection to the virtual world. Do you have any suggestion for a good starting point, from which I could begin working on that?

        Thank you very much again

      • The sensor will detect the user’s coordinates in its own coordinate system. They need to be re-projected into your projector’s coordinate system, no matter where the projector is and how it is oriented. In Unity terms, these are two cameras looking at the same physical/virtual world. You need to estimate the position, orientation & projection matrix of the projector, to match the world “seen” by the Kinect. This is usually called ‘calibration’, and may be done relatively easily with the calibration tool of the RoomAlive toolkit. Then this ‘calibration data’ may be utilized by the projector demo I pointed you to above. I’m not quite sure what other starting point you need. Of course, if you are good at math and geometric/matrix transformations, you could estimate the needed calibration manually.

  29. Dear Mr. Filkov,
    thanks again for the answer.
    I’ll then proceed to investigate the Projector Demo Scene a bit more and, if necessary, to do some math.

    Thank you for your help

  30. Hello, Rumen
    Many thanks for the package
    I have a problem with Fitting Room2. The clothes stay in front of me. How can I get clothes back?

  31. Hello, Rumen
    Thanks for the package, it was amazing !!

    I faced a problem when exporting my project to Win10 UWP. I managed to export it, but when I run the app, it gives me this error:
    “Background removal cannot be start, as kinect manager is missing or cannot initialize”
    However, I have no issue when I export it as a standalone .exe app.

    By the way, I am using a Kinect v1.

    Thank You very much

  32. Hi, Rumen
    I would like to ask a question:

    How can I set the avatar to animate itself (play an animation) while no user is tracked, and still be controlled by the user when a user is detected?

  33. Hi, Rumen.
    I need to get a picture with good quality, but the Kinect has a poor-quality camera, so I wonder if it is possible to combine the Kinect sensor with a good camera. How can I calibrate the Kinect and a camera, to make them work together?

      • You could, but I’m not sure you would want to do that for a school project. You would need to calibrate the high-quality RGB camera with the Kinect’s depth camera, and then replace the camera-polling and coordinate-mapping functions in the Kinect-interface class with your own. On the other hand, the Kinect-v2 has a 1920×1080 color-camera resolution, which is not that bad, as to me.

      • The school project is over, with your help. Thank you very much. Now I’m asking in order to learn how to do the development myself.

      • How can I do the camera replacement? Could you explain it a little more clearly? I am developing for myself now.

      • I already said I have not done camera replacement so far. If you do it anyway, you would need to provide the remapping algorithms – from depth to color space, and from color to depth space. This is the hard part. First research and experiment a lot to un-project the points from one space and project them into the other. When the results are reliable enough, replace the color-camera and color-space related functions in KinectScripts/Interfaces/Kinect2Interface.cs. That would be all, as to me.

      • If you have a sample of using a different camera, could you send it to me? I really need it, and I have no idea how to do it…

  34. Hi, I purchased your Kinect plugin and it was a blast.
    However, I’m trying to do something different from your demos; let’s say I’m trying to make an interactive floor. I tried to place the Kinect sensor overhead, facing down towards the floor, and I’m struggling to make it work, since your KinectManager only triggers if the sensor catches a full body first. I want the sensor to track the users’ movement from above.

    Do you have any tips on how to achieve this?

    • Hi, I think I answered a similar question on the K2-forum. And no, it’s not my KinectManager – the Kinect sensor itself can detect bodies only frontally, not from above. In this case you can get the color and depth images from the KM, and then use some kind of image processing to locate the user blobs in them.
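
      As a starting point, here is a rough (untested) sketch of that image-processing idea, polling the raw depth frame from the KinectManager. GetRawDepthMap() follows the K2-asset API (check your version), and the distance thresholds are assumptions you would adapt to your ceiling setup:

```csharp
using UnityEngine;

// Untested sketch of the image-processing idea above: poll the raw depth frame
// and count pixels that are significantly closer than the floor, which would
// correspond to people below a ceiling-mounted sensor. GetRawDepthMap() follows
// the K2-asset API (check your version); thresholds are setup-specific guesses.
public class OverheadUserDetector : MonoBehaviour
{
    public int floorDistanceMm = 3000;   // sensor-to-floor distance in mm (assumed)
    public int minUserHeightMm = 800;    // ignore anything flatter than this

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        ushort[] depthMap = manager.GetRawDepthMap();
        if (depthMap == null)
            return;

        int userPixels = 0;
        for (int i = 0; i < depthMap.Length; i++)
        {
            int d = depthMap[i];
            // pixels well above the floor level are candidate user pixels
            if (d != 0 && d < floorDistanceMm - minUserHeightMm)
                userPixels++;
        }

        // a real implementation would group adjacent pixels into blobs and track
        // the blob centers over time, instead of just counting the pixels
        if (userPixels > 500)
            Debug.Log("Possible user(s) detected below the sensor");
    }
}
```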

  35. What is the specification of the clothes in the fitting-room demo? I am asking a 3D artist to make more models based on 2D images. What kind of software do you use to produce the 3D clothing models? Thanks!

    • I’m not a model designer or 3d artist, and don’t create the models myself. Generally speaking, the clothing models are just normal Unity humanoid models (bipedal models with Humanoid rig set in Unity). The only requirement is that they have bone lengths proportional to human bones, as detected by the Kinect. Otherwise the model may not cover the user’s body very well. See the 2nd overlay demo, to see what I mean. And when you have the model ready, experiment with the scale factors of its AvatarScaler-component, as well.

      • Yep, they need to have humanoid rig in the current setup. Of course, you are free to modify the category/model-selector scripts as needed, to utilize a single model (or several models – man, woman, boy, girl) instead, and only change the clothing textures over them. I’m not sure if this would fit all possible clothing cases though.

  36. Hi Rumen, what a great job! I bought your script a couple of weeks ago, and I didn’t want to ask you before reading all the documentation and trying everything possible, but I am running out of time for my project…
    I am using OverlayDemo1, and I just want a 3D helmet object to appear on my head. I added the object as a prefab, and in the JointOverlayer-script I changed the tracked joint to Head and the overlay object to Helmet (transform). When I try it, the 3D helmet appears far away from my head and moves erratically as I move. I even tried to put the helmet in my right hand, as in the example, but when I move my hand the helmet seems to move too high, not in my hand, and sometimes disappears off the screen. How could I solve this issue?

    • Hi, make sure the helmet’s transform in the scene has Y-rotation = 180 degrees, i.e. is turned to you when you run the scene. Anyway, I think in your case (helmet on the head), you should use the 2nd face-tracking scene in KinectDemos/FacetrackingDemo-folder instead (the one with the Viking-hat), and build on it. If the overlay issues persist, please contact me by e-mail, and send me over the helmet model you are using, so I could experiment a bit with it.

      • Hi Rumen, thanks for your quick response. I was struggling to put the helmet in a position that covers my head, and I managed; it was just a matter of selecting all the meshes of the model and setting the transform position to 0,0,0. Is this the only way? Before doing this, the helmet appeared in a different position than my head, so this is solved. But now I have another issue: the Roman helmet is on my head, but my face is covered by it. Is there any way for my face to appear inside the helmet, instead of being covered? I already checked your Viking hat, and it’s exactly what I need: it appears on top of my head, but the rear of the hat doesn’t cover my face. How can I achieve that? I guess it’s something related to a shader. I will email you the helmet fbx-file and a picture of the issue. Thanks in advance.

      • No, I don’t think it’s the shader, but some properties of the model. Look at the inside of the hat model and the inside of your helmet model. The inside of the helmet should be invisible too, and this will make the user’s face visible, not occluded. Unfortunately I’m not a model designer, to tell you how to do it.

  37. Hi Rumen! First, congratulations on your great job. I have a question about my project. I have a project that runs in standalone mode, but when I switch to UWP it doesn’t work. I tested with all the updates for the Kinect, and made a test project with your package. In standalone mode all the demos work great, but when I switch the platform to UWP there are no errors, yet the Kinect keeps waiting for users. Maybe I forgot something, unless I’m the only one who has this issue.

      • Thanks for your response. I succeeded in building for UWP with an older version of Unity (5.x). Your tips explain the Windows Store export in Unity, but in recent versions of Unity (2017.x) the Windows Store export was replaced with the UWP export. Maybe you have an idea what is different when exporting with a recent version of Unity. I think it’s a parameter problem, but I can’t find it.

      • I’ll check it again with the latest version of Unity. What happened when you built it for UWP on Unity 2017.x? Did you get errors while building the project in Unity, when you compiled it in VS, or when you ran it? As far as I remember, ‘Universal Windows Platform’ in 2017.x was just a re-brand of the previous ‘Windows Store’ platform.

      • On my first attempt, yes. But then I tried the latest version of Unity with an old version of your package. Then I tried with a newer version of your package, and I got a plugin-related error, because I had the old plugin together with the new multi-K2 one. I fixed this error, and when I ran it in play mode, the demo kept waiting for users. After a lot of tests I built a project without errors with the UWP exporter, but the project won’t work in Visual Studio. I hope this is helpful. I’m new to the Windows exporters and, like you say, I don’t think it is a major problem – maybe just a little parameter problem. Thanks for your reactivity.

  38. Hello Rumen F, I have two questions to ask. (1) Can I modify this asset of yours, or can you tell me how I can just put images of clothes (transparent PNGs) over the user’s video feed at run time, instead of using a 3D skeleton model or skinned mesh renderer in Unity3D? And
    (2) Can I load images and data from an external server in your fitting-room demo scene?
    How can we do this?
    If you can reply to this and explain it a bit, it would be highly appreciated.
    Thanks.

    • Hello, to your questions:
      1. You need to modify at least the LoadDressingModel()-method of KinectDemos/FittingRoomDemo/Scripts/ModelSelector.cs, to apply the png-file as a texture of the current model. In this case you could use only one humanoid model per category, and change its textures when the user changes his or her selection.
      2. This is a general Unity question. I think you could use the WWW-class to load the texture png-files from a web server.
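
      For example, a small (untested) sketch of point 2 with the WWW-class; the URL and the renderer reference are placeholders for your own setup:

```csharp
using System.Collections;
using UnityEngine;

// Untested sketch of point 2, using the WWW-class: download a clothing png from
// a web server and apply it as the main texture of the current dressing model.
// The URL and the renderer reference are placeholders for your own setup.
public class ClothingTextureLoader : MonoBehaviour
{
    public string textureUrl = "http://example.com/clothing/shirt01.png";
    public SkinnedMeshRenderer modelRenderer;  // renderer of the dressing model

    IEnumerator Start()
    {
        WWW www = new WWW(textureUrl);
        yield return www;

        if (string.IsNullOrEmpty(www.error) && modelRenderer != null)
        {
            // replace the model's texture with the downloaded one
            modelRenderer.material.mainTexture = www.texture;
        }
        else
        {
            Debug.LogError("Texture download failed: " + www.error);
        }
    }
}
```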

  39. Can you please explain how I can use only images for my dress-up demo? The thing is, I don’t want to use 3D models at all, only PNGs. Do I need to create an empty game object? Please help me, I need your help. I tried using only images, but they are not overlaying on the live feed of my character image. Thanks.

  40. Hi Rumen: I always seem to get this error when starting any of the demos: “Missing assembly net35/unity-custom/nunit.framework.dll for TestRunner. Extension support may be incomplete.
    UnityEditor.Modules.ModuleManager:InitializeModuleManager()” Any ideas?
