Kinect with MS-SDK is a set of Kinect v1 examples built around several major scripts, grouped in one folder. It demonstrates how to use Kinect-controlled avatars, Kinect-detected gestures and other Kinect-related functionality in your own Unity projects. This asset uses the Kinect SDK/Runtime provided by Microsoft. For more Kinect v1-related examples, utilizing Kinect Interaction, Kinect Speech Recognition, Face Tracking or Background Removal, see the KinectExtras with MsSDK. These two packages work with Kinect v1 only and can be used with both Unity Pro and Unity Free editors.
Unity 5.0 and later introduced an issue that, in many cases, causes Kinect tracking to stop after 1-2 minutes with a DeviceNotGenuine error. As a result, the Kinect-enabled game appears to freeze. There is no definite workaround yet. My advice: if you encounter this issue, please install Unity 4.6 alongside Unity 5 (just in another folder) and use the K1-assets in the Unity 4 environment.
Update 12.Aug.2015: The bug report, along with all details, is now in the hands of the Unity staff for reproduction and a fix.
Update 07.Sep.2015: Here is what I got from the Unity support staff: “We have checked that other people have the issue outside of Unity also, the entire device goes in a broken state. That indicates a driver bug. If we pop up the windows device manager, we can see the device is broken after it gets into this state and even unplugging it and plugging it back in doesn’t fix it so it’s an issue on Microsoft’s driver.”
Update 10.Sep.2015: One other asset user – Thank you Victor Grigoryev! – has experimented with multiple configurations. Here are his conclusions:
- If I connect the Kinect to USB 3.0 on Windows 8 or 10, it works perfectly. The issue occurred on my Windows 7 machine, which has USB 2.0 only (no USB 3.0 on the motherboard at all).
- When I tried to connect to USB 2.0 on Windows 8 or 10, I saw some lag from the Kinect, but it still worked.
- On Windows 7, I reinstalled the Kinect SDK and then RESTARTED the computer. Without restarting, nothing changes; I checked that several times. Not restarting was one of my major mistakes before.
Update 29.Oct.2015: I posted the issue on the MSDN Kinect-v1 forum here. Please up-vote the post and comment on it with your own case and machine configuration (CPU, OS, USB port, sensor, Kinect SDK, Unity version, etc.), if you suffer from this issue. This is the last thing we can do to ask the Kinect team to provide a workaround or an SDK update.
Last Update, FYI: The best workaround for this issue, as reported so far by different users, is as follows: uninstall all Kinect-v1-related packages (SDKs, drivers, OpenNI, NiTE, Zigfu, etc.), restart, install Kinect SDK 1.8 only, and restart again.
How to Run the Example:
1. Install the Kinect SDK 1.8 or Runtime 1.8 as explained in Readme-Kinect-MsSdk.pdf, located in Assets-folder.
2. Download and import this package.
3. Open and run scene KinectAvatarsDemo, located in Assets/AvatarsDemo-folder.
4. Open and run scene KinectGesturesDemo, located in Assets/GesturesDemo-folder.
5. Open and run scene KinectOverlayDemo, located in Assets/OverlayDemo-folder.
6. Open and run scene DepthColliderDemo, located in Assets/DepthColliderDemo-folder.
The official release of ‘Kinect with MS-SDK’-package is available in the Unity Asset Store.
The project’s Git-repository is located here. The repository is private and its access is limited to contributors and donators only.
* If you need integration with the KinectExtras, see ‘How to Integrate KinectExtras with the KinectManager’-section here.
* If you get DllNotFoundException, make sure you have installed the Kinect SDK 1.8 or Kinect Runtime 1.8.
* Kinect SDK 1.8 and tools (Windows-only) can be found here.
* The example was tested with Kinect SDK 1.5, 1.6, 1.7 and 1.8.
* Here is a link to the project’s Unity forum: http://forum.unity3d.com/threads/218033-Kinect-with-MS-SDK
What’s New in Version 1.12:
1. Updated AvatarController to use the Mecanim configured bones. Big thanks to Mikhail Korchun!
2. Added AvatarControllerClassic-component to allow manual assignment of bone transforms. Big thanks to Aaron Brooker!
3. Added ‘Offset relative to sensor’-setting to AvatarController and AvatarControllerClassic, to provide the option to put the avatar into his real Kinect coordinates. Big thanks to Claudio Rufa!
4. Added depth-collider demo scene, to demonstrate the mapping of Kinect space and depth coordinates to Unity world coordinates, and how this can be used for VR collisions.
5. Added gestures debug-text-setting to KinectManager to enable easier gesture development.
6. Updated detection of the available gestures, to make them more robust and easier to use.
7. Fixed sensor initialization, when the speech manager from KinectExtras is integrated.
Playmaker Actions for ‘Kinect with MS-SDK’ and ‘KinectExtras with MsSDK’:
And here is “one more thing”: A great Unity-package for designers and developers using Playmaker, created by my good friend Jonathan O’Duffy from HitLab-Australia and his team of talented students. It contains many ready-to-use Playmaker actions for Kinect v1 and a lot of example scenes. The package integrates seamlessly with ‘Kinect with MS-SDK’ and ‘KinectExtras with MsSDK’-assets. I can only recommend it!
181 thoughts on “Kinect with MS-SDK”
Thanks for the great free tools! 🙂 A couple of questions…
In the green skeleton / cube man, the hip and feet joint positions seem quite different from the 3D avatar’s. When I lift my arms up and down, the 3D avatar moves up and down. Also, when I lean left/right, the character’s feet move in the opposite direction; it appears to be floating rather than planted like the cube man.
Could you tell me if this is as good as it gets, or if there is a way to adjust this?
I got it working on my own avatar, and the issues are still there.
If not, are these issues resolved in the Kinect v2 version? (Is it worth me upgrading for that reason?)
thanks and kind regards!
Hi Alex, there are some differences between the cube-man and the 3D avatar. While the cube-man shows the positions of the Kinect joints, the avatar uses only the central body joint (hip-center) for its position, and the joint orientations for its bones. The hip-center position probably changes a bit when you lift your hands, and this leads to the avatar moving up and down. Also, the bones of the avatar are oriented in space using forward kinematics (i.e. from the hip-center out to the limbs, not vice versa), which causes these sometimes incorrect limb positions. If you need to keep the feet grounded all the time, a script (or package) for inverse kinematics (IK) may provide some help. In the K2-package the algorithm is similar, but the K2 sensor is more precise than the K1. That’s the difference. I don’t think it’s worth upgrading just because of this avatar issue, if you don’t possess a Kinect v2 sensor.
Hi Rumen, thanks for the reply. Maybe the hip shifts a little, but as it is, it seems quite exaggerated, more so when lifting both arms. I see there’s a vertical position toggle; do you think at some point you could add a scaling function, so you can go from no vertical movement to 100%? That would allow a bit more tuning. Regarding the IK: will it be possible to add bone control on top of the avatar controller? Also, I found I needed to use AvatarControllerClassic and map the bones manually when using an Autodesk Character Generator character, or it was pinned to the spot when running the game. Is this the correct approach? Sorry for all the questions; I am a Unity noob, but I have got some fast results, so I am interested in taking it further.
The source is there. Just open the script and modify it the way you find appropriate 😉 Regarding the IK, I think you can add the bone adjustments in LateUpdate(). Regarding the AvatarController, here is a tip for the K2-asset, but it applies to the K1-asset, too: http://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t3
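To illustrate the LateUpdate() idea: a separate component could post-process the avatar after AvatarController has run. This is only a sketch, assuming a Mecanim-rigged avatar with an Animator; the component name and the simple grounding logic are hypothetical, not part of the asset:

```csharp
using UnityEngine;

// Hypothetical sketch: keep the avatar's feet on the floor by shifting the
// whole model in LateUpdate(), i.e. after AvatarController has positioned it.
public class FootGroundingAdjust : MonoBehaviour
{
    public Animator animator;   // the avatar's Animator (Mecanim rig)
    public float groundY = 0f;  // floor height in world coordinates

    void LateUpdate()
    {
        if (animator == null)
            return;

        Transform leftFoot = animator.GetBoneTransform(HumanBodyBones.LeftFoot);
        Transform rightFoot = animator.GetBoneTransform(HumanBodyBones.RightFoot);
        if (leftFoot == null || rightFoot == null)
            return;

        // Shift the avatar vertically, so the lower foot touches the ground.
        float lowestY = Mathf.Min(leftFoot.position.y, rightFoot.position.y);
        transform.position += Vector3.up * (groundY - lowestY);
    }
}
```

Attach it to the avatar root. A real IK solution (e.g. Unity’s built-in IK via OnAnimatorIK) would also handle knee bending, which this simple vertical shift does not.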
Hi Rumen, I want to develop a virtual dressing room with your SDK. Please tell me if it is possible and, if yes, how to achieve it.
Yes, it is possible, although not quite easy. See my comment below the K1-asset description in Unity asset store on another question regarding dressing rooms. In the K2-asset there are some ready-made fitting room demos too, but their models are optimized for Kinect v2. This tip here could also be of some help: http://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t12
Hi Arsalan, have you completed this dressing room or not? If you have completed it, I need help, because it is my final-year project and I am still in the middle of it.
Hi Rumen, I would like to know if there is any way to overlay an object, not only moving it with the user, but also rotating it accordingly. Say, for a basic example, having a cube over a person’s head and have it rotate as he rotates the head. How can I go about it? Any way to get the rotation data necessary for it?
The 2nd face-tracking-demo in the Kinect v2-asset does what you describe (it works with Kinect v1, too). To do it with the K1-asset, you need to get the joint rotation of the neck (the function is KinectManager.Instance.GetJointRotation(), as far as I remember), and then apply the rotation to the cube.
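A hedged sketch of that idea with the K1-asset follows. The KinectManager method names and signatures here are assumptions based on the discussion above; verify them against your version of KinectManager.cs:

```csharp
using UnityEngine;

// Sketch: overlay a cube on the user's head and rotate it with the head joint.
// Method names follow the K1-asset's KinectManager, as far as remembered;
// check them against your copy of the asset before use.
public class HeadCubeOverlay : MonoBehaviour
{
    public GameObject cube;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected())
            return;

        uint userId = manager.GetPlayer1ID();
        int head = (int)KinectWrapper.NuiSkeletonPositionIndex.Head;

        if (manager.IsJointTracked(userId, head))
        {
            cube.transform.position = manager.GetJointPosition(userId, head);
            // Hypothetical signature - your version may take different arguments.
            cube.transform.rotation = manager.GetJointOrientation(userId, head, true);
        }
    }
}
```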
Hello! I’m making a virtual dressing room project using Kinect v1, and I’m using the overlay demo from your asset. The question is: how could I match clothes to the person who is standing in front of the screen? Thanks in advance! :)
See the answer, I gave to a similar question some time ago, here: https://www.assetstore.unity3d.com/en/#!/user/464178
Thanks so much for the great tools.
I’m trying to make a game in which the user can move the player character forward by swinging their arms.
If this is too complicated, I’d like to make the character move forward when the player raises their arms.
Let me know if you know how to figure this out.
Hi. If this is the way you want to control the player, then you need gestures and a gesture listener. In the gesture listener (see the GestureListener-component in KinectGesturesDemo as an example), you could move your character forward when a discrete gesture (like raising an arm) is completed, or while a continuous gesture (like Wheel) is in progress. You can also define your own gesture for detecting swinging arms, if you need it.
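As a minimal sketch, a listener like the one below could move the character on a completed gesture. The interface and method signatures are assumed from the K1-asset demos and may differ in your version; the step distance and chosen gesture are illustrative:

```csharp
using UnityEngine;

// Hedged sketch of a gesture listener that nudges the player forward when a
// RaiseLeftHand gesture completes. Signatures follow the K1-asset's
// GestureListenerInterface as remembered; verify against KinectGestures.cs.
public class MoveOnGestureListener : MonoBehaviour, KinectGestures.GestureListenerInterface
{
    public Transform player;          // the character to move
    public float stepDistance = 0.5f; // meters per completed gesture (assumption)

    public void UserDetected(uint userId, int userIndex)
    {
        // Tell the manager which gestures to track for this user.
        KinectManager.Instance.DetectGesture(userId, KinectGestures.Gestures.RaiseLeftHand);
    }

    public void UserLost(uint userId, int userIndex) { }

    public void GestureInProgress(uint userId, int userIndex,
        KinectGestures.Gestures gesture, float progress,
        KinectWrapper.NuiSkeletonPositionIndex joint, Vector3 screenPos) { }

    public bool GestureCompleted(uint userId, int userIndex,
        KinectGestures.Gestures gesture,
        KinectWrapper.NuiSkeletonPositionIndex joint, Vector3 screenPos)
    {
        if (gesture == KinectGestures.Gestures.RaiseLeftHand && player != null)
            player.position += player.forward * stepDistance;
        return true; // restart detection of this gesture
    }

    public bool GestureCancelled(uint userId, int userIndex,
        KinectGestures.Gestures gesture,
        KinectWrapper.NuiSkeletonPositionIndex joint)
    {
        return true;
    }
}
```

For continuous movement while swinging the arms, you would instead accumulate motion in GestureInProgress() of a custom gesture.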
Hello. What do you think, is it possible to add Run gesture? For additional $100 maybe?
The Run-gesture (and many more gestures) are also available in the K2-asset, which works with Kinect v1 as well. It costs $20 only 🙂 – https://www.assetstore.unity3d.com/en/#!/content/18708
Can you send me the KinectExtras by e-mail?
Please e-mail me your request for KinectExtras, and I’ll send you back the package.
Can I request the “KinectExtras with MsSDK” package? It is no longer available in the Unity Asset Store. I very much want to learn about Kinect, but didn’t know where to start until I found your package. I also want to learn something from the Playmaker actions, but they require that missing package. T.T
Sure, you can. Just e-mail me your request for KinectExtras, and I’ll send you back the package.
Hi, I want to use two devices working together in one scene. I saw the useMultiSourceReader parameter, but I don’t know how to use it to access the data streams of two sensors. What is the reference? Thank you.
Hi, I suppose you mean the K2-asset. No, you cannot use more than one Kinect, moreover only one sensor can be connected to a single PC/notebook. The ‘Use multi-source reader’-setting concerns stream source synchronization (i.e. color, depth, coordinate mapping), which is important when one needs color camera overlays or background removal.
I have a problem with this asset: my Kinect doesn’t start tracking. The KinectAvatarsDemo scene only shows ‘Waiting for users’. Any idea what the problem is, or could you help me, please?
Restart the computer and check that the Kinect is powered up and connected to the PC. Then run the ‘Developer toolkit browser’ (part of Kinect SDK 1.8) and check whether several of the samples in there work (at least the color, depth & skeleton demos). If they work, try the avatars demo in Unity again. Otherwise, look for a possible hardware/SDK issue.
Hi Rumen, I have a problem with your code. I don’t know why, but the Squat gesture doesn’t work.
Add some Debug.Log()-lines to KinectGestures.cs, in the code responsible for detecting the Squat-gesture, and you will find out.
I tried to debug, but the execution does not go into the Squat case in KinectGestures, and that code should be correct, because it came with the project.
In this case, are you sure the Squat-gesture is configured for detection? For instance, as far as I remember, there is a component in the avatars-demo called SimpleGestureListener (or GestureListener in gestures-demo). In the UserDetected()-method of this script there are calls to KinectManager.DetectGesture()-method to define the gestures tracked in this scene. The Squat-gesture should be there, too.
Yeah, it’s there, but the gesture is still not recognized. UserDetected() registers the Squat gesture, but KinectGestures doesn’t recognize it anyway. I don’t know… Thank you anyway!
My sentence was misspelled, sorry!!
Problem solved!!! Thanks!! 😀
Good! What was it in the end?
Hi and thanks for the tool.
I am looking for a way to get the depth or infrared as a grayscale texture. I can’t seem to find any texture that is not somehow user detection related. Am I missing something obvious?
Hi, if you enable ‘Compute user map’ and optionally ‘Display user map’-settings of KinectManager-component in the scene, you will get the important part of the depth texture, i.e. the depth image of the detected users. This texture is created by the UpdateUserMap()-method of KinectManager. If you want to see the depth pixels, where there is no user detected, modify the code in this method – the block after ‘if (userMap == 0)’. It currently sets the respective texture pixel to ‘invisible’ color.
I don’t see code setting pixels to transparent in UpdateUserMap. In UpdateUserHistogramImage though, I saw:
usersHistogramImage[i] = clrClear;
So I commented that out, but the userTexture is still only showing the depth pixels for the tracked users.
Maybe your code base is older than mine. If you look some lines before the one you found, you will see ‘clrClear = Color.clear’, i.e. invisible color. I actually never said to comment out the line. I meant to modify the value, for instance like this:
float depthVal = userDepth <= 5000 ? (5f - (float)userDepth / 1000f) / 5f : 0f; // here 5f is the supposed max distance of 5m
usersHistogramImage[i] = new Color(depthVal, depthVal, depthVal);
Then you should see the pixels not belonging to any user in gray.
Thanks for your work,
I am currently trying to access the depth map too. I did everything you said, and it works, but not perfectly.
Is there a parameter for the near and far limits? In Kinect Studio I can see depth farther than in Unity, and closer too. Did I do something wrong?
thank you !
Hi Baptiste, as far as I see, our discussion here is quite old. I mean, the depth texture has long been created by a shader, not in the C# code.
You can always get the raw depth data by calling ‘KinectManager.Instance.GetRawDepthMap()’. This will return an array of ushorts (distances in mm) with the depth image size of 512*424.
The other option is to modify the shader (Resources/DepthShader.shader) to show the depth information along with user silhouettes. To do it, please uncomment the else-part, near the end of the shader’s code. In this case, you can get the depth texture by calling ‘KinectManager.Instance.GetUsersLblTex()’.
To your question: I have limited the processing of depth data to 5 meters (i.e. 5000 mm), because this is the maximum reliable user distance. If you want to extend this limit, search for ‘5000’ and ‘5001’ in the code of KinectInterop.cs and DepthShader.shader, and modify these constants accordingly.
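For illustration, reading the raw depth could look like the sketch below. The image size is the one quoted above (adjust it for your sensor and asset version), ‘Compute user map’ must be enabled on the KinectManager component, and the method names are assumed from this discussion:

```csharp
using UnityEngine;

// Sketch: log the depth (in mm) at the center of the depth image each frame.
// Assumes 'Compute user map' is enabled on the KinectManager component.
public class RawDepthReader : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        ushort[] depthMap = manager.GetRawDepthMap();
        if (depthMap == null || depthMap.Length == 0)
            return;

        // Image size as quoted in the reply above; adjust for your setup.
        int width = 512, height = 424;
        ushort centerMm = depthMap[(height / 2) * width + width / 2];
        Debug.Log("Depth at image center: " + centerMm + " mm");
    }
}
```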
Thank you very much for taking the time to respond to me!
I’ll try all this this evening and give you an update, if you are interested.
Thank you again, you rock!
I looked for the shader and the KinectInterop script you were talking about, but didn’t find them. Are you talking about the same project? I have Kinect SDK 1.8.
And like you said, I tried to access the raw depth data, but when I log it, every value returned is 0.
Maybe I don’t have the knowledge to achieve that. I don’t want to bother you with my stuff…
Oh, sorry! I didn’t see you asked for the free Kinect-v1 asset. In this case GetRawDepthMap() & GetUsersLblTex() are still there, but instead of shader the texture is created by the UpdateUserMap()-method of KinectManager. Please mind, in order to use these methods, you need to enable the ComputeUserMap-setting of KinectManager-component in the scene. I suppose this is the reason for having only 0s in the raw-depth array.
Regarding UpdateUserMap: Its main goal is to get the user silhouette in the texture, not the full depth map. To make it draw the depth map instead, you would need to modify its code a bit. Please replace ‘ushort userMap = (ushort)(usersDepthMap[i] & 7);’ with ‘ushort userMap = 1;’. I think this should do the job.
Hey Rumen, thanks a lot for this tutorial. Is it compatible with Unity 2017.2, or should I go back to Unity 5 for now? I haven’t tested it yet.
I have not tested it either, but theoretically it should work. Just try it and you will find out.
I want to change the model in the avatars demo. Can you suggest an easy way or a tutorial? Thanks in advance.
Hi, look at this tip: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t3
Thank you very much.
Hi, I’m having a problem when building the Unity project, and hopefully you can help me. The Kinect works fine in the editor, but in the build it’s not detecting the user or the background; it’s as if the Kinect is not plugged in.
I’m using Kinect v1. Any help is really appreciated.
Hi, please send me the Player’s log-file, so I can take a closer look. Here is where to find Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html