Kinect v2 Examples with MS-SDK is a set of Kinect-v2 (aka ‘Kinect for Xbox One’) examples that use several major scripts, grouped in one folder. The package contains over thirty demo scenes.
Please look at the Azure Kinect Examples for Unity asset, as well. It works with the Azure Kinect (aka Kinect-for-Azure, K4A), as well as with Kinect-v2 (aka Kinect-for-Xbox-One) and RealSense D400 series sensors.
Apart from the Kinect-v2 and v1 sensors, the K2-package supports Intel’s RealSense D400-series, as well as Orbbec Astra & Astra-Pro sensors, via the Nuitrack body tracking SDK.
The avatar-demo scenes show how to utilize Kinect-controlled avatars in your scenes, the gesture demos – how to use programmatic or visual gestures, the fitting-room demos – how to create your own dressing room, the overlay demos – how to overlay body parts with virtual objects, etc. You can find short descriptions of all demo scenes in the K2-asset online documentation.
This package can work with Kinect-v2 (aka Kinect for Xbox One), Kinect-v1 (aka Kinect for Xbox 360), as well as with Intel RealSense D415 & D435, Orbbec Astra & Astra-Pro via the Nuitrack body tracking SDK. It can be used in all versions of Unity – Free, Plus & Pro.
Free for education:
The package is free for academic use. If you are a student, lecturer or academic researcher, please e-mail me to get the K2-asset directly from me.
One request:
My only request is NOT to share this package or its demo scenes in source form with others, or as part of public repositories, without my explicit consent.
Customer support:
* First, please check if you can find the answer you’re looking for on the Tips, tricks and examples page, as well as on K2-asset Online Documentation page. See also the comments below the articles here, or in the Unity forums. If it is not there, you may contact me, but please don’t do it on weekends or holidays.
* If you e-mail me, please include your invoice number. More information regarding e-mail support can be found here.
* Please mind, you can always update your K2-asset free of charge, from the Unity Asset store.
How to run the demo scenes:
1. (Kinect-v2) Download and install Kinect for Windows SDK v2.0. The download link is below.
2. (Kinect-v2) If you want to use Kinect speech recognition, download and install the Speech Platform Runtime, as well as EN-US (and other needed) language packs. The download links are below.
3. (Nuitrack) If you want to work with Nuitrack body tracking SDK, please look at this tip.
4. Import this package into a new Unity project.
5. Open ‘File / Build settings’ and switch to ‘PC, Mac & Linux Standalone’. The target platform should be ‘Windows’ and architecture – ‘x86_64’.
6. Make sure that ‘Direct3D11’ is the first option in the ‘Auto Graphics API for Windows’-list setting, in ‘Player Settings / Other Settings / Rendering’.
7. Open and run a demo scene of your choice, from a subfolder of ‘K2Examples/KinectDemos’-folder. Short descriptions of all demo scenes are available here.
* Kinect for Windows SDK v2.0 (Windows-only) can be found here.
* MS Speech Platform Runtime v11 can be downloaded here. Please install both x86 and x64 versions, to be on the safe side.
* Kinect for Windows SDK 2.0 language packs can be downloaded here. The language codes are listed here.
Documentation:
* The online documentation of the K2-asset is available here and as a pdf-file, as well.
Downloads:
* The official release of the ‘Kinect v2 with MS-SDK’ is available at Unity Asset Store. All updates are free of charge.
Troubleshooting:
* If you get compilation errors like “Type `System.IO.FileInfo’ does not contain a definition for `Length’”, you need to set the build platform to ‘Windows standalone’. For more information look at this tip.
* If the Unity editor crashes, when you start demo-scenes with face-tracking components, please look at this workaround tip.
* If the demo scene reports errors or remains in ‘Waiting for users’-state, make sure you have installed Kinect SDK 2.0, the other needed components, and check if the sensor is connected.
* Here is a link to the project’s Unity forum: http://forum.unity3d.com/threads/kinect-v2-with-ms-sdk.260106/
* Many Kinect-related tips, tricks and examples are available here.
* The official online documentation of the K2-asset is available here.
Known Issues:
* If you get compilation errors, like “Type `System.IO.FileInfo’ does not contain a definition for `Length‘“, open the project’s Build-Settings (menu File / Build Settings) and make sure that ‘PC, Mac & Linux Standalone’ is selected as ‘Platform’ on the left. On the right side, ‘Windows’ must be the ‘Target platform’, and ‘x86’ or ‘x86_64’ – the ‘Architecture’. If Windows Standalone was not the default platform, do these changes and then click on ‘Switch Platform’-button, to set the new build platform. If the Windows Standalone platform is not installed at all, run UnityDownloadAssistant again, then select and install the ‘Windows Build Support’-component.
* If you experience Unity crashes, when you start the Avatar-demos, Face-tracking-demos or Fitting-room-demos, this is probably due to a known bug in the Kinect face-tracking subsystem. In this case, first try to update the NVidia drivers on your machine to their latest version from NVidia website. If this doesn’t help and the Unity editor still crashes, please disable or remove the FacetrackingManager-component of KinectController-game object. This will provide a quick workaround for many demo-scenes. The Face-tracking component and demos will still not work, of course.
* Unity 5.1.0 and 5.1.1 introduced an issue, which causes some of the shaders in the K2-asset to stop working. They worked fine in 5.0.0 and 5.0.1 though. The issue is best visible, if you run the background-removal-demo. In case you use Unity 5.1.0 or 5.1.1, the scene doesn’t show any users over the background image. The workaround is to update to Unity 5.1.2 or later. The shader issue was fixed there.
* If you update an existing project to K2-asset v2.18 or later, you may get various syntax errors in the console, like this one: “error CS1502: The best overloaded method match for `Microsoft.Kinect.VisualGestureBuilder.VisualGestureBuilderFrameSource.Create(Windows.Kinect.KinectSensor, ulong)’ has some invalid arguments”. This may be caused by the moved ‘Standard Assets’-folder from Assets-folder to Assets/K2Examples-folder, due to the latest Asset store requirements. The workaround is to delete the ‘Assets/Standard Assets’-folder. Be careful though. ‘Assets/Standard Assets’-folder may contain scripts and files from other imported packages, too. In this case, see what files and folders the ‘K2Examples/Standard Assets’-folder contains, and delete only those files and folders from ‘Assets/Standard Assets’, to prevent duplications.
* If you want to release a Windows 32 build of your project, please download this library (for Kinect-v2) and/or this library (for Kinect-v1), and put them into K2Examples/Resources-folder of your project. These 32-bit libraries are stripped out of the latest releases of K2-asset, in order to reduce its package size.
What’s New in Version 2.21:
1. Added ‘Fixed step indices’ to the user detection orders, to allow user detection in adjacent areas.
2. Added ‘Central position’-setting to the KinectManager, to allow user detection by distance, according to the given central position.
3. Added ‘Users face backwards’-setting to the KM, to ease the backward-facing setups.
4. Updated Nuitrack-sensor interface, to support the newer Nuitrack SDKs (thanks to Renjith P K).
5. Cosmetic changes in several scenes and components.
Videos worth more than 1000 words:
Here is a video by Ricardo Salazar, created with Unity5, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.3:
…a video by Ranek Runthal, created with Unity4, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.3:
..and a video by Brandon Tay, created with Unity4, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.0:
Thank you for the great asset. How can I put 3D objects in front of the background-removal image, so that the user can “hide” behind an object?
You can use different cameras, rendering different layers. See the 2nd background-removal demo, to get the idea and example setup.
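For illustration, here is a minimal sketch of that two-camera idea. The layer names and camera settings below are assumptions for the sketch, not values taken from the demo:

```csharp
using UnityEngine;

// Minimal two-camera sketch: one camera draws the background-removal image,
// a second camera draws the 3D objects the user can hide behind.
// The "Background" and "Foreground" layer names are assumptions - create
// them in Edit / Project Settings / Tags and Layers first.
public class TwoCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera bgCam = new GameObject("BackgroundCamera").AddComponent<Camera>();
        bgCam.depth = 0;                                     // rendered first
        bgCam.cullingMask = LayerMask.GetMask("Background");

        Camera fgCam = new GameObject("ForegroundCamera").AddComponent<Camera>();
        fgCam.depth = 1;                                     // rendered on top
        fgCam.clearFlags = CameraClearFlags.Depth;           // keep the image below
        fgCam.cullingMask = LayerMask.GetMask("Foreground");
    }
}
```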
Indeed. Thank you for your reply!
Hi Rumen F.
I’m a student at Hanyang University.
How can I get ‘Kinect v2 Examples with MS-SDK’?
Please send me your request from your university e-mail address, to prove you really study there. That’s all.
Hi Rumen, is it possible to use the Playmaker actions with Kinect v2? I have a complex project on Kinect v1 that uses this asset, and I need to update it to run with Kinect 2. Thanks.
Hi, there are some example PM actions in the K2-asset. You can see them in PlaymakerKinectActions.zip, located in the KinectScripts/Playmaker-folder. They can be unzipped when the Playmaker-package is also imported. More actions could be added when needed – just use the current ones as examples. Please tell me if you need more information.
Hi Rumen,
how can I get the Playmaker-package? If you could facilitate me, I would appreciate it.
Hi Rumen, I want to build the FittingRoom scene at 1080×1920 resolution. How should I do that?
You cannot turn the sensor 90 degrees, hence there is no way to achieve this resolution. There is a portrait mode available though (608 x 1080) for displays with aspect ratio 9:16. Here is a tip on how to do it: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t19
I want to get this example for free. How can I show you I’m a student? And I need your e-mail address.
evol2121@naver.com – it’s my e-mail address.
Send me an e-mail from your university e-mail address, to prove you’re eligible for a free copy. See the About-section above for my contact info.
Hi, Rumen.
I need to get a picture with good quality, but the Kinect has a poor-quality camera, so is it possible to combine the Kinect sensor with a good camera? How can I calibrate the Kinect and the camera to make them work together?
https://i.imgsafe.org/3152a65abe.png
Not sure, but I suppose you need to map the Kinect color camera image (1920 x 1080) to your quality camera image. By the way, do you mean 1080p is poor quality?
Hi,
can you please tell me how to control the cursor using WristLeft or WristRight, instead of HandLeft and HandRight?
Hi, open KinectScripts/InteractionManager.cs, then replace ‘KinectInterop.JointType.HandLeft’ with ‘KinectInterop.JointType.WristLeft’ and ‘KinectInterop.JointType.HandRight’ with ‘KinectInterop.JointType.WristRight’.
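In code, the change amounts to swapping the two joint constants wherever they occur. The fragment below is illustrative only – the surrounding code in InteractionManager.cs differs between asset versions:

```csharp
// Illustrative sketch of the swap - replace all occurrences in InteractionManager.cs:
public class JointSwapExample
{
    // was: KinectInterop.JointType.HandLeft
    public const KinectInterop.JointType LeftCursorJoint = KinectInterop.JointType.WristLeft;

    // was: KinectInterop.JointType.HandRight
    public const KinectInterop.JointType RightCursorJoint = KinectInterop.JointType.WristRight;
}
```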
I tried that, but it’s not working.
Then debug why it isn’t. The source code is there.
Thank you for your quick response, sir. It’s working now.
Hello
Your package has been a blessing; Kinect is not easy. Thanks for your work.
I just want to share some of my creations with your package:
Mixed with Leap Motion + Cardboard for full-body tracking:
https://www.youtube.com/watch?v=XTtS5_yzkY4
3D hair style:
https://www.youtube.com/watch?v=xX2rxBKTu1M
Aura:
https://www.youtube.com/watch?v=3XLrrP8Sh4Q
Overlays:
https://www.youtube.com/watch?v=lOm1-iWJiYI
8bit avatar:
https://www.youtube.com/watch?v=aUt1fOoVaOI
Attraction Aura:
Thank you, Dan! There is no limit for the creative minds 🙂 Would you mind, if I share some of your videos on Twitter or on this blog’s pages?
Not at all, please feel free to do so. Your code has been a blessing 🙂
Thanks to Rumen, Dan
It is pretty amazing to see your work, guys. @Dan, as a newbie I am so curious about the combined action of Leap Motion with Kinect. It would be great if you could share the details of how you did that.
I cannot respond for Dan, but it’s not so complicated. You need to enable the ‘External hand rotations’-setting of AvatarController, to allow the model hands to be controlled by the LeapMotion-sensor. A simple implementation of this you can find in the ‘Kinect Mocap Animator’-asset, if you have it. Here is more information about LM-integration: https://ratemt.com/k2mocap/LeapMotionFingerTracking.html
Hello Rumen, I just purchased the Kinect v2 with MS-SDK examples from the Asset store, but I have some questions about the KinectAvatarDemo2.unity scene in the AvatarsDemo-folder. When I run the scene, it always reports a runtime error. I don’t know how to deal with that; I run the example in Unity 5.3.0f4. Besides, I want to use the Kinect v2 examples with an avatar character on Web Player – how can I do that? I also found there is a KinectDataServer scene in the KinectDataServer-folder – how does it work?
Hi Leon, sorry for the delayed response. I don’t work at weekends and holidays. To your questions:
The K2-asset can work in Windows standalone mode only, because it calls the Kinect SDK 2.0 API. Please open ‘File / Build Settings’ and make sure ‘PC, Mac & Linux Standalone’ is selected as ‘Platform’ on the left and ‘Windows’ as target on the right.
The KinectDataServer is the server app for the ‘Kinect v2 VR Examples’-asset (K2VR for short). K2VR is a set of lightweight Kinect scenes, which get their data over the network instead. It is aimed at mobile platforms, such as Android and iOS, and VR platforms, such as Oculus, GearVR or Vive. If you would like to try it out, please e-mail me your invoice number from the Unity Asset store and I’ll send it to you. I haven’t tested it on the WebGL platform so far (i.e. issues are possible), but I’ll try to do it this week. I’m also not sure, from a security point of view, whether a web app should have the right to get private information, such as the movement of human bodies in the room.
I have sent my invoice number to your e-mail. I also have some questions about the ‘KinectAvatarDemo2’. Thank you for your kind help.
I don’t think I have received your e-mail. See https://rfilkov.com/about/#contact for my contact information.
I’m so sorry, I sent the invoice number to a wrong e-mail address. I have sent it again. Did you receive it?
Yep, I got your e-mail this time.
Oh yeah! Can you send me the ‘Kinect v2 VR Examples’ for a test?
Yes, I will. Just wanted to make it work on WebGL-platform, too. If I’m not ready by tomorrow, I’ll send you the current release, and after the holidays – the updated one.
Thank you very much, Rumen. And have a good holiday.
Hi Rumen, first of all many thanks for your amazing plugin, it’s working really well!
For my game I’d like to use two hand cursors (one for each hand), which can be moved separately and are fully functional on their own. My current approach is to duplicate the InteractionManager and restrict each one to a specific hand. This way I managed to get two cursors, but they’re both not really functional and quite buggy. Do you have an idea for a better approach, without the use of two separate scripts?
And what about the “lasso” state for the hand cursor? I’ve never seen it in use, and the “Normal Hand Texture” is also never used.
Hi Jonas, the InteractionManager tracks both hands all the time. The code you need to modify is near the end of its OnGUI()-method and instead of displaying the cursor on cursorScreenPos, use leftHandScreenPos & rightHandScreenPos to display 2 cursor textures.
Regarding the Lasso-state, it is currently considered as closed hand too, but you are free to modify this in HandStateToEvent()-function of IM. The normal-cursor texture is used only when the user or his hands are not tracked, for instance before the user ever gets detected or gets lost.
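A rough sketch of that OnGUI() change follows. Everything except leftHandScreenPos / rightHandScreenPos (which the IM already tracks) is an assumption about names and sizes:

```csharp
using UnityEngine;

// Sketch of drawing two hand cursors instead of one. In InteractionManager
// the normalized positions come from leftHandScreenPos / rightHandScreenPos;
// the texture fields and the 64-pixel cursor size are assumptions.
public class TwoHandCursors : MonoBehaviour
{
    public Texture leftHandTexture;
    public Texture rightHandTexture;

    public Vector3 leftHandScreenPos;   // normalized 0..1, as tracked by IM
    public Vector3 rightHandScreenPos;

    void OnGUI()
    {
        if (!leftHandTexture || !rightHandTexture)
            return;

        // convert normalized positions to screen rectangles (GUI y-axis is top-down)
        Rect leftRect = new Rect(leftHandScreenPos.x * Screen.width - 32f,
                                 (1f - leftHandScreenPos.y) * Screen.height - 32f, 64f, 64f);
        Rect rightRect = new Rect(rightHandScreenPos.x * Screen.width - 32f,
                                  (1f - rightHandScreenPos.y) * Screen.height - 32f, 64f, 64f);

        GUI.DrawTexture(leftRect, leftHandTexture);    // cursor for the left hand
        GUI.DrawTexture(rightRect, rightHandTexture);  // cursor for the right hand
    }
}
```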
Hi Rumen, thanks for your reply! I have now managed to properly display a hand cursor for each hand with the method you told me. But now I’m struggling with the primaries for the hands. Only one hand at a time can be primary right now. That’s bad, because I want to use both hands for grabbing/activating stuff. Any tips on how I can set both hands to be primary all the time?
Thank you in advance!
Where exactly do you hit the primary-hand problem?
I am using InteractionDemo1 as a testing environment. In the InteractionManager script I tried to set both the isLeftHandPrimary and isRightHandPrimary bools to true, but that didn’t work. After that I tried to modify any code related to the primary bools, but it only accepts one hand as primary. On the infoGuiText you can see the primaries switching (the * after L.Hand/R.Hand changes to the other hand if I hide one hand from the Kinect). Also, the cursor on the “non-primary” hand is not changing its sprite to the grab cursor, but stays in the lasso-cursor shape, which is used right now if the hand is off-screen/not used.
E.g. I can grab stuff with the left hand. If I hide my left hand, the right hand becomes primary and it changes from the lasso cursor to the release-hand cursor. Now I can grab stuff with my right hand. The left hand still shows up as a cursor, but is not usable for interaction. The GrabDropScript only responds to the primary hand. And it also seems like the movement of one hand is affecting the movement of the other hand (just a bit).
There are probably 1-2 small mistakes in your code changes. You can debug the code, to find out what is wrong. If you get really stuck, e-mail me next week, mention your invoice number and I’ll try to help you. This week I’m quite busy, and can’t do it. By the way, there will be some changes in the interaction manager, coming with the K2-asset update later this week.
Hi Rumen,
Have you ever followed up on this request by coincidence?
I would like to implement the same functionality (2 cursors), the problem isn’t displaying them, but making them work at the same time.
Yes, we made it work back then, together with Jonas. The problem was that this required duplication of many code parts in the InteractionManager. That’s why I never added it to the IM updates. You can debug the code in the IM related to left/right hand-cursor control a bit, and you will find out what duplication or changes are needed to make it work.
Hi, Rumen. I want to say thank you first for the nice plugin you made! And I’m sorry, but I’d like to ask you a favor.
While working on my graduation project, I also got stuck in the middle of making two cursors available… I am a Unity beginner, and this process is too difficult for me.
What I want to do is make the right hand click to draw lines, and the left hand move the camera by click and drag.
I tried to solve this problem alone, but I couldn’t come up with any ideas, so I’m asking you for help. Could you give me a brief description of the process for solving this problem? It’s hard to understand if you just give me the answers. I really want to solve this problem, and I don’t have much time left. Sorry to bother you, but I’d appreciate it if you could help me.
Hi, for hand drawing see KinectDemos/OverlayDemo/KinectOverlayDemo3-scene and its script component HandOverlayer. For dragging/dropping objects see KinectDemos/InteractionDemo/KinectInteractionDemo1-scene and its script component GrabDropScript. It uses the InteractionListenerInterface to detect hand events (grip and releases), and the hand’s normalized position in Update() to move the object. I suppose you need something like this in your project, too. More information regarding the components is available in the online documentation: https://ratemt.com/k2docs/HandOverlayer.html and https://ratemt.com/k2docs/GrabDropScript.html
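For reference, such a listener could look roughly like the sketch below. The InteractionListenerInterface method signatures here are my assumption and may differ between K2-asset versions, so compare them with the interface declaration in your copy before using:

```csharp
using UnityEngine;

// Rough sketch of an interaction listener, modeled on GrabDropScript.
// Check the actual InteractionListenerInterface declaration in your
// K2-asset version - the signatures below are assumptions.
public class HandEventListener : MonoBehaviour, InteractionListenerInterface
{
    public bool rightHandGripped = false;  // true while the right hand is closed

    public void HandGripDetected(long userId, int userIndex, bool isRightHand,
                                 bool isHandInteracting, Vector3 handScreenPos)
    {
        if (isRightHand)
            rightHandGripped = true;   // e.g. start drawing the line
    }

    public void HandReleaseDetected(long userId, int userIndex, bool isRightHand,
                                    bool isHandInteracting, Vector3 handScreenPos)
    {
        if (isRightHand)
            rightHandGripped = false;  // e.g. stop drawing
    }

    public bool HandClickDetected(long userId, int userIndex, bool isRightHand,
                                  Vector3 handScreenPos)
    {
        return true;  // the click was processed
    }
}
```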
Hi,
my final-year project is a running game.
I am using the Kinect v2 SDK with your examples, and by using the avatar demo I’m able to control the avatar. My one issue is how to control the running avatar so that it is also able to collect coins with its hands.
Hi, this is a matter of physics and colliders, as to me. Put, for instance, sphere colliders around the avatar’s hands and make them rigidbodies, as in the KinectAvatarsDemo1-scene. Then add trigger colliders around the coins, and finally check for collisions in your coin-collecting script, as sketched below.
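For the coin side, a minimal sketch (the “Hand” tag is an assumption – use whatever tag you put on the hand colliders):

```csharp
using UnityEngine;

// Attach to each coin, together with a collider marked 'Is Trigger'.
// Destroys the coin when one of the avatar's hand colliders touches it.
public class CoinCollector : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Hand"))  // tag of the hand colliders - an assumption
        {
            // increase the score here, then remove the coin
            Destroy(gameObject);
        }
    }
}
```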
Hello Rumen!
I am making a VR self-defense game with Kinect in Unity for my class project. I have a Kinect for Xbox 360. Will it run Kinect Mocap and Kinect v2 with MS-SDK? If so, can I please get Kinect v2 with MS-SDK and the Mocap Animator?
Regards,
Omkar Manjrekar
If you are a student, you’re eligible to get them free of charge. Just send me an e-mail from your university e-mail address. Regarding Kinect-for-Xbox360 (aka Kinect-v1), see this tip: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t21
Hello Rumen,
Thanks for the quick reply.
I study at St. Francis Institute of Technology, Mumbai University, India, and as such we don’t have individual student mail IDs. Perhaps there is something else I can do?
Regards,
Mr. Omkar Manjrekar
I need something that proves you are really a student. You can send me a picture of your student card, for instance.
HI Rumen,
I have sent an e-mail with a picture of my identity card attached!
Hi, I bought this asset package to make games, and it has been working out pretty well. However, I am trying to shift to the Universal Windows Platform, so that I can target the Xbox market. Do you have any plans to support the UWP platform? (In Unity it is labeled as Windows Store – Universal.)
The UWP is already supported, but only with SDK 8.1. See this tip: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t25 Regarding the Universal-10 SDK and the Anniversary update that you probably mean: yes, I have plans, but I need to check first if and how this can be done. In any case, it is on my to-do list for this year.
Hi, just pointing you to this blog post https://mtaulty.com/2016/11/07/windows-10-1607-uwp-and-experimenting-with-the-kinect-for-windows-v2-update/
UWP 10 apps can currently access the Kinect as a webcam with joint tracking functionality. Microsoft has been silent on the matter of the Kinect for Windows v2 SDK being updated to support UWP 10 since the Anniversary Update.
Will you go ahead with UWP 10 support if the Kinect for Windows v2 is not updated?
Thank you! Yes, I’m working on the K2 UWP-10 interface now.
Hey, I’m making a game that requires keyboard input to write an e-mail address, but it seems the InteractionManager interferes somehow. I’m using the UI InputField. Do you have any suggestions as to what could be causing the trouble?
Thanks
The Unity event system uses only one input module – the one that is enabled and is on top of the list. That said, the InteractionInputModule probably hides the standard one (which processes keyboard and mouse input). I’ll try to reproduce and research this issue during the weekend, and look for a solution. Please e-mail me next week to check if I have found a solution or workaround.
That would be awesome!
Thank you very much, I’ll keep in touch then.
Hello Mr.Rumen!
I’m making a senior project with Kinect in Unity, about an AR fitting room, and I have a Kinect v2. Do you have any suggestions about using your package in my project? Thanks for noticing this. Sorry for my bad English.
Best Regards,
Pakkanan Satha
Hello Rumen,
I am trying to get the audio source angle from Kinect. How could I implement it? I couldn’t find a public KinectSensor variable in KinectManager.
Thank you!
Jing
Hi Jing, you can get the Kinect-v2 SDK-specific KinectSensor like this: ‘Windows.Kinect.KinectSensor kinectSensor = (KinectManager.Instance.GetSensorData().sensorInterface as Kinect2Interface).kinectSensor;’
But there were some little tricks needed to get the audio beam angle from the audio source, as far as I remember. If you get stuck, please e-mail me and I’ll try to find the audio-beam tracking class I wrote some time ago.
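Here is the same sensor access again, with the usual null checks around it (kinectSensor.AudioSource is the Kinect SDK 2.0 entry point for the audio-beam data):

```csharp
// Getting the native Kinect-v2 sensor object out of the KinectManager:
void GetSensor()
{
    KinectManager manager = KinectManager.Instance;
    if (manager == null || !manager.IsInitialized())
        return;

    Kinect2Interface k2Int = manager.GetSensorData().sensorInterface as Kinect2Interface;
    if (k2Int == null)
        return;  // another sensor interface is in use

    Windows.Kinect.KinectSensor kinectSensor = k2Int.kinectSensor;
    // kinectSensor.AudioSource is where the audio-beam data comes from
}
```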
I got it working. Thank you!
Hi Rumen,
I am also trying to use the SpeechManager. Microsoft suggests, for long recognition sessions, that the SpeechRecognitionEngine be recycled (destroyed and recreated) periodically, say every 2 minutes, based on your resource constraints. I also want to turn off the AdaptationOn flag in UpdateRecognizerSetting. How can I do those?
Thank you!
Jing
To recycle the speech manager, I suppose you need to put it in a prefab, then instantiate it and destroy the instance from time to time.
Regarding the adaptation-on/off flag, see the last parameter of sensorData.sensorInterface.InitSpeechRecognition()-invocation in the Start()-method of SpeechManager.
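A minimal recycling sketch, assuming the SpeechManager sits in a prefab (note the null check before re-instantiating; this point comes up again further down in the thread):

```csharp
using UnityEngine;

// Recycles a prefab-based SpeechManager roughly every 60 seconds.
// Destroy() takes effect at the end of the frame, so a new instance is
// created only after the old one has really become null.
public class SpeechManagerRecycler : MonoBehaviour
{
    public GameObject speechManagerPrefab;  // prefab containing the SpeechManager
    private GameObject instance;
    private float startTime;

    void Update()
    {
        if (instance == null)
        {
            instance = Instantiate(speechManagerPrefab);
            startTime = Time.time;
        }
        else if (Time.time - startTime >= 60f)
        {
            Destroy(instance);  // becomes null on the next frame
        }
    }
}
```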
I put the SpeechManager in a prefab, instantiate a clone in the Start() function, then destroy the clone and instantiate a new one every 60 seconds in the Update() function. It seems that the clones are destroyed and recreated; however, the SpeechManager script only runs once. After 60s I can create a clone, but there is no speech-recognition functionality. Do I need to specifically tell the script to start running?
Never mind, I figured it out.
Good! What was the problem?
After destroying the clone, I need to check if it is null and then instantiate a new one.
Hi Rumen,
If I use a grammar file like this one: https://msdn.microsoft.com/en-us/library/hh378351(v=office.14).aspx
the program only returns the top-level tag, Card or MoveCard. Is it possible to get the tags of the sub-keys as well? For example, I want to get ‘Card three’. Thank you!
Jing
As far as I remember, the speech recognizer could utilize quite complex grammars as well, even dynamic ones, as described here: https://msdn.microsoft.com/en-us/library/jj127913.aspx
Hey Jing,
How did you get it working?
I meant that part:
I am trying to get the audio source angle from Kinect. How could I implement it? I couldn’t find a public KinectSensor variable in KinectManager.
See KinectDemos/VariousDemos/KinectAudioTracker-demo scene and its KinectAudioTracker-script component.
Hi, Rumen.
How can I make different layers, as in the picture below?
Like in Photoshop.
Thanks
http://imgur.com/a/1ZvQS
It’s a matter of layers rendered by different cameras, as to me. See the 2nd background removal demo. It shows background + BR-foreground image + 3d objects rendered behind and in front of the BR-image.
I opened KinectBackgroundRemoval1, but I can’t see myself, and Unity didn’t print any error. Why?
1.log:K2-sensor opened, available: True
2.log:Interface used: Kinect2Interface
3.log:Shader level: 50
4.log:Waiting for users.
5.log:Adding user 0, ID: 72057594037941507, Body: 4
but the demo (background removal) didn’t work. Why?
My Unity version is 5.4.1.
Which version of the K2-asset are you using? If it is not the latest one, please download the latest and try again.
Hi Rumen, I bought your package from the Unity Store, but I don’t understand how to replace your default model with my own model. I found an instruction in another of your posts, saying “Select the model-asset in Assets-folder. Select the Rig-tab in Inspector window” (https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t3), but I cannot find such a “model-asset” folder in this version. Am I missing something?
A model asset means the fbx-file of your rigged model, placed somewhere under the Assets-folder of your Unity project. It should be placed there, in order to be visible to the Unity editor.
Hello,
as you mentioned, I am currently doing my bachelor thesis, and my project is based on Kinect body tracking, so I urgently need this package to enable me to start my project and get the data from it. But I cannot find your e-mail address to contact you, so can you help me with this, please?
Thank you.
See the About-section in the menu above.
Hello, Rumen. Fantastic work. Really helpful.
Is it possible to customize the grip event? I mean, HandState = Grip when the hand is 100% closed. Is it possible to customize that and detect HandState = Grip when the hand is, for example, 70% closed? I can’t find the class to do that in the project.
Thanks in advance
Hi, the Kinect SDK provides only 3 discrete hand states – Release, Grip and Lasso. In this regard, there is no percentage available. These states are processed as hand interactions by the InteractionManager (script component of the KinectController-game object) in the demo scenes.
Hello,
is it possible to use your package to create an Xbox One app in Unity, and then run your examples on the actual Xbox One device with a Kinect 2 connected? So that I export the app from Unity as an Xbox One app and run the examples on the device.
Regards
You can run most of them on UWP (Windows 10) platform. I suppose this includes Xbox One as well. Anyway, cannot be sure, because I don’t have Xbox One to test. See: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t33
Hi @wojoeviewer, did you manage to run a Kinect app on the Xbox One? I have some troubles with that.
5.4 after the release of the I7 core display (no independent graphics) does not run on the process, 4.x is very smooth
Sorry, but I could not understand your question or issue. May I ask you to provide more details?
Hi Rumen 🙂
here with another question: do you have some tips, or maybe can you give me a direction, on how I can achieve this (hole in the wall): https://static4.gamespot.com/uploads/screen_kubrick/mig/9/5/3/5/2109535-169_hole_in_the_wall_gp_4_co_op_xbox_360_082311.jpg
I have (I guess) an idea to fill an image-mask percentage using the 2D RGB silhouette from the Kinect, but I think you may have a better idea.
best regards,
Hi Aldo, I’m not sure if it is the same, but take a look at the 3rd and 4th background-removal demo scenes. They do something similar, I think.
Hi Rumen,
I purchased this asset on the Asset store, and it has worked great for me.
I have a question regarding the “seated” tracking mode. I have an app where the lower half of the user’s body (legs and torso) is not visible to the Kinect, and according to this: https://msdn.microsoft.com/en-us/library/hh973077.aspx , seated mode is more suitable for my case.
I tried the current tracking mode, and it seems fine when parts of the legs are in range, but once the legs are out of range everything becomes a mess, even the hands, which are the only thing I really want to track.
However, it seems that this functionality isn’t accessible via this asset as of now. Is there a workaround, so I can access “kinect.SkeletonStream.TrackingMode”?
Best Regards,
Hi, there is no seated mode in the Kinect SDK 2.0 anymore. It is a Kinect-v1-specific mode, as far as I know. But even then, I never utilized this mode in my Unity assets, because it changes the body hierarchy.
I would recommend using the AvatarControllerClassic-component instead of the AvatarController. Then assign to its settings only the joints belonging to the upper part of the avatar’s body. See the 3rd avatar-demo, if you need an example.
If this solution is not enough, feel free to e-mail me your invoice number and paypal account, and I’ll refund you the money.
Thanks for the reply,
I didn’t notice that seated mode is v1-only. I will try the AvatarControllerClassic demo and see if it gives better results, but I don’t think it will, because the problem is apparent in the user-map display even when there are no avatar controllers at all. That’s why I sought a different algorithm and thought seated mode was the solution.
I will try a few different things, maybe we can redesign the project so it can rely on gestures or something.
And don’t worry about the refund; I already used the asset in a different project and it worked fine.
Thanks again and best regards,
Hello Rumen!
I am new to Kinect and Unity! May I ask: is it necessary to buy this asset to control an avatar in Unity? I am a student, and it is expensive for me to purchase this asset. Is there another solution for controlling an avatar in Unity with Kinect v2?
Look at the ‘Free for education’-section above.
Hi Rumen,
I have an issue with the KinectManager script’s ‘Player calibration pose’-setting. When I set it to anything other than None, my model doesn’t appear. How do I set it to calibrate via the T-pose? Thanks!
Hi, make sure the KinectGestures-component is added to the KinectController-object in the scene, as well. The processing of all poses and gestures is in there.
Hi Rumen
What can be done so that the avatar’s hands do not go through its body? We have seen that in your examples there are some colliders, but we do not know if they are aimed at avoiding this situation.
Thanks
Hi Mario, the current colliders are more aimed at preventing the avatar from going through other objects, but I think preventing the hands from going through the body could be done similarly. Another option would be to make proper muscle adjustments in the avatar definition, and then enable the ‘Apply muscle limits’-setting of the AvatarController. Please experiment a bit.
Hi Rumen,
First, I’d like to thank you for a great asset! Works wonderfully well except for one little problem which I hope you can help me resolve.
I’ve been using an older version of the package for nearly a year now, and only recently have I updated it to the latest version. After the update, I was not able to move the position of the avatar, so I compared the AvatarController script between the two versions, and I noticed that you’re now setting the global position of the game object instead of the local position. Is there a way for me to manipulate the avatar’s position globally, like it was previously possible?
Thanks in advance,
Sid
Hi Sid, why don’t you just copy back the older version of AvatarController, in case you find it more useful? I suppose you would need to fix some syntax errors in the process (like method invocation parameters, etc.), but it should be possible. In this regard, feel free to email me, if you have questions or get stuck anywhere in the process.
Hello, I am a student at the Instituto Metropolitano de Diseño in Ecuador, and I am working on my degree thesis, for which I need the asset (Kinect v2 Examples with MS-SDK). I would like to know if you could help me with it, since the project will not be commercialized or replicated. Thank you in advance for your prompt response. [Originally in Spanish]
Sorry, my Spanish is not good at all. If you want to get a student copy of the K2-asset, please send me your request FROM your university e-mail address.
Hi Rumen
Would it be possible to display the joints’ orientations (positions) in the scene display?
Thanks.
Sure. Actually, if you enable the Cubeman-object in the KinectAvatarsDemo1-scene, you should see the orientations of all tracked joints as the orientations of the respective cubes. Scale them up, if needed.
Hi, Rumen. I’m a student who sent you e-mail last week.
Thanks to your support, our research is going well (for now). However, an unexpected problem has come up. We’re now using two Unity packages that you made (‘Kinect v2 Examples with MS-SDK’ [I downloaded it for free from the Unity Asset Store, but the free version has been removed from the store…] and ‘Kinect MoCap Animator’), but we have somehow run into a script error…
Hi, the K2-asset was never free on Unity asset store. Maybe you downloaded the K1-asset, which is free from the start. And I don’t understand why you combined it with the mocap animator, which is a separate tool, and what script error you faced in the end.
Ah… Was it impossible to combine these two tools? I didn’t know that 🙁
It seems that I cannot attach an image file here. The error was: ‘Assets/KinectMocapFbx/Scripts/KinectFbxRecorder.cs(268,3): error CS1525: Unexpected symbol ‘throw’’
It is possible to combine them, if you avoid duplicating the scripts they both use, but I don’t see much sense in combining the two packages. By the way, ‘throw’ is a C# keyword, so maybe you messed something up in general, if you get this error. Make sure the platform in ‘Build settings’ is set to ‘PC, Mac & Linux Standalone’, and the ‘Target platform’ is set to ‘Windows’.
Hi!
Is it possible to use your examples on a Mac with macOS/OS X, without a Kinect?
What I would like to achieve is to record skeleton movements on Windows and be able to play it back on Unity for Mac. This way I could program most of my business logic without needing an actual Kinect. Can I do this using your KinectManager and related classes?
Thanks,
Balazs
Hi, I think you can. Try to do as follows:
1. Run KinectDemos/RecorderDemo/KinectRecorderDemo-scene and record the needed movements. You can make several recordings by changing the name of the output file (‘File path’-setting) in KinectRecorderPlayer-component. Here is more info regarding this component: https://ratemt.com/k2docs/KinectRecorderPlayer.html
2. Copy the recorded files to the project on the Mac-machine.
3. Open the KinectScripts/KinectInterop.cs-script in the Mac project. Near the beginning of the script there is a definition of an array called SensorInterfaceOrder. In this array, before ‘new Kinect2Interface()’, add ‘new DummyK2Interface(), ’. This will make the package use a dummy sensor interface instead of the interface to a real Kinect sensor (see the sketch after this list).
4. Add the KinectScripts/KinectRecorderPlayer-script as a component to the scene. I usually add these components to the KinectController-game object. Then set its ‘File path’ to point to the copied recording file, and don’t forget to enable its ‘Play at start’-setting, too. It will start the replay right after the scene starts.
5. This should do the job. Run the scene to check if it works or not.
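Step 3 above amounts to the following edit in KinectScripts/KinectInterop.cs. The element type and the other entries in the array depend on the asset version; only the added first entry matters here:

```csharp
// KinectInterop.cs - sensor interfaces, in the order they are tried:
public static DepthSensorInterface[] SensorInterfaceOrder = new DepthSensorInterface[]
{
    new DummyK2Interface(),   // <- added: replays recordings instead of a real sensor
    new Kinect2Interface(),
    new Kinect1Interface()
};
```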
Hello there,
can I make my own outfit using the fitting-room scene? If that can be done, I will buy the package.
https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t12
Hi Rumen,
The package still works like a charm. One question regarding the BackgroundRemovalManager: I’m creating a white silhouette by setting ComputeBodyTexOnly to true, but the silhouette is only truly white when I set the dilate/erode iterations to something other than zero. When both are zero, which is my preferred setting, it becomes slightly greyish.
I could probably hack around it, but I was hoping for a clean solution. Do you have any suggestions?
Thank you, j
Yes. Please open Resources/BodyShader.shader, and near its end replace ‘float clrPlayer = (240 + player) / 255;’ with ‘float clrPlayer = 1.0;’. Hope this will make the body-texture completely B/W.
Thank you. Again, quality package and service.
Dear Mr. Filkov,
I am using your tool and I find it amazing, thank you!
I only have one issue: I project the scene on the ground with a projector, and I want to map my position as seen by the Kinect (which is perpendicular to me, in its “normal” position) into the virtual environment, which has a camera from above.
I am using the first avatar scene as an example, but I don’t understand where and when in the scripts the position of the user is taken and sent to the avatar, to map the position.
If I simply import your assets and scripts into my scene, my position on the ground obviously does not match the position the avatar has on the ground.
Thank you very much
Hi, as I understand the sensor is in front of you (as expected), while the projector is on the ceiling. Please take a look at KinectDemos/ProjectorDemo/KinectProjectorDemo-scene (enable the U_Character-game object in the scene) and this tip here: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t34 I suppose it could be re-configured to correspond to your setup.
Regarding the position of the avatar model in the scene: This is done in the MoveAvatar()-method of KinectScripts/AvatarController.cs (component of the model in the scene), and depends on whether ‘Pos relative to camera’-setting references a camera in the scene or not. Here are some hints regarding the avatar-controller settings: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t39 and here is the AC documentation, as well: https://ratemt.com/k2docs/AvatarController.html Do mind, you are also free to modify or extend the AvatarController’s code, to match best your needs.
Dear Mr. Filkov,
Thank you for your answer. The projector is not on the ceiling, but above the Kinect, at about 3 meters from the ground, so the mapping between the virtual world and the real world is not direct, because the projector is not perpendicular to the ground.
I have inspected the projector demo scene, but I don’t think it will be feasible to adapt it.
I think I will make my own controller, using yours as a reference, as I only need the user to be followed by a light aura in the scene and to recognize some movements, without actually having the avatar mimic them.
Only one other question: how do you suggest managing the mapping between the real and the virtual world with the Kinect?
Meaning, one side of the projection (so, of the virtual environment) starts exactly where the Kinect is.
If I am standing 1.5 meters from the Kinect, how could I have Unity understand that I am standing 1.5 meters from the side of the virtual world, and consequently have the light aura drawn there? I need a sort of mapping of the real projection to the virtual world. Do you have any suggestion for a good starting point from which I could start working on that?
Thank you very much again
The sensor will detect the user’s coordinates in its own coordinate system. They need to be re-projected into your projector’s coordinate system, no matter where the projector is and how it is oriented. In Unity terms, these are two cameras looking at the same physical/virtual world. You need to estimate the position, orientation & projection matrix of the projector, to match the world “seen” by the Kinect. This is usually called ‘calibration’ and can be done relatively easily with the calibration tool of the RoomAlive toolkit. Then this ‘calibration data’ can be utilized by the projector demo I pointed you to above. I’m not quite sure what other starting point you need. Of course, if you are good at math and geometric/matrix transformations, you could estimate the needed calibration manually.
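For reference, the relation the calibration estimates is the standard pinhole-camera mapping (general computer-vision math, not something specific to the K2-asset):

```latex
x \;\simeq\; K \, [\,R \mid t\,] \, X
```

where X is a world point in homogeneous coordinates (e.g. in the Kinect’s coordinate system), R and t are the projector’s rotation and translation relative to that system, K is the projector’s intrinsic matrix, and x is the resulting projector pixel, up to scale.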
Dear Mr. Filkov,
thanks again for the answer.
I’ll then proceed to investigate the Projector Demo Scene a bit more and, if necessary, to do some math.
Thank you for your help
Hello, Rumen
Many thanks for the package
I have a problem with the FittingRoom2 demo. The clothes stay in front of me. How can I move the clothes back?
Look here: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t24
Does it support Kinect v1?
https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t21
The User Body Blender is disabled when I run the project. How can I fix it?
UserBodyBlender works with Kinect-v2 only.
Hello, Rumen
Thanks for the package, it was amazing !!
I faced a problem when exporting my project to Win10 UWP. I managed to export it, but when I run the app, it gives me this error:
” Background removal cannot be start, as kinect manager is missing or cannot initialize”
However, I have no issue when I export it as a standalone .exe app.
By the way, I am using Kinect v1.
Thank You very much
As far as I remember, Win10 UWP does not support Kinect-v1; hence, there is no K1-UWP interface in the K2-package either. Apart from that, make sure that ‘.Net’ is selected as the scripting backend. I think Unity recently switched to IL2CPP as the default scripting backend. Here is the tip regarding Win-10 UWP builds: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t33
Hi, Rumen
Thanks for the reply, I will surely take a look at that~
Hi, Rumen
I would like to ask a question:
how can I set the avatar to animate itself (play an animation) while no user is tracked, and still be controllable by the user when a user is detected?
See the KinectDemos/RecorderDemo/Scripts/PlayerDetectorController.cs script component. It plays a recorded sequence while there is no user around, as far as I remember. Here is more information about it: https://ratemt.com/k2docs/PlayerDetectorController.html
Thanks for the reply, but what if I want the avatar to play certain default animations (idle, walking) when it loses track of the selected player index?
Is there any function like OnUserTrack or OnUserLost, where I can insert my code?
Thanks
https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t14
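If you prefer polling over callbacks, a minimal sketch could look like the component below. It assumes the KinectManager methods used throughout the demo scripts (IsInitialized(), IsUserDetected()); the component fields are placeholders for your own setup:

```csharp
using UnityEngine;

// Toggles between a default Animator animation (idle/walking) and Kinect
// control of the avatar, depending on whether a user is currently tracked.
public class IdleWhenNoUser : MonoBehaviour
{
    public Animator animator;                  // plays the default animations
    public AvatarController avatarController;  // Kinect control of the model

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        bool userTracked = manager != null && manager.IsInitialized() && manager.IsUserDetected();

        animator.enabled = !userTracked;        // animate while nobody is tracked
        avatarController.enabled = userTracked; // otherwise the Kinect drives the model
    }
}
```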
Can two people try on clothes at the same time?
See p.14 here: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t12
Thank you Rumen
Hi, Rumen.
Can I use a camera other than the Kinect camera in the fitting-room scene?
You could, but I’m not sure you would want to do that for a school project. You would have to calibrate the high-quality RGB camera with the Kinect’s depth camera, and then replace the camera-polling and coordinate-mapping functions in the Kinect-interface class with your own. On the other hand, Kinect-v2 has a 1920×1080 color-camera resolution, which is not that bad, as to me.
The school project is finished, with your help. Thank you very much. Now I am asking in order to learn how to do the development myself.
How can I do the camera replacement? Could you explain it a little more clearly? I am developing on my own now.
As I already said, I have not done a camera replacement so far. If you do it anyway, you would need to provide the remapping algorithms: from depth to color space, and from color to depth space. This is the hard part. First research and experiment a lot, to un-project the points from one space and project them into the other. When the results are reliable enough, replace the color-camera and color-space-related functions in KinectScripts/Interfaces/Kinect2Interface.cs. That would be all, as to me.
If you have a sample of using a different camera, could you send it to me? I really need it, and I have no idea how to do it…
Hi~ When I run the FittingRoomDemo, there are some points on the model. Do you know how to fix this?
Try to disable the UserBodyBlender-component of the MainCamera in the scene.
Hi, I purchased your Kinect plugin and it was a blast.
However, I tried to do something different from your demos: let’s say I’m trying to make an interactive floor. I tried to place the Kinect sensor overhead, facing down at the floor, and I’m struggling to make it work, since your KinectManager only triggers if the sensor catches a full body first. I want the sensor to track the user’s movement from above.
Do you have any tips on how to achieve this?
Hi, I think I answered a similar question on the K2-forum. And no, it’s not my KinectManager; the Kinect sensor can detect bodies only frontally, not from above. In this case you can get the color and depth images from the KM, and then use some kind of image processing to locate the user blobs in them.
What is the specification of the clothes in the fitting-room demo? I am asking a 3D artist to make more models based on 2D images. What kind of software do you use to produce the 3D clothing models? Thanks!
I’m not a model designer or 3d artist, and don’t create the models myself. Generally speaking, the clothing models are just normal Unity humanoid models (bipedal models with Humanoid rig set in Unity). The only requirement is that they have bone lengths proportional to human bones, as detected by the Kinect. Otherwise the model may not cover the user’s body very well. See the 2nd overlay demo, to see what I mean. And when you have the model ready, experiment with the scale factors of its AvatarScaler-component, as well.
Thanks. Is it necessary for the clothes models to have a rig inside?
Yep, they need to have humanoid rig in the current setup. Of course, you are free to modify the category/model-selector scripts as needed, to utilize a single model (or several models – man, woman, boy, girl) instead, and only change the clothing textures over them. I’m not sure if this would fit all possible clothing cases though.
Hi Rumen, what a great job. I bought your script a couple of weeks ago, and I didn’t want to ask you before reading all the documentation and trying everything possible… but I am running out of time for my project…
I am using OverlayDemo1, and I just want a 3D helmet object to appear on my head, so I added the object as a prefab; in the JointOverlayer script I changed the tracked joint to Head, and the overlay object to Helmet (transform). When I try the animation, the 3D helmet object appears far away from my head and moves erratically as I move. I even tried to put the helmet on my right hand, as in the example, but when I move my hand the helmet seems to move too high, not on my hand, and sometimes disappears off the screen. How can I solve this issue?
Hi, make sure the helmet’s transform in the scene has Y-rotation = 180 degrees, i.e. is turned to you when you run the scene. Anyway, I think in your case (helmet on the head), you should use the 2nd face-tracking scene in KinectDemos/FacetrackingDemo-folder instead (the one with the Viking-hat), and build on it. If the overlay issues persist, please contact me by e-mail, and send me over the helmet model you are using, so I could experiment a bit with it.
Hi Rumen, thanks for your quick response. I was struggling to put the helmet in a position where it covers my head, and I did it: it was just a matter of selecting the whole mesh of the model and setting its transform position to 0,0,0. Is this the only way? Before doing this, the helmet appeared in a different position than my head; this is solved now. But I have another issue: the Roman helmet is on my head, but it covers my face. Is there any way to have my face appear inside the helmet, not covered by it? I already checked your Viking hat, and it’s exactly what I need: it appears on top of my head, but the rear of the hat doesn’t cover my face. How can I achieve that? I guess it is something related to a shader. I will e-mail you the helmet fbx-file and a picture of the issue. Thanks in advance.
No, I don’t think it’s the shader, but rather some properties of the model. Look at the inside of the hat model and the inside of your helmet model. The inside of the helmet should be invisible too, and this will make the user’s face visible, not occluded. Unfortunately, I’m not a model designer, to tell you how to do that.
Hi Rumen! First, congratulations on your great job. I have a question about my project. I have a project that runs in standalone mode, but when I switch to UWP it doesn’t work. I have tested with all the Kinect updates and made a test project with your package. In standalone demo mode everything works great. But when I switch the platform to UWP, there are no errors; the Kinect just keeps waiting for users. Maybe I forgot something, if I’m the only one who has this issue.
Hi, please make sure you have followed these instructions, when building for UWP: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t33
Since Microsoft stopped producing the Kinect sensors some months ago, I stopped updating the UWP support in the K2-asset, as well. So, there is no guarantee all the scenes will work.
Thanks for your response. I succeeded in building for UWP with an older version of Unity (5.x). Your tip explains the Windows Store export in Unity, but in recent versions of Unity (2017.x) they replaced the Windows Store export with the UWP export. Maybe you have an idea of what is different when exporting with a recent version of Unity. I think it’s a parameter problem, but I can’t find it.
I’ll check it again with the latest version of Unity. What happened when you built for UWP on Unity 2017.x? Did you get errors while building the project in Unity, when you compiled it in VS, or when you ran it? As far as I remember, ‘Universal Windows Platform’ in 2017.x was just a re-brand of the previous ‘Windows Store’ platform.
On my first attempt, yes. But then I tried the latest version of Unity with an old version of your package. Then I tried with a newer version of your package, and I got a plugin-related error, because I had the old plugin together with the new MultiK2. I fixed this error and ran it in play mode, but the demo kept waiting for users. After a lot of tests I built a project without errors with the UWP exporter, but the project won’t work in Visual Studio. I hope this is helpful. I’m new to the Windows exporter and, like you say, I don’t think it is a major problem; maybe a little parameter problem. Thanks for your reactivity.
Hello Rumen F, I have two questions to ask:
(1) Can I modify this asset of yours, or can you tell me how I can just put images of clothes (transparent PNGs) over the user’s video feed at run time, instead of using a 3D skeleton model or skinned mesh renderer in Unity3D?
(2) Can I load images and data externally from a server in your fitting-room demo scene?
How can we do this? If you could reply and explain it a bit, it would be highly appreciated.
Thanks.
Hello, to your questions:
1. You need to modify at least the LoadDressingModel()-method of KinectDemos/FittingRoomDemo/Scripts/ModelSelector.cs, to apply the png-file as texture of the current model. In this case you could use only one humanoid model per category, and change its textures when the user changes his or her selection.
2. This is a general Unity question. I think you could use the WWW-class to load the texture png-files from a web server (see the sketch below).
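A minimal sketch of point 2, using the WWW-class (the URL and renderer fields are placeholders for your own setup):

```csharp
using System.Collections;
using UnityEngine;

// Downloads a clothing texture from a web server and applies it to the
// current model's material. WWW was the usual way to do this at the time;
// newer Unity versions would use UnityWebRequestTexture instead.
public class ClothingTextureLoader : MonoBehaviour
{
    public Renderer modelRenderer;  // renderer of the current clothing model

    public IEnumerator LoadTexture(string url)
    {
        WWW www = new WWW(url);     // e.g. "http://your.server/clothes/dress01.png"
        yield return www;           // wait for the download to complete

        if (string.IsNullOrEmpty(www.error))
            modelRenderer.material.mainTexture = www.texture;
    }
}
```

You would start it with StartCoroutine(LoadTexture(url)) whenever the user changes the selection.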
What if I don’t want to use 3D models at all, and only PNG image files to overlay on the user’s body?
Of course, you are free to use whatever you want. That’s why all the scenes in the K2-asset are demo, not anything mandatory.
Can you please explain how I can use only images for my dress-up demo? The thing is, I don’t want to use 3D models at all, only PNGs. Do I need to create an empty game object? Please help me, I need your help. I tried using only images, but they are not overlaying on the live feed of my character image. Thanks.
I have no idea how to use only images, to overlay the user’s body.
Hi Rumen: I always seem to get this error when starting any of the demos: “Missing assembly net35/unity-custom/nunit.framework.dll for TestRunner. Extension support may be incomplete. UnityEditor.Modules.ModuleManager:InitializeModuleManager()” Any ideas?
This looks like an internal Unity error, but I suppose it doesn’t stop you from running the demo scenes or using the package.