Kinect-v2 VR Examples

Kinect-v2 VR Examples is a set of Kinect-v2 (also known as ‘Kinect for Xbox One’) virtual reality examples. To run it, you need to download the ready-made build of KinectDataServer from the download section below. Alternatively, you can run or build the KinectDataServer scene yourself, if you have the ‘Kinect v2 Examples’-package.

This package contains ten demo scenes. The avatar demos show how to utilize Kinect-controlled avatars in virtual reality scenes. The gesture demos show how to use Kinect and visual gestures in VR scenes. The interaction demos show how to use hand interactions in virtual reality. The speech-recognition demo shows how to use Kinect speech recognition in virtual reality scenes.

All Kinect-related components used by the scenes in this package are lightweight. They don’t use the Kinect sensor directly, but instead receive the sensor data over the network from the Kinect data server. The data server must run on a machine where the Kinect-v2 sensor is connected. The package supports standalone and mobile builds, as well as VR builds for platforms like Oculus, Gear-VR, Vive, Cardboard, etc. It works in both Unity Pro and Unity Personal editors.

This package is deprecated at the Unity Asset Store, as of 01.Sep.2017. The reason: its components and functionality are now integrated into the K2-asset (‘Kinect-v2 Examples with MS-SDK’), and I prefer not to maintain two projects with similar functionality. If you still need it or its demo scenes, please e-mail me your request along with the invoice number for the K2-asset you got from the Unity Asset Store. I’ll send it to you free of charge.

Comparison of the Kinect-v2 packages:

| Kinect-v2 Examples with MS-SDK (K2-asset) | Kinect-v2 VR Examples (K2VR-asset) |
| --- | --- |
| Full-featured Kinect-v2 package. Utilizes all features and streams of the Kinect-v2 (and Kinect-v1) sensors. | Mobile version of the K2-asset. Utilizes body data, hand interactions, gestures and voice commands. |
| Gets all its data from the sensor connected to the same machine. | Gets its data from the KinectDataServer over the network. The KinectDataServer must run on the machine where the sensor is connected. |
| Works on Windows-standalone (x86, x64) and UWP platforms. | Works on virtually any platform with access to the network. |

Free for education:
The package is free for academic use (i.e. in schools, colleges and universities, by students, teachers or researchers). If you meet this criterion, please e-mail me to get the K2VR-asset directly from me.

One request:
My only request is NOT to share the package or its demo scenes in source form with others without my explicit consent, regardless of whether you purchased it at the Unity Asset Store or got it from me free of charge for academic use. I’m still not sure why I should have to say this explicitly, but obviously it was not clear so far. Please respect my work.

How to run the examples:
1. Make sure the Kinect data server is running on a server machine where the Kinect-v2 sensor is connected.
2. Download and import this package into a new Unity project.
3. Open ‘Build settings’ and switch to the ‘Android’, ‘iOS’ or ‘PC, Mac & Linux Standalone’ platform.
4. Open a demo scene of your choice. The demo scenes are located in the Assets/Kinect2MobileVr/DemoScenes-folder.
5. If the server is running on the same WLAN subnet, it should be discovered automatically when the scene runs. You don’t need to change any server-related settings in this case.
6. Alternatively, you can manually set the ‘Server host’ and ‘Server port’-settings of the KinectDataClient-component of the KinectController game object, if you know the Kinect data server’s IP-address and port (see the script sketch after this list).
7. Run the scene in the editor first, to make sure it works and the configured server settings are correct.
8. Build the scene for a mobile or virtual reality platform and test it on a mobile device. Make sure the mobile device is on the same WLAN subnet as the Kinect data server.
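If you prefer to configure the connection from a script instead of the Inspector, here is a minimal sketch of the idea. The field names serverHost and serverPort on the KinectDataClient-component are assumptions for illustration and may be named differently in your version of the package:

```csharp
using UnityEngine;

// Hypothetical sketch: points the Kinect data client to a known server before it connects.
// The serverHost/serverPort field names are assumptions, not the documented K2VR API.
public class ServerSettingsSetup : MonoBehaviour
{
    public string kinectServerHost = "192.168.1.10";  // example IP of the machine with the Kinect-v2
    public int kinectServerPort = 8888;               // example port the KinectDataServer listens on

    void Awake()
    {
        KinectDataClient dataClient = GetComponent<KinectDataClient>();

        if (dataClient != null)
        {
            dataClient.serverHost = kinectServerHost;
            dataClient.serverPort = kinectServerPort;
        }
    }
}
```

Attach such a component to the KinectController game object, next to the KinectDataClient-component, so it runs before the client tries to connect.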

For more information and short descriptions of the available demo scenes, see the Readme-file in the Assets/Kinect2MobileVr/_Readme-folder of the package.

Download:
* The official release of the ‘Kinect v2 VR Examples’-package is deprecated at the Unity Asset Store. See above for more info.
* For ‘Kinect-v2 VR Examples’ v1.2 and above – here is the KinectDataServer v1.4, built for different versions of Unity:

Please download the respective file for the Unity version you currently use. Then unzip it into a folder on the machine where the Kinect sensor is connected. When you run it for the first time, allow public network access if the operating system asks. Otherwise the Kinect data client may not be able to connect to the server.

Short Descriptions of the Demo Scenes
1. KinectAvatarDemo1 – This is a first-person avatar demo. Move your arms or legs, and try to look at them, to see how the sensor tracks you.
2. KinectAvatarDemo2 – This is a third-person avatar demo. Again, move your arms and legs, or move or turn a bit left or right, to see your mirrored movements from a third-person perspective.
3. FlyerGestureDemo – Lean left or right to move the flyer horizontally – left or right. Jump or squat to move it vertically.
4. KinectGestureDemo1 – This is the discrete-gestures demo. Swipe left, swipe right or swipe up to turn the presentation cube in the respective direction (see the script sketch after this list).
5. KinectGestureDemo2 – This is the continuous-gestures demo. Use the Wheel gesture to turn the model left or right, or the Zoom-in/Zoom-out gestures to scale the model. Lower your hands between gestures, to stop the previous gesture.
6. VisualGestureDemo – This is a very basic demo that allows you to check how the visual gestures, configured at server-side, get recognized.
7. KinectInteractionDemo1 – Use your left or right hand to control the hand-cursor. Grab an object & drag it around. Open your hand to release it. Try to interact with the UI components, too.
8. KinectInteractionDemo2 – Grab the cube with your left or right hand. Then turn it in all directions, to look at all its sides.
9. SnowflakeShooterDemo – Look at the falling snowflakes. Close your left or right hand to shoot the snowflake you’re looking at. Keep in mind that your shooting hand must be high enough, so the sensor can clearly see its state.
10. KinectSpeechRecognition – Clearly say one of the listed commands to control the robot. The grammar is configured at server side.
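If you want to react to Kinect gestures from your own scripts, here is a rough sketch of the polling approach used by the K2-asset components. The exact KinectManager method signatures may differ slightly between package versions, so treat it as a starting point:

```csharp
using UnityEngine;

// Rough sketch: polls the KinectManager for a completed SwipeLeft gesture.
public class SwipeGestureChecker : MonoBehaviour
{
    private bool swipeLeftRequested = false;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized() || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();

        // ask the manager to start tracking the gesture for this user (only once)
        if (!swipeLeftRequested)
        {
            manager.DetectGesture(userId, KinectGestures.Gestures.SwipeLeft);
            swipeLeftRequested = true;
        }

        // poll whether the gesture was completed; 'true' resets its state afterwards
        if (manager.IsGestureComplete(userId, KinectGestures.Gestures.SwipeLeft, true))
        {
            Debug.Log("Swipe left detected – turn the presentation cube to the left here.");
        }
    }
}
```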

Troubleshooting:
* If the demo scenes cannot connect to the Kinect data server, try to manually set the ‘Server host’ and ‘Server port’-settings of the KinectDataClient-component of the KinectController game object.
* Make sure the Kinect data server is running on a server machine and the Kinect-v2 sensor is connected to it.
* Many Kinect-related tips, tricks and examples are available here: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/
* The online documentation of the K2-asset can be found here: https://ratemt.com/k2docs/

What’s new in version 1.2:
1. Added AvatarScaler- and HeadMover-components and new, first-person avatar-demo scene.
2. Added two gesture-demo scenes – for discrete and continuous gestures.
3. Added two interaction-demo scenes. Tailored InteractionManager for VR use.
4. Added VisualGestureManager-component and visual-gesture demo scene.
5. Added SpeechManager-component and speech-recognition demo scene.
6. Removed multi-scene demo. KinectManager and KinectDataClient may be used in each scene now.
7. Renamed previous (more complex) demo scenes. Rearranged demo-scene folders.

Video worth more than 1000 words:
Here is a video, courtesy of Ricardo Salazar, created with this package, the Kinect data server, a Kinect-v2 sensor and Samsung’s GearVR HMD:

 

142 thoughts on “Kinect-v2 VR Examples”

    • CRCMISMATCH means that the two ends of the connection have different settings about channels, QoS and network parameters. Probably you’re not connecting to the data-server at all. Make sure the server host and port-parameters of the KinectDataClient-component are correct. They need to point to the host and port, where the KinectDataServer is listening.
      And, please stick to one channel for reporting issues you experience.

      • Hi Rumen,
        I have found the same error: CRCMISMATCH.
        I verified the server and port parameters of the KinectDataClient, but it still shows the same error.
        What should I do?
        Regards.

      • The usual reason for CRCMISMATCH, in my experience, is that the client and server are running different versions of the Unity engine. For instance, if the server is built with Unity 5.3, but the client is running on Unity 5.5.

  1. Hey Rumen,

    greetings from nearby Germany ;). Your experiments with Kinect and VR are interesting. This is exactly what I’m trying to do on my own now. I’m just new to Unity and will try some tutorials first, until I can connect Kinect and Unity, but I’m looking forward to it.

    It would be wonderful if I could contact you when I’m stuck or could use some good hints from you, if you’ve got time 😉

    But I already have a question… What do you use to connect the client (smartphone) and the server (PC with the Kinect SDK)? Do you use simple TCP/UDP sockets, or maybe WCF, or maybe something very different?

    • Hi, I’m quite busy with my own projects and issues, but I always try to be helpful. Please don’t ask too many questions though, and try to keep them short and concise. Regarding your question: in the client-server communication I prefer to stick to TCP/UDP sockets.

  2. Pingback: Kinect 二三事 – Kennedy Wong

  3. Hi Rumen, is there a way to implement the VR samples for multiple users? Will the user assignment be automatic, or will it be per network (IP) address per user? I would appreciate the workflow or tips. Thank you in advance.

    • The VR demos get the same Kinect body data as the demo-scenes in the standalone K2-asset. Hence, the tracking of multiple users should not be an issue. The tracked user is specified by the PlayerIndex-setting of the respective component.

    • Yep. The KinectDataServer should recognize the sensor and send the Kinect-v1 data over the network. But I think there will be some K1-specific issues on the client/VR side. For instance, as far as I remember, the K1 SpineBase joint was turned ~40 degrees to the back, so the avatar in VR will probably lean back. Please contact me by e-mail and mention your invoice number, if you need help with this or another K1-specific issue.

  4. Pingback: Kinect for Windows v2 with Unity VR Examples – Be Analytics

  5. Hi Rumen, I have one error after importing the ‘Kinect v2 VR Examples’-package.
    After I deleted the duplicate files DummyK2Interface2.cs and SimpleInteractionListener.cs, it shows me this error:

    Assets/KinectDemos/KinectDataServer/Scripts/KinectDataServer.cs(351,74): error CS1061: Type `FacetrackingManager’ does not contain a definition for `GetFaceParamsAsCsv’ and no extension method `GetFaceParamsAsCsv’ of type `FacetrackingManager’ could be found (are you missing a using directive or an assembly reference?)

    • Hi, probably you deleted the wrong FacetrackingManager. There should be such a function in there, indeed 🙂 You should either update it, or delete the KinectDataServer script/folder, which invokes the function.

  6. Hello, 🙂 I bought this asset as well as others. First, this doesn’t work with the other Kinect examples. That’s a bummer.

    Also, I was thinking of getting to see the VR example from an FPS view: my hand = avatar’s hand. Right now this example asset doesn’t add much to your other example assets, am I right? The avatar demo is the same demo as in the other example, only a bit VR-optimized. Even so, I am really grateful for your work.

    OK, I would be grateful if you could help with my questions:
    – how can I integrate my body = avatar body?
    – Can I use more than one Kinect device for better body tracking, and how?

    Fantastic work!

    • Hi, I’m a bit confused now – is my work fantastic or a bummer in the end? 🙂
      My idea with the VR-examples was to provide (step by step) the functionality and components of the original K2-examples package to the VR world, because it is blatantly missing there. Now, the VR components and client/server part needed for Kinect body tracking, Kinect gestures and hand interactions are available in the VR-examples as well, as components and respective demos.
      Maybe the interaction demo will be interesting to you, because it is from FPS point of view. I wanted to show that avatars could be used from first-person POV, as well as from third-person POV. But maybe you’re right, there should be a second avatar-demo from first-person point of view.
      To your questions:
      – if you mean FPS – see above. If you mean your body image, this will take a while, because the pixel and uv data that has to be transferred over the network is relatively big and I need to think how to optimize it first.
      – No, with the current VR examples you can’t use multiple sensors. There is an option to use a software called Fusion Kit (for dealing with multiple Kinects), as data provider on server-side, but the interface is still experimental and the overall performance will be quite slow, especially for VR.

      • OK, final decision: the asset is fantastic 🙂 But I wish I could have imported the new asset without any work on top of the older example assets.
        Thank you very much for your help. 🙂

        I have checked the interaction demo, but there is a problem: I am seeing the avatar from about 1 meter above its head.
        (Kinect v2 + Oculus DK2)
        I need to see the avatar from its eyes (like an FPS game, just like you said: its hands should be synced to my hands, feet synced, etc.).

        Is it something with my tracker setup?

      • Hmm. I usually test it with GearVR, which is also based on the Oculus SDK. The MainCamera is a child of joint_Head, so it should not be moved above it, according to the Unity documentation. Have you tried it with and without the DK2 positional tracker?

      • Yes, I have tried it both ways. It turns positional tracking on/off, but there happens to be a changing offset on the camera all the time, even without any sensor connected to the Rift system. I have tried zeroing the offset of the camera in LateUpdate, without any luck.

  7. OK, the offset is not coming from the Oculus tracker but from the Kinect transform. Oculus in Unity is trying to stay at the center of the world without any tracking. But when the Kinect server kicks in, the offset gets bigger, because OVR is trying to stay at the center and the Kinect neck is going places 🙂 It is also discussed here:
    https://docs.unity3d.com/Manual/VROverview.html
    “You can manually set the FOV to a specific value, but you will not be able to set the Camera’s transform values directly”
    So what do you think is the best way to solve this? Offsetting the camera.parent’s transform based on the offset? Or?

    • Sorry, I was quite busy today. OK, I’ll get a DK2 and test this issue during the weekend. Please e-mail me today, tomorrow or next week, so I can ping you back, when I find a workaround.

    • A little bit below it also says what I mentioned “The camera transform is overridden with the head-tracked pose. To move or rotate the camera, attach it as a child of another GameObject. This makes it so that all transform changes to the parent of the camera will affect the camera itself. This also applies to moving or rotating the camera using a script.” In this case the camera is parented to CameraMover, which is parented to joint_Neck.

      I could not reproduce this issue, no matter what I did. See my response to Yuer below. I tried it on 2 machines, with different initial positions of the HMD and Kinect, also without Kinect. Is there anything specific I should do, in order to get this offset? Have you tried it with HMD only?

  8. I bought this asset too, and I’ve faced the same problem as Baris Cigal. So have you solved it? I have tried to reset the position but failed; there is still an offset.

    • I could not reproduce the issue. The Oculus camera in my setup always stays at the head’s height, no matter what I do. I use Oculus Runtime SDK 0.8.0.0, and a DK2 without the tracker. Did you do anything custom, or just run KinectInteractionDemo without any scene or script modifications? Did you use the tracker, and have you tried the demo with the HMD only, i.e. without the Kinect data? Did the camera transform still have an offset in this case?

      • I’ve solved it by disconnecting the tracker & using CameraMover as the parent of the camera to fix the offset. Thanks a lot for your advice! グッ!(๑•̀ㅂ•́)و✧

      • There is a data cable that links the Oculus to the tracker that tracks the position, and I pulled it out, so the position can be controlled by the Kinect. But the rotation data we get is still from the Oculus. I don’t know how to control the rotation only by the Kinect and not the Oculus, because there is an initial rotation on the camera (not (0,0,0)). I’ve tried to change the “eye tracker” from “both” to “none (main display)”, but then there is no image on the Oculus screen.

      • Thank you again! I don’t think you can turn off the HMD rotation tracking, and that’s OK, as to me, because the Kinect cannot track your head’s rotation that precisely. I think the best solution would be to reset/recenter the tracker when the scene starts.

      • Thanks for your advice. And how could I reset the initial position & rotation? It has a Vector3 value that is not zero when the program starts.

      • There is a VRTrackingReset.cs-script in the Kinect2MobileVr/KinectScripts/VrScripts-folder. Open it, add a Start()-method and put this line in it: ‘InputTracking.Recenter();’. Then save it, return to the Unity editor and add the script as a component to the KinectController-game object in the scene. It will reset the HMD orientation at scene start-up.
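        Just to illustrate, a minimal version of that change could look like the sketch below (assuming Unity 5.x, where InputTracking lives in the UnityEngine.VR namespace; in later Unity versions it moved to UnityEngine.XR). The rest of the existing script stays as it is:

```csharp
using UnityEngine;
using UnityEngine.VR;   // UnityEngine.XR in later Unity versions

public class VRTrackingReset : MonoBehaviour
{
    // ... existing code of the script stays unchanged ...

    void Start()
    {
        // reset the HMD position & orientation when the scene starts
        InputTracking.Recenter();
    }
}
```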

      • Thank you so much for your help, it really helped me a lot & I’ve solved the problems (rotation & position offset) with your help. Thanks again! (。ò ∀ ó。)

  9. Hi, how do I get the real positions from the Kinect?
    I want to estimate the user’s arm length and scale the character’s arm to it… Thanks a lot!

    • You can use the AvatarScaler-component from the K2-asset, if you have it. Otherwise, first see this tip on how to get the real position of user’s joint: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t7 Then get the real positions of user’s arm joints. Then you can calculate the lengths of bones between them: L = (P2 – P1).magnitude. Similarly, you can calculate the character’s arm lengths, and finally – the needed scaling, to match real lengths.
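      For illustration, a rough sketch of such an arm-length estimation is shown below. The exact GetJointPosition signature and joint-type names may differ a bit between asset versions, so treat it as a starting point rather than final code:

```csharp
using UnityEngine;

// Rough sketch: estimates the length of the user's right arm from the real joint positions.
public class ArmLengthEstimator : MonoBehaviour
{
    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();

        // real-world joint positions in meters, as reported by the sensor
        Vector3 shoulder = manager.GetJointPosition(userId, (int)KinectInterop.JointType.ShoulderRight);
        Vector3 elbow    = manager.GetJointPosition(userId, (int)KinectInterop.JointType.ElbowRight);
        Vector3 wrist    = manager.GetJointPosition(userId, (int)KinectInterop.JointType.WristRight);

        // bone length = magnitude of the vector between the two joint positions
        float upperArmLen = (elbow - shoulder).magnitude;
        float forearmLen  = (wrist - elbow).magnitude;

        Debug.Log("Estimated right arm length: " + (upperArmLen + forearmLen) + " m");
    }
}
```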

  10. Hi, I want to get the real positions from the Kinect. How can I catch them? I want to estimate the user’s arm length and scale the character’s arm length in-game. Thanks. John

  11. Hi Rumen, 🙂

    I’m working on a physical therapy project, playing 2 characters at the same time: one is the user’s character, and the other one is a character played from a recorded file (by switching turnindex from 1 to 2 and 2 to 1, to select which character plays in that round).

    But I found a problem: when the user gets into the camera frame, the user’s character gets a random bodyIndex (the bodyIndex displayed by the CalibrateUser function in KinectManager.cs). Sometimes the bodyIndex numbers of the two characters are the same, which makes them use the same body data, and the characters display incorrectly.

    So I think the solution is to make the bodyIndex unique.
    For example, the user’s character would use bodyIndex 0–4 and the character played from the recorded file would use bodyIndex 5.

    But I couldn’t find a function that defines those parameters.

    Do you have any advice?

    Thank you.

    • Hi, I think this solution would be good. You could modify (or override) the GetEmptyUserSlot()-function of KinectManager, and use one set of indices for realtime Kinect users, and another for the recorded ones, provided you have a flag in the body data structures to separate them. I would use the bIsTracked-flag, and set it, for instance, to 1 when a realtime user gets tracked, but to 2 when a recorded body frame gets processed.
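      Just to illustrate the index-partitioning idea, here is a hypothetical helper. It is not the actual KinectManager code – the real GetEmptyUserSlot() has a different signature – so treat it only as a sketch of the logic:

```csharp
// Hypothetical illustration of the index partitioning described above.
// Slots 0..4 are reserved for realtime users (bIsTracked == 1),
// slot 5 for bodies replayed from a recorded file (bIsTracked == 2).
public static class UserSlotHelper
{
    public static int GetEmptyUserSlot(short bIsTracked, bool[] slotTaken)
    {
        int first = (bIsTracked == 2) ? 5 : 0;
        int last  = (bIsTracked == 2) ? 5 : 4;

        for (int i = first; i <= last; i++)
        {
            if (!slotTaken[i])
                return i;   // first free slot in the allowed range
        }

        return -1;  // no free slot in this range
    }
}
```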

    • Hi,
      I am a student and I am doing a bachelor project where I will also use the two players, one tracked and the other played from a recorded scene. So can you please tell me if you found a solution for this? It would help me a lot. My e-mail is: baher.abdelmalek100@gmail.com

      Thank you,

  12. Hello Rumen F.,

    I am an engineering student at the National Polytechnic Institute, CDMX, Mexico. I wonder if I can get the K2-VR asset.

    Thank you!

      • Hello,

        This would be my bad institutional, it would be amazing to have a free package

        Thank you!

  13. Hi Rumen, I’m trying to build a gesture-recognition system with Kinect and Unity, but Unity freezes when there are multiple people in the scene. Have you had any success with custom gesture detection (loading a .gbd file) in Unity with more than one person?

    Thanks in advance.

    • Hi, I never tried visual gestures with multiple users in front of the sensor. I don’t use VGB gestures too much, because they have some issues on 64-bit systems. But as far as I remember, when you process these gestures, you should assign a userId to the frame source. How would you assign the multiple userIds in this case?

      • Hi, sorry for the late response. I have given up on the gesture system (also because of changing business conditions). However, to share the general idea: I was assigning the multiple userIds by looping through them in the Update function and setting one for every detected and tracked body. I guess it will be hard to convey without code…
        Anyway, thanks for your response. I didn’t know that VGB gestures have issues on 64-bit systems, though. Mine was a 64-bit system. I wonder if this is the cause.

        Thanks again for the great post!

      • Hey Rumen,

        Nope, I’m playing around with the K2VR above. 1.3 is giving me this error but 1.2 isn’t, and the examples from the Asset Store are working perfectly with the 1.2 server! Seriously, kudos. Awesome work!

        I guess my next question for you would be: how can I get access to the raw depth texture, and is there a way for me to easily show it on screen through OnGUI? We plan on using several Kinects in our installation, and we’d use the depth textures to feed them into a shader that recreates point clouds in real time. Any suggestions on how we can get started?

      • Ah, you mean the KinectDataServer. It is actually the same, because KDS is one of the scenes in the K2-asset. Anyway, I suppose the speech-manager error does not prevent KDS 1.3 from working.

        Currently, the depth and color textures are not transferred to the mobile platforms (because of the large data size), to keep the performance of client and server. But the full source code is available, and you are free to add whatever data you want. If you don’t have the KDS scene and code, please e-mail me, and I’ll send it to you.

  14. The Kinect2 Data Server v1.3 U54_Data worked fine the first time I ran it, but the second time it didn’t recognize the port and the IP. So I can’t move the character in Unity.

    • What happened between the 1st and 2nd run? Please restart your machine and then try again. Also make sure the Kinect2DataServer-v1.3-U540 is unzipped into its own separate folder, and is not part of any Unity project. I tested it here several times, but cannot reproduce this behavior. The Kinect2DataServer runs every time.

      • I’ve tried restarting the machine and it still does the same. I also tried putting the folder in a separate folder from the Unity project. When I did the test with the Kinect2DataServer in a separate folder, it recognized the IP and the port the first time, but on the 2nd run it didn’t recognize them. In fact, it only shows the blue background with the text v1.3 and nothing more.

      • OK. In the Kinect2DataServer-v1.3-U540_Data-folder, there is a log file called ‘output_log.txt’. Please copy this file somewhere after the 1st run (the working one), and then after the 2nd run (the non-working one). Then e-mail me both of these log files, so I can check what exactly went wrong between the two runs. My contact details are here: https://rfilkov.com/about/#contact

  15. Is it possible to move the avatar in third person within the Gear VR environment? I have tried to run it, but it just loads the 3D model and no movement from the Kinect is detected. The same happened when I ran the interaction demo on the Gear VR.

    • Sure, this is the idea. What message do you get, when you run the demo on GearVR – connecting, connected, disconnected, …? You need to have the K2DataServer-app running on the machine where the Kinect-v2 is connected, the WiFi on the phone should be enabled, and the K2DataServer machine should be in the same subnet. See ‘How to run..’-section above.

  16. The DataServer now works fine on the 2nd, 3rd run and so on. But now it doesn’t communicate well with the Kinect2VrExamples package you shared with me.

    I hope you can help me figure out what happened.

    Thanks in advance.

  17. I’m running the demo scene in 1st person: KinectAvatarDemo1. I’m trying to make the physics work so my character can fall off a table (like on a pirate boat), but it doesn’t fall (even before I made my own package using the data-server package you sent me).

    • Well, the physics and client-server communication are two separate things. You said above that the communications with KinectDataServer is not working. In this regard, both the 1st and 2nd original avatar-demo scenes show the connection status until connection is established. With the 2nd avatar-demo (3rd person), it is easier to see, whether the avatar is moving or not. You can run it first in the editor, to check if client-server communication is OK. When the client and server are connected and there is user detected, the avatar will start moving. Otherwise a message about connection status will appear, telling what the client is doing: waiting for server, connecting, disconnected, reconnecting, etc. If the client is working OK in the editor, then build and run it on GearVR too. It should work in similar way, as long as WiFi is enabled on the phone. So, what exactly is happening in your case? Please e-mail me a screenshot, if it is difficult to describe. Again, please run one of the unmodified avatar demo scenes of the K2VR-asset.

  18. Hello Rumen,

    I am looking to set up a kind of VR mirror where the character’s body responds to the user, and this package seems like the best way to do it. I would also like to incorporate lip sync, and I’m wondering if this package does it. I found that OVRLipSync can do this, so perhaps it can be incorporated, if the Kinect voice detector is not already used for this somehow. Any thoughts would be greatly appreciated!

    Misha

    • Hello Misha, I’m not sure what exactly you are trying to do. The K2VR-examples package is targeted at mobile or VR platforms (like Oculus, GearVR or Vive). The VR platforms usually require the user to wear a headset, which makes the Kinect face tracking useless. If the mirror should run on a standalone platform (e.g. a display), the ‘Kinect v2 Examples with MS-SDK’-package would be better, and also easier to use. Both packages include the full source code, and you are free to incorporate other packages, according to your requirements.

  19. Hi Rumen, I bought your Unity package to try using it against different sensor data. I don’t own a Kinect camera, so would you mind sharing the protocol and some sample data captures for the body and hand movements? I downloaded your data server piece but that’s not in source form. If you could at least give me some sample sensor captures, I could replace your DataClient with something that doesn’t depend on the server.

  20. Hi Rumen, I bought your package but want to use it with different sensors than Kinect. I don’t own a Kinect camera so would you mind sharing some sample captures of body and hand movements, and perhaps the protocol between client and server (not tcp/udp but the app layer protocol). It would help me spoof the data from my sensors in the same form. I got the data server package but it isn’t in source form, and it won’t work until it sees Kinect sdk + camera.

    • Hi, if you’d like to get the K2 data-server sources, please e-mail me next week and mention your invoice number. Sure, I can tell you the protocol if you prefer to do it that way, but I think it may be easier just to add an interface class that processes your sensor’s data instead.

  21. Hi, I built and am running the avatar demo 2 scene on a Windows 8 machine and the server on a Windows 10 machine. It connects, starts working, and then keeps disconnecting and reconnecting. Any advice?

    cheers!

    Alex

    • Hi Alex, please check for error messages both in client and server consoles. Disconnecting and reconnecting usually means no data was received by the client from server, for a certain amount of time. The client considers this as a network problem, and tries to reconnect. Please try to figure out what the reason may be.

    • See the ‘How to use the mobile/VR components in your own Unity project’-section of ‘Readme-Kinect2-VR-Examples.pdf’-file in the package. Hope I understood your question correctly.

    • These tips are mainly targeted at the standalone K2-asset, but many of them should still work for the K2VR-asset as well. Which tip exactly do you mean that led you to lots of compilation errors? And for what platform do you build your project?

  22. Hi Rumen, this VR package looks interesting. I have been trying to get your Kinect scripts working with the HTC Vive for quite some time, but keep running into problems, so I hope this one solves them.

    I saw one post asking about the Vive, where you answered that it should work. But just to be sure, have you solved or eliminated the problem of infra-red interference between the Kinect and the Vive? That is my main problem: the Vive only works when I start the scene with the headset not in line of sight of the Kinect, but even then, after stopping and restarting it, the Vive has lost tracking and I need to restart SteamVR.

    So if you have managed to solve that I will definitely buy this package 🙂

    Thanks.

    • Hi Joost, I must admit I don’t have a Vive, so there is no way for me to test or fine-tune the K2VR scripts in this environment. But, as far as I remember, the guy above did not have issues with IR interference. If you only want to test the K2VR-asset with the Vive, e-mail me and I’ll send it to you free of charge for testing.

      • Thanks for the package. The first quick test seems promising. It is a lot more stable than with the non-VR scripts. I was able to start and stop the client application (the avatar1 demo scene) several times without the Vive crashing. With the previous version it would crash after 1 or 2 starts.

        I’ll build this method into my existing application and do some more tests and let you know.

  23. Hello?
    I purchased the Kinect V2 VR Examples from the Unity Asset Store. Right now I am using the BackgroundRemoval demo. The problem is that I can’t get a clear border of the person, and sometimes the person’s hair is messy. What should I do? Is it possible to get perfect background removal?

    • Hi there,

      The K2VR-asset doesn’t have background-removal demo scenes, so I suppose you mean the original K2-asset.

      To your question: believe me, there is no way to satisfy all customers. Before, the most requested BR feature was to have blurry edges; now your request is to have sharp edges. Fortunately, I have left the original background-removal component in. You need to disable the BackgroundRemovalManager-component of the KinectController-object in the scene, and enable the SimpleBackgroundRemoval-component instead. Also, change the ‘Compute user map’-setting of the KinectManager-component to ‘Cut-out texture’.

      Regarding the hair/black clothing BR issues: I know from other customers that this is more of a lighting problem. Put a light bulb in front of and above the user’s head, and you will get the most out of it. Hope this helps…

  24. Hello,

    I am a student, and I got the Kinect v2 examples, as I am doing my bachelor thesis on a project using Kinect.

    When I tried the Kinect recorder demo it played well, but I cannot find the recorded file. I need it to analyze the recorded data and try to compare it with other data.
    Can you please help me find this file?

    Thank you very much.

    • The recorded file is usually saved in the root folder of the Unity project. This is the parent folder of the Assets-folder, hence it is not visible in the Unity editor.

      Please mind that this file has a special format, so it will not be so easy to use it for direct comparison or motion analysis. Better see the GetJointPositionDemo.cs-script in the KinectScripts/Samples-folder. To use it, add it as a component of a game object in the scene, enable its ‘Is Saving’-setting and run the scene. Currently it saves the position of one joint only, but you can easily extend it to save all joints’ data (see the sketch below).
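      As an illustration, here is a rough sketch of such an extension that appends all joint positions to a CSV file each frame. The exact KinectManager method names (GetJointCount, IsJointTracked, etc.) may differ slightly between asset versions, so please adapt it to your copy:

```csharp
using System.IO;
using System.Text;
using UnityEngine;

// Rough sketch: saves the real-world positions of all joints to a CSV file, one row per frame.
public class AllJointsSaver : MonoBehaviour
{
    public string saveFilePath = "AllJointsData.csv";

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();
        int jointCount = manager.GetJointCount();

        StringBuilder line = new StringBuilder();
        line.Append(Time.time.ToString("F3"));

        // append the world position of every joint as x,y,z columns
        for (int joint = 0; joint < jointCount; joint++)
        {
            Vector3 pos = manager.IsJointTracked(userId, joint)
                ? manager.GetJointPosition(userId, joint)
                : Vector3.zero;

            line.AppendFormat(",{0:F3},{1:F3},{2:F3}", pos.x, pos.y, pos.z);
        }

        File.AppendAllText(saveFilePath, line.ToString() + "\n");
    }
}
```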

  25. Hi Rumen,
    I have the Kinect v2 VR asset.
    The demo works on the PC platform (.exe),
    but when I extract an APK package,
    the application does not work on the phone.
    Does it work only on limited types of phones?!
    Thank you

      • It’s working now and reading data,
        but there is a normal camera view.
        The camera view is not the same as in the video.
        So what should I do to make the dimensions fit the VR glasses?

      • First you need to enable ‘Virtual reality supported’ in Player settings. Then, it depends on your VR platform… See the VR-platform documentation related to Unity, and the Readme-file in the K2VR-package.

  26. Hi Rumen,

    I am a student at Southern Illinois University and would love to implement this into gesture recognition research I am conducting. Could you please provide the packages to me via email? Ourania referred me to you.

    Best regards,

    Christian Garcia

  27. Hi Rumen,
    this example cannot run on Unity 5.6.
    Why?
    I get this error: (* Assertion at ..\mono\mini\unwind.c:620, condition `cfa_reg != -1′ not met)
    What should I do?

  28. Hi,
    Please, I want to know how to differentiate between realtime frames and recorded body frames,
    and whether I can make the recorded frames be played by a specific character in the scene, as this will help me in creating my Unity game.

    I tried to change the body index number in the avatar controller, but this makes the character not tracked in the realtime frames either, and if I assign a KinectManager to every character, a lag happens.

    Thank you.

    • Hi, there should be only one KinectManager in the scene. The player index of the AvatarController is a way to differentiate users in the scene, but in this case you should first set up the body data structures of the KinectManager accordingly. This would require significant coding and testing.

      An easier approach, as to me, would be to use the Mocap animator (if you have it) to record or convert the recorded frames to animations, and then copy and re-target these animations to the characters in your scene.

  29. Hi,
    I purchased the Kinect v2 examples.
    Actually, I couldn’t find a reply section in the Unity store,
    so I ask here.

    I really appreciate your awesome plugin.

    And one more issue:

    What do I modify in the “KinectHolographicViewer” scene, if I want the object close to me when I close the sensor?

    In the current version it works well, but the object becomes far if I close the sensor.

    Please excuse my poor English.

    Thanks a lot~

    • Hi, I suppose you mean the K2-asset, not the K2VR-package. As a customer you can comment under the asset in the Asset Store, or e-mail me directly.

      To your question: I think you don’t mean closing of the sensor (this happens when the app is closed), but how to process the event when the user gets lost. In this regard, open the SimpleHolographicCamera-script, which is a component of the MainCamera-object in the scene, find the LateUpdate()-method, and look at the else-block near the end of the method. It currently restores the original camera position, rotation and projection. You can replace it with your own code that destroys the scene objects or does whatever you need.

  30. Hello, I love your work with this.
    I bought this to learn more about how to use Kinect. Does your work have some sort of documentation to explain how some things work?

    I’m trying to use the background-remover script, and I only want the video feed of the players in a small corner of the screen. I’ve been fiddling with the PortraitBackground class and it doesn’t seem to do anything.

    Any help would be amazing 🙂

    • I suppose you mean the K2-asset. The documentation is here: https://ratemt.com/k2docs/

      To show the BR in one corner, open the 2nd background-removal demo, and set the proper ‘viewport rect’ for both BackgroundCamera1 and BackgroundCamera2. Then remove the objects you don’t need from the Hierarchy.

    • Hi Andres, thank you for this question! As far as I remember, they haven’t published their project so far, neither as closed library nor as open-source. But I’ll put this into my todo-list and research it again, when I have some free time.

  31. Hi Rumen, there is a problem with the SpeechDemo.
    It works, but does not perceive the voice.
    I can see the text ‘connected’, but the character does not move.
    Do you know what’s wrong?
    Help me.

    Thank you.

    • Hi, the speech recognition in VR-examples is done on the server, where the sensor is connected. On the machine, where the KinectDataServer runs, you need to install ‘MS Speech Platform SDK’ (both 32- and 64-bit, to be on the safe side), and at least the English language pack, too. You can find the links to these packages under the ‘Downloads’-section here: https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/ Then run ‘Kinect data server’ and enable the ‘Enable speech recognition’-setting in its control panel. By default it uses the voice commands, as configured in ‘SpeechGrammar.grxml’. Hope this info helps 🙂

  32. Hi, I would like to buy the package. But I want to know, for the integration of Kinect2, Unity and Google Cardboard, which package will be more applicable: “Kinect v2 VR Examples” or “Kinect v2 Examples with MS-SDK”? If “Kinect v2 Examples with MS-SDK” includes “Kinect v2 VR Examples”, then I will go for “Kinect v2 Examples with MS-SDK”.

    • The K2VR-asset is a “mobile” version of the original K2-asset. While the K2-asset gets all data directly from the sensor connected to the same machine, the K2VR-scenes get the body and other data they need over the network, from a KinectDataServer-app working on the machine, where the sensor is connected. This way the K2VR scenes and components would work virtually on any platform, while the full featured K2-asset works on Windows-standalone (and UWP) platforms only. The Cardboard integration is described in the Readme-file of K2VR-asset, as far as I remember. It is no different than Cardboard integration of any other Android app. Do you think I should provide this package comparison in the K2VR-description above?

      • Thank you for your reply. Yes, I think this comparison between the K2VR-asset and the K2-asset will help users to choose the right package according to their needs.

  33. Pingback: Kinect v2 Tips, Tricks and Examples | RFilkov.com - Technology, Health and More

  34. Hi, thanks for developing this tool!
    We are a couple of students looking to use it for a project, but only one of us owns a Windows machine. So my question is this: is it possible to run this on a Mac? I can see that this version, which mentions a server/client structure for sending the Kinect data, is now deprecated, so I am a little confused about whether we could make it work with the one in the Unity Asset Store or not, and how we might do it?

    Thank you!

    • Hi, The K2VR-asset was deprecated, because its components (in means of KinectDataServer, KinectDataClient, and HeadMover for avatars) are now integrated into the core K2-asset – https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/ As students you are eligible to get the K2-asset free of charge for academic use. To do it, please send me an e-mail from your university e-mail address, to prove you are really a student there. One request for your group would be enough.

  35. Hello Rumen, I already sent you an e-mail to request the Kinect-v2 VR Examples (billing information included)… What I want in my Unity project is to put a camera over the avatar’s head and access this camera from multiple Android+Cardboard phones, so I can have the same view perspective as the avatar… As far as I understand, I also need the Kinect server running and connected to the ‘Kinect for Xbox One’ sensor.

  36. Hello Rumen,

    I want to match the KinectFace with my animated character. When using primitives my code works; however, when trying to control the position and rotation of the KinectFace plane, the live face glitches around, as if some other code is also trying to set its orientation.

    Could you give me a hint on where exactly you set the orientation of the face plane?
    Thanks.

  37. Hi Rumen,

    I am a researcher from Germany and am quite interested in your work. Would you be able to send me your VR examples? I would like to test them purely for academic purposes.

    Kind regards

      • Hi Rumen,

        This is my university address. You can google the extension to make sure.

        Kind regards

      • No offense, but I don’t send e-mails or Unity packages to e-mail addresses I see on the Internet. That’s why I always request to get an e-mail first, to make sure the address is real and has a real owner.

  38. Hi Rumen,

    I am using your Kinect v2 sample. In it, I am using the KinectFittingRoom2 example with background removal. Now I want to show only the face, and the rest of the body should be hidden. Is that possible to do?

    Like in the fitting room, I only want to show the face with background removal, and the rest of the body should not be visible.

    Waiting for your quick reply.

    Thank You.

    • Hi, this is difficult, but take a look at KinectFaceTrackingDemo3 in KinectDemos/FacetrackingDemo. Maybe if you combine the FaceView-game object, FacetrackingManager & SetFaceTexture-components from this scene with the avatar model and components of the fitting-room scene, you will get what you need.

      • Hi Rumen,

        Thank you so much for your quick reply. Your solution almost worked for me. But now I am stuck at only one place: if I enable the Background Removal Manager on the Kinect Controller, then the face tracking stops working. I mean, on the Face View the whole live feed is shown rather than only the face. Any idea about this?
        I’m just stuck here. If you can help me with this, it will be great.

      • Hi again, I suppose you need to disable the ForegroundToRawImage-component of the BackgroundImage2/RawImage-game object. Please also make sure the UserBodyBlender-component of the main camera in the scene is disabled, too. Hope this helps…
