Azure Kinect and Femto Bolt Examples for Unity

Azure Kinect Examples for Unity, v1.19.1, is a set of Azure Kinect and Femto Bolt camera examples that use several major scripts, grouped in one folder. The package contains over thirty-five demo scenes. In addition to the Azure Kinect, Femto Bolt and Femto Mega sensors, the K4A-package supports the “classic” Kinect-v2 (aka Kinect for Xbox One), as well as the iPhone-Pro LiDAR sensor.

The avatar-demo scenes show how to utilize Kinect-controlled avatars in your scenes; the gesture demo shows how to use discrete and continuous gestures in your projects; the fitting-room demos show how to overlay or blend the user’s body with virtual models; the background-removal demo shows how to display user silhouettes on a virtual background; the point-cloud demos show how to present the real environment or users as meshes in your scene; etc. Short descriptions of all demo scenes are available in the online documentation.

This package works with Azure Kinect, Femto Bolt and Femto Mega sensors, Kinect-v2 (aka Kinect for Xbox One) and iPhone-Pro LiDAR sensors. It can be used with all versions of Unity – Free, Plus & Pro.

How to run the demo scenes:
1a. (Azure Kinect) Download and install the latest release of the Azure Kinect Sensor SDK. The download link is below. Then open ‘Azure Kinect Viewer’ to check if the sensor works as expected.
1b. (Femto Bolt and Mega) Download and unzip the latest releases of the Orbbec Viewer and the Orbbec SDK K4A Wrapper. The download links are below. Then open first the ‘Orbbec Viewer’ and then the ‘K4A Viewer’ to check if the sensor works as expected.
2. (Azure Kinect and Femto Bolt/Mega) Follow the instructions on how to download and install the latest release of the Azure Kinect Body Tracking SDK. It is used by all body-tracking-related scenes, regardless of the camera. The download link is below.
3. (Kinect-v2) Download and install Kinect for Windows SDK 2.0. The download link is below.
4. (iPhone Pro) For integration with the iPhone Pro’s LiDAR sensor, please look at this tip.
5a. Import this package into a new Unity project.
5b. (Femto Bolt and Mega) Please look at this tip on what to do next.
6. Open ‘File / Build settings’ and switch to ‘PC, Mac & Linux Standalone’, Target platform: ‘Windows’ & Architecture: ‘Intel 64 bit’.
7. Make sure that ‘Direct3D11’ is the first option in the ‘Auto Graphics API for Windows’-list setting, in ‘Player Settings / Other Settings / Rendering’.
8. Open and run a demo scene of your choice from a subfolder of the ‘AzureKinectExamples/KinectDemos’-folder. Short descriptions of all demo-scenes are available in the online documentation.
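
Once a demo scene is running, you can also check from your own scripts whether the sensor interface has been initialized. Below is a minimal sketch, assuming the KinectManager-component from the package is present in the scene (as it is in all demo scenes); the ‘SensorStatusChecker’ component name is just an example:

    using UnityEngine;
    using com.rfilkov.kinect;  // the namespace of the K4A-asset scripts

    public class SensorStatusChecker : MonoBehaviour
    {
        void Update()
        {
            KinectManager kinectManager = KinectManager.Instance;

            if (kinectManager && kinectManager.IsInitialized())
            {
                // the sensor interface is up and running
                Debug.Log("Sensor initialized. User detected: " + kinectManager.IsUserDetected());
            }
            else if (kinectManager && kinectManager.IsInitFailed())
            {
                // the sensor initialization has failed - see the Troubleshooting-section below
                Debug.LogWarning("Sensor initialization failed. Please check the Troubleshooting-section.");
            }
        }
    }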

* The latest Azure Kinect Sensor SDK (v1.4.1) can be found here.
* The latest release of Orbbec Viewer can be found here.
* The latest Orbbec SDK K4A-Wrapper (K4A Viewer) can be found here.

* The latest Azure Kinect Body Tracking SDK (v1.1.2) can be found here.
* Older releases of Azure Kinect Body Tracking SDK can be found here.
* Instructions on how to install the body tracking SDK can be found here.

* Kinect for Windows SDK 2.0 can be found here.
* RealSense SDK 2.0 can be found here.

Downloads:
* The K4A-asset may be purchased and downloaded in the Unity Asset Store. All updates are, and will remain, available to all customers free of charge.
* If you’d like to try the free version of the K4A-asset, you can find it here.
* If you need to replace Azure Kinect with (or prefer to use) Orbbec’s Femto Bolt or Mega sensors, please follow this tip.
* If you’d like to utilize the LiDAR sensor on your iPhone-Pro or iPad-Pro as a depth sensor, please look at this tip.
* (Deprecated) Body-tracking support for the Intel RealSense sensor is deprecated.

Free for education:
The package is free for academic use. If you are a student, lecturer or university researcher, please e-mail me to get a free copy of the K4A-asset for academic and personal use.

One request:
Please don’t share this package or its demo scenes in source form with others, or as part of public repositories, without my explicit consent.

Documentation:
* The basic documentation is in the Readme-pdf file, in the package.
* The K4A-asset online documentation is available here.
* Many K4A-package tips, tricks and examples are available here.

Troubleshooting:
* If you get errors like ‘Texture2D’ does not contain a definition for ‘LoadImage’ or ‘Texture2D’ does not contain a definition for ‘EncodeToJPG’, please open the Package Manager, select ‘Built-in packages’ and enable ‘Image conversion’ and ‘Physics 2D’ packages.
* If you get errors like ‘Can’t create body tracker for Kinect4AzureInterface0!’, please follow these tips:

  • Check if you have installed Body Tracking SDK v1.1.2 into the ‘C:\Program Files\Azure Kinect Body Tracking SDK’-folder.
  • Start the ‘Azure Kinect Body Tracking Viewer’, and check if it works as expected.
  • Please note that the ‘Azure Kinect Body Tracking Viewer’ by default uses the DirectML-processing mode, while the K4A-asset by default uses the CUDA-processing mode (for performance reasons).
  • If you have an NVidia GPU and prefer to stay with the CUDA processing mode for body tracking, please make sure you have installed the latest NVidia driver. See this link for more information. To make sure the CUDA processing mode works on your machine, please open the command prompt (cmd), type ‘cd C:\Program Files\Azure Kinect Body Tracking SDK\tools’, and then run ‘k4abt_simple_3d_viewer CUDA’. This will start the ‘Azure Kinect Body Tracking Viewer’ in CUDA-processing mode.
  • Otherwise, if CUDA doesn’t work or you prefer to use DirectML (as the ‘Azure Kinect Body Tracking Viewer’ does) in the K4A-asset too, please open the ‘Kinect4AzureInterface.cs’-script in the ‘AzureKinectExamples/KinectScripts/Interfaces’-folder, find the line ‘public k4abt_tracker_processing_mode_t bodyTrackingProcessingMode = k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU_CUDA;’ and change the enum value to ‘K4ABT_TRACKER_PROCESSING_MODE_GPU_DIRECTML’, as shown below. Then save the script, return to Unity and try to run the demo scene again.
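    After the change, the declaration should look like this (only the enum value changes; the rest of the line stays the same):

      public k4abt_tracker_processing_mode_t bodyTrackingProcessingMode = k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU_DIRECTML;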

* If the scene works in Unity editor, but doesn’t work in the build, please check if the ‘Architecture’ in Build settings is ‘x86_64’, and the ‘Scripting backend’ in Player settings is set to ‘Mono’.
* If you can’t upgrade the K4A-package in your project to the latest release, please go to ‘C:/Users/{user-name}/AppData/Roaming/Unity/Asset Store-5.x’ on Windows or ‘/Users/{user-name}/Library/Unity/Asset Store-5.x’ on Mac, find and delete the currently downloaded package, and then try again to download and import it.
* If Unity editor freezes or crashes at the scene start, please make sure the path where the Unity project resides does not contain any non-English characters.
* If you get syntax errors in the console like “The type or namespace name ‘UI’ does not exist…”, please open the Package manager (menu Window / Package Manager) and install the ‘Unity UI’ package. The UI elements are extensively used in the K4A-asset demo scenes.
* If you get “‘KinectInterop.DepthSensorPlatform’ does not contain a definition for ‘DummyK2’” in the console, please delete ‘DummyK2Interface.cs’ from the KinectScripts/Interfaces-folder. This dummy interface is now replaced with DummyK4AInterface.cs.
* If the Azure Kinect sensor cannot be started because the StartCameras()-method fails, please check again #6 in the ‘How to run the demo scenes’-section above.
* If you get a ‘Can’t create the body tracker’-error message, please check again #2 in the ‘How to run the demo scenes’-section above. Check also if the Body Tracking SDK is installed in the ‘C:\Program Files\Azure Kinect Body Tracking SDK’-folder.
* If the body tracking stops working at run-time or the Unity editor crashes without notice, update to the latest version of the Body tracking SDK. This is a known bug in BT SDK v0.9.0.
* RealSense support is deprecated. The Cubemos skeleton tracking SDK is not available anymore. For more information please look at this tip.
* If there are errors like ‘Shader error in [System 1]…’ while importing the K4A-asset, please note these are not really errors, but shader issues due to the missing HDRP & VFX packages. You only need these packages for the point-cloud demo. All other scenes should start without any issues.
* If there are compilation errors in the console, or the demo scenes remain in the ‘Waiting for users’-state, make sure you have installed the respective sensor SDKs and the other needed components. Please also check if the sensor is connected.

What’s New in Version 1.19.x:
1. Added two new face demo scenes – hat overlay & show face image.
2. Updated BackgroundRemovalDemo2-scene with a new ‘Apply sensor pose’-option.
3. Updated the body-spin filter, to be configurable according to whether the user faces forward, faces backward or changes direction.
4. Added the IsInitFailed()-method to the KinectManager, to provide information on whether the sensor initialization has failed.
5. Fixed the avatar-floating issue in AvatarController, when the user enters the scene in a non-standing pose.
6. Fixed AvatarController repositioning issue for some models, when ‘Apply muscle limits’ is utilized.
7. Fixed and improved the feet detection in AvatarController, when ‘Grounded feet’ is utilized.
8. Fixed the reset of sensor pose in K4A-sensor interface, to avoid the extra camera tilt.
9. (1.19.1) Added support for Orbbec’s Femto Bolt and Femto Mega cameras.

Videos worth more than 1000 words:
I love sharing creators’ work, like this video by James Bragg.

And here is a holographic setup, created by i-mmersive GmbH, with Unity 2019.1f2, Azure Kinect sensor and “Azure Kinect Examples for Unity”:

 

202 thoughts on “Azure Kinect and Femto Bolt Examples for Unity”

  1. Hey Rumen,
    Would love to try out this package, but am not seeing where I can download / access it? Is it only visible to certain readers?

  2. Hi Rumen.
    I’m working on Azure Kinect and Unity and really, really want to try it out. Where can I download this package?

    Thanks,
    Kien Le

  3. Hey, I’m using 2019.1.10f1 and none of the avatar demos are working with the Azure Kinect. Unity immediately quits after pressing Play. I have installed and tested both the Sensor and Body Tracking SDKs. Do you have any suggestions? I know you said there was a bug in the Body Tracking SDK. Does anyone know if this has been fixed yet?

    • Hi Chris, sorry, but the K4A-asset is quite new and issues are possible. Your issue is a bit odd though. May I ask you to send me the Unity editor’s log file, along with the versions of the Sensor SDK and Body Tracking SDK you have installed. Here is where to find the log-file: https://docs.unity3d.com/Manual/LogFiles.html My e-mail address is on the About-page of this website.

      • I have met the same issue with Azure Kinect Body Tracking SDK v0.9.0 and Azure Kinect Examples for Unity v1.1. After I updated the Azure Kinect Body Tracking SDK to v0.9.1, the problem was solved.

      • Hey, sorry for the delayed response, but updating to the v0.9.1 Body Tracking SDK fixed the issue. It works well now, thanks!

    • Hi Chris, this might not be your issue, but I noticed that if you have the Kinect Sensor already open when you try to run any of the Demos, it will crash immediately. So make sure you don’t have any other service or process that has opened the sensor before trying to use the Unity Project.

      • Andrew, thank you for providing this great tip! It may be useful to many of the users of the K4A-sensor. As a matter of fact, I haven’t heard anything from Chris since he reported this issue. I suppose the issue got resolved by itself.

  4. Hi Rumen, thanks a lot for this! Quick question tho.
    In Kinect 4 Azure Interface when I play a recorded MKV with a depth track that works in the official Azure Kinect Viewer I get
    ArgumentException: Result is not of a recognized result type.
    Also depth_track_enabled of Playback is never set to True (color_track_enabled: True, depth_track_enabled: False, ir_track_enabled: False, imu_track_enabled: False, depth_delay_off_color_usec: 566)
    Any idea? Not sure if posting here is the best way to report something like this.

    • Hi Gauthier, yes, I know what you mean. This is a bug in the C# wrapper of the Sensor SDK. It is fixed in v1.2 of the K4A-asset that I published today. Please update from the download link in your invoice, or from your account page (if you’ve created one). Import the updated asset, try again, and finally please tell me if the issue is resolved or not.

  5. Hi Rumen,
    Regarding those “HDRP & VFX packages”:
    Could you let us know which packages we need to download from the asset store to avoid the shader issues?
    Thx,
    Seb

    • Hi Seb, what shader issues do you mean? If you mean the shader errors while importing the K4A-asset, just ignore them. They will not affect the demo scenes or script compilation. The HDRP & VFX packages are only needed for the point-cloud demo, along with the visual effects that produce the errors while importing. If you want to run this demo scene, see the instructions here: https://ratemt.com/k4adocs/Point-CloudDemo.html I personally would recommend creating a separate Unity project for it.

  6. Hello again, since I already purchased the Unity examples package, would it be possible for me to get a copy of the Unity Asset Store version for free? I know it is the same, but it would make it a lot easier to integrate into future projects as the versions change.

    Cheers,
    Chris

    • Hi, please e-mail me your invoice number and PayPal account. I’ll refund you the money and then you can purchase the same asset on the Asset store.

  7. Hi Rumen –
    Thanks for making this asset! I’m getting the ‘Can’t create the body tracker’ error when I try to run the demo scenes, though the sensor does get turned on. The Sensor SDK and Body Tracking SDK are working fine too. I’m on Windows 10, Unity 2019.2.2f1, Sensor SDK 1.2.0, Body Tracking SDK 0.9.2. Any suggestions?

    • Hi Sean, please e-mail me and attach a screenshot of your project’s root folder, so I can take a look. This is the parent folder of the Assets-folder. Please also tell me the path, where the body tracking SDK is installed on your machine, and don’t forget to mention your invoice number, as well.

  8. Hi Rumen, I have a problem with the sensor recognition once I build my project. The sensor is recognized perfectly in the editor, but whenever I build the project there is no image. Thanks in advance.

      • Hi there, I am also having this issue and can’t seem to figure out what’s going on. Would love to be in the loop on debugging this if possible. Aside from this, everything has been working great so far!

      • Hi Mark, please e-mail me and send me the Player’s log file (see the link above), so I can look more closely at what’s going on at run-time.

  9. I have 2 errors not covered in the docs. Firstly, I just now installed the latest versions of the tracking library and viewer as of today, and verified they work. Using Unity 2019.1.8f1 on a fresh project.

    AzureKinectOpenDeviceException: result = K4A_RESULT_FAILED
    and then
    Failed opening Kinect4AzureInterface, device-index: 0

    • Actually the error depends on what scene is open. The lights on the Azure Kinect are all on when the scene plays, and the last line of the console is ‘1 sensor(s) open’, but it never tracks anyone. In the BackgroundRemoval 1 demo the first error is: Can’t create body tracker for Kinect4AzureInterface0!

      • If only the body tracking doesn’t work, please make sure you have installed the Azure Kinect Body Tracking SDK into its default folder, i.e. ‘C:\Program Files\Azure Kinect Body Tracking SDK’. The K4A-asset expects to find it there. If it must be installed somewhere else, this would require a slight change in the code.

        If the sensor doesn’t work at all (i.e. scenes like BlobDetectionDemo or SceneMeshDemo don’t work either), please make sure you have installed Azure Kinect SDK v1.2.0. If the problem persists, please e-mail me for further analysis of the issue. I would appreciate it if you send me the Editor log as well, so I can check more closely what happens. Here you can see where to find the Editor log: https://docs.unity3d.com/Manual/LogFiles.html

  10. That was my problem. My C drive is full, so I’d installed it on a separate drive. Changing it fixed everything, thank you so much!

    Unrelated to your product, it’s depressing how narrow the field of view of the depth sensor is on the Azure Kinect; you can’t fit 6 people in its tracking area like you could with the previous Kinect.

    Thanks again, your product helps so many people. I’ve used it since Kinect v1 and really appreciate your effort.

  11. Hello Rumen F
    I’m very happy to use your Unity Azure Kinect package. Now I have some problems to ask you about. I want to use multiple devices in Unity. I followed your tips to set up the KinectController prefabs, but the second device doesn’t work. There is an error about the second device: “Can’t create body tracker for Kinect4AzureInterface1!”. I tried to disable the body-tracker creation and then test the second device, but it doesn’t work either. What do I need to do to fix this error and make the second device work?

      • Thanks very much! I bought it on the Asset Store and it works GREAT – excellent work! I was looking through the docs and wasn’t able to locate the licensing, but I did find your repo with the free (limited) code base and it was MIT – is this version also covered under MIT? Thanks again – really appreciate your work!

  12. Yes, I saw your requests and notes on not sharing and will DEFINITELY not be sharing and will respect that for sure. Thank you for the work you’ve done, it’s a blessing!

  13. Hi! I’m a university student, and I’m going to do a project about motion tracking.
    I read in the documentation that it is free to use for educational purposes?

    • Yes, please send me an e-mail request for a free copy of the K4A-asset from your university e-mail address. This is to prove you really study or work there.

    • Sorry, but I don’t send any packages to e-mail addresses, without getting an e-mail from them first. This is just a basic security measure.

  14. Hi Rumen! Your package has been a huge help for my project. I have been trying to set up pose detection for multiple models by instantiating a pose detector for each new avatar created by the UserAvatarMatcher script, but even though I am able to instantiate multiple pose detectors and assign the appropriate PoseModelHelper and AvatarController scripts to each pose detector, the pose detection doesn’t work. Is it even possible to run pose detection for multiple avatars in the scene? If so, do you have any idea what I might be doing wrong?

    • Hi Mark, could you please e-mail me and send me your script (along with the meta-file) and the scene, where the script is used. I’ll try to find out what goes wrong. Please also mention your invoice number in the e-mail.

  15. Is there a way to replace the live azure kinect data stream with a pre-recorded data stream? I’m trying to stress test my scene running for many hours but would prefer not to rely on actively moving in front of the camera for a long time.

  16. Can we get something like the interaction demo in Kinect-v2 back?
    That’s the most important part (for me) 🙂

  17. Hi thanks for the great work! Just want to ask, is there an API from KinectManager for transformations for depth to color and color to depth? I wanted to get the depth2color buffer but it returns null.

    • ulong frameTime = 0, lastUsedFrameTime = 0;
      KinectInterop.SensorData sensorData = null;  // KinectManager & KinectInterop reside in the com.rfilkov.kinect namespace

      // in Start() - enable the color-camera aligned depth frames
      sensorData = KinectManager.Instance.GetSensorData(0);
      sensorData.sensorInterface.EnableColorCameraDepthFrame(sensorData, true);

      // in Update() - poll for the latest depth frame, transformed to the color-camera resolution
      ushort[] transformedDepthFrame = sensorData.sensorInterface.GetColorCameraDepthFrame(ref frameTime);
      if (transformedDepthFrame != null && frameTime != lastUsedFrameTime)
      {
          lastUsedFrameTime = frameTime;
          // do something with the transformed depth frame
      }

      The same approach can be used for depth-camera transformed color frame as well.

  18. I want to access only the user’s head portion from the point cloud data. I am using the UserMeshDemo of PointCloudDemo. How can I do that?

    • Sorry, I’ve missed this question before. What I would do if I were you, would be to get the user’s head position, like this: ‘Vector3 headPos = KinectManager.Instance.GetJointKinectPosition(userId, KinectInterop.JointType.Head, true);’. Then set it as shader’s parameter in UpdateMesh(). And finally, in the shader’s vert() method, filter out all vertices that are not within some distance to the current head’s position.
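
      Here is a minimal sketch of the script side of this approach (the ‘_HeadPos’ shader-property name and the meshRenderer reference are just placeholders for this example; the distance filtering itself still has to be added to the shader code):

          // e.g. in UpdateMesh(), or in the Update()-method of your own component
          ulong userId = KinectManager.Instance.GetUserIdByIndex(0);  // 1st detected user; 0 means no user detected
          if (userId != 0)
          {
              Vector3 headPos = KinectManager.Instance.GetJointKinectPosition(userId, KinectInterop.JointType.Head, true);
              meshRenderer.material.SetVector("_HeadPos", headPos);  // '_HeadPos' is a placeholder property name
          }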

      • Thank you!

        Could you please clarify the last part? I couldn’t find the vert() method.

  19. Hello
    I transferred from Kinect-v2 to Azure Kinect recently.
    I have a question about the “Can’t create body tracker for device” bug.
    The SDK is installed, but the viewer cannot open in the default GPU mode,
    and I can open the viewer with the command “k4abt_simple_3d_viewer.exe CPU”.
    How do I use the CPU body tracker in the asset?

    • Hi, please open DepthSensorBase.cs in AzureKinectExamples/KinectScripts/Interfaces-folder and search for ‘new BodyTracking(‘. Then change the 3rd argument from ‘K4ABT_TRACKER_PROCESSING_MODE_GPU’ to ‘K4ABT_TRACKER_PROCESSING_MODE_CPU’, save the change, return to Unity and try again. This should switch the body tracking to CPU only mode.

  20. Hey Rumen,
    I am using the background removal with the camera oriented clockwise and am getting some weird offset of the masking. It looks like the mask and the RGB feed are just slightly off in one direction. Have you tested the background removal with the camera oriented this way? And if you have, did you need to change anything?
    Any help would be greatly appreciated.

  21. Hi Rumen,
    I have a question about excessive CPU utilization. My hardware configuration is an i5-7500 and an RTX 1660 Ti 6G. When I open any scene (like OverlayDemo (v1.8)), the CPU utilization rate is over 80% all the time, while the GPU utilization is just below 20%. If I open another program at the same time, the CPU utilization can reach 100% and the program crashes. When I run the Azure Kinect Body Tracking Viewer, the CPU utilization is about 50%, and the GPU utilization is below 20%.
    Is this a normal phenomenon? Or is there anything that needs setting? I use the default setting of ‘K4ABT_TRACKER_PROCESSING_MODE_GPU’.
    Thanks a lot!

      • Thank you for this question! I have not looked at the overall CPU utilization so far.

        Please open and look for ‘Thread.Sleep(’ in KinectManager.cs (in the KinectScripts-folder) and DepthSensorBase.cs (in the KinectScripts/Interfaces-folder). Then experiment a bit by changing the parameter of this method invocation to 10 or more milliseconds, as shown below. Then save the script, go back to Unity, run the overlay demo again and check if the CPU utilization has changed. Repeat this for several different sleep-time values. I would be happy if you share here what sleep time you find optimal.
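
        For example, the edited line inside the polling loop might look like this (10 ms is just a starting value to experiment with):

            System.Threading.Thread.Sleep(10);  // try 10 ms or more, instead of the default value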

      • I set the sleep time to 8 ms; now the CPU utilization is about 50% with the i5-7500. Thank you.

  22. Hi Rumen!

    Thanks for your awesome asset!
    I’m looking for some help to do this:
    Is there a way to show a few segments of the human body in real time, and hide the other body segments, using the Background Removal feature? Here is an image of what I’m trying to do:

    http://www.nexar.cl/demos/test.png

    Do I have to use a special Mask?

    Or Screenspace Textures, like this:
    https://www.ronja-tutorials.com/2019/01/20/screenspace-texture.html

    Or Clipping, like this:
    https://www.ronja-tutorials.com/2018/08/06/plane-clipping.html

    Any guide to do this?

    Best regards,

    Cris

    • Hi Cris, honestly, I don’t know how to answer your question. You could try different options and compare them. I personally would modify ForegroundFiltBodyShader.shader to filter only the points that are near to some specific joints, like the head, elbows, wrists, hands, knees and ankles. Of course this may not produce 100% the expected results, because if, for instance, the user’s arms are near the body, parts of the body will be close to the elbows, wrists and hands, and will be visible as well.

      • OK Rumen, thanks for your help. It’s a good step to start with. Maybe we will use some digital image processing tools as well.

        Best regards,

        Cris

  23. Hi,

    Thanks for an awesome plugin.
    Using K4A, I’ve gotten the skeleton tracking latency to a minimum by having a low-res depth texture and not streaming any RGB textures; it’s working fine in the Unity editor.

    However, when I make a Windows Build, latency increases a whole lot. I’m not GPU or CPU bounded, it just feels a lot slower. Any ideas as to what could be causing this?

    Thanks!
    Tom

    • Hi Tom, hm hm, this doesn’t sound right. It should work the same way (or better) than in the Editor. Please locate the Player’s log and send it over to me, so I can take a look, and if possible, attach two short video clips depicting the latency in the Editor and in the Player. Don’t forget to mention your invoice number in the e-mail, too. Here is where to find the Unity log files: https://docs.unity3d.com/Manual/LogFiles.html

      • The discussion continued through e-mail, and we found a solution: turning off VSync in the build and then setting VSync to “Fast” in the Nvidia driver.

  24. Hello, I would like to know if it is possible not to have the mirror effect in the user mesh? Is there a solution you could offer?

  25. Hello Rumen.
    We have been developing products using Kinect V2.
    Now testing a new product using Kinect V4.
    The main feature of the product is that it measures the angles between the joints.
    In V2, the angles were calculated with the coordinates of the joint object, which were obtained from the GetJointColorOverlay() method. The angles were accurate. But in V4 they aren’t.
    I measured it using the GetJointPosition() method,
    and the difference between the actual user’s motion and the measured value is about 10 degrees.
    I wonder if there is a solution. thank you.

      • The GetJointColorOverlay()-method outputs x, y values in pixel coordinates and a z value in raw skeleton coordinates (e.g. 146.2, 235.4, 2.1), so I can’t calculate the angle of a joint accurately.
        I also activated the code in the GetJointColorOverlay()-method that calculates the distance between the camera plane and the joint (Vector3), but the output of the color overlay is the same…
        So I get the raw joint coordinates using the GetJointPosition()-method, and the angle is correct.
        (I made a mistake by using the GetJointKinectPosition()-method, which I mentioned in my previous comment.)

  26. Hi Rumen, I notice in the documentation a component called “KinectUserBodyMerger” which seems interesting for using multiple sensors for superior body tracking. I am using the most recent version of your tool and SDKs, but I can not see this component as a script or anything I can add to my scene. Is there somewhere else I need to go to find this?

      • It means that when I’m using multiple cameras, I use the GetJointKinectPosition() or GetJointPosition() method, and the method returns the averaged position of a body joint after the bodies are merged. Is that right?

      • Yes, that’s right. If you need the joint positions as detected by specific cameras, you can use GetSensorJointKinectPosition() or GetSensorJointPosition().

  27. Hi Rumen, the important feature, the classifier for the hand states, has not received a response on the feedback forum for months. But we urgently need this simple feature, just “grab” and “release”, so that we can do some simple interaction. Do you have any ideas or alternatives to implement the grab-and-release feature? Thank you very much.

    • Hi, unfortunately there is no reliable alternative to the hand-states classifier, as far as I know. The COVID-19 situation in the USA and Europe is only making things more difficult and introduces huge delays.

  28. Thank you so much, Rumen. The experimental interaction components really help us solve the interaction problem.

    • Thank you, too! It’s not perfect, but it was the best I could do while we’re waiting for a real solution, in the form of an SDK-provided hand-state classifier.

  29. Hi Rumen,
    I am currently working on a project that measures human body joints. Along with those measurements, some measurements require angle data, which uses the positions of joints such as the wrist, handtip and thumb. Sadly, I came across a problem where the newer Azure Kinect sensor has poorer accuracy, especially for the handtip and thumb, than the old one. 🙁 Kinect V2 was accurate and reliable enough to measure the angle, whereas the Azure one is not.

    I wonder if there is a solution, or a clever method to raise the accuracy of the position of handtip and thumb.

    Thank you.

    • Hi Min Oh, sorry for the late reply. Unfortunately I’m not aware of such a method. But I don’t think Azure Kinect’s body tracking has poorer accuracy of the hand joints than Kinect-v2. For both sensors, tracking of these joints is far from perfect, and as far as I know, this is due to the relatively small areas the hands (along with the fingers) take up in the overall image.

      • Thank you for replying! Guess I have to find one out myself… Still, I appreciate your concern about my problem. 🙂

        Thanks!

  30. Hi Rumen, when I recently tested multiple people with a single device, I found that the data of two people would cross. Looking at the code, I found that the body index will change when the number of people is refreshed, and the data transmission depends on the body index. How can I solve this problem? Thank you.

    • Please e-mail me and tell me some more details about your issue. If the body indices change, probably the IDs of the tracked people have changed, too.

    • Face tracking is not part of the body tracking SDK anymore. As you can see in the tutorial above, it uses the Azure cognitive services instead, which are 1) slow, because they must be invoked over the Internet, and 2) require an Azure account and payment plan. That’s why I have not included face tracking in the K4A-asset yet. If you want to use the Azure face API with any kind of web camera (not just the Azure Kinect), please look at this repo: https://github.com/rfilkov/CloudUserManager

      • Thank you for the informative reply!
        I have another question, about using multiple Kinects. It seems that each device renders a mesh individually. Is there any way to render all of the point clouds together as a single mesh?

      • The point clouds come from different sensors. That’s why they are rendered individually. I’ve got your e-mail, by the way. The problem may occur because of the calibration or syncing between the sensors. Let me research the issue a bit, and then I’ll get back to you in a few days.

  31. Greetings!
    I think it’s so great!
    These are the assets I need now!

    I am currently using Unity 2020.1.0f1 version.
    I’ve searched for various sample projects through multiple paths now, but errors have occurred, so I’m trying to purchase your assets without spending any more.

    But I want to get some confirmations before buying!

    I am in a situation where I need to use VFXGraph.
    I want to know if your assets work without problem in VFXGraph Preview project in Unity 2020.1.0f1

    The Windows version is Windows 10.

    The OS Build version is 19041.450.

    Version 1.4.1 is installed through Kinect Viewer

    Will your assets work normally in the following environments?

  32. Hi Rumen,
    I love this asset it works beautifully and is extremely helpful. Thank you very much!
    I was wondering if there is a way to directly map a rig created in Blender to the joints tracked by the Azure Kinect, to overlay an avatar on the user. Essentially overlaying the rig’s bones with the blue lines that the skeleton overlayer displays, and the bone heads and tails with the green spheres (tracked joints)? Currently, when I use the humanoid Mecanim rig and the avatar user matcher script, there is always a bit of an offset.
    Thank you for your time!

    • Hi, please look at the fitting room demo scenes (in your case KinectFittingRoom2) and try to replace the ModelMF-object in the scene with your model. Use the same components as ModelMF for it. Please note, your model should be set with Humanoid rig, as well. Hope this does what you need.

  33. Hi Rumen,

    First, thank you very much for this asset, it has done wonders for the projects I am using the Azure Kinect with.

    I have a question about the FOV settings for the kinect with your asset. I am trying to compare the ranges of the depth sensor using WFOV and NFOV, but when I change the ‘Depth Camera Mode’ variable in the K4AInterface script it does not seem to make any meaningful change in the kinect range. I’m wondering if there just isn’t that much of a difference between WFOV and NFOV, or if the depth camera mode variable isn’t being applied to the Kinect opening process.

    • Hi Kevin, you should only change the depth camera mode (as well as color camera mode) before you run the scene. Otherwise it will have no effect.

      In my experience, there is some difference between the NFOV and WFOV modes. Even the form of the depth image changes. You can see this if you display it on screen. To do it, set the ‘Get depth frames’-setting of the KinectManager-component in the scene to ‘Depth texture’, add one more image to the ‘Display images’-setting, and set it to display ‘Sensor 0 depth image’.

  34. Hi Rumen,
    My project contains multiple scenes, and most of the scenes do not use the Kinect; only a few scenes use it. How can I turn off the Kinect functionality in most of the scenes and turn it on in the few that need it, given that the Kinect consumes a lot of CPU resources?
    Thank you very much.

  35. Hello
    I am testing the DepthColliderDemo2D example.
    The Unity scene runs at 25 fps or higher. However, the depth image frame rate seems to be less than 8 fps.
    How do I solve the problem of the dropping depth frame rate?

    And I’m going to use sensorData.bodyIndexImage.
    The output is different from the border shown by the actual depth camera.
    The human image is crushed.
    How do I fix this?

    Compared to the Kinect V2, this performance is significantly lower.

    • Hi, both issues are because of the lousy body tracking SDK that accompanies the Azure Kinect sensors. It works slowly, the body-segmentation images are inaccurate (in comparison to the Kinect-v2 SDK), and you need a high-end GPU to make it work at a proper FPS. All my (and other devs’) efforts to request a better, faster and more accurate body tracking from the AK team were fruitless.

  36. Hi, there is no face tracking in Azure Kinect.
    I want to use the face tracking that I used in Kinect-v2.
    Is there any way?

  37. Hello Rumen, and thanks for the good work, it’s really effective 🙂
    I have a question about Kinect Studio 2.0;
    I used to work with it all the time, to avoid getting up each time I want to test 🙂
    but it doesn’t seem to work anymore with your library.

    When I select “connected sensor” in the device streaming mode,
    it gives me this in the console:
    D0: Kinect-v2 Sensor, id: KinectV2
    K2-sensor opened, available: False

    And when I select “play recording” in the device streaming mode,
    the console shows:
    Opening S1: Kinect2Interface, device-index: 0
    Please use Kinect Studio v2.0 to play the sensor data recording!

    Do you have an idea ?
    Thank you !

    • Please use the ‘Connected sensor’ option as device streaming mode. Run the scene. In the Kinect Studio, make sure all needed streams are selected, and press the Connect-button. Then you can either play a recording or use the live sensor data through the Studio.

      • Oh yes! Thanks a lot for your answer, I had not selected the right streams inside Kinect Studio.

  38. Hi,
    The photobooth is not stable.
    Do you have any stable solution for a 3D model with textures of full-body clothes?

  39. Hello Rumen. Do you think the body tracking SDK will be updated or left behind? I read several discussions online debating its lacking quality – and that happened after I bought it ;/

    • Hi, they had some plans to update it at least one more time, to make the BT-SDK ARM-compatible. But I don’t expect any major changes regarding the performance or quality of tracking. I’m still quite angry at the body tracking team, because they ignored the advice of the Kinect supporters. Instead they tried to ruin their businesses, as well as the end-user experience of a very good sensor.

      • :/ I bought one after using your Kinect-v2 Unity package with great results. Really some good building bricks for exploration. I am, like you, disappointed. After reading the comments here about the tracking, I managed to find one forum post regarding a possible implementation of legacy (Kinect-v2) tracking. Furthermore, I found one dev saying they are working on updating the tracking – but it is a small team doing it. Maybe RealSense has more prospects in terms of affordable body tracking devices. As far as I understand, you need two Azure Kinects + an extensive computer rig for precise tracking. Baffled by the lacking results even with good enough gear. What a shame.

    • The color-camera aligned depth image should be undistorted. Please look at the BackgroundColorCamDepthImage-script component in ‘AzureKinectExamples/KinectScripts’-folder.

  40. Hi Rumen,
    I tried to run the background removal demo scene with a Kinect-v2; however, the error
    “D0 is not available. You can set the device index to -1, to disable it” kept occurring. Would you kindly tell me what I can do to fix this issue? Also, I was planning to transfer some of my Kinect-v2 projects to K4A.

  41. Pingback: Kinect v2 Examples with MS-SDK | RF Solutions - Technology, Health and More

  42. Hi Rumen, I want to know how many people can be tracked by the Azure Kinect at most. Will it track the skeletons of all identified users, or is there a limit on the number?

    • I think the Azure Kinect Body Tracking SDK can track up to 6 people, but this information is missing from the official documentation. A workaround for this limitation would be to use more cameras, to cover a bigger area.
