Azure Kinect and Femto Bolt Examples for Unity

Azure Kinect Examples for Unity, v1.19.1 is a set of Azure Kinect and Femto Bolt camera examples that use several major scripts, grouped in one folder. The package contains over thirty-five demo scenes. In addition to the Azure Kinect, Femto Bolt and Femto Mega sensors, the K4A-package supports the “classic” Kinect-v2 (aka Kinect for Xbox One), as well as iPhone-Pro LiDAR sensors.

The avatar-demo scenes show how to utilize Kinect-controlled avatars in your scenes; the gesture demo shows how to use discrete and continuous gestures in your projects; the fitting-room demos show how to overlay or blend the user’s body with virtual models; the background-removal demo shows how to display user silhouettes on a virtual background; the point-cloud demos show how to present the real environment or users as meshes in your scene; etc. Short descriptions of all demo scenes are available in the online documentation.

The package can be used with all versions of Unity – Free, Plus & Pro.

How to run the demo scenes:
1a. (Azure Kinect) Download and install the latest release of the Azure Kinect Sensor SDK. The download link is below. Then open ‘Azure Kinect Viewer’ to check if the sensor works as expected.
1b. (Femto Bolt and Mega) Download and unzip the latest releases of the Orbbec Viewer and the Orbbec SDK K4A Wrapper. The download links are below. Then open first the ‘Orbbec Viewer’ and then the ‘K4A Viewer’, to check if the sensor works as expected.
2. (Azure Kinect and Femto Bolt/Mega) Follow the instructions on how to download and install the latest release of Azure-Kinect Body Tracking SDK. It is used by all body-tracking related scenes, regardless of the camera. The download link is below.
3. (Kinect-v2) Download and install Kinect for Windows SDK 2.0. The download link is below.
4. (iPhone Pro) For integration with the iPhone Pro’s LiDAR sensor, please look at this tip.
5a. Import this package into a new Unity project.
5b. (Femto Bolt and Mega) Please look at this tip on what to do next.
6. Open ‘File / Build settings’ and switch to ‘PC, Mac & Linux Standalone’, Target platform: ‘Windows’ & Architecture: ‘Intel 64 bit’.
7. Make sure that ‘Direct3D11’ is the first option in the ‘Auto Graphics API for Windows’-list setting, in ‘Player Settings / Other Settings / Rendering’.
8. Open and run a demo scene of your choice from a subfolder of the ‘AzureKinectExamples/KinectDemos’-folder. Short descriptions of all demo-scenes are available in the online documentation.
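Once everything is installed, you can also check the sensor initialization from a script of your own. Here is a minimal sketch; it assumes the ‘com.rfilkov.kinect’ namespace of the K4A-asset scripts and an IsInitialized()-check on the KinectManager-component, while IsInitFailed() is the method mentioned in the change log below:

    using UnityEngine;
    using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts

    public class SensorInitCheck : MonoBehaviour
    {
        void Update()
        {
            KinectManager kinectManager = KinectManager.Instance;

            if (kinectManager && kinectManager.IsInitialized())
            {
                // the sensor streams are up and running
            }
            else if (kinectManager && kinectManager.IsInitFailed())
            {
                // initialization failed - re-check the SDK installations from steps 1-4 above
                Debug.LogError("Sensor initialization failed. Please check the installed SDKs.");
            }
        }
    }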

* The latest Azure Kinect Sensor SDK (v1.4.1) can be found here.
* The latest release of Orbbec Viewer can be found here.
* The latest Orbbec SDK K4A-Wrapper (K4A Viewer) can be found here.

* The latest Azure Kinect Body Tracking SDK (v1.1.2) can be found here.
* Older releases of Azure Kinect Body Tracking SDK can be found here.
* Instructions on how to install the body tracking SDK can be found here.

* Kinect for Windows SDK 2.0 can be found here.
* RealSense SDK 2.0 can be found here.

Downloads:
* The K4A-asset may be purchased and downloaded in the Unity Asset Store. All updates are and will be available to all customers, free of charge.
* If you’d like to try the free version of the K4A-asset, you can find it here.
* If you need to replace Azure Kinect with (or prefer to use) Orbbec’s Femto Bolt or Mega sensors, please follow this tip.
* If you’d like to utilize the LiDAR sensor on your iPhone-Pro or iPad-Pro as a depth sensor, please look at this tip.
* (Deprecated) Body-tracking support for the Intel RealSense sensor is currently deprecated.

Free for education:
The package is free for academic use. If you are a student, lecturer or university researcher, please e-mail me to get a free copy of the K4A-asset for academic and personal use.

One request:
Please don’t share this package or its demo scenes in source form with others, or as part of public repositories, without my explicit consent.

Documentation:
* The basic documentation is in the Readme-pdf file, in the package.
* The K4A-asset online documentation is available here.
* Many K4A-package tips, tricks and examples are available here.

Troubleshooting:
* If you get errors like ‘Texture2D’ does not contain a definition for ‘LoadImage’ or ‘Texture2D’ does not contain a definition for ‘EncodeToJPG’, please open the Package Manager, select ‘Built-in packages’ and enable ‘Image conversion’ and ‘Physics 2D’ packages.
* If you get errors like ‘Can’t create body tracker for Kinect4AzureInterface0!’, please follow these tips:

  • Check if you have installed the Body Tracking SDK v1.1.2 in the ‘C:\Program Files\Azure Kinect Body Tracking SDK’-folder.
  • Start the ‘Azure Kinect Body Tracking Viewer’, and check if it works as expected.
  • Please note, the ‘Azure Kinect Body Tracking Viewer’ by default uses DirectML-processing mode, while the K4A-asset by default uses CUDA-processing mode (for performance reasons).
  • If you have an NVidia GPU and prefer to stay with the CUDA processing mode for body tracking, please make sure you have installed the latest NVidia driver. See this link for more information. To make sure the CUDA processing mode works on your machine, please open the command prompt (cmd), type ‘cd C:\Program Files\Azure Kinect Body Tracking SDK\tools’, and then run ‘k4abt_simple_3d_viewer CUDA’. This will start the ‘Azure Kinect Body Tracking Viewer’ with CUDA-processing mode.
  • Otherwise, if CUDA doesn’t work or you prefer to use DirectML (as ‘Azure Kinect Body Tracking Viewer’ does) in the K4A-asset too, please open the ‘Kinect4AzureInterface.cs’-script in the ‘AzureKinectExamples/KinectScripts/Interfaces’-folder, find this line: ‘public k4abt_tracker_processing_mode_t bodyTrackingProcessingMode = k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU_CUDA;’, and replace the enum value with ‘K4ABT_TRACKER_PROCESSING_MODE_GPU_DIRECTML’, as shown below. Then save the script, return to Unity and try to run the demo scene again.
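  In code, the changed line in ‘Kinect4AzureInterface.cs’ would look like this (the field stays public; only the enum value changes):

    // in AzureKinectExamples/KinectScripts/Interfaces/Kinect4AzureInterface.cs
    // default processing mode (CUDA):
    //public k4abt_tracker_processing_mode_t bodyTrackingProcessingMode = k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU_CUDA;
    // changed to DirectML:
    public k4abt_tracker_processing_mode_t bodyTrackingProcessingMode = k4abt_tracker_processing_mode_t.K4ABT_TRACKER_PROCESSING_MODE_GPU_DIRECTML;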

* If the scene works in the Unity editor, but doesn’t work in the build, please check if the ‘Architecture’ in Build settings is ‘x86_64’ (called ‘Intel 64 bit’ in some Unity versions), and the ‘Scripting backend’ in Player settings is set to ‘Mono’.
* If you can’t upgrade the K4A-package in your project to the latest release, please go to ‘C:/Users/{user-name}/AppData/Roaming/Unity/Asset Store-5.x’ on Windows or ‘/Users/{user-name}/Library/Unity/Asset Store-5.x’ on Mac, find and delete the currently downloaded package, and then try again to download and import it.
* If Unity editor freezes or crashes at the scene start, please make sure the path where the Unity project resides does not contain any non-English characters.
* If you get syntax errors in the console like “The type or namespace name ‘UI’ does not exist…”, please open the Package manager (menu Window / Package Manager) and install the ‘Unity UI’ package. The UI elements are extensively used in the K4A-asset demo scenes.
* If you get “‘KinectInterop.DepthSensorPlatform’ does not contain a definition for ‘DummyK2’” in the console, please delete ‘DummyK2Interface.cs’ from the KinectScripts/Interfaces-folder. This dummy interface has been replaced by DummyK4AInterface.cs.
* If the Azure Kinect sensor cannot be started, because StartCameras()-method fails, please check again #6 in ‘How to run the demo scenes‘-section above.
* If you get a ‘Can’t create the body tracker’-error message, please check again #2 in ‘How to run the demo scenes‘-section above. Check also, if the Body Tracking SDK is installed in ‘C:\Program Files\Azure Kinect Body Tracking SDK’-folder.
* If the body tracking stops working at run-time or the Unity editor crashes without notice, update to the latest version of the Body tracking SDK. This is a known bug in BT SDK v0.9.0.
* RealSense support is deprecated. The Cubemos skeleton tracking SDK is not available anymore. For more information please look at this tip.
* If there are errors like ‘Shader error in [System 1]…’ while importing the K4A-asset, please note this is not really an error, but shader issues due to the missing HDRP & VFX packages. You only need these packages for the point-cloud demo. All other scenes should start without any issues.
* If there are compilation errors in the console, or the demo scenes remain in the ‘Waiting for users’-state, make sure you have installed the respective sensor SDKs and the other needed components. Please also check if the sensor is connected.

What’s New in Version 1.19.x:
1. Added two new face demo scenes – hat overlay & show face image.
2. Updated BackgroundRemovalDemo2-scene with a new ‘Apply sensor pose’-option.
3. Updated the body-spin filter, to be configurable for whether the user faces forward, faces backward or changes direction.
4. Added the IsInitFailed()-method to the KinectManager, to provide information on whether the sensor initialization has failed.
5. Fixed avatar-floating issue in AvatarController, when the user enters the scene in non-stand-up pose.
6. Fixed AvatarController repositioning issue for some models, when ‘Apply muscle limits’ is utilized.
7. Fixed and improved the feet detection in AvatarController, when ‘Grounded feet’ is utilized.
8. Fixed the reset of sensor pose in K4A-sensor interface, to avoid the extra camera tilt.
9. (1.19.1) Added support for Orbbec’s Femto Bolt and Femto Mega cameras.

Videos worth more than 1000 words:
I love sharing creators’ work, like this one by James Bragg.

And here is a holographic setup, created by i-mmersive GmbH, with Unity 2019.1f2, Azure Kinect sensor and “Azure Kinect Examples for Unity”:


202 thoughts on “Azure Kinect and Femto Bolt Examples for Unity”

  1. Hi Rumen, Firstly, I’d like to say that this package is a great tool.
    I’ve been working on a Unity project using Azure Kinect Examples for Unity for a couple of months, and it was fine all the time.
    Windows force-updated my system yesterday. After the update finished, my project couldn’t run normally. I tried some of the examples of Azure Kinect Examples for Unity: running the BackgroundRemovalDemo1-scene showed a small window with the title “Unity Editor – Unity 2019.3.3f1_7…” and terminated immediately, while running BackgroundRemovalDemo2 everything was fine. I found that if I set the ‘Get Body Frames’-parameter of the KinectManager-component to ‘None’, it runs fine; otherwise it fails to run.
    How do I solve this problem?

    • Hi,

      Obviously there is a problem with the Body Tracking SDK. Please run some of the problematic scenes in the Editor (I suppose the 1st avatar demo will be problematic, as well), then close Unity editor, and e-mail me the Editor’s log file, so I can take a closer look. Here you can see where to find Unity log-files: https://docs.unity3d.com/Manual/LogFiles.html

  2. Hi Rumen,
    The color camera fps doesn’t look like 30 fps,
    but it works perfectly in Azure Kinect Viewer.
    How can I fix it?

    • Hi, I’m not sure what you mean by “fps doesn’t look like 30fps”. Please e-mail me some video clip that shows the issue, the current settings of the KinectManager-component in the scene, and the version of the K4A-asset you are using.

  3. Hi Rumen!

    I have a problem with a Sensor data parameter in Kinect manager: “Get Pose Frames”. By default it has “None” in your plug-in, but I need to have it in “Raw Pose Data” in my project.

    When I use the Kinect4Azure sensor with this parameter set to something other than “None”, both the body tracking and the application performance drop considerably (it runs at 8 fps).

    Do you know why this happens with the Azure Kinect? With Kinect-v2 this problem does not occur, and my project must support either of the two sensors.

    Thanks!

    OS: Windows 10
    K4Aasset: v1.16

    • Hi. Yes, you are right. This setting (when not None) tries to estimate the sensor’s pose, i.e. its position and rotation. In case of Azure Kinect it determines the height of the sensor above the ground and the sensor rotation around the X & Z axes. This involves some complex computations on the GPU & CPU side, and this causes the performance issues. But usually Azure Kinect is static and its pose needs to be determined only once – after the sensor gets installed. In this regard, please see this tip: https://rfilkov.com/2019/08/26/azure-kinect-tips-tricks/#t9 , as well as the next one, if you intend to use multiple sensors in your scenes.

      • Thanks! I ended up using the ‘EnablePoseStream(true/false)’-flag to activate and deactivate the IMU frames for a few seconds, and thus get the position and rotation of the sensor without problems.
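        For reference, a rough sketch of that approach (the EnablePoseStream()-call and its placement on the KinectManager are taken from the comment above; the exact class and signature may differ in your K4A-asset version):

          using System.Collections;
          using UnityEngine;
          using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts

          public class OneShotPoseEstimation : MonoBehaviour
          {
              IEnumerator Start()
              {
                  // enable the pose (IMU) frames for a short time only...
                  KinectManager.Instance.EnablePoseStream(true);
                  yield return new WaitForSeconds(3f);

                  // ...then disable them again, once the sensor pose has been estimated
                  KinectManager.Instance.EnablePoseStream(false);
              }
          }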

  4. Hi, I am looking for a help for the following scenario :

    My scenario is: I need to capture a color image with the Azure Kinect DK, and that image has to be saved in PNG or JPG format on my PC using Unity. Could you please share a few references for implementing this scenario?

    • Hi, this should be pretty straightforward. You would need something like this in your script:

      // get the last color-camera frame as a Texture2D (sensor index 0)
      Texture2D texColorImage = (Texture2D)KinectManager.Instance.GetColorImageTex(0);
      // encode the texture in PNG format and write it to a file
      byte[] btColorImage = texColorImage.EncodeToPNG();
      System.IO.File.WriteAllBytes("ColorImage.png", btColorImage);


      Please also don’t forget to set the ‘Get color frames’-setting of the KinectManager-component in the scene to ‘Color texture’.
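      Wrapped into a small component, it could look like this (a sketch; it assumes the ‘com.rfilkov.kinect’ namespace of the K4A-asset scripts, and that ‘Get color frames’ is set to ‘Color texture’):

        using System.IO;
        using UnityEngine;
        using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts

        public class ColorImageSaver : MonoBehaviour
        {
            void Update()
            {
                // save the current color-camera image, when the user presses 'S'
                if (Input.GetKeyDown(KeyCode.S) && KinectManager.Instance != null)
                {
                    Texture2D texColorImage = (Texture2D)KinectManager.Instance.GetColorImageTex(0);
                    byte[] btColorImage = texColorImage.EncodeToPNG();
                    File.WriteAllBytes(Path.Combine(Application.persistentDataPath, "ColorImage.png"), btColorImage);
                }
            }
        }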

  5. Hi, I’m an Azure Kinect user and I bought your asset and I’m using it for my Unity project.

    I’m curious about the functions of K2’s ColorDepthShader.
    I want to transform the depth map to the color-camera image (I want a 16:9 resolution). ColorDepthShaderK2 seems to do that, but can’t it be used with Azure Kinect?

    It would be nice if you could tell me how to use this function in Azure Kinect.

    • Hi, please look at the BackgroundRemovalDemo1-scene and its BackgroundRemoval-game object (and the respective BackgroundRemovalManager & BackgroundRemovalByBodyBounds-components).

  6. Hi Rumen, Great tool!!!

    I am using the network demo. I can use it on Quest 2 and deploy it on Quest 2. When I build it for HoloLens 2, the Editor play mode works fine, but when I deploy it to a real device, an error message shows: “Timeout waiting for net-dst”. The server side successfully connects the client’s ControlServer, ColorServer and DepthServer. The Editor simulation has 4 connections on the server; the device deployment only has 3. I think the missing connection is TColorServer.

    Is there anything I can do to dig out the problem?

  7. Hi Rumen, Great Tool package.

    I had a problem with the network demo. I posted a question. It seems that it did not go through. If it did, please delete one of the duplicated posts.

    I used the network demo and deployed it on a real device. The deployment on Quest 2 works like a charm.

    I tried it with MRTK on HoloLens. In Editor play mode, everything works fine. When I try it on a real HoloLens 2 device, I get a “Timeout Waiting for net-dst”. The server-side log shows that the HoloLens 2 connects to ControlServer, ColorServer and DepthServer successfully. I think TColorServer is missing, when I compare with the Editor play-mode log.

    Is that something you have met before?
    What’s next step if I want to find out the real problem?

    • Hi Chao, no, I don’t have any experience connecting HoloLens (or HL2) to the KinectNetServer. After you run the scene, please try to find the Unity log file on the HoloLens, as well as the log file on the KinectNetServer machine, and e-mail them both over to me, so I can take a closer look at what happens under the hood.

  8. Hi Rumen,

    Can you advise what Unity editor version you would suggest for the demos?

    Even though the free package was written in 2019.1.01f1, I couldn’t run it well and got a full black screen in the game scene. It seems to work fine with 2020, but it prompted me if I wanted to do some render pipeline fix.

  9. Hi Rumen! I’m a MA Computational Arts Student and I was hoping to use your SDK for K4A to Unity to do some testing for a marked project. Would this be ok? Best wishes, Elliot

  11. Hi Rumen, with Azure Kinect I often have the problem that the body flips when looking at someone on the ground, similar to what is described here: https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/575. Kalman filtering only helps so much… any advice on how to filter out this flipping? Maybe use another network on top that can tell me which direction the user is facing (I have RGB, IR and depth)? Or only update the model’s nodes when certain joints are tracked? I noticed the flipping is worse when the face is not visible, but it still happens quite a lot, even when the face is visible.

    • Hi Marc, could you please e-mail me and (if possible) provide me some video or recording that depicts the problem or helps me reproduce it?

  12. Hi Rumen!
    I have a problem with building:

    In my project, the scene & Kinect device work fine in the Unity Editor, but don’t work in the build; the lights of the device never turn on when the build is active. I’ve checked my settings: the ‘Architecture’ in Build settings doesn’t have an option named ‘x86_64’, only ‘Intel 64 bit’ & ‘Intel 32 bit’, and the ‘Scripting backend’ in Player settings is already correctly set to ‘Mono’. Neither of the ‘Architecture’ settings works.

    How do I solve this problem?

    My Environment:
    · Unity Editor 2021.2.0f1c1
    · Kinect 2.0

      • Hi Rumen, thanks for the reply!

        After reading the tips behind, I realized that I had disabled the two whole objects with the RS & K4A interfaces under my object with ‘KinectManager.cs’, rather than setting them to ‘Disabled’ in ‘Device Streaming Mode’ (I thought it’d be safer, since I’m using only K2 LOL).
        Then I built it, and it worked! The K2 device lit up and everything ran correctly.

        But I gave it another try: I disabled the RS & K4A objects again and rebuilt it, and the weird thing is it worked as well. I’m pretty sure I rebooted the project a couple of times yesterday; not sure if I should have rebooted my PC back then. The build log hasn’t shown any errors since yesterday.

        Anyway, my issue is weirdly solved. Just feeding it back to you, hoping it could help others. Thanks again!

  13. Hey Rumen, I wanted to ask if it is possible to make the networking scenes work when building for the WebGL platform? If so, where can I start looking into it? I’m not that well versed in networking, so any starting hints/tips would be appreciated.

  14. Hey Rumen,

    I wanted to ask for your help with something. I am trying to activate the game object containing all the Azure Kinect controller stuff during runtime instead of on Awake.
    I am doing this by setting the gameobject to active when a condition is met.

    However, when I do that, Unity freezes for roughly 5-10 seconds as the game object activates, and then it works as expected.

    Do you know of any way we can achieve activating the gameobject during runtime without the lag/freezing?

    Any help you can provide would be greatly appreciated!
    Thanks

    • Hi, I think this is because of loading libraries (both Azure SDK’s and Body Tracking SDK’s), initializing the environment and starting the sensor. If you leave the object with KinectManager to be initialized on Awake, and then call its StopDepthSensors() and StartDepthSensors() methods from your script, when needed, this may improve the user experience.
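      A rough sketch of that suggestion (StopDepthSensors() and StartDepthSensors() are the methods named above; the ‘com.rfilkov.kinect’ namespace is an assumption):

        using UnityEngine;
        using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts

        public class SensorToggle : MonoBehaviour
        {
            // call this when your condition is met, instead of activating/deactivating the game object
            public void SetSensorActive(bool isActive)
            {
                KinectManager kinectManager = KinectManager.Instance;
                if (kinectManager == null)
                    return;

                if (isActive)
                    kinectManager.StartDepthSensors();  // restart the already initialized sensors
                else
                    kinectManager.StopDepthSensors();   // stop the sensors, without unloading the libraries
            }
        }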

  15. Hi Rumen,

    I am trying to use the Azure Kinect sensor sideways. I have followed your instructions from “How to set up sensor’s position and rotation in the scene”. The only thing I would also need is the ability to rotate the colour and depth images by 90 degrees without actually rotating the screen, e.g. by setting up a scene to be 9:16 or 1080×1920.

    With the current setup, it is really difficult to set up UI and other overlaying elements. Furthermore, rotating the screen without rotating it in the Windows settings makes the mouse controls very difficult to operate.

    Thanks

  16. Nice to meet you!

    I have a question: are you using inverse kinematics to make your character move?

    I’m going to use IK to make the character follow the movement, and I wonder if this asset uses IK!

    If you don’t mind, do you have a video showing the motion capture footage of the avatar?

    https://youtu.be/1ftiOSFXL1Q

    Are the characters in the video in the link using IK?

  17. Hello, I bought the asset and it is great.
    I am checking if six players could be on the screen together, with different overlay images. When the first player leaves, will the other players’ overlay objects be affected?

    • I solved the problem by adopting the user ID, instead of the player index.
      However, I’ve got another question: how to flip the image from the Kinect sensor? I want it to show in mirror mode.

      • To flip the image, just set the texture or UI-image scale to -1 in the direction you want.
        To your previous question: I’m not sure what scene and image you’re talking about, but I think it would be better to stick to player indices instead of user IDs, and get the ID by calling GetUserIdByIndex(), because the IDs change when the users enter, exit or re-enter the scene.
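        For instance (a sketch; GetUserIdByIndex() is the method named above, while the RawImage-flip is just the generic Unity way to mirror a texture on screen):

          using UnityEngine;
          using UnityEngine.UI;
          using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts

          public class MirroredOverlay : MonoBehaviour
          {
              public RawImage cameraImage;  // the UI image that displays the camera feed

              void Start()
              {
                  // mirror the displayed image, by negating its scale in the X-direction
                  Vector3 scale = cameraImage.rectTransform.localScale;
                  cameraImage.rectTransform.localScale = new Vector3(-scale.x, scale.y, scale.z);
              }

              void Update()
              {
                  // get the ID of the user at player index 0 (the ID changes when users re-enter)
                  var userId = KinectManager.Instance.GetUserIdByIndex(0);
                  // ... use userId to look up this player's overlay objects
              }
          }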

  18. Hi Rumen,

    I’m running into an issue using the Azure Kinect Examples with the HDRP.
    In the Editor I can see the Azure Kinect feed, but when I make a build, the Azure Kinect doesn’t connect/show. However, when making a development build, it does show/connect to the Azure Kinect.
    I have made a separate project to test where the problem lies, and it seems that when I make a build with HDRP enabled, the Azure Kinect stops connecting/working. But without the HDRP it all works normally.

    This happens in Unity 2021.3, 2022.3 and 2023.1.

    Would you happen to know how to solve this?

    Kind regards,
    Aron

    • Hi Aron, please check if the Azure Kinect Body Tracking SDK is located in ‘C:\Program Files\Azure Kinect Body Tracking SDK’ on the target machine. The K4A-asset copies some files from there, when the KinectManager is instantiated for the first time. If this looks OK, please find the Player’s log-file and e-mail it over to me, so I can take a closer look at what happens under the hood. Here you can see where to look for the Player’s log-file on the target machine.

  19. Hi Rumen,

    Thanks again for this awesome Unity package. I left a review on the Unity Asset Store and want to follow up here. The new Orbbec Femto Mega uses similar hardware to the Azure Kinect, and it provides a wrapper SDK that in theory should work with existing applications (it has all the identical DLLs). We have tested the wrapper SDK with Microsoft’s Azure body tracking library, and it worked. Hope this information helps.

    Thanks again.

  20. Hi Rumen,
    Thanks for the package! It’s been immensely useful with my personal projects! I wanted to ask for some advice regarding Background Removal. Trying to make a simple project where the background is removed and the person can take a photo by taking a screenshot. I’m using an Azure Kinect with the “BackgroundRemovalByBodyIndex” component to achieve this.

    Example screenshot here: https://imgur.com/a/G2pzLVR

    The masking works, but there’s lots of outline around the body. I’ve tried playing with the “Erode Iteration” settings available, but they always end up masking out certain parts of the body. Is there any way I can improve the quality of masking? Like maybe settings on the scripts or a change of IRL environment, etc.

    Much appreciated! 🙂

    • Hi Ashwin, in your use case I would recommend to use the BackgroundRemovalByBodyBounds-component instead of BackgroundRemovalByBodyIndex. You can see it in action in the BackgroundRemovalDemo1-scene. The reason for that is that the silhouette generated by the Body Tracking SDK is not very precise, to say the least.
      Other than that, you can further adjust the outline of the silhouette with ‘Dilate iterations’ and ‘Erode iterations’-settings of BackgroundRemovalManager-component. Please experiment a bit with them. The positive difference between dilate & erode iterations produces contour around the silhouette. You can set the color of this contour as well, making it blend better with the background.
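      For example, such a runtime adjustment could look like this (a sketch; the field names ‘dilateIterations’ and ‘erodeIterations’ are assumptions here - please check the public settings of the BackgroundRemovalManager-component in your version of the asset):

        using UnityEngine;
        using com.rfilkov.kinect;  // assumption: the namespace of the K4A-asset scripts

        public class SilhouetteTuning : MonoBehaviour
        {
            public BackgroundRemovalManager backgroundRemoval;  // reference to the component in the scene

            void Start()
            {
                // hypothetical field names; a positive (dilate - erode) difference
                // produces a thin contour around the user silhouette
                backgroundRemoval.dilateIterations = 3;
                backgroundRemoval.erodeIterations = 2;
            }
        }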

  21. Hi Rumen, your assets are amazing. I have used both your Kinect-v2 and Azure assets.
    I’ve got an issue with ‘Apply muscle limits’. I set all muscle ranges to zero in Muscles & Settings, but after the Kinect runs, the avatar basically moves as usual, and sometimes the avatar’s hands & legs overlap.
    Do you know if there is any limitation on how to enable it, so that the avatar joints following the Kinect will be limited?

    • Hi Rex and sorry for the late reply! How exactly do you set the muscle range to “zero”? Can you please give me an example or post a screenshot, so I can reproduce the issue here.

      • Do the rest of the muscles (of spine, arms & legs) also have limits from 0 to 0?
        Sure. Please feel free to share your project with me. You can find my e-mail in the About-page here.

      • Hi Rumen,

        Thanks for your prompt reply.
        The rest of the muscles (of spine, arms & legs) also have limits applied from 0 to 0.
        I e-mailed you my project. Could you please take a look at it?

        Thanks,
        Rex

  22. Hi Rumen, I’m using the Femto Bolt, but I’m running into some issues. When I use “KinectFittingRoom2” and run it, the RGB camera cannot be invoked and the pose cannot be captured, and most importantly, there is no error! How can I solve it?

    • Hi. I have tested all demo scenes before releasing the update, and the KinectFittingRoom2-scene worked as well. Please e-mail me and, if possible, send me some more details regarding what exactly is working and not working. Some screenshots or a short video of the issue would help, too. Please also find and send me the Editor’s log file after you run (and close) the scene. Here you can see where to look for the Unity log file.

  23. Hi Rumen,
    I am working on a gesture-based data-viz project using Kinect-v2. I used your package previously for Kinect-v1 on Unity 2019, and it was working fine. Now I want to move everything to the Azure Kinect on Unity 2022. When I tried it, it showed a lot of errors and mismatches. Is there a simple way to open old files, which use v1 and Unity 2019, with the Azure Kinect and Unity 2022? Any guidance on this would be really helpful.

    • Hi, what asset did you use before – K2-asset with Kinect-v1 or the oldest one that was for Kinect-v1 only?
      Here are some tips on how to upgrade your project from K2-asset to “Azure Kinect Examples”, but I think you can use them even if you have older code. If you have questions about specific errors, please ask here, or send me an e-mail.

      • @Rumen F
        Previously, I successfully employed K2-Assets with Kinect v1. Now, as I transition to Azure Kinect using k2Asset, I’ve run into a roadblock. In accordance with the Tips page suggestions, I attempted to unfold the KinectController within the scene. Regrettably, I discovered that it functions as a single entity, with no available option to unfold it.

        Is there a simple way to switch between Kinect-v1 and Azure Kinect in the same project?

      • Please copy a KinectController along with its child objects from a demo scene of your choice to your scene, and remove the previous KinectController from it. Then adjust the KinectManager-settings according to your scene’s requirements. Enable (or disable) the needed sensor interfaces and run the scene to check how it works. See this screenshot.

  24. Hi Rumen, I’ve run into an issue. I have an Orbbec Femto Bolt. I imported the Azure Kinect Examples for Unity (v1.19) and the OrbbecFemtoInterface unitypackage (v1.2) according to the tips. It worked well and displayed graphics normally. But when I changed to another scene and played it, or closed the Unity program, the program couldn’t respond and froze. I tried many times, still the same. But when I used the Azure Kinect without the OrbbecFemtoInterface unitypackage (and the K4A examples v1.19), I didn’t encounter this problem. My Unity is 2019.2.4f1. How can I solve it? Any help you can provide would be greatly appreciated!

      • I found that it would freeze the second time I ran the scene. Looks like it can’t be run repeatedly. Maybe something isn’t released correctly.

      • Hi Sheng Jiang. Unfortunately I can’t reproduce your issue here. Please tell me, what is the firmware version of your Femto Bolt camera, and what demo scene are you trying to run? Or just describe to me how to reproduce your issue. Feel free to e-mail me this information. Please also send me the Editor’s log-file after the scene has been opened, run and frozen, as well as the Orbbec log files from the Log-subfolder of your Unity project. They should help me understand what happens under the hood.

      • Hi Rumen. I made a series of attempts. After I changed the Unity version to 2023.2.5, this problem disappeared. The 2019.2.4 version still has this problem. I think I should upgrade my project to 2023.2. Incidentally, I can also use the latest Computer Vision Examples.

  25. Hi Rumen, I have an Orbbec Femto Mega camera connected via Ethernet, but the Unity project doesn’t seem to detect it. It’s detected just fine and works in the Orbbec Viewer and the Orbbec SDK K4A Viewer. I was able to get it working with the Unity project when connected via USB. Do the scripts support Ethernet at all?

    • Hi Usman, Femto Mega’s ethernet connection works only with Orbbec SDK and in Orbbec Viewer. The K4A-wrapper for Orbbec cameras works only with USB-connected cameras, at least for now. This Unity asset uses the K4A-wrapper, to get access to sensor streams and utilize the body tracking SDK. That’s why it doesn’t support Ethernet connections either.

  26. Hi Rumen,
    with an Azure Kinect, I noticed that if I start the sensor in one scene, then stop it, then load another scene and start the sensor again in this scene, the Kinect throws an error. Starting and stopping in one scene works. Can you recreate this?

    • Hi Moritz, do you have one KinectManager in the project, or is there one in each scene?
      What kind of error gets thrown when you try to restart the sensor?
