Azure Kinect Examples for Unity

Azure Kinect Examples for Unity, v1.13 (also available in the Unity Asset store) is a set of Azure Kinect (aka ‘Kinect for Azure’, K4A) examples that use several major scripts, grouped in one folder. The package currently contains over thirty demo scenes. Apart from the Azure Kinect sensor, the K4A-package also supports the “classic” Kinect-v2 (aka Kinect for Xbox One) sensor, as well as Intel RealSense D400-series sensors.

The avatar-demo scenes show how to utilize Kinect-controlled avatars in your scenes, the gesture demo shows how to use discrete and continuous gestures in your projects, the fitting-room demos show how to overlay or blend the user’s body with virtual models, the background-removal demo shows how to display user silhouettes on a virtual background, the point-cloud demos show how to render the real environment or the users as meshes in your scene, etc. Short descriptions of all demo scenes are available in the online documentation.

This package works with Azure Kinect (aka Kinect for Azure, K4A), Kinect-v2 (aka Kinect for Xbox One) and Intel RealSense D400-series sensors. It can be used with all versions of Unity – Free, Plus & Pro. Please note that the body-tracking demo scenes don’t work with Intel RealSense D400-series sensors.

How to run the demo scenes:
1. (Azure Kinect) Download and install the latest release of the Azure Kinect Sensor SDK. The download link is below. Then open ‘Azure Kinect Viewer’ to check if the sensor works as expected.
2. (Azure Kinect) Follow the instructions on how to download and install the latest release of the Azure Kinect Body Tracking SDK and its related components. The link is below. Then open ‘Azure Kinect Body Tracking Viewer’ to check if the body tracker works as expected.
3. (Kinect-v2) Download and install Kinect for Windows SDK 2.0. The download link is below.
4. (RealSense) Download and install RealSense SDK 2.0. The download link is below.
5. Import this package into a new Unity project.
6. Open ‘File / Build settings’ and switch to ‘PC, Mac & Linux Standalone’, Target platform: ‘Windows’ & Architecture: ‘x86_64’.
7. Make sure that ‘Direct3D11’ is the first option in the ‘Auto Graphics API for Windows’ list, in ‘Player Settings / Other Settings / Rendering’.
8. Open and run a demo scene of your choice from a subfolder of the ‘AzureKinectExamples/KinectDemos’-folder. Short descriptions of all demo-scenes are available in the online documentation.

* The latest Azure Kinect Sensor SDK (v1.4.0) can be found here.
* The latest Azure Kinect Body Tracking SDK (v1.0.1) can be found here.
* Older releases of Azure Kinect Body Tracking SDK can be found here.
* Instructions on how to install the body tracking SDK can be found here.

* Kinect for Windows SDK 2.0 can be found here.
* RealSense SDK 2.0 can be found here.

* The K4A-asset may be purchased and downloaded from the Unity Asset store. All future updates will be available free of charge.
* If you’d like to try the free version of the K4A-asset, you can find it here.

Free for education:
The package is free for academic use. If you are a student, lecturer or university researcher, please e-mail me to get a free copy of the K4A-asset.

One request:
Please don’t share this package or its demo scenes in source form with others, or as part of public repositories, without my explicit consent.

* The basic documentation is in the Readme-pdf file, in the package.
* The K4A-asset online documentation is available here.
* Many K4A-package tips, tricks and examples are available here.

* If the Unity editor freezes or crashes at scene start, please make sure the path where the Unity project resides does not contain any non-English characters. If it does, please create a new folder and a new Unity project with only English characters in their names, import the K4A-asset and then try again.
* If you get syntax errors in the console like “The type or namespace name ‘UI’ does not exist…”, please open the Package Manager (menu Window / Package Manager) and install the ‘Unity UI’ package. The UI elements are extensively used in the K4A-asset demo scenes. Recently, for unknown reasons, Unity has decided to remove core packages (like the UI-package) from the standard distributions.
* If you get “‘KinectInterop.DepthSensorPlatform’ does not contain a definition for ‘DummyK2’” in the console, please delete ‘DummyK2Interface.cs’ from the KinectScripts/Interfaces-folder. This dummy interface has now been replaced by DummyK4AInterface.cs.
* If the Azure Kinect sensor cannot be started because the StartCameras()-method fails, please re-check step 6 in the ‘How to run the demo scenes’-section above.
* If you get a ‘Can’t create the body tracker’-error message, please re-check step 2 in the ‘How to run the demo scenes’-section above. Also check whether the Body Tracking SDK is installed in its default folder.
* If the body tracking stops working at run-time or the Unity editor crashes without notice, update to the latest version of the Body tracking SDK. This is a known bug in BT SDK v0.9.0.
* The RealSense-interface is still in experimental state. Known issues are that depth and color frames are out of sync, and the body tracking doesn’t work with RealSense D400 sensors.
* If there are errors like ‘Shader error in [System 1]…’ while importing the K4A-asset, please note these are not really errors, but shader issues due to the missing HDRP & VFX packages. You only need these packages for the point-cloud demo. All other scenes should start without any issues.
* If there are compilation errors in the console, or the demo scenes remain in the ‘Waiting for users’-state, make sure you have installed the respective sensor SDKs and the other needed components. Please also check that the sensor is connected.

What’s New in Version 1.13:
1. Added a third overlay demo scene, to utilize the HandOverlayer-component (thanks to Edgaras Art).
2. Added photo-booth overlay demo scene, to demonstrate how to manage multiple joint overlays in 2D mode, gesture detection and hand grip interaction, all in one scene.
3. Added a color-camera IR-frame transformation API, to be used when needed.
4. Updated the scene-mesh and user-mesh scenes, components and shaders, to support HDRP & URP.
5. Replaced UserBodyBlender with SceneBlendRenderer-component in the fitting room demo-scenes, to support HDRP & URP (thanks to Fernando Gonzalez).
6. Updated the BackgroundRemovalByDist-component to support different max left & right distances (thanks to Mark Dodson).
7. Updated the AvatarController-component to use unscaled time for smoothing (thanks to Ruben Gonzalez).
8. Fixed ‘Point cloud player list’-issue when using multiple sensors (thanks to Ashlee Lim).
9. Fixed BackgroundRemovalByBodyBounds-component in camera ortho mode (thanks to sukim).

Videos worth more than 1000 words:
Here is a holographic setup, created by i-mmersive GmbH, with Unity 2019.1f2, an Azure Kinect sensor and “Azure Kinect Examples for Unity”:


110 thoughts on “Azure Kinect Examples for Unity”

  1. Hey Rumen,
    Would love to try out this package, but I’m not seeing where I can download / access it? Is it only visible to certain readers?

  2. Hi Rumen.
    I’m working on Azure Kinect and Unity and really, really want to try it out. Where can I download this package?

    Kien Le

  3. Hey, I’m using 2019.1.10f1 and none of the avatar demos are working with the Azure Kinect. Unity immediately quits after pressing Play. I have installed and tested both the Sensor and Body Tracking SDKs. Do you have any suggestions? I know you said there was a bug in the Body Tracking SDK. Does anyone know if this has been fixed yet?

    • Hi Chris, sorry, but the K4A-asset is quite new and issues are possible. Your issue is a bit odd though. May I ask you to send me the Unity editor’s log file, along with the versions of the Sensor SDK and Body Tracking SDK you have installed. Here is where to find the log-file: My e-mail address is on the About-page of this website.

      • I ran into the same issue with Azure Kinect Body Tracking SDK v0.9.0 and Azure Kinect Examples for Unity v1.1. After I updated the Body Tracking SDK to v0.9.1, the problem was solved.

      • Hey, sorry for the delayed response, but updating to the v0.9.1 Body Tracking SDK fixed the issue. Works well now, thanks!

    • Hi Chris, this might not be your issue, but I noticed that if you have the Kinect Sensor already open when you try to run any of the Demos, it will crash immediately. So make sure you don’t have any other service or process that has opened the sensor before trying to use the Unity Project.

      • Andrew, thank you for providing this great tip! It may be useful to many of the users of the K4A-sensor. As a matter of fact, I haven’t heard anything from Chris since he reported this issue. I suppose the issue got resolved by itself.

  4. Hi Rumen, thanks a lot for this! Quick question tho.
    In the Kinect4Azure interface, when I play a recorded MKV with a depth track that works in the official Azure Kinect Viewer, I get
    ArgumentException: Result is not of a recognized result type.
    Also depth_track_enabled of Playback is never set to True (color_track_enabled: True, depth_track_enabled: False, ir_track_enabled: False, imu_track_enabled: False, depth_delay_off_color_usec: 566)
    Any idea? Not sure if posting here is the best way to report something like this.

    • Hi Gauthier, yes, I know what you mean. This is a bug in the C# wrapper of the Sensor SDK. It is fixed in v1.2 of the K4A-asset that I published today. Please update from the download link in your invoice, or from your account page (if you’ve created one). Import the updated asset, try again, and finally please tell me whether the issue is resolved or not.

  5. Hi Rumen,
    Regarding those “HDRP & VFX packages”:
    Could you let us know which packages we need to download from the Asset store to avoid the shader issues?

    • Hi Seb, what shader issues do you mean? If you mean the shader errors while importing the K4A-asset, just ignore them. They will not affect the demo scenes or script compilation. The HDRP & VFX packages are only needed for the point-cloud demo, along with the visual effects that produce the errors while importing. If you want to run this demo scene, see the instructions here: I personally would recommend creating a separate Unity project for it.

  6. Hello again, since I already purchased the Unity examples package, would it be possible for me to get a copy of the Unity Asset store version for free? I know it is the same, but it would make it a lot easier to integrate into future projects as the versions change.


    • Hi, please e-mail me your invoice number and PayPal account. I’ll refund you the money and then you can purchase the same asset on the Asset store.

  7. Hi Rumen –
    Thanks for making this asset! I’m getting the ‘Can’t create the body tracker’ error when I try to run the demo scenes, though the sensor does get turned on. The Sensor SDK and Body Tracking SDK are working fine too. I’m on Windows 10, Unity 2019.2.2f1, Sensor SDK 1.2.0, Body Tracking SDK 0.9.2. Any suggestions?

    • Hi Sean, please e-mail me and attach a screenshot of your project’s root folder, so I can take a look. This is the parent folder of the Assets-folder. Please also tell me the path, where the body tracking SDK is installed on your machine, and don’t forget to mention your invoice number, as well.

  8. Hi Rumen, I have a problem with the sensor recognition once I build my project. The sensor is recognized perfectly in the editor, but whenever I build the project there is no image. Thanks in advance.

      • Hi there, I am also having this issue and can’t seem to figure out what’s going on. Would love to be in the loop on debugging this if possible. Aside from this, everything has been working great so far!

      • Hi Mark, please e-mail me and send me the Player’s log file (see the link above), so I can look more closely at what’s going on at run-time.

  9. I have two errors not covered in the docs. First, I installed the latest versions of the tracking library and viewer as of today, and verified they work. I’m using Unity 2019.1.8f1 on a fresh project.

    AzureKinectOpenDeviceException: result = K4A_RESULT_FAILED
    and then
    Failed opening Kinect4AzureInterface, device-index: 0

    • Actually, the error depends on what scene is open. The lights on the Azure Kinect are all on when the scene plays, and the last line of the console is ‘1 sensor(s) open’, but it never tracks anyone. In the background-removal 1 demo the first error is: Can’t create body tracker for Kinect4AzureInterface0!

      • If only the body tracking doesn’t work, please make sure you have installed the Azure Kinect Body Tracking SDK into its default folder, i.e. ‘C:\Program Files\Azure Kinect Body Tracking SDK’. The K4A-asset expects to find it there. If it must be installed somewhere else, this would require a slight change in the code.

        If the sensor doesn’t work at all (i.e. scenes like BlobDetectionDemo or SceneMeshDemo don’t work either), please make sure you have installed Azure Kinect Sensor SDK v1.2.0. If the problem persists, please e-mail me for further analysis of the issue. I would appreciate it if you send me the Editor log as well, so I can check more closely what happens. Here you can see where to find the Editor log:

  10. That was my problem: my C drive is full, so I’d installed it on a separate drive. Changing it fixed everything, thank you so much!

    Unrelated to your product, it’s depressing how narrow the field of view of the depth sensor is on the Azure Kinect; you can’t fit 6 people in its tracking area like you could with the previous Kinect.

    Thanks again, your product helps so many people. I’ve used it since Kinect-v1 and really appreciate your effort.

  11. Hello Rumen F
    I’m very happy to use your Unity Azure Kinect package. Now I have some problems to ask you about. I want to use multiple devices in Unity. I followed your tips to set up the KinectController prefabs, but the second device doesn’t work. There is an error about the second device: “Can’t create body tracker for Kinect4AzureInterface1!” I tried to disable the create-body-tracker function and then test the second device, but it doesn’t work either. What do I need to do to fix this error and make the second device work?

      • Thanks very much! I bought in the Asset Store and works GREAT – excellent work! I was looking through the docs and wasn’t able to locate the licensing but I did find your repo with the free (limited) code base and it was MIT – is this version also covered under MIT? Thanks again – really appreciate your work!

  12. Yes, I saw your requests and notes on not sharing and will DEFINITELY not be sharing and will respect that for sure. Thank you for the work you’ve done, it’s a blessing!

  13. Hi! I’m a university student, and I’m going to do a project about motion tracking.
    I read in the document that it is free to use for educational purposes?

    • Yes, please send me an e-mail request for a free copy of the K4A-asset from your university e-mail address. This is to prove you really study or work there.

    • Sorry, but I don’t send any packages to e-mail addresses, without getting an e-mail from them first. This is just a basic security measure.

  14. Hi Rumen! Your package has been a huge help for my project. I have been trying to set up pose detection for multiple models by instantiating a pose detector for each new avatar created by the UserAvatarMatcher script. But even though I am able to instantiate multiple pose detectors and assign the appropriate PoseModelHelper and AvatarController scripts to each pose detector, the pose detection doesn’t work. Is it even possible to run pose detection for multiple avatars in the scene? If so, do you have any idea what I might be doing wrong?

    • Hi Mark, could you please e-mail me and send me your script (along with the meta-file) and the scene, where the script is used. I’ll try to find out what goes wrong. Please also mention your invoice number in the e-mail.

  15. Is there a way to replace the live Azure Kinect data stream with a pre-recorded data stream? I’m trying to stress test my scene running for many hours, but would prefer not to rely on actively moving in front of the camera for a long time.

  16. Can we get something like the interaction demo from Kinect-v2 back?
    That’s the most important part (for me) 🙂

  17. Hi, thanks for the great work! Just want to ask: is there an API in KinectManager for depth-to-color and color-to-depth transformations? I wanted to get the depth2color buffer, but it returns null.

    • ulong frameTime = 0, lastUsedFrameTime = 0;

      // in Start()
      KinectInterop.SensorData sensorData = KinectManager.Instance.GetSensorData(0);
      sensorData.sensorInterface.EnableColorCameraDepthFrame(sensorData, true);

      // in Update()
      ushort[] transformedDepthFrame = sensorData.sensorInterface.GetColorCameraDepthFrame(ref frameTime);
      if (transformedDepthFrame != null && frameTime != lastUsedFrameTime)
      {
          lastUsedFrameTime = frameTime;
          // do something with the transformed depth frame
      }

      The same approach can be used for the depth-camera transformed color frame as well.
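
      A minimal sketch of that variant (note: EnableDepthCameraColorFrame() and GetDepthCameraColorFrame(), as well as the byte[] return type, are assumed here by analogy with the depth-frame pair above – please check the exact names and types in the sensor interface):

      // in Start() – request the color frame, transformed into depth-camera space (assumed method name)
      sensorData.sensorInterface.EnableDepthCameraColorFrame(sensorData, true);

      // in Update() – poll for a new transformed color frame (assumed method name and return type)
      byte[] transformedColorFrame = sensorData.sensorInterface.GetDepthCameraColorFrame(ref frameTime);
      if (transformedColorFrame != null && frameTime != lastUsedFrameTime)
      {
          lastUsedFrameTime = frameTime;
          // do something with the transformed color frame
      }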

  18. I want to access only the user’s head portion from the point cloud data. I am using the UserMeshDemo of PointCloudDemo. How can I do that?

    • Sorry, I missed this question before. What I would do in your place is get the user’s head position, like this: ‘Vector3 headPos = KinectManager.Instance.GetJointKinectPosition(userId, KinectInterop.JointType.Head, true);’. Then set it as a shader parameter in UpdateMesh(). And finally, in the shader’s vert() function, filter out all vertices that are not within some distance of the current head position.
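
      A rough C#-side sketch of those two steps (assumptions: ‘userId’ comes from your own user-tracking code, ‘meshMaterial’ is the user-mesh material, and ‘_HeadPos’ / ‘_HeadRadius’ are hypothetical shader properties the modified shader would need to declare):

      // e.g. inside UpdateMesh() of the user-mesh component
      Vector3 headPos = KinectManager.Instance.GetJointKinectPosition(userId, KinectInterop.JointType.Head, true);
      meshMaterial.SetVector("_HeadPos", headPos);  // hypothetical shader property
      meshMaterial.SetFloat("_HeadRadius", 0.3f);   // e.g. keep vertices within ~30 cm of the head

      // then, in the shader’s vertex function, clip everything outside the radius, e.g.:
      // if (distance(vertexWorldPos, _HeadPos) > _HeadRadius) collapse or clip the vertex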

      • Thank you!

        Could you please clarify the last part? I couldn’t find the vert() method.

  19. Hello
    I transferred from the Kinect-v2 to the Azure Kinect recently.
    I have a question about the “Can’t create body tracker for device” bug.
    The SDK is installed, but the viewer cannot open in the default GPU mode,
    and I can open the viewer with the command “k4abt_simple_3d_viewer.exe CPU”.
    How do I use the CPU body tracker in the asset?

    • Hi, please open DepthSensorBase.cs in the AzureKinectExamples/KinectScripts/Interfaces-folder and search for ‘new BodyTracking(’. Then change the 3rd argument from ‘K4ABT_TRACKER_PROCESSING_MODE_GPU’ to ‘K4ABT_TRACKER_PROCESSING_MODE_CPU’, save the change, return to Unity and try again. This should switch the body tracking to CPU-only mode.
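
      The change amounts to a single constant, roughly like this (an illustrative sketch only – the variable name and the other constructor arguments are placeholders, not the asset’s actual code):

      // in DepthSensorBase.cs – before (GPU mode, the default):
      // bodyTracker = new BodyTracking(..., K4ABT_TRACKER_PROCESSING_MODE_GPU, ...);
      // after (CPU-only mode):
      // bodyTracker = new BodyTracking(..., K4ABT_TRACKER_PROCESSING_MODE_CPU, ...);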

  20. Hey Rumen,
    I am using the background removal with the camera orientated clockwise, and am getting some weird offset of the masking. It looks like the mask and the RGB feed are just slightly off in one direction. Have you tested the background removal with the camera orientated this way? And if you have, did you need to change anything?
    Any help would be greatly appreciated.

  21. Hi Rumen,
    I have a question about excessive CPU utilization. My hardware configuration is an i5-7500 and a GTX 1660 Ti 6G. When I open any scene (like OverlayDemo (v1.8)), the CPU utilization rate is over 80% all the time, while the GPU utilization is just below 20%. If I open another program at the same time, the CPU utilization can reach 100% and the program crashes. When I run the Azure Kinect Body Tracking Viewer, the CPU utilization is about 50%, and the GPU utilization is below 20%.
    Is this a normal phenomenon? Or is there anything that needs setting? I kept the default setting of ‘K4ABT_TRACKER_PROCESSING_MODE_GPU’.
    Thanks a lot!

      • Thank you for this question! I have not looked at the overall CPU utilization so far.

        Please look for ‘Thread.Sleep(’ in KinectManager.cs (in the KinectScripts-folder) and DepthSensorBase.cs (in the KinectScripts/Interfaces-folder). Then experiment a bit by changing the parameter of this method invocation to 10 or more (milliseconds). Then save the script, go back to Unity, run the overlay demo again and check if the CPU utilization has changed. Repeat this for several different sleep-time values. I would be happy if you share here what sleep time you find optimal.

      • I set the sleep time to 8 ms; now the CPU utilization is about 50% with the i5-7500. Thank you.
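
        For reference, the tweak boils down to the parameter of the existing sleep calls in the polling loops (8 ms is the value reported above; larger values lower the CPU load at the cost of some extra frame latency):

        // in the poll loops of KinectManager.cs and DepthSensorBase.cs
        System.Threading.Thread.Sleep(8);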

  22. Hi Rumen!

    Thanks for your awesome asset!
    I’m looking for some help to do this:
    Is there a way to show a few body segments of the human body in real time, and hide the other body segments, using the Background Removal feature? Here is an image of what I’m trying to do:

    Do I have to use a special Mask?

    Or Screenspace Textures, like this:

    Or Clipping, like this:

    Any guide to do this?

    Best regards,


    • Hi Cris, honestly, I don’t know how to answer your question. You could try the different options and compare them. I personally would modify ForegroundFiltBodyShader.shader to filter only the points that are near some specific joints, like the head, elbows, wrists, hands, knees and ankles. Of course, this may not produce 100% the expected results, because if, for instance, the user’s arms are near the body, parts of the body will be close to the elbows, wrists and hands, and will be visible as well.
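
      A C#-side sketch of that idea (a hypothetical helper component; it only feeds the joint positions to the shader – the distance filtering itself would still have to be added to ForegroundFiltBodyShader.shader, and ‘_JointCount’ / ‘_JointPositions’ are assumed shader properties):

      using UnityEngine;

      public class VisibleJointsFeeder : MonoBehaviour
      {
          public Material foregroundMaterial;  // the material that uses the modified shader

          // joints whose surroundings should stay visible
          private readonly KinectInterop.JointType[] joints = {
              KinectInterop.JointType.Head,
              KinectInterop.JointType.WristLeft, KinectInterop.JointType.WristRight,
              KinectInterop.JointType.KneeLeft, KinectInterop.JointType.KneeRight
          };

          private readonly Vector4[] jointPos = new Vector4[16];

          void Update()
          {
              KinectManager km = KinectManager.Instance;
              if (km == null || !km.IsUserDetected())  // IsUserDetected() & GetPrimaryUserID() are assumed from the older K2-asset API
                  return;

              ulong userId = km.GetPrimaryUserID();
              int count = 0;

              foreach (var joint in joints)
                  jointPos[count++] = km.GetJointKinectPosition(userId, joint, true);

              foregroundMaterial.SetInt("_JointCount", count);                  // assumed shader property
              foregroundMaterial.SetVectorArray("_JointPositions", jointPos);  // assumed shader property
          }
      }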

      • OK Rumen, thanks for your help. It’s a good step to start with. Maybe we will use some digital image-processing tools as well.

        Best regards,


  23. Hi,

    Thanks for an awesome plugin.
    Using K4A, I’ve gotten the skeleton-tracking latency to a minimum by using a low-res depth texture and not streaming any RGB textures; it works fine in the Unity editor.

    However, when I make a Windows build, latency increases a whole lot. I’m not GPU- or CPU-bound; it just feels a lot slower. Any ideas as to what could be causing this?


    • Hi Tom, hmm, this doesn’t sound right. It should work the same way as (or better than) in the Editor. Please locate the Player’s log and send it over to me, so I can take a look, and if possible, attach two short video clips depicting the latency in the Editor and in the Player. Don’t forget to mention your invoice number in the e-mail, too. Here is where to find the Unity log files:

      • The discussion continued through mail, and we found a solution: turning off VSync in the build and then setting VSync to “Fast” in the Nvidia driver.

  24. Hello, I would like to know if it is possible not to have the mirror effect in the user mesh? Is there a solution you could offer?

  25. Hello Rumen.
    We have been developing products using the Kinect-v2.
    Now we are testing a new product using the Azure Kinect.
    The main feature of the product is that it measures the angles between the joints.
    With the Kinect-v2, the angles were calculated from the coordinates of the joint objects obtained from the GetJointColorOverlay-method, and the angles were accurate. But with the Azure Kinect they aren’t.
    I measured them using the GetJointPosition-method,
    and the difference between the actual user’s motion and the measured value is about 10 degrees.
    I wonder if there is a solution. Thank you.

      • The GetJointColorOverlay()-method outputs x and y values in pixel coordinates and a z value in raw skeleton coordinates (e.g. 146.2, 235.4, 2.1), so I can’t calculate the angle of a joint accurately.
        I also activated the code in the GetJointColorOverlay()-method that calculates the distance between the camera plane and the joint (Vector3), but the color-overlay output is the same…
        So I get the raw joint coordinates using the GetJointPosition()-method, and the angle is correct.
        (My mistake: I referred to the GetJointKinectPosition()-method in my previous comment.)

  26. Hi Rumen, I noticed in the documentation a component called “KinectUserBodyMerger”, which seems interesting for using multiple sensors for superior body tracking. I am using the most recent version of your tool and the SDKs, but I cannot see this component as a script or anything I can add to my scene. Is there somewhere else I need to look to find it?

      • It means that when I’m using multiple cameras, I use the GetJointKinectPosition() or GetJointPosition() method, and the method returns the averaged position of a body joint after the bodies get merged. Is that right?

      • Yes, that’s right. If you need the joint positions as detected by specific cameras, you can use GetSensorJointKinectPosition() or GetSensorJointPosition().
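
        A small sketch contrasting the two (the merged-skeleton call follows the usage earlier in this thread; the parameter order of the per-sensor variant is an assumption – please check the actual signatures in KinectManager.cs):

        ulong userId = KinectManager.Instance.GetPrimaryUserID();  // assumed helper

        // merged position, averaged across the calibrated sensors
        Vector3 mergedPos = KinectManager.Instance.GetJointKinectPosition(userId, KinectInterop.JointType.Head, true);

        // position as seen by one specific sensor (index 0) – assumed parameter order
        Vector3 sensor0Pos = KinectManager.Instance.GetSensorJointKinectPosition(0, userId, KinectInterop.JointType.Head, true);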

  27. Hi Rumen, the important feature, a classifier for the hand states, has not received a response on the feedback forum for months. But we urgently need this simple feature, just “grab” and “release”, so that we can do some simple interaction. Do you have any ideas or alternatives for implementing the grab-and-release feature? Thank you very much.

    • Hi, unfortunately there is no reliable alternative to the hand-states classifier, as far as I know. The COVID-19 situation in the USA and Europe is only making things more difficult and introduces huge delays.

  28. Thank you so much, Rumen. The experimental interaction components really help us solve the interaction problem.

    • Thank you, too! It’s not perfect, but it was the best I could do while we wait for a real solution, in the form of an SDK-provided hand-state classifier.

  29. Hi Rumen,
    I am currently working on a project that measures human body joints. Along with those measurements, some measurements require angle data, which uses the positions of joints such as the wrist, hand tip and thumb. Sadly, I came across a problem: the newer Azure Kinect sensor has poorer accuracy, especially for the hand tip and thumb, than the old one. 🙁 The Kinect-v2 was accurate and reliable enough to measure the angle, while the Azure one is not.

    I wonder if there is a solution, or a clever method to raise the accuracy of the positions of the hand tip and thumb.

    Thank you.

    • Hi Min Oh, sorry for the late reply. Unfortunately, I’m not aware of such a method. But I don’t think the Azure Kinect’s body tracking has poorer accuracy for the hand joints than the Kinect-v2. With both sensors, the tracking of these joints is far from perfect, and as far as I know, this is due to the relatively small area the hands (along with the fingers) take up in the overall image.

      • Thank you for replying! Guess I have to find one out myself… Still, I appreciate your concern about my problem. 🙂


  30. Hi Rumen, when I recently tested multiple people with a single device, I found that the data of two people would get crossed. Looking at the code, I found that the body index changes when the number of people changes, and the data transmission depends on the body index. How can I solve this problem? Thank you.

    • Please e-mail me and tell me some more details about your issue. If the body indices change, probably the IDs of the tracked people have changed, too.

    • Face tracking is no longer part of the body tracking SDK. As you can see in the tutorial above, it uses the Azure cognitive services instead, which are 1) slow, because they must be invoked over the Internet, and 2) require an Azure account and payment plan. That’s why I have not included face tracking in the K4A-asset yet. If you want to use the Azure face API with any kind of web camera (not just the Azure Kinect), please look at this repo:

      • Thank you for the informative reply!
        I have another question, about using multiple Kinects. It seems that each device renders a mesh individually. Is there any way to render all of the point clouds together as a single mesh?

      • The point clouds come from different sensors; that’s why they are rendered individually. I’ve got your e-mail, by the way. The problem may occur because of the calibration or syncing between the sensors. Let me research the issue a bit, and then I’ll get back to you in a few days.

  31. Greetings!
    I think it’s so great!
    These are the assets I need now!

    I am currently using Unity 2020.1.0f1 version.
    I’ve searched for various sample projects through multiple channels now, but errors have occurred, so I’d like to purchase your assets rather than spend any more time.

    But I want to get some confirmations before buying!

    I am in a situation where I need to use VFX Graph.
    I want to know if your assets work without problems in a VFX Graph preview project in Unity 2020.1.0f1.

    The Windows version is Windows 10.

    The OS build version is 19041.450.

    Version 1.4.1 is installed, as verified through the Azure Kinect Viewer.

    Will your assets work normally in the environment described above?

  32. Hi Rumen,
    I love this asset; it works beautifully and is extremely helpful. Thank you very much!
    I was wondering if there is a way to directly map a rig created in Blender to the joints tracked by the Azure Kinect, to overlay an avatar on the user. Essentially, overlaying the rig’s bones with the blue lines that the skeleton overlayer displays, and the bone heads and tails with the green spheres (tracked joints)? Currently, when I use the humanoid Mecanim rig and the UserAvatarMatcher script, there is always a bit of an offset.
    Thank you for your time!

    • Hi, please look at the fitting-room demo scenes (in your case KinectFittingRoom2) and try to replace the ModelMF-object in the scene with your model. Use the same components as ModelMF for it. Please note that your model should be set up with a Humanoid rig as well. Hope this does what you need.
