I'm quite interested in doing mocap with more than one Kinect sensor (possibly v2) and I've been reading about noise due to mutual interference of the sensors. An in-depth and specific study of both the v1 and v2 Kinect is "Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect", which can be found here. According to it, the v2 is much more sensitive to mutually induced noise than the v1; quoting from page 30: "The Kinect ToF camera (v2) shows low interference for the majority of the frames (RMSE < 5 mm), but extreme interference errors for some 25% of the frames (RMSE up to 19.3 mm) that occur in a sequence which has a nearly constant repetition rate. This behavior is most likely due to the asynchronous operation of the two devices." Did you experience noise due to mutual interference of the sensors? Please describe your experience with noise and the way you coped with it. Thanks very much and keep up the good job.

Thanks for the link to that paper, I hadn't seen it yet but will definitely study it. Yes, there certainly can be interference with both Kinect v1 and v2, and the two are different in nature. With v1 you generally get noise on the parts where there is overlap; the noise is there all the time and can be severe enough that the data becomes unusable. This is because the sensor works by projecting a pattern of dots and analyzing how it distorts in space, so multiple devices essentially confuse each other's patterns. There was a paper showing how to reduce the noise by attaching vibrating motors running at different speeds to each sensor, but I've never been able to get stable results with that myself.

The v2 works with about 300 light pulses per second, each of which is extremely short. There is a small chance that multiple devices will emit a pulse at the same time; this is perceived as an IR image that is too bright, resulting in a shift in Z in the depth signal. In practice this happens only occasionally and phases in and out. I haven't done measurements, but in severe cases the error seems to be in the range of 5-10 cm or so.

It may happen less once the devices have warmed up, but I'm not 100% sure. I'm not yet at a point where I can give a definitive answer on how bad the data is when actually using point clouds/skeletons, but it seems more manageable and happens less often than with the v1. And there may be some ways, at least theoretically, to compensate. On a side note, the v2 sensor's video and IR streams also allow for easier intrinsic/extrinsic calibration of multiple devices.

Thanks for sharing your insights/observations. I read a post where someone reported a reduction of the interference after a 20-minute warm-up of the v2 devices. Unfortunately, he also reported in a later post that after a while the interference pattern seemed to recur (I wonder whether the sensor had cooled down due to fan operation, or whether the interference simply follows a random temporal pattern unrelated to sensor temperature). I also read another post where people were speculating about a possible random change of the two modulating frequencies (around 80 MHz) used for phase detection and wavelength-period discrimination (such a frequency change being an autonomous attempt by the sensor to cope with ambient light and dynamic-range issues), and regarding it as a possible cause of the recurring temporal interference pattern. Assuming, as it seems, that there is no external way to synchronize multiple devices or to alter the modulating frequencies to avoid mutual interference: perhaps, using the included HD RGB camera, a visual-hull check could be used to constrain the point cloud and compensate for the occasional severe distortions (up to 20 cm in the worst case, according to that paper) due to random light interference among the sensors, even though such a small number of sensors (2 or 3 with overlapping emitter/receiver lines of sight) might yield too coarse a visual-hull reconstruction, rendering it close to useless.
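Since the thread discusses how the v2 uses two modulating frequencies for phase detection and wavelength-period discrimination, here is a minimal sketch of that dual-frequency continuous-wave ToF idea, assuming an idealized model and illustrative frequency values (80 MHz and 16 MHz; the device's actual frequencies and processing pipeline are not given in this thread). It also shows how a phase error, such as one caused by an overlapping pulse from another sensor, translates directly into a Z shift of roughly the magnitude discussed above.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Illustrative modulation frequencies (assumptions, not the device's
# documented values); the thread mentions frequencies around 80 MHz.
F_HI = 80e6   # fine phase; unambiguous range c/(2f) ~ 1.87 m
F_LO = 16e6   # coarse phase; unambiguous range ~ 9.37 m

def unambiguous_range(f):
    """Distance at which the measured phase wraps back to zero."""
    return C / (2.0 * f)

def phase_for_distance(d, f):
    """Phase (radians, wrapped to [0, 2*pi)) an ideal sensor would measure."""
    return (2.0 * np.pi * d / unambiguous_range(f)) % (2.0 * np.pi)

def depth_from_phases(phi_hi, phi_lo):
    """Resolve the wrapping ambiguity of the fine (high-frequency) phase
    using the coarse one: pick the integer number of wraps k that makes
    the fine depth agree best with the coarse depth."""
    r_hi = unambiguous_range(F_HI)
    d_coarse = phi_lo / (2.0 * np.pi) * unambiguous_range(F_LO)
    d_fine = phi_hi / (2.0 * np.pi) * r_hi
    k = np.round((d_coarse - d_fine) / r_hi)
    return d_fine + k * r_hi

d_true = 4.2  # metres, inside the coarse unambiguous range
d_est = depth_from_phases(phase_for_distance(d_true, F_HI),
                          phase_for_distance(d_true, F_LO))

# A small phase error on the fine measurement (e.g. from another
# device's pulse being perceived as a brighter return) shifts Z:
d_shifted = depth_from_phases(phase_for_distance(d_true, F_HI) + 0.3,
                              phase_for_distance(d_true, F_LO))
shift = d_shifted - d_true  # about 9 cm for a 0.3 rad error
```

With these assumed numbers, a 0.3 rad phase error already shifts the recovered depth by roughly 9 cm, which is in the same ballpark as the occasional 5-10 cm errors reported in the thread; it also makes clear why a random change of the modulation frequencies would alter the temporal pattern of such errors without eliminating them.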