Kinect 1.8 color frame and depth frame are not aligned

My program has a problem with poor alignment between the depth and color images.

The player mask is not in the same place as the person (see the image below).

void _AllFreamReady(object sender, AllFramesReadyEventArgs e)
{
    using (ColorImageFrame _colorFrame = e.OpenColorImageFrame())
    {
        if (_colorFrame == null) // if the frame is empty, do nothing
        {
            return;
        }

        // create a pixel array for one image frame, sized to the frame captured from the stream
        byte[] _pixels = new byte[_colorFrame.PixelDataLength];

        // copy the pixels into the array
        _colorFrame.CopyPixelDataTo(_pixels);

        // each pixel has 4 values: Red, Green, Blue, or empty
        int _stride = _colorFrame.Width * 4;

        image1.Source = BitmapSource.Create(_colorFrame.Width, _colorFrame.Height,
            96, 96, PixelFormats.Bgr32, null, _pixels, _stride);

        if (_closing)
        {
            return;
        }

        using (DepthImageFrame _depthFrame = e.OpenDepthImageFrame())
        {
            if (_depthFrame == null)
            {
                return;
            }

            byte[] _pixelsdepth = _GenerateColoredBytes(_depthFrame, _pixels);
            int _dstride = _depthFrame.Width * 4;

            image3.Source = BitmapSource.Create(_depthFrame.Width, _depthFrame.Height,
                96, 96, PixelFormats.Bgr32, null, _pixelsdepth, _dstride);
        }
    }
}

private byte[] _GenerateColoredBytes(DepthImageFrame _depthFrame, byte[] _pixels)
{
    short[] _rawDepthData = new short[_depthFrame.PixelDataLength];
    _depthFrame.CopyPixelDataTo(_rawDepthData);

    byte[] _dpixels = new byte[_depthFrame.Height * _depthFrame.Width * 4];

    const int _blueindex = 0;
    const int _greenindex = 1;
    const int _redindex = 2;

    for (int _depthindex = 0, _colorindex = 0;
         _depthindex < _rawDepthData.Length && _colorindex < _dpixels.Length;
         _depthindex++, _colorindex += 4)
    {
        // the low-order bits of each raw depth value hold the player index
        int _player = _rawDepthData[_depthindex] & DepthImageFrame.PlayerIndexBitmask;

        // copy the color pixel only where a player was detected
        if (_player > 0)
        {
            _dpixels[_colorindex + _redindex] = _pixels[_colorindex + _redindex];
            _dpixels[_colorindex + _greenindex] = _pixels[_colorindex + _greenindex];
            _dpixels[_colorindex + _blueindex] = _pixels[_colorindex + _blueindex];
        }
    }

    return _dpixels;
}

Program output

The RGB and depth data are not aligned. This is due to the physical placement of the depth sensor and the RGB camera on the Kinect: they sit in different positions, so you cannot expect aligned images from two different viewpoints.

However, your problem is a common one, and it was solved by KinectSensor.MapDepthFrameToColorFrame, which was deprecated after SDK 1.6. Now what you need is the CoordinateMapper.MapDepthFrameToColorFrame method.

The Coordinate Mapping Basics - WPF C# sample shows how to use this method. You can find the relevant parts of the code below:

// Intermediate storage for the depth data received from the sensor
private DepthImagePixel[] depthPixels;

// Intermediate storage for the color data received from the camera
private byte[] colorPixels;

// Intermediate storage for the depth to color mapping
private ColorImagePoint[] colorCoordinates;

// Inverse scaling factor between color and depth
private int colorToDepthDivisor;

// Format we will use for the depth stream
private const DepthImageFormat DepthFormat = DepthImageFormat.Resolution320x240Fps30;

// Format we will use for the color stream
private const ColorImageFormat ColorFormat = ColorImageFormat.RgbResolution640x480Fps30;

//...

// Initialization
this.colorCoordinates = new ColorImagePoint[this.sensor.DepthStream.FramePixelDataLength];
this.depthWidth = this.sensor.DepthStream.FrameWidth;
this.depthHeight = this.sensor.DepthStream.FrameHeight;

int colorWidth = this.sensor.ColorStream.FrameWidth;
int colorHeight = this.sensor.ColorStream.FrameHeight;

this.colorToDepthDivisor = colorWidth / this.depthWidth;

this.sensor.AllFramesReady += this.SensorAllFramesReady;

//...

private void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    // in the middle of shutting down, so nothing to do
    if (null == this.sensor)
    {
        return;
    }

    bool depthReceived = false;
    bool colorReceived = false;

    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (null != depthFrame)
        {
            // Copy the pixel data from the image to a temporary array
            depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);
            depthReceived = true;
        }
    }

    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (null != colorFrame)
        {
            // Copy the pixel data from the image to a temporary array
            colorFrame.CopyPixelDataTo(this.colorPixels);
            colorReceived = true;
        }
    }

    if (true == depthReceived)
    {
        this.sensor.CoordinateMapper.MapDepthFrameToColorFrame(
            DepthFormat,
            this.depthPixels,
            ColorFormat,
            this.colorCoordinates);

        // ...

        int depthIndex = x + (y * this.depthWidth);
        DepthImagePixel depthPixel = this.depthPixels[depthIndex];

        // scale color coordinates to depth resolution
        int X = colorImagePoint.X / this.colorToDepthDivisor;
        int Y = colorImagePoint.Y / this.colorToDepthDivisor;

        // depthPixel is the depth for the (X,Y) pixel in the color frame
    }
}
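Applied to the question's code, the mapping replaces the assumption that depth index i corresponds to color index i. Here is a minimal sketch of a mask builder driven by the mapped coordinates (the method name GenerateMaskedBytes and the explicit colorWidth/colorHeight parameters are mine, not part of the sample):

// Sketch: use the mapped coordinates instead of reusing the depth index
// directly as a color index. Assumes colorCoordinates was filled by
// MapDepthFrameToColorFrame as above and colorPixels holds Bgr32 data.
private byte[] GenerateMaskedBytes(int colorWidth, int colorHeight)
{
    byte[] output = new byte[colorWidth * colorHeight * 4];

    for (int depthIndex = 0; depthIndex < this.depthPixels.Length; ++depthIndex)
    {
        // the player index is already decoded on DepthImagePixel
        if (this.depthPixels[depthIndex].PlayerIndex > 0)
        {
            ColorImagePoint p = this.colorCoordinates[depthIndex];

            // the mapper can return points outside the color frame
            if (p.X >= 0 && p.X < colorWidth && p.Y >= 0 && p.Y < colorHeight)
            {
                int colorIndex = (p.X + p.Y * colorWidth) * 4;
                output[colorIndex + 0] = this.colorPixels[colorIndex + 0]; // B
                output[colorIndex + 1] = this.colorPixels[colorIndex + 1]; // G
                output[colorIndex + 2] = this.colorPixels[colorIndex + 2]; // R
            }
        }
    }

    return output;
}

Since the depth stream is 320x240 and the color stream 640x480, this fills at most one of every four color pixels; the approach described in the next answer instead builds the mask at depth resolution and scales the color coordinates down with colorToDepthDivisor.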

I am working on this problem myself. I agree with VitoShadow that one solution is coordinate mapping, but a part that was not posted is the ratio between the mismatched depth and color resolutions (this.colorToDepthDivisor = colorWidth / this.depthWidth;). This is used together with a shift of the data (this.playerPixelData[playerPixelIndex - 1] = opaquePixelValue;) to account for the mismatch.
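For context, the SDK's GreenScreen-WPF sample uses that divisor and shift roughly as follows (a sketch from memory of the sample's mask-fill loop, using this answer's names: playerPixelData is an int[] alpha mask at depth resolution and opaquePixelValue a constant marking opaque pixels):

// Sketch of the mask fill, following the GreenScreen-WPF sample.
for (int y = 0; y < this.depthHeight; ++y)
{
    for (int x = 0; x < this.depthWidth; ++x)
    {
        int depthIndex = x + (y * this.depthWidth);
        DepthImagePixel depthPixel = this.depthPixels[depthIndex];

        if (depthPixel.PlayerIndex > 0)
        {
            ColorImagePoint colorImagePoint = this.colorCoordinates[depthIndex];

            // scale the color coordinates down to depth resolution
            int colorInDepthX = colorImagePoint.X / this.colorToDepthDivisor;
            int colorInDepthY = colorImagePoint.Y / this.colorToDepthDivisor;

            // make sure the shifted pixel stays inside the mask
            if (colorInDepthX > 0 && colorInDepthX < this.depthWidth
                && colorInDepthY >= 0 && colorInDepthY < this.depthHeight)
            {
                int playerPixelIndex = colorInDepthX + (colorInDepthY * this.depthWidth);

                // mark this pixel and its left neighbor opaque; the one-pixel
                // shift helps fill gaps left by the imperfect mapping
                this.playerPixelData[playerPixelIndex] = opaquePixelValue;
                this.playerPixelData[playerPixelIndex - 1] = opaquePixelValue;
            }
        }
    }
}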

Unfortunately, this can create a border around the masked image where the depth frame does not stretch to the edges of the color frame. I am trying to avoid skeleton mapping and optimize my code by tracking the depth data with Emgu CV, passing a point as the center of the ROI of the color frame. I am still working on it.
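As an illustration of that ROI idea, here is a rough sketch (the PlayerRoi helper, the roiSize parameter, and the 320x240/640x480 formats are my assumptions, not the poster's code): it finds the centroid of the player pixels in depth space, maps that single point to color space with CoordinateMapper.MapDepthPointToColorPoint, and centers a square ROI on it.

// Sketch: derive a color-frame ROI from the player pixels in the depth frame.
using System.Drawing;
using Microsoft.Kinect;

static Rectangle? PlayerRoi(DepthImagePixel[] depthPixels, int depthWidth,
                            CoordinateMapper mapper, int roiSize)
{
    long sumX = 0, sumY = 0, count = 0;

    // accumulate the depth-space centroid of all player pixels
    for (int i = 0; i < depthPixels.Length; ++i)
    {
        if (depthPixels[i].PlayerIndex > 0)
        {
            sumX += i % depthWidth;
            sumY += i / depthWidth;
            ++count;
        }
    }

    if (count == 0)
    {
        return null; // no player in this frame
    }

    int cx = (int)(sumX / count);
    int cy = (int)(sumY / count);

    var centroid = new DepthImagePoint
    {
        X = cx,
        Y = cy,
        Depth = depthPixels[cx + cy * depthWidth].Depth
    };

    // map the single centroid point into color space
    ColorImagePoint c = mapper.MapDepthPointToColorPoint(
        DepthImageFormat.Resolution320x240Fps30, centroid,
        ColorImageFormat.RgbResolution640x480Fps30);

    // caller should clamp the rectangle to the color frame bounds
    return new Rectangle(c.X - roiSize / 2, c.Y - roiSize / 2, roiSize, roiSize);
}

With Emgu CV, the returned rectangle could then be assigned to the ROI property of the color Image<Bgr, byte> so that later processing only touches that region.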