How can I align the Kinect depth image with the color image?

The images produced by the color and depth sensors on the Kinect are slightly out of alignment. How can I transform them so that they line up?

The key is to call 'Runtime.NuiCamera.GetColorPixelCoordinatesFromDepthPixel'.

Below is an extension method for the Runtime class. It returns a WriteableBitmap object that is updated automatically as new frames come in, so using it is very simple:

    kinect = new Runtime();
    kinect.Initialize(RuntimeOptions.UseColor | RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseDepthAndPlayerIndex);
    kinect.DepthStream.Open(ImageStreamType.Depth, 2, ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);
    kinect.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
    myImageControl.Source = kinect.CreateLivePlayerRenderer();

Here is the code itself:

    using System;
    using System.Diagnostics;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using Microsoft.Research.Kinect.Nui;

    public static class RuntimeExtensions
    {
        public static WriteableBitmap CreateLivePlayerRenderer(this Runtime runtime)
        {
            if (runtime.DepthStream.Width == 0)
                throw new InvalidOperationException("Either open the depth stream before calling this method or use the overload which takes in the resolution that the depth stream will later be opened with.");
            return runtime.CreateLivePlayerRenderer(runtime.DepthStream.Width, runtime.DepthStream.Height);
        }

        public static WriteableBitmap CreateLivePlayerRenderer(this Runtime runtime, int depthWidth, int depthHeight)
        {
            PlanarImage depthImage = new PlanarImage();
            WriteableBitmap target = new WriteableBitmap(depthWidth, depthHeight, 96, 96, PixelFormats.Bgra32, null);
            var depthRect = new System.Windows.Int32Rect(0, 0, depthWidth, depthHeight);

            runtime.DepthFrameReady += (s, e) =>
            {
                depthImage = e.ImageFrame.Image;
                Debug.Assert(depthImage.Height == depthHeight && depthImage.Width == depthWidth);
            };

            runtime.VideoFrameReady += (s, e) =>
            {
                // don't do anything if we don't yet have a depth image
                if (depthImage.Bits == null) return;

                byte[] color = e.ImageFrame.Image.Bits;
                byte[] output = new byte[depthWidth * depthHeight * 4];

                // loop over each pixel in the depth image
                int outputIndex = 0;
                for (int depthY = 0, depthIndex = 0; depthY < depthHeight; depthY++)
                {
                    for (int depthX = 0; depthX < depthWidth; depthX++, depthIndex += 2)
                    {
                        // combine the 2 bytes of depth data representing this pixel
                        short depthValue = (short)(depthImage.Bits[depthIndex] | (depthImage.Bits[depthIndex + 1] << 8));

                        // extract the id of a tracked player from the lower 3 bits of depth data for this pixel
                        int player = depthImage.Bits[depthIndex] & 7;

                        // find a pixel in the color image which matches this coordinate from the depth image
                        int colorX, colorY;
                        runtime.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
                            e.ImageFrame.Resolution,
                            e.ImageFrame.ViewArea,
                            depthX, depthY,          // depth coordinate
                            depthValue,              // depth value
                            out colorX, out colorY); // color coordinate

                        // ensure that the calculated color location is within the bounds of the image
                        colorX = Math.Max(0, Math.Min(colorX, e.ImageFrame.Image.Width - 1));
                        colorY = Math.Max(0, Math.Min(colorY, e.ImageFrame.Image.Height - 1));

                        output[outputIndex++] = color[(4 * (colorX + (colorY * e.ImageFrame.Image.Width))) + 0];
                        output[outputIndex++] = color[(4 * (colorX + (colorY * e.ImageFrame.Image.Width))) + 1];
                        output[outputIndex++] = color[(4 * (colorX + (colorY * e.ImageFrame.Image.Width))) + 2];
                        output[outputIndex++] = player > 0 ? (byte)255 : (byte)0;
                    }
                }
                target.WritePixels(depthRect, output, depthWidth * PixelFormats.Bgra32.BitsPerPixel / 8, 0);
            };

            return target;
        }
    }

One approach would be to assume that the color and depth images contain similar variation, and to cross-correlate the two images (or smaller versions of them); a rough sketch of this appears after the list below.

  • Pre-whiten the images to bring out the underlying variation.
  • Cross-correlate the pre-whitened images, or smaller versions of them.
  • The location of the peak of the cross-correlation will tell you the offset in x and y.
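As a rough illustration of that recipe (not part of the original answer), here is a minimal C# sketch that approximates pre-whitening with a local-mean high-pass filter and then runs a brute-force cross-correlation over a small search window. It assumes both images have already been converted to same-sized grayscale double[,] arrays; the names ImageAlignment, Prewhiten and FindOffset are made up for this example.

    // Illustrative sketch only: estimate the (x, y) offset between two images
    // by pre-whitening them and locating the peak of their cross-correlation.
    public static class ImageAlignment
    {
        // Crude pre-whitening: subtract each pixel's 3x3 local mean so that
        // mostly the high-frequency variation remains.
        public static double[,] Prewhiten(double[,] image)
        {
            int h = image.GetLength(0), w = image.GetLength(1);
            var result = new double[h, w];
            for (int y = 1; y < h - 1; y++)
                for (int x = 1; x < w - 1; x++)
                {
                    double mean = 0;
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++)
                            mean += image[y + dy, x + dx];
                    result[y, x] = image[y, x] - mean / 9.0;
                }
            return result;
        }

        // Slide image b over image a within +/- maxShift pixels and return the
        // shift whose overlapping region gives the highest mean correlation.
        public static (int shiftX, int shiftY) FindOffset(double[,] a, double[,] b, int maxShift)
        {
            int h = a.GetLength(0), w = a.GetLength(1); // both images assumed h x w
            double best = double.NegativeInfinity;
            var bestShift = (shiftX: 0, shiftY: 0);

            for (int sy = -maxShift; sy <= maxShift; sy++)
                for (int sx = -maxShift; sx <= maxShift; sx++)
                {
                    double sum = 0;
                    int count = 0;
                    for (int y = 0; y < h; y++)
                    {
                        int by = y + sy;
                        if (by < 0 || by >= h) continue;
                        for (int x = 0; x < w; x++)
                        {
                            int bx = x + sx;
                            if (bx < 0 || bx >= w) continue;
                            sum += a[y, x] * b[by, bx]; // correlation over the overlap
                            count++;
                        }
                    }
                    if (count == 0) continue;
                    double score = sum / count; // normalize by the size of the overlap
                    if (score > best) { best = score; bestShift = (sx, sy); }
                }
            return bestShift;
        }
    }

For example, FindOffset(Prewhiten(depthGray), Prewhiten(colorGray), 16) would return the shift within ±16 pixels that best lines the two images up. For larger images or search ranges, an FFT-based cross-correlation would be much faster than this brute-force loop.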