Tuesday, November 19, 2013

Stereo camera with block matching

So I managed to get my stereo camera to see some depth and got the first steps working towards generating a point cloud of the scene.
Top row: left and right images (rectified). Lower left: the disparity image computed with the block matching algorithm.
The new camera uses adjustable screws; see the image below:
New camera setup with adjustable screws
I used the code/app from http://blog.martinperis.com/2011/01/opencv-stereo-camera-calibration.html to calibrate the cameras and work out their matrices / rectification values for each feed. Once the feeds are rectified (another way to describe this would be to normalize the images so that they can be interpreted correctly), the block matching algorithm can be used to determine the disparity map. The next step is to calculate the actual distance of each point from the camera, and then we can generate a point cloud. The algorithm only runs at about 5 fps right now on an HD frame on my NVS 4200. I checked with NVIDIA Nsight profiling and the block matching kernel takes 188 ms, which is the reason for the poor performance. See the image below.

Profile using NVIDIA nsight of stereo block matching
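Roughly, that pipeline in code looks something like the sketch below, using OpenCV 2.4's gpu module. This is a simplified sketch and not the exact code in the repo; the file names, the calib.yml with the saved Q matrix and the block matcher settings are just placeholders.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // Rectified grayscale frames (file names are placeholders).
    cv::Mat left  = cv::imread("left_rectified.png",  CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat right = cv::imread("right_rectified.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Q is the 4x4 disparity-to-depth matrix that stereoRectify() produces
    // during calibration (stored here in calib.yml as an example).
    cv::Mat Q, Qf;
    cv::FileStorage fs("calib.yml", cv::FileStorage::READ);
    fs["Q"] >> Q;
    Q.convertTo(Qf, CV_32F);

    // Block matching on the GPU: 64 disparity levels, 19x19 window.
    cv::gpu::GpuMat d_left(left), d_right(right), d_disp;
    cv::gpu::StereoBM_GPU bm(cv::gpu::StereoBM_GPU::BASIC_PRESET, 64, 19);
    bm(d_left, d_right, d_disp);

    // Reproject each disparity value to an (X, Y, Z) point in camera space,
    // which is basically one point-cloud point per pixel.
    cv::gpu::GpuMat d_xyz;
    cv::gpu::reprojectImageTo3D(d_disp, d_xyz, Qf, 3);

    cv::Mat disp, xyz;
    d_disp.download(disp);
    d_xyz.download(xyz);

    cv::imshow("disparity", disp);
    cv::waitKey();
    return 0;
}

The reprojection step is the "actual distance of each point" part mentioned above, so most of what is left for the point cloud is dumping those XYZ values into a file or a viewer.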
As always, the code is up at https://github.com/arcanon/raspbot. I know, it won't really compile yet, which I hope to fix. Next I want to generate the point cloud, and then I will make everything easy to compile. Well, as easy as it can get, because it's using a whole bunch of complicated libraries...

Sunday, November 10, 2013

Raspberry Pi Stereo Camera

Stereo camera with 2 Raspberry Pis
So I made my first stereo camera this weekend with 2 Raspberry Pis. It actually worked out pretty easily. The stereo angle of the cameras is not exact and is only controlled with pieces of paper and elastic bands. The blue bands in the middle and the paper on the outer edges tilt the cameras a little more inwards. Here is an example anaglyph stereo image (you will need red/blue stereo glasses to view it properly):

It would be best to have screws which you can use to adjust the angles precisely. There are small holes on the cameras that would allow such screws to be attached, so it's just a matter of finding the right adjustable screws. Although, thinking about it now, that should not be a big deal. I know the stereo alignment is a bit off, which should be fixed, but my eyes were able to find the right focus and you can see the 3D effect quite nicely. The code is up at https://github.com/arcanon/raspbot. It won't compile out of the box, but have a look at the video reader for the capture loop/anaglyph composition.

The CUDA kernel that composites the anaglyph looks like this:

// Composites a red/blue anaglyph with one thread per pixel.
// imageLeft/imageRight are single-channel (grayscale) inputs with a row pitch
// of pitchInputs bytes; imageOut is a 4-byte-per-pixel image with a row pitch
// of pitchOutput bytes.
__global__ void anaglyph_dev(char* imageLeft, char* imageRight, char* imageOut,
                             int pitchInputs, int pitchOutput, int width, int height)
{
    // Pixel coordinates for this thread.
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Skip threads that fall outside the frame.
    if (x >= width || y >= height)
        return;

    int linearPosInputs = y*pitchInputs + x;
    int linearPosOutput = y*pitchOutput + x*4;

    // Red channel from the left eye, green left at zero,
    // blue channel from the right eye (red/blue anaglyph).
    imageOut[linearPosOutput]   = imageLeft[linearPosInputs];   // red
    imageOut[linearPosOutput+1] = 0;                            // green
    imageOut[linearPosOutput+2] = imageRight[linearPosInputs];  // blue
}
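
The host side launch looks roughly like this. It is only a sketch, not the exact host code from the repo; the 16x16 block size and the compositeAnaglyph helper name are just examples, and the device pointers are assumed to come from pitched allocations (e.g. cudaMallocPitch) with the pitches given in bytes.

#include <cuda_runtime.h>

// Launches anaglyph_dev with one thread per pixel.
void compositeAnaglyph(char* d_left, char* d_right, char* d_out,
                       int pitchInputs, int pitchOutput, int width, int height)
{
    // Round the grid up so the whole frame is covered.
    dim3 block(16, 16);
    dim3 grid((width  + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);

    anaglyph_dev<<<grid, block>>>(d_left, d_right, d_out,
                                  pitchInputs, pitchOutput, width, height);
    cudaDeviceSynchronize();
}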