Archive for the ‘code snippets’ Category

Fingerprint Extraction for Matching & Tracking


There have been several questions, requests and discussions about the proper way to add images to the image matching and tracking pool with our SDKs. Here is a brief summary of the strengths and weaknesses of each option:

1.) Add normal images from the device resources (locally) or from URL (remote):

  •  API Functions:

ANDROID

Local

addImage (int resourceId)

addImage (int resourceId, int uniqueId)

addImage (Bitmap bmpImage, int uniqueId)

addImage (Bitmap bmpImage)

Remote

addImage (String imageURL)

addImage (String imageURL, int uniqueId)

IOS

Local

addImage:

addImage:withUniqeID:

Remote

addImageFromUrl:

addImageFromUrl:withUniqeID:

  • Strengths:

- Images can be added “on the fly”

  • Weakness:

-The process of feature extraction is done locally on the device and can take some time when there are a lot of images in the pool (80+).
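For the Android variants above, the difference between the two `addImage` overloads is whether you let the SDK assign the unique ID or supply your own. The sketch below illustrates that with a hypothetical stand-in class (`ImageTrackerStub` is not the real SDK class; its return values and auto-ID behaviour are assumptions for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the SDK's tracker; only the two
// addImage(int, ...) overloads listed above are sketched here.
class ImageTrackerStub {
    private final Map<Integer, Integer> pool = new HashMap<>(); // uniqueId -> resourceId
    private int nextId = 0;

    // addImage(int resourceId): the SDK picks the unique id (assumed behaviour)
    int addImage(int resourceId) {
        int id = nextId++;
        pool.put(id, resourceId);
        return id;
    }

    // addImage(int resourceId, int uniqueId): the caller controls the id
    int addImage(int resourceId, int uniqueId) {
        pool.put(uniqueId, resourceId);
        return uniqueId;
    }

    int poolSize() { return pool.size(); }
}

public class AddImageDemo {
    public static void main(String[] args) {
        ImageTrackerStub tracker = new ImageTrackerStub();
        int autoId = tracker.addImage(0x7f020001); // e.g. R.drawable.marker1
        tracker.addImage(0x7f020002, 42);          // caller-chosen unique id
        System.out.println(autoId + " " + tracker.poolSize());
    }
}
```

Caller-supplied unique IDs are handy when you need to map a match result back to your own database keys instead of whatever the SDK assigned.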

2.) Add pre-trained images (.dat files).

(more…)

iPhone video output set to YUV or BGRA: CVImageBufferRef to IplImage [Code snippet]

We are going to publish a series of posts with some useful code snippets for iOS and Android.

Any suggestions or contributions are welcome ;)

Here we show how to get an IplImage from a CVImageBufferRef (e.g. obtained via CMSampleBufferGetImageBuffer) with the video output set to one of:

kCVPixelFormatType_32BGRA or

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or

kCVPixelFormatType_420YpCbCr8BiPlanarFullRange

Here we go:

- (IplImage *)createIplImageFromBuffer:(CVImageBufferRef)imageBuffer
                          withChannels:(int)channels {
    // From CMSampleBufferGetImageBuffer:
    // CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    IplImage *iplimage = NULL;
    if (imageBuffer) {
        // lock the buffer before reading its memory
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        size_t bufferWidth  = CVPixelBufferGetWidth(imageBuffer);
        size_t bufferHeight = CVPixelBufferGetHeight(imageBuffer);
        // from BGRA - 4 channels (colour image)
        // from YUV  - 1 channel  (grey image)
        iplimage = cvCreateImage(cvSize((int)bufferWidth, (int)bufferHeight),
                                 IPL_DEPTH_8U, channels);
        if (channels == 4) {
            // BGRA: a single interleaved plane
            vImage_Buffer src;
            src.data     = CVPixelBufferGetBaseAddress(imageBuffer);
            src.width    = bufferWidth;
            src.height   = bufferHeight;
            src.rowBytes = CVPixelBufferGetBytesPerRow(imageBuffer);
            vImage_Buffer dest;
            dest.data     = iplimage->imageData;
            dest.width    = bufferWidth;
            dest.height   = bufferHeight;
            // use the IplImage's own stride; it can differ from the
            // pixel buffer's bytes-per-row when the buffer rows are padded
            dest.rowBytes = iplimage->widthStep;
            // swap the pixel channels from BGRA to RGBA: out[i] = in[map[i]]
            const uint8_t map[4] = { 2, 1, 0, 3 };
            vImagePermuteChannels_ARGB8888(&src, &dest, map, kvImageNoFlags);
        } else {
            // biplanar YUV: plane 0 is the Y (luma) plane, i.e. the grey image
            uint8_t *yPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
            size_t yBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
            // copy row by row instead of aliasing the buffer's memory,
            // which becomes invalid once the buffer is unlocked
            for (size_t row = 0; row < bufferHeight; row++) {
                memcpy(iplimage->imageData + row * iplimage->widthStep,
                       yPlane + row * yBytesPerRow,
                       bufferWidth);
            }
        }
        // unlock buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    } else {
        NSLog(@"No image buffer!");
    }
    return iplimage;
}
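To see what the `{ 2, 1, 0, 3 }` map in `vImagePermuteChannels_ARGB8888` actually does, here is the same per-pixel channel reordering written out by hand (a minimal, platform-independent sketch; the real vImage call does this with SIMD over the whole buffer):

```java
public class PermuteDemo {
    // Reorder the 4 channels of every pixel: out[i] = in[map[i]]
    static byte[] permuteChannels(byte[] src, int[] map) {
        byte[] dst = new byte[src.length];
        for (int p = 0; p < src.length; p += 4) {   // one pixel = 4 bytes
            for (int c = 0; c < 4; c++) {
                dst[p + c] = src[p + map[c]];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        // One BGRA pixel: B=10, G=20, R=30, A=255
        byte[] bgra = { 10, 20, 30, (byte) 255 };
        int[] map = { 2, 1, 0, 3 };                 // same map as in the snippet above
        byte[] rgba = permuteChannels(bgra, map);
        // the pixel is now R=30, G=20, B=10, A=255
        System.out.println(rgba[0] + " " + rgba[1] + " " + rgba[2]);
    }
}
```

In other words, output channel 0 takes input channel 2 (R), channel 2 takes input channel 0 (B), and G and A stay in place, which turns BGRA into RGBA.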