Posts Tagged ‘Image Matching’

Featuring MuseARt

Posted on: 2 Comments

Today we feature MuseARt, an amazing app developed by Salvador Sanchez that I recommend to all of you.

MuseARt is the first application that combines art with the most advanced technologies of AUGMENTED REALITY, TEXT TO SPEECH and IMAGE RECOGNITION in REAL TIME.

It includes dozens of artworks from The Prado Museum, The Thyssen-Bornemisza Museum and The Reina Sofia Museum in Madrid.

MuseARt is an essential application for art lovers and for those who want to have their own audioguide when visiting a museum.

It is a collection of artworks organized by museum, exhibition, gallery and the specific collections curated by the world’s major museums.

But that’s not all: MuseARt has an AUGMENTED REALITY interface that identifies each artwork directly through the device’s camera and displays, in real time, all the information associated with it, including the audioguide in English and Spanish. The audioguide is produced with text-to-speech technology offering high-quality interpretation and diction, powered by the most advanced text recognition and human intonation available on the market.

The image identification system built into the application is a significant advance over existing audioguides: there is no need to manually enter any identifier, since simply pointing the camera at the artwork achieves the identification, so visitors are not distracted by entering data into the device. This image identification system is the result of extensive research in Augmented Reality, making it more human and closer to the user.

The application is constantly evolving, including regular updates of artworks, exhibitions, new museums around the world and their respective audioguides.

Each artwork has a high-definition version that you can download for later reference, even without a network connection.

Fingerprint Extraction for Matching & Tracking

Posted on: No Comments

There have been several questions, requests and discussions about the proper way to add images to the image matching and tracking pool with our SDKs. Here is a brief summary of the strengths and weaknesses of each option:

1.) Add normal images from the device resources (locally) or from a URL (remote):

  •  API Functions:

ANDROID:

Local

addImage (int resourceId)

addImage (int resourceId, int uniqueId)

addImage (Bitmap bmpImage, int uniqueId)

addImage (Bitmap bmpImage)

Remote

addImage (String imageURL)

addImage (String imageURL, int uniqueId)

iOS:

Local

addImage:

addImage:withUniqeID:

Remote

addImageFromUrl:

addImageFromUrl:withUniqeID:

  • Strengths:

- Images can be added “on the fly”

  • Weaknesses:

- The process of feature extraction is done locally on the device and can take some time when there are a lot of images in the pool (80+).
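As a rough sketch of how option 1 might look on Android, the snippet below fills the pool using the addImage overloads listed above. The ImageMatcher class name, the resource identifier and the URL are placeholders of mine, not the SDK’s actual names:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class PoolSetup {

    // "ImageMatcher" stands in for whichever SDK class exposes the addImage
    // overloads listed above; the real class name may differ.
    public void fillPool(ImageMatcher matcher) {
        // Local: feature extraction runs on the device, so adding many
        // images (80+) this way can take a while.
        matcher.addImage(R.drawable.artwork_one, 1001);           // resourceId + uniqueId
        Bitmap bmp = BitmapFactory.decodeFile("/sdcard/artwork.jpg");
        matcher.addImage(bmp, 1002);                              // Bitmap + uniqueId

        // Remote: the image is downloaded first; features are still extracted locally.
        matcher.addImage("http://example.com/artwork.jpg", 1003); // imageURL + uniqueId
    }
}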

2.) Add pre-trained images (.dat files).

(more…)

Image Local Feature Descriptors in Augmented Reality

Posted on: 1 Comment

As computer vision experts, we deal almost daily with the information “hidden” in images in order to turn it into information that is “visible” and useful for our algorithms. In this entry I want to talk about the image matching process. Image matching is the technique used in Computer Vision to find enough patches or strong features in two or more images to be able to state that one of these images is contained in the other, or that both are the same image. Several approaches have been proposed in the literature for this purpose, but we are going to focus on local feature approaches.

Local feature representations of images are widely used for matching and recognition in the field of computer vision and, lately, also in Augmented Reality applications, where augmented information is added on top of the real world. Robust feature descriptors such as SIFT, SURF, FAST, Harris-Affine or GLOH (to name some examples) have become a core component in this kind of application. The main idea is to first detect features and then compute a set of descriptors for those features. One important thing to keep in mind is that these methods will later be ported to mobile devices, where they can lead to very heavy processing that does not reach real-time rates. Thus, several techniques have lately been developed so that the chosen feature detection and descriptor extraction methods can be implemented on mobile devices with real-time performance. But that is another step I do not want to focus on in this entry.
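To make the detect-then-describe-then-match pipeline concrete, here is a minimal sketch using OpenCV’s Java bindings with ORB features. The library choice, the parameters and the file names are my own assumptions; the post itself does not tie the pipeline to any particular implementation:

import org.opencv.core.*;
import org.opencv.features2d.*;
import org.opencv.imgcodecs.Imgcodecs;

import java.util.ArrayList;
import java.util.List;

public class MatchSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat imgA = Imgcodecs.imread("reference.jpg", Imgcodecs.IMREAD_GRAYSCALE);
        Mat imgB = Imgcodecs.imread("query.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // 1) Detect local features and 2) compute a descriptor for each of them.
        ORB orb = ORB.create(1000);
        MatOfKeyPoint kpA = new MatOfKeyPoint(), kpB = new MatOfKeyPoint();
        Mat descA = new Mat(), descB = new Mat();
        orb.detectAndCompute(imgA, new Mat(), kpA, descA);
        orb.detectAndCompute(imgB, new Mat(), kpB, descB);

        // 3) Match descriptors and keep only unambiguous matches (Lowe's ratio test).
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        List<MatOfDMatch> knn = new ArrayList<>();
        matcher.knnMatch(descA, descB, knn, 2);

        int good = 0;
        for (MatOfDMatch pair : knn) {
            DMatch[] m = pair.toArray();
            if (m.length == 2 && m[0].distance < 0.75f * m[1].distance) good++;
        }
        // Enough good matches suggests one image is contained in (or equal to) the other.
        System.out.println("Good matches: " + good);
    }
}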

Local Features

But what is a local feature? A local feature is an image pattern which differs from its immediate neighborhood. This difference is usually associated with a change in an image property, the most commonly considered being texture, intensity and color.
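As a tiny illustration of this definition (again with OpenCV’s Java bindings, my own choice of library), the FAST detector below marks a pixel as a feature only when a circle of its immediate neighbors is sufficiently brighter or darker than the pixel itself, i.e. exactly when the pattern differs from its neighborhood in intensity:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.FastFeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;

public class LocalFeatureSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat img = Imgcodecs.imread("painting.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // FAST fires where a ring of neighboring pixels is all brighter or all
        // darker than the center by a threshold: a local intensity difference.
        FastFeatureDetector fast = FastFeatureDetector.create(40);
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        fast.detect(img, keypoints);

        System.out.println("Local features found: " + keypoints.toArray().length);
    }
}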

(more…)

Box Office Matcher. Recognize your movie posters.

Posted on: 1 Comment

After launching the beta version of the Augmented Reality Image Matching SDK, the ARLab mobile team developed this simple application with it. The application recognizes, in real time, any of the posters that have been previously downloaded and stored in the device’s database and, once this happens, it launches a screen where the user can choose among several actions, such as:

  • Watch the trailer.
  • See the movie’s info, like the synopsis or rankings.
  • Visit the movie’s official website.
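As a rough sketch of how an app like this might react to a recognized poster, the snippet below shows the chooser step. The onPosterMatched callback name and the URLs are illustrative assumptions, not the SDK’s actual API:

import android.app.Activity;
import android.app.AlertDialog;
import android.content.DialogInterface;
import android.content.Intent;
import android.net.Uri;

public class PosterActions {

    // Hypothetical callback: called with the uniqueId the poster was given when it
    // was stored in the device's database; the SDK's real callback may differ.
    public void onPosterMatched(final Activity activity, int uniqueId) {
        // Illustrative URLs; the real app would look these up for the matched movie.
        final String[] urls = {
                "http://example.com/movies/" + uniqueId + "/trailer", // watch the trailer
                "http://example.com/movies/" + uniqueId + "/info",    // synopsis, rankings
                "http://example.com/movies/" + uniqueId               // official website
        };

        new AlertDialog.Builder(activity)
                .setTitle("Poster recognized")
                .setItems(new CharSequence[]{"Watch the trailer", "Movie info", "Official website"},
                        new DialogInterface.OnClickListener() {
                            @Override
                            public void onClick(DialogInterface dialog, int which) {
                                activity.startActivity(
                                        new Intent(Intent.ACTION_VIEW, Uri.parse(urls[which])));
                            }
                        })
                .show();
    }
}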

 

(more…)