Posts Tagged ‘Image Tracking’

ARLab Image Tracking SDK


In this post we want to present the product which has been the most challenging for us: image tracking. Unlike older Augmented Reality systems that use QR codes or (typically black-and-white) markers to follow the target, markerless systems use only the image itself. This means there is no need to prepare the image in advance, or to introduce unsightly marks into the environment, in order to follow it.

The aim of tracking the target is to know where it is in the scene and what its pose, or perspective, is relative to the viewer. This camera pose estimation of the target is very important if we want to achieve a successful augmented experience, because Augmented Reality systems aim to superimpose additional scene data, such as 3D objects or video, onto the video stream of the real camera being used.

With the image tracking SDK, once more, we bring the idea of making Augmented Reality affordable for everyone. With this new SDK you will be able to track almost any image you want. It supports thousands of images, organized in pools of 50 or even 60 images. This technology allows you to superimpose any information over the tracked target in the device's video stream.

The image tracking engine works as follows: it recognizes the image to be tracked and tracks its position at all times, providing useful information about it. The information provided by the engine includes: the image that has been recognized, its current location on the screen, and the camera pose estimation (projection matrix), so you can integrate it with other SDKs, such as our 3D Render engine or third-party 3D engines. You will also be able to add or remove images from the pool at execution time, and load images either from the device's resources or from URLs. Finally, you can take advantage of our built-in camera view, or create your own view wherever it is most convenient for you.
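To illustrate how the camera pose estimation delivered by an engine like this is typically consumed, here is a minimal, self-contained sketch (all names are illustrative, not the real SDK API): it applies a 4x4 projection matrix to a 3D point and performs the perspective divide to obtain 2D coordinates for overlaying content.

```java
// Minimal sketch of consuming a 4x4 projection (pose) matrix such as the one
// a tracking engine delivers. All names here are illustrative, not SDK API.
public class PoseProjection {

    // Multiplies a row-major 4x4 matrix by (x, y, z, 1) and performs the
    // perspective divide, yielding normalized 2D coordinates.
    public static double[] project(double[][] m, double x, double y, double z) {
        double[] p = new double[4];
        double[] v = {x, y, z, 1.0};
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                p[row] += m[row][col] * v[col];
            }
        }
        return new double[] {p[0] / p[3], p[1] / p[3]}; // perspective divide
    }

    public static void main(String[] args) {
        // With an identity "pose", a point projects onto itself
        // (before any viewport mapping).
        double[][] identity = {
            {1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}
        };
        double[] s = project(identity, 0.5, -0.25, 1.0);
        System.out.println(s[0] + ", " + s[1]); // 0.5, -0.25
    }
}
```

In a real integration the matrix would come from the tracking callback each frame and be handed to the 3D engine; the math above is the same either way.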

Augmented Reality: linking the augmented and real worlds.


As its definition states, augmented reality is a technology that augments the view of a real-world environment, overlaying extra information on the real objects present in that world and thereby enhancing the user's current perception of reality. This extra information is "augmented" by computer-generated sensory input, such as graphics or data coming from the device's GPS. It is important to note that unlike virtual reality, where computer-generated data replaces the elements present in the real world, in augmented reality this data is added to the real world in order to improve the user's perception of the environment.

Modern augmented reality systems can be integrated into smartphones and other mobile devices. To make the creation of augmented elements in the real world possible, devices can use one or more of the following technologies: optical sensors, accelerometers and gyroscopes, GPS, solid-state compasses, RFID, and wireless sensors.

Depending on what the Augmented Reality system shows and how it shows it, one or more of the aforementioned technologies may be used. One example of linking augmented components to the real world is an augmented geolocated view (Picture 1). In this type of geolocated view, the augmented reality system uses the GPS, accelerometers, gyroscopes, and compass to place points of interest (POIs) according to the user's location and heading. Once the data received from this hardware has been analyzed, the output is used to place POIs on the screen, together with very useful information for the user, such as the distance to each POI, directions to it, or what it represents.
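As a concrete illustration of the geolocated case, the distance shown next to each POI can be computed from two GPS coordinates with the haversine formula. A minimal self-contained sketch (plain Java, no AR SDK involved; the city coordinates are just example values):

```java
// Great-circle distance between the user and a POI, computed from GPS
// coordinates with the haversine formula. Inputs in decimal degrees, result in km.
public class PoiDistance {

    static final double EARTH_RADIUS_KM = 6371.0;

    public static double distanceKm(double lat1, double lon1,
                                    double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_KM * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    public static void main(String[] args) {
        // Madrid (40.4168, -3.7038) to Barcelona (41.3874, 2.1686): ~505 km.
        System.out.printf("%.0f km%n",
                distanceKm(40.4168, -3.7038, 41.3874, 2.1686));
    }
}
```

A geolocated AR view combines this distance with the device heading (from the compass and gyroscopes) to decide where on the screen each POI should be drawn.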

      Picture 1. Example of an augmented reality geolocated view.

Another widespread augmented reality system is the one that uses optical sensors to "catch" what is happening in the environment, analyzes the input information, and overlays the augmented information on the real physical world. As augmented reality becomes more popular, these AR systems, which use the camera as the "door" for input data, are playing an important role in the merging of both worlds.

Picture 2. Example of augmented reality image tracking.


Fingerprint Extraction for Matching & Tracking


There have been several questions, requests, and discussions regarding the proper way to add images to the image matching and tracking pool with our SDKs. Here is a brief summary of the strengths and weaknesses of each option:

1.) Add normal images from the device resources (locally) or from URL (remote):

  •  API Functions:

ANDROID

Local

addImage (int resourceId)

addImage (int resourceId, int uniqueId)

addImage (Bitmap bmpImage, int uniqueId)

addImage (Bitmap bmpImage)

Remote

addImage (String imageURL)

addImage (String imageURL, int uniqueId)

iOS

Local

addImage:

addImage:withUniqeID:

Remote

addImageFromUrl:

addImageFromUrl:withUniqeID:

  • Strengths:

- Images can be added “on the fly”

  • Weaknesses:

- The process of feature extraction is done locally on the device and can take some time when there are many images in the pool (80+).
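To make the calling pattern above concrete, here is a self-contained mock (not the real SDK, which performs feature extraction internally; this sketch only tracks ids): a pool that accepts local resource ids or remote URLs, with an optional uniqueId that is auto-assigned when omitted.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative mock of an image pool with overloads shaped like the ones
// listed above. The real SDK extracts features here; this only stores ids.
public class ImagePoolMock {
    private final Map<Integer, String> pool = new LinkedHashMap<>();
    private int nextId = 0;

    // Local variant: image referenced by a resource id.
    public int addImage(int resourceId) {
        return addImage(resourceId, nextId++);
    }

    public int addImage(int resourceId, int uniqueId) {
        pool.put(uniqueId, "res:" + resourceId);
        return uniqueId;
    }

    // Remote variant: image fetched from a URL.
    public int addImage(String imageUrl) {
        return addImage(imageUrl, nextId++);
    }

    public int addImage(String imageUrl, int uniqueId) {
        pool.put(uniqueId, imageUrl);
        return uniqueId;
    }

    // Images can also be removed from the pool at execution time.
    public void removeImage(int uniqueId) {
        pool.remove(uniqueId);
    }

    public int size() {
        return pool.size();
    }
}
```

The uniqueId is what the tracking callback later reports, so keeping your own mapping from uniqueId to application content is the usual pattern.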

2.) Add pre-trained images (.dat files).


Stabilization. Speed without accuracy… it’s worth nothing.


In previous entries we talked about speeding up processes on mobile devices through assembler optimization, and its importance when porting code developed on a PC to mobile devices. As we saw, this optimization is crucial so that the algorithm can run at a real-time rate on such devices. But speed alone is worth nothing if we do not get an accurate result. We also need to stabilize the image and the 3D object, or whatever model we wish to overlay on the target.

We need to take into consideration that when we are pointing at the image and holding still, the overlaid 3D model should not move at all; but when a sudden movement occurs, the overlaid model should react immediately. If the overlaid model moves while the image is static, the user will not experience Augmented Reality as intended. On the other hand, if the user moves the camera but the overlaid model does not follow the target in real time, the user will perceive a delay in the target tracking.
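One common way to reconcile these two requirements (rock-steady when static, instant reaction to fast motion) is an adaptive low-pass filter on the estimated pose, where the blend factor grows with the magnitude of the frame-to-frame change. This is only a sketch of the idea with arbitrary example thresholds, not the filter our engine actually uses:

```java
// Adaptive exponential smoothing of a tracked screen position: heavy smoothing
// for small (jitter-sized) changes, no lag for large (real) motion.
// Thresholds are arbitrary example values; the idea is what matters.
public class PoseStabilizer {
    private double x, y;              // last filtered position
    private boolean initialized = false;

    private static final double JITTER = 2.0; // px: below this, mostly noise
    private static final double FAST = 20.0;  // px: above this, follow fully

    public double[] update(double rawX, double rawY) {
        if (!initialized) {
            x = rawX; y = rawY; initialized = true;
            return new double[] {x, y};
        }
        double dist = Math.hypot(rawX - x, rawY - y);
        // Blend factor ramps from 0.05 (almost frozen) to 1.0 (no lag).
        double alpha;
        if (dist <= JITTER) {
            alpha = 0.05;
        } else if (dist >= FAST) {
            alpha = 1.0;
        } else {
            alpha = 0.05 + 0.95 * (dist - JITTER) / (FAST - JITTER);
        }
        x += alpha * (rawX - x);
        y += alpha * (rawY - y);
        return new double[] {x, y};
    }
}
```

With this scheme, sub-pixel sensor noise is almost entirely absorbed while a real camera movement is reflected in the very next frame.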


AUGMENTED REALITY AND 3D


Augmented reality wouldn’t be the same without an extensive use of 3D. After all, if we’re tracking an image to see something virtual on top of it, we need a 3D model that can be displayed in that virtual 3D world that exists over the real one that we see through our device’s camera.

AR is based on providing information over a real image that we get from a camera. But the interesting point is what we show, and how we show it. We want AR to be an immersive experience, and as we live in a three-dimensional world, what can be more immersive than showing 3D information over it?

Of course, hardware is a limit for this task. Realistic and complex 3D requires computing power. Many desktop computers today are able to play the latest generation of videogames, which use very realistic graphics including real-time shadows, depth of field, millions of polygons, visual effects, distortions… But if we take one element of those games and put it over a real-world image, we can see that it is not so realistic at all! The reason is context: as long as we see that object among a whole world of objects represented in the same way, it gives us a realistic experience, but when we change its context to the real world, it loses that realism.

Note that we’re talking about realtime 3D graphics. In movies we’re used to see even realistic characters that perfectly fit into the real world, but that images are not realtime, a long rendering process and then manual tweaking by artists has been done to make it fit in that given scene of the movie that won’t change anymore. But realtime graphics (like in videogames) are another story. Everything must be calculated now, as it’s interactive and it depends on our actions, that’s why it requires some graphic power to represent realtime 3D.

Evolution of mobile device graphics quality over recent years


Markerless Image Tracking: recursive tracking techniques


As described in previous entries, using markers to perform tracking presents more disadvantages than using the object itself as the target to be tracked. Among those disadvantages are the need to print the marker, and the fact that tracking can fail due to occlusions. Markers are also invasive to the environment; to use a marketing expression, they "do not keep the packaging clean".

For these reasons, many researchers and companies have focused on developing markerless tracking systems instead of marker-based ones. The former will be the subject of this entry.

    Figure 1. Online Monocular techniques scheme.

Techniques developed for online monocular markerless augmented reality systems can be classified into two sub-branches: model-based and Structure from Motion (SfM)-based. The difference is that the former requires prior knowledge of the real world before tracking is performed, whereas in the latter this knowledge is acquired during tracking. Within these two sub-branches, two different approaches can be considered according to the nature of the tracking. The first, known as recursive tracking, uses the previously known pose to estimate the current one. The second, called tracking by detection, calculates the pose estimation without any previous knowledge or estimate, which makes it better at recovering from failures.

Furthermore, the model-based approaches that use recursive tracking can be classified into three branches or categories: edge-based, optical-flow-based, and texture-based. On the other hand, the approaches covered by tracking-by-detection techniques are edge-based and texture-based. Although techniques based on tracking by detection may seem the better option, several things have to be taken into consideration before choosing either of them, in order to select the option that fits our requirements, such as frame rate, accuracy, or even the object being tracked.
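The practical difference between the two approaches can be sketched with a toy one-dimensional example (purely illustrative, not a real tracker): a recursive tracker refines the previous pose and only converges when the inter-frame motion is small, while tracking by detection re-estimates from scratch every frame and can therefore recover after a failure.

```java
// Toy 1-D illustration of recursive tracking vs tracking by detection.
// "Pose" is a single number; the refinement only converges when the previous
// estimate is already close to the true pose (its basin of convergence).
public class TrackingToy {

    static final double BASIN = 5.0; // refinement only works within this range

    // Recursive tracking: refine the previous pose. Returns NaN (lost) when
    // the true pose has jumped outside the refinement's basin of convergence,
    // as happens after a sudden camera movement.
    public static double recursiveStep(double previousPose, double truePose) {
        if (Math.abs(truePose - previousPose) > BASIN) {
            return Double.NaN;        // tracking lost
        }
        return previousPose + 0.9 * (truePose - previousPose);
    }

    // Tracking by detection: estimate the pose from scratch every frame,
    // ignoring any previous estimate. It recovers from failures but is
    // typically more expensive per frame.
    public static double detectionStep(double truePose) {
        return truePose;
    }
}
```

Real systems often combine both: detection to (re)initialize, recursive tracking for the cheap frame-to-frame updates in between.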
