How to solve the Image Distortion Problem

      Theoretically, it is possible to define a lens that will not introduce distortions. In practice, however, no lens is perfect. This is mainly due to manufacturing factors; it is much easier to make a “spherical” lens than to make a more mathematically ideal “parabolic” lens. It is also difficult to mechanically align the lens and imager exactly. Here we describe the two main lens distortions and how to model them. Radial distortions arise as a result of the shape of the lens, whereas tangential distortions arise from the assembly process of the camera as a whole.

      We start with the radial distortion. The lenses of real cameras often noticeably distort the location of pixels near the edges of the imager. This bulging phenomenon is the source of the “barrel” or “fish-eye” effect. Figure 1 gives some intuition as to why radial distortion occurs. With some lenses, rays farther from the center of the lens are bent more than those closer in. A typical inexpensive lens is, in effect, stronger than it ought to be as you get farther from the center. Barrel distortion is particularly noticeable in cheap web cameras but less apparent in high-end cameras, where a lot of effort is put into fancy lens systems that minimize radial distortion.

Figure 1.

        For radial distortions, the distortion is 0 at the (optical) center of the imager and increases as we move toward the periphery. In practice, this distortion is small and can be characterized by the first few terms of a Taylor series expansion around r = 0. For cheaper web cameras, we generally use the first two such terms; the first of which is conventionally called k1 and the second k2. For highly distorted cameras such as fish-eye lenses we can use a third radial distortion term k3.

In general, the radial location of a point on the image will be rescaled according to the following equations:

xcorrected = x (1 + k1 r^2 + k2 r^4 + k3 r^6)
ycorrected = y (1 + k1 r^2 + k2 r^4 + k3 r^6)

Here, (x, y) is the original location (on the imager) of the distorted point, r is its distance from the optical center, and (xcorrected, ycorrected) is the new location as a result of the correction.

The second largest common distortion is known as tangential distortion. This distortion is due to manufacturing defects resulting from the lens not being exactly parallel to the imaging plane.

Tangential distortion is minimally characterized by two additional parameters, p1 and p2, such that:

xcorrected = x + [2 p1 x y + p2 (r^2 + 2 x^2)]
ycorrected = y + [p1 (r^2 + 2 y^2) + 2 p2 x y]
Thus, a total of five (or, in some cases, six) distortion coefficients are required.

Distortion example

      To illustrate these theoretical points, I am going to show a couple of images taken by the same camera. The first of these images shows the picture with the distortion effect, whereas the second one shows the result of applying “undistortion” functions.

Figure 2. The image on the left shows a distorted image, while the right image shows an image in which the distortion has been corrected through the methods explained.

These “undistort” functions are implemented in both Matlab and C++. For the C++ implementation, we can use the following functions, which belong to the OpenCV library:

// Undistort images
void cvInitUndistortMap(
    const CvMat* intrinsic_matrix,
    const CvMat* distortion_coeffs,
    CvArr*       mapx,
    CvArr*       mapy
);

void cvUndistort2(
    const CvArr* src,
    CvArr*       dst,
    const CvMat* intrinsic_matrix,
    const CvMat* distortion_coeffs
);

// Undistort a list of 2D points only
void cvUndistortPoints(
    const CvMat* _src,
    CvMat*       dst,
    const CvMat* intrinsic_matrix,
    const CvMat* distortion_coeffs,
    const CvMat* R  = 0,
    const CvMat* Mr = 0
);

      The function cvInitUndistortMap() computes the distortion map, which relates each point in the image to the location where that point is mapped. The first two arguments are the camera intrinsic matrix and the distortion coefficients, both in the expected form as received from cvCalibrateCamera2(). The resulting distortion map is represented by two separate 32-bit, single-channel arrays: the first gives the x-value to which a given point is to be mapped and the second gives the y-value. You might be wondering why we don’t just use a single two-channel array instead. The reason is so that the results from cvInitUndistortMap() can be passed directly to cvRemap().

       The function cvUndistort2() does all this in a single step. It takes your initial (distorted) image as well as the camera’s intrinsic matrix and distortion coefficients, and then outputs an undistorted image of the same size. As mentioned previously, cvUndistortPoints() is used if you just have a list of 2D point coordinates from the original image and you want to compute their associated undistorted point coordinates.

     In Matlab, there is no built-in function that computes this, but it is easy to implement. An implementation of a function to eliminate the distortion in an image could be:

function [u_sd] = undistor( u_cd , D )

r = size( u_cd , 1 );
if r ~= 2 && r ~= 3
    error( 'u_cd must be a 2xn or 3xn matrix' );
end

if numel( D ) ~= 6
    error( 'D must be a 6x1 or 1x6 vector' );  % six distortion coeffs expected
end

if r == 3
    % Normalize homogeneous coordinates
    u_cd = u_cd ./ repmat( u_cd(3,:) , 3 , 1 );
    u_cd = u_cd(1:2,:);
end

% Distortion coeffs
Xo = D(1);  % optical center, x
Yo = D(2);  % optical center, y
k1 = D(3);  % radial terms
k2 = D(4);
p1 = D(5);  % tangential terms (note: p1/p2 roles are swapped
p2 = D(6);  % with respect to OpenCV's convention)

% Convert to central coord.
u = u_cd(1,:);
v = u_cd(2,:);
xd_ = u - Xo;
yd_ = v - Yo;

% Radial distances
r2 = xd_.^2 + yd_.^2;
r4 = r2.^2;

% Distortion correction
Axd = xd_.*(k1*r2 + k2*r4) + p1*(r2 + 2*xd_.^2) + 2*p2*xd_.*yd_;
Ayd = yd_.*(k1*r2 + k2*r4) + p2*(r2 + 2*yd_.^2) + 2*p1*xd_.*yd_;

u_sd = [u + Axd; v + Ayd];

end


Do not forget to check out our AR Browser and Image Matching SDKs.
