
How is image rotation done?

Author: Ruby

Aug. 19, 2024






Rotate



Common Names: Rotation

Brief Description

The rotation operator performs a geometric transform which maps the position of a picture element in an input image onto a position in an output image by rotating it through a user-specified angle about an origin. In most implementations, output locations which fall outside the boundary of the image are ignored. Rotation is most commonly used to improve the visual appearance of an image, although it can also be useful as a preprocessor in applications where directional operators are involved. Rotation is a special case of an affine transformation.

How It Works

The rotation operator performs a transformation of the form:

    x2 = cos(θ)·(x1 - x0) - sin(θ)·(y1 - y0) + x0
    y2 = sin(θ)·(x1 - x0) + cos(θ)·(y1 - y0) + y0

where (x1, y1) is a pixel position in the input image, (x2, y2) is the corresponding position in the output image, (x0, y0) are the coordinates of the center of rotation (in the input image) and θ is the angle of rotation, with clockwise rotations having positive angles. (Note that we are working in image coordinates, so the y axis points downward. A similar rotation formula can be defined for the case where the y axis points upward.) Even more than the translate operator, the rotation operation produces output locations which do not fit within the boundaries of the image (as defined by the dimensions of the original input image). In such cases, destination elements which have been mapped outside the image are ignored by most implementations. Pixel locations out of which an image has been rotated are usually filled in with black pixels.
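The transform above can be written directly in code. Here is a minimal Python sketch (the function name is my own, not from the text), mapping a single input pixel position to its output position:

```python
import math

def rotate_point(x1, y1, x0, y0, theta_deg):
    """Map the input-image pixel position (x1, y1) to its output
    position (x2, y2) by rotating through theta_deg degrees about
    the centre (x0, y0). Positive angles are clockwise, because in
    image coordinates the y axis points downward."""
    theta = math.radians(theta_deg)
    x2 = math.cos(theta) * (x1 - x0) - math.sin(theta) * (y1 - y0) + x0
    y2 = math.sin(theta) * (x1 - x0) + math.cos(theta) * (y1 - y0) + y0
    return x2, y2
```

For example, a 90 degree rotation about the origin maps (1, 0) to (0, 1): a point to the right of the centre moves below it, which is clockwise when the y axis points down.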

The rotation algorithm, unlike that employed by translation, can produce coordinates which are not integers. In order to generate the intensity of the pixels at each integer position, different heuristics (or re-sampling techniques) may be employed. For example, two common methods include:

  • Allow the intensity level at each integer pixel position to assume the value of the nearest non-integer neighbor.

  • Calculate the intensity level at each integer pixel position based on a weighted average of the n nearest non-integer values. The weighting is proportional to the distance or pixel overlap of the nearby projections.

The latter method produces better results but increases the computation time of the algorithm.
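As a sketch (not from the original text), both heuristics can be implemented by inverse mapping: scan each output pixel, rotate its coordinates back into the input image, and resample there, so every output pixel receives a value. The image representation (a nested list of grey levels) and function name are my own assumptions:

```python
import math

def rotate_image(img, theta_deg, bilinear=False):
    """Rotate a 2-D list of grey levels about its centre.
    Inverse mapping: for each output pixel, find where it came from
    in the input, then resample. Unmapped pixels stay 0 (black)."""
    h, w = len(img), len(img[0])
    y0, x0 = (h - 1) / 2.0, (w - 1) / 2.0
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    out = [[0] * w for _ in range(h)]
    for y2 in range(h):
        for x2 in range(w):
            # inverse rotation of the output coordinate
            x1 = c * (x2 - x0) + s * (y2 - y0) + x0
            y1 = -s * (x2 - x0) + c * (y2 - y0) + y0
            if bilinear:
                # weighted average of the four surrounding pixels
                xf, yf = math.floor(x1), math.floor(y1)
                if 0 <= xf < w - 1 and 0 <= yf < h - 1:
                    dx, dy = x1 - xf, y1 - yf
                    out[y2][x2] = (
                        img[yf][xf] * (1 - dx) * (1 - dy)
                        + img[yf][xf + 1] * dx * (1 - dy)
                        + img[yf + 1][xf] * (1 - dx) * dy
                        + img[yf + 1][xf + 1] * dx * dy
                    )
            else:
                # nearest-neighbour: snap to the closest input pixel
                xn, yn = round(x1), round(y1)
                if 0 <= xn < w and 0 <= yn < h:
                    out[y2][x2] = img[yn][xn]
    return out
```

The bilinear branch performs four multiply-accumulates per output pixel where nearest-neighbour performs none, which is the computation-time trade-off noted above.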

Guidelines for Use

A rotation is defined by an angle θ and an origin of rotation. For example, consider the image

whose subject is centered. We can rotate the image through 180 degrees about the image (and circle) center to produce

If we use these same parameter settings but a new, smaller image, such as the 222×217 size artificial, black-on-white image

we achieve poor results, as shown in

because the specified axis of rotation is sufficiently displaced from the image center that much of the image is swept off the page. Likewise, rotating this image through an angle which is not an integer multiple of 90 degrees (e.g. 45 degrees in this case) rotates part of the image off the visible output and leaves many empty pixel values, as seen in

(Here, non-integer pixel values were re-sampled using the first technique mentioned above.)

Like translation, rotation may be employed in the early stages of more sophisticated image processing operations. For example, there are numerous directional operators in image processing (e.g. many edge detection and morphological operators) and, in many implementations, these operations are only defined along a limited set of directions: 0, 45, 90, etc. A user may construct a hybrid operator which operates along any desired orientation by first rotating the image by the desired angle, performing the edge detection (or erosion, dilation, etc.), and then rotating the result back to the original orientation. (See Figure 1.)
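Under stated assumptions (Python, images as nested lists of grey levels, and a simple horizontal-difference filter standing in for a real directional operator; all names are my own), the rotate/filter/rotate-back hybrid might look like:

```python
import math

def rotate(img, deg):
    """Minimal nearest-neighbour rotation about the image centre."""
    h, w = len(img), len(img[0])
    y0, x0 = (h - 1) / 2, (w - 1) / 2
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # inverse-map each output pixel into the input image
            xs = round(c * (x - x0) + s * (y - y0) + x0)
            ys = round(-s * (x - x0) + c * (y - y0) + y0)
            if 0 <= xs < w and 0 <= ys < h:
                out[y][x] = img[ys][xs]
    return out

def horizontal_edges(img):
    """A fixed-direction operator: absolute horizontal difference."""
    h, w = len(img), len(img[0])
    return [[abs(img[y][x + 1] - img[y][x]) if x + 1 < w else 0
             for x in range(w)] for y in range(h)]

def directional_edges(img, theta_deg):
    """Hybrid operator: rotate, apply the fixed operator, rotate back."""
    return rotate(horizontal_edges(rotate(img, theta_deg)), -theta_deg)
```

`directional_edges(img, 45)` would then respond to edges along a 45 degree orientation, at the cost of two extra resampling passes.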




Figure 1 A variable-direction edge detector.

As an example, consider

whose edges were detected by the directional operator defined using translation, giving

We can perform edge detection along the opposite direction to that shown in the image by employing a 180 degree rotation in the edge detection algorithm. The result is shown in

Notice the slight degradation of this image due to rotation re-sampling.


Exercises

  1. Consider images

    and

    which contain L-shaped parts of different sizes. a) Rotate and translate one of the images such that the bottom left corner of the "L" is in the same position in both images. b) Using a combination of histogramming, thresholding and pixel arithmetic (e.g. pixel subtraction), determine the approximate difference in size of the two parts.

  2. Make a collage based on a series of rotations and pixel additions of image

    You should begin by centering the propeller in the middle of the image. Next, rotate the image through a series of 45 degree rotations and add each rotated version back onto the original. (Note: you can improve the visual appearance of the result if you scale the intensity values of each rotated propeller a few shades before adding it onto the collage.)

  3. Investigate the effects of re-sampling when using rotation as a preprocessing tool in an image erosion application. First erode the images

    and

    using a 90 degree directional erosion operator. Next, rotate the image through 90 degrees before applying the directional erosion operator along the 0 degree orientation. Compare the results.
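The collage construction in Exercise 2 might be sketched as follows, assuming Python, a nested-list grey-level image, and a nearest-neighbour rotation (all names here are my own, and dimming each copy by a fixed number of grey levels is one way to realise the "scale the intensity values a few shades" hint):

```python
import math

def rotate(img, deg):
    """Minimal nearest-neighbour rotation about the image centre."""
    h, w = len(img), len(img[0])
    y0, x0 = (h - 1) / 2, (w - 1) / 2
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xs = round(c * (x - x0) + s * (y - y0) + x0)
            ys = round(-s * (x - x0) + c * (y - y0) + y0)
            if 0 <= xs < w and 0 <= ys < h:
                out[y][x] = img[ys][xs]
    return out

def collage(img, step_deg=45, fade=20):
    """Add each rotated copy onto the original, dimming each copy by
    `fade` grey levels so overlaps stay distinguishable; clip at 255."""
    out = [row[:] for row in img]
    angle = step_deg
    while angle < 360:
        rot = rotate(img, angle)
        for y in range(len(img)):
            for x in range(len(img[0])):
                out[y][x] = min(255, out[y][x] + max(0, rot[y][x] - fade))
        angle += step_deg
    return out
```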







© R. Fisher, S. Perkins, A. Walker and E. Wolfart.


The Simple Math Behind Image Rotation

Featuring an old-school game-development algorithm!

Hemanth

Published in Street Science

Mar 15,

The math behind image rotation: illustration created by the author

I recently started learning videogame development and stumbled upon the topic of sprite animation. As I was goofing around with pixel-based (raster) image rotations, I wondered how the earliest game developers tackled this problem.

Thankfully, we have a time machine (the internet) that lets us travel back and find out exactly that. One thing led to another, and I gathered enough information and made plenty of sketches to explore the mathematics behind image rotation in this essay.

If you are the type who is curious about the technical side of animation or are just someone who enjoys math, this essay is right up your alley. I will keep it beginner-friendly; I promise! Why don't we start by figuring out what the title image is all about?

The Pixel Salad

Most digital images that we see today, be it on our phones, televisions, or monitors, are pixel-based images (formally known as raster graphics). They approximate objects and scenes that make sense to our brains using a rectangular grid, whose dimensions we refer to as resolution.

We call each rectangular geometrical element in this grid a pixel, and each pixel carries exactly one colour value (a combination of…

