Diffraction is an optical effect which limits the total resolution of your photography — no matter how many megapixels your camera may have. It happens because light begins to disperse or 'diffract' when passing through a small opening (such as your camera's aperture). This effect is normally negligible, since smaller apertures often improve sharpness by minimizing lens aberrations. However, for sufficiently small apertures, this strategy becomes counterproductive — at which point your camera is said to have become diffraction limited. Knowing this limit can help maximize detail, and avoid an unnecessarily long exposure or high ISO speed.
Assume we have a physically perfect lens with a perfectly circular aperture; such a lens would be called 'diffraction limited,' because the only limitation on the maximum resolution of an image it creates is the physical phenomenon of light diffraction, rather than any imperfections, misalignment, or sensor resolution.
Light rays passing through a small aperture will begin to diverge and interfere with one another. This becomes more significant as the size of the aperture decreases relative to the wavelength of light passing through, but occurs to some extent for any aperture or concentrated light source.
Since the divergent rays now travel different distances, some move out of phase and begin to interfere with each other — adding in some places and partially or completely canceling out in others. This interference produces a diffraction pattern with peak intensities where the amplitude of the light waves add, and less light where they subtract. If one were to measure the intensity of light reaching each position on a line, the measurements would appear as bands similar to those shown below.
For an ideal circular aperture, the 2-D diffraction pattern is called an 'airy disk,' after its discoverer George Airy. The width of the airy disk (defined as the diameter of the first dark circle) is used to define the theoretical maximum resolution for an optical system.
When the diameter of the airy disk's central peak becomes large relative to the pixel size in the camera (or maximum tolerable circle of confusion), it begins to have a visual impact on the image. Once two airy disks become any closer than half their width, they are also no longer resolvable (Rayleigh criterion).
Diffraction thus sets a fundamental resolution limit that is independent of the number of megapixels, or the size of the film format. It depends only on the f-number of your lens, and on the wavelength of light being imaged. One can think of it as the smallest theoretical 'pixel' of detail in photography. Furthermore, the onset of diffraction is gradual; prior to limiting resolution, it can still reduce small-scale contrast by causing airy disks to partially overlap.
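The dependence on only f-number and wavelength can be sketched numerically. This is a minimal illustration of the standard first-minimum formula for a circular aperture (d ≈ 2.44 · λ · N); the function name and example f-numbers are my own, not taken from any particular camera.

```python
# Diameter of the airy disk out to its first dark ring: d ≈ 2.44 * λ * N.
# (2.44 = 2 * 1.22, from the first zero of the Bessel function J1 for a
# circular aperture.)  The f-numbers below are arbitrary examples.

def airy_disk_diameter_um(f_number, wavelength_nm=550):
    """Airy disk diameter (to its first minimum) in micrometers."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number  # nm -> µm

for n in (5.6, 11, 22):
    print(f"f/{n}: airy disk ≈ {airy_disk_diameter_um(n):.1f} µm")
```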
VISUAL EXAMPLE: APERTURE VS. PIXEL SIZE
The size of the airy disk is primarily useful in the context of pixel size. The following interactive tool shows a single airy disk compared to pixel size for several camera models:
Note: the airy disk above will appear narrower than its specified diameter, since that diameter is defined by where the disk reaches its first minimum rather than by the visible inner bright region.
As a result of the sensor's anti-aliasing filter (and the Rayleigh criterion above), an airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution (assuming an otherwise perfect lens). However, diffraction will likely have a visual impact prior to reaching this diameter.
As two examples, the Canon EOS 20D begins to show diffraction at around f/11, whereas the Canon PowerShot G6 begins to show its effects at only about f/5.6. On the other hand, the Canon G6 does not require apertures as small as the 20D in order to achieve the same depth of field (due to its much smaller sensor size).
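These two examples can be roughly reproduced by inverting the airy disk formula to find the f-number at which the disk spans a given number of pixels. The pixel pitches below (~6.4 µm for the 20D, ~2.3 µm for the G6) and the 2.5-pixel criterion are illustrative assumptions, so the resulting f-numbers land only in the neighborhood of those quoted above.

```python
# Invert d = 2.44 * λ * N to estimate where diffraction becomes limiting.
# Pixel pitches are approximate assumptions for illustration, not
# manufacturer specifications.

def diffraction_limited_fstop(pixel_pitch_um, disk_pixels=2.5, wavelength_nm=550):
    """f-number at which the airy disk diameter covers `disk_pixels` pixels."""
    return (disk_pixels * pixel_pitch_um) / (2.44 * wavelength_nm / 1000.0)

for camera, pitch_um in [("Canon EOS 20D (~6.4 µm pixels)", 6.4),
                         ("Canon PowerShot G6 (~2.3 µm pixels)", 2.3)]:
    print(f"{camera}: diffraction limited near "
          f"f/{diffraction_limited_fstop(pitch_um):.1f}")
```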
Since the size of the airy disk also depends on the wavelength of light, each of the three primary colors will reach its diffraction limit at a different aperture. The calculation above assumes light in the middle of the visible spectrum (~550 nm). Typical digital SLR cameras can capture light with a wavelength of anywhere from 450 to 680 nm, so at best the airy disk would have a diameter of 80% the size shown above (for pure blue light).
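Because the airy disk scales linearly with wavelength, the per-color onset can be sketched as follows; the 450/550/680 nm values come from the text above, while the f/11 aperture and helper name are arbitrary choices for illustration.

```python
# Airy disk diameter per color channel at a fixed aperture.  The disk for
# pure blue light (450 nm) is about 82% of the green (550 nm) diameter,
# matching the ~80% figure quoted above.

F_NUMBER = 11  # example aperture

def airy_um(wavelength_nm, f_number=F_NUMBER):
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for color, wl in [("blue", 450), ("green", 550), ("red", 680)]:
    print(f"{color:5s} ({wl} nm): airy disk ≈ {airy_um(wl):.1f} µm "
          f"({wl / 550:.0%} of the green diameter)")
```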
Another complication is that sensors utilizing a Bayer array allocate twice the fraction of pixels to green as to red or blue light, and then interpolate these colors to produce the final full-color image. This means that as the diffraction limit is approached, the first signs will be a loss of resolution in green and in pixel-level luminosity. Blue light requires the smallest apertures (highest f-stop numbers) in order for diffraction to reduce its resolution.

Other Technical Notes:
- The physical pixels do not actually occupy 100% of the sensor area, but instead have gaps in between. This calculation assumes that microlenses make these gaps negligible.
- Some cameras have pixels which are slightly rectangular, in which case diffraction will reduce resolution more in one direction than the other.
- The above chart approximates the aperture as being circular (a common approximation), but in reality these are polygonal with 5-8 sides.
- The calculation for pixel area assumes these extend all the way to the edge of each sensor, and all contribute to the final image. In reality, camera manufacturers leave some pixels unused around the edge of the sensor. Since not all manufacturers specify the number of used vs. unused pixels, only used pixels were considered when calculating the fraction of total sensor area. The pixel sizes above are thus slightly larger than if measured (but by no more than 5%).
WHAT IT LOOKS LIKE
Although the above diagrams help give a feel for the concept of diffraction, only real-world photography can show its visual impact. The following series of images were taken on the Canon EOS 20D, which typically exhibits softening from diffraction beyond about f/11. Move your mouse over each f-number to see how these impact fine detail:
Note how most of the lines in the fabric are still resolved at f/11, but have slightly lower small-scale contrast or acutance (particularly where the fabric lines are very close). This is because the airy disks are only partially overlapping, similar to the effect on adjacent rows of alternating black and white airy disks (as shown on the right). By f/22, almost all fine lines have been smoothed out because the airy disks are larger than this detail.
CALCULATING THE DIFFRACTION LIMIT
The form below calculates the size of the airy disk and assesses whether the camera has become diffraction limited. Click on 'show advanced' to define a custom circle of confusion (CoC), or to see the influence of pixel size.
Note: CF = 'crop factor' (commonly referred to as the focal length multiplier). The calculator assumes square pixels, a 4:3 aspect ratio for compact digital cameras and 3:2 for SLRs, and a typical Bayer array on the camera sensor.
This calculator shows a camera as being diffraction limited when the diameter of the airy disk exceeds what is typically resolvable in an 8x10 inch print viewed from one foot. Click 'show advanced' to change the criteria for reaching this limit. The 'set circle of confusion based on pixels' checkbox indicates when diffraction is likely to become visible on a computer at 100% scale. For a further explanation of each input setting, also see the depth of field calculator.
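A minimal sketch of the calculator's decision, under assumed values: the 0.025 mm full-frame circle of confusion (corresponding to an 8x10 inch print viewed from one foot) and its division by the crop factor are illustrative choices, not the calculator's exact internals.

```python
# Flag a camera as diffraction limited when the airy disk exceeds the
# circle of confusion (CoC).  The full-frame CoC value and its scaling by
# crop factor are assumptions for illustration.

def is_diffraction_limited(f_number, crop_factor=1.0,
                           wavelength_nm=550, coc_full_frame_mm=0.025):
    airy_mm = 2.44 * (wavelength_nm / 1e6) * f_number   # nm -> mm
    coc_mm = coc_full_frame_mm / crop_factor            # smaller sensor, smaller CoC
    return airy_mm > coc_mm

print(is_diffraction_limited(22, crop_factor=1.6))  # small aperture on APS-C
print(is_diffraction_limited(8, crop_factor=1.6))   # moderate aperture
```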
In practice, the diffraction limit doesn't necessarily bring about an abrupt change; there is actually a gradual transition between when diffraction is and is not visible. Furthermore, this limit is only a best-case scenario when using an otherwise perfect lens; real-world results may vary.
NOTES ON REAL-WORLD USE IN PHOTOGRAPHY
Even when a camera system is near or just past its diffraction limit, other factors such as focus accuracy, motion blur and imperfect lenses are likely to be more significant. Diffraction therefore limits total sharpness only when using a sturdy tripod, mirror lock-up and a very high quality lens.
Some diffraction is often OK if you are willing to sacrifice some sharpness at the focal plane in exchange for greater sharpness away from it (a deeper depth of field). Alternatively, very small apertures may be required to achieve sufficiently long exposures, such as to induce motion blur with flowing water. In other words, diffraction is just something to be aware of when choosing your exposure settings, similar to how one would balance other trade-offs such as noise (ISO) vs. shutter speed.
This should not lead you to think that 'larger apertures are better,' even though very small apertures create a soft image; most lenses are also quite soft when used wide open (at the largest aperture available). Camera systems typically have an optimal aperture in between the largest and smallest settings; with most lenses, optimal sharpness is often close to the diffraction limit, but with some lenses this may even occur prior to the diffraction limit. These calculations only show when diffraction becomes significant, not necessarily the location of optimum sharpness (see camera lens quality: MTF, resolution & contrast for more on this).
Are smaller pixels somehow worse? Not necessarily. Just because the diffraction limit has been reached (with large pixels) does not necessarily mean an image is any worse than if smaller pixels had been used (and the limit was surpassed); both scenarios still have the same total resolution (even though the smaller pixels produce a larger file). However, the camera with the smaller pixels will render the photo with fewer artifacts (such as color moiré and aliasing). Smaller pixels also give more creative flexibility, since these can yield a higher resolution if using a larger aperture is possible (such as when the depth of field can be shallow). On the other hand, when other factors such as noise and dynamic range are considered, the 'small vs. large' pixels debate becomes more complicated.
Technical Note: Independence of Focal Length
Since the physical size of an aperture is larger for telephoto lenses (f/4 has a 50 mm diameter at 200 mm, but only a 25 mm diameter at 100 mm), why doesn't the airy disk become smaller? This is because longer focal lengths also cause light to travel farther before hitting the camera sensor -- thus increasing the distance over which the airy disk can continue to diverge. The competing effects of larger aperture and longer focal length therefore cancel, leaving only the f-number (which describes focal length relative to aperture size) as being important.
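This cancellation is easy to verify numerically; the helper below is hypothetical, but the 200 mm/50 mm and 100 mm/25 mm pairs are the ones quoted above.

```python
# Focal length cancels out: f/4 at 200 mm (50 mm opening) and f/4 at
# 100 mm (25 mm opening) yield the same airy disk, because only the
# f-number N = focal_length / aperture_diameter enters the formula.

def airy_diameter_um(focal_length_mm, aperture_diameter_mm, wavelength_nm=550):
    n = focal_length_mm / aperture_diameter_mm  # the f-number
    return 2.44 * (wavelength_nm / 1000.0) * n

print(airy_diameter_um(200, 50))  # f/4 telephoto
print(airy_diameter_um(100, 25))  # f/4 at half the focal length, same disk
```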
For additional reading on this topic, also see the addendum: Digital Camera Diffraction, Part 2: Resolution, Color & Micro-Contrast
The optical microscope has played a central role in helping to untangle the complex mysteries of biology ever since the seventeenth century, when Dutch inventor Antoni van Leeuwenhoek and English scientist Robert Hooke first reported observations using single-lens and compound microscopes, respectively. Over the past three centuries, a vast number of technological developments and manufacturing breakthroughs have led to significantly advanced microscope designs featuring dramatically improved image quality with minimal aberration. However, despite the computer-aided optical design and automated grinding methodology utilized to fabricate modern lens components, glass-based microscopes are still hampered by an ultimate limit in optical resolution that is imposed by the diffraction of visible light wavefronts as they pass through the circular aperture at the rear focal plane of the objective. As a result, the highest achievable point-to-point resolution that can be obtained with an optical microscope is governed by a fundamental set of physical laws that cannot be easily overcome by rational alterations in objective lens or aperture design. These resolution limitations are often referred to as the diffraction barrier, which restricts the ability of optical instruments to distinguish between two objects separated by a lateral distance less than approximately half the wavelength of light used to image the specimen.
The process of diffraction involves the spreading of light waves when they interact with the intricate structures that compose a typical specimen. Due to the fact that most specimens observed in the microscope are composed of highly overlapping features that are best represented by multiple point sources of light, discussions of the microscope diffraction barrier center on describing the passage of wavefronts representing a single point source of light through the various optical elements and aperture diaphragms. As will be discussed below, the transmitted light or fluorescence emission wavefronts emanating from a point in the specimen plane of the microscope become diffracted at the edges of the objective aperture, effectively spreading the wavefronts to produce an image of the point source that is broadened into a diffraction pattern having a central disk of finite, but larger size than the original point. Therefore, due to diffraction of light, the image of a specimen never perfectly represents the real details present in the specimen because there is a lower limit below which the microscope optical system cannot resolve structural details.
In addition to the diffraction phenomenon that occurs with divergent light waves in optical instruments, the process of interference describes the recombination and summation of two or more superimposed wavefronts. Interference of light is perhaps the most ubiquitous phenomenon in optical microscopy and plays a central role in all aspects of image formation. In fluorescence or laser scanning confocal microscopy, the role of the objective is to focus the excitation light onto a focal point in order to ensure constructive interference of the focused wavefront at the specimen plane. In terms of this requirement, constructive interference (discussed below) ensures that the electric field vector of wavefronts incident from all available objective aperture angles resides in the same phase and therefore produces the smallest possible excitation spot.
Both interference and diffraction, which are actually manifestations of the same process, are responsible for creating a real image of the specimen at the intermediate image plane in a microscope. In brief, when two wavefronts interfere, their amplitudes add to double if the waves are perfectly in phase (constructive interference), but cancel completely when out of phase by 180 degrees (termed destructive interference); most interference, however, falls somewhere in between. The photon energy inherent in a light wave is not itself doubled or annihilated when two waves interfere; rather, this energy is channeled during diffraction and interference into directions that permit constructive interference. Therefore, interference and diffraction should be considered phenomena involving the redistribution of light waves and photon energy.
A point object in a microscope, such as a fluorescent protein single molecule, generates an image at the intermediate plane that consists of a diffraction pattern created by the action of interference. When highly magnified, the diffraction pattern of the point object is observed to consist of a central spot (diffraction disk) surrounded by a series of diffraction rings (see Figure 1). In the nomenclature associated with diffraction theory, the bright central region is referred to as the zeroth-order diffraction spot while the rings are called the first, second, third, etc., order diffraction rings. When the microscope is properly focused, the intensity of light at the minima between the rings is zero. Combined, this point source diffraction pattern is referred to as an Airy disk (after Sir George B. Airy, a nineteenth century English astronomer). The size of the central spot in the Airy pattern is related to the wavelength of light and the aperture angle of the objective. For a microscope objective, the aperture angle is described by the numerical aperture (NA), which includes the term sin θ, the half angle over which the objective can gather light from the specimen. In terms of resolution, the radius of the diffraction Airy disk in the lateral (x,y) image plane is defined by the following formula:
Resolution(x,y) = λ / (2 · NA)    (1)
where λ is the average wavelength of illumination in transmitted light or the excitation wavelength band in fluorescence. The objective numerical aperture (NA = n•sin(θ)) is defined by the refractive index of the imaging medium (n; usually air, water, glycerin, or oil) multiplied by the sine of the aperture angle (sin(θ)). As a result of this relationship, the size of the spot created by a point source decreases with decreasing wavelength and increasing numerical aperture, but always remains a disk of finite diameter. Thus, the image spot size produced by a 100x magnification objective having a numerical aperture of 0.90 in green light (550 nanometers) is approximately 30 micrometers, whereas the spot size produced by a 100x objective of numerical aperture 1.4 is approximately 200 nanometers, almost 50 percent smaller. The diffraction-limited resolution theory was advanced by German physicist Ernst Abbe in 1873 (see Equation (1)) and later refined by Lord Rayleigh in 1896 (Equation (3)) to quantitate the measure of separation necessary between two Airy patterns in order to distinguish them as separate entities.
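The two classical criteria can be compared numerically; this sketch assumes the usual coefficients (λ/2NA for Abbe's Equation (1) and 0.61·λ/NA for Rayleigh's Equation (3)), with the example wavelength and numerical apertures from the text.

```python
# Lateral resolution in the specimen plane from the two classical criteria.
# The wavelength and numerical apertures are the example values above.

def abbe_lateral_nm(wavelength_nm, na):
    """Abbe limit, Equation (1): λ / (2·NA)."""
    return wavelength_nm / (2 * na)

def rayleigh_lateral_nm(wavelength_nm, na):
    """Rayleigh criterion, Equation (3): 0.61·λ / NA."""
    return 0.61 * wavelength_nm / na

for na in (0.90, 1.4):
    print(f"NA {na}: Abbe ≈ {abbe_lateral_nm(550, na):.0f} nm, "
          f"Rayleigh ≈ {rayleigh_lateral_nm(550, na):.0f} nm")
```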
According to Abbe's theory, images are composed of an array of diffraction-limited spots of varying intensity that overlap to produce the final result, as described above. Thus, the only mechanisms for optimizing spatial resolution and image contrast are to minimize the size of the diffraction-limited spots by decreasing the imaging wavelength, increasing the numerical aperture, or using an imaging medium with a larger refractive index. However, even under ideal conditions with the most powerful objectives, lateral resolution is still limited to relatively modest levels approaching 200 to 250 nanometers (see Equation (1)), due to the transmission characteristics of glass at wavelengths beneath 400 nanometers and the physical constraints on numerical aperture. In contrast, the axial dimension of the Airy disk forms an elliptical pattern that is often referred to as the point-spread function (PSF). The elongated geometry of the point-spread function along the optical axis arises from the non-symmetrical nature of the wavefront that emerges from the microscope objective. Axial resolution in optical microscopy is even worse than lateral resolution (as outlined in Equation (2)), on the order of 500 nanometers. When attempting to image highly convoluted features, such as cellular organelles, diffraction-limited resolution is manifested as poor axial sectioning capability and lowered contrast in the imaging plane. Furthermore, the overall contrast achieved in three-dimensional specimens is generally dominated by the relatively poor axial resolution, which arises from out-of-focus light interfering with the point-spread function.
Illustrated in Figure 1 is the effect of objective aperture angle on the size of a diffraction spot produced in a typical optical microscope. The point source and its conjugate (P) in the image plane where wavefronts converge and undergo constructive interference are illustrated for objectives having large (Figure 1(a)) and small (Figure 1(b)) numerical aperture. The point P1 is moved laterally in the focal plane until destructive interference at a certain distance (dictated by the objective numerical aperture) defines the location of the first diffraction minimum and thus the radius of the diffraction spot. For the high resolution configuration in Figure 1(a), Points A and B in the wavefront produce a smaller spot size, with 10 arbitrary units defining the imaged spot size. In contrast, for the lower resolution configuration presented in Figure 1(b), the reduced aperture angle increases the distance between A and B to 18 arbitrary units. In other words, light emitted by a fluorophore (the point source) is focused by the objective at the image plane, where wavefronts traveling the same distance arrive in phase and interfere constructively to produce a spot having high intensity. Destructive interference, leading to zero intensity, is generated by wavefronts that arrive one-half wavelength out of phase (see discussion above). Because the drop in intensity is gradual along the lateral axis of the spot, two point sources (or fluorescent molecules) closer together than the size of the spot will appear to be a single, larger spot and are unresolved.
As described above, the intensity distribution of an Airy disk in three dimensions is referred to as a point-spread function and completely describes the diffraction pattern of a point source of light (such as a single fluorophore) in the lateral (x,y) and axial (z) dimensions as modified by a diffraction-limited optical microscope. The size of the point spread function is determined by the wavelength of imaging light and the characteristics of the objective (numerical aperture) and the refractive index of the imaging medium. Resolution, in a practical sense, is often defined as the smallest separation distance between two point-like objects in which they can still be distinguished as individual emitters (and not amalgamated into a single spot). As a result, most resolution criteria (for example, the Rayleigh criterion, Sparrow limit, or the full width at half maximum; FWHM) are directly related to the properties and geometry of the point-spread function.
According to the Rayleigh criterion, two point sources observed in the microscope are regarded as being resolved when the principal diffraction maximum (the central spot of the Airy disk; see Figure 2) from one of the point sources overlaps with the first minimum (dark region surrounding the central spot) of the Airy disk from the other point source. If the distance between the two Airy disks or point-spread functions is greater than this value, the two point sources are considered to be resolved (and can readily be distinguished). Otherwise, the Airy disks merge together and are considered not to be resolved. Stated in other terms, the Rayleigh criterion is satisfied when the distance between the images of two closely spaced point sources is approximately equal to the width of the point-spread function. In contrast, the Sparrow resolution limit is defined as the distance between two point sources where the images no longer have a dip in brightness between the central peaks, but rather exhibit constant brightness across the region between the peaks. The Sparrow resolution limit is closer to the Abbe value and approximately two-thirds (Equation (4)) of the Rayleigh resolution limit.
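Taking the Sparrow limit as roughly two-thirds of the Rayleigh limit, as stated above, the three criteria compare as follows for an example high-NA objective; the 550 nm wavelength and NA of 1.4 are illustrative choices.

```python
# Comparing the resolution criteria for one example objective.  The
# Sparrow value is derived here as two-thirds of the Rayleigh limit, per
# the relation quoted in the text.

WAVELENGTH_NM = 550
NA = 1.4

rayleigh = 0.61 * WAVELENGTH_NM / NA   # first minimum falls on the other peak
sparrow = (2.0 / 3.0) * rayleigh       # dip between the peaks just disappears
abbe = WAVELENGTH_NM / (2 * NA)        # smallest transmissible periodicity

print(f"Rayleigh ≈ {rayleigh:.0f} nm")
print(f"Sparrow  ≈ {sparrow:.0f} nm")
print(f"Abbe     ≈ {abbe:.0f} nm")
```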
Presented in Figure 2 is a graphical representation of the Rayleigh criterion for both the lateral and axial dimensions of two closely positioned point sources. In Figure 2(a), the intensity of the point sources is represented by solid blue and dashed yellow curves. The total intensity generated by the combined point sources is represented by a red curve that is displaced along the ordinate for clarity. In order to distinguish between these point sources, the distance between the peaks should be sufficient to produce an intensity minimum that ranges between 20 and 30 percent of the peak intensity (Figure 2(a)). The same criterion applies to the axial dimension (Figure 2(b)). Note that the resolution (indicated in Figures 2(a) and 2(b) along the abscissa) is significantly lower along the z axis.
Although the Rayleigh criterion and similar measures are useful resolution gauges for observation of the specimen, such definitions of resolution have several shortcomings. For example, in cases where the investigator knows that two particles have merged into a single point image, computer algorithms can be applied to discriminate between the particles down to arbitrarily smaller distances. Determining the exact position of the two adjacent particles then becomes a question of experimental precision dictated by photon statistics, rather than being described by the Rayleigh limit. Furthermore, resolution limits do not necessarily correspond to the level of detail that can be observed in images. While the Rayleigh limit is defined by the distance to the first minimum of the point-spread function, this value can be rendered smaller by advanced optical systems. Resolution criteria also do not reflect the fact that light is a diffracting wavefront, which poses a finite limit on the level of detail that is actually contained within the waves.
The Abbe equation for resolution avoids the shortcomings of the Rayleigh criterion and Sparrow limit, but carries a more indirect interpretation. The process of imaging a specimen in the microscope can be described by a convolution operation between the illumination and fluorescence emission (or transmitted light) point-spread functions. After being subjected to Fourier transformation (see Figure 3), objects observed in the microscope (whether they are periodic or not) can be uniquely described as a summation of numerous sinusoidal curves having different spatial frequencies. Note that the image of a specimen, present in all conjugate image planes, exists as the Fourier transform in the corresponding aperture planes, where higher frequencies represent fine specimen detail and lower frequencies represent coarse details (Figure 3(a)). This point is illustrated with the waveform in the objective rear aperture in Figure 3(b). The lower spatial frequencies reside near the center of the aperture, while the frequency progressively increases for regions approaching the edges of the aperture.
The concept of convolution in real space can be readily simplified by examining the equivalent operation in Fourier space. In the latter, the transformed object can be multiplied with the Fourier transform of the point-spread function to yield the Fourier transform of an ideal image lacking noise. After Fourier transformation, the point-spread function describes how efficiently each spatial frequency of the specimen is transferred to the final image. Thus, the Fourier-transformed point-spread function is referred to as the optical transfer function (OTF; see Figure 3(b)). The OTF defines the extent to which spatial frequencies containing information about the specimen are lost, retained, attenuated, or phase-shifted during the imaging process. Spatial frequency information that is lost during imaging cannot be recovered, so one of the primary goals for all forms of microscopy is to acquire the highest frequency range as possible for the specimen. The value of the OTF at each spatial frequency (measured in oscillations per meter) is a useful indicator to describe the contrast that a particular sinusoidal object feature achieves in the final image.
One of the important points to remember about the optical microscope is that the detection optical transfer function has a characteristic frequency that serves as a resolution 'cut-off' border (the Abbe limiting frequency; see Figure 3(b)). Frequencies higher than the limiting value are not present in the image recorded by the microscope. The peak-to-peak distance for the highest spatial frequency able to pass through the objective (the value d for the green waveform in Figure 3(a)) is therefore commonly referred to as the Abbe limit, which is more formally defined as the smallest periodicity in a structure that can be detected in the final image. Due to the fact that a point source emits or transmits a wide range of spatial frequencies, the Abbe limit must also be present in the point-spread function spanning three dimensions.
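The cut-off can be computed directly: the highest transmissible spatial frequency is 2·NA/λ, whose reciprocal is the Abbe limit d = λ/(2·NA). The function below is a hypothetical helper using the example wavelength and numerical aperture from earlier.

```python
# The OTF cut-off: the highest spatial frequency the objective passes is
# 2·NA/λ, and its reciprocal is the smallest detectable periodicity d
# (the Abbe limit).

def abbe_cutoff(wavelength_nm, na):
    d_nm = wavelength_nm / (2 * na)   # smallest periodicity, in nm
    cutoff_per_um = 1000.0 / d_nm     # cut-off frequency, cycles per µm
    return d_nm, cutoff_per_um

d, k = abbe_cutoff(550, 1.4)
print(f"Smallest periodicity ≈ {d:.0f} nm, cut-off ≈ {k:.1f} cycles/µm")
```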
A traditional widefield microscope generates an image of a point source by capturing light at various locations in the objective and further processing the wavefronts as they pass through the optical train to finally interfere at the image plane. As a consequence of the reciprocity principle in optics, the Abbe limit along the lateral axis of the microscope corresponds to the maximum-to-maximum distance that can be obtained by interfering two waves at the most extreme angles captured by the objective. The Abbe resolution limit is attractive because it depends only on the maximal relative angle between wavefronts leaving the specimen and captured by the objective. This limit therefore describes the smallest level of detail that can possibly be imaged; periodic structures with higher spatial frequencies (shorter periods) will not be transferred to the image.
Even in cases where an optical microscope is equipped with the highest available quality of lens elements, is perfectly aligned, and has the highest numerical aperture, the resolution remains limited to approximately half the wavelength of light in the best-case scenario. In practice, the resolution typically achieved in routine imaging often does not reach the physical limit imposed by diffraction. This is because optical inhomogeneities in the specimen can distort the phase of the excitation beam, leading to a focal volume that is significantly larger than the diffraction-limited ideal. Additionally, resolution can also be compromised by the use of incompatible immersion oil, coverslips having a thickness outside the optimum range, and improperly adjusted correction collars.
Laser scanning confocal and multiphoton microscopy have been widely used to moderately enhance spatial resolution along both the lateral and axial axes, but the techniques remain limited in terms of achieving substantial improvement. The focused laser excitation coupled with pinhole-restricted detection in confocal microscopy can, in principle, improve the spatial resolution by a factor of 1.4, although this is only realized at a significant cost in signal-to-noise. Likewise, multiphoton fluorescence microscopy takes advantage of nonlinear absorption processes to reduce the effective size of the excitation point-spread function. Once again, however, the smaller and more refined point-spread function is counteracted by the necessity to use longer wavelength excitation light. As a result, rather than providing dramatic improvements to resolution, the primary advantage of confocal and multiphoton microscopy over traditional widefield techniques is the reduction of background signal originating from emission sources removed from the focal plane (out-of-focus light), which enables crisp optical sections to be obtained for three-dimensional volume-rendered imaging.
The resolution limits imposed by the physical laws that govern optical microscopy can be exceeded, however, by taking advantage of 'loopholes' in the law that underscore the fact that the limitations are true only under certain assumptions. Techniques exploiting these 'loopholes' have come to be known as super-resolution microscopies, with many major manufacturers now offering various types of super-resolution microscopes.
Joel S. Silfies and Stanley A. Schwartz - Nikon Instruments, Inc., 1300 Walt Whitman Road, Melville, New York, 11747.
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.