Optical Fourier techniques have become very important in optical communications and networking. One such area covered in Chapter 19 is arrayed waveguide gratings (AWGs) used in dense wavelength division multiplexing (DWDM). The AWG is also called a phased array (PHASAR). It is an imaging device in which an array of waveguides is used. The waveguides differ in length by an integer m times the central wavelength, so that a large phase difference is achieved from one waveguide to the next. The integer m is quite large, such as 30, and is responsible for the large resolution capability of the PHASAR device, meaning that small changes in wavelength can be resolved in the output plane. This is the reason why waveguides are used rather than free space. However, it is diffraction past the waveguides that generates the images of points at different wavelengths at the output plane. In this respect the device is similar to a DOE, which is a sampled device. Hence, the images repeat at certain intervals. This limits the number of wavelengths that can be imaged without interference from other wavelengths. The method of irregularly sampled zero-crossings (MISZC) is discussed as a way to avoid this problem. The MISZC has its origin in one-image-only holography, discussed in Chapter 15.
Scalar diffraction theory becomes less accurate when the sizes of the diffracting apertures are smaller than the wavelength of the incident wave. Then, the Maxwell equations need to be solved by numerical methods. Some emerging approaches for this purpose are based on the method of finite differences, Fourier modal analysis, and the method of finite elements. The first two approaches are discussed in Chapter 20. First, the paraxial beam propagation method (BPM) discussed in Section 12.4 is reformulated in terms of finite differences using the Crank-Nicolson method. Next, the wide-angle BPM based on the Padé approximation is discussed. The final sections highlight the finite-difference time-domain method and the Fourier modal method.
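As a concrete illustration of the finite-difference reformulation mentioned above, the following is a minimal sketch of one-dimensional paraxial beam propagation with a Crank-Nicolson step. It is not taken from the book; the wavelength, grid, step size, and Gaussian launch field are all assumptions chosen only to make the sketch runnable.

```python
import numpy as np

# Assumed parameters (illustrative only)
wavelength = 1.0e-6                 # free-space wavelength
k = 2 * np.pi / wavelength          # wavenumber
N, dx, dz = 256, 0.5e-6, 1.0e-6     # grid points, transverse step, propagation step
x = (np.arange(N) - N // 2) * dx

u = np.exp(-(x / 10e-6) ** 2).astype(complex)   # assumed Gaussian launch field
u0 = u.copy()

# Paraxial equation du/dz = (i / 2k) d^2u/dx^2.  Crank-Nicolson averages the
# second-difference operator D2 between the current and the next step:
#   (I - a*D2) u_next = (I + a*D2) u,   a = i*dz / (4*k*dx^2)
a = 1j * dz / (4 * k * dx ** 2)
D2 = (np.diag(np.full(N, -2.0)) +
      np.diag(np.ones(N - 1), 1) +
      np.diag(np.ones(N - 1), -1))              # zero-field (Dirichlet) boundaries
A = np.eye(N) - a * D2
B = np.eye(N) + a * D2

for _ in range(200):                            # propagate 200 steps (200 um total)
    u = np.linalg.solve(A, B @ u)

def e_width(field):
    # crude 1/e full width of |field| on the grid
    return dx * np.count_nonzero(np.abs(field) > np.abs(field).max() / np.e)

print(f"1/e width: {e_width(u0)*1e6:.1f} um -> {e_width(u)*1e6:.1f} um (diffractive spreading)")
```

A full BPM would add a refractive-index term and better absorbing or transparent boundaries; the sketch keeps only the free-space diffraction term.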
Many colleagues, secretaries, friends, and students across the globe have been helpful in the preparation of this manuscript. I am especially grateful to them for keeping me motivated under all circumstances for a lifetime. I am also very fortunate to have worked with John Wiley & Sons on this project. They have been amazingly patient with me. Without such patience, I would not have been able to finish the project. Special thanks to George Telecki, the editor, for his patience and support throughout the project.
1
Diffraction, Fourier Optics and Imaging
1.1 INTRODUCTION
When wave fields pass through ‘‘obstacles,’’ their behavior cannot be simply described in terms of rays. For example, when a plane wave passes through an aperture, some of the wave deviates from its original direction of propagation, and the resulting wave field differs in both size and shape from the wave field that initially passed through the aperture [Sommerfeld, 2006]. This type of phenomenon is called diffraction.
Wave propagation involves diffraction. Diffraction occurs with all types of waves, such as electromagnetic waves, acoustic waves, radio waves, ultrasonic waves, ocean swells, and so on. Our main concern will be electromagnetic (EM) waves, even though the results are directly applicable to other types of waves as well.
In the past, diffraction was considered a nuisance in conventional optical design. This is because the resolution of an optical imaging system is determined by diffraction. The developments of analog holography (demonstrated in the 1940s and made practical in the 1960s), synthetic aperture radar (1960s), and computer-generated holograms and kinoforms, more generally known as diffractive optical elements (DOE’s) (late 1960s), marked the beginning of the development of optical elements based on diffraction. More recently, the combination of diffractive and refractive optical elements, such as a refractive lens corrected by diffractive optics, showed how to achieve new design strategies.
Fourier optics encompasses those topics and applications of optics that involve continuous-space as well as discrete-space Fourier transforms. As such, scalar diffraction theory is a part of Fourier optics. Among other significant topics of Fourier optics, we can cite the Fourier transforming and imaging properties of lenses, frequency analysis of optical imaging systems, spatial filtering and optical information processing, analog and computer-generated holography, design and analysis of DOE’s, and novel imaging techniques.
The modern theories of diffraction, imaging, and other related topics especially based on Fourier analysis and synthesis techniques have become essential for
understanding, analyzing, and synthesizing modern imaging, optical communications and networking, and micro/nanotechnology devices and systems. Some typical applications include tomography, magnetic resonance imaging, synthetic aperture radar (SAR), interferometric SAR, confocal microscopy, devices used in optical communications and networking such as directional couplers in fiber and integrated optics, analysis of very short optical pulses, computer-generated holograms, analog holograms, diffractive optical elements, gratings, zone plates, optical and microwave phased arrays, and wireless systems using EM waves.
Micro/nanotechnology is poised to develop in many directions and to result in novel products. In this endeavor, diffraction is a major area of increasing significance. All wave phenomena are governed by diffraction when the wavelengths of interest are of the order of or smaller than the dimensions of the diffracting sources. It is clear that technology will aim at smaller and smaller devices and systems. As their complexity increases, diffraction-based methods will become a major approach, for example, for testing and analyzing them.
In advanced computer technology, it will be very difficult to continue system designs with the conventional approach of a clock and synchronous communications using electronic pathways at extremely high speeds. The necessity of using optical interconnects will increase as complexity grows. This is already happening in the computer industry today with very complex chips. It appears that the day is coming when only optical technologies will be able to keep up with the demands of more and more complex microprocessors. This is simply because photons do not suffer from the limitations of the copper wire.
Some microelectronic laboratory equipment, such as the scanning electron microscope and reactive ion etching equipment, will become more and more crucial for micro/nanodevice manufacturing and testing. An interesting application is to test devices obtained with such technologies by diffractive methods. Together with other equipment for optical and digital information processing, it is possible to digitize the images obtained from a camera and to process them further with image processing techniques. This allows processing of images due to diffraction from a variety of very complex micro- and nanosystems. In turn, the same technologies are the most competitive in terms of implementing optical devices based on diffraction.
In conclusion, Fourier and related transform techniques and diffraction have found significant applications in diverse areas of science and technology, especially related to imaging, communications, and networking. Linear system theory also plays a central role because the systems involved can often be modeled as linear, governed by convolution, which can be analyzed by the Fourier transform.
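Since linear shift-invariant systems governed by convolution come up repeatedly in what follows, here is a minimal numerical sketch of that statement: the output of such a system can be computed, and analyzed, entirely in the Fourier domain. The 1-D input and Gaussian impulse response are illustrative assumptions, not examples from the text.

```python
import numpy as np

n = 256
u = np.zeros(n)
u[60], u[100] = 1.0, 0.5                                  # assumed input: two point sources
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)   # assumed impulse response (PSF)
h /= h.sum()

# Convolution theorem: circular convolution equals a product of Fourier transforms.
g_fft = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(h)))

# Direct circular convolution, g[m] = sum_k u[k] * h[(m - k) mod n], for comparison.
g_direct = np.array([sum(u[k] * h[(m - k) % n] for k in range(n)) for m in range(n)])

assert np.allclose(g_fft, g_direct)
print("the two point sources appear blurred by the PSF; peak value:", g_fft.max())
```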
1.2 EXAMPLES OF EMERGING APPLICATIONS WITH GROWING SIGNIFICANCE
There are numerous applications of increasing significance as the technology matures and dimensions shrink. Below, some specific examples that have recently emerged are discussed in more detail.
1.2.1 Dense Wavelength Division Multiplexing/Demultiplexing (DWDM)
Modern techniques for multispectral communications, networking, and computing have been increasingly optical. Topics such as dense wavelength division multiplexing/demultiplexing (DWDM) are becoming more significant in the ongoing progress of communications and networking, as the demand for more and more channels (wavelengths) keeps increasing.
DWDM provides a new direction for solving capacity and flexibility problems in communications and networking. It offers a very large transmission capacity and novel network architectures. Major components in DWDM systems are the wavelength multiplexers and demultiplexers. Commercially available optical components are based on fiber-optic or microoptic techniques. Research on integrated-optic (de)multiplexers has increasingly been focused on grating-based and phased-array (PHASAR)-based devices (also called arrayed waveguide gratings). Both are imaging devices, that is, they image the field of an input waveguide onto an array of output waveguides in a dispersive way. In grating-based devices, a vertically etched reflection grating provides the focusing and dispersive properties required for demultiplexing. In phased-array-based devices, these properties are provided by an array of waveguides, the lengths of which are chosen so as to obtain the required imaging and dispersive properties. As phased-array-based devices are realized in conventional waveguide technology and do not require the vertical etching step needed in grating-based devices, they appear to be more robust and fabrication tolerant. Such devices are based on diffraction to a large degree, as sketched below.
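The sketch below is a minimal, hedged illustration of the dispersive phasing in a PHASAR/AWG: adjacent arms differ in optical path by m times the centre wavelength, so at the centre wavelength all arms arrive in phase, while a small wavelength offset adds a linear phase ramp across the array that steers the focused spot toward a different output waveguide. The grating order, effective index, centre wavelength, and channel spacing are assumed values, not figures from the text.

```python
import numpy as np

m = 30                      # grating order (a "quite large" integer, as noted in the preface)
n_eff = 1.45                # assumed effective index of the arrayed waveguides
lam_c = 1.55e-6             # assumed centre wavelength (m)
dL = m * lam_c / n_eff      # physical length increment between adjacent arms

# Assumed channels spaced 0.8 nm about the centre wavelength
for lam in (lam_c - 0.8e-9, lam_c, lam_c + 0.8e-9):
    # Residual phase increment per arm; the 2*pi*m part cancels exactly at lam_c
    dphi = 2 * np.pi * n_eff * dL / lam - 2 * np.pi * m
    print(f"lambda = {lam * 1e9:8.2f} nm  ->  phase step per arm = {dphi:+.4f} rad")
```

The residual phase step repeats modulo 2π when m(λc/λ − 1) changes by an integer; that periodicity (the free spectral range) is the sampled-device repetition mentioned in the preface that limits the number of usable wavelengths.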
1.2.2 Optical and Microwave DWDM Systems
Technological interest in optical and wireless microwave phased-array dense wavelength division multiplexing systems is also increasing rapidly. Microwave array antennas providing multiplexing/demultiplexing are becoming popular for wireless communications. For this purpose, the use of optical components offers the advantages of extremely wide bandwidth, miniaturization in size and weight, and immunity from electromagnetic interference and crosstalk.
1.2.3 Diffractive and Subwavelength Optical Elements
Conventional optical devices such as lenses, mirrors, and prisms are based on refraction or reflection. By contrast, diffractive optical elements (DOE’s), for example, in the form of a phase relief, are based on diffraction. They are becoming increasingly important in various applications.
A major factor in the practical implementation of diffractive elements at optical wavelengths was the revolution under way in electronic integrated circuit technology in the 1970s. Progress in optical and electron-beam lithography allowed complex patterns to be generated in resist with high precision. Phase control through surface relief with fine-line features and sharp sidewalls was made possible by dry etching techniques. Similar progress in diamond turning machines and laser
writers also provided new ways of fabricating diffractive optical elements with high precision.
More recently, the commercial introduction of wafer-based nanofabrication techniques has made it possible to create a new class of optical components called subwavelength optical elements (SOEs). With physical structures far smaller than the wavelength of light, the physics of the interaction of these fine-scale surface structures with light yields new arrangements of optical-processing functions. These arrangements have greater density, more robust performance, and greater levels of integration when compared with many existing technologies, and they could fundamentally alter approaches to optical system design.
1.2.4 Nanodiffractive Devices and Rigorous Diffraction Theory
The physics of such devices depends on rigorous application of the boundary conditions of Maxwell’s equations to describe the interaction of light with the structures. For example, at the wavelengths of light used in telecommunications – 980 through 1800 nm – the structures required to achieve those effects have some dimensions on the order of tens to a few hundred nanometers. At the lower end of the scale, single-electron or quantum effects may also be observed. In many applications, subwavelength structures act as a nanoscale diffraction grating whose interaction with incident light can be modeled by rigorous application of diffraction-grating theory and the above-mentioned boundary conditions of Maxwell’s equations.
Although these optical effects have been researched in the recent past, cost-effective manufacturing of the optical elements has not been available. Building subwavelength grating structures in a research environment has generally required high-energy techniques, such as electron-beam (E-beam) lithography. E-beam machines are currently capable of generating a beam with a spot size around 5–10 nm. Therefore, they are capable of exposing patterns with line widths of less than 0.1 μm, or 100 nm.
The emergence of all-optical systems is being enabled in large part by new technologies that free systems from the data rate, bandwidth, latencies, signal loss, cost, and protocol dependencies inherent in optical systems with electrical conversion. In addition, the newer technologies, for example, microelectromechanical systems (MEMS)-based micromirrors allow external control of optical switching outside of the optical path. Hence, the electronics and optical parameters can be adjusted independently for optimal overall results. Studies of diffraction with such devices will also be crucial to develop new systems and technologies.
1.2.5 Modern Imaging Techniques
If the source in an imaging system has a property called spatial coherence, the source wavefield is called coherent and can be described as a spatial distribution of complex-valued field amplitude. For example, holography is usually a coherent imaging technique. When the source does not have spatial coherence, it is called
incoherent and can be described as a spatial distribution of real-valued intensity. Lasers and microwave sources are typical sources for coherent imaging. Then, the Fourier transform and diffraction are central to the understanding of imaging. Sunlight represents an incoherent source. Incoherent imaging can also be analyzed by Fourier techniques.
A number of computerized modern imaging techniques rely heavily on the Fourier transform and related computer algorithms for image reconstruction. For example, synthetic aperture radar, image reconstruction from projections including computerized tomography, magnetic resonance imaging, confocal microscopy, and confocal scanning microscopy are among such techniques.
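As a small, hedged illustration of why the Fourier transform is so central to reconstruction from projections, the following sketch numerically checks the projection-slice theorem: the 1-D Fourier transform of a projection of an image equals a central slice of the image's 2-D Fourier transform. The random test image is purely an assumption for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((128, 128))                 # assumed stand-in "object"

projection = img.sum(axis=0)                 # project along y (sum over rows)
slice_2d = np.fft.fft2(img)[0, :]            # ky = 0 slice of the 2-D spectrum

# The 1-D FT of the projection equals the central slice of the 2-D FT
assert np.allclose(np.fft.fft(projection), slice_2d)
print("projection-slice theorem verified on a", img.shape, "test image")
```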
2
Linear Systems and Transforms
2.1 INTRODUCTION
Diffraction as well as imaging can often be modeled as linear systems. First of all, a system is an input–output mapping. Thus, given an input, the system generates an output. For example, in a diffraction or imaging problem, the input and output are typically a wave at an input plane and the corresponding diffracted wave at a distance from the input plane.
Optical systems are quite analogous to communication systems. Both types of systems have a primary purpose of collecting and processing information. Speech signals processed by communication systems are 1-D whereas images are 2-D. One-dimensional signals are typically temporal whereas 2-D signals are typically spatial. For example, an optical system utilizing a laser beam has spatial coherence. Then, the signals can be characterized as 2-D or 3-D complex-valued field amplitudes. Spatial coherence is necessary in order to observe diffraction. Illumination such as ordinary daylight does not have spatial coherence. Then, the signals can be characterized as 2-D spatial, real-valued intensities.
Linear time-invariant and space-invariant communication and optical systems are usually analyzed by frequency analysis using the Fourier transform. Nonlinear optical elements such as the photographic film and nonlinear electronic components such as diodes have similar input–output characteristics.
In both types of systems, Fourier techniques can be used for system synthesis as well. An example is two-dimensional filtering. Theoretically, optical matched filters and optical image processing techniques are analogous to the matched filters and image processing techniques used in communications and signal processing.
In this chapter, linear system theory and Fourier transform theory as related especially to diffraction, optical imaging, and related areas are discussed. The chapter consists of eight sections. The properties of linear systems with emphasis on convolution and shift invariance are highlighted in Section 2.2. The 1-D Fourier transform and the continuous-space Fourier transform (simply called the Fourier transform (FT) in the rest of the book) are introduced in Section 2.3. The conditions
for the existence of the Fourier transform are given in Section 2.4. The properties of the Fourier transform are summarized in Section 2.5.
The Fourier transform discussed so far has a complex exponential kernel. It is actually possible to define the Fourier transform as a real transform with cosine and sine kernel functions. The resulting real Fourier transform is sometimes more useful. The 1-D real Fourier transform is discussed in Section 2.6. Amplitude and phase spectra of the 1-D Fourier transform are defined in Section 2.7.
Especially in optics and wave propagation applications, the 2-D signals sometimes have circular symmetry. In that case, the Fourier transform becomes the Hankel transform in cylindrical coordinates. The Hankel transform is discussed in Section 2.8.
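For reference, and assuming the same Fourier kernel convention with frequency variables that is used for the FT elsewhere in the book, the circularly symmetric case reduces to the zero-order Hankel transform; this standard form is stated here only as an aid, not quoted from Section 2.8:

G(\rho) = 2\pi \int_0^{\infty} g(r)\, J_0(2\pi r \rho)\, r\, dr

where r and \rho are the radial coordinates in the space and frequency domains, and J_0 is the zero-order Bessel function of the first kind.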
2.2 LINEAR SYSTEMS AND SHIFT INVARIANCE
Linearity allows the decomposition of a complex signal into elementary signals often called basis signals. In Fourier analysis, basis signals or functions are sinusoids.
In a linear system, a given input maps into a unique output. However, more than one input may map into the same output. Thus, the mapping may be one-to-one, or many-to-one.
A 2-D system is shown in Figure 2.1, where u(x, y) is the input signal, and g(x, y) is the output signal. Mathematically, the system can be written as
g(x, y) = O[u(x, y)]    (2.2-1)
in the continuous-space case. O[·] is an operator, mapping the input to the output. In the discrete-space case, the point (x, y) is sampled as [Δx m, Δy n], where Δx and Δy are the sampling intervals along the two directions. [Δx m, Δy n] can be simply represented as [m, n], and the system can be written as
g[m, n] = O[u[m, n]]    (2.2-2)
Below, the continuous-space case is considered. The system is called linear if any linear combination of two inputs u1(x, y) and u2(x, y) generates the same combination of their respective outputs g1(x, y) and g2(x, y). This is called the superposition principle and is written as
O[a1 u1(x, y) + a2 u2(x, y)] = a1 O[u1(x, y)] + a2 O[u2(x, y)]    (2.2-3)
[Figure: u(x, y) → System → g(x, y)]
Figure 2.1. A system diagram.
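As a small, hedged check of the superposition principle in Eq. (2.2-3), the sketch below models one particular linear system O[·] as 2-D circular convolution with a fixed kernel (the kernel and inputs are arbitrary assumptions) and verifies that scaled, summed inputs give the same scaled, summed outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
kernel = rng.random((N, N))                       # assumed fixed system kernel
K = np.fft.fft2(kernel)

def system(u):
    """The operator O[.]: circular convolution of u with the fixed kernel."""
    return np.real(np.fft.ifft2(np.fft.fft2(u) * K))

u1, u2 = rng.random((N, N)), rng.random((N, N))   # two arbitrary inputs
a1, a2 = 2.0, -0.7                                # arbitrary combination weights

lhs = system(a1 * u1 + a2 * u2)                   # O[a1*u1 + a2*u2]
rhs = a1 * system(u1) + a2 * system(u2)           # a1*O[u1] + a2*O[u2]
assert np.allclose(lhs, rhs)
print("superposition holds; max deviation:", np.max(np.abs(lhs - rhs)))
```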