Handbook of Algorithms for Physical Design Automation, part 74


[Figure 35.17 plot: normalized contrast (0 to 1) versus pitch (nm), with acceptable and unacceptable regions marked, for lines with and without SRAFs.]

FIGURE 35.17 Pitch curve for lines and spaces under a particular OAI approach called QUASAR illumination. Without SRAFs, certain pitches do not have enough contrast and will not print; SRAFs are added to restore the contrast. (Adapted from Schellenberg, F.M., Capodieci, L., and Socha, B., Proceedings of the 38th Design Automation Conference, ACM, New York, 2001, pp. 89–92. With permission.)

35.2.3.5 Polarization

At this time, there is a fourth independent variable of the EM field that has not yet been as fully exploited as the other three: polarization [73]. For advanced steppers, which fill the gap between the last lens element and the wafer with water for the higher-angle coupling it allows (water immersion steppers [26]), anticipation of and compensation for the polarization properties of the light is becoming crucial [73–76]. At the time of this writing, however, although some very creative techniques exploiting polarization have been proposed [77], no definitive polarization-based RET has been demonstrated as practical. Instead, polarization is considered within each of the other RETs: source illumination, mask diffraction, and lens pupil transmission. This may change in the future as the polarization issues with advanced immersion lithography become better understood.

FIGURE 35.18 (a) Layout with alternating phase-shifted apertures (black is opaque, left stripe is 0°, right stripe is 180°), and (b) pupil map of an illumination pattern optimized for this layout; (c) layout for a memory cell (dark is opaque, clear is normal 0° mask transmission), and (d) pupil map of an illumination pattern optimized for this layout. (Adapted from Granik, Y., J. Microlith. Microfab. Microsyst., 3, 509, 2004. With permission.)

Alpert/Handbook of Algorithms for Physical Design Automation AU7242_C035 Finals Page 713 24-9-2008 #20

35.2.4 RET FLOW AND COMPUTATIONAL LITHOGRAPHY

No matter what patterning technique is used, incorporating the simulation of the corresponding effects into an EDA environment requires some care. Complete brute-force image simulation of a 32 mm × 22 mm IC with resolution at the nanometer scale would require a gigantic amount of computation and days or even weeks to complete. Some effort to determine the minimum necessary set for simulation is therefore called for.

Therefore, the initial simulation step for an EDA flow involves fragmentation of the layout. In a layout format such as GDS-II or OASIS, a polygon is defined by a sequence of vertices. These vertices are only placed where the boundary of a polygon changes direction (e.g., at the corners of rectangles). With fragmentation, additional vertices are inserted [41,78]. The rules governing fragmentation can be complex, but the basic intention is to break the longer edge segments into shorter, more manageable edge segments, with more segments (higher fragmentation) in regions of high variability and fewer segments (lower fragmentation) in regions of low variability. This is illustrated in Figure 35.19.
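As a minimal sketch, fragmenting one long edge can be expressed as inserting evenly spaced vertices. Real OPC fragmenters apply much richer, context-dependent rules [41,78], so the function name and the uniform-spacing policy here are illustrative assumptions only:

```python
import math

def fragment_edge(p0, p1, max_len):
    """Split the edge p0->p1 into segments no longer than max_len,
    returning the list of vertices including both endpoints.
    (Toy uniform policy; production rules vary segment length with
    local layout variability.)"""
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)
    n = max(1, math.ceil(length / max_len))   # number of segments
    return [(x0 + (x1 - x0) * k / n, y0 + (y1 - y0) * k / n)
            for k in range(n + 1)]
```

For example, a 100 nm edge with a 30 nm limit is split into four segments (five vertices); a region of high variability would simply call this with a smaller `max_len`.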

Once fragmented, a simulation point is determined for each edge segment. This is the location at which the image simulation results will be determined, and from which the corresponding position of the edge as expected on the wafer is determined. Each simulation point has an associated cutline, along which the various values for the image intensity and its derivatives (e.g., image slope) will be calculated. This is illustrated in Figure 35.20 [41,79,80].

At this point, the simulator is invoked to systematically simulate the image properties only along the cutline for each edge segment. Using some assumptions or a suitable algorithm, the position of the edge of the resist is determined from the computed image. Once this edge position is determined, a difference between the edge position in the desired layout and the simulated edge position is computed. This difference is called the edge placement error (EPE) [41].

FIGURE 35.19 Original portion of a layout with original fragmentation (left) and layout after refragmentation for OPC (right); fragmentation points are marked. (Adapted from Word, J. and Cobb, N., Proc. SPIE, 5567, 1305–1314, 2004. With permission.)


FIGURE 35.20 Selection of the simulation cutlines to use with the fragmentation from Figure 35.19; fragmentation points, simulation cutlines, and locations for image computation are marked. (Reproduced, courtesy Mentor Graphics.)

[Flow diagram: layer selection, fragmentation, simulation, EPE generation, correction.]

FIGURE 35.21 Sequence of operations within a typical OPC iterative loop.

For each and every edge segment there is, therefore, an EPE. For an EPE of zero, the image of the edge falls exactly on the desired location. When the EPE is nonzero, a suggested motion for the edge segment that should reduce the EPE is determined from the sign and magnitude of the EPE. The edge segment in the layout is then moved according to this prediction. Once this happens, a new simulation and a new EPE are generated for the revised layout. The iterative process proceeds until the EPE has been reduced to within a predetermined tolerance. This is illustrated in Figure 35.21.
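The iterative loop just described can be sketched for a single edge segment. The linear "print bias" process model and the damping gain below are toy assumptions standing in for the real image simulator, not the book's algorithm:

```python
def simulate_printed_edge(mask_edge):
    # Toy stand-in for image simulation along a cutline: assume the
    # printed edge lands at 0.6x the mask edge position plus a fixed bias.
    return 0.6 * mask_edge + 8.0

def opc_correct(target_edge, tol=0.1, gain=0.8, max_iter=50):
    """Iterate simulate -> EPE -> move until |EPE| < tol (Figure 35.21)."""
    mask_edge = target_edge          # start from the drawn layout
    for _ in range(max_iter):
        printed = simulate_printed_edge(mask_edge)
        epe = printed - target_edge  # edge placement error [41]
        if abs(epe) < tol:
            break
        mask_edge -= gain * epe      # move the segment against the error
    return mask_edge, epe
```

With this model the loop converges in a handful of iterations; in practice convergence depends on fragmentation, cutline choice, and the gain, exactly the trade-offs discussed below.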

Although simplistic in outline, determining fragmentation settings and suitable simulation sites that remain optimal for the competing metrics of high accuracy, rapid convergence, and manageable data volume remains challenging. A real-world example of a layout with fragmentation selections is shown in Figure 35.22. In general, high fragmentation density leads to better accuracy, but requires more simulation and may create higher data volume. Poorly chosen simulation sites can converge rapidly, but may not accurately represent the average behavior along the entire edge fragment (and in some cases, may even lead to a motion in the wrong direction). Cutlines chosen in certain orientations (e.g., normal to the layout rather than normal to the image gradient) may again produce less representative EPEs, and the iteration may require longer to converge.

FIGURE 35.22 Example of a real-world layout, showing the target layout, simulation cutlines, and image contours. (Reproduced, courtesy Mentor Graphics.)

35.2.5 MASK MANUFACTURING FLOW

Although originally developed for computing the relationship between the layout and the wafer image, a similar procedure can be carried out to compensate for mask manufacturing effects [81]. In this case, the model must be derived for the various processes used in mask fabrication. These typically involve exposure using an electron beam (E-beam), and because electrons are charged and repel one another, a significant amount of computation may be required to compensate for electron proximity effects [82]. Optical mask writers, which write masks using UV lasers and use lithography materials similar to those used for wafers [82], can also be corrected for optical proximity and processing effects.

35.2.6 CONTOUR-BASED EPE

For sparse layouts, with feature dimensions larger than the optical wavelength, selection of fragmentation settings and simulation sites can be fairly straightforward, as illustrated in Figure 35.23a. As feature dimensions become significantly smaller than the optical wavelength, however, more simulation sites can be needed, as illustrated in Figure 35.23b [83]. At some point, the advantage of a sparse simulation set is severely reduced, and the use of a uniform grid of simulation points becomes attractive again.

In this case, the simulation of the image intensity is carried out using a regular grid, as illustrated in Figure 35.24. Contours from the simulation result, using again a suitable model to predict the edge location on the wafer, are used to represent the image intensity. The EPE is then synthesized from the desired position of an edge segment and the corresponding location on the contour. Subsequent motion of the edge segments proceeds as previously described.
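Synthesizing an EPE from a dense grid can be sketched in one dimension: locate the contour along a grid row by linear interpolation at a constant intensity threshold (a simple stand-in for the resist model), then difference against the target edge. The grid values and the 0.3 threshold below are made-up numbers for illustration:

```python
def contour_crossing(intensity_row, threshold, pixel_nm):
    """Return the position (nm) where the sampled intensity first crosses
    the threshold, using linear interpolation between grid points."""
    for i in range(len(intensity_row) - 1):
        a, b = intensity_row[i], intensity_row[i + 1]
        if (a >= threshold) != (b >= threshold):   # sign change bracket
            frac = (threshold - a) / (b - a)       # linear interpolation
            return (i + frac) * pixel_nm
    return None                                    # no contour in this row

row = [0.9, 0.8, 0.6, 0.35, 0.2, 0.1]   # sampled aerial-image intensities
edge = contour_crossing(row, 0.3, pixel_nm=10.0)
target = 35.0
epe = edge - target                      # contour-based EPE for this row
```

A full implementation would trace the 2D contour (e.g., marching squares) and associate each edge segment with its nearest contour point, but the per-site arithmetic is the same.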

FIGURE 35.23 (a) Layout with sparse simulation plan and (b) scaled layout using sparse simulation rules when the target dimension is 65 nm and the exposure wavelength is 193 nm. At some point, sparse simulations are no longer sparse. (Adapted from Cobb, N. and Dudau, D., Proc. SPIE, 6154, 615401, 2006. With permission.)

Representation of the contour data can present additional problems not encountered in the sparse approach. Accurate representations of contours contain far more vertices than their counterparts in the original GDS-II layout. And although storing the contours after they have been used to determine an EPE may be extremely useful, because identical regions may be encountered later and the precomputed solution accessed and reused, the additional data volume for storage of contours with their high vertex counts in the database can present problems. In spite of these logistical problems, however, there are some clear advantages for accuracy. With the dense approach, certain features such as the bridge shown in Figure 35.24 can be simulated and flagged; catching such a structure with a sparse number of simulation sites becomes far more problematic.

No matter what the simulation strategy, image and process simulators are invoked in these OPC flows. We now turn our attention to the simulator itself, and to some of the practical approximations that are used to make a simulator functional in an EDA environment.

FIGURE 35.24 Fragmentation/simulation plan for a portion of a layout using sparse rules (left), and a dense grid simulation (right). Using the contours from the dense grid, features such as the bridge between the two features can be detected. (Reproduced, courtesy Mentor Graphics.)

35.3 SIMULATION TECHNIQUES

35.3.1 INTRODUCTION

In Section 35.2, the fundamental framework for modeling lithography and various RETs was provided. In this section, computational techniques that can be used within that framework for detailed mask transmission, image propagation, and wafer process simulation are presented, and the various trade-offs in the approximations they use are discussed.

As described in Section 35.2.2.2, the imaging system can be approximated as a simple Fourier transform and its inverse, with the pupil aperture (e.g., a circle) providing a low-pass cutoff for the spatial frequencies of the image.

Although abstractly true, certainly much more than a pair of FFTs is needed to provide highly accurate simulation results. The three areas that require modeling attention are the imaging system itself, the interaction with the photomask, and the interaction with the wafer.
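The "pair of FFTs" abstraction itself is easy to sketch: a coherent aerial image is the inverse FFT of the mask spectrum clipped by a circular pupil. The grid size and the pupil radius (in frequency bins) below are arbitrary illustrative choices:

```python
import numpy as np

def coherent_image(mask, pupil_radius):
    """Coherent imaging as FFT -> circular low-pass pupil -> inverse FFT."""
    n = mask.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(mask))   # DC at center
    fy, fx = np.indices((n, n)) - n // 2
    pupil = (fx**2 + fy**2) <= pupil_radius**2       # circular cutoff
    field = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))
    return np.abs(field)**2                          # intensity = |E|^2

mask = np.zeros((64, 64))
mask[24:40, 24:40] = 1.0                             # a single square feature
image = coherent_image(mask, pupil_radius=8)
```

The result shows the expected corner rounding and ringing of a band-limited image; everything that follows in this section refines this caricature.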

35.3.2 IMAGING SYSTEM MODELING

A lithographic imaging system has a large number of highly polished, precision optical elements, mounted in a precision mechanical housing. The lens column can weigh over 2 t and be over 2 m tall. An example of a contemporary lens design [84] is shown in Figure 35.25. These lenses are usually designed with complex ray-tracing programs that accurately represent the path that light takes through the reflective and refractive elements [85].

Because the mathematical theory of lens design is linear and well understood, the complex interactions of the lens elements can be represented as the simple, ideal Fourier lens described in Section 35.2.2.2, with all the physical properties of the lens (refraction, aberrations, etc.) lumped together into an idealized pupil function represented by Zernike polynomials. This function can be measured using precision interferometry techniques, but this is usually not easy to do for an individual stepper in the field [86].

The interaction of this pupil with the illuminator presents the essential challenge of imaging simulation. If the light falling on the lens were a single, coherent, uniform, normal-incidence (on-axis) plane wave, the corresponding spectrum in the pupil would be a single point at the center of the pupil. This represents coherent illumination, as shown in Figure 35.26a. In practice, however, light falls on the photomask at a range of angles, from a number of potential source points. The corresponding interactions in the lens pupil are shifted and overlapped. The degree to which the pupil is filled is then related to the spatial coherence of the light source. For very coherent light, the pupil filling ratio is small (Figure 35.26b); for larger angles and lower coherence, the pupil filling is higher (Figure 35.26c). This ratio, also called the coherence factor, is typically designated by lithographers using the symbol σ. This should not be confused, however, with the electrical conductivity from Equation 35.1b above.

FIGURE 35.25 Example of a contemporary scanner lens design, with lens groups LG1 through LG3. (From Kreuzer, J., US Patent 6,836,380.)

Imaging with complicated sources and pupils can be complicated to model. For coherent light, the image fields add directly, both at every moment in time and in a time average, and so we can sum the various contributions individually. For incoherent light, the local fields add instantaneously, but in the time average the correlation is lost, and so the various image intensities must be computed and added.
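The distinction is easy to see numerically: for coherent light the complex fields add before squaring, while for incoherent light the intensities add. Two equal unit-amplitude contributions, 180° out of phase, cancel coherently but not incoherently:

```python
import cmath

e1 = cmath.exp(1j * 0.0)          # field contribution 1
e2 = cmath.exp(1j * cmath.pi)     # field contribution 2, 180 deg shifted

i_coherent = abs(e1 + e2) ** 2    # sum fields, then square: ~0 (cancellation)
i_incoherent = abs(e1) ** 2 + abs(e2) ** 2   # sum intensities: 2
```

Partially coherent light, treated next, falls between these two limits and requires the mutual intensity formalism.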

However, most illumination systems are partially coherent. This means that the relation between the image I(x, y) and two different points in an object, (x_o, y_o) and (x_o′, y_o′) (e.g., two points in a mask), does not fit either of these simple cases. Likewise, the illumination of an object by a distribution of source points follows similarly.

FIGURE 35.26 Pupil maps for illumination that is (a) coherent, (b) partially coherent, and (c) incoherent.

The image formulation for this situation can be computed using the mutual intensity function J(x_o, y_o; x_o′, y_o′), according to Refs. [29,87,88]:

$$
I(x, y) = \iiiint_{-\infty}^{+\infty} J(x_o - x_o',\, y_o - y_o') \cdot M(x_o, y_o) \cdot M^*(x_o', y_o') \times H(x - x_o, y - y_o) \cdot H^*(x - x_o', y - y_o')\; \mathrm{d}x_o\, \mathrm{d}y_o\, \mathrm{d}x_o'\, \mathrm{d}y_o' \quad (35.21)
$$

where
M(x_o, y_o) are the points in the mask
H(x, y; x_o, y_o) represents the optical system transfer function from point (x_o, y_o) to (x, y).

When the mask and the transfer function are replaced by Fourier representations,

$$
M(x, y) = \iint_{-\infty}^{+\infty} \hat{M}(p, q)\, e^{-i 2\pi (p x + q y)}\, \mathrm{d}p\, \mathrm{d}q \quad (35.22a)
$$

$$
J(x, y) = \iint_{-\infty}^{+\infty} \hat{J}(p, q)\, e^{-i 2\pi (p x + q y)}\, \mathrm{d}p\, \mathrm{d}q \quad (35.22b)
$$

the image intensity can be rewritten as

$$
I(x, y) = \idotsint_{-\infty}^{+\infty} \hat{J}(p, q)\, \hat{H}(p + p', q + q')\, \hat{H}^*(p + p'', q + q'')\, \hat{M}(p', q')\, \hat{M}^*(p'', q'')\, e^{-i 2\pi [(p' - p'')x + (q' - q'')y]}\, \mathrm{d}p\, \mathrm{d}q\, \mathrm{d}p'\, \mathrm{d}q'\, \mathrm{d}p''\, \mathrm{d}q'' \quad (35.23)
$$

Changing the order of integration, the integral can be reexpressed as

$$
I(x, y) = \iiiint_{-\infty}^{+\infty} \mathrm{TCC}(p', q', p'', q'')\, \hat{M}(p', q')\, \hat{M}^*(p'', q'')\, e^{-i 2\pi [(p' - p'')x + (q' - q'')y]}\, \mathrm{d}p'\, \mathrm{d}q'\, \mathrm{d}p''\, \mathrm{d}q'' \quad (35.24)
$$

where

$$
\mathrm{TCC}(p', q', p'', q'') = \iint_{-\infty}^{+\infty} \hat{J}(p, q)\, \hat{H}(p + p', q + q')\, \hat{H}^*(p + p'', q + q'')\, \mathrm{d}p\, \mathrm{d}q \quad (35.25)
$$

is called the transmission cross coefficient (TCC). An illustration of this overlap integral in the pupil plane is shown in Figure 35.27.

This TCC overlap integral depends only on the illumination source and the transfer of light through the lens, which are independent of the mask layout. Ĵ(p, q) in Figure 35.27 is a representation of the projection of a circular source illumination. This could just as well be an annular, quadrupole, or other off-axis structure, as illustrated in Figure 35.16, or a more complex pattern, as shown in Figure 35.18. Only portions in frequency space (the pupil plane) where source light overlaps with the lens transmission (the shaded area) will contribute to the final image.

The key element here is that the interaction of the source and lens can be precomputed as TCCs and stored for later use, once the details of the mask layout M(x, y) are known. This formulation for imaging was originally presented by Hopkins [88] and is often called the Hopkins approach.
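The precomputation idea can be made concrete with a small discretized sketch of Equations 35.24 and 35.25, one-dimensional for brevity. The frequency grid and the top-hat source and pupil shapes are toy assumptions, not from the text:

```python
import numpy as np

F = np.arange(-8, 9)                      # discrete frequency grid
J = (np.abs(F) <= 2).astype(float)        # partially coherent source (toy)
H = (np.abs(F) <= 4).astype(float)        # ideal low-pass pupil (toy)

def sample(A, f):
    """Value of A at frequency f, zero outside the stored grid."""
    i = f + 8
    return A[i] if 0 <= i < len(A) else 0.0

# Equation 35.25: TCC depends only on source and pupil, not on the mask,
# so it can be computed once and stored.
TCC = np.array([[sum(sample(J, p) * sample(H, p + p1) * sample(H, p + p2)
                     for p in F) for p2 in F] for p1 in F])

def image(mask_spectrum, x):
    """Equation 35.24: image intensity at x from the precomputed TCC."""
    total = 0.0 + 0.0j
    for a, p1 in enumerate(F):
        for b, p2 in enumerate(F):
            total += (TCC[a, b] * mask_spectrum[a]
                      * np.conj(mask_spectrum[b])
                      * np.exp(-2j * np.pi * (p1 - p2) * x))
    return total.real
```

Once `TCC` is stored, any number of mask spectra can be imaged without revisiting the source or pupil, which is exactly the economy the Hopkins approach buys.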

FIGURE 35.27 Diagram of the integral of overlap for the computation of images using TCCs.

One example of the utility of this approach is the simulation of defocus. Normally, the Fourier optical equations represent the image at the plane of focus. However, for propagation beyond focus, the expansion of a spherical wave from a point follows a quadratic function that is equivalent to introducing a fourth-order Zernike aberration Z4 in the pupil plane [89] (see Table 35.1). Computation of a defocused image therefore becomes equivalent to the computation of an in-focus image with a suitable degree of fourth-order aberration. By precomputing the TCCs for a system with fourth-order aberration, defocus images for a mask pattern can therefore be calculated merely by using different sets of precalculated TCCs.
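The defocus-as-Z4 idea can be sketched directly on a coherent pupil: the same FFT imaging, but with the pupil gaining a quadratic Zernike phase. The grid, pupil radius, and defocus coefficient below are illustrative assumptions:

```python
import numpy as np

def pupil_with_defocus(n, radius, z4_waves):
    """Circular pupil with a Zernike Z4 (defocus) phase term,
    Z4 = sqrt(3) * (2*rho^2 - 1), weighted by z4_waves waves."""
    fy, fx = np.indices((n, n)) - n // 2
    rho2 = (fx**2 + fy**2) / radius**2            # normalized radius^2
    inside = rho2 <= 1.0
    phase = 2 * np.pi * z4_waves * np.sqrt(3) * (2 * rho2 - 1)
    return inside * np.exp(1j * phase)

def aerial_image(mask, pupil):
    spectrum = np.fft.fftshift(np.fft.fft2(mask))
    field = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))
    return np.abs(field)**2

mask = np.zeros((64, 64))
mask[30:34, 30:34] = 1.0                          # small contact-like feature
i_focus = aerial_image(mask, pupil_with_defocus(64, 8, 0.0))
i_defocus = aerial_image(mask, pupil_with_defocus(64, 8, 0.5))
```

Swapping the pupil swaps the focus condition; in the partially coherent case, the corresponding precomputed TCC sets are swapped instead, with no other change to the flow.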

35.3.3 MASK TRANSMISSION FUNCTION

In our formulations of imaging so far, the mask transmission is a simple function, M(x, y). Typically, this is a binary mask, having a value of 0 or 1 depending on the pixel coordinates. In the Kirchhoff approximation, mentioned in Section 35.2.2.2, the mask transmission is exactly this function. However, in a real photomask, with layers of chrome coated onto a substrate of quartz, the wavefronts reflect and scatter off the three-dimensional structures, and the wavefront can be a complicated function of position, amplitude, and phase.

This wavefront can still be represented as a 2D function, in which each pixel has its own transmission value and a phase factor, depending on the phase shift of the transmitted light. To derive this representation, however, a simple scalar representation of the field at the mask will not suffice. Instead, a full vector EM field computation may be required.

35.3.3.1 FDTD

A widely used first-principles method for simulating the electromagnetic field over time is the finite-difference time-domain (FDTD) method [90–93]. This is illustrated in Figure 35.28. Here, a grid in time and space is established, and the initial conditions for sources (charge and current) and the field at the boundaries are determined. Then, using the Maxwell equations in a finite-difference form, the time step is incremented and the E field is recomputed, based on the previous E field and the curl of H at the previous time step. Once this is generated, the time step is incremented again, and the H field is computed, based on the previous H field and the curl of the E field.

As an example, following the notation of Erdmann [93], the Maxwell equations for a transverse electric (TE) field mode can be represented for grid point (i, j) at time step n in finite-difference form as

FIGURE 35.28 Illustration of the geometry used in the computation of EM fields according to the FDTD method: E and H field components are staggered on a spatial grid with spacings Δx, Δy, Δz. (Adapted from Taflove, A. and Hagness, S.C., Computational Electrodynamics: The Finite-Difference Time-Domain Method, Artech House, Boston, 2005. With permission. After Yee, K.S., IEEE Trans. Antennas Propagation, AP-14, 302, 1966, Copyright IEEE. With permission.)

$$
H_{x,i,j}^{n+1/2} = H_{x,i,j}^{n-1/2} + \frac{\Delta t}{\mu_{i,j}\,\Delta x}\left( E_{y,i,j+1}^{n} - E_{y,i,j}^{n} \right) \quad (35.26a)
$$

$$
H_{z,i,j}^{n+1/2} = H_{z,i,j}^{n-1/2} + \frac{\Delta t}{\mu_{i,j}\,\Delta x}\left( E_{y,i,j}^{n} - E_{y,i+1,j}^{n} \right) \quad (35.26b)
$$

$$
E_{y,i,j}^{n+1} = C_{a,i,j} \cdot E_{y,i,j}^{n} + C_{b,i,j}\left( H_{x,i,j}^{n+1/2} - H_{x,i,j-1}^{n+1/2} + H_{z,i-1,j}^{n+1/2} - H_{z,i,j}^{n+1/2} \right) \quad (35.26c)
$$

where the coefficients C_a and C_b depend on the material properties and charge densities:

$$
C_{a,i,j} = \left( 1 - \frac{\sigma_{i,j}\,\Delta t}{2\varepsilon_{i,j}} \right) \Bigg/ \left( 1 + \frac{\sigma_{i,j}\,\Delta t}{2\varepsilon_{i,j}} \right) \quad (35.27a)
$$

$$
C_{b,i,j} = \frac{\Delta t}{2\varepsilon_{i,j}} \Bigg/ \left( 1 + \frac{\sigma_{i,j}\,\Delta t}{2\varepsilon_{i,j}} \right) \quad (35.27b)
$$
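The staggered update scheme of Equations 35.26a through 35.26c can be sketched on a small 2D grid in normalized vacuum units (σ = 0, so C_a = 1). The constants standing in for Δt/(μΔx) and C_b are arbitrary stable choices, and a production FDTD code would add proper absorbing boundaries and the true material maps for ε, μ, and σ:

```python
import numpy as np

N, STEPS = 40, 30
c1 = 0.5                       # stands in for dt/(mu*dx), normalized
ca = np.ones((N, N))           # C_a = 1 when sigma = 0 (Eq. 35.27a)
cb = np.full((N, N), 0.5)      # C_b in normalized units (Eq. 35.27b)

Ey = np.zeros((N, N))
Hx = np.zeros((N, N))
Hz = np.zeros((N, N))
Ey[N // 2, N // 2] = 1.0       # initial point excitation

for n in range(STEPS):
    # Eqs. 35.26a/b: advance H a half step from differences of E
    Hx[:, :-1] += c1 * (Ey[:, 1:] - Ey[:, :-1])
    Hz[:-1, :] += c1 * (Ey[:-1, :] - Ey[1:, :])
    # Eq. 35.26c: advance E a full step from differences of H
    Ey[1:-1, 1:-1] = (ca[1:-1, 1:-1] * Ey[1:-1, 1:-1]
                      + cb[1:-1, 1:-1] * (Hx[1:-1, 1:-1] - Hx[1:-1, :-2]
                                          + Hz[:-2, 1:-1] - Hz[1:-1, 1:-1]))
```

After a few dozen steps the field has spread outward from the source without blowing up, illustrating the leapfrog E/H alternation the text describes next.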

From the initial conditions, the suitable fields are computed at half time steps throughout the spatial grid, and the revised fields are then used for the computation of the complementary fields for the next half time step. Each step, of course, could be designated as a unit time step for the algorithm, but then the entire algorithm (E generating H; H generating E) would require two time steps to come full circle. The use of half time steps is therefore convenient so that the entire algorithm completes a single cycle in a single unit time step. This staggered computation is illustrated in Figure 35.29. The calculation proceeds through time and space until the maximum time allocated is reached. For a steady-state source of excitation (e.g., incident electromagnetic waves), the time interval should be chosen such that the final few cycles reach a steady state and can be time averaged to give average local fields and intensity values.

For this method to work, the optical properties of each point in the computation grid must be specified. For metals (such as the chrome photomask layer), this can be difficult, because the refractive index is less than 1 and a denser grid may be required. However, because the optical
