Mask 3D (M3D) effects distort diffraction amplitudes from EUV masks. Electromagnetic (EM) simulations are used to rigorously calculate the distorted diffraction amplitudes, but they are too time consuming for OPC applications. The distorted diffraction amplitude can be characterized by M3D parameters. We develop a convolutional neural network (CNN) model that predicts M3D parameters very quickly from input mask patterns. In this work, we train the CNN using test mask data with various characteristics of metal layers. The accuracy of the CNN is good for the test mask data. However, when we use new mask data that mimic device patterns, the accuracy of the CNN degrades. Starting from the CNN pre-trained on the test mask data, we improve its accuracy by additional training on a larger dataset that includes both the test mask data and the new mask data. The accuracy of the CNN is slightly improved by this fine tuning.
Background: Mask three-dimensional (3D) effects distort diffraction amplitudes from extreme ultraviolet masks. In a previous work, we developed a convolutional neural network (CNN) that predicted distorted diffraction amplitudes very quickly from input mask patterns. Aim: In this work, we reduce both the time for preparing the training data and the time for image intensity integration. Approach: We reduce the time for preparing the training data by applying the weakly guiding approximation to the 3D waveguide model. The model solves Helmholtz-type coupled vector wave equations of two polarizations. The approximation decomposes the coupled vector wave equations into two scalar wave equations, reducing the computation time to solve them. Regarding the image intensity integration, Abbe's theory has been used in electromagnetic (EM) simulations. The transmission cross coefficient (TCC) formula is known to be faster than Abbe's theory, but it cannot be applied to the source position dependent diffraction amplitudes in EM simulations. We derive a source position dependent TCC (STCC) formula, starting from Abbe's theory, to reduce the image intensity integration time. Results: The weakly guiding approximation reduces the EM simulation time by a factor of 5, from 50 to 10 min. The STCC formula reduces the image intensity integration time by a factor of 140, from 10 to 0.07 s. Conclusions: The total time of the image intensity prediction for a 512 nm × 512 nm area on a wafer is ∼0.1 s. A remaining issue is the accuracy of the CNN.
Recently, Ising machines, which solve Quadratic Unconstrained Binary Optimization (QUBO) problems in a short computation time, have attracted attention. In this paper, we propose a mask optimization method using an Ising machine to obtain a mask with high fidelity to target patterns and high tolerance to process variation. In the proposed method, a mask pattern is improved by repeatedly solving QUBO problems on the Ising machine. In experiments, we applied the proposed method to various patterns and evaluated it against existing methods in terms of fidelity to target patterns, tolerance to process variation, and execution time.
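The QUBO form that an Ising machine accepts can be shown in miniature. The sketch below is a toy, not the mask formulation of this paper: the matrix `Q` is hypothetical, and the exhaustive search stands in for the Ising machine, which performs the same minimization heuristically at scales where enumeration is impossible.

```python
import itertools

import numpy as np

def solve_qubo_brute_force(Q):
    """Minimize x^T Q x over x in {0, 1}^n by exhaustive search."""
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = float(x @ Q @ x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Hypothetical 4-variable QUBO: negative diagonal entries reward turning
# a "mask pixel" on, positive off-diagonal entries penalize turning on
# adjacent pixels together, so an alternating pattern wins.
Q = np.array([[-1.0, 2.0, 0.0, 0.0],
              [0.0, -1.0, 2.0, 0.0],
              [0.0, 0.0, -1.0, 2.0],
              [0.0, 0.0, 0.0, -1.0]])
x_opt, e_opt = solve_qubo_brute_force(Q)
```

In the iterative scheme of the proposed method, `Q` would be rebuilt from the current mask and the lithography model at each round; that modeling step is the substance of the paper and is not reproduced here.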
In our previous work, we developed a convolutional neural network (CNN) that predicts diffraction amplitudes from extreme ultraviolet masks very quickly. In this work, we reduce both the time for preparing the training data and the time for image intensity integration. We reduce the training data preparation time by applying the weakly guiding approximation to the 3D waveguide model. The model solves Helmholtz-type coupled vector wave equations of two polarizations. The approximation decomposes the coupled vector wave equations into two scalar wave equations, reducing the computation time to solve them. Regarding the image intensity integration, Abbe's theory has been used in electromagnetic simulations. The transmission cross coefficient (TCC) formula is known to be faster than Abbe's theory, but it cannot be applied to the source position dependent diffraction amplitudes in electromagnetic simulations. We derive a source position dependent TCC formula, starting from Abbe's theory, to accelerate the image intensity integration.
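The obstacle the source position dependent formula removes can be seen in the baseline identity it starts from: when the diffraction amplitudes do not depend on the source point, Abbe's source-point sum and the TCC double sum give identical images. The numpy check below uses a made-up 1D pupil, source, and amplitude set, not this paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
orders = np.arange(-3, 4)                         # diffraction orders m
a = rng.normal(size=7) + 1j * rng.normal(size=7)  # toy, source-independent amplitudes

def pupil(f):
    return (np.abs(f) <= 3.5).astype(float)       # toy 1D pupil cutoff

src = np.array([-1.0, 0.0, 1.0])                  # toy source points
w = np.array([0.25, 0.5, 0.25])                   # source weights

x = np.linspace(0.0, 1.0, 50)                     # position over one mask period
phase = np.exp(2j * np.pi * np.outer(orders, x))  # (order, position)

# Abbe's theory: add intensities incoherently over source points.
I_abbe = np.zeros_like(x)
for s, ws in zip(src, w):
    field = (a * pupil(orders + s)) @ phase
    I_abbe += ws * np.abs(field) ** 2

# TCC: fold the source sum into T(m, n) once, then one bilinear sum.
P = pupil(orders[None, :] + src[:, None])         # (source, order)
T = np.einsum("s,sm,sn->mn", w, P, P)
I_tcc = np.einsum("m,n,mn,mx,nx->x",
                  a, a.conj(), T, phase, phase.conj()).real

assert np.allclose(I_abbe, I_tcc)
```

If `a` depended on the source point `s`, it could no longer be pulled out of the sum defining `T`; that is exactly why the standard TCC formula fails for EM-simulated amplitudes and a source position dependent variant is needed.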
Background: Mask 3D (M3D) effects distort diffraction amplitudes from extreme ultraviolet masks. In our previous work, we developed a convolutional neural network (CNN) that very quickly predicted the distorted diffraction amplitudes from input mask patterns. The mask patterns were restricted to Manhattan patterns. Aim: We verify the potential and the limitations of the CNN using imec 3 nm node (iN3) mask patterns. Approach: We apply the same CNN architecture as in the previous work to mask patterns that mimic iN3 logic metal or via layers. In addition, to study more general mask patterns, we apply the architecture to iN3 metal/via patterns with optical proximity correction (OPC) and curvilinear via patterns. In total, we train five different CNNs: metal patterns w/wo OPC, via patterns w/wo OPC, and curvilinear via patterns. After the training, we validate each CNN using validation data with the above five characteristics. Results: When we use training and validation data with the same characteristics, the validation loss becomes very small. Our CNN architecture is flexible enough to be applied to iN3 metal and via layers, and it has the capability to recognize curvilinear mask patterns. On the other hand, training and validation data with different characteristics lead to a large validation loss, so the selection of training data is very important for obtaining high accuracy. We also examine the impact of M3D effects on iN3 metal layers. A large difference is observed in the tip-to-tip (T2T) critical dimension calculated by the thin mask model and the thick mask model, due to the mask shadowing effect at T2T slits. Conclusions: The selection of training data is very important for obtaining high accuracy. Our test results suggest that layer-specific CNNs could be constructed, but further development of the CNN architecture could be required.
Mask 3D effects distort diffraction amplitudes from EUV masks. In a previous work, we developed a CNN that predicted the distorted diffraction amplitudes very quickly from input mask patterns. The mask patterns in that work were restricted to Manhattan patterns. In general, the accuracy of a neural network depends on its training data, so a CNN trained on Manhattan patterns cannot be applied to general mask patterns. However, our CNN architecture contains 70 M parameters, and the architecture itself could be applied to general mask patterns. In this work, we apply the same CNN architecture to mask patterns that mimic iN3 logic metal or via layers. Additionally, to study more general mask patterns, we train CNNs using iN3 metal/via patterns with OPC and curvilinear via patterns. In total, we train five different CNNs: metal patterns w/wo OPC, via patterns w/wo OPC, and curvilinear via patterns. After the training, we validate each CNN using validation data with the above five characteristics. When the training and validation data have the same characteristics, the validation loss becomes very small. Our CNN architecture is flexible enough to be applied to iN3 metal and via layers. On the other hand, training and validation data with different characteristics lead to a large validation loss. The selection of training data is very important for obtaining high accuracy. We also examine the impact of mask 3D effects on iN3 metal layers. A large difference is observed in the tip-to-tip (T2T) CD calculated by the thin mask and thick mask models, due to the mask shadowing effect at T2T slits. Our CNN successfully predicts the T2T CD of the thick mask model, which is sensitive to the mask 3D effect.
Background: In a previous work, we developed a convolutional neural network (CNN) that reproduces the results of rigorous electromagnetic (EM) simulations in a small mask area. The CNN prediction was 5,000 times faster than the EM simulation. We trained the CNN using 200,000 data, which were the results of EM simulation. Although the prediction time of the CNN was very short, it took a long time to build the huge amount of training data. Especially when we enlarge the mask area, the calculation time to prepare the training data becomes unacceptably long. Aim: We reduce the calculation time to prepare the training data. Approach: We apply a data augmentation technique to increase the number of training data using limited original data. The training data of our CNN are the diffraction amplitudes of mask patterns. Assuming a periodic boundary condition, the diffraction amplitudes of a shifted or flipped mask pattern can be easily calculated from the diffraction amplitudes of the original mask pattern. Results: The number of training data after the data augmentation is multiplied by 200, from 2,500 to 500,000. Using the large amount of training data, the validation loss of the CNN was reduced. The accuracy of the CNN with augmented data is verified by comparing its predictions with the results of EM simulation. Conclusions: The data augmentation technique is applied to the diffraction amplitudes of mask patterns, reducing the data preparation time by a factor of 200. Our CNN almost reproduces the results of EM simulation. In this work, the mask patterns are restricted to line and space patterns. It remains a challenge to build several CNNs for specific mask patterns or, ultimately, a single CNN for arbitrary mask patterns.
Data augmentation is a powerful technique in deep learning that increases the number of training data using limited original data. We apply this technique to EUV lithography simulation based on a convolutional neural network (CNN). In a previous work, we developed a prototype CNN that reproduces the results of rigorous electromagnetic (EM) simulations in a small mask area. The CNN prediction was 5,000 times faster than the EM simulation. We trained the CNN using 200,000 data, which were the results of EM simulation. Although the prediction time of the CNN was very short, it took a long time to build the huge amount of training data. Especially when we enlarge the mask area, the calculation time to prepare the training data becomes unacceptably long: the EM calculation for a 1,024 nm × 1,024 nm mask area takes 162 s, so calculating 200,000 mask patterns would take a year. The training data of our CNN are the diffraction amplitudes of mask patterns. Assuming a periodic boundary condition, the diffraction amplitudes of a shifted or flipped mask pattern can be easily calculated from the diffraction amplitudes of the original mask pattern. We apply this data augmentation technique to reduce the data preparation time for the 1,024 nm × 1,024 nm mask area by a factor of 200. The accuracy of the CNN is verified by comparing its predictions with the results of EM simulation. Our CNN successfully reproduces the critical dimensions and edge placement errors of line and space patterns.
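The augmentation rests on two standard Fourier identities for periodic patterns: shifting the mask multiplies each diffraction order by a linear phase, and mirroring the mask negates the order index. A thin-mask numpy illustration (the work above augments EM-simulated amplitudes, but the identities are the same; the 8×8 random mask is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
mask = rng.random((N, N))          # toy periodic mask
A = np.fft.fft2(mask)              # its "diffraction amplitudes"
k = np.arange(N)                   # order indices along each axis

# Shift: amplitudes pick up a linear phase exp(-2*pi*i*(k*dx + l*dy)/N).
dx, dy = 3, 5
phase = np.exp(-2j * np.pi * (k[:, None] * dx + k[None, :] * dy) / N)
A_shift = A * phase
assert np.allclose(A_shift, np.fft.fft2(np.roll(mask, (dx, dy), axis=(0, 1))))

# Flip about x = 0 (mirror plus a one-pixel roll keeps the periodic
# register): the order index is negated along the flipped axis.
mask_flip = np.roll(np.flip(mask, axis=0), 1, axis=0)
A_flip = A[(-k) % N, :]
assert np.allclose(A_flip, np.fft.fft2(mask_flip))
```

Each stored EM result thus yields many shifted and flipped training samples at the cost of elementwise phase multiplications and index permutations, which is where the factor-of-200 reduction in data preparation time comes from.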
Background: The thin mask model has been conventionally used in optical lithography simulation. In extreme ultraviolet (EUV) lithography, the thin mask model is not valid because the absorber thickness is comparable to the mask pattern size. Rigorous electromagnetic (EM) simulations have been used to calculate the thick mask amplitudes. However, these simulations are highly time consuming.
Aim: We propose a prototype convolutional neural network (CNN) that reduces the calculation time of rigorous EM simulations in a small mask area with specific mask patterns.
Approach: We construct a CNN that reproduces the results of the EM simulation. We define the mask 3D amplitude as the difference between the thick mask amplitude and the thin mask amplitude. The mask 3D amplitude of each diffraction order is approximated using three parameters, which represent the on-axis and off-axis mask 3D effects. The CNN is trained to predict the mask 3D parameters of all diffraction orders.
Results: The input and the targets of the CNN are a cut-mask pattern and the mask 3D parameters calculated by the EM simulation, respectively. After training with 199,900 random cut-mask patterns, the CNN successfully predicts the mask 3D parameters of new cut-mask patterns.
Conclusions: We construct a CNN that predicts the diffraction amplitudes from 2D EUV mask patterns. After the training, the CNN successfully reproduces the mask 3D amplitude. The CNN prediction is 5,000 times faster than the rigorous EM simulation. The next challenge is to construct a practical CNN that covers a large area with general mask patterns.
The thin mask model has been conventionally used in optical lithography simulation. In this model, the diffracted waves from the mask are assumed to be the Fourier transform of the mask pattern. In EUV (extreme ultraviolet) lithography, the thin mask model is not valid because the absorber thickness is comparable to the mask pattern size, and the Fourier transform is not suitable for calculating the diffracted waves from thick masks. Rigorous electromagnetic simulations such as the finite-difference time-domain method, rigorous coupled wave analysis, and the 3D waveguide model are used to calculate the diffracted waves from EUV masks. However, these simulations are highly time consuming. We reduce the calculation time by adopting a CNN (convolutional neural network). We calculate the far-field diffraction amplitudes from an EUV mask using the 3D waveguide model. We divide the diffraction amplitudes into the thin mask amplitudes (the Fourier transform of the mask pattern) and the residual mask 3D amplitudes. The incident angle dependence of the mask 3D amplitude for each diffraction order is fitted using three parameters that represent the on-axis and off-axis mask 3D effects. We train a CNN whose inputs are 2D mask patterns and whose targets are the mask 3D parameters of all diffraction orders. After the training, the CNN successfully predicts the mask 3D parameters. The CNN prediction is 5,000 times faster than the electromagnetic simulation. We extend the transmission cross coefficient formula to include the off-axis mask 3D effects. Our formula is applicable to arbitrary source shapes and defocus, and we can use the eigenvalue decomposition method to accelerate the calculation.
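The eigenvalue-decomposition acceleration mentioned at the end is the standard sum-of-coherent-systems (SOCS) idea: the TCC matrix is Hermitian positive semidefinite, so a truncated eigendecomposition replaces the full bilinear sum with a few coherent kernels. In the toy 1D setup below (made-up pupil, source, and amplitudes, not this work's model), the source has only three points, so the TCC has rank 3 and three kernels reproduce the image exactly; practical systems truncate a much larger spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)
orders = np.arange(-3, 4)
a = rng.normal(size=7) + 1j * rng.normal(size=7)  # toy diffraction amplitudes
src = np.array([-1.0, 0.0, 1.0])                  # 3 source points -> rank-3 TCC
w = np.array([0.25, 0.5, 0.25])
pupil = lambda f: (np.abs(f) <= 3.5).astype(float)

P = pupil(orders[None, :] + src[:, None])         # (source, order)
T = np.einsum("s,sm,sn->mn", w, P, P)             # TCC matrix (Hermitian PSD)

x = np.linspace(0.0, 1.0, 50)
phase = np.exp(2j * np.pi * np.outer(orders, x))

# Full bilinear TCC sum over all order pairs.
I_full = np.einsum("m,n,mn,mx,nx->x",
                   a, a.conj(), T, phase, phase.conj()).real

# SOCS: keep the K dominant eigenpairs of T as coherent imaging kernels.
lam, V = np.linalg.eigh(T)
top = np.argsort(lam)[::-1][:3]                   # K = 3 kernels
fields = (a[:, None] * V[:, top]).T @ phase       # (kernel, position)
I_socs = (lam[top][:, None] * np.abs(fields) ** 2).sum(axis=0)

assert np.allclose(I_full, I_socs)
```

The payoff is in the operation count: the bilinear sum touches every pair of diffraction orders at every image point, while SOCS costs one coherent image per retained kernel.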
The thin mask model has been conventionally used in optical lithography simulation. In this model, the diffracted waves from the mask are assumed to be the Fourier transform of the mask pattern. This assumption is the basis of Hopkins' method and the sum of coherent systems model. In EUV (extreme ultraviolet) lithography, the thin mask model is not valid because the absorber thickness is comparable to the mask pattern size, and the Fourier transform cannot be applied to calculate the diffracted waves from thick masks. Rigorous electromagnetic simulations such as the finite-difference time-domain method, rigorous coupled wave analysis, and the 3D waveguide method are used to calculate the diffracted waves from EUV masks. However, these simulations are highly time consuming. We reduce the calculation time by adopting a convolutional neural network that predicts the diffracted waves from 1D EUV mask patterns. We extend the TCC method to include the off-axis mask 3D effects. Our model is applicable to arbitrary source shapes and defocus.
KEYWORDS: Feature extraction, Feature selection, Lithography, Simulation of CCA and DLA aggregates, Machine learning, Manufacturing, Computer programming, Semiconducting wafers, Design for manufacturability, Very large scale integration
As VLSI device feature sizes become smaller and smaller, lithography hotspot detection and elimination have become more important to avoid yield loss. Although various machine learning based methods have been proposed, it is not easy to find appropriate parameters to achieve high accuracy. This paper proposes a feature selection method that uses the probability distributions of layout features. Our method enables automatic feature optimization and classifier construction, and it can adapt to different layout patterns with various features. To evaluate hotspot detection methods in a situation close to the actual problem, a dataset based on the ICCAD2019 dataset is used for evaluation. Experimental results show the effectiveness and the limitations of our method.
As one of the resolution enhancement techniques, mask optimization such as pixel-based Optical Proximity Correction or Inverse Lithography Technology has been widely discussed. In this paper, a pixel-based mask optimization using a 0-1 Quadratic Programming problem (0-1 QP) is proposed to obtain sufficient image contour fidelity and tolerance to process variation in a short time. By formulating the 0-1 QP to maximize the intensity slope around the edges of target patterns, suppression of image contour distortion caused by process variation is realized. The defined 0-1 QP is relaxed into a Lagrangian relaxation problem, and an approximate solution of the 0-1 QP is obtained by solving the relaxation with the subgradient method and a gradient deciding method. Moreover, by applying a correction method that precisely corrects the boundary pixels of target patterns to the mask obtained by the 0-1 QP, sufficient shape fidelity to the target patterns can be obtained.
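The Lagrangian-relaxation-plus-subgradient machinery can be shown on a smaller problem. The sketch below applies it to a toy binary knapsack rather than this paper's intensity-slope 0-1 QP, because there the relaxed problem separates per variable and every step is a few lines; all numbers are invented.

```python
import numpy as np

# Toy 0-1 problem: maximize c.x subject to a.x <= b, x in {0,1}^n.
# Lagrangian relaxation moves the constraint into the objective:
#   L(lmb) = max_x [ c.x - lmb * (a.x - b) ]
#          = sum_i max(0, c_i - lmb * a_i) + lmb * b,
# an upper bound on the optimum for every lmb >= 0. The relaxed
# maximizer is found per variable, and b - a.x is a subgradient of L.
c = np.array([6.0, 5.0, 4.0, 3.0])
a = np.array([4.0, 3.0, 2.0, 1.0])
b = 6.0

lmb = 0.0
for t in range(1, 200):
    x = (c - lmb * a > 0).astype(float)  # per-variable maximizer of the relaxation
    g = b - a @ x                        # subgradient of L at lmb
    lmb = max(0.0, lmb - g / t)          # projected subgradient step, 1/t step size

L = np.maximum(0.0, c - lmb * a).sum() + lmb * b   # dual bound at the final lmb
```

On this instance the final relaxed maximizer `x` happens to be feasible and its value meets the bound `L`, certifying it optimal; in general a rounding or correction step, like the boundary-pixel correction in this paper, is needed to recover a good feasible mask.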
LELECUT type triple patterning lithography is one of the most promising techniques for the 14 nm logic node and beyond. To prevent yield loss caused by overlay error, a LELECUT mask assignment that is tolerant to overlay error is desired. We propose a method that obtains such an assignment using positive semidefinite relaxation and a randomized rounding technique. In our method, a cost function is introduced that takes into account the boundary length of features determined by the cut mask.
LELECUT type triple patterning lithography is one of the most promising techniques for next generation lithography. To prevent yield loss caused by overlay error, a LELECUT mask assignment that is tolerant to overlay error is desired. In this paper, we propose a method that obtains a LELECUT assignment tolerant to overlay error. The proposed method uses positive semidefinite relaxation and a randomized rounding technique. In our method, a cost function is introduced that takes into account the boundary length of features determined by the cut mask.
Litho-Etch-Litho-Etch (LELE) type double patterning technology (DPT) is known to have the advantage of layout flexibility. However, there are two problems when a hotspot that is not fixable by tuning OPC is detected. One is that repeating the data preparation flow, including decomposition, OPC, and verification by lithography simulation, is quite time consuming. The other is the risk of introducing new hotspots at different locations. In this report, a new method to fix hotspots by layout modification of a limited area is presented. The proposed method reduces not only the turnaround time to fix a hotspot but also the number of iterations, since it prevents the generation of hotspots at new locations.
In this paper, we propose a fast layout decomposition algorithm for litho-etch-litho-etch (LELE) type double patterning that considers yield. Our algorithm properly extracts stitch candidates from complex layouts including various patterns, line widths, and pitches. The planarity of the conflict graph and the independence of stitch candidates are utilized to efficiently obtain a minimum-cost layout decomposition for higher yield. The validity of our algorithm is confirmed using benchmark layout patterns from the literature as well as layout patterns generated to fit the target manufacturing technologies as closely as possible. In our experiments, our algorithm is 7.7 times faster than an existing method on average.