Through techniques such as inverse lithography technology (ILT), curvilinear designs and their associated masks have demonstrated benefits over Manhattan-type masks in delivering superior wafer lithography process latitude. Moreover, a number of native design applications, such as silicon photonic ICs and curvilinear interconnects, require delivery of masks with non-Manhattan geometries. Consequently, enabled by the use of multi-beam mask writers (MBMW), we expect the adoption of curvilinear masks in production to grow steadily. One of the more challenging topics for curvilinear adoption is determining the optimum tradeoff between mask manufacturability and wafer imaging. To maximize the benefits of curvilinear masks without incurring an undue impact from mask complexity, it is beneficial to develop optimized layout validation checks, such as mask rule checks (MRC), that achieve this tradeoff. We present a methodology for curvilinear mask manufacturability optimization using a specially designed set of parametric curvilinear test patterns. The techniques are demonstrated in support of a DRAM implementation study in which ILT is applied to improve the wafer performance of a contact-type layer. We describe a parametric test chip covering curvature, width, space, and area; the mask data generated are used to evaluate different curvilinear layout constructs and correlations between mask manufacturability and simulated wafer performance. We revisit the question of whether ILT actually leads to relaxed MRC constraints compared to Manhattan designs for the same design application. In addition, advanced mask characterization techniques such as 2D contouring are applied to examine the limitations of purely geometric rule checking versus a full model-based approach that can account for mask pattern fidelity in ILT layout generation.
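The geometric MRC checks discussed above can be illustrated with a minimal numpy sketch. The snippet below estimates the minimum radius of curvature along a sampled curvilinear contour with finite differences and compares it against a rule threshold; the function names, the circular test contour, and the 15/25 nm thresholds are all hypothetical illustrations, not the checks used in the study.

```python
import numpy as np

def min_curvature_radius(xs, ys):
    """Minimum radius of curvature along a sampled closed contour, estimated
    with finite differences -- the kind of purely geometric check a
    curvilinear MRC deck performs (units follow the input coordinates)."""
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = np.abs(dx * ddy - dy * ddx)
    denom = np.where(denom < 1e-12, 1e-12, denom)  # guard straight segments
    return float(((dx**2 + dy**2) ** 1.5 / denom).min())

def mrc_curvature_check(xs, ys, min_radius):
    """Pass/fail: does every point on the contour meet the minimum-curvature
    rule? (Threshold value is hypothetical.)"""
    return min_curvature_radius(xs, ys) >= min_radius

# Example: a circular contact of radius 20 (nm) passes a 15 nm rule
# but fails a 25 nm rule.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
xs, ys = 20.0 * np.cos(t), 20.0 * np.sin(t)
```

A model-based check, as the abstract notes, would replace the purely geometric radius test with simulated mask pattern fidelity.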
Machine learning (ML) based technologies are actively being adopted in the computational lithography domain. ML-based methods have the potential to enhance the accuracy of predictive models, speed up the run times of mask optimization processes, and produce more consistent results than other numerical methods. In this paper, we present the results of an ML-based ILT application to an advanced DRAM contact layer for both the core and periphery regions. In our ML-ILT method, golden mask layouts are generated by the ProteusTM ILT tool for the sampled target layouts to obtain reliable training inputs, which are then used to train a custom-designed convolutional neural network (CNN). The trained CNN is plugged in to the conventional ILT flow as an initial mask provider and the entire
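The CNN-as-initial-mask-provider flow can be sketched schematically in numpy. This is a toy stand-in, not the ProteusTM implementation: a single hand-written convolution plays the role of the trained network, and the refinement loop is a placeholder for the real ILT iterations, which minimize a lithography-model cost rather than a pixel difference.

```python
import numpy as np

def conv2d(img, kernel):
    """Same-size 2D convolution with zero padding -- a toy stand-in for one
    layer of the trained CNN (a real network has many learned layers)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def predict_initial_mask(target):
    """Hypothetical 'trained CNN': maps a target layout image to an initial
    mask guess. The smoothing kernel mimics the rounded bias a network
    would learn from ILT golden masks."""
    smoothed = conv2d(target, np.ones((3, 3)) / 9.0)
    return 1.0 / (1.0 + np.exp(-10.0 * (smoothed - 0.5)))  # soft threshold

def ilt_refine(mask, target, steps=5, lr=0.5):
    """Placeholder for conventional ILT refinement iterations."""
    for _ in range(steps):
        mask = np.clip(mask - lr * (mask - target), 0.0, 1.0)
    return mask

# Flow: the CNN provides the initial mask, conventional ILT refines it.
target = np.zeros((8, 8)); target[2:6, 2:6] = 1.0
initial = predict_initial_mask(target)
final = ilt_refine(initial, target)
```

The benefit of the learned initial mask in the real flow is fewer ILT iterations to convergence, since the starting point is already close to the golden solution.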
To successfully transfer design patterns to wafer, it is essential to calibrate different types of models describing the optical, physical, and chemical effects in the chip manufacturing process. In recent years, there have also been active investigations of machine learning (ML) models to capture various aspects of semiconductor processes. As is well known, model training time and model accuracy are heavily influenced by the input data, so it is becoming increasingly important to provide highly efficient methods that automatically generate effective pattern samples from full-chip designs. A straightforward approach, simple random sampling, can efficiently generate effective samples for a homogeneous population. However, real-world chip layouts are characterized by geometrical and lithographical feature distributions that vary significantly across the full-chip design space. The complexity of the problem necessitates a comprehensive set of sampling approaches as well as flexibility in customizing the sampling strategy for various applications. In this paper, we investigate automatic layout sampling that optimizes the coverage and diversity of patterns under constraints such as a minimal training sample size, adopting various unsupervised learning techniques. The flow scales very well with computational resources to efficiently process full-chip layouts. A simple, standard interface is provided for typical usage, while flexible programming APIs are available to customize the sampling strategy for advanced applications. Results demonstrate that the samples generated by this flow have increased diversity, which leads to significantly reduced model training time with comparable or improved model accuracy.
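One common unsupervised approach to diversity-driven sampling is clustering pattern feature vectors and keeping one representative per cluster. The sketch below (a generic illustration under assumed names, not the paper's flow or its APIs) implements k-means with deterministic farthest-point seeding and selects the member nearest each cluster center as the training sample.

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start at pattern 0, then repeatedly add the
    pattern farthest from all chosen centers (k-means++-like but fixed)."""
    chosen = [0]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[chosen][None], axis=2), axis=1)
        chosen.append(int(d.argmax()))
    return X[chosen].astype(float)

def kmeans(X, k, iters=20):
    centers = farthest_point_init(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers

def sample_patterns(features, k):
    """One representative per cluster (the member closest to its center):
    maximizes diversity for a fixed training-sample budget."""
    labels, centers = kmeans(features, k)
    reps = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx):
            d = np.linalg.norm(features[idx] - centers[c], axis=1)
            reps.append(int(idx[d.argmin()]))
    return sorted(reps)
```

In a production flow the feature vectors would come from geometric/lithographic pattern signatures extracted per clip, and the budget k from the training-size constraint.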
In advanced semiconductor memory manufacturing, feature sizes keep shrinking aggressively, creating problems in the fabrication process and reducing yield. Three key factors that impact memory process and yield are lithographic process window, full-field CD uniformity (CDU), and correction run-time performance. In this paper, we describe a mask processing technique utilizing a) global array detect (GAD) for detecting and optimizing cell repetition, b) periodic boundary conditions (PBC) for preserving simulation and mask symmetry, and c) a cell-level ILT (CL-ILT) flow to process repeated cell regions and blend the various design parts. With GAD + PBC + CL-ILT processing, we can achieve a perfectly consistent mask array region with an enlarged process window and minimal local CD variation for a full-field mask. Moreover, with fewer pattern units (called templates) to process, we can complete full-chip ILT in reasonable time and with compute resources comparable to full-chip OPC correction. We show simulation and wafer print results including pattern fidelity, process window, mask consistency, and run-time data.
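The role of periodic boundary conditions in this scheme can be shown with a small numpy sketch (a toy blur, not a lithography model): simulating one unit cell with wrap-around padding reproduces exactly what the interior cell of an explicitly tiled array would see, which is why one corrected template can be stamped consistently across the whole array region.

```python
import numpy as np

def simulate_cell(cell, kernel, periodic=True):
    """Toy 'optical blur' of one unit cell. With periodic boundary conditions
    (PBC) the cell sees its own repeats at the edges, so every cell of an
    array region yields an identical, symmetry-preserving result."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    mode = "wrap" if periodic else "constant"
    padded = np.pad(cell, ((ph, ph), (pw, pw)), mode=mode)
    out = np.zeros(cell.shape, dtype=float)
    for i in range(cell.shape[0]):
        for j in range(cell.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# One PBC-simulated unit cell matches the center cell of an explicitly
# tiled 3x3 array simulated without PBC.
cell = np.zeros((4, 4)); cell[1:3, 1:3] = 1.0
kernel = np.ones((3, 3)) / 9.0
pbc = simulate_cell(cell, kernel, periodic=True)
tiled = simulate_cell(np.tile(cell, (3, 3)), kernel, periodic=False)
```

The equivalence holds as long as the optical interaction range (here, the kernel radius) stays within one cell pitch; a real flow must size the template accordingly.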
In this paper, we present a machine learning solution targeted at memory customers that includes both assist-feature and main-feature mask synthesis. In a previous paper, we demonstrated machine learning ILT solutions for the creation of assist features using a neural network. Here, we extend the solution to main-feature masks, which we create using machine learning models that take the full ILT-corrected masks into account during training. In practice, while the correction of main features is often visually more intuitive, there are underlying edge-to-edge and polygon-to-polygon interactions that are not easily captured by the local-influence edge perturbations found in typical OPC solvers but can be captured by ILT and by machine learning solutions trained on ILT masks.
Memory cells and access structures consume a large percentage of the area in embedded devices, so there is a high return from shrinking the cell area as much as possible. This aggressive scaling leads to very difficult resolution, 2D CD control, and process window requirements. As scaling drives lithography ever deeper into the low-k1 regime, co-optimization of design layout, mask, and lithography is critical to deliver a production-worthy patterning solution. Computational lithography approaches such as inverse lithography technology (ILT) have been demonstrated as enabling technologies that derive improved solutions over traditional OPC, as reported in multiple prior publications. In this paper, we present results of a study on advanced memory cell design optimization with cell-level ILT (CL-ILT), in which significant design hierarchy can be retained during ILT optimization. Large numbers of cell design variations are explored with automatically generated patterns from the ProteusTM Test Pattern Generator (TPG). Fully automated flows from pattern generation to mask synthesis with ILT, data analysis, and results visualization are built on ProteusTM Work Flow (PWF) for exploring a fully parameterized design space of interest. Mask complexity, including assist feature (AF) types (rule- or model-based) and main-feature segmentation, is also studied to understand its impact on wafer lithographic performance. A heatmap view of the results generated from this design exploration provides a clear and intuitive way to identify the maximum design limits of memory cells. Comparisons of results from ILT and traditional OPC are presented as well, with both wafer and simulation data.
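The parameterized design-space exploration described above can be pictured as a sweep over cell parameters that fills a metric grid for the heatmap view. The sketch below is a schematic stand-in: the closed-form metric, parameter values, and threshold are all hypothetical, whereas the real flow runs ILT and lithography simulation at every design point.

```python
import numpy as np

def toy_metric(width, space):
    """Hypothetical process-window stand-in: rewards balanced width/space
    and penalizes tight pitch. A real flow would run ILT + litho simulation
    per design point instead of this closed-form placeholder."""
    return min(width, space) / (width + space)

def design_space_heatmap(widths, spaces):
    """Evaluate every (width, space) combination of a parameterized cell
    design; the returned grid backs the heatmap view of design limits."""
    return np.array([[toy_metric(w, s) for s in spaces] for w in widths])

widths = [20.0, 30.0, 40.0]   # nm, hypothetical sweep values
spaces = [20.0, 30.0, 40.0]
heat = design_space_heatmap(widths, spaces)
# Passing region: design points whose metric clears a (hypothetical) limit.
passing = heat >= 0.45
```

Reading the boundary of the passing region off the heatmap is what identifies the maximum design limits of the cell.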
Since its introduction at Luminescent Technologies and continued development at Synopsys, inverse lithography technology (ILT) has delivered industry-leading quality of results (QOR) for mask synthesis. With the advent of powerful, widely deployed, and user-friendly machine learning (ML) training techniques, we are now able to exploit the quality of ILT masks in an ML framework that offers significant runtime benefits. In this paper we describe our ML-ILT flow, including training data selection and preparation, network architectures, training techniques, and analysis tools. ILT usage has typically been limited to smaller areas owing to concerns about runtime, solution consistency, and mask shape complexity. We show how machine learning can be used to overcome these challenges, thereby providing a pathway to extend the ILT solution to full-chip logic designs. We demonstrate the clear superiority of ML-ILT QOR over existing mask synthesis techniques, such as rule-based placements, that have similar runtime performance.
The difficulties involved in ramping EUV lithography to volume manufacturing have highlighted the critical task of understanding process, layout design, and device interactions, and of optimizing the overall product integration to reduce undesirable interactions. In this paper, we demonstrate mask synthesis methods that use rigorous EUV lithography models together with inverse lithography technology (ILT) to improve EUV process window and CD control. To enable this new capability, we have linked the broad EUV physical-effect modeling capability of our rigorous lithography simulator, Sentaurus Lithography (S-Litho), with our highly flexible, production-proven ILT mask synthesis solution (Proteus ILT). This combined capability can take advantage of a wide range of EUV modeling features, including rigorous electromagnetic mask/substrate modeling. The advantages of using S-Litho rigorous simulation for ILT optimization are further amplified by significant speed enhancements from new high-performance EUV mask 3D capabilities. ILT has been used extensively in a range of lithographic areas for DUV and EUV, including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules, and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful EUV applications. We highlight the specific benefits of the rigorous DUV and EUV ILT functionality for several advanced applications, including resist profile optimization for resist top-loss, resist descumming, and process window improvement.
Despite the considerable difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development of new technology node variations has been accelerating. Multiple variations of new and existing technology nodes have been introduced for a range of device applications, each with at least a few new process integration methods, layout constructs, and/or design rules. This has led to a strong increase in demand for predictive technology tools that can quickly guide important patterning and design co-optimization decisions.
In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies, each of which has been widely used individually for process tuning, mask correction, and process-design co-optimization: rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been used extensively for process development and tuning, lithography tool setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions and sensitivities, source optimization, and basic lithography design rule exploration. ILT has been used extensively in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules, and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful applications. We investigate the benefits of the new functionality for a few of these advanced applications, including correction of photoresist top-loss and resist scumming hotspots.