
Automated Design of Analog and High-frequency Circuits: A Computational Intelligence Approach


DOCUMENT INFORMATION

Basic information

Title: Automated Design of Analog and High-frequency Circuits
Authors: Bo Liu, Georges Gielen, Francisco V. Fernández
Series editor: J. Kacprzyk
Institution: Universidad de Sevilla
Field: Computational Intelligence
Type: Book
Year of publication: 2014
City: Sevilla
Format
Pages: 243
Size: 4.35 MB


Structure

  • 1.1 Introduction
  • 1.2 An Introduction into Computational Intelligence
    • 1.2.1 Evolutionary Computation
    • 1.2.2 Fuzzy Logic
    • 1.2.3 Machine Learning
  • 1.3 Fundamental Concepts in Optimization
  • 1.4 Design and Computer-Aided Design of Analog/RF IC
    • 1.4.1 Overview of Analog/RF Circuit
    • 1.4.2 Overview of the Computer-Aided Design
  • 1.5 Summary
  • 2.1 Analog IC Sizing: Introduction and Problem Definition
  • 2.2 Review of Analog IC Sizing Approaches
  • 2.3 Implementation of Evolutionary Algorithms
    • 2.3.1 Overview of the Implementation of an EA
    • 2.3.2 Differential Evolution
  • 2.4 Basics of Constraint Handling Techniques
    • 2.4.1 Static Penalty Functions
    • 2.4.2 Selection-Based Constraint Handling Method
  • 2.5 Multi-objective Analog Circuit Sizing
    • 2.5.1 NSGA-II
    • 2.5.2 MOEA/D
  • 2.6 Analog Circuit Sizing Examples
    • 2.6.1 Folded-Cascode Amplifier
    • 2.6.2 Single-Objective Constrained Optimization
    • 2.6.3 Multi-objective Optimization
  • 2.7 Summary
  • 3.1 Challenges in Analog Circuit Sizing
  • 3.2 Advanced Constrained Optimization Techniques
    • 3.2.1 Overview of the Advanced Constraint
    • 3.2.2 A Self-Adaptive Penalty Function-Based Method
  • 3.3 Hybrid Methods
    • 3.3.1 Overview of Hybrid Methods
    • 3.3.2 Popular Hybridization and Memetic Algorithm
  • 3.4 MSOEA: A Hybrid Method for Analog IC Sizing
    • 3.4.1 Evolutionary Operators
    • 3.4.2 Constraint Handling Method
    • 3.4.3 Scaling Up of MSOEA
    • 3.4.4 Experimental Results of MSOEA
  • 3.5 Summary
  • 4.1 Introduction
  • 4.2 The Motivation of Analog Circuit Sizing
    • 4.2.1 Why Imprecise Specifications Are Necessary
    • 4.2.2 Review of Early Works
  • 4.3 Design of Fuzzy Numbers
  • 4.4 Fuzzy Selection-Based Constraint Handling
  • 4.5 Single-Objective Fuzzy Analog IC Sizing
    • 4.5.1 Fuzzy Selection-Based Differential
    • 4.5.2 Experimental Results and Comparisons
  • 4.6 Multi-objective Fuzzy Analog Sizing
    • 4.6.1 Multi-objective Fuzzy Selection Rules
    • 4.6.2 Experimental Results for Multi-objective
  • 4.7 Summary
  • 5.1 Introduction to Analog Circuit Sizing Considering
    • 5.1.2 Yield Optimization, Yield Estimation
    • 5.1.3 Traditional Methods for Yield Optimization
  • 5.2 Uncertain Optimization Methodologies
  • 5.3 The Pruning Method
  • 5.4 Advanced MC Sampling Methods
    • 5.4.1 AYLeSS: A Fast Yield Estimation Method
    • 5.4.2 Experimental Results of AYLeSS
  • 5.5 Summary
  • 6.1 Ordinal Optimization
  • 6.2 Efficient Evolutionary Search Techniques
    • 6.2.1 Using Memetic Algorithms
    • 6.2.2 Using Modified Evolutionary Search Operators
  • 6.3 Integrating OO and Efficient Evolutionary Search
  • 6.4 Experimental Methods and Verifications of ORDE
    • 6.4.1 Experimental Methods for Uncertain Optimization
    • 6.4.2 Experimental Verifications of ORDE
  • 6.5 From Yield Optimization to Single-Objective Analog
    • 6.5.1 ORDE-Based Single-Objective Variation-Aware
    • 6.5.2 Example
  • 6.6 Bi-objective Variation-Aware Analog Circuit Sizing
    • 6.6.1 The MOOLP Algorithm
    • 6.6.2 Experimental Results
  • 6.7 Summary
  • 7.1 Introduction to Simulation-Based Electromagnetic
  • 7.2 Review of the Traditional Methods
    • 7.2.1 Integrated Passive Component Synthesis
    • 7.2.2 RF Integrated Circuit Synthesis
    • 7.2.3 Antenna Synthesis
  • 7.3 Challenges of Electromagnetic Design Automation
  • 7.4 Surrogate Model Assisted Evolutionary Algorithms
  • 7.5 Gaussian Process Machine Learning
    • 7.5.1 Gaussian Process Modeling
    • 7.5.2 Discussions of GP Modeling
  • 7.6 Artificial Neural Networks
  • 7.7 Summary
  • 8.1 Individual Threshold Control Method
    • 8.1.1 Motivations and Algorithm Structure
    • 8.1.2 Determination of the MSE Thresholds
  • 8.2 The GPDECO Algorithm
    • 8.2.1 Scaling Up of GPDECO
    • 8.2.2 Experimental Verification of GPDECO
  • 8.3 Prescreening Methods
    • 8.3.1 The Motivation of Prescreening
    • 8.3.2 Widely Used Prescreening Methods
  • 8.4 MMLDE: A Hybrid Prescreening and Prediction Method
    • 8.4.1 General Overview
    • 8.4.2 Integrating Surrogate Models into EA
    • 8.4.3 The General Framework of MMLDE
    • 8.4.4 Experimental Results of MMLDE
  • 8.5 SAEA for Multi-objective Expensive Optimization
    • 8.5.1 Overview of Multi-objective Expensive
    • 8.5.2 The Generation Control Method
  • 8.6 Handling Multiple Objectives in SAEA
    • 8.6.1 The GPMOOG Method
    • 8.6.2 Experimental Result
  • 8.7 Summary
  • 9.1 Problem Analysis and Key Ideas
    • 9.1.1 Overview of EMLDE
    • 9.1.2 The Active Components Library and the Look-up
    • 9.1.3 Handling Cascaded Amplifiers
    • 9.1.4 The Two Optimization Loops
  • 9.2 Naive Bayes Classification
  • 9.3 Key Algorithms in EMLDE
    • 9.3.1 The ABGPDE Algorithm
    • 9.3.2 The Embedded SBDE Algorithm
  • 9.4 Scaling Up of the EMLDE Algorithm
  • 9.5 Experimental Results
    • 9.5.1 Example Circuit
    • 9.5.2 Three-Stage Linear Amplifier Synthesis
  • 9.6 Summary
  • 10.1 Main Challenges for the Targeted Problem
  • 10.2 Dimension Reduction
    • 10.2.1 Key Ideas
    • 10.2.2 GP Modeling with Dimension Reduction
  • 10.3 The Surrogate Model-Aware Search Mechanism
  • 10.4 Experimental Tests on Mathematical Benchmark Problems
    • 10.4.1 Test Problems
    • 10.4.2 Performance and Analysis
  • 10.6 Complex Antenna Synthesis with GPEME
    • 10.6.1 Example 1: Microstrip-fed Crooked
    • 10.6.2 Example 2: Inter-chip Wireless Antenna
    • 10.6.3 Example 3: Four-element Linear Array Antenna
  • 10.7 Summary

Contents

Introduction

Computational intelligence (CI), a subset of artificial intelligence (AI), aims to replicate human cognitive abilities to create intelligent systems capable of performing tasks like scientific research. An intelligent agent is designed to think, reason, learn, plan, and communicate, functioning as a machine brain. CI focuses on nature-inspired methodologies to tackle complex computational challenges where traditional mathematical approaches fall short, such as optimizing non-differentiable functions and predicting outcomes from experimental data. Key components of CI include evolutionary computation for global optimization, artificial neural networks for machine learning, and fuzzy logic for reasoning under uncertainty. Since its inception in the period from 1940 to 1970, CI techniques have been effectively applied to a variety of real-world problems.

Since the 1950s, the complexity of industrial products has led to a significant demand for automated problem-solving, outpacing the growth of research and development capacities. This trend highlights the urgent need for robust algorithms that deliver satisfactory performance. Computational intelligence has emerged as a vital solution to these challenges, significantly impacting various industrial sectors, including chemical engineering, bioinformatics, automobile design, intelligent transport systems, aerospace engineering, and nano-engineering. This book focuses on the application of CI techniques in electronic design automation (EDA), particularly within the fast-paced semiconductor industry, where innovation is continually accelerating.

B. Liu et al., Automated Design of Analog and High-frequency Circuits, Studies in Computational Intelligence 501, DOI: 10.1007/978-3-642-39162-0_1, © Springer-Verlag Berlin Heidelberg 2014

Over the decades, the exponential increase in the number of transistors on chips, in line with Moore's law, has revolutionized technology, leading to faster computers, cell phones, digital televisions, and cameras. As we move forward, the demand for advancements in areas like health, security, energy, and transportation will drive the "more than Moore" development. This book focuses on the design automation methodologies for analog integrated circuits (ICs), high-frequency ICs, and antennas, highlighting recent challenges in the field. To tackle these issues, we introduce state-of-the-art algorithms based on computational intelligence (CI) techniques, some of which represent cutting-edge research topics in CI.

• Challenge on high-performance analog IC design

The increasing demands of the market and advancements in IC fabrication technologies are leading to stricter specifications for modern analog circuits. As device sizes shrink, traditional transistor equivalent circuit models often lack accuracy, while highly accurate SPICE models are too complex for manual designers. Consequently, designing high-performance analog ICs within tight timelines presents significant challenges, even for experienced engineers. When dealing with analog cells containing 20–50 transistors, performance optimization becomes particularly complex. Although modern numerical optimization techniques have been applied to analog IC sizing, many existing methods still fall short in effectively handling the objective optimization and constraints required for high-performance designs.

As transistors continue to scale down, process variations pose significant challenges in industrial analog integrated circuit design. Achieving fully optimized nominal design solutions is essential, but it is equally important to ensure high robustness and yield amidst fluctuating supply voltage, temperature conditions, and both inter-die and intra-die process variations. These variations are increasing and are expected to worsen with future technological advancements.

According to ITRS reports, the threshold voltage variation of transistors reached 40% in 2011 and is expected to reach 100% within the next decade. Even a single misplaced atom can significantly impair circuit performance or lead to failure, as illustrated by variability-induced failure rates across various circuit types with advancing technology. Consequently, there is a pressing need for high-yield design as device sizes continue to shrink. Designers often resort to over-design strategies to mitigate performance degradation caused by process variations, but this can negatively impact overall performance and increase power consumption or area requirements. Recently, there has been a focus on variation-aware analog design to address these challenges.

Computational intelligence techniques have introduced various IC sizing methods; however, many of these approaches face significant limitations, including a lack of generality, insufficient accuracy, inapplicability to modern technologies, and inadequate speed.

• High-data-rate communication brings challenges to mm-wave circuit design

Fig. 1.1 Variability-induced failure rates for three canonical circuit types [4]

Radio frequency (RF) integrated circuits are significantly transforming our lives by enabling voice, data, and entertainment access worldwide, from Bluetooth to satellite networks. To support high-data-rate communication channels, it is essential to utilize more bandwidth, as suggested by Shannon's theorem. Modulating information-bearing signals around a carrier frequency enables proper propagation, and higher carrier frequencies provide greater bandwidth availability. Consequently, there has been a surge in research and applications of millimeter-wave (mm-wave) RF ICs.
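The bandwidth argument can be made concrete with the Shannon-Hartley capacity formula, C = B · log2(1 + SNR). The sketch below is purely illustrative; the bandwidth and SNR figures are assumptions, not values taken from the book:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley channel capacity: C = B * log2(1 + SNR), in bits/s.
    return bandwidth_hz * math.log2(1 + snr_linear)

# Doubling the bandwidth doubles the achievable rate at a fixed SNR,
# which is why mm-wave carriers (with more spectrum available) help.
c1 = shannon_capacity(1e9, 100)   # 1 GHz of bandwidth, SNR = 100 (20 dB)
c2 = shannon_capacity(2e9, 100)   # 2 GHz of bandwidth, same SNR
```

At a fixed SNR, moving from 1 GHz to 2 GHz of usable spectrum doubles the theoretical capacity, which is the incentive behind higher carrier frequencies.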

At mm-wave frequencies, traditional passive component models used for low-GHz RF IC design become inaccurate due to distributed effects. Consequently, designers must depend on S-parameters from electromagnetic (EM) simulations and rely on experience and trial-and-error, complicating the design of high-performance mm-wave RF ICs. This challenge necessitates highly skilled designers, and achieving optimal performance remains difficult even for experts. Moreover, most existing CAD tools for RF ICs primarily support low-GHz designs, further hindering advancements in mm-wave technology.

Antennas play a crucial role in communications, functioning alongside integrated circuits to convert electric currents into radio waves and vice versa. They are essential components in both transmitters and receivers. During transmission, a radio transmitter sends an oscillating radio frequency electric current to the antenna, which then radiates this energy as electromagnetic waves, commonly known as radio waves.

Antenna technology also involves the interception of electromagnetic waves to generate a small voltage for amplification in receivers. Research in this field is divided into two main areas: antenna theory and electromagnetic (EM) simulation, which are rapidly advancing, and practical antenna design, which remains largely based on experience and trial. While simple antennas have established design methodologies, complex antennas often require designers to identify key parameters before applying experiential techniques. To enhance this process, evolutionary algorithms have been integrated into the automation of antenna design.

Antenna optimization, while capable of producing high-quality results, is often hindered by its computational expense due to the embedded electromagnetic (EM) simulations. This process can require several days to weeks of optimization time, which restricts the feasibility of using evolutionary algorithms for automated antenna design.

With respect to the computational intelligence aspect, the above three design challenges correspond to the following research topics:

• Computationally expensive black-box optimization

High-performance analog IC design can benefit from advanced evolutionary computation algorithms, but most evolutionary algorithms are not equipped to handle the severe constraints involved in analog IC sizing. Hybrid methods are often necessary to achieve highly optimized solutions, and while improving effectiveness has garnered attention, the primary focus in variation-aware analog IC sizing, mm-wave IC synthesis, and antenna synthesis is on efficiency. Variation-aware analog IC sizing aims for efficient solutions to uncertain optimization, while mm-wave IC and antenna synthesis prioritize efficient handling of small- to medium-scale computationally expensive black-box optimization. Although the goal is to achieve highly optimized designs, enhancing efficiency is crucial for timely completion of the design process, which can take from hours to several days. Despite progress in computational intelligence, these research areas remain in their infancy, with many fundamental challenges yet to be resolved.

This book is essential for researchers and engineers in both electronic design automation (EDA) and computational intelligence (CI) fields. It offers advanced techniques for sizing analog integrated circuits (ICs) with stringent performance criteria, efficient strategies for optimizing analog IC yield, multi-objective approaches for variation-aware analog IC sizing, and comprehensive methods for synthesizing integrated passive components.

The book also covers both linear and nonlinear RF amplifiers operating at mm-wave frequencies, along with complex antenna designs. It presents constraint handling methods, hybrid approaches, and uncertain optimization techniques for robust design, as well as surrogate model-assisted evolutionary algorithms. The content caters to both beginners and advanced practitioners, offering tutorials and state-of-the-art methodologies for effective learning and application.

An Introduction into Computational Intelligence

Evolutionary Computation

Evolutionary computation algorithms leverage biological evolution principles to find optimal solutions to complex problems. Originating in the 1950s, these algorithms utilize Darwinian concepts like "survival of the fittest" for automated problem-solving, marking the foundation of evolutionary computation (EC).

Evolutionary computation (EC) draws inspiration from natural evolution and has demonstrated success across various domains. It encompasses genetic algorithms (GAs), evolution strategies (ES), and evolutionary programming (EP), collectively referred to as evolutionary algorithms (EAs). Central to EC algorithms is a population of individuals processed through operators such as reproduction, random variation, and selection, evaluated by fitness functions. These operators aim to produce candidates with increasingly higher fitness values. Selection relies on the fitness function, which determines how effectively a candidate meets the objectives and influences its survival probability. This "generate and test" problem-solving approach favors candidates with superior fitness for future iterations.

The process of an evolutionary algorithm (EA) begins with the random initialization of a population. Crossover follows, where segments of binary strings from different parent candidates are combined to create new child candidates, aiming to enhance the fitness value through the recombination of beneficial information. Various crossover operators, such as single-point and multi-point crossover, can be utilized. The next phase, mutation, introduces new information into the population, facilitating global exploration; for instance, in bitwise mutation, a "0" can be flipped to a "1" in a binary string. After mutation, the child candidates are formed, and selection is performed to favor candidates with higher fitness values for future generations. Techniques like roulette-wheel selection assign selection probabilities based on fitness, while tournament selection randomly compares two candidates and selects the one with the superior fitness value.
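The flow just described can be sketched as a minimal generational GA on binary strings, with tournament selection, single-point crossover, and bitwise mutation. This is a hedged illustration, not the book's implementation; the "one-max" fitness (the count of 1-bits) and all parameter values are assumptions chosen to keep the example small:

```python
import random

def tournament(pop, fitness):
    # Tournament selection: compare two random candidates, keep the fitter one.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover on binary strings.
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:]

def mutate(bits, rate=0.05):
    # Bitwise mutation: flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def ga(fitness, n_bits=16, pop_size=20, generations=50):
    # Random initialization, then generate-and-test iterations.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop, fitness), tournament(pop, fitness)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

# "One-max" toy problem: fitness is the number of 1-bits; the optimum is all ones.
best = ga(fitness=sum)
```

After 50 generations, the selection pressure of the tournament drives the population toward the all-ones string.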

Fig. 1.2 Key concepts in evolutionary algorithms: population (the set of individuals, i.e., candidate solutions); parent (a member of the current generation); children (members of the next generation); generation (the successively created populations, one per EA iteration); chromosome (a solution's coded form, a vector or string of genes with assigned alleles); fitness (a number assigned to a solution representing its desirability)


In the standard flow of a genetic algorithm (GA), the selection step is repeated until all positions in the new population are filled. If the stopping criterion, such as reaching a maximum number of generations, is satisfied, the best solution identified up to that point is output; otherwise, the process starts a new iteration.

In addition to genetic algorithms (GA), there are several other effective evolutionary algorithms (EAs), such as differential evolution (DE), particle swarm optimization, and the ant colony algorithm. While these algorithms differ from GA in various ways, they all operate within the evolutionary framework and have demonstrated significant success in real-world applications.
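Differential evolution, treated in depth later in the book (Sect. 2.3.2), can be sketched briefly. The following is a minimal DE/rand/1/bin generation step (differential mutation, binomial crossover, one-to-one greedy selection); the sphere objective and the control parameters F and CR are illustrative assumptions, not the book's settings:

```python
import random

def de_step(pop, f_obj, F=0.5, CR=0.9):
    # One generation of DE/rand/1/bin.
    n, d = len(pop), len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        # Mutation: donor v = a + F * (b - c) from three distinct other members.
        r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
        a, b, c = pop[r1], pop[r2], pop[r3]
        v = [a[k] + F * (b[k] - c[k]) for k in range(d)]
        # Binomial crossover: at least one gene (jrand) comes from the donor.
        jrand = random.randrange(d)
        u = [v[k] if (random.random() < CR or k == jrand) else x[k] for k in range(d)]
        # One-to-one greedy survivor selection (minimization).
        new_pop.append(u if f_obj(u) <= f_obj(x) else x)
    return new_pop

# Minimize the sphere function sum(x_k^2) over [-5, 5]^5.
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(30)]
for _ in range(100):
    pop = de_step(pop, sphere)
best = min(pop, key=sphere)
```

Because each individual is only ever replaced by a better trial vector, the best objective value never worsens from one generation to the next.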

Evolutionary algorithms (EAs) offer significant advantages over traditional methods like the Newton method, particularly for real-world challenges that are non-convex, discontinuous, and non-differentiable. One of the key strengths of EAs is their global search capability, which enables them to effectively identify the globally optimal solution of a given function.

Fuzzy Logic

The real world is complex and full of uncertainty, which is unavoidable in most practical engineering applications. However, the computer itself does not have the human ability to reason under such uncertainty.

Understanding of the concept of "tall" varies among individuals: one person may define it as over 5 feet, while another may set the threshold at 6 feet. Despite these differing perceptions, people can still communicate effectively about height due to the shared understanding of the term. In contrast, computers struggle with such ambiguity, requiring precise definitions to determine if someone qualifies as tall. To enhance computer intelligence, fuzzy logic has been developed, mirroring human brain functions to model complex systems. Central to fuzzy logic are fuzzy sets, introduced by Zadeh, which allow for decision-making under uncertainty. For instance, while a crisp set might strictly define "tall" as a height range of 5 to 7 feet, a fuzzy set recognizes a range around 6 feet, assigning varying degrees of membership to different heights. This approach allows for a more nuanced understanding of concepts like "tall," accommodating the imprecision inherent in human reasoning.

In fuzzy logic, a height of 3 feet has a membership value of 0 for the term "tall," just as in the crisp-set definition. As height increases between 4.8 and 7.2 feet, the membership value varies gradually between 0 and 1, with the strongest association to "tall" around 6 feet. At 7 feet, the membership value for "tall" is again low, but the same height may have a high membership value for the term "very tall." This illustrates how a single height can possess multiple membership values across different fuzzy sets, which are evaluated collectively.
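A membership function of this kind can be sketched as a simple piecewise-linear (triangular) function. The breakpoints below (4.8, 6.0, and 7.2 feet for "tall") follow the numbers in the text; the "very tall" set is an assumed illustration of how neighboring fuzzy sets overlap:

```python
def triangular(x, lo, peak, hi):
    # Triangular fuzzy membership: 0 outside [lo, hi], rising to 1 at the peak.
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def tall(height_ft):
    # "Tall": zero at 4.8 ft, strongest at 6.0 ft, fading out by 7.2 ft.
    return triangular(height_ft, 4.8, 6.0, 7.2)

def very_tall(height_ft):
    # Assumed neighboring set: a 7-foot person is mostly "very tall", barely "tall".
    return triangular(height_ft, 6.0, 7.2, 8.4)
```

With these definitions, a height of 7 feet has a low membership in "tall" but a high membership in "very tall", matching the behavior described above.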

Fuzzy sets are employed for fuzzy reasoning, which uses linguistic rules to make inferences. For example, if a man is tall and can move swiftly, he is likely to have good potential as a basketball player. The concepts involved are defined by fuzzy sets, which allow for calculations to determine membership values and select the appropriate rules. Ultimately, a decision is reached through the process of defuzzification.
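The rule-based reasoning above can be sketched minimally: the "AND" of fuzzy memberships is commonly taken as the minimum (a t-norm), and the centroid is one common defuzzification choice. The membership values and sample points used here are assumptions for illustration, not values from the book:

```python
def fuzzy_and(*memberships):
    # A common t-norm for "AND" in fuzzy rules is the minimum.
    return min(memberships)

# Rule: IF tall AND fast THEN good basketball potential.
# Assumed membership values for one particular person:
tall_mu, fast_mu = 0.8, 0.6
potential = fuzzy_and(tall_mu, fast_mu)   # rule firing strength

def defuzzify_centroid(points):
    # Centroid defuzzification over sampled (value, membership) pairs:
    # a weighted average of the output values by their memberships.
    num = sum(v * mu for v, mu in points)
    den = sum(mu for _, mu in points)
    return num / den if den else 0.0
```

The rule fires with strength 0.6 (limited by the weaker of its two conditions), and the centroid turns a fuzzy output set back into a single crisp decision value.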

Fig. 1.4 Height membership function for (a) a crisp set, (b) a fuzzy set (from [23])


Machine Learning

Fuzzy reasoning relies on predefined decision rules, allowing intelligent agents to act based on these rules. Additionally, these agents can autonomously generate new rules through a learning process. The external environment influences what the agent learns, while its internal decision-making determines how it acquires that knowledge.

Machine learning encompasses a performance component that dictates actions and a learning component that enhances decision-making. It is primarily categorized into three types: supervised learning, unsupervised learning, and reinforcement learning. This book focuses exclusively on supervised learning for tasks related to classification and prediction.

Classification and prediction are crucial techniques in data mining, widely applied across various scientific and engineering fields. Classification involves determining the category of new observations based on a training dataset with known classifications. In contrast, prediction focuses on estimating the function values for unassessed inputs using a training set of previously evaluated inputs and their corresponding outputs. In bioinformatics, for example, intelligent machines are employed to classify DNA sequences effectively.
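A minimal example of supervised classification is the nearest-neighbor rule: label a new observation with the class of the closest point in the labeled training set. The toy two-class data below are assumed for illustration:

```python
def nearest_neighbor_classify(train, query):
    # 1-NN classification: return the label of the training point
    # closest (in squared Euclidean distance) to the query.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda item: dist2(item[0], query))
    return label

# Assumed training set: (feature vector, class label) pairs.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.8), "B")]
label = nearest_neighbor_classify(train, (0.3, 0.1))
```

A query near the origin is labeled "A", while one near (5, 5) would be labeled "B"; prediction (regression) differs only in that the outputs are numbers to be interpolated rather than class labels to be voted on.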

Classification is also crucial in communication, for example in speech and handwriting recognition. In economics, numerous stock market analysis tools rely on predictive models. This book explores statistical learning techniques for prediction, which can require significant computational resources.

Mm-wave IC design automation requires extensive and costly electromagnetic (EM) simulations to evaluate design candidates, and using traditional evolutionary algorithms (EAs) for synthesis can lead to impractical run times. To improve optimization efficiency, statistical machine learning techniques are employed to learn online and predict function values through active learning. This approach is central to the research on surrogate model assisted evolutionary algorithms (SAEAs).

Fundamental Concepts in Optimization

Electronic design automation problems are typically cast as optimization problems. The fundamental concepts of optimization used in the following chapters are introduced in this section.

An unconstrained single-objective optimization problem can be described by (1.1):

minimize f(x)
s.t. x ∈ [a, b]^d     (1.1)

In optimization problems, the vector of decision variables is represented by x, whose dimension is denoted by d. The search ranges of the decision variables are defined by the interval [a, b]^d. The goal is to minimize the objective function f(x): a point x* ∈ [a, b]^d is globally optimal if no other point x ∈ [a, b]^d yields a lower value f(x) < f(x*). The term "s.t." stands for "subject to," and introduces any constraints applied to the optimization.

An unconstrained multi-objective optimization problem aims to minimize multiple objectives, represented as f1(x), f2(x), …, fm(x), within the decision variable ranges [a, b]^d. Multi-objective optimization techniques can be classified into two main categories: a priori methods, where a decision maker establishes preferences to convert the multi-objective problem into a single-objective one through aggregation techniques, and a posteriori methods, which generate a set of optimal trade-off candidate solutions for the decision maker to evaluate. A Pareto optimal solution represents a best trade-off among the objectives, and there can be numerous Pareto optimal solutions, collectively known as the Pareto set (PS); their corresponding representations in the objective space are referred to as the Pareto front (PF).
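Pareto optimality can be made concrete with a small helper, assuming minimization of all objectives: one solution dominates another if it is no worse in every objective and strictly better in at least one, and the non-dominated subset of a sample approximates the Pareto front:

```python
def dominates(f_a, f_b):
    # In minimization, objective vector a dominates b if a is no worse in
    # every objective and strictly better in at least one.
    return (all(x <= y for x, y in zip(f_a, f_b))
            and any(x < y for x, y in zip(f_a, f_b)))

def pareto_front(points):
    # Keep the non-dominated objective vectors (an approximation of the PF).
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy bi-objective sample: (3, 3) and (6, 6) are dominated by (2, 2).
front = pareto_front([(1, 5), (2, 2), (5, 1), (3, 3), (6, 6)])
```

Of the five sample points, only (1, 5), (2, 2), and (5, 1) survive: each trades one objective against the other and none is beaten on both.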

A constrained optimization problem aims to minimize a function f(x) while adhering to constraints g_i(x) ≤ 0, i = 1, 2, …, k, within a specified range x ∈ [a, b]^d. In single-objective minimization, a lower function value indicates a better candidate solution, while constraint values must remain non-positive; for instance, g_1(x) = −10 satisfies the constraint with more margin than g_1(x) = −5, while g_1(x) = 1 is infeasible. The primary objective is to identify the optimal solutions that satisfy all constraints. Although the example illustrates a single-objective scenario, constrained optimization can also involve multiple objectives, where the goal is to approximate the Pareto front while meeting all constraints.
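Selection-based constraint handling, which the book develops in Sect. 2.4.2, can be sketched with the feasibility rules commonly used in EAs; the exact rules in the book may differ, so treat this as an assumed illustration: a feasible candidate beats an infeasible one, two feasible candidates are compared by objective value, and two infeasible ones by total constraint violation:

```python
def violation(g_values):
    # Total constraint violation: only positive g_i(x) values count,
    # since g_i(x) <= 0 means the constraint is satisfied.
    return sum(max(0.0, g) for g in g_values)

def better(cand_a, cand_b):
    # Each candidate is (objective value, [g_1(x), ..., g_k(x)]), minimization.
    fa, ga = cand_a
    fb, gb = cand_b
    va, vb = violation(ga), violation(gb)
    if va == 0.0 and vb == 0.0:
        return cand_a if fa <= fb else cand_b   # both feasible: lower objective
    if va == 0.0 or vb == 0.0:
        return cand_a if va == 0.0 else cand_b  # feasible beats infeasible
    return cand_a if va <= vb else cand_b       # both infeasible: less violation
```

This comparison needs no penalty weights, which is one reason such rules are popular as drop-in replacements for plain objective comparison in EAs.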

Stochastic optimization involves single or multiple objectives, and possibly constraints, that are influenced by random or fuzzy variables. This type of optimization is characterized by variability in the evaluations of the objectives or constraints, even when the decision variables remain constant. Various methods exist to address these uncertainties, including the expected value, chance value, and dependent-chance value formulations, which will be explored in subsequent chapters.
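The expected-value formulation can be illustrated with a Monte Carlo estimate: for a fixed design x, the uncertain objective is sampled many times and averaged. The Gaussian disturbance and the toy objective below are assumptions for illustration only:

```python
import random

def expected_value(objective, x, n_samples=2000):
    # Monte Carlo estimate of E[f(x, xi)] over a random disturbance xi ~ N(0, 1).
    total = sum(objective(x, random.gauss(0.0, 1.0)) for _ in range(n_samples))
    return total / n_samples

# Toy noisy objective: f(x, xi) = (x - 2)^2 + xi, so E[f] = (x - 2)^2
# because the zero-mean noise averages out.
noisy = lambda x, xi: (x - 2.0) ** 2 + xi
```

Note the key property of stochastic optimization: two evaluations of `noisy` at the same x generally differ, and only the sampled average is a stable quantity to optimize.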

Computationally expensive optimization encompasses various categories of optimization problems characterized by time-consuming function evaluations of the objectives or constraints. High computational costs can arise from factors such as the need for numerous statistical samples in analog circuit yield optimization, where each sample's evaluation may be quick, but the total time accumulates significantly. Similarly, in electromagnetic (EM) simulation, while each candidate requires only one simulation, the simulation itself is lengthy. The subsequent chapters will delve into single- and multi-objective expensive optimization problems, addressing both constrained and unconstrained scenarios.

Benchmark test problems play a crucial role in intelligent optimization research, particularly within the field of evolutionary computation (EC). Researchers commonly utilize a variety of benchmark functions (mathematical functions characterized by complexities such as non-differentiability, multi-modality, high dimensionality, and non-separability) to evaluate and compare optimization algorithms. Despite their challenging nature, many benchmark functions have known globally optimal solutions, allowing for quantitative assessment of algorithm performance. This book will reference and employ benchmark problems throughout its discussions.
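As one concrete example of such a benchmark, the widely used Rastrigin function is highly multi-modal yet has a known global minimum of 0 at the origin, which is exactly what makes quantitative comparison of algorithms possible:

```python
import math

def rastrigin(x):
    # Rastrigin benchmark on [-5.12, 5.12]^d: many regularly spaced local
    # optima, with the known global minimum f(0, ..., 0) = 0.
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
```

Because the optimum is known, an algorithm's result can be scored directly as its distance from 0, rather than only relative to other algorithms.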

Design and Computer-Aided Design of Analog/RF IC

Overview of Analog/RF Circuit

The process of chip design begins with the breakdown of a large analog/RF system into smaller sub-blocks, each containing numerous transistors and passive components. These sub-blocks are further decomposed to the cell level, where analog/RF cells perform specific functions such as amplification or mixing. Decomposition can follow a top-down approach, starting with the design of the highest level to define the sub-block specifications, or a bottom-up approach, which begins with the design of the individual components. This structured methodology ensures efficient design and functionality at every level of the chip.

The analog cell design flow, illustrated in Fig. 1.5, creates Pareto-optimal surfaces from lower-level blocks, defining the design space for higher levels until the top level is reached. This bottom-up design approach requires multi-objective optimization, a process facilitated by CAD tools. While this book does not delve into CAD tools for hierarchical system design, it emphasizes CAD methods specifically for designing analog and RF cells.

The design flow of an analog cell begins with defining the design specifications, which guide the designer in selecting or modifying a circuit topology. Once the topology is established, the designer focuses on specifying parameters such as transistor sizes, capacitor values, and biasing voltages or currents to ensure the circuit meets the required specifications in simulation. If the simulation results fall short, adjustments may be necessary, including resizing or changing the topology. Upon successful simulation, the layout phase begins, translating the parametric design into a physical layout for fabrication. However, the layout introduces parasitics due to the non-ideal effects of silicon materials and interconnects, which must be accounted for in the design.

Layout verification is therefore necessary: the parasitics are extracted, and a more complex simulation including the parasitics is then performed. Again, if the specifications are not met, a redesign cycle is needed. If the post-layout simulation result meets the specifications, the design can be fabricated into a real chip. However, it is not guaranteed that the fabricated chip will work; if the measurement results do not meet the specifications, the problem must be found and the circuit re-designed. The whole process is often time-consuming.

The RF IC design flow has similarities with the analog IC design flow described in Fig. 1.5. One of the main differences is that a large amount of effort must be spent on the integrated passive components (e.g., inductors and transformers, rather than transistors), especially at mm-wave frequencies. At such frequencies, the wavelengths of the signals are roughly the same as the dimensions of the devices, and lumped-element circuit theory becomes inaccurate. Due to the distributed effects, most equivalent circuit models are narrow-band: one can build a good parasitic-aware equivalent circuit model at 60 GHz, but the same model cannot describe the behavior of the passive component at 70 GHz with reasonable accuracy.

In low-frequency analog IC design, equivalent circuit models are essential for manual design processes. However, in RF IC design, such models are frequently unavailable, leading designers to depend on their experience, intuition, and less efficient simulations to predict circuit performance and adjust design parameters.

1.4.2 Overview of the Computer-Aided Design

As technology advances, the use of CAD tools has become essential for circuit design. These tools can be categorized into five main classes: analysis tools, insight tools, editors, verification tools, and design automation tools. Among these, analysis tools serve as evaluators, assessing the performance of a design in a forward manner, while design automation tools operate in the backward direction, delivering an optimized design that meets the specified requirements.

The most famous analysis tool for analog ICs is the SPICE circuit simulator [29, 30]. SPICE is a critical tool in both manual design and CAD, and is widely used for circuit simulation throughout IC design. The steady advance in computing power has made "simulation in the loop" design automation methodologies practical. SPICE requires as inputs the technology device model parameters, a netlist describing the circuit, and the types of analysis to be performed. The technology model accurately represents the devices, while the netlist contains the circuit topology, device sizes and biasing conditions. Kirchhoff's laws are then applied to analyze the circuit, determining the node voltages and device currents, from which the performance results are calculated. Typical analyses include DC, AC and transient analysis.

DC analysis determines the biasing conditions of the devices, AC analysis evaluates the small-signal performance across frequencies, and transient analysis examines the behavior of the circuit over time. Together, these analyses enable a comprehensive evaluation of an analog IC's performance.

For post-layout simulation, the resistance and capacitance extraction (RCX) tool within Mentor Graphics' Calibre is a representative example: it creates a new netlist that incorporates all layout-induced parasitics for SPICE simulation.

In RF IC design, the key analysis tools include electromagnetic (EM) analysis, small-signal analysis and steady-state analysis. At mm-wave frequencies, traditional parasitic-aware equivalent circuit models for passive devices become inaccurate, so EM simulation is needed to calculate the S-parameters. From the S-parameters, many electrical properties, such as gain, return loss, reflection coefficient and amplifier stability, can be evaluated using matched loads, which simplifies RF IC design. The development of EM simulation algorithms is a significant research area in itself; popular EM simulators include Momentum, CST and IE3D. SPICE is also employed in RF IC design to perform circuit simulations using the S-parameter models provided by the EM simulators. Steady-state analysis, such as harmonic balance simulation, is also important in RF IC analysis.
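To make the role of S-parameters concrete: the return loss mentioned above follows directly from the reflection coefficient S11. The small sketch below is a generic illustration, not tied to any particular EM simulator.

```python
import math

def return_loss_db(s11):
    """Return loss in dB from the (complex) reflection coefficient S11:
    RL = -20*log10(|S11|); a larger value indicates a better match."""
    return -20.0 * math.log10(abs(s11))

# A port reflecting 10% of the incident wave amplitude:
print(return_loss_db(0.1 + 0j))  # 20.0 dB
```

At |S11| = 1 (total reflection) the return loss is 0 dB; as the port approaches a perfect match (|S11| → 0), the return loss grows without bound.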

Analog EDA tools are primarily categorized into several types, including system design tools, topology generation tools, analog cell optimization tools, and yield optimization tools, as well as combinations of these categories.

System design tools are primarily aimed at analog system-level design, with some offering features for lower-level cell sizing. An example of this is the multi-objective bottom-up (MOBU) method, which utilizes multi-objective optimization techniques to build the analog system from the Pareto fronts of the basic cells.

Topology generation tools assist designers in selecting, designing, and sizing topologies. A prime example is the MOJITO system, which creates a vast array of potential topologies through a hierarchically organized combination of analog cells. The system employs a search algorithm to explore the design space and conducts multi-objective optimization.

Analog cell optimization tools utilize predefined topologies to size analog building blocks effectively. These tools aim to optimize specific goals, such as power consumption, while adhering to the design specifications set by the designer.


High-yield design has become increasingly important in recent years, and tools are available that enhance the yield by fine-tuning the design parameters. Advanced tools integrate yield considerations directly into the sizing process, resulting in circuits that are optimized for both performance and yield. This approach is known as yield/variation-aware analog IC sizing, exemplified by the WiCkeD tool.

Combination tools can finish more than one of the above tasks. For example, the MOJITO-N tool [40] can generate a circuit topology considering variation-aware sizing.

Design automation tools for RF ICs are currently limited in functionality and still developing. Existing tools primarily focus on low-GHz RF IC synthesis (typically under 10 GHz), relying on equivalent circuit models for the passive components. These tools often require a layout template, which restricts the design flexibility and may exclude optimal solutions. In addition, some in-house tools used by RF IC designers can address mm-wave RF IC optimization, but they depend heavily on a good initial design and merely tune the parameters for performance enhancement, falling short of true design automation.

RF IC optimization is sometimes regarded as merely a high-frequency variant of low-frequency analog IC optimization, suggesting that the existing tools could be applied directly; this is not the case. For instance, in a case study on the design of a mm-wave amplifier, the "simulation in the loop" method, even when paired with an advanced global optimization algorithm, required a 9-day design cycle. Although the outcome was satisfactory, such a turnaround fails to meet the rapid time-to-market demands, highlighting the need for further research to enhance the efficiency of RF IC design automation while preserving strong optimization ability.

1.5 Summary

This introductory chapter has outlined the fundamental concepts of computational intelligence and the automation of analog and RF IC design, and has reviewed the developments and challenges in these areas. The subsequent chapters will delve into algorithm frameworks and practical algorithms aimed at addressing the issues discussed herein.

References

1 Russell S, Norvig P, Canny J, Malik J, Edwards D (1995) Artificial intelligence: a modern approach. Prentice Hall, Englewood Cliffs, New Jersey

2 Eiben A, Smith J (2003) Introduction to evolutionary computing Springer Verlag, Berlin

3 Dreslinski R, Wieckowski M, Blaauw D, Sylvester D, Mudge T (2010) Near-threshold computing: reclaiming Moore's law through energy efficient integrated circuits Proc IEEE 98(2):253–266

4 ITRS (Sept 2011) ITRS report http://www.itrs.net/

5 Liu B, Wang Y, Yu Z, Liu L, Li M, Wang Z, Lu J, Fernández F (2009d) Analog circuit optimization system based on hybrid evolutionary algorithms Integr VLSI J 42(2):137–148

6 Gielen G, Eeckelaert T, Martens E, McConaghy T (2007) Automated synthesis of complex analog circuits In: Proceedings of 18th european conference on circuit theory and design, pp 20–23

7 McConaghy T, Palmers P, Gao P, Steyaert M, Gielen G (2009a) Variation-aware analog structural synthesis: a computational intelligence approach Springer Verlag, Berlin

8 Liu B, Fernández F, Gielen G (2011a) Efficient and accurate statistical analog yield optimization and variation-aware circuit sizing based on computational intelligence techniques IEEE Trans Comput Aided Des Integr Circ Syst 30(6):793–805

9 Niknejad A, Hashemi H (2008) mm-Wave silicon technology: 60GHz and beyond Springer Verlag, New York

10 Choi K, Allstot D (2006) Parasitic-aware design and optimization of a CMOS RF power amplifier IEEE Trans Circ Syst I Regul Pap 53(1):16–25

11 Tulunay G, Balkir S (2008) A synthesis tool for CMOS RF low-noise amplifiers IEEE Trans Comput Aided Des Integr Circ Syst 27(5):977–982

12 Ramos J, Francken K, Gielen G, Steyaert M (2005) An efficient, fully parasitic-aware power amplifier design optimization tool IEEE Trans Circ Syst I Regul Pap 52(8):1526–1534

13 Balanis C (1982) Antenna theory: analysis and design Wiley, New York

14 Poian M, Poles S, Bernasconi F, Leroux E, Steffé W, Zolesi M (2008) Multi-objective optimization for antenna design In: Proceedings of IEEE international conference on microwaves, communications, antennas and electronic systems, pp 1–9

15 Yeung S, Man K, Luk K, Chan C (2008) A trapeizform U-slot folded patch feed antenna design optimized with jumping genes evolutionary algorithm IEEE Trans Antennas Propag 56(2):571–577

16 Mezura-Montes E (2009) Constraint-handling in evolutionary optimization Springer Verlag, Berlin

17 Hart W, Krasnogor N, Smith J (2005) Recent advances in memetic algorithms Springer Verlag, Berlin

18 Fogel D (2006) Evolutionary computation: toward a new philosophy of machine intelligence. Wiley-IEEE Press, Piscataway

19 Coello C, Lamont G, Veldhuizen D (2007) Evolutionary algorithms for solving multi-objective problems Springer-Verlag, New York

20 Price K, Storn R, Lampinen J (2005) Differential evolution: a practical approach to global optimization Springer-Verlag, New York

21 Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization Swarm Intell 1(1):33–57

22 Dorigo M, Birattari B, Stützle T (2006) Ant colony optimization IEEE Comput Intell Mag 1(4):28–39

23 Ross T (1995) Fuzzy logic with engineering applications Wiley, New York

24 Zadeh L (1965) Fuzzy sets Inf control 8(3):338–353

25 Liu B (2002) Theory and practice of uncertain programming Physica Verlag, Heidelberg

26 Eiben A, Bäck T (1997) Empirical investigation of multiparent recombination operators in evolution strategies Evol Comput 5(3):347–365

27 Gielen G, McConaghy T, Eeckelaert T (2005) Performance space modeling for hierarchical synthesis of analog integrated circuits In: Proceedings of the 42nd annual design automation conference, pp 881–886

28 Liu B et al (2012) An efficient high-frequency linear RF amplifier synthesis method based on evolutionary computation and machine learning techniques IEEE Trans Comput Aided Des Integr Circ Syst

29 Synopsys (2013) HSPICE homepage http://www.synopsys.com/Tools/Verification/ AMSVerification/CircuitSimulation/HSPICE/Pages/default.aspx

30 Cadence (2013) cadence design system homepage http://www.cadence.com/us/pages/default. aspx

31 Mentor-Graphics (2013) Mentor graphics homepage http://www.mentor.com/

32 Yu W (2009) Electromagnetic simulation techniques based on the FDTD method Wiley, New York

33 Agilent (2013) Agilent technology homepage http://www.home.agilent.com/

34 CST (2013) CST computer simulation technology homepage http://www.cst.com/

35 Eeckelaert T, McConaghy T, Gielen G (2005) Efficient multiobjective synthesis of analog circuits using hierarchical pareto-optimal performance hypersurfaces In: Proceedings of the conference on design, automation and test, pp 1070–1075

36 McConaghy T, Palmers P, Gielen G, Steyaert M (2007) Simultaneous multi-topology multi-objective sizing across thousands of analog circuit topologies In: Proceedings of the 44th design automation conference, pp 944–947

37 Medeiro F, Rodríguez-Macías R, Fernández F, Domínguez-Castro R, Huertas J, Rodríguez-Vázquez A (1994b) Global design of analog cells using statistical optimization techniques Analog Integr Circ Sig Process 6(3):179–195

38 Medeiro F, Fernández F, Dominguez-Castro R, Rodriguez-Vazquez A (1994a) A statistical optimization-based approach for automated sizing of analog cells In: Proceedings of the IEEE/ACM international conference on Computer-aided design, pp 594–597

39 MunEDA (2013) MunEDA homepage http://www.muneda.com/index.php

40 McConaghy T, Palmers P, Gielen G, Steyaert M (2008) Genetic programming with reuse of known designs for industrially scalable, novel circuit design, Chap 10 Genetic Programming Theory and Practice V Springer, pp 159–184

41 Allstot D, Choi K, Park J (2003) Parasitic-aware optimization of CMOS RF circuits Springer, New York

2 Fundamentals of Optimization Techniques in Analog IC Sizing

Chapters 2, 3 and 4 focus on the sizing of high-performance analog integrated circuits under nominal conditions, laying the groundwork for advanced topics such as variation-aware analog IC sizing and electromagnetic-simulation-based mm-wave integrated circuit and antenna synthesis. This chapter defines the core problem and introduces commonly used evolutionary algorithms along with fundamental constraint handling methods.

This chapter is structured as follows: Section 2.1 presents the problem at hand, while Section 2.2 reviews existing methods for analog circuit sizing. In Section 2.3, we introduce the general implementation of an evolutionary algorithm (EA), focusing on the differential evolution (DE) algorithm, which serves as the primary search mechanism in this book. Section 2.4 discusses two fundamental constraint handling techniques, and Section 2.5 presents two commonly used multi-objective evolutionary algorithms (MOEAs). Section 2.6 provides two practical examples, and Section 2.7 summarizes the chapter.

2.1 Analog IC Sizing: Introduction and Problem Definition

VLSI technology is increasingly integrating mixed analog-digital circuits into complete systems-on-a-chip, but designing the analog components remains challenging due to their complexity and the expertise required. The absence of automated synthesis or sizing methodologies leads to prolonged design times, heightened complexity and increased costs, and it necessitates highly skilled designers. As a result, there is significant interest in developing automated synthesis methodologies for analog circuits. The design process involves both topological-level and parameter-level design; this book focuses on the latter, i.e., optimizing the parameter selection to enhance the performance of a given circuit topology, which is assumed to be provided by the designer.

B. Liu et al., Automated Design of Analog and High-frequency Circuits, Studies in Computational Intelligence 501, DOI: 10.1007/978-3-642-39162-0_2, © Springer-Verlag Berlin Heidelberg 2014


An analog IC sizing system serves two primary purposes: automating the design of parameters to eliminate tedious manual trade-offs, and addressing complex design challenges that are difficult to tackle by hand. For a circuit synthesis solution to be widely accepted, it must demonstrate accuracy, ease of use, generality, robustness and acceptable run-time. Additionally, a high-performance analog IC sizing system should effectively handle intricate problems, closely align with designer requirements and achieve highly optimized outcomes. Numerous parameter-level design strategies, methods and tools have emerged in recent years, some of which have even been commercialized; they are reviewed in Sect. 2.2.

Analog circuit sizing is commonly framed as a constrained optimization problem, which can be either single- or multi-objective. In the single-objective case, the goal is typically to minimize an objective such as power consumption while satisfying constraints such as keeping the DC gain above a predefined threshold. This can be written as:

  minimize f(x)
  subject to g(x) ≥ 0, h(x) = 0
  x ∈ [X_L, X_H]

Here, f(x) is the objective (performance) function to be minimized; h(x) = 0 are the equality constraints, which mainly follow from Kirchhoff's current law (KCL) and voltage law (KVL); x is the vector of design variables, with X_L and X_H its lower and upper bounds; and g(x) ≥ 0 are the user-defined inequality constraints. For instance, for a given circuit topology, the sizing problem may be to optimize the sizing and biasing of all devices (transistors, capacitors, etc.) to minimize the power consumption, subject to constraints such as a DC gain of at least 80 dB, a GBW of at least 2 MHz, a phase margin of at least 50° and a slew rate of at least 1.5 V/µs.
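To illustrate how such a constrained formulation can be reduced to a single cost function, the sketch below applies a static penalty (a constraint handling technique discussed later in this book) to the amplifier example above. The `simulate` function is a hypothetical placeholder for a SPICE-in-the-loop performance evaluation, not a real simulator interface.

```python
# Static-penalty formulation of the amplifier sizing example (a sketch).
def simulate(x):
    # Hypothetical stand-in: in practice this would run a circuit simulation
    # for design point x and return the measured performances.
    return {"power": sum(x), "gain_db": 85.0, "gbw_mhz": 2.5,
            "pm_deg": 55.0, "sr_v_us": 1.6}

def penalized_cost(x, weight=1e3):
    perf = simulate(x)
    # Inequality constraints rewritten in the standard form g(x) >= 0.
    g = [perf["gain_db"] - 80.0,   # DC gain >= 80 dB
         perf["gbw_mhz"] - 2.0,    # GBW >= 2 MHz
         perf["pm_deg"] - 50.0,    # phase margin >= 50 degrees
         perf["sr_v_us"] - 1.5]    # slew rate >= 1.5 V/us
    # Sum of constraint violations (zero when all constraints are met).
    violation = sum(max(0.0, -gi) for gi in g)
    return perf["power"] + weight * violation
```

With all constraints satisfied, `penalized_cost` reduces to the power objective alone; any violation adds a penalty proportional to its size, steering the optimizer back toward the feasible region.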

In multi-objective analog circuit sizing, multiple performance metrics are optimized concurrently to obtain an approximation of the Pareto-optimal front. This allows the trade-offs and sensitivities among the different objectives to be analyzed effectively.

1 The maximization of a design objective can easily be transformed into a minimization problem by just inverting its sign.


In a CMOS three-stage amplifier design, multi-objective sizing allows the designer to set each performance metric as either an objective or a constraint. Objectives involve exploring trade-offs among the performances, while constraints only define minimum or maximum acceptable values without considering trade-offs. Typically, only a few performances are treated as objectives and the others are designated as constraints, since the designer may only care about specific trade-offs. Additionally, functional constraints, such as ensuring that the transistors operate in the saturation region, must be satisfied. For instance, the design goals may be to minimize both power and area, while ensuring that the DC gain exceeds 80 dB, the gain-bandwidth product (GBW) is larger than 2 MHz, the phase margin is at least 50° and the slew rate is no less than 1.5 V/µs.
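The Pareto-optimal trade-offs referred to above rest on the notion of dominance. A generic helper (minimization assumed for all objectives; this is an illustration, not code from the book) can be written as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

# (power, area) trade-off points:
print(dominates((1.0, 2.0), (1.5, 2.0)))  # True: better power, equal area
print(dominates((1.0, 3.0), (1.5, 2.0)))  # False: the points are incomparable
```

The Pareto front is the set of candidate designs that no other candidate dominates; multi-objective EAs such as NSGA-II and MOEA/D, introduced later, approximate this set.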

2.2 Review of Analog IC Sizing Approaches

Analog integrated circuit sizing can be achieved through two primary methods: knowledge-based and optimization-based approaches. Knowledge-based synthesis formulates design equations that allow the design parameters to be calculated directly from the specified performance characteristics.


The application of knowledge-based sizing tools to complex circuits and modern technologies is often hindered by inadequate solution quality, particularly in terms of accuracy and robustness. Additionally, these tools require significant preparatory time and effort to create the design plans or equations, are difficult to port to different technologies, and are restricted to a narrow range of circuit topologies.

Optimization-based synthesis transforms the circuit design challenge into a function minimization problem, solvable through numerical methods. This approach employs a performance evaluator within an iterative optimization loop. When the evaluator is equation-based, the circuit behavior is captured by analytic equations; however, deriving these equations is time-consuming and may introduce inaccuracies due to the necessary simplifications. In contrast, simulation-based methods use SPICE-like simulations for performance evaluation, offering superior accuracy and ease of use. This book emphasizes simulation-based optimization, linking circuit simulators such as HSPICE with programming environments such as MATLAB or C++. While this method incurs longer computation times, modern circuit simulation software keeps these durations manageable, typically ranging from minutes to tens of minutes for circuits with numerous transistors. Analog circuit optimization techniques are generally categorized into deterministic optimization algorithms and stochastic search algorithms. Traditional deterministic methods, such as Newton and Levenberg-Marquardt, are available in commercial simulators, but face challenges such as the need for a good starting point and the risk of being trapped in local minima. Researchers have proposed solutions, such as improved initial point determination and geometric programming methods, which guarantee convergence to a global minimum.
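The coupling between a circuit simulator and a programming environment can be organized as in the following sketch. The netlist template, the parameter names and the commented-out HSPICE invocation are illustrative assumptions only, not an actual tool interface.

```python
# "Simulation in the loop" skeleton (a sketch with a made-up netlist template).
NETLIST_TEMPLATE = """* two-stage amplifier (illustrative template, not a working design)
M1 out in bias vss nmos W={w1}u L={l1}u
C1 out 0 {cc}p
.op
.end
"""

def render_netlist(params):
    # Fill the candidate design variables into the netlist template.
    return NETLIST_TEMPLATE.format(**params)

def evaluate(params):
    netlist = render_netlist(params)
    # In a real optimization loop, a SPICE-like simulator would now be invoked
    # on a file containing `netlist`, e.g. via
    #   subprocess.run(["hspice", "candidate.sp", "-o", "candidate"])
    # and its output parsed to extract the performances for the cost function.
    return netlist
```

Each iteration of the optimizer thus renders a netlist for the current candidate, runs the simulator, and feeds the extracted performances back into the cost function.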

Geometric programming, however, requires a specialized formulation of the design equations and therefore shares some drawbacks with the equation-based methods. Recent research has consequently focused on stochastic search algorithms, particularly evolutionary computation (EC) methods such as genetic algorithms, differential evolution and genetic programming.

In the area of analog IC sizing, genetic algorithms (GA) have emerged as effective optimization tools, widely used in both industry and academia. To handle the practical design constraints, many existing approaches incorporate the penalty function method.

2.3 Implementation of Evolutionary Algorithms

2.3.1 Overview of the Implementation of an EA

Chapter 1 introduced the fundamental concepts of evolutionary computation using the genetic algorithm as an example. This section focuses on the implementation of an evolutionary algorithm (EA). An EA program typically follows the procedure below.

Step 1: Initialize the population of individuals by random generation.

Step 2: Evaluate the fitness of each individual in the initial population.

Step 3: Evolution process until the termination condition is met:

Step 3.1: Select the parent individuals for reproduction.

Step 3.2: Generate the offspring (child population) from the parent individuals through crossover and mutation.

Step 3.3: Evaluate the fitness of each individual in the child population.

Step 3.4: Update the population by replacing less fit individuals by individuals with good fitness.
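The skeleton above maps directly onto a short program. The sketch below uses a real-valued representation with simple placeholder operators (binary tournament selection, uniform crossover, Gaussian mutation); these specific operator choices are illustrative, not prescribed by the text.

```python
import random

def evolve(fitness, dim, pop_size=20, generations=50, bounds=(0.0, 1.0)):
    lo, hi = bounds
    # Step 1: random initialization; Step 2: evaluate the initial fitness.
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(ind) for ind in pop]
    for _ in range(generations):                      # Step 3: evolution loop
        def pick():
            # Step 3.1: binary-tournament parent selection (minimization).
            a, b = random.randrange(pop_size), random.randrange(pop_size)
            return pop[a] if fit[a] < fit[b] else pop[b]
        children = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            # Step 3.2: uniform crossover plus small Gaussian mutation.
            child = [random.choice(pair) + random.gauss(0, 0.01)
                     for pair in zip(p, q)]
            children.append([min(hi, max(lo, x)) for x in child])
        child_fit = [fitness(c) for c in children]    # Step 3.3: evaluation
        # Step 3.4: keep the best pop_size individuals of parents + children.
        merged = sorted(zip(fit + child_fit, pop + children), key=lambda t: t[0])
        fit = [f for f, _ in merged[:pop_size]]
        pop = [p for _, p in merged[:pop_size]]
    return pop[0]  # best individual found
```

For instance, `evolve(lambda x: sum(v * v for v in x), dim=3)` drives the population toward the minimum of a simple quadratic function.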

Different EAs use different solution representations, such as binary strings, real numbers and computer program structures; the corresponding common types of EAs are the canonical genetic algorithm, evolution strategies and genetic programming, respectively. In addition, a good search performance relies on an appropriate combination of the mutation, crossover and selection operators.

The targeted problems in this book, the design automation of analog and RF ICs, are numerical optimization problems. The differential evolution (DE) algorithm, which is based on vector differences, is among the most powerful EAs for continuous global optimization and is therefore selected as the basic search engine throughout this book; it is introduced in the following as a key example of an EA. Note that other competitive computational intelligence methods, such as real-coded genetic algorithms, particle swarm optimization and evolution strategies, also show significant potential for addressing these optimization problems.


2.3.2 Differential Evolution

The problems addressed in this book aim to enhance certain system properties through the selection of the system (design) parameters, and the performances of the candidates are evaluated by simulation. Such problems are often classified as black-box optimization problems, which may be nonlinear, non-convex and non-differentiable. For this kind of problem, DE is acknowledged as a highly effective evolutionary algorithm for global optimization over continuous spaces.

Users generally demand that a practical optimization technique fulfills the following requirements [25] (minimization is considered):

(1) Ability to handle non-differentiable (gradients are not available or are difficult to calculate), nonlinear and multimodal cost functions (more than one or even numerous local optima exist).

(2) Parallelizability to cope with computation-intensive cost functions.

(3) Ease of use, i.e., few control variables to steer the minimization. These variables should also be robust and easy to choose.

(4) Good convergence properties, i.e., consistent convergence to the global minimum in consecutive independent trials.

DE was designed to fulfill all of the above requirements. In particular, to address requirement (3), DE borrows from the Nelder-Mead algorithm the idea of employing information from within the vector population to alter the search space. DE's self-organizing scheme takes the difference vector of two randomly chosen population vectors to perturb an existing vector. Extensive testing under diverse conditions has demonstrated DE's high performance on complex benchmark problems. Although theoretical descriptions of convergence properties exist for many optimization methods, only thorough empirical testing can validate their effectiveness in practice.

DE scores very well in this regard for complex benchmark problems [25,26].

In the following, the DE operations are described in detail.

DE is a parallel direct search method which utilizes a population of N_P d-dimensional parameter vectors x_i(t) = [x_i,1, x_i,2, ..., x_i,d], i = 1, 2, ..., N_P, for each iteration t. The population size N_P does not change during the minimization process. The initial vector population is chosen randomly and should cover the entire parameter space; typically, a uniform probability distribution for all random decisions is assumed. If a preliminary solution is available, the initial population is often generated by adding normally distributed random deviations to this nominal solution. DE generates new parameter vectors by adding the weighted difference between two population vectors to a third vector. Let this operation be called mutation. The mutated vector's parameters are then mixed with the parameters of another predetermined vector (the corresponding vector generated in the last iteration), the target vector, to yield the so-called trial vector. Parameter mixing is often referred to as "crossover" in the EC community and will be explained later in more detail. If the trial vector yields a lower cost function value than the target vector, the trial vector replaces the target vector in the following generation. This last operation is called selection. Each population vector has to serve once as the target vector, so that N_P competitions take place in one generation (iteration).

More specifically DE’s basic strategy can be described as follows:

For each target vector x_i(t), i = 1, 2, ..., N_P, a mutant vector is generated according to

V_i(t+1) = x_r1(t) + F · (x_r2(t) − x_r3(t))

where r1, r2 and r3 are mutually different random integers chosen from {1, 2, ..., N_P}, which must also differ from the running index i, so that N_P must be at least four. F is a real constant factor in (0, 2] which controls the amplification of the differential variation x_r2(t) − x_r3(t); a typical choice for single-objective problems is F between 0.8 and 1. When using DE for multi-objective optimization, the selection of F depends on the specific search mechanism. Figure 2.2 illustrates the different vectors involved in the generation of V_i(t+1).

In order to increase the diversity of the perturbed parameter vectors, crossover is introduced. To this end, the trial vector:

Fig. 2.2 An example of the process for generating V_i(t+1) (two-dimensional)


Fig 2.3 Illustration of the crossover process for D = 7 parameters (from [25])
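Putting the mutation, crossover and selection operations described above together, a compact DE loop (the common DE/rand/1/bin variant, with the frequently used defaults F = 0.8 and CR = 0.9) can be sketched as follows. The implementation is a generic illustration, not the authors' own code.

```python
import random

def de_minimize(cost, bounds, np_=20, f=0.8, cr=0.9, iters=100):
    d = len(bounds)
    # Random initialization covering the whole parameter space.
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [cost(x) for x in pop]
    for _ in range(iters):
        for i in range(np_):
            # Mutation: v = x_r1 + F*(x_r2 - x_r3), with r1, r2, r3 and i
            # all mutually different.
            r1, r2, r3 = random.sample([j for j in range(np_) if j != i], 3)
            v = [pop[r1][k] + f * (pop[r2][k] - pop[r3][k]) for k in range(d)]
            # Binomial crossover with the target vector x_i; jrand guarantees
            # that at least one parameter comes from the mutant vector.
            jrand = random.randrange(d)
            u = [v[k] if (random.random() < cr or k == jrand) else pop[i][k]
                 for k in range(d)]
            # Clip the trial vector to the box bounds.
            u = [min(hi, max(lo, uk)) for uk, (lo, hi) in zip(u, bounds)]
            # Selection: the trial vector replaces the target if it is better.
            fu = cost(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    return pop[fit.index(min(fit))]
```

For example, `de_minimize(lambda x: sum(v ** 2 for v in x), [(-5.0, 5.0)] * 3)` converges toward the global minimum of the 3-dimensional sphere function at the origin.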
