Volume 1 Photovoltaic Solar Energy
1.05 – Historical and Future Cost Dynamics of Photovoltaic Technology
G. F. Nemet and D. Husmann, University of Wisconsin–Madison, Madison, WI, USA
© 2012 Elsevier Ltd. All rights reserved.
1.05.1 Introduction: Observed Reductions in the Cost of Photovoltaics
The cost of photovoltaics (PV) has declined by a factor of nearly 700 since the 1950s, more than that of any other energy technology in that period. At present, however, PV remains a niche electricity source and, in the overwhelming majority of situations, does not compete economically with conventional sources, such as coal and gas, or even with other renewable sources, such as wind and biomass. The extent to which the technology improves over the next few decades will determine whether PV reaches terawatt scale and makes a meaningful contribution to reducing greenhouse gas emissions or remains limited to niche applications.
In this chapter, we discuss the observed cost reductions in PV and the factors affecting them. We discuss the use of learning curves for forecasting PV costs and the nonincremental changes to technology that complicate such models. We include a discussion of an alternative forecasting methodology that incorporates R&D impacts and conclude with implications for policy and items for future progress.
Reductions have occurred across a wide variety of components within PV systems. Foremost, the cost of PV modules has declined from about $2700 W−1 in the 1950s to around $3 W−1 in 2006 (Figure 1) [1]. Although difficult to verify, claims that the marginal cost of manufacturing modules is as low as $1 W−1 are now widespread [2].
Comprehensive Renewable Energy, Volume 1 doi:10.1016/B978-0-08-087872-0.00103-7
Figure 2 Levelized cost of electricity generated from PV (2008$ kWh−1) [1]
Most assessments of PV have focused on the evolution of the core technology: the modules that convert sunlight to electricity. As the prices of modules have fallen, the rest of the components that comprise a PV system have accounted for an increasing share of the overall costs. A study of balance-of-system (BOS) prices in the 1990s found that the rate of technology improvement in BOS components has been quite similar to that of modules [3], which is rather surprising given the heterogeneous set of components and activities that fall under the rubric of BOS. Inverters, the largest hardware component of BOS, improved more slowly than modules; it was the noninverter costs – installation, wiring, mounting systems – that improved more quickly. Dispersion in the data for BOS components is higher than that for modules.
Ultimately, because technology adoption decisions are based on the cost of electricity produced, the cost of fully installed systems is what matters most. Recent work at Lawrence Berkeley National Laboratory has documented the cost of installed systems in the 2000s [4]. Costs have fallen from over $10 W−1 in 1998 to under $8 W−1 in 2007. Figure 2 shows the long-term decline in the cost of electricity from PV, a factor-of-800 reduction.
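A decline of this magnitude implies a surprisingly steady long-run pace. As a rough, back-of-the-envelope check (the 58-year span below is an illustrative assumption, not a figure from the chapter), a factor-of-800 total reduction corresponds to an average annual cost decline on the order of 11%:

```python
# Average annual fractional cost decline implied by a total reduction factor.
# The 58-year span (e.g., 1950-2008) is illustrative only.
def annual_decline_rate(total_factor, years):
    """Annual rate r such that (1 - r)**years equals 1/total_factor."""
    return 1.0 - total_factor ** (-1.0 / years)

rate = annual_decline_rate(800, 58)
print(f"{rate:.1%}")  # about 11% per year
```

The same function applied to the module-only decline (factor of ~900 from the 1950s to 2006) gives a similar rate, which is consistent with the roughly constant slope of the experience curve discussed in Section 1.05.3.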
1.05.2 What Caused the Factor-of-700 Reduction in the Cost of PV?
A variety of factors, including government activities, have enabled the nearly three-orders-of-magnitude reduction in the cost of PV over the past six decades. Despite this achievement, the technology still remains expensive, such that widespread deployment depends on substantial future improvement. No single determinant predominantly explains the improvement to date; R&D, economies of scale, learning by doing, and knowledge spillovers from other technologies have all played a role in reducing system costs. Moreover, interactions among factors that enable knowledge feedbacks – for example, between demand subsidies and R&D – have also proven important.
But not all of these factors were important for the entire sequence of the technology's development; certain factors dominated in particular periods. These stages have been correlated with shifts in the geographic foci of effort, both by the private sector and by the government: from 1970 to 1985, R&D to improve efficiencies and manufacturing techniques; from the 1990s to the early 2000s, long-term demand programs to enable economies of scale; and in the 2000s, efforts to stimulate local learning by doing to reduce installation costs, in addition to continuous improvements through manufacturing scale [5]. As alternatives to crystalline silicon PV emerge, efforts to improve them follow a similar cycle.
1.05.2.1 Identifying Drivers of Change
Nemet [6] sought to understand the drivers behind technical change in PV by disaggregating historical cost reductions into observable technical factors. That study spanned the period from nascent commercialization in the mid-1970s through the early 2000s. During the 26-year period studied, there was a factor-of-20 reduction in the cost of PV modules.
The results of that study point to two factors that stand out as most important: plant size accounts for 43% of the change in PV cost, and efficiency accounts for 30%. Of the remaining factors, the declining cost of silicon accounts for 12%. Yield, silicon consumption, wafer size, and polycrystalline share each have impacts of 3% or less. These observed changes are summarized in Figure 3. The following sections discuss the sources of these observed changes; Table 1 summarizes this discussion.
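The intuition behind an engineering-based decomposition of this kind can be sketched in a few lines. The toy model below is not Nemet's actual model; it simply illustrates why efficiency and area-related costs both drive $/W, using purely hypothetical numbers:

```python
# Hedged sketch (not the model in [6]): module cost per watt equals the cost
# of finished module area divided by the power that area delivers at
# standard test conditions. All input numbers are illustrative.
STC_IRRADIANCE = 1000.0  # W m^-2 at standard test conditions

def cost_per_watt(area_cost_per_m2, efficiency):
    """$/W = ($ per m^2 of finished module) / (W per m^2 of output)."""
    return area_cost_per_m2 / (efficiency * STC_IRRADIANCE)

early = cost_per_watt(area_cost_per_m2=600.0, efficiency=0.08)  # 1970s-like
later = cost_per_watt(area_cost_per_m2=450.0, efficiency=0.14)  # 2000s-like
print(early, later)  # efficiency gains cut $/W even if area cost falls slowly
```

In this framing, a doubling of efficiency halves $/W on its own, while plant-size and silicon-cost effects enter through the area-cost term; the study's contribution was to quantify each factor's share empirically.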
1.05.2.2 R&D and Efficiency Improvements
The doubling of the average electrical conversion efficiency of PV systems since the 1970s has been important to cost reductions, accounting for about a third of the decline in cost over time. R&D, especially public sector R&D, has been central to this change (Figure 4). Data on the highest laboratory cell efficiencies over time show that of the 16 advances in efficiency since 1980 [7], only 6 were accomplished by firms that manufacture commercial cells. Most of the improvements were accomplished by universities, none of which would have learned from experience with large-scale production; government and university R&D programs produced 10 of the 16 breakthroughs in cell efficiency. Almost every one of the 20 most important improvements in PV occurred during a 10-year period between the mid-1970s and the mid-1980s [8], most of them in the United States, where over $1 billion was invested in PV R&D during that period [9]. Section 1.05.4 discusses a novel methodology for identifying these types of nonincremental improvements.
1.05.2.3 Sequential Niche Markets
Deployment of PV has benefited from a sequence of niche markets in which users of the technology were less price-sensitive and had strong preferences for characteristics, such as reliability and performance, that allowed product differentiation. Governments have played a large role in creating or enhancing some of these niche markets. In the 1960s and 1970s, the US space program and the Department of Defense accounted for more than half of the global market for PV (Figure 5). The high cost of electricity in space allowed PV to be competitive even at an early stage. Subsequent markets – telecom repeater stations, off-grid homes, and especially consumer electronics such as toys, calculators, and watches – were, importantly, independent of government decisions, and allowed the industry to expand from the mid-1980s until the late 1990s, when energy prices, and thus alternative energy, were low social priorities. From the 1990s onward, households with strong preferences for environmental protection, especially when expressed conspicuously, created larger markets.

Figure 3 Portion of cost reduction in PV modules accounted for by each factor [1]

Table 1 Summary of effects of items on the cost of PV (1980–2001) and reasons for the change in each factor

Item                    Share of change in PV module costs (%)   Main drivers of change in each factor
Plant size              43                                       Expected future demand and risk management
Efficiency              30                                       R&D, some LBD for lab-to-market
Silicon cost            12                                       Spillover benefit from information technology industry
Si used                 3                                        LBD and technology spillover
Polycrystalline share   2                                        New process, LBD possible

LBD, learning by doing.
1.05.2.4 Expectations about Future Demand
Increasing demand for PV has reduced costs by enabling opportunities for economies of scale in manufacturing. Nemet [6] assembled empirical data to populate a simple engineering-based model identifying the most important factors affecting the cost of PV over the past three decades. The study found that three factors account for almost all of the observed cost reductions: (1) a two-orders-of-magnitude increase in the size of manufacturing facilities, which provided opportunities for economies of scale (Figure 6); (2) a doubling in the electrical conversion efficiency of commercial modules; and (3) a fall in the price of the primary input material, purified silicon.

Because investments in larger facilities take time to pay off, economies of scale depend on expectations of future demand. As a result, public programs that reduce uncertainty by setting clear long-term expectations, such as Japan's Sunshine Program in the 1990s [11], are more effective at enabling scale economies than generous subsidies that can suddenly disappear, such as California's incentives for wind and solar in the early 1980s. Government programs with longer time horizons, such as Germany's in the 2000s and the recently launched California Solar Initiative, create similar opportunities [1, 12]. Japan's program was especially innovative in that it not only took a long time horizon but also set a declining subsidy such that it fell to zero after 10 years of the program. This provided not only expectations of demand but also clear expectations of future levels of subsidy. Germany's program is especially notable in that production there has become sufficient to create external economies of scale – the emergence of machine tool manufacturers that now produce equipment specifically for the PV industry [13]. Similarly, production of lower purity, and thus cheaper, solar-grade silicon is now profitable because plants can be built at large scale. We also have begun to observe some economies of scale in unit size, where large installations show much lower per-watt costs [4]. Increasing installation size is likely to become more important as economies of scale reduce module-manufacturing costs, leaving installation costs as an increasingly large share of system costs.
The main drivers of the change in plant size over the period were growth in expected future demand and the ability to manage investment risk. Whether experience plays a role in enabling the shift to large facilities depends on new manufacturing problems at larger scales and how experience may help in overcoming these problems. Examples from three PV firms indicate that limited manufacturing experience did not preclude rapid increases in production. Mitsubishi Electric expanded from essentially zero production in 1997 to 12 MW and as of 2000 planned to expand to 600 MW by 2012 [14]. While the firm had decades of experience in research and satellite PV applications, its cumulative production was minimal. It began substantial manufacturing activity only with the opening of its Iida plant and its entry into the Japanese residential PV market in 1998. Similarly, Q-Cells, a German firm, began producing cells only in 2001 with a 12 MW line and increased production to 50 MW in only 2 years [15]. Sharp considered construction of a 500 MW yr−1 plant in 2006, which would amount to a 10-fold expansion in the firm's capacity in only 5 years. By 2011, it had increased capacity to 2.8 GW yr−1. Note that by mid-2011, six firms had manufacturing capacities above 2 GW yr−1. In the rapid expansions of the past 10 years, the ability to raise capital and to take on the risk of large investments that enable construction of large manufacturing facilities appears to have played a more important role than learning from experience in enabling cost reductions. These results support the claim that "sometimes much of what is attributed to experience is due to scale" [16].
1.05.2.5 Learning by Doing
Learning from experience in production has played a role, albeit not a dominant one, in reducing module costs. These changes occurred in several different processes. Experience in manufacturing led to lower defect rates and the utilization of the entire wafer area, which increased yields. Experience was probably important in enabling the growth of larger crystals and the formation of longer conductors from cell edges to electrical junctions; savings accrued from the ensuing larger wafer sizes. Less silicon was consumed as experience helped improve sawing techniques, so that less crystal was lost as sawdust and thinner cells could be produced. The development of wire saws, a spillover technology from the radial tire industry, is less clearly related to experience. Learning by doing helped the gradual shift to polycrystalline ingots. Casting of rectangular multicrystalline ingots was a new technology that partially derives from experience with the Czochralski process for growing individual crystals. While learning by doing and experience play more important roles in these factors, together they account for only 10% of the overall change in module cost [6].
Learning by doing plays a much more important role in reducing installation costs [17]. An important aspect of this learning is that it is a local phenomenon, whereas module production is truly global [18]. As installation costs become a large portion of costs, the extent of this learning by doing will be important. Whether or not the benefits of this learning are appropriable will determine whether governments need to play a role in promoting learning investments. More generally, the global aspect of module manufacturing suggests that interfirm and international technology spillovers are likely to be more important in module production than in installation. Finally, learning by doing may be important in the translation of laboratory breakthroughs to commercial products, as observed in Figure 4.
1.05.2.6 Intertechnology Spillovers
Like many technologies, PV has benefited from the adoption of innovations that originated in other industries. These include the use of excess purified silicon from the chip-making industry, the use of wire saws from radial tires to slice multiple silicon wafers, electronic connectors to ease installation, and screen-printing techniques from lithography, as well as an array of manufacturing techniques taken from microprocessors (for crystalline silicon) and liquid crystal displays (LCDs; for thin film).
1.05.2.7 Materials
Reductions in the cost of purified silicon were a spillover benefit from manufacturing improvements in the microprocessor industry. Until the 2000s, the PV industry accounted for less than 15% of the world market for purified silicon [19]. Through that time, the PV industry did not purify its own silicon but instead purchased silicon from producers whose main customers were in the much larger microprocessor industry, where purity standards were higher. Therefore, experience in the PV industry was irrelevant to silicon cost reductions. More recently, input costs, especially of purified silicon, increased in the mid-2000s in line with other commodity prices.
If lifetimes and efficiencies can be maintained at these lower levels of purity, there is strong potential for materials costs to fall beyond the short-term business cycle effects.
1.05.2.8 Drivers Related to Supply and Demand
Changes in demand and in market structure have affected prices as well, even if they do not directly affect the technical characteristics of the technology.
1.05.2.8.1 Industry structure
Changes in industry concentration have affected market power and have led to a changing relationship between prices and costs over time. Market share data indicate a decline in industry concentration during this period. This change typically produces an increase in competitiveness, a decrease in market power, and lower profit margins. For example, there were only two US firms shipping terrestrial PV from 1970 to 1975 [21, 22]. In 1978, about 20 firms were selling modules, and the top 3 firms made up 77% of the industry [23]. By 1983, there were dozens of firms in the industry, with the largest three firms accounting for only 50% of the megawatts sold [24].
One way to quantify this change is to use the Herfindahl–Hirschman index (HHI), which provides a way of measuring industry concentration [25, 26]. The HHI is calculated by summing the squares of the market shares of all firms in an industry; the maximum possible HHI is 10 000. The data show a trend to a less concentrated US market during the 1970s (Figure 7). Concentration in the global market remained stable in the 1990s, the period for which comprehensive worldwide data are available. The increase in international trade in PV over the last three decades indicates that the relevant scale of analysis shifted from a national market in the earlier years to an international market today.
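The HHI calculation itself is straightforward; the sketch below uses hypothetical market-share splits (the specific firm shares are illustrative, not the data behind Figure 7):

```python
# Herfindahl-Hirschman index: sum of squared market shares in percent.
# A pure monopoly scores 100**2 = 10 000; an atomized market approaches 0.
def hhi(shares_percent):
    return sum(s * s for s in shares_percent)

# Illustrative splits: a few dominant firms vs. a more fragmented market.
concentrated = hhi([30, 25, 22, 23])          # 2538
fragmented = hhi([20, 16, 14] + [5] * 10)     # 1102
print(concentrated, fragmented)
```

A declining HHI over time, as the text describes for the US market in the 1970s, corresponds to moving from the first kind of distribution toward the second.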
1.05.2.8.2 Demand shocks and rising elasticity
Demand dynamics have shifted prices in opposite directions. Foremost, the surge in subsidy programs for PV in the 2000s resembles a demand shock, as observed in other sectors. The industry has had a difficult time adjusting quickly. The very high levels of demand in the 2005–07 period are in part to blame for the high prices during that period. This led to a reversal of the multidecade downward cost trajectory that can be observed in Figure 9. Some of this cost increase was due to higher materials costs [20], and the rest was likely due to higher willingness to pay as aggressive subsidy programs brought new consumers to the market for PV.
In contrast, historically, the shift from satellites to terrestrial applications affected prices because of a difference in the demand elasticity of the two types of customers. Price data from that period provide some supporting evidence. In 1974–79, the price per watt of PV modules for satellites was 2.5 times higher than that for terrestrial modules [10]. The impact of this price difference on average PV prices is calculated by taking into account the change in market share mentioned below. In this period, the combination of these price and market shifts accounts for $22 of the $28 price decline not explained by the model. Satellite customers, with their hundreds of millions of dollars of related investments, almost certainly had a higher willingness to pay for PV panels than early terrestrial applications such as telecom repeater sites or buoys for marine navigation. The difference in quality must account for some of the price difference. But the difference in willingness to pay may also have led to larger differences between cost and price for satellite than for terrestrial applications.
Another historical explanation for the change in cost is that changes in production methods occurred due to an increase in the number of customers and the types of products they demanded. There was a shift away from a near-monopsony market in the early 1970s. Originally, a single customer, the US space program, accounted for almost all sales. By 1976, in contrast, the US government accounted for only one-third of terrestrial PV purchases [27]. With the rise of the terrestrial industry, a larger set of customers emerged over the course of the decade. When this change in the structure of demand occurred, one result was the shift away from producing customized modules, such as the 20 kW panels on Skylab, to producing increasingly standard products at much higher volumes. Figure 8 provides a schematic example of what a demand curve for PV electricity might look like, considering both historical data and projections of future opportunity. The x-axis shows the size of each market segment; the y-axis shows the price customers are willing to pay for PV electricity in each segment. Note the use of log–log axes. For example, in the upper left region, the cost of electricity is very high in off-grid and remote locations, although markets are relatively small. At the other end of the curve, markets for wholesale electricity are very large, but require PV to compete with large central station sources of electricity.

Figure 7 Concentration in the PV industry (HHI) [1]

Figure 8 Illustrative demand curve for PV electricity (market segments include rural off-grid, consumer products, residential off-grid, green on-grid, and residential on-grid; market size on log scale). Reproduced from Nemet GF (2009) Interim monitoring of cost dynamics for publicly supported energy technologies. Energy Policy 37(3): 825–835 [28]
1.05.2.9 Quality and Product Attributes
Changes in quality and product attributes also affected costs. During the 1970s, the market for PV modules shifted toward terrestrial applications. Whereas in 1974, 94% of PV systems were manufactured for applications in space, by 1979 the market share for space had fallen to 36%. The shift rendered certain characteristics nonessential and thus reduced the quality required of modules, allowing manufacturers to switch to less costly processes.
First, space and weight constraints on rockets required high-efficiency panels to maximize watts delivered per square meter. The relaxation of this requirement for terrestrial applications enabled manufacturers to employ two important cost-saving processes [10]. Cheaper materials were now tolerable. Modules could use the entire area of the silicon wafer – even the portions near the edges, which tend to suffer from defects and high electrical resistivity. Also, the final assembly process could use a chemical polish to enhance light transmission through the glass cover, rather than the more expensive ground optical finish that was required for satellites. Second, reliability targets fell. Satellite programs, such as Vanguard and Skylab, needed PV modules that would operate reliably without maintenance, perhaps for 20 years. Terrestrial applications, on the other hand, could still be useful with much shorter lifetimes. The rapid growth in the terrestrial market was the main driver of this change.
Three assumptions are commonly made when applying the experience curve model using prices rather than costs: that margins are constant over time, that margins are close to zero with only minor perturbations, or that margins are often negative due to forward pricing. Yet changes in demand and industry structure are important in that they erode support for these assumptions. Indeed, earlier work pointed out that firms' recognition of the value of market domination, particularly during incipient commercialization, leads to unstable pricing behavior [29]. An implication of the variation in the price–cost margin is that industry structure affects the learning rate. In the case of an industry that becomes more competitive over time, such as PV, a price-based experience curve overestimates the rate of technical progress.
One solution for addressing this problem would be for future work to obtain real cost data where possible. An alternative would be to use an approach in which costs are derived from prices and market shares using the assumptions of Cournot competition: that a firm's profit margins decrease as the number of firms in the market increases and that a single firm's profit margins increase with that firm's market share [30]. However, comparisons of competing technologies are best made on the basis of prices, not costs, since prices reflect what a consumer faces in deciding whether to adopt a technology and which to adopt. A more general approach would be to incorporate market dynamics into predictions of technological change: industry concentration, market power, and changes in elasticity of demand all affect prices. The HHI analysis described above shows that concentration is not stable over time, especially if international trade is taken into account. The assumptions of perfect competition and prices equal to marginal cost are too strong in the early stages of the product life cycle, when the technology is improving rapidly, industry structure is unstable, and new types of customers are entering the market.
1.05.2.10 Interactions between R&D and Experience in Production
While there do appear to be periods when one factor was dominant in supporting innovation, it is also the case that the interaction of various factors was important. In particular, Japan in the 1990s had both strong demand-side policies and support for R&D [31]. In this case, it was not so much efficiency breakthroughs and alternative cell designs that drove improvements, but rather support from the Japanese government, via the Ministry of International Trade and Industry (MITI). This enabled coordination of expectations and sharing of best practices, which enabled manufacturing improvements. The influence of the US federal government may have played a similar role in enabling the 1970s breakthroughs, not just by providing resources for R&D but by creating a sense of commitment that convinced many to work on the technical and market challenges associated with commercializing this nascent technology [32].
1.05.3 Using Learning Curves to Predict Costs
The potential for future cost reductions, combined with the magnitude of the potential impact of very inexpensive PV, has created a strong demand for tools that enable prediction of future costs. The experience curve has emerged as the most important of these tools. The following section surveys the use of experience curves in policy and modeling, caveats in their application, assessments of their reliability, and the implications for policy makers [33]. While this section focuses on the historically dominant design in the industry, crystalline silicon PV, more recent work has looked at other approaches, such as thin films [34] and organics (discussed below). Also, note that here we use the term experience curve, rather than the more specific concept of the learning curve, which, strictly defined, focuses on labor costs and within-firm knowledge accumulation.
Characterizations of technological change have identified patterns in the ways that technologies are invented, improve, and diffuse into society [35]. Studies have described the complex nature of an innovation process in which uncertainty is inherent [36], knowledge flows across sectors are important [37], and lags can be long [38]. Perhaps because of characteristics such as these, theoretical work on innovation provides only a limited set of methods with which to predict changes in technology. The learning curve model offers an exception.
Experience curves have been assembled for a wide range of technologies. While there is broad variation in the observed rates of 'learning', studies do provide evidence that costs almost always decline as cumulative production increases [16, 39–42]. The roots of these microlevel observations can be traced back to early economic theories about the importance of the relationship between specialization and trade, which were based in part on individuals developing expertise over time [41]. The notion of the experience curve differs from the more specific formulation behind the learning curve in that it aggregates from individuals to entire industries and from labor costs to all manufacturing costs. In the literature on experience curves, technological 'learning' refers to a broad set of improvements in the cost and performance of technologies, not strictly to the more precise notion of learning by doing [43].
Figure 9 shows an experience curve for PV modules: a double-logarithmic graph of PV module price as a function of cumulative capacity.
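The straight line on such a double-logarithmic plot corresponds to the power law C(Q) = C0(Q/Q0)^(−b), where the learning rate LR = 1 − 2^(−b) is the fractional cost reduction per doubling of cumulative capacity. A minimal sketch of how the exponent is estimated by least squares on the logs (the data series below is synthetic, not the Figure 9 data):

```python
import math

# Experience curve C(Q) = C0 * (Q/Q0)**(-b): a straight line on log-log
# axes. The learning rate LR = 1 - 2**(-b) is the fractional cost drop
# per doubling of cumulative capacity.
def fit_experience_curve(cum_capacity, cost):
    x = [math.log(q) for q in cum_capacity]
    y = [math.log(c) for c in cost]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    b = -slope
    return b, 1.0 - 2.0 ** (-b)

# Synthetic series in which cost falls exactly 20% per doubling.
q = [1, 2, 4, 8, 16]
c = [100.0 * 0.8 ** i for i in range(5)]
b, lr = fit_experience_curve(q, c)
print(round(lr, 3))  # 0.2
```

Historical PV module data are typically reported as yielding learning rates in the neighborhood of 20% per doubling, although, as discussed below, fitted rates are sensitive to the period chosen and to the price–cost margin issues raised in Section 1.05.2.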
1.05.3.1 Use of Experience Curves in Modeling and Policy
Experience curves have become a widely used tool, both in models of future energy supply and to inform the design of public policies related to PV. For example, they provide a method for evaluating the cost-effectiveness of public policies to support new technologies [44] and for weighing public technology investment against environmental damage costs [45]. Energy supply models now also use learning curves to implement endogenous improvements in technology. Prior to the 1990s, technological change was typically included as an exogenous increase in energy conversion efficiency, or ignored [46]. Studies in the 1990s began to use the learning curve to treat technology dynamically [47, 48], and since then it has become a powerful and widely used model for projecting technological change. Recent work, however, has cautioned that uncertainties in key parameters may be significant [49], making application of the learning curve to evaluate public policies inappropriate in some cases [50].

Figure 9 Experience curve for PV modules, 1976–2006. Reproduced from Nemet GF (2009) Interim monitoring of cost dynamics for publicly supported energy technologies. Energy Policy 37(3): 825–835 [28]
1.05.3.1.1 Modeling
The rate and direction of future technological change in energy technologies are important sources of uncertainty in models that assess the costs of stabilizing the climate [51]. Treatment of technology dynamics in integrated assessment models has become increasingly sophisticated [52] as models have incorporated lessons from the economics of innovation, and as increased processing power and improved algorithms have enabled optimization over phenomena, such as increasing returns, that in the past had made computation unwieldy [53]. Yet the representation of technological change in large energy-economic models remains highly stylized relative to the state of the art of understanding in the economics of innovation [54]. Perhaps one reason for the lag between the research frontier of the economics of innovation and the modeling of it has to do with incompatibilities in the methodological approaches of the two fields. On the one hand, research on the economics of innovation has tended to emphasize uncertainty [55], cumulativeness [38], and nonergodicity [56]. The outcomes of this line of inquiry, which dates back to Schumpeter [35], and even Marx [57], have often been characterized by richness of description, a case study approach, and, arguably, more progress with rigorous empirical observation than with strong theoretical claims. On the other hand, optimization and simulation models require compact quantitative estimation of parameters, with uncertainties that do not become infinite once propagated through the model. One of the few concepts that have bridged the epistemological gap between the economics of innovation and the integrated assessment of climate change is the experience curve. Experience curves provide a way to project future costs conditional on the cumulative quantity of capacity produced. The resulting cost predictions are less deterministic than those generated by temporal-based rates of technological change, but they are also not simply scenarios – internally consistent descriptions of one possible future state of technology; they are conditional predictions.
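The conditional structure of such a prediction is easy to make concrete: cost is a function of cumulative capacity, not of calendar time. The sketch below uses purely illustrative parameters (the $3/W starting cost, 10 GW starting capacity, and 20% learning rate are assumptions for the example, not values from the chapter):

```python
import math

# Conditional cost projection from an experience curve: cost depends on
# cumulative capacity Q reached, not on when it is reached. Parameters
# below are illustrative only.
def projected_cost(c0, q0, q_future, learning_rate):
    """Cost at cumulative capacity q_future, given cost c0 at q0 and a
    fractional cost reduction `learning_rate` per capacity doubling."""
    b = -math.log(1.0 - learning_rate, 2)  # power-law exponent
    return c0 * (q_future / q0) ** (-b)

# Four doublings (10 GW -> 160 GW) at a 20% learning rate:
# $3 * 0.8**4 = about $1.23 per watt.
print(round(projected_cost(3.0, 10.0, 160.0, 0.20), 2))  # 1.23
```

The projection says nothing about *when* 160 GW is reached; pairing it with a deployment scenario is what turns the conditional prediction into a dated cost forecast, which is where the parameter uncertainties cautioned about in [49] enter.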
1.05.3.1.2 Policy
Large programs and deviations from trends in cost reductions are challenging policy makers to make decisions about whether, when, and how much to stimulate the development of energy technologies that have high external benefits. The net benefits of subsidies and other incentive programs depend heavily on the extent to which technologies improve over time. Experience curves provide a way for policy makers to incorporate technology dynamics into decisions that involve the future costs of technologies. The general notion that learning from experience leads to cost reductions and performance improvements is well supported by a large array of empirical studies across a variety of technologies. But the appropriateness of using experience curves to guide policy is less uniformly acknowledged. Despite caveats in previous work, the cost projections that result from experience curves are typically used without characterizing the uncertainty in those estimates. Nevertheless, experience curves are now employed widely to inform decisions that involve billions, and even trillions, of dollars in public funds. They have been used both directly – as graphical exhibits to inform debates – and indirectly – as inputs to energy-economic models that simulate the cost of achieving environmental goals. Much of the early work translating the insights of experience curve studies to energy policy decisions is included in a study for the International Energy Agency [49]. Other studies have used the tool directly to make claims about policy implications [44, 45, 58].
As mentioned above, energy-economic models that minimize the cost of energy supply now also include experience curve relationships to capture technology dynamics. Model comparison studies have found that models’ estimates of the social costs of policy are sensitive to how technological change is characterized [51]. Working Group III of the Intergovernmental Panel on Climate Change (IPCC) used results from a variety of energy-economic models to estimate the magnitude of economically available greenhouse gas emission reductions in its Fourth Assessment Report [59]. The results of this assessment are widely used to inform national climate change policies, as well as the architecture for the next international climate policy regime. Of the 17 models reviewed, some form of experience curve is used to characterize technological change in at least 10 (see Table 11.15 of IPCC [59]). Another influential report, the 2006 Stern Review on the Economics of Climate Change [60], relied heavily on experience curves to model technological change. This report has been central to the formation of climate policy in the United Kingdom and has played a role in debates in the United States as well, both at the federal level and in California. The International Energy Agency relies on experience curves in its assessment of the least-cost method for meeting greenhouse gas reduction targets and energy demand for 2050 [61]; note that the ‘learning investments’ that result from the analyses in this report are estimated in a range of $5–8 trillion. Debates about subsidies and production requirements for ethanol also use historical experience curves as a justification for public support of the production of biofuels [62]. At the state level, experience curves have provided one of the most influential justifications for a $3 billion subsidy program for PV [12]. Experience curves have also been used in economic models of the cost of meeting California’s ambitious greenhouse gas reduction targets [63]. Finally, in decisions by the 24 states that have passed renewable portfolio standards, debates include discussions of how mandatory renewables deployment will bring down its cost [64, 65].
1.05.3.2 Problems with Using Experience Curves
Demand among policy makers for rigorous, transparent, and reliable tools with which to predict future costs continues to be high. Yet an array of studies, cited below, warns about the limitations of experience curves.
The learning curve model operationalizes the explanatory variable, experience, using a cumulative measure of production or use. Change in cost typically serves as the dependent variable and provides a measure of learning and technological improvement. Learning curve studies have experimented with a variety of functional forms to describe the relationship between cumulative capacity and cost [66]. The log-linear function is most common, perhaps for its simplicity and generally high goodness-of-fit to observed data. The central parameter in this learning curve model is the exponent defining the slope of a power function, which appears as a linear function when plotted on a log–log scale. This parameter is known as the learning coefficient (b) and can be used to calculate the progress ratio (PR) and learning ratio (LR) as shown below, where C0 and Ct are unit costs at time t = 0 and time t, respectively, and Q0 and Qt represent cumulative outputs at time t = 0 and time t, respectively:

Ct = C0 (Qt/Q0)^(−b)
PR = 2^(−b)
LR = 1 − PR = 1 − 2^(−b)
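These definitions can be sketched in a few lines of Python; the helper names are illustrative, not drawn from any cited study:

```python
import math

def progress_ratio(b: float) -> float:
    # PR = 2^(-b): the factor by which unit cost changes with each
    # doubling of cumulative output
    return 2 ** (-b)

def learning_rate(b: float) -> float:
    # LR = 1 - PR: the fractional cost reduction per doubling
    return 1 - progress_ratio(b)

def projected_cost(c0: float, q0: float, qt: float, b: float) -> float:
    # C_t = C_0 * (Q_t / Q_0)^(-b)
    return c0 * (qt / q0) ** (-b)

# The learning rate of 0.23 cited for PV corresponds to b = -log2(0.77)
b_pv = -math.log2(1 - 0.23)
```

With these definitions, one doubling of cumulative output (qt = 2 * q0) multiplies cost by the progress ratio 0.77, i.e., a 23% reduction.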
The learning rate for PV, 0.23, lies near the mode of the distribution. Argote and Epple [69] explored this variation further and proposed four alternative hypotheses for the observed technical improvements: economies of scale, knowledge spillovers, and two opposing factors, organizational forgetting and employee turnover. Despite such critiques, the application of the learning curve model has persisted, without major modifications, as a basis for predicting technical change, informing public policy, and guiding firm strategy. Below, the advantages and limitations of using the more general version of the learning curve for such applications are outlined. The experience curve provides an appealing model for several reasons:
1. Availability of the two empirical time series required to build an experience curve – cost and production data – facilitates testing of the model. As a result, a rather large body of empirical studies has emerged to support the model. Compare the simplicity of obtaining cost and production data with the difficulty of quantifying related concepts such as knowledge stocks [70] and inventive output [71]. Still, data quality and uncertainty are infrequently explicitly addressed and, as shown below, can have a large impact on results.
2. Earlier studies of the origin of technical improvements, such as in the aircraft industry [39] and shipbuilding [40], provide narratives consistent with the theory that firms learn from past experience.
3. Studies cite the generally high goodness-of-fit of power functions to empirical data over several years, or even decades, as validation of the model.
4. The dynamic aspect of the model – the rate of improvement adjusts to changes in the growth of production – makes the model superior to forecasts that treat change purely as a function of time.
5. The reduction of the complex process of innovation to a single parameter, the learning rate, facilitates its inclusion in large optimization and simulation models.
The combination of a rich body of empirical literature and the more recent applications of learning curves in predictive models has revealed weaknesses that echo earlier critiques.
Figure 10 Frequency distribution of learning rates calculated in 156 learning curve studies. The learning rate for PV, 0.23, lies slightly above the mode of the distribution. Reproduced from Nemet GF (2009) Interim monitoring of cost dynamics for publicly supported energy technologies. Energy Policy 37(3): 825–835 [28].
1. The timing of future cost reductions is highly sensitive not only to changes in the market growth rate but also to small changes in the learning rate. Although a coefficient of determination for an experience curve of >0.95 is considered a strong validation of the experience curve model, variation in the underlying data can lead to uncertainty about the timing of cost reductions on the scale of decades. Two world surveys of PV prices [72, 73] produce learning rates of 0.26 and 0.17. What may appear a minor difference has a large effect. For example, assuming a steady industry growth rate of 15% per year, consider how long it will take for PV costs to reach a threshold of $0.30 W−1, an estimate for competitiveness with conventional alternatives. Just the difference in the choice of data set produces a crossover point of 2039 for the 0.26 learning rate and 2067 for the 0.17 rate, a difference of 28 years. McDonald and Schrattenholzer [68] show that the range of learning rates for energy technologies in general is even larger. Neij et al. [50] find that calculations of the cost-effectiveness of public policies are sensitive to such variation. Wene [49] observes this sensitivity as well and recommends an ongoing process of policy evaluation that continuously incorporates recent data.
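The arithmetic behind this crossover comparison can be sketched as follows. The baseline price and starting year below are assumptions for illustration (the exact baseline used in the studies above is not stated here), so the sketch conveys the sensitivity to the learning rate rather than reproducing the 2039 and 2067 figures exactly:

```python
import math

def crossover_year(c0: float, target: float, lr: float,
                   growth: float, start_year: float) -> float:
    """Year at which cost falls to `target` on an experience curve with
    learning rate `lr` and steady capacity growth `growth` per year.

    With Q(t) = Q0*(1+growth)^t, cost is C(t) = c0*(1+growth)^(-b*t),
    so solving C(t) = target gives t = ln(c0/target) / (b*ln(1+growth)).
    """
    b = -math.log2(1 - lr)  # learning coefficient
    t = math.log(c0 / target) / (b * math.log(1 + growth))
    return start_year + t

# Assumed baseline: $3.74/W in 2006, target $0.30/W, 15% annual growth
fast = crossover_year(3.74, 0.30, 0.26, 0.15, 2006)
slow = crossover_year(3.74, 0.30, 0.17, 0.15, 2006)
```

Under these assumed inputs, the 0.17 learning rate reaches the threshold decades later than the 0.26 rate, illustrating why a seemingly small difference in LR matters so much for policy timing.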
2. The experience curve model gives no way to predict discontinuities in the learning rate. In the case of PV, the experience curve switched to a lower trajectory around 1980. As a result, experience curve-based forecasts of PV in the 1970s predicted faster technological progress than actually occurred [3]. Discontinuities present special difficulties at early stages in the life of a technology: early on, only a few data points define the experience curve, while at such times decisions about public support may be most critical. Early work in economics is skeptical about the assumption that historically observed rates of learning can be expected to continue in the future. Arrow [43] argued that learning is subject to “sharply diminishing returns”. Looking at studies within single plants, Hall and Howell [74] and Baloff [75] find that learning rates become essentially flat after a relatively short amount of time – approximately 2 years in these studies. Some have suggested that, as a result, a cubic or logistic function offers a more realistic functional form than a power function [76].
3. Studies that address uncertainty typically calculate uncertainties in the learning rate using the historical level of variance in the relationship between cost and cumulative capacity. This approach ignores uncertainties and limitations in the progress of the specific technical factors that are important in driving cost reductions [49]. For example, constraints on individual factors, such as theoretical efficiency limits, might affect our confidence in the likelihood of future cost reductions.
4. Due to their application in planning and forecasting, emphasis has shifted away from learning curves based on employee productivity and plant-level analysis toward experience curves aggregating industries and including all components of operating cost. While the statistical relationships generally remain strong, the conceptual story begins to look stretched, as one must make assumptions about the extent to which experience is shared across firms. In the strictest interpretation of the learning-by-doing model applied to entire industries, one must assume that each firm benefits from the collective experience of all: the model assumes homogeneous knowledge spillovers among firms.
5. The assumption that cost reductions are determined only by experience, as represented by cumulative capacity, ignores the effect of knowledge acquired from other sources, such as from R&D or from other industries. Earlier, Sheshinski [77] wrestled with the separation of the impact of two competing factors, investment and output. Others have addressed this limitation by incorporating additional factors such as workforce training [78], R&D [79, 80], and the interactions between R&D and diffusion [31]. Collinearity among the explanatory variables requires large and detailed data sets, the scarcity of which has so far limited widespread application of these more sophisticated models.
6. Experience curves ignore changes in quality beyond the single dimension being analyzed [81]. The dependent variable is limited to cost normalized by a single measure of performance – for example, hours of labor per aircraft, dollars per watt, or cents per megabyte. Performance measures like these ignore changes in quality such as aircraft speed, reliability of power generation, and the compactness of computer memory.
1.05.3.3 How Reliable Are Experience Curve Predictions?
Given these issues, how well do experience curves perform? Throughout the critiques, a primary concern is the issue of unacknowledged uncertainty. Although studies have cautioned that policy makers must contend with discontinuities and uncertainties in future learning rates, few do; the cost projections that result from experience curves are typically presented without acknowledging uncertainty. Yet a wide array of studies have now pointed to serious reservations about using experience curve projections to inform policy decisions. Wene [49] emphasized the ways that experience curves could be used to design subsidy programs, but cautioned about the key uncertainties in parameters because “small changes in progress ratios will change learning investments considerably”. Concerned about the scale of this uncertainty problem, one study concluded, “we do not recommend the use of experience curves to analyze the cost effectiveness of policy measures”, and recommended using multiple methods instead [82]. More recently, Neij [83] compared experience curve projections to those based on bottom-up models, as well as expert predictions, and found that they “agree in most cases”. However, in some cases, large uncertainties that emerge from the bottom-up analyses are “not revealed” by experience curve studies. Rubin et al. [84] indicate that early prototypes often underestimate the costs of commercially viable applications, so that costs rise. Koomey and Hultman [85] have documented a more persistent form of this cost inflation effect for nuclear reactors. Addressing PV specifically, Borenstein [86] argues that experience curve-based analyses do not justify government programs because they conflate multiple effects and ignore appropriability concerns.
Three sources of uncertainty complicate experience curve predictions. First, there is the typical dispersion in learning rates caused by imperfect correlations between cumulative capacity and cost. Sark [87] explores the effects of this ‘r-squared’ variation to calculate an error around the learning rate. Inconsistencies in the chosen system boundaries, for example, geographic scope, may introduce some of this variation. The second source has to do with whether historically observed learning rates can be expected to continue in the future. Even in his seminal work on learning by doing, Arrow [43] argued that learning is subject to “sharply diminishing returns”. Looking at studies within single manufacturing facilities, Hall and Howell [74] and Baloff [75] find that learning rates become essentially flat after a relatively short amount of time – approximately 2 years in these studies. As a result, some have suggested that a cubic or logistic function offers a more realistic functional form than a power function [76]. The third source of uncertainty derives from the choice of historical time period used to calculate learning rates. The timing issue captures variation in the source data, as well as changes in the slope over time.
Studies have characterized the effects of these uncertainties. A prominent study showed that calculating the error in the PR could be used to develop a range of learning rates for sensitivity analysis in policy modeling [88]. Nemet [28] assessed these sources of uncertainty using a simple and transparent model of the costs of subsidizing technologies until they are competitive with alternatives. Those calculations include (1) the learning rate, (2) the year at which the cost of a subsidized technology approaches a target level, and (3) the discounted cost of government subsidies needed to achieve that level. The first result from that study is that there is a wide dispersion in learning rates depending on the time period used. That study estimated the learning rate for PV in each of the 253 time periods of 10 years or greater between 1976 and 2006. Figure 11 plots these learning rates by the year at which each time series ends. For example, the values shown for 1995 include all 11 time series that end in 1995. This set of values indicates the range of learning rates that would have been available to an analyst using experience curves to project costs in 1995. The data begin in 1985 because that is the first year for which 10 years of historical data (1976–85) are available. The data reveal two features about the trend in calculated learning rates: (1) there is a negative time trend, with the mean of the learning rate values decreasing over time by approximately 0.005 per year; and (2) the dispersion in learning rate values around the annual mean has increased over time. The dispersion includes an oscillation with maxima in 1995 and 2006.
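The windowing procedure just described can be sketched as follows. The series below is synthetic (prices generated on an exact experience curve), since the underlying survey data are not reproduced in this chapter; with real, noisy data the per-window estimates disperse as in Figure 11:

```python
import math

def fit_learning_rate(costs, capacities):
    # Ordinary least squares fit of log(cost) = a - b*log(capacity);
    # returns the implied learning rate LR = 1 - 2^(-b)
    xs = [math.log(q) for q in capacities]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return 1 - 2 ** (-b)

def windowed_learning_rates(years, costs, capacities, min_len=10):
    # One learning-rate estimate per contiguous window of min_len or more years
    out = []
    n = len(years)
    for i in range(n):
        for j in range(i + min_len, n + 1):
            out.append((years[i], years[j - 1],
                        fit_learning_rate(costs[i:j], capacities[i:j])))
    return out

# Synthetic 1976-2006 series: 15% annual capacity growth, costs generated
# from an exact experience curve with LR = 0.20
years = list(range(1976, 2007))
caps = [100 * 1.15 ** i for i in range(31)]
b_true = -math.log2(1 - 0.20)
costs = [10 * q ** (-b_true) for q in caps]
estimates = windowed_learning_rates(years, costs, caps)
```

For a 31-year series this enumeration yields exactly the 253 overlapping windows mentioned above, the first being 1976–85.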
The second result from that study was estimation of the break-even year using the dispersion in learning rates described above. The target cost for PV modules used in this example is Pa = $1 W−1 [89]. A 73% subsidy on actual 2006 prices is needed for consumers’ costs to equal this target. Figure 12(b) shows distributions of the estimated years at which the price of PV will equal that of this competing technology. Descriptive statistics for these distributions are shown in Table 2 for all time series and for all series that end in 2006. The median crossover year for all series, ta = 2034, occurs 14 years earlier than the estimate using only data through 2006, ta = 2048. Note that the dispersion has also increased with the more recent data set.
The third result from that study was to calculate the learning investment required to subsidize PV to the crossover point. The median cost to subsidize PV is $62 billion when using all time series and $163 billion when using only the time series that end in 2006. Note that a difference in median learning rate of 40% leads to a difference in median program costs of between a factor of 2 and 3. The dispersion in costs has also become large; the range from the 5th percentile to the 95th percentile spans an order of magnitude. Furthermore, costs around the 95th percentile become very large, rising to the tens of trillions. Slow learning has nonlinear effects on cost and leads to very expensive subsidy programs – even when these future costs are discounted to present values.
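A stylized learning-investment calculation of this kind can be sketched as follows. All inputs (baseline price, installed base, target, growth, and discount rates) are hypothetical, and the annual-step model is a simplification of the study's method:

```python
import math

def learning_investment(c0, q0, target, lr, growth, discount=0.05):
    # Discounted sum of the subsidy (cost above `target`) paid on each
    # year's newly added capacity, until the experience curve reaches
    # the target cost. Cost in $/W, capacity in W.
    b = -math.log2(1 - lr)
    total, q, cost, year = 0.0, q0, c0, 0
    while cost > target:
        added = q * growth                    # capacity added this year
        q += added
        cost = c0 * (q / q0) ** (-b)          # cost after the addition
        total += max(cost - target, 0.0) * added / (1 + discount) ** year
        year += 1
    return total

# Hypothetical inputs: $3.74/W baseline, 6 GW installed, $1/W target
slow_learning_cost = learning_investment(3.74, 6e9, 1.0, 0.15, 0.15)
fast_learning_cost = learning_investment(3.74, 6e9, 1.0, 0.21, 0.15)
```

Consistent with the result above, the slower learning rate implies a substantially larger subsidy bill: the per-watt subsidy shrinks only as capacity grows, so slow learning compounds nonlinearly into program cost.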
Figure 12 summarizes these results. Figure 12(a) shows the distribution of learning rates for all 253 periods (black columns). The white columns show the distribution of rates using only those series that end in 2006 (n = 22), the data set one would expect a contemporary planner to use. The median of the distribution of learning rates from all 253 time series (LR = 0.21) is substantially higher than the median of the series ending in 2006 (LR = 0.15), although this difference is not significant.
1.05.3.3.1 Assessing the significance of recent deviations
One immediate policy implication of these results is that the possibility of very expensive subsidy programs makes early identification of such a scenario important. The price escalation in the mid-2000s prompts the question, “do recently observed costs represent a
significant deviation from the historical trend or does historical variation explain them?” Nemet [28] introduces two methods for addressing this question. First, recent costs are compared to the confidence interval for the power function resulting from the dispersion in past observations. Second, these costs are compared to the set of all possible experience curve forecasts made over time. The first method uses straightforward statistics to examine whether recent variation fits within the confidence interval for observations around the power function. This variation is caused by the imperfect fit of the power function to the experience curve data [87]. Here a confidence interval is constructed for the data through 2003. This range is compared to the most recent 3 years of data, 2004, 2005, and 2006, to determine whether they fit within the range defined by projecting the experience curve for 3 years. The data from 1976 to 2003 have r2 = 0.98 and LR = 0.22. The variation around the experience curve power function using least squares yields a 95% confidence interval around the LR of 0.22 ± 0.01. Projecting the experience curve to the capacity reached in 2006 (E2006) yields a 95% confidence interval of expected costs in 2006 of $1.58–$2.51. The actual value for 2006, $3.74, lies outside this range.
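This first method amounts to an out-of-sample interval from the log–log regression. A minimal sketch follows, with the Student-t critical value hardcoded rather than looked up and a prediction-interval formula standing in for the study's exact construction; a calculation of this kind underlies ranges such as the $1.58–$2.51 interval quoted above:

```python
import math

def prediction_interval(log_qs, log_cs, log_q_new, t_crit=2.05):
    # OLS fit of log(cost) on log(capacity), then an approximate 95%
    # prediction interval for cost at a new capacity level. t_crit should
    # match the Student-t quantile for n-2 degrees of freedom.
    n = len(log_qs)
    mx = sum(log_qs) / n
    my = sum(log_cs) / n
    sxx = sum((x - mx) ** 2 for x in log_qs)
    slope = sum((x - mx) * (y - my)
                for x, y in zip(log_qs, log_cs)) / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(log_qs, log_cs)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))
    yhat = intercept + slope * log_q_new
    half = t_crit * s * math.sqrt(1 + 1 / n + (log_q_new - mx) ** 2 / sxx)
    # back-transform from log space to cost space
    return math.exp(yhat - half), math.exp(yhat), math.exp(yhat + half)
```

An observed price falling outside such an interval, as the $3.74 value does, signals a significant deviation from the fitted curve rather than ordinary scatter.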
The second approach assumes the perspective of a policy analyst making ex ante forecasts each year, incorporating new data as they become available. This approach assesses whether recent observations could have been projected by the set of all possible historical forecasts. To illustrate, Figure 13 shows the predictions, over time, of the price of PV for the cumulative capacity that was reached in 2006, E2006. The first result is that none of the 231 possible projections for 2006 would have predicted a level at or above the actual 2006 price. Next, this method is used to project prices for the cumulative capacities reached in all years from 1986 to 2006. In Figure 14, the range in gray represents the full range of forecasts for the capacity that was reached each year. For example, the gray range for 2006 includes all of the 231 data points portrayed in Figure 13. Actual prices each year are shown as a line with