An Outline of the History of Economic Thought, Chapter 11

At the Margins of Orthodoxy

11.1 Games, Evolution and Growth

11.1.1 Game theory

Game theory was formulated as a logical instrument for investigating situations in which the results of the choices of some agents are at least partially determined by the choices of other agents with conflicting interests. This theory has been relevant, above all, in tackling the problems posed by situations of conflict and co-operation between rational and intelligent decision-makers. ‘Rational’ means that each agent makes his own choices with the aim of maximizing his subjective expected utility, whenever the decisions of the other agents are specified. ‘Intelligent’ means that each agent is in a position to know everything about the structure of the situations in which he finds himself, just like the theorist who studies them. In particular, each decision-maker knows that the other decision-makers are intelligent and rational.

The definitive link between game theory and economic theory was only established in 1944, with the publication of The Theory of Games and Economic Behaviour by von Neumann and Morgenstern. A series of new notions and research directions, still alive today, originated from the union of the two approaches: the notion of co-operative games, in which players are able to make agreements and threats which are rationally fulfilled; the analysis of coalitions, which has resumed the pioneering studies of Edgeworth and has led to modern core analysis; the axiomatic definition of expected utility and the demonstration of its importance as a criterion of choice under uncertainty; and the application of the formalism of game theory to a wide series of economic problems. Some circumstances, however, such as the novelty of the concepts and of their mathematical demonstrations, initially limited the diffusion of game theory, especially in the field of the social sciences, to which it was mainly directed. It was not until the end of the 1950s, with the publication of Games and Decisions (1957) by R. Luce and H. Raiffa and The Strategy of Conflict (1960) by T. Schelling, that game theory became widely known. It was only at that time that the first interesting economic applications appeared.

Today the theory can deal with many classes of ‘games’, even if the attention of scholars has focused on certain interesting cases. Two-person zero-sum games were among the first to receive an exhaustive general treatment. Beyond their own importance, which certainly cannot be overlooked, zero-sum games have been of vital importance for game theory in that the conceptual apparatus developed to analyse them has turned out to be applicable to more general cases.

Particularly important was the notion of ‘safety level’, the minimum pay-off level which a player is able to guarantee himself independently of the strategies of the others. A game has a rational outcome from the individual point of view if the pay-off received by each player is not lower than his safety level; if an individual is rational, he will always act in such a way as to ensure that at least that level of pay-off can be obtained with certainty. The safety level can be calculated both for pure strategies (which are sequences of well-determined actions) and for mixed strategies (in which, in one or more stages of the game, the choice of action is made by means of a stochastic experiment, such as tossing a coin).

As early as the beginning of the century, E. Zermelo, in ‘Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels’ (1913), had already succeeded in proving that the game of chess is strictly determined. Obviously, this does not mean that the ‘optimal’ strategy for this game is easy to find, as every chess player knows well. In any case, Zermelo’s theorem had an extraordinary importance, in that it was the prototype for the increasingly general theorems to emerge in the following years. In practice, in order to ‘export’ the theorem from the field of zero-sum games, the concept of a rational outcome from the individual point of view has been replaced by that of strategic equilibrium. A strategy profile is an equilibrium if it maximizes the pay-off obtained by each player, given the strategies chosen by all the others. This basic concept was introduced by John Nash in ‘Non-cooperative Games’ (1951), and it is still known as the ‘Nash equilibrium’. It is based on the idea that, in equilibrium, the strategies must consist of replies which are ‘mutually best’, in the sense that no player is able to do better than he does, given the actions of the other players. Nash’s work permitted H. W. Kuhn to prove that every n-person perfect-information game (in which all players know the whole structure of the game, from the pay-offs to the possible moves of the others) has an equilibrium in pure strategies. Despite its interest, this theorem is not particularly potent, because there is nothing to exclude the case where a game has a high number of equilibria, so that it is not clear which outcome would prevail if all the players were rational. This is what actually happens in the great majority of games. Thus a recent branch of research has tried to refine the set of strategic equilibria with the use of the most varied auxiliary criteria. The results of such research are still controversial, as there is, in fact, no ‘objectively valid’ criterion on which to base such a refinement process.
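Nash’s condition of ‘mutually best’ replies is easy to check mechanically. The following sketch, whose pay-off data are a hypothetical 2 × 2 coordination game rather than an example from the text, finds every pure-strategy profile at which neither player can gain by deviating; it returns two equilibria, illustrating the multiplicity problem just discussed.

```python
# Find pure-strategy Nash equilibria of a two-player game by checking
# Nash's "mutually best reply" condition. The pay-off matrices below are
# an illustrative coordination game, not an example from the text.

# payoffs[row][col] = (row player's pay-off, column player's pay-off)
payoffs = [
    [(2, 2), (0, 0)],
    [(0, 0), (1, 1)],
]

def pure_nash_equilibria(payoffs):
    """Return all (row, col) profiles in which each strategy is a best reply."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(n_rows):
        for c in range(n_cols):
            # row player cannot improve by switching rows, given column c
            row_best = all(payoffs[r][c][0] >= payoffs[r2][c][0] for r2 in range(n_rows))
            # column player cannot improve by switching columns, given row r
            col_best = all(payoffs[r][c][1] >= payoffs[r][c2][1] for c2 in range(n_cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # -> [(0, 0), (1, 1)]
```

Both coordination outcomes satisfy the ‘mutually best reply’ condition, so the condition alone cannot say which would prevail; this is exactly the selection problem that the refinement literature tried to solve.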

The most important refinements have been made by John Harsanyi in ‘Games of Incomplete Information Played by Bayesian Players’ (1967–8) and in ‘The Tracing Procedure’ (1975). Harsanyi introduced a more general class of games, defined as ‘Bayesian games’, in which the players may not know with certainty the structure of the game. R. Selten, on the other hand, introduced the notion of ‘perfect equilibrium’ in ‘Re-examination of the Perfectness Concept for Equilibrium in Extensive Games’ (1975). He started from the observation that quite a number of Nash equilibria are imperfect, in the sense that they are based on threats of action depending on circumstances which never occur in the equilibrium situation, and which the players would never take into consideration if they could choose. The notion of ‘perfect equilibrium’ eliminates this kind of imperfection. An important recent synthesis of these two lines of research has been made by D. Kreps and R. Wilson with the notion of ‘sequential equilibrium’.

In the case of two-person zero-sum games, things are much clearer: von Neumann’s famous minimax theorem states, in fact, that, if the number of feasible pure strategies is finite, such games are determinate, or, rather, that they admit a single rational result in terms of mixed strategies. The theorem has had an enormous impact on the development of the subject (the demonstration of the existence of a competitive equilibrium was itself obtained from a generalization of one of the demonstrations of the minimax theorem), and for a long period zero-sum games were considered the proper field for the application of the theory.
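In the standard modern notation (ours, not the text’s), the minimax theorem for a finite two-person zero-sum game with pay-off matrix $A$ states:

```latex
\max_{x \in \Delta_m} \; \min_{y \in \Delta_n} \; x^{\mathsf{T}} A\, y
  \;=\;
\min_{y \in \Delta_n} \; \max_{x \in \Delta_m} \; x^{\mathsf{T}} A\, y
  \;=\; v ,
```

where $\Delta_m$ and $\Delta_n$ are the sets of mixed strategies of the two players and $v$ is the value of the game; the common value $v$ is the single rational result in mixed strategies to which the text refers.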

The modern theory, although it has pushed the original ideas of von Neumann and Morgenstern far forward, has encountered formidable problems. For example, even in the field of co-operative games there is usually a multiplicity of possible equilibria. The number and the nature of the equilibria associated with a certain game will thus be determined by the particular interpretation of the game, the set of strategies available to the players, and the ‘rationality criteria’ to which they adhere. There is no universally valid criterion of choice. Each proposed criterion selects ‘reasonable’ equilibria for certain games; but for other games it excludes some equally ‘reasonable’ equilibria in order to choose other, less ‘plausible’ ones.

Another difficulty, concerning complete- and perfect-information games, emerged in the 1950s and 1960s. In complete-information games, the players understand the nature of the game; in perfect-information ones, on the other hand, the players know both the nature of the game and all the preceding moves of the other players. This limited the field of the phenomena which the theory was able to tackle, and therefore restricted its possible applications in economics. The theoretical developments of the 1970s and 1980s, especially the work of Harsanyi and Selten, have partially remedied this deficiency.

A recent approach has considered repeated games, also called supergames. Strategic behaviour can change if, instead of playing ‘once and for all’, the individuals know that the game may be repeated a certain, perhaps indefinite, number of times. A typical example is the well-known ‘prisoner’s dilemma’, a non-zero-sum game which gives rise to a non-cooperative solution if it is played only once, but which can generate co-operative behaviour if it is repeated a certain number of times. R. Luce and H. Raiffa were among the first to highlight the dilemma. It consists in the fact that an egoistic choice is rational but does not lead to the best possible solution, while the co-operative choice is irrational for the person who makes it unless the reply of the other player is also co-operative. What is best for the individual is not necessarily best for the individuals taken together. The interest in repeated games is due to a theorem, known as the ‘Folk Theorem’, which establishes a basic analogy between repeated games and non-repeated co-operative games by pointing out that the emergence of factors such as ‘reputation’ or ‘credibility’, which are typical of repeated games, can naturally lead the players to explore the possibilities of co-operative solutions. This is because these factors can give efficacy to the agreements and threats that make co-operation possible. An experimental verification of the theorem was given by R. Axelrod (The Evolution of Cooperation, 1984), who demonstrated how co-operative results tend to prevail in a game repeated an indefinite number of times.
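The contrast between one-shot and repeated play can be sketched in a few lines. The pay-offs and the two strategies below are illustrative assumptions, not taken from Luce and Raiffa or Axelrod:

```python
# A minimal sketch of the repeated prisoner's dilemma with hypothetical
# pay-offs: one-shot reasoning rewards defection, but a reciprocating
# rule (tit-for-tat) sustains co-operation when the game is repeated.

PAYOFF = {  # (my move, opponent's move) -> my pay-off; 'C' = co-operate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Repeat the game; each strategy sees only the opponent's previous move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda last: 'C' if last is None else last   # start nice, then mirror
always_defect = lambda last: 'D'

print(play(tit_for_tat, tit_for_tat, 100))      # mutual co-operation: (300, 300)
print(play(always_defect, always_defect, 100))  # mutual defection: (100, 100)
```

Two reciprocating players earn 3 per round, while two egoists earn only 1: what is best for the individual in a single round is not best for the pair over the repeated game.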

In view of the close link between ‘oligopolistic indetermination’ and game theory, it is not surprising that the conceptual apparatus of the latter has found wide application in industrial economics. In his 1947 article, ‘Price Theory and Oligopoly’, K. W. Rothschild complained that, when dealing with oligopoly, economists let themselves be too influenced by analogies from mechanics and biology. Rothschild believed that in the study of oligopolistic situations it is preferable to refer to those fields of research that study moves and counter-moves, the fight for power, and strategic behaviour. In fact, the use of game theory has led, in recent times, to the revaluation of concepts such as entry barriers and the relationship between the structure of the market and technical change. As M. Shubik indicated in Strategy and Market Structure (1959), the most important result in this context has been the following: the causal link proceeding from the structure of the market to behaviour and performance has been replaced by the idea that the structure of the market, understood as the number and size of the firms operating in it, is endogenously determined by the strategic interactions of the firms.

Another fruitful area of application of game theory has been that of bargaining (theories of contracts, auctions, and collective bargaining), in which two or more individuals must come to an agreement on the division of a given stake, with the constraint that, in the case of a failure to reach agreement, nobody receives anything. This problem has allowed economists to give a precise definition to a key economic notion: that of ‘contractual power’. The more one party is ‘anxious’ to conclude an agreement, the more he will be disposed to give way. Ken Binmore’s ‘Modeling Rational Players’ (1987) is an important work in this context. In it, the traditional notion of ‘substantive rationality’ has been replaced by that of ‘algorithmic rationality’, which resumes and generalizes Herbert Simon’s famous notion of ‘procedural rationality’.

Finally, a very recent field of application of game theory is that of the theory of economic policy (monetary and fiscal policy and international economic co-operation), where Selten’s ‘perfection criterion’ and the notion of ‘sequential equilibrium’ have been widely used in relation to the concepts of ‘credibility’ and ‘reputation’ of players such as governments and unions. Among the most important works on this subject are R. J. Aumann and M. Kurz, ‘Power and Taxes’ (1977), and P. Dubey and M. Shubik, ‘A Theory of Money and Financial Institutions’ (1978).

The greater fertility of the economic applications of game theory by comparison with those of other mathematical instruments also depends, perhaps, on the fact that this theory was not borrowed from another discipline but was developed within economic research, which has favoured the formulation of concepts and formal procedures well suited to the representation of social and economic interactions.

However, it is important not to forget that there are still severe limitations in modelling within game theory. For example, the choices of the most appropriate notions of ‘individual rationality’ and ‘game equilibrium’ are multifarious and partially arbitrary. And even where a well-defined notion of equilibrium has been accepted, the problem often still remains, especially in supergames, of the multiplicity of outcomes. In any case, it remains true that game theory, because of its rigorous logical structure, enables economists to classify different types of rationality and equilibrium, and is becoming a viable alternative research approach to the neo-Walrasian one.

11.1.2 Evolutionary games and institutions

The theory of evolutionary games was developed within the area of biological studies, where it provided a simple and elegant formalization of Darwin’s theory of natural selection. It is based on the idea that evolution in a biological context depends on the differential reproduction of the ‘most suitable’ individuals or elements; this concept will be clarified later. John Maynard Smith summed up the first stage of research in his book, Evolution and the Theory of Games (1982). In condensing the results of over ten years of research, the great English biologist made known outside the circle of specialists the intriguing connection between the concept of a game, suitably reinterpreted for the ‘animal conflict’ context, and the more basic notions of strategic rationality developed in economic research and decision theory, primarily the notion of Nash equilibrium. After an initial period of cautious interest, from the early 1990s economists began to pay increasing attention to this new area of the discipline, to which they devoted more and more of their research efforts. Their interest can be traced back to the impasse that characterized the literature on the so-called refinements of Nash equilibria in the early part of that decade, as recalled in the previous paragraph.

For the notion of Nash equilibrium to be considered a really useful instrument, in the presence of a plurality of equilibria it was necessary to come up with a convincing and practical criterion for deciding which of them to select in relation to the structure and characteristics of the game. Several refinement criteria had been proposed during the 1980s, with much inventiveness and technical skill, but all appeared to have substantial limits. In particular, each refinement criterion appeared to have been tailor-made for a certain type of game and was quite inadequate in contexts other than those for which it had been conceived. The monumental work of John Harsanyi and Reinhard Selten, A General Theory of Equilibrium Selection in Games, published in 1988 after a long period of gestation, purported to have the last word on the argument by putting forward a general theory valid for every possible type of game. But, despite the enormous effort and the important results achieved, the ‘general theory’ was so complex and intricate that it was soon clear that the research programme had substantially failed.

The advent of the evolutionary theory of games was thus hailed with much satisfaction and a certain amount of relief: thanks to this theory it was at last possible to construct a ‘social dynamics’ through which players ended by converging on the choice of a unique Nash equilibrium, under well-defined conditions. This kind of choice, which seemed impossible in the aprioristic type of approach inherent in the theory of refinements, could be realized ex post, as the result of a process of interaction between a large number of agents. However, when it came to the point, the evolutionary approach too left problems to be solved. For instance, in the case of games with a sufficiently high number of strategies, evolutionary dynamics could easily give rise to complex behaviours that did not contemplate convergence towards a final Nash equilibrium.

This disappointment was soon overcome by another possible interpretation of evolutionary games, one that saw them in terms of bounded rationality. As Ken Binmore observed, the problem of the ‘eductionist’ approach to game theory lay precisely in the difficulty of building a priori a theory of strategic rationality valid in every circumstance, while through the alternative ‘evolutionary’ approach it was possible to demonstrate how and in what conditions certain a priori rational behaviours became socially diffused. They could be spread through social learning mechanisms even in the presence of players with modest calculating abilities and inflexible, substantially adaptive behavioural patterns, based on a Simon-type satisficing logic. Even with very simple models, the theory showed that rational behaviour, in the broadest sense, might not ensure maximum pay-offs for players. There was consequently a direct and unequivocal challenge to the Darwinian theory on which various utilitarian economists


had tried to base the hypothesis of Homo oeconomicus rationality. Contrary to a longstanding belief, a superior rationality did not necessarily imply greater possibilities of profit or survival in the presence of competitive interaction. These results opened up new and important prospects, particularly for those economists who were most dissatisfied with the ultimate implications of the traditional neoclassical approach.

The theory of evolutionary games puts forward an interesting static notion of equilibrium and an even more interesting specification of the dynamic selection process. The new notion of equilibrium is expressed in terms of an evolutionarily stable state, a condition that requires a strategy profile to be robust not only against single individual deviations (as in the case of Nash equilibrium) but also against deviations chosen simultaneously by an albeit small fraction of players. This significant innovation has, however, one important fault: in a given game there is no guarantee that a Nash equilibrium exists which satisfies that condition.
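In Maynard Smith’s standard formulation (notation ours, not the text’s), with $u(\cdot,\cdot)$ the pay-off function, a strategy $x$ is evolutionarily stable if, for every mutant strategy $y \neq x$,

```latex
u(x, x) > u(y, x)
\quad\text{or}\quad
\bigl[\, u(x, x) = u(y, x) \ \ \text{and}\ \ u(x, y) > u(y, y) \,\bigr].
```

The second clause is what makes the condition stronger than Nash’s: even a mutant that is an alternative best reply to $x$ must do worse against itself than $x$ does against it, so a small invading fraction of deviants is driven out.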

The most widely studied dynamic process in evolutionary game theory is the so-called replicator dynamics, according to which the weight of a strategy in a population of players increases more the higher the reward of that strategy is compared with the mean pay-off. In other words, strategies that do better than the mean are rewarded at the expense of those that do worse, and the more so the better they do. When replicator dynamics converge, they do so towards Nash equilibria, and they possess a certain number of interesting properties, such as the tendency to eliminate any strategy that is strictly dominated in the game.
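The elimination of strictly dominated strategies can be seen in a small simulation. The 3 × 3 pay-off matrix below is a hypothetical example (the third strategy is strictly dominated by the first), and the discrete-time update rule is a standard approximation of the continuous replicator equation, not a construction from the text:

```python
# A discrete-time sketch of replicator dynamics on a hypothetical
# 3-strategy game: each strategy's population share grows in proportion
# to its pay-off relative to the population mean.

A = [  # A[i][j]: pay-off to strategy i when matched against strategy j
    [3, 1, 4],
    [2, 2, 5],
    [1, 0, 3],   # strictly dominated by strategy 0 (1<3, 0<1, 3<4)
]

def step(x, dt=0.1):
    """One replicator update of the population shares x."""
    fitness = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    mean = sum(x[i] * fitness[i] for i in range(3))
    # growth proportional to (own fitness - mean fitness)
    x = [x[i] * (1 + dt * (fitness[i] - mean)) for i in range(3)]
    total = sum(x)
    return [xi / total for xi in x]

x = [1 / 3, 1 / 3, 1 / 3]   # start from equal shares
for _ in range(2000):
    x = step(x)
print([round(xi, 3) for xi in x])  # the dominated strategy's share -> 0
```

From equal initial shares the population converges to the pure Nash equilibrium in the second strategy, and the dominated third strategy is wiped out along the way, as the convergence properties cited above predict.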

In more recent years, economists have begun to use the notions of evolutionarily stable state and replicator dynamics to build models that can be applied to a wide variety of contexts, with particular emphasis on phenomena which were barely considered in the past, such as the formation of social conventions, the impact and evolution of social norms, and cultural evolution. Another fertile field of application concerns the classic problems of institutional change and the evolution of individual preferences. These problems can now be tackled in a new way. Today the theory of evolutionary games is becoming an alternative paradigm to the neoclassical approach, thanks also to its ability to explain endogenously crucial phenomena like the predominance of a certain level of rationality among economic agents and the survival conditions of non-self-interested individual motives. Models of evolutionary games are particularly suitable for the study of cultural evolution. In Evolution of the Social Contract, Brian Skyrms showed how rules of justice and co-operative types of rules evolve in time and in specific environmental contexts, even between self-interested subjects. Another fertile field of application deals with explaining the emergence of social rules such as those of reciprocity, sympathy, and altruism. Here the reference is The Complexity of Cooperation, by R. Axelrod, in which, for the first time, a genetic algorithm was applied to the theory of evolutionary games. Genetic algorithms were initially developed in studies on artificial intelligence; they reproduce the mechanisms that drive biological evolution in order to search for more efficient methods of ensuring adaptation to a plurality of environmental contexts. Referring to a ‘prisoner’s dilemma’ type of situation, Axelrod demonstrated that the tit-for-tat strategy which he himself ‘invented’ and expounded in his contribution of 1984 is very robust, and that cooperative and reciprocating strategies tend to prevail in a wide variety of situations.

To conclude, in the theoretical scenario opened up by the theory of evolutionary games there is a growing convergence among research undertaken by biologists, economists, and sociologists. The foundations are being laid for a science of social behaviour which goes far beyond the old disciplinary boundaries of positivist ascendancy.

11.1.3 The theory of endogenous growth

To understand the reasons behind the great success of endogenous growth theory in macroeconomics over the last decade, it is necessary to begin from the so-called ‘Solow residual’. This expression indicates all those determinants of the growth process that cannot be reduced to the contributions of labour and capital. With hindsight it can be said that one of the chief merits of Solow’s 1956 model (see section 9.2.4) was the idea that economic growth cannot be completely explained by increases in the stock of productive factors. Nonetheless, his theory left various questions unanswered: given that technical progress lies at the base of increases in the productivity of labour and capital, how can this phenomenon be modelled? How can an endogenous explanation be given for the long-run growth rate?
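In the standard growth-accounting rendering (a textbook formulation, not taken from this chapter), with a Cobb-Douglas aggregate function $Y = A K^{\alpha} L^{1-\alpha}$, the residual is what is left of output growth after the weighted factor contributions are subtracted:

```latex
\underbrace{\frac{\dot{A}}{A}}_{\text{Solow residual}}
  \;=\;
\frac{\dot{Y}}{Y} \;-\; \alpha\,\frac{\dot{K}}{K} \;-\; (1-\alpha)\,\frac{\dot{L}}{L}.
```

It is this unexplained component, attributed to technical progress, that the endogenous growth literature set out to model rather than take as exogenously given.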

Despite the objective importance of these questions and the innovative contributions made by economists such as Kenneth Arrow and Nicholas Kaldor, in the 1960s neoclassical students’ concern with growth problems waned, preference being given to business cycle theory. However, this period of oblivion was not to last for long. As N. Foss showed, beginning in the early 1980s economists gradually widened their interest in the business cycle to include the study of real growth factors, to the detriment of monetary factors. The work of F. Kydland and E. Prescott is an example of this shift; the trend heralded a change of climate which soon proved to be extremely favourable to the advent of the ‘new growth theory’.

The theory of endogenous growth officially came to light in 1986, the year of publication of P. Romer’s fundamental article ‘Increasing Returns and Long-Run Growth’. It describes a model of competitive equilibrium in which the growth process is guided by Marshall-type increasing returns. The aggregate production function has the following characteristics: output depends on the capital stock, on labour, and on the costs of R&D activities; in addition, the spillover from private research brings improvements in the stock of public knowledge. Knowledge is treated as an input with an increasing marginal productivity. The model therefore combines two basic hypotheses: competitive equilibrium and endogenous technological change. Romer pointed out that there are different forms of knowledge: at the one extreme we have basic scientific knowledge; at the other, knowledge intended as a set of specific instructions relating to a determined operation. Romer observed that ‘there is no reason to expect the accumulation determinants of these different types of knowledge to be the same [. . .] There is, therefore, no reason to expect a unified theory of knowledge growth’ (p. 1009).

One fundamental aspect shared by the numerous endogenous growth models is their characterization of knowledge as a public good: knowledge has in fact, at least to a certain extent, precisely the characteristics of non-excludability and non-rivalry of such goods. The idea is that the non-excludability of the new knowledge created by an individual firm generates positive effects on the production possibilities of other firms. The hypotheses of increasing returns and non-excludability of knowledge were already present in the work on learning by doing published by Arrow over two decades earlier. Arrow had put forward the idea that increasing returns appear as a direct effect of the discovery of new knowledge. This was the basic idea later developed by Romer.

One of the most important implications of Romer’s model is the following: the per capita outputs of different countries do not necessarily converge; less developed countries may well grow at a lower rate than more advanced countries, or indeed may not grow at all. Later, in a model elaborated in 1990, Romer considered an economy divided into three sectors (research, intermediate goods, and final goods) and characterized by monopolistically competitive markets. He furthermore assumed that, while the stock of technological knowledge was non-rival, human capital was rival. The model predicts that economies endowed with a higher stock of human capital grow at a faster rate than the others. There will also be an acceleration in growth as a consequence of opening up to international trade.

In 1988 Robert Lucas presented a model similar to Romer’s 1986 version, but assumed that two different sectors and two different types of investment (in physical capital and in human capital) are to be taken into account. The only magnitude hypothesized as exogenous was the population growth rate. While physical capital accumulates through the usual neoclassical mechanism, the growth of human capital is regulated by the following ‘law’: independently of the stock already achieved, a steady level of effort corresponds to a steady rate of growth of the stock. Lucas held that one of the chief merits of the ‘new growth theory’ lies in its capacity to elaborate formal models to explain the growth of both advanced and developing countries. In the 1960s, by contrast, the idea prevailed that it was necessary to resort to distinct theories. There are, furthermore, significant policy implications, since public authorities appear to have greater ‘room for manoeuvre’ in situations of endogenous growth. Solow’s model, where the long-run growth rate depends exclusively on exogenous technological change, ended by not assigning any role to institutional subjects outside the market; conversely, endogenous growth models admit, for example, that changes in the tax regime can influence the growth rate.
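The Lucas accumulation ‘law’ for human capital cited above is usually written (in notation that is ours, not the text’s) as

```latex
\dot{h} \;=\; \delta\,(1-u)\,h ,
\qquad\text{so that}\qquad
\frac{\dot{h}}{h} \;=\; \delta\,(1-u),
```

where $h$ is the stock of human capital, $1-u$ the constant share of effort devoted to accumulation, and $\delta$ a productivity parameter: a steady level of effort yields a steady growth rate, independently of the stock already achieved.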

Another important contribution to the theory of endogenous growth was made by P. Aghion and P. Howitt, who attempted to demonstrate that firms may accumulate knowledge through numerous channels, from formal education to product innovations. Great importance is attached to industrial innovations which improve product quality. The Schumpeterian view underlying Aghion and Howitt’s endogenous growth model can be summarized as follows: growth is the effect of technological progress which, in turn, depends on competition between firms that undertake research and create innovations. Every innovation gives rise to the production of a new type of intermediate good, as a result of which a final product can be produced in conditions of greater efficiency. From the individual firm’s point of view, the incentive to invest in research comes from the prospect of building up monopoly rents through the legal protection granted to innovations. On the other hand, in a dynamic context, those very rents are made fruitless by subsequent innovations, which render the old products obsolete almost as soon as they are produced. This analytical scheme also contemplates the existence of a strong relationship between the innovative firm’s market power and the degree of excludability of knowledge. This excludability in turn depends critically on the presence of legal institutions responsible for protecting ownership rights, as well as on the very nature of knowledge.

11.2 The Theory of Production as a Circular Process

11.2.1 Activity analysis and the non-substitution theorem

In Chapter 8 we spoke of a tradition in input–output analysis that originated at the beginning of the twentieth century in Russia with Dmitriev, and then emigrated to Germany with von Charasoff and von Bortkiewicz. There, in the second half of the 1920s, this tradition inspired the work of Leontief and Remak. In the same chapter we also spoke of Menger’s Viennese Kolloquium and of the work of Schlesinger and Wald on the problem of the existence of solutions in the general-equilibrium model, and we also mentioned von Neumann’s movements between Berlin and Vienna.

This line of thought was transplanted to America in the 1930s and there, after the Second World War, bore diverse and notable fruits. We have already mentioned von Neumann’s contribution to the birth of game theory and of balanced-growth models. We have also presented Leontief’s research in input–output analysis. Finally, in Chapter 10 we spoke of the influence exerted by these lines of thought on the development of the neo-Walrasian approach. Now we should like to examine another two important theoretical developments which also began after the Second World War, and which can be interpreted as developments and extensions of Leontief’s and von Neumann’s models: activity analysis and the non-substitution theorem. The classic work for both these theoretical developments is undoubtedly Activity Analysis of Production and Allocation (1951), a book edited by Tjalling Charles Koopmans.

Activity analysis is a generalization of von Neumann’s model in terms of linear programming, a generalization consisting of the introduction of diverse scarce resources and the use of the model to solve the problem of their efficient allocation. The first ideas on linear programming, as already mentioned, go back to Kantorovic’s 1939 study. But the theory only took off in 1947, after the discovery of the simplex method, an efficient way of solving a linear-programming problem. Its author was George Bernard Dantzig, who had discovered linear programming independently of Kantorovic. His first important work on the argument was Programming in a Linear Structure (1948); a second was published in the above-mentioned 1951 book edited by Koopmans.

The practical applications of linear programming have been especially important at the level of the firm. Here, however, we are interested in the theoretical applications, and the most important of these is activity analysis.

An activity is a technically feasible combination of inputs, that is, a technique. It is assumed that there are more activities than resources. The so-called 'primal' problem consists in choosing a vector of activity levels which maximizes the level of final output, given the prices of final goods, in such a way that no more than the resources available are used. The 'dual' problem, on the other hand, consists of the determination of prices which minimize the costs of the utilized resources in such a way that the production cost of each good produced is not lower than its price. The latter condition ensures that there are no profits.

The solution of the programming problems ensures several results: the determination of equilibrium levels of prices and production; the maximization of national output; the minimization of production costs; the choice of efficient techniques. In other words, we have not just an equilibrium but an optimal solution.
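The primal–dual pair just described can be sketched with a small linear program. All the numbers below are hypothetical, chosen only for illustration; the solver is SciPy's linprog, a modern stand-in for Dantzig's simplex method.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 activities (techniques), 2 scarce resources.
# A[i, j] = amount of resource i used per unit of activity j.
A = np.array([[2.0, 1.0, 3.0],
              [1.0, 2.0, 1.0]])
b = np.array([100.0, 80.0])    # available resource endowments
p = np.array([5.0, 4.0, 6.0])  # given prices of the final goods

# Primal: choose activity levels x >= 0 maximizing the value of final
# output p.x, using no more than the available resources (A x <= b).
primal = linprog(-p, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)

# Dual: choose resource prices y >= 0 minimizing the cost of the
# endowment b.y, subject to the production cost of each good (A' y)
# being not lower than its price p.
dual = linprog(b, A_ub=-A.T, b_ub=-p, bounds=[(0, None)] * 2)

# LP duality: maximal output value equals minimal resource cost,
# so no profits remain.
print(round(-primal.fun, 4), round(dual.fun, 4))  # → 280.0 280.0
```

The equality of the two optimal values is the formal counterpart of the 'no profits' condition in the text: the value of output exactly exhausts the imputed cost of the scarce resources.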

The work edited by Koopmans also included four articles on the so-called 'non-substitution theorem'. The authors were Georgescu-Roegen, Arrow, Koopmans, and Samuelson. Initially, this theorem was formulated as a theoretical application of Leontief's model. It is based on the following hypotheses: there is only one primary input, let us say labour; that input is indispensable for the production of all goods; each production process produces only one output; there are constant returns to scale and perfect competition. The problem is to choose the most profitable activity, i.e. that which minimizes costs. Now, under these hypotheses, prices and activity levels are independent, and the set of activities chosen as the most profitable to obtain a given vector of final demand remains the most profitable for the production of any other vector.

This latter result is of crucial importance. The activities are chosen by the entrepreneurs with the aim of minimizing costs; if there are no other primary inputs besides labour, there will be a unique set of activities which is the most profitable; as there are constant returns to scale, a technique which is most profitable at a certain level of activity is also so at any other level; therefore the choice of techniques does not change with variations in the composition of demand and the quantities produced, and prices depend solely on technical conditions of production.

There are two different interpretations of the relevance of this theorem. The first to be advanced interprets the term 'substitution' as a synonym for 'change of techniques'. In this case the theorem serves to demonstrate the robustness of Leontief's and similar models. The hypothesis of fixed coefficients, which appears in such models, is not restrictive, contrary to what some of Leontief's critics argued. In fact, the theorem demonstrates that the prevailing coefficients can be interpreted as those that have been chosen by the entrepreneurs from a vast range of technical possibilities, and that this choice is not modified by changes in final demand.
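A minimal numerical sketch of this result, with entirely hypothetical coefficients: two single-product industries, labour as the only primary input (wage set to 1), and two alternative activities per industry. Prices solve p = pA + w·l, and one choice of techniques turns out cheapest for every good at once, whatever the composition of final demand (demand never enters the calculation).

```python
import numpy as np
from itertools import product

# industry -> list of (input-coefficient column, labour coefficient);
# column j of A holds the inputs per unit of good j. All numbers hypothetical.
col = {
    0: [(np.array([0.1, 0.3]), 0.5), (np.array([0.2, 0.1]), 0.6)],
    1: [(np.array([0.4, 0.0]), 0.8), (np.array([0.2, 0.2]), 0.7)],
}

def prices(choice):
    """Prices solving p = pA + w*l (w = 1) for a given technique choice."""
    A = np.column_stack([col[j][choice[j]][0] for j in (0, 1)])
    l = np.array([col[j][choice[j]][1] for j in (0, 1)])
    return l @ np.linalg.inv(np.eye(2) - A)

all_p = {c: prices(c) for c in product(range(2), repeat=2)}

# The choice minimizing the price of good 0 also minimizes the price of
# good 1: one technique set dominates, independently of final demand.
best = min(all_p, key=lambda c: all_p[c][0])
assert all(np.all(all_p[best] <= p + 1e-12) for p in all_p.values())
print(best, np.round(all_p[best], 4))
```

With a second primary input, or with joint production, no such dominant choice need exist, which is exactly where the theorem fails, as discussed below.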

In another interpretation, the 'non-substitution theorem' aims to point out that, with variations in the composition of demand, there is no substitution among the primary factors. On the other hand, it is obvious that there cannot be substitution among primary factors when there is only one. Therefore, the relevance of the theorem would seem to lie in the fact that, when there is more than one primary factor, substitution is possible and takes place each time there is a change in consumer tastes. This seems to confirm the traditional neoclassical view that, in general, if the demand increases for a good with a high intensity of a certain factor, there will be an increase in the price of that good, but also in the demand for and the remuneration of the factor in question, and, consequently, the prices of all the other goods will change. In general, therefore, prices depend on the demand for final goods and the scarcity of primary inputs. We will see in the next section how much caution is needed to sustain an argument of this type.

Meanwhile, we should like to point out that there are cases in which the non-substitution theorem does not hold, and in which the substitution of primary factors plays no role at all. One of these is where the returns to scale are not constant. Here it is clear that variations in demand will have relevant effects on the cost of the goods and therefore also on their prices. But this has nothing to do with the substitution among primary factors. Another case is that of joint production. Here, generally, variations in demand change the convenience conditions for the activation of different processes, as a certain good can be produced jointly with some other by using different activities. Thus, variations in demand can cause changes in the techniques activated and therefore in the costs and the prices of goods. But, again, this has nothing to do with substitution among primary factors.

Finally, we should like to mention three studies of the 1960s, one by P. A. Samuelson, one by J. A. Mirrlees and one by J. E. Stiglitz, in which the theory was generalized with the introduction of an interest rate, and with a special case of joint production. The introduction of the interest rate modifies the results of the theorem in the sense that there is a different price system for each different value of the interest rate. The 'dynamic' character of the theorem would consist of the possibility of applying it to an economy which is growing in steady-state.

11.2.2 The debate on the theory of capital

Although the possibility of substitution among primary factors is excluded under the hypotheses of the non-substitution theorem, it would seem possible to argue for another type of substitution: that between capital and labour. Even excluding the effects of final demand on prices, is it not possible that a significant relationship exists between the demand for the 'productive factors', labour and capital, and their remunerations? If the prices of the factor services are indexes of scarcity, then the following should occur: with an increase in the wage–interest ratio, there should be an increase in the demand for the services of capital in relation to that of the services of labour. Under perfect competition the real compensations to factors should equal their marginal productivities; therefore, a decreasing function should link the capital intensity of the techniques to the relative cost of capital; a decrease in the marginal productivity of capital in relation to that of labour should be caused by the substitution of labour by capital.

This is the neoclassical theory of distribution. Already Wicksell, as we mentioned in Chapter 6, had noticed the strangeness of certain phenomena (later called 'Wicksell effects'), and pointed out the possibility of some 'paradoxes' in the relationship between the capital intensity of the techniques and the remuneration of capital. However, it was only in the debate of the 1950s and 1960s that the problem was solved. The debate was opened by J. V. Robinson with the article 'The Production Function and the Theory of Capital' (1953–4), in which she put forward an argument inspired by Sraffa's 'Introduction' (1951) to Ricardo's Principles: that the 'degree of mechanization' of a productive technique can increase, rather than decrease, following an increase in the interest–wage ratio. Robinson also noted that the origin of this strange effect is to be found in the impossibility of measuring capital in physical terms, given its heterogeneous composition, and the consequent necessity to measure it in value. Then D. Champernowne, in a comment on Robinson's article, while acknowledging the importance of the problem, suggested that it could be solved by measuring capital by means of an index of his own construction, although he admitted that his index might not work in some 'strange' cases. Robinson counter-attacked, especially in a section of The Accumulation of Capital (1956), where she pointed out that the strange relationship existing between the prices of factor services and the capital intensity of techniques is not due to purely 'financial' phenomena, as Champernowne seemed to suggest, but can be generated by real technical change.

In that year, by a strange historical quirk, the first neoclassical aggregate growth models came to light: those of Solow and Swan, already discussed in Chapter 9. These models used exactly the same aggregate production function and the same theory of capital which had been criticized by Robinson. This certainly helped to liven up the party. In 1960 Sraffa's Production of Commodities by Means of Commodities was published, a book which contained, albeit in very concise form, all the elements needed to clarify the question. At the same time, Garegnani's Il capitale nelle teorie della distribuzione was published, a book in which criticism of the neoclassical theory of capital was explicitly formulated.

Robinson's criticism was accepted without resistance by many neoclassical economists—for example by Morishima and Hicks. As late as 1962 and 1965, however, Samuelson and Levhari made attempts to resolve the problem in a different way from that suggested by Robinson. The debate reached its climax in 1966, when the Quarterly Journal of Economics published a special issue dedicated to capital theory. Decisive were Pasinetti's and Garegnani's contributions. But the most important one was Samuelson's 'Summing Up', in which he acknowledged the validity of the criticisms and, while trying to minimize their importance, he admitted the error inherent in the neoclassical theory of aggregate capital. This closed the debate, even if the aftermath continued until the early 1970s. The final word on this problem was given by Garegnani in 'Heterogeneous Capital, the Production Function and the Theory of Distribution' (1970).
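The 'paradox' at stake can be checked numerically along the lines of Samuelson's well-known example from the 1966 symposium: two techniques for producing the same good with dated labour inputs, whose cost ranking reverses twice as the interest rate rises.

```python
# Reswitching check, following the structure of Samuelson's 1966 example.
# Technique A: 7 units of labour applied two periods before output.
# Technique B: 2 units three periods before plus 6 units one period before.
# Unit costs in wage units, with labour compounded at interest rate r:
def cost_a(r):
    return 7 * (1 + r) ** 2

def cost_b(r):
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

# A is cheaper at low r, B is cheaper between the switch points
# r = 0.5 and r = 1.0, and A returns as the cheaper technique at high r.
for r in (0.0, 0.75, 1.5):
    print(r, "A" if cost_a(r) < cost_b(r) else "B")  # → A, B, A
```

The return of technique A at high interest rates is exactly what a monotonic relation between 'capital intensity' and the rate of interest would rule out, which is why the example proved decisive in the debate.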

In order to explain this subject in the simplest way, we will use a model of an economy in which only two goods are produced, a consumer good and a capital good, by means of capital and labour:

    p = w l_k + k_k p(1 + r)
    1 = w l_c + k_c p(1 + r)

The price of the consumer good is taken as numeraire; w is the real wage, p the price of the capital good, and r the rate of profit, which is equal to the rate of interest; l_k and l_c are the labour coefficients in the two industries, and k_c and k_k the capital coefficients. With a few simple algebraic passages it is possible to obtain, from the two equations, a decreasing function linking wages and profit:
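A sketch of those algebraic passages, using the same notation: solving the first price equation for p and substituting into the second yields the wage–profit frontier.

```latex
% From the first equation: p\,[1 - k_k(1+r)] = w\,l_k, hence
p = \frac{w\,l_k}{1 - k_k(1+r)}
% Substituting into 1 = w\,l_c + k_c\,p\,(1+r) and solving for w:
w = \frac{1 - k_k(1+r)}{l_c\,[1 - k_k(1+r)] + l_k\,k_c\,(1+r)}
```

For a viable system this w(r) falls from a positive maximum at r = 0 to zero where k_k(1 + r) = 1, which is the decreasing relation between wages and profit referred to above.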
