Microeconomics is the study of the behaviour of individuals and small organisations in making decisions on the allocation of limited resources. The modern field of microeconomics arose as an effort of the neoclassical school of economic thought to put economic ideas into mathematical form.
Traditional marginalism
An early attempt was made by Antoine Augustin Cournot in Researches on the Mathematical Principles of the Theory of Wealth (1838), which described a spring water duopoly that now bears his name. Later, William Stanley Jevons's Theory of Political Economy (1871), Carl Menger's Principles of Economics (1871), and Léon Walras's Elements of Pure Economics: Or the Theory of Social Wealth (1874–77) gave way to what was called the Marginal Revolution. A common thread in these works was the use of models or arguments characterised by rational economic agents maximising utility under a budget constraint. This arose partly from the need to argue against the labour theory of value associated with classical economists such as Adam Smith, David Ricardo and Karl Marx. Walras also went as far as developing the concept of general equilibrium of an economy.
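The consumer problem behind these marginalist models can be written, for two goods and a hypothetical utility function u, roughly as:

```latex
\max_{x_1,\,x_2} \; u(x_1, x_2)
\quad \text{subject to} \quad p_1 x_1 + p_2 x_2 \le m,
\qquad
\text{with interior optimum: } \frac{\partial u/\partial x_1}{\partial u/\partial x_2} = \frac{p_1}{p_2}.
```

That is, the marginal rate of substitution between the goods equals their price ratio; the notation here is a modern textbook sketch, not taken from the original works.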
Alfred Marshall's textbook, Principles of Economics, was published in 1890 and became the dominant textbook in England for a generation. His main point was that Jevons had gone too far in emphasising utility, over costs of production, as an explanation of prices. In the book he writes:
"There are few writers of modern times who have approached as near to the brilliant originality of Ricardo as
Jevons has done. But he appears to have judged both
Ricardo and
Mill harshly, and to have attributed to them doctrines narrower and less scientific than those they really held. Also, his desire to emphasise an aspect of value to which they had given insufficient prominence, was probably in some measure accountable for his saying, "Repeated reflection and inquiry have led me to the somewhat novel opinion that value depends entirely upon utility." (Theory, p. 1) This statement seems to be no less one-sided and fragmentary, and much more misleading, than that into which Ricardo often glided with careless brevity, as to the dependence of value on cost of production; but which he never regarded as more than a part of a larger doctrine, the rest of which he had tried to explain."
In the same appendix he further states:
"Perhaps Jevons' antagonism to Ricardo and Mill would have been less if he had not himself fallen into the habit of speaking of relations which really exist only between demand price and value as though they held between utility and value; and if he had emphasised as
Cournot had done, and as the use of mathematical forms might have been expected to lead him to do, that fundamental symmetry of the general relations in which demand and supply stand to value, which coexists with striking differences in the details of those relations. We must not indeed forget that, at the time at which he wrote, the demand side of the theory of value had been much neglected; and that he did excellent service by calling attention to it and developing it. There are few thinkers whose claims on our gratitude are as high and as various as those of Jevons: but that must not lead us to accept hastily his criticisms on his great predecessors."
Marshall's way of resolving the controversy was to show that the demand curve could be derived by aggregating individual consumer demand curves, which were themselves based on the consumer's problem of maximising utility, while the supply curve could be derived by superimposing representative firm supply curves for the factors of production; market equilibrium would then be given by the intersection of the demand and supply curves. He also introduced the notion of different market periods, mainly the short run and the long run. This set of ideas gave way to what economists call perfect competition, now found in the standard microeconomics texts, even though Marshall himself had stated:
"The process of substitution, of which we have been discussing the tendencies, is one form of competition; and it may be well to insist again that we do not assume that competition is perfect. Perfect competition requires a perfect knowledge of the state of the market; and though no great departure from the actual facts of life is involved in assuming this knowledge on the part of dealers when we are considering the course of business in
Lombard Street, the
Stock Exchange, or in a
wholesale Produce Market; it would be an altogether unreasonable assumption to make when we are examining the causes that govern the supply of labour in any of the lower grades of industry. For if a man had sufficient ability to know everything about the market for his labour, he would have too much to remain long in a low grade. The older economists, in constant contact as they were with the actual facts of business life, must have known this well enough; but partly for brevity and simplicity, partly because the term "
free competition" had become almost a catchword, partly because they had not sufficiently classified and conditioned their doctrines, they often seemed to imply that they did assume this perfect knowledge."
Jacob Viner presented an early procedure for constructing cost curves in his “Cost Curves and Supply Curves” (1931). The paper was an attempt to reconcile two streams of thought on the issue at the time: the idea that supplies of factors of production were given and independent of their rate of remuneration (the Austrian School), and the idea that they depended on the rate of remuneration (the English School, that is, the followers of Marshall). Viner argued that “the differences between the two schools would not affect qualitatively the character of the findings,” more specifically, “...that this concern is not of sufficient importance to bring about any change in the prices of the factors as a result of a change in its output.”
In Viner's terminology, now considered standard, the short run is a period long enough to permit any desired output change that is technologically possible without altering the scale of the plant, but not long enough to adjust that scale. He assumes, for simplicity, that all factors can in the short run be classified in two groups: those necessarily fixed in amount, and those freely variable.
The scale of plant is the size of the group of factors that are fixed in amount in the short run, and each scale is quantitatively indicated by the amount of output that can be produced at the lowest average cost possible at that scale. Costs associated with the fixed factors are fixed costs; those associated with the variable factors are direct costs. Note that fixed costs are fixed only in their aggregate amount and vary with output in their amount per unit, while direct costs vary in their aggregate amount as output varies, as well as in their amount per unit. The spreading of overhead is therefore a short-run phenomenon, not to be confused with the long run.
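Viner's distinction can be illustrated with hypothetical numbers: aggregate fixed cost stays constant while fixed cost per unit falls as output rises (the "spreading of overhead"), whereas direct cost rises both in aggregate and per unit.

```python
# Hypothetical short-run cost schedule illustrating Viner's distinction.
FIXED_COST = 100.0   # aggregate fixed cost, constant in the short run

def direct_cost(q):
    """Aggregate direct (variable) cost; rises with output."""
    return 2.0 * q + 0.1 * q ** 2

def average_fixed_cost(q):
    """Falls as output rises: overhead is 'spread' over more units."""
    return FIXED_COST / q

def average_direct_cost(q):
    """Rises with output, reflecting diminishing returns."""
    return direct_cost(q) / q

for q in (10, 20, 50):
    print(q, average_fixed_cost(q), average_direct_cost(q))
```

At q = 10 the average fixed cost is 10 per unit; at q = 50 it has fallen to 2, while the average direct cost has risen from 3 to 7.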
He explains that if the law of diminishing returns holds, so that output per unit of variable factor falls as total output rises, and if the prices of the factors remain constant, then average direct costs increase with output. Also, if atomistic competition prevails, that is, if the individual firm's output does not affect product prices, then the individual firm's short-run supply curve equals its short-run marginal cost curve. The supply curve for the industry can then be constructed by summing the abscissas of the individual marginal cost curves (horizontal summation). He also explains that:
- Internal economies of scale are primarily a long-run phenomenon and are due either to reductions in the technical coefficients of production (technical economies: increased productivity through improved organisation or methods of production) or to discounts resulting from larger size (pecuniary economies).
- Internal diseconomies of scale can be avoided by increasing industry output by increasing the number of plants without increasing the scale of the plant.
- External economies of scale are also either technical or pecuniary, but in this case are due to the aggregate behaviour of the industry, and refer to the size of output of the industry as a whole.
- External diseconomies of scale may occur if as industry output rises the unit price of factors and materials rises as well due to increasing competition for inputs with other industries.
It should be made clear that these long-run results hold only if producers are rational actors, that is, able to optimise their production so as to operate at an optimal scale of plant.
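The horizontal ("abscissa") summation of marginal cost curves described above can be sketched with hypothetical quadratic cost functions, where each firm supplies the output at which its marginal cost equals the price:

```python
# Hypothetical quadratic direct costs C_i(q) = a_i * q^2 give linear
# marginal costs MC_i(q) = 2 * a_i * q, so under atomistic competition
# each firm supplies q_i(p) = p / (2 * a_i).
firms_a = [0.5, 1.0, 2.0]   # illustrative cost parameters, one per firm

def firm_quantity(p, a):
    """Output at which the firm's marginal cost equals the price p."""
    return p / (2 * a)

def industry_supply(p):
    """Industry supply: the sum of firm quantities (abscissas) at price p."""
    return sum(firm_quantity(p, a) for a in firms_a)
```

For example, at p = 4 the three firms supply 4, 2 and 1 units respectively, so the industry supplies 7.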
Imperfect competition and game theory
In 1929 Harold Hotelling published "Stability in Competition", addressing the problem of instability in the classic Cournot model: Bertrand had criticised it for lacking an equilibrium when prices are the independent variables, and Edgeworth had constructed a dual monopoly model with correlated demand which also lacked stability. Hotelling proposed that demand typically varies continuously with relative prices, not discontinuously as suggested by the aforementioned authors. Following Sraffa, he argued for "the existence with reference to each seller of groups who will deal with him instead of his competitors in spite of difference in price". He also noticed that traditional models presuming a unique market price only make sense if the commodity is standardised and the market is a point: by analogy with a temperature model in physics, discontinuity in heat transfer (price changes) inside a body (market) would lead to instability. To show the point he built a model of a market located along a line with one seller at each end; in this case maximising profit for both sellers leads to a stable equilibrium. From this model it also follows that if a seller chooses the location of his store so as to maximise his profit, he will place it as close as possible to his competitor: "the sharper competition with his rival is offset by the greater number of buyers he has an advantage". He also argues that the clustering of stores is wasteful from the point of view of transportation costs and that the public interest would dictate more spatial dispersion.
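A numerical sketch of the line model, under standard textbook assumptions (consumers uniform on a unit line, linear transport cost t, common marginal cost c; the numbers are illustrative, not Hotelling's): iterating each seller's best-response price converges to a stable equilibrium at p = c + t.

```python
# Hotelling line: sellers at positions 0 and 1, consumers uniform on [0, 1].
# The marginal consumer at x is indifferent: p1 + t*x = p2 + t*(1 - x),
# so seller 1's demand share is x = 1/2 + (p2 - p1) / (2*t).
t, c = 1.0, 0.5   # illustrative transport cost and marginal cost

def best_response(p_other):
    # Maximising (p - c) * (1/2 + (p_other - p)/(2*t)) over p gives:
    return (p_other + c + t) / 2

p1 = p2 = c   # start both prices at marginal cost
for _ in range(100):
    p1, p2 = best_response(p2), best_response(p1)
# Prices converge to the stable equilibrium p* = c + t, splitting the market.
```

The iteration is a contraction (each step halves the distance to the fixed point), which is the stability Hotelling contrasted with the discontinuous Bertrand–Edgeworth dynamics.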
A new impetus was given to the field around 1933, when Joan Robinson and Edward H. Chamberlin published, respectively, The Economics of Imperfect Competition (1933) and The Theory of Monopolistic Competition (1933), introducing models of imperfect competition. Although the monopoly case was already treated in Marshall's Principles of Economics and Cournot had constructed models of duopoly and monopoly in 1838, a whole new set of models grew out of this new literature. In particular, the monopolistic competition model results in an inefficient equilibrium. Chamberlin defined monopolistic competition as "...a challenge to the traditional viewpoint of economics that competition and monopoly are alternatives and that individual prices are to be explained in terms of one or the other." He continues: "By contrast it is held that most economic situations are composites of both competition and monopoly, and that, wherever this is the case, a false view is given by neglecting either one of the two forces and regarding the situation as made up entirely of the other."
William Baumol provided in his 1977 paper the current formal definition of a natural monopoly, as “an industry in which multifirm production is more costly than production by a monopoly” (p. 810): mathematically, this is equivalent to subadditivity of the cost function. He then sets out to prove 12 propositions relating strict economies of scale, ray average costs, ray concavity and transray convexity: in particular, strictly declining ray average cost implies strict ray subadditivity, and global economies of scale are sufficient but not necessary for strict ray subadditivity.
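In the single-output case, subadditivity can be checked directly for a hypothetical cost function with a fixed cost: splitting production across two firms duplicates the fixed cost, so a single firm is always cheaper.

```python
# Subadditivity check for a hypothetical cost function C(q) = F + m*q.
# Splitting output across two firms duplicates the fixed cost F, so
# C(q1 + q2) < C(q1) + C(q2) for all q1, q2 > 0: a natural monopoly.
F, m = 50.0, 2.0   # illustrative fixed and marginal cost

def cost(q):
    return F + m * q if q > 0 else 0.0

def is_subadditive_at(q1, q2):
    """True when single-firm production of q1 + q2 is cheaper than a split."""
    return cost(q1 + q2) < cost(q1) + cost(q2)

assert all(is_subadditive_at(a, b) for a in (1, 10, 100) for b in (1, 10, 100))
```

For instance, cost(12) = 74 while cost(5) + cost(7) = 124; Baumol's ray concepts generalise exactly this comparison to multi-output cost functions.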
In a 1982 paper Baumol defined a contestable market as one in which "entry is absolutely free and exit absolutely costless", with freedom of entry in the Stigler sense: the incumbent has no cost discrimination against entrants. He states that a contestable market will never have an economic profit greater than zero in equilibrium, and that the equilibrium will also be efficient. According to Baumol this equilibrium emerges endogenously from the nature of contestable markets: the only industry structure that survives in the long run is the one that minimises total costs. This contrasts with the older theory of industry structure, since not only is industry structure not exogenously given, but equilibrium is reached without ad hoc hypotheses about the behaviour of firms, such as reaction functions in a duopoly. He concludes the paper by commenting that regulators who seek to impede entry and/or exit of firms would do better not to interfere if the market in question resembles a contestable market.
Externalities and market failure
In 1937, “The Nature of the Firm” was published by Coase, introducing the notion of transaction costs (the term itself was coined in the fifties), which explained why firms have an advantage over a group of independent contractors working with each other. The idea was that there are transaction costs in the use of the market (search and information costs, bargaining costs, etc.) which give an advantage to a firm that can internalise the production process required to deliver a certain good to the market. A related result was published by Coase in “The Problem of Social Cost” (1960), which analyses solutions to the problem of externalities through bargaining. In it he first describes a cattle herd invading a farmer's crop, and then discusses four legal cases: Sturges v Bridgman, Cooke v Forbes, Bryant v Lefever, and Bass v Gregory. He then states:
"In earlier sections, when dealing with the problem of rearrangement of
legal rights through the market, it was argued that such a rearrangement would be made through the market whenever this would lead to an increase in the value of production. But this assumed costless market transactions. Once the costs of carrying out market transactions are taken into account it is clear that such rearrangement of rights will only be undertaken when the increase in the value of production consequent upon the rearrangement is greater than the costs which would be involved in bringing it about. When it is less, the granting of an
injunction (or the knowledge that it would be granted) or the
liability to pay damages may result in an activity being discontinued (or may prevent its being started) which would be undertaken if market transactions were costless. In these conditions the initial delimitation of legal rights does have an effect on the efficiency with which the economic system operates. One arrangement of rights may bring about a greater value of production than any other. But unless this is the arrangement of rights established by the legal system, the costs of reaching the same result by altering and combining rights through the market may be so great that this optimal arrangement of rights, and the greater value of production which it would bring, may never be achieved."
This then becomes relevant in the context of regulations. He argues against the Pigovian tradition:
"...The problem which we face in dealing with actions which have harmful effects is not simply one of restraining those responsible for them. What has to be decided is whether the gain from preventing the harm is greater than the loss which would be suffered elsewhere as a result of stopping the action which produces the harm. In a world in which there are costs of rearranging the rights established by the legal system, the courts, in cases relating to
nuisance, are, in effect, making a decision on the economic problem and determining how resources are to be employed. It was argued that the courts are conscious of this and that they often make, although not always in a very explicit fashion, a comparison between what would be gained and what lost by preventing actions which have harmful effects. But the delimitation of rights is also the result of
statutory enactments. Here we also find evidence of an appreciation of the reciprocal nature of the problem. While statutory enactments add to the list of nuisances, action is also taken to legalise what would otherwise be nuisances under the
common law. The kind of situation which economists are prone to consider as requiring Government action is, in fact, often the result of Government action. Such action is not necessarily unwise. But there is a real danger that extensive Government intervention in the economic system may lead to the protection of those responsible for harmful effects being carried too far."
This period also marks the beginning of mathematical modelling of public goods, with Samuelson's “The Pure Theory of Public Expenditure” (1954), in which he gives a set of equations for the efficient provision of public goods (he called them collective consumption goods), now known as the Samuelson condition.
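In modern notation (a textbook rendering, not Samuelson's original symbols), the condition requires the sum of households' marginal rates of substitution between the public good and a private good to equal the marginal rate of transformation:

```latex
\sum_{i=1}^{n} MRS_i^{G,x} = MRT^{G,x},
```

where \(G\) is the public good and \(x\) a private good: since everyone consumes the same \(G\), marginal valuations are summed across households rather than equated individually to the price ratio.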
He then gives a description of what is now called the free rider problem:
"However no
decentralised pricing system can serve to determine
optimally these levels of
collective consumption. Other kinds of "voting" or "signalling" would have to be tried. But, and this is the point sensed by
Wicksell but perhaps not fully appreciated by
Lindahl, now it is in the selfish interest of each person to give false signals, to pretend to have less interest in a given collective consumption activity than he has, etc."

Around the 1970s the study of market failures again came into focus with the study of information asymmetry. Three authors in particular emerged from this period: Akerlof, Spence, and Stiglitz. In his classic “The Market for Lemons” (1970), Akerlof considered how, because of asymmetric information between buyers and sellers, bad-quality cars can drive good-quality cars out of the market. Spence explained that signaling is fundamental in the labour market: since employers can't know beforehand which candidate is the most productive, a college degree becomes a signaling device that a firm uses to select new personnel.
A synthesising paper of this era is “Externalities in Economies with Imperfect Information and Incomplete Markets” by Stiglitz and Greenwald. The basic model consists of households that maximise a utility function, firms that maximise profit, and a government that produces nothing, collects taxes, and distributes the proceeds. An initial equilibrium with no taxes is assumed to exist; a vector x of household consumption and a vector z of other variables that affect household utilities (externalities) are defined, along with a vector π of profits and a vector E of household expenditures. Since the envelope theorem holds, if the initial untaxed equilibrium is Pareto optimal, then it follows that the dot products Π (between π and the time derivative of z) and B (between E and the time derivative of z) must equal each other. They state:
"Except in the special case (which is unlikely to hold generically) where Î and B exactly cancel each other out, the existence of these externalities will make the initial equilibrium inefficient and guarantee the existence of welfare-improving tax measures."
One application of this result is to the already mentioned Market for Lemons, which deals with adverse selection: households buy from a pool of goods of heterogeneous quality, considering only the average quality. Since in general the resulting equilibrium is not efficient, any tax that raises average quality is beneficial (in the sense of optimal taxation). Other applications considered by the authors include tax distortions, signaling, screening, moral hazard, incomplete markets, queue rationing, unemployment and rationing equilibrium.
Behavioural economics
Kahneman and Tversky published a paper in 1979 criticising the very idea of the rational economic agent. Their main point is that there is an asymmetry in the psychology of the economic agent that gives a much higher weight to losses than to gains. This article is usually regarded as the beginning of behavioural economics and has consequences particularly for the world of finance. The authors summed up the idea in the abstract as follows:
"...In particular, people underweight outcomes that are merely probable in comparison with outcomes that are obtained with certainty. This tendency, called
certainty effect, contributes to risk aversion in choices involving sure gains and to risk seeking in choices involving sure losses. In addition, people generally discard components that are shared by all prospects under consideration. This tendency, called the
isolation effect, leads to inconsistent
preferences when the same choice is presented in different forms."
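The loss/gain asymmetry is usually modelled with a value function that is steeper for losses than for gains. The sketch below uses the often-cited Tversky–Kahneman parameter estimates (α = 0.88, λ = 2.25) from their later work, purely as an illustration:

```python
# Illustrative prospect-theory value function. Parameters are the commonly
# cited Tversky-Kahneman estimates, used here only as an example.
ALPHA = 0.88    # diminishing sensitivity to the size of gains and losses
LAMBDA = 2.25   # loss aversion: losses loom larger than equal gains

def value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0) from a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

# A loss hurts more than an equal gain pleases:
print(value(100), value(-100))   # the loss has the larger magnitude
```

Because value is measured from a reference point rather than from total wealth, the same prospect framed as a gain or as a loss can produce the inconsistent preferences the abstract describes.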
Great Recession and executive compensation
More recently, the Great Recession and the ongoing controversy over executive compensation have brought the principal–agent problem back to the centre of debate, in particular regarding corporate governance and problems with incentive structures.
(Source: https://en.wikipedia.org/wiki/History_of_microeconomics)