New Keynesian economics


New Keynesian economics is a school of macroeconomics that strives to provide microeconomic foundations for Keynesian economics. It developed partly as a response to criticisms of Keynesian macroeconomics by adherents of new classical macroeconomics.

Two main assumptions define the New Keynesian approach to macroeconomics. Like the New Classical approach, New Keynesian macroeconomic analysis commonly assumes that households and firms have rational expectations. However, the two schools differ in that New Keynesian analysis commonly assumes a range of market failures. In particular, New Keynesians assume that there is imperfect competition in price and wage setting to help explain why prices and wages can become "sticky", which means they do not adjust instantaneously to changes in economic conditions.

Wage and price stickiness, and the other market failures present in New Keynesian models, imply that the economy may fail to attain full employment. Therefore, New Keynesians argue that macroeconomic stabilization by the government (using fiscal policy) and the central bank (using monetary policy) can lead to a more efficient macroeconomic outcome than a laissez faire policy would.

New Keynesianism became part of the new neoclassical synthesis - which is now usually referred to simply as New Keynesian economics - that incorporated parts of both it and new classical macroeconomics, and forms the theoretical basis of mainstream macroeconomics today.

Development of New Keynesian economics


The first wave of New Keynesian economics developed in the late 1970s. The first model of sticky information was developed by Stanley Fischer in his 1977 article, "Long-Term Contracts, Rational Expectations, and the Optimal Money Supply Rule". He adopted a "staggered" or "overlapping" contract model. Suppose that there are two unions in the economy, who take turns to choose wages. When it is a union's turn, it chooses the wages it will set for the next two periods. This contrasts with John B. Taylor's model, where the nominal wage is constant over the contract life, as subsequently developed in his two articles: one in 1979, "Staggered wage setting in a macro model", and one in 1980, "Aggregate Dynamics and Staggered Contracts". Both Taylor and Fischer contracts share the feature that only the unions setting the wage in the current period are using the latest information: wages in half of the economy still reflect old information. The Taylor model had sticky nominal wages in addition to the sticky information: nominal wages had to be constant over the length of the contract (two periods). These early new Keynesian theories were based on the basic idea that, given fixed nominal wages, a monetary authority (central bank) can control the employment rate. Since wages are fixed at a nominal rate, the monetary authority can control the real wage (wage values adjusted for inflation) by changing the money supply and thus affect the employment rate.
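The difference between the two contract forms can be shown with a small sketch in Python. This is purely illustrative: the wage-setting rules below are hypothetical placeholders, not the actual equations in Fischer's or Taylor's papers.

```python
# Illustrative sketch: staggered two-period wage contracts.
# Two unions alternate; the union whose turn it is signs a contract covering the
# next two periods, using only the price forecasts available at signing time.

def taylor_contract(expected_price_now, expected_price_next):
    """Taylor-style: one nominal wage fixed for both periods (hypothetical rule)."""
    w = 0.5 * (expected_price_now + expected_price_next)
    return (w, w)

def fischer_contract(expected_price_now, expected_price_next):
    """Fischer-style: a possibly different pre-set wage for each period (hypothetical rule)."""
    return (expected_price_now, expected_price_next)

# In any given period, only the union signing a new contract uses the latest
# forecasts; the other union's wage still reflects forecasts made a period earlier.
print(taylor_contract(1.00, 1.05))   # (1.025, 1.025)
print(fischer_contract(1.00, 1.05))  # (1.0, 1.05)
```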

In the 1980s the key concept of using menu costs in a framework of imperfect competition to explain price stickiness was developed. The concept of a lump-sum cost (menu cost) of changing the price was originally introduced by Sheshinski and Weiss (1977) in their paper looking at the effect of inflation on the frequency of price changes. The idea of applying it as a general theory of nominal price rigidity was simultaneously put forward by several economists in 1985-86. George Akerlof and Janet Yellen put forward the idea that due to bounded rationality firms will not want to change their price unless the benefit is more than a small amount. This bounded rationality leads to inertia in nominal prices and wages which can lead to output fluctuating at constant nominal prices and wages. Gregory Mankiw took the menu-cost idea and focused on the welfare effects of changes in output resulting from sticky prices. Michael Parkin also put forward the idea. Although the approach initially focused mainly on the rigidity of nominal prices, it was extended to wages and prices by Olivier Blanchard and Nobuhiro Kiyotaki in their influential article "Monopolistic Competition and the Effects of Aggregate Demand". Huw Dixon and Claus Hansen showed that even if menu costs applied to a small sector of the economy, this would influence the rest of the economy and lead to prices in the rest of the economy becoming less responsive to changes in demand.

While some studies suggested that menu costs are too small to have much of an aggregate impact, Laurence M. Ball and David Romer showed in 1990 that real rigidities could interact with nominal rigidities to create significant disequilibrium. Real rigidities arise whenever a firm is slow to adjust its real prices in response to a changing economic environment. For example, a firm can face real rigidities if it has market power or if its costs for inputs and wages are locked in by a contract. Ball and Romer argued that real rigidities in the labor market keep a firm's costs high, which makes firms hesitant to cut prices and lose revenue. The expense created by real rigidities combined with the menu cost of changing prices makes it less likely that a firm will cut prices to a market-clearing level.

Even if prices are perfectly flexible, imperfect competition can affect the influence of fiscal policy in terms of the multiplier. Huw Dixon and Gregory Mankiw independently developed simple general equilibrium models showing that the fiscal multiplier could be increasing with the degree of imperfect competition in the output market. The reason for this is that imperfect competition in the output market tends to reduce the real wage, leading the household to substitute away from consumption towards leisure. When government spending is increased, the corresponding increase in lump-sum taxation causes both leisure and consumption to decrease (assuming that they are both normal goods). The greater the degree of imperfect competition in the output market, the lower the real wage and hence the more the reduction falls on leisure (i.e. households work more) and less on consumption. Hence the fiscal multiplier is less than one, but increasing in the degree of imperfect competition in the output market.

In 1983 Guillermo Calvo wrote "Staggered Prices in a Utility-Maximizing Framework". The original article was written in a continuous time mathematical framework, but nowadays is mostly used in its discrete time version. The Calvo model has become the most common way to model nominal rigidity in new Keynesian models. There is a probability h that the firm can reset its price in any one period (the hazard rate), or equivalently the probability 1-h that the price will remain unchanged in that period (the survival rate). The probability h is sometimes called the "Calvo probability" in this context. In the Calvo model the crucial feature is that the price-setter does not know how long the nominal price will remain in place, in contrast to the Taylor model where the length of contract is known ex ante.
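As a small numerical illustration of the Calvo timing (the reset probability below is an assumed value; this is a sketch of the mechanism, not part of Calvo's paper), the length of a price spell is geometrically distributed, so its expected duration is 1/h:

```python
import random

def simulate_price_spell(h, rng=random.random):
    """Number of periods a newly set price survives before the firm next resets it."""
    duration = 1
    while rng() > h:          # with probability 1 - h the price survives another period
        duration += 1
    return duration

h = 0.25                      # assumed Calvo reset probability per period
spells = [simulate_price_spell(h) for _ in range(100_000)]
print(sum(spells) / len(spells))   # close to the theoretical mean 1 / h = 4 periods
```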

Coordination failure was another important new Keynesian concept developed as another potential explanation for recessions and unemployment. In recessions a factory can go idle even though there are people willing to work in it, and people willing to buy its production if they had jobs. In such a scenario, economic downturns appear to be the result of coordination failure: the invisible hand fails to coordinate the usual, optimal flow of production and consumption. Russell Cooper and Andrew John's 1988 paper "Coordinating Coordination Failures in Keynesian Models" expressed a general form of coordination as models with multiple equilibria where agents could coordinate to improve (or at least not harm) each of their respective situations. Cooper and John based their work on earlier models including Peter Diamond's 1982 coconut model, which demonstrated a case of coordination failure involving search and matching theory. In Diamond's model producers are more likely to produce if they see others producing. The increase in possible trading partners increases the likelihood of a given producer finding someone to trade with. As in other cases of coordination failure, Diamond's model has multiple equilibria, and the welfare of one agent is dependent on the decisions of others. Diamond's model is an example of a "thick-market externality" that causes markets to function better when more people and firms participate in them. Other potential sources of coordination failure include self-fulfilling prophecies. If a firm anticipates a fall in demand, it might cut back on hiring. A lack of job vacancies might worry workers who then cut back on their consumption. This fall in demand meets the firm's expectations, but it is entirely due to the firm's own actions.
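A stylized sketch of how strategic complementarity produces multiple equilibria (the payoff structure here is an assumption for illustration, not Diamond's or Cooper and John's actual model):

```python
# Each of many producers is active only if the payoff from producing, which rises
# with the share of other producers who are active, covers a fixed cost.

def best_response_share(expected_share, cost=0.5):
    """Share of producers who produce, given the share they expect to be active."""
    payoff = expected_share            # assumed: payoff equals the expected active share
    return 1.0 if payoff >= cost else 0.0

# Iterating beliefs from different starting points converges to different equilibria.
for start in (0.1, 0.9):
    share = start
    for _ in range(20):
        share = best_response_share(share)
    print(f"starting belief {start} -> equilibrium activity {share}")
# A low-activity equilibrium (0.0) and a high-activity one (1.0) coexist;
# each agent's welfare depends on what everyone else does.
```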

New Keynesians offered explanations for the failure of the labor market to clear. In a Walrasian market, unemployed workers bid down wages until the demand for workers meets the supply. If markets are Walrasian, the ranks of the unemployed would be limited to workers transitioning between jobs and workers who choose not to work because wages are too low to attract them. New Keynesians developed several theories explaining why markets might leave willing workers unemployed. The most important of these theories to new Keynesians was efficiency wage theory, which has also been used to explain long-term effects of previous unemployment, where short-term increases in unemployment become permanent and lead to higher levels of unemployment in the long run.

In efficiency wage models, workers are paid at levels that maximize productivity instead of clearing the market. For example, in developing countries, firms might pay more than a market rate to ensure their workers can afford enough nutrition to be productive. Firms might also pay higher wages to increase loyalty and morale, possibly leading to better productivity. Firms can also pay higher than market wages to forestall shirking. Shirking models were especially influential. Carl Shapiro and Joseph Stiglitz's 1984 paper "Equilibrium Unemployment as a Worker Discipline Device" created a model where employees tend to avoid work unless firms can monitor worker effort and threaten slacking employees with unemployment. If the economy is at full employment, a fired shirker simply moves to a new job. Individual firms pay their workers a premium over the market rate to ensure their workers would rather work and keep their current job instead of shirking and risking having to move to a new job. Since each firm pays more than market-clearing wages, the aggregated labor market fails to clear. This creates a pool of unemployed laborers and adds to the expense of getting fired. Workers not only risk a lower wage, they risk being stuck in the pool of unemployed. Keeping wages above market-clearing levels creates a serious disincentive to shirk that makes workers more efficient even though it leaves some willing workers unemployed.
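A stylized payoff comparison showing the deterrence logic (the numbers and the one-period payoff structure are assumptions for illustration, not Shapiro and Stiglitz's actual equations):

```python
def shirking_pays(wage, effort_cost, detection_prob, outside_value):
    """True if the expected payoff from shirking exceeds the payoff from working."""
    payoff_work = wage - effort_cost
    payoff_shirk = (1 - detection_prob) * wage + detection_prob * outside_value
    return payoff_shirk > payoff_work

# At full employment a fired shirker walks into a comparable job (high outside value),
# so even the market wage cannot deter shirking.
print(shirking_pays(wage=1.0, effort_cost=0.2, detection_prob=0.3, outside_value=0.9))   # True

# A wage premium plus a pool of unemployment (low outside value) makes shirking unattractive.
print(shirking_pays(wage=1.3, effort_cost=0.2, detection_prob=0.3, outside_value=0.5))   # False
```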

In the early 1990s, economists began to combine the elements of new Keynesian economics developed in the 1980s and earlier with Real Business Cycle Theory. RBC models were dynamic but assumed perfect competition; new Keynesian models were primarily static but based on imperfect competition. The new neoclassical synthesis essentially combined the dynamic aspects of RBC with the imperfect competition and nominal rigidities of new Keynesian models. Tack Yun was one of the first to do this, in a model that used the Calvo pricing model. Goodfriend and King proposed a list of four elements that are central to the new synthesis: intertemporal optimization, rational expectations, imperfect competition, and costly price adjustment (menu costs). Goodfriend and King also found that the consensus models produce certain policy implications: whilst monetary policy can affect real output in the short run, there is no long-run trade-off: money is not neutral in the short run but it is in the long run. Inflation has negative welfare effects. It is important for central banks to maintain credibility through rules-based policy like inflation targeting.

In 1993, John B. Taylor formulated the idea of a Taylor rule, which is a reduced-form approximation of the responsiveness of the nominal interest rate, as set by the central bank, to changes in inflation, output, or other economic conditions. In particular, the rule describes how, for each one-percent increase in inflation, the central bank tends to raise the nominal interest rate by more than one percentage point. This aspect of the rule is often called the Taylor principle. Although such rules provide concise, descriptive proxies for central bank policy, they are not, in practice, explicitly and prescriptively followed by central banks when setting nominal rates.

Taylor's original version of the rule describes how the nominal interest rate responds to divergences of actual inflation rates from target inflation rates and of actual gross domestic product (GDP) from potential GDP:

$$ i_t = \pi_t + r_t^* + a_\pi(\pi_t - \pi_t^*) + a_y(y_t - \bar{y}_t) $$

In this equation, $i_t$ is the target short-term nominal interest rate, $\pi_t$ is the rate of inflation as measured by the GDP deflator, $\pi_t^*$ is the desired rate of inflation, $r_t^*$ is the assumed equilibrium real interest rate, $y_t$ is the logarithm of real GDP, and $\bar{y}_t$ is the logarithm of potential output, as determined by a linear trend.
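A minimal sketch of this rule in Python (the 0.5 coefficients are the values Taylor suggested in his 1993 paper; the remaining inputs are illustrative numbers):

```python
def taylor_rule(inflation, target_inflation=2.0, real_rate=2.0,
                log_gdp=0.0, log_potential_gdp=0.0, a_pi=0.5, a_y=0.5):
    """Nominal rate implied by i = pi + r* + a_pi*(pi - pi*) + a_y*(y - y_bar)."""
    output_gap = log_gdp - log_potential_gdp
    return inflation + real_rate + a_pi * (inflation - target_inflation) + a_y * output_gap

# Inflation one point above target with a closed output gap: the rule raises the
# nominal rate by more than one point (the Taylor principle).
print(taylor_rule(inflation=3.0))  # 5.5, versus 4.0 when inflation is at the 2.0 target
```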

The New Keynesian Phillips curve was originally derived by Roberts in 1995, and has since been used in most state-of-the-art New Keynesian DSGE models. The new Keynesian Phillips curve says that this period's inflation depends on current output and the expectations of next period's inflation. The curve is derived from the dynamic Calvo model of pricing and in mathematical terms is:

$$ \pi_t = \beta E_t[\pi_{t+1}] + \kappa y_t $$

The current period t expectations of next period's inflation are incorporated as $\beta E_t[\pi_{t+1}]$, where $\beta$ is the discount factor. The constant $\kappa$ captures the response of inflation to output, and is largely determined by the probability of changing price in any period, which is $h$:

$$ \kappa = \frac{h\,[1 - (1-h)\beta]}{1-h} $$

The less rigid nominal prices are (the higher is $h$), the greater is $\kappa$, and hence the greater the effect of output on current inflation.
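A minimal sketch of the curve above, assuming parameter values and a perfectly foreseen output-gap path (an illustration only, not the solution method used in DSGE models):

```python
beta = 0.99               # discount factor (assumed)
h = 0.25                  # probability of resetting the price each period (assumed)
kappa = h * (1 - (1 - h) * beta) / (1 - h)

def inflation_path(output_gaps, terminal_expected_inflation=0.0):
    """pi_t = beta * pi_{t+1} + kappa * y_t, iterated backwards from a terminal expectation."""
    pi_next = terminal_expected_inflation
    path = []
    for y in reversed(output_gaps):
        pi = beta * pi_next + kappa * y
        path.append(pi)
        pi_next = pi
    return list(reversed(path))

# A temporary positive output gap raises inflation today, partly through
# expected future inflation.
print(round(kappa, 3))                                          # 0.086
print([round(p, 3) for p in inflation_path([1.0, 1.0, 0.0, 0.0])])  # [0.171, 0.086, 0.0, 0.0]
```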

The ideas developed in the 1990s were put together to develop the new Keynesian dynamic stochastic general equilibrium (DSGE) models used to analyze monetary policy. This culminated in the three-equation new Keynesian model found in the survey by Richard Clarida, Jordi Gali, and Mark Gertler in the Journal of Economic Literature. It combines the two equations of the new Keynesian Phillips curve and the Taylor rule with the dynamic IS curve derived from the optimal dynamic consumption equation (the household's Euler equation).

These three equations formed a relatively simple model which could be used for the theoretical analysis of policy issues. However, the model was oversimplified in some respects (for example, there is no capital or investment). Also, it does not perform well empirically.
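For concreteness, a common baseline statement of the three equations is sketched below. Notation and exact functional forms vary across presentations, so this is a standard textbook version rather than necessarily Clarida, Gali and Gertler's exact specification; here $\sigma$ governs intertemporal substitution, $r^n_t$ is the natural real rate, and $\phi_\pi$, $\phi_y$ are policy-rule coefficients.

```latex
% Dynamic IS curve (log-linearized household Euler equation)
y_t = E_t[y_{t+1}] - \frac{1}{\sigma}\left(i_t - E_t[\pi_{t+1}] - r^n_t\right)

% New Keynesian Phillips curve
\pi_t = \beta E_t[\pi_{t+1}] + \kappa y_t

% Monetary policy (Taylor-type) rule
i_t = r^n_t + \phi_\pi \pi_t + \phi_y y_t
```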

In the new millennium there have been several advances in new Keynesian economics.

Whilst the models of the 1990s focused on sticky prices in the output market, in 2000 Christopher Erceg, Dale Henderson and Andrew Levin adopted the Blanchard and Kiyotaki model of unionized labor markets by combining it with the Calvo pricing approach and introduced it into a new Keynesian DSGE model.

To have models that worked well with the data and could be used for policy simulations, quite complicated new Keynesian models were developed with several features. Seminal papers were published by Frank Smets and Rafael Wouters, and also by Lawrence J. Christiano, Martin Eichenbaum and Charles Evans. The common features of these models included:

The idea of sticky information found in Fischer's model was later developed by Gregory Mankiw and Ricardo Reis. This added a new feature to Fischer's model: there is a fixed probability that a firm or union can replan its wages or prices each period. Using quarterly data, they assumed a value of 25%: that is, each quarter 25% of randomly chosen firms/unions can plan a trajectory of current and future prices based on current information. Thus if we consider the current period, 25% of prices will be based on the latest information available, and the rest on information that was available when those setting them were last able to replan their price trajectory. Mankiw and Reis found that the model of sticky information provided a good way of explaining inflation persistence.
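A minimal sketch of the resulting distribution of information vintages (the 25% replanning probability is the value Mankiw and Reis assumed; the calculation itself is only an illustration of the timing, not their full model):

```python
replan_prob = 0.25   # share of firms/unions able to replan each quarter

def info_vintage_share(k, lam=replan_prob):
    """Fraction of current prices based on plans made k quarters ago (and not revised since)."""
    return lam * (1 - lam) ** k

shares = [info_vintage_share(k) for k in range(8)]
print([round(s, 3) for s in shares])   # 25% of prices use the latest information
print(round(sum(shares), 3))           # about 90% of prices rest on plans under two years old
```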

Sticky information models do not have nominal rigidity: firms or unions are free to choose different prices or wages for each period. It is the information that is sticky, not the prices. Thus when a firm gets lucky and can re-plan its current and future prices, it will choose a trajectory of what it believes will be the optimal prices now and in the future. In general, this will involve setting a different price every period covered by the plan. This is at odds with the empirical evidence on prices. There are now numerous studies of price rigidity in different countries: the United States, the Eurozone, the United Kingdom and others. These studies all show that whilst there are some sectors where prices change frequently, there are also other sectors where prices remain fixed over time. The lack of sticky prices in the sticky information model is inconsistent with the behavior of prices in most of the economy. This has led to attempts to formulate a "dual stickiness" model that combines sticky information with sticky prices.

The 2010s saw the development of models incorporating household heterogeneity into the standard New Keynesian framework, commonly referred to as Heterogeneous Agent New Keynesian ('HANK') models. In addition to sticky prices, a typical HANK model features uninsurable idiosyncratic labor income risk, which gives rise to a non-degenerate wealth distribution. The earliest models with these two features include Oh and Reis (2012), McKay and Reis (2016) and Guerrieri and Lorenzoni (2017).

The name "HANK model" was coined by Greg Kaplan, Benjamin Moll and Gianluca Violante in a 2018 paper that additionally models households as accumulating two types of assets, one liquid the other illiquid. This translates into rich heterogeneity in portfolio composition across households. In particular, the model fits empirical evidence by featuring a large share of households holding little liquid wealth: the ’hand-to-mouth’ households. Consistent with empirical evidence, approximately two-thirds of these households hold non-trivial amounts of illiquid wealth, despite holding little liquid wealth. These households are known as wealthy hand-to-mouth households, a term introduced in a 2014 examine of fiscal stimulus policies by Kaplan and Violante.