Macroeconomic model


A macroeconomic model is an analytical tool designed to describe the operation of the economy of a country or a region. These models are usually designed to examine the comparative statics and dynamics of aggregate quantities such as the total amount of goods and services produced, total income earned, the level of employment of productive resources, and the level of prices.

Macroeconomic models may be logical, mathematical, and/or computational; the different types of macroeconomic models serve different purposes and have different advantages and disadvantages. Macroeconomic models may be used to clarify and illustrate basic theoretical principles; they may be used to test, compare, and quantify different macroeconomic theories; they may be used to produce "what if" scenarios (usually to predict the effects of changes in monetary, fiscal, or other macroeconomic policies); and they may be used to generate economic forecasts. Thus, macroeconomic models are widely used in academia in teaching and research, and are also widely used by international organizations, national governments, and larger corporations, as well as by economic consultants and think tanks.

Types


Simple textbook descriptions of the macroeconomy involving a small number of equations or diagrams are often called ‘models’. Examples include the IS-LM model and Mundell–Fleming model of Keynesian macroeconomics, and the Solow model of neoclassical growth theory. These models share several features. They are based on a few equations involving a few variables, which can often be explained with simple diagrams. Many of these models are static, but some are dynamic, describing the economy over many time periods. The variables that appear in these models often represent macroeconomic aggregates (such as GDP or total employment) rather than individual choice variables, and while the equations relating these variables are intended to describe economic decisions, they are not usually derived directly by aggregating models of individual choices. They are simple enough to be used as illustrations of theoretical points in introductory explanations of macroeconomic ideas; but their quantitative application to forecasting, testing, or policy evaluation is usually impossible without substantially augmenting the structure of the model.
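
For instance, the dynamics of the Solow model can be reduced to a single equation for capital per worker (a standard textbook formulation, stated here for illustration):

```latex
% Solow model: capital per worker k accumulates from saving s f(k) and is
% diluted by population growth n and depreciation \delta.
\dot{k} = s\,f(k) - (n + \delta)\,k ,
\qquad \text{steady state: } s\,f(k^{*}) = (n + \delta)\,k^{*}
```

Here s is the saving rate and f(k) is output per worker; the steady-state capital stock k* illustrates the kind of comparative-statics question such models are built to answer.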

In the 1940s and 1950s, as governments began accumulating national income and product accounting data, economists set out to construct quantitative models to describe the dynamics observed in the data. These models estimated the relations between different macroeconomic variables using mostly linear time series analysis. Like the simpler theoretical models, these empirical models described relations between aggregate quantities, but many addressed a much finer level of detail: for example, studying the relations between output, employment, investment, and other variables in many different industries. Thus, these models grew to include hundreds or thousands of equations describing the evolution of hundreds or thousands of prices and quantities over time, making computers essential for their solution. While the choice of which variables to include in each equation was partly guided by economic theory (for example, including past income as a determinant of consumption, as suggested by the theory of adaptive expectations), variable inclusion was mostly determined on purely empirical grounds.
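
A minimal sketch of this kind of estimation, in the spirit of those early linear time-series models (the equation, variable names, and synthetic data below are illustrative assumptions, not any particular historical model):

```python
import numpy as np

# Hypothetical consumption equation, estimated by ordinary least squares:
#   c_t = alpha + beta * y_{t-1} + e_t
# i.e. current consumption depends on past income, in the spirit of
# equations guided by the theory of adaptive expectations.

rng = np.random.default_rng(0)
T = 200
income = 100 + np.cumsum(rng.normal(0.5, 1.0, T))            # synthetic income series
cons = 10 + 0.8 * income[:-1] + rng.normal(0.0, 1.0, T - 1)  # consumption driven by lagged income

# Regress consumption on a constant and lagged income.
X = np.column_stack([np.ones(T - 1), income[:-1]])
coef, *_ = np.linalg.lstsq(X, cons, rcond=None)
print(f"estimated alpha = {coef[0]:.2f}, beta = {coef[1]:.2f}")
```

A large-scale empirical model of this vintage amounted to hundreds or thousands of equations of roughly this form, estimated jointly.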

Dutch economist Jan Tinbergen developed the first comprehensive national model, which he built for the Netherlands in 1936. He later applied the same modeling structure to the economies of the United States and the United Kingdom. The first global macroeconomic model, Wharton Econometric Forecasting Associates' LINK project, was initiated by Lawrence Klein. The model was cited in 1980 when Klein, like Tinbergen before him, won the Nobel Prize. Large-scale empirical models of this type, including the Wharton model, are still in use today, particularly for forecasting purposes.

Econometric studies in the first part of the 20th century showed a negative correlation between inflation and unemployment called the Phillips curve. Empirical macroeconomic forecasting models, being based on roughly the same data, had similar implications: they suggested that unemployment could be permanently lowered by permanently increasing inflation. However, in 1968, Milton Friedman and Edmund Phelps argued that this apparent tradeoff was illusory. They claimed that the historical relation between inflation and unemployment was due to the fact that past inflationary episodes had been largely unexpected. They argued that if monetary authorities permanently raised the inflation rate, workers and firms would eventually come to understand this, at which point the economy would return to its previous, higher level of unemployment, but now with higher inflation too. The stagflation of the 1970s appeared to bear out their prediction.
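
The Friedman–Phelps argument is often summarized with the expectations-augmented Phillips curve (a standard formulation, given here for illustration):

```latex
% Inflation \pi_t depends on expected inflation \pi^{e}_{t} and on the
% gap between unemployment u_t and its natural rate u^{n}.
\pi_t = \pi^{e}_{t} - \beta\,(u_t - u^{n}), \qquad \beta > 0
```

The tradeoff operates only while expected inflation lags behind actual inflation; once expectations catch up, unemployment returns to its natural rate at any inflation rate.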

In 1976, Robert Lucas, Jr., published an influential paper arguing that the failure of the Phillips curve in the 1970s was just one example of a general problem with empirical forecasting models. He pointed out that such models are derived from observed relationships between various macroeconomic quantities over time, and that these relations differ depending on what macroeconomic policy regime is in place. In the context of the Phillips curve, this means that the relation between inflation and unemployment observed in an economy where inflation has usually been low in the past would differ from the relation observed in an economy where inflation has been high. Furthermore, this means one cannot predict the effects of a new policy regime using an empirical forecasting model based on data from previous periods when that policy regime was not in place. Lucas argued that economists would remain unable to predict the effects of new policies unless they built models based on economic fundamentals (like preferences, technology, and budget constraints) that should be unaffected by policy changes.
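
A stylized illustration of the critique (our own example, not Lucas's): suppose income follows a simple autoregressive process and households consume a fixed fraction of the income they expect next period.

```latex
% Income:   y_t = \rho\, y_{t-1} + \varepsilon_t
% Behavior: c_t = k\, E_t[y_{t+1}] = k\rho\, y_t
% An econometrician estimating c_t = \theta\, y_t recovers \theta = k\rho.
% A policy that alters the persistence \rho leaves the "deep" parameter k
% unchanged but shifts the estimated \theta, so forecasts built on the
% old \theta break down under the new regime.
c_t = \theta\, y_t, \qquad \theta = k\rho
```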

Partly as a response to the Lucas critique, economists of the 1980s and 1990s began to construct microfounded macroeconomic models based on rational choice, which have come to be called dynamic stochastic general equilibrium (DSGE) models. These models begin by specifying the set of agents active in the economy, such as households, firms, and governments in one or more countries, as well as the preferences, technology, and budget constraint of each one. Each agent is assumed to make an optimal choice, taking into account prices and the strategies of other agents, both in the current period and in the future. Summing up the decisions of the different types of agents, it is possible to find the prices that equate supply with demand in every market. Thus these models embody a type of equilibrium self-consistency: agents choose optimally given the prices, while prices must be consistent with agents’ supplies and demands.
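
A sketch of the kind of microfoundation involved (the textbook household problem below is assumed for illustration, not taken from any particular DSGE model):

```latex
% A household chooses consumption c_t to maximize expected discounted
% utility subject to a budget constraint on assets a_t:
%   \max E_0 \sum_{t=0}^{\infty} \beta^{t} u(c_t)
%   \text{s.t. } a_{t+1} = (1 + r_t)\,a_t + w_t - c_t
% The first-order (Euler) condition that links consumption across
% periods, and that equilibrium prices r_t must respect, is
u'(c_t) = \beta\, E_t\!\left[(1 + r_{t+1})\, u'(c_{t+1})\right]
```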

DSGE models often assume that all agents of a given type are identical (i.e. there is a ‘representative household’ and a ‘representative firm’) and can perform perfect calculations that forecast the future correctly on average (which is called rational expectations). However, these are only simplifying assumptions, and are not essential for the DSGE methodology; many DSGE studies aim for greater realism by considering heterogeneous agents or various types of adaptive expectations. Compared with empirical forecasting models, DSGE models typically have fewer variables and equations, mainly because DSGE models are harder to solve, even with the help of computers. Simple theoretical DSGE models, involving only a few variables, have been used to analyze the forces that drive business cycles; this empirical work has given rise to two main competing frameworks called the real business cycle model and the New Keynesian DSGE model. More elaborate DSGE models are used to predict the effects of changes in economic policy and evaluate their impact on social welfare. However, economic forecasting is still largely based on more traditional empirical models, which are still widely believed to achieve greater accuracy in predicting the impact of economic disturbances over time.
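
Formally, rational expectations is usually stated as the requirement that agents' subjective forecasts coincide with the model's own conditional expectations (standard notation, given here for illustration):

```latex
% The subjective forecast of a variable x_{t+1} equals its mathematical
% expectation conditional on the information set \Omega_t at time t,
% so forecast errors are unpredictable on average.
x^{e}_{t+1} = E\left[\,x_{t+1} \mid \Omega_t\,\right]
```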

A closely related methodology that pre-dates DSGE modeling is computable general equilibrium (CGE) modeling. Like DSGE models, CGE models are often microfounded on assumptions about preferences, technology, and budget constraints. However, CGE models focus mostly on long-run relationships, making them most suited to studying the long-run impact of permanent policies like the tax system or the openness of the economy to international trade. DSGE models instead emphasize the dynamics of the economy over time (often at a quarterly frequency), making them suited for studying business cycles and the cyclical effects of monetary and fiscal policy.

Another modeling methodology that has developed at the same time as DSGE models is agent-based computational economics (ACE), which is a variety of agent-based modeling. Like the DSGE methodology, ACE seeks to break down aggregate macroeconomic relationships into the microeconomic decisions of individual agents. ACE models also begin by defining the set of agents that make up the economy, and specify the types of interactions individual agents can have with each other or with the market as a whole. Instead of defining the preferences of those agents, ACE models often jump directly to specifying their strategies. Or sometimes, preferences are specified, together with an initial strategy and a learning rule whereby the strategy is adjusted according to its past success. Given these strategies, the interaction of large numbers of individual agents (who may be very heterogeneous) can be simulated on a computer, and then the aggregate, macroeconomic relationships that arise from those individual actions can be studied.
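
A minimal sketch of an ACE-style simulation (the Agent class, the price-averaging market, and the learning rule below are all illustrative assumptions, not a reference implementation):

```python
import random

class Agent:
    """An agent endowed with a price-setting strategy rather than explicit preferences."""

    def __init__(self):
        self.price = random.uniform(0.5, 1.5)  # initial strategy

    def learn(self, market_price):
        # Simple learning rule: nudge the posted price toward the market
        # outcome, treating proximity to it as past success.
        self.price += 0.1 * (market_price - self.price)

def simulate(n_agents=100, n_periods=50):
    agents = [Agent() for _ in range(n_agents)]
    history = []
    for _ in range(n_periods):
        # The aggregate outcome emerges from individual strategies:
        # here the market price is simply the average posted price.
        market_price = sum(a.price for a in agents) / n_agents
        for a in agents:
            a.learn(market_price)
        history.append(market_price)
    return history

if __name__ == "__main__":
    path = simulate()
    print(f"market price: {path[0]:.3f} initially, {path[-1]:.3f} after learning")
```

The macroeconomic object of interest is the simulated path of the aggregate (here, the market price), studied as it emerges from many individual decisions.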

DSGE and ACE models have different advantages and disadvantages due to their different underlying structures. DSGE models may exaggerate individual rationality and foresight, and understate the importance of heterogeneity, since the rational-expectations, representative-agent case remains the simplest and thus the most common type of DSGE model to solve. Also, unlike ACE models, it may be difficult to analyze local interactions between individual agents in DSGE models, which instead focus mostly on the way agents interact through aggregate prices. On the other hand, ACE models may exaggerate errors in individual decision-making, since the strategies assumed in ACE models may be very far from optimal choices unless the modeler is very careful. A related issue is that ACE models which start from strategies instead of preferences may remain vulnerable to the Lucas critique: a changed policy regime should generally give rise to changed strategies.