
Chapter 2

Introduction to the Techniques

2.1 Introduction

In this second introductory chapter we will introduce the main analytical technique which we will use to study convergence under learning dynamics when the economy is subject to stochastic shocks. We will do this in the context of a simple economic model: the cobweb market model introduced in Chapter 1. Our presentation in this chapter will be heuristic and the techniques will be rigorously developed subsequently. Later chapters will also show how to apply these tools to study the dynamics of learning in numerous macroeconomic models.

In the cobweb model there is a unique rational expectations equilibrium (REE). Even if there is a unique REE, convergence under learning is far from obvious since the situation is not analogous to the standard econometric setup. Because in general the economic variables depend on forecasts, they depend on the agents’ estimates.1 Thus the agents are estimating the parameters of a system which in turn depend on the estimates. It is thus possible, and we will see examples, that if agents’ estimates deviate from the parameter values of an REE, the actual law of motion under these perceptions will be best described by parameters which are even farther from the REE. In consequence, the estimates will be driven farther and farther from the REE values over time, so that the REE is unstable.

We will develop conditions, called expectational stability conditions, which govern whether or not a given REE is stable. When there are multiple REE,

1This is the sense in which such systems are “self-referential.”


these conditions must be interpreted as local stability conditions since then the evolution of the system, and its possible rest points, will depend on the initial perceptions as well as other factors. In fact, it is within models with multiple equilibria that the study of adaptive learning is most fruitful since it provides guidance on what can happen in such models: can the economy become stuck in inefficient steady states? Can it converge to cycles or random fluctuations, even when a deterministic steady state exists? Can the economy begin to track explosive bubble paths? Once we have the technical apparatus in hand, we will consider all of these issues in later parts of the book.

Besides focusing on a version of bounded rationality which makes a minimal deviation from RE, we will also focus, through most of the book, on the asymptotic issue of whether adaptive learning converges to a particular REE in the limit. There are other questions of considerable interest: how fast does convergence take place? What are the properties of the transitional paths en route to the REE? If the economy undergoes frequent structural shifts, will the estimates still converge and how should adaptive agents allow for this? In the last part of the book we will take up these issues. However, we begin with what is clearly the central question: if agents estimate a statistical model which is a correct specification of an REE, under what circumstances will the estimates converge to that REE?

2.2 The Cobweb Model

In this book we will address this issue of stability in the context of a wide variety of stochastic economic models: linear and nonlinear, univariate and multivariate. These will cover a wide range of the macroeconomic models which are currently employed or which have been employed over the last 25 years. In particular, we will be able to study in detail the issue of what solutions emerge under adaptive learning when multiple equilibria are present. However, to present the central techniques it is most convenient to consider a linear univariate model with a unique REE: the cobweb model of supply and demand in an isolated market. This is in fact the model investigated by Muth (1961) in his classic formulation of rational expectations. As noted in the previous chapter, its properties under least squares learning were investigated by Bray and Savin (1986) and Fourgeaud, Gourieroux, and Pradel (1986).

The structural model consists of demand and supply equations:

d_t = m_I − m_p p_t + v_{1t} ,

s_t = r_I + r_p p_t^e + r_w′ w_{t−1} + v_{2t} ,


where m_p , r_p > 0 and v_{1t} and v_{2t} are unobserved white noise shocks. The formulation here generalizes the version given in Chapter 1 by permitting supply to depend on a vector of observable shocks w_{t−1}. Bray and Savin (1986) make the assumption that w_t is an iid process. This is much stronger than necessary. One can, for example, permit w_t to follow a stationary exogenous VAR (vector autoregression), driven by a multivariate white noise shock with bounded moments. For convenience we assume that Ew_t = 0, and we denote by Ew_t w_t′ the unconditional second moment matrix.

Assuming market clearing, s_t = d_t , yields the reduced form

p_t = µ + α p_t^e + δ′w_{t−1} + η_t ,    (2.1)

where µ = (m_I − r_I)/m_p , δ = −m_p^{−1} r_w , and α = −r_p/m_p . Note that α < 0. Here η_t = (v_{1t} − v_{2t})/m_p , so that we can write η_t ∼ iid(0, σ_η²). Under rational expectations, p_t^e = E_{t−1}p_t , where E_{t−1}p_t denotes the expectation of p_t conditional on information available at time t − 1. Operating with E_{t−1} on both sides of equation (2.1) and solving for E_{t−1}p_t , we obtain

E_{t−1}p_t = (1 − α)^{−1}µ + (1 − α)^{−1}δ′w_{t−1}.

Since also p_t − E_{t−1}p_t = η_t , it follows that there is a unique rational expectations equilibrium given by

p_t = ā + b̄′w_{t−1} + η_t ,

where ā = (1 − α)^{−1}µ and b̄ = (1 − α)^{−1}δ.

We remark that the reason why this model has a unique REE is that pt does not depend on expected future prices.
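As a concrete illustration of these formulas, the reduced-form and REE coefficients can be computed directly from the structural parameters. The sketch below uses arbitrary illustrative values for (m_I, m_p, r_I, r_p, r_w), not numbers from the text.

```python
# Illustrative REE computation for the cobweb model.  All parameter values
# here are arbitrary choices for the sketch, not taken from the text.
import numpy as np

m_I, m_p = 10.0, 2.0          # demand intercept and slope (m_p > 0)
r_I, r_p = 1.0, 1.0           # supply intercept and slope (r_p > 0)
r_w = np.array([0.5, -0.3])   # supply loading on the observables w_{t-1}

# Reduced-form parameters of equation (2.1)
mu = (m_I - r_I) / m_p
alpha = -r_p / m_p            # negative, as noted in the text
delta = -r_w / m_p

# Unique REE of (2.1): p_t = a_bar + b_bar' w_{t-1} + eta_t
a_bar = mu / (1.0 - alpha)
b_bar = delta / (1.0 - alpha)
print(alpha, a_bar, b_bar)
```

With these numbers α = −0.5, so the stability condition α < 1 discussed below holds automatically.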

2.3 Econometric Learning

Although the REE is unique, we can still ask whether it is learnable in the following sense. Suppose that firms believe that prices follow the process

p_t = a + b′w_{t−1} + η_t ,

(2.2)

corresponding to the REE, but that a and b are unknown to them. There are different possible explanations for this. Firms may be unable to calculate the REE,


although they know the form of the economic structure, because the structural parameters are unknown. Alternatively, the form of the structure may be unknown, but firms may reasonably assume that pt depends linearly on the vector of exogenous observable shocks. In any event, we assume that equation (2.2) is the perceived law of motion of the firms and that they attempt to estimate a and b. This is our key bounded rationality assumption: we back away from the rational expectations assumption, replacing it with the assumption that, in forecasting prices, firms act like econometricians.2

Under this assumption we have the following model of the evolution of the economy. Suppose that firms have data on the economy from periods i = 0, . . . , t − 1. Thus the time-(t − 1) information set is {p_i , w_i}_{i=0}^{t−1}. We suppose that firms estimate a and b by a least squares regression of p_i on w_{i−1} and an intercept. Their estimates will be updated over time as more information is collected. Letting (a_{t−1}, b_{t−1}) denote the estimates through time t − 1, their forecasts at t − 1 are given by

p_t^e = a_{t−1} + b′_{t−1}w_{t−1}.    (2.3)

The standard least squares formula gives the equations

(a_t , b_t′)′ = ( Σ_{i=1}^{t} z_{i−1} z′_{i−1} )^{−1} ( Σ_{i=1}^{t} z_{i−1} p_i ),    (2.4)

where z_i′ = (1, w_i′).

We now have a fully specified dynamic system defined by the equations (2.1), (2.3), and (2.4): at time t − 1, expectations are formed according to equations (2.3) and (2.4). Given w_{t−1} and the random draw for η_t , the time-t price is determined by equation (2.1). Then parameters can be updated. Adding (p_t , w_{t−1}) to the data set, revised estimates a_t and b_t are computed. Given the random draw

for w_t , forecasts p^e_{t+1} are made, which together with the new shock η_{t+1} determine p_{t+1}, and this process is continued over time. The question of interest is whether a_t → ā and b_t → b̄ as t → ∞.

In the cobweb model the key parameter satisfies α < 0, but there are other structural models with the same reduced form, so we can pose the problem more

2As indicated in Chapter 1, in making this assumption we are modifying our view of firms to make them behave more like economists who believe the economy is in an REE and use data to estimate the parameters of the REE law of motion.


generally, allowing α to be unrestricted. This is illustrated by the following example.

Example: Lucas Aggregate Supply Model. In Chapter 1 we presented the following model, due to Lucas (1973), consisting of aggregate supply function

q_t = q̄ + π(p_t − p_t^e) + ζ_t ,

where π > 0, and aggregate demand function

mt + vt = pt + qt ,

where v_t is a velocity shock. We now assume that velocity depends in part on exogenous observables w_{t−1} so that

v_t = µ + γ′w_{t−1} + ξ_t ,

and that money supply follows the policy rule

m_t = m̄ + u_t + ρ′w_{t−1}.

Here u_t , ξ_t , and ζ_t are white noise shocks. The reduced form is

p_t = (1 + π)^{−1}(m̄ + µ − q̄) + π(1 + π)^{−1}p_t^e + (1 + π)^{−1}(ρ + γ)′w_{t−1} + (1 + π)^{−1}(u_t + ξ_t − ζ_t ).

This equation is precisely of the form (2.1) with α = π(1 + π)^{−1} and η_t = (1 + π)^{−1}(u_t + ξ_t − ζ_t ). Note that in this example, 0 < α < 1.

The answer to the question of whether, under least squares learning, the system converges to the unique REE is given by the following result.

Theorem 2.1. Consider the dynamic system (2.1), (2.3), and (2.4). If α < 1, then a_t → ā and b_t → b̄ with probability 1. If α > 1, then convergence occurs with probability 0.

Thus the REE is stable under least squares learning for both of our examples. An example of an unstable REE would be the cobweb model in which the demand curve is upward sloping and steeper than the supply curve, i.e., m_p < 0 with |m_p| < r_p .
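The stable case of the theorem can be illustrated numerically: firms re-run the least squares regression (2.4) each period, the realized price obeys the reduced form (2.1), and the estimates approach (ā, b̄). The parameter values below are illustrative choices, not from the text.

```python
# Simulation sketch of least squares learning in the reduced form (2.1),
# p_t = mu + alpha*p_t^e + delta'w_{t-1} + eta_t, with forecasts from an OLS
# regression of p_i on (1, w_{i-1}).  All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, alpha, delta = 2.0, -0.5, np.array([0.8])   # alpha < 1: the stable case
a_bar, b_bar = mu / (1 - alpha), delta / (1 - alpha)   # REE values

T = 2000
w = rng.normal(size=(T, 1))       # iid observable shocks
eta = 0.1 * rng.normal(size=T)    # white noise shock

phi = np.zeros(2)                 # current estimates (a, b)
Z, P = [], []                     # accumulated regressors and prices
for t in range(T):
    z = np.array([1.0, w[t - 1, 0]]) if t > 0 else np.array([1.0, 0.0])
    p_e = phi @ z                                    # forecast p_t^e
    p = mu + alpha * p_e + delta @ z[1:] + eta[t]    # actual price, eq. (2.1)
    Z.append(z); P.append(p)
    # re-estimate (a, b) by OLS on all data collected so far
    phi, *_ = np.linalg.lstsq(np.array(Z), np.array(P), rcond=None)

print(phi, (a_bar, b_bar))        # estimates approach the REE values
```

Re-estimating from scratch each period is the conceptually transparent version; the recursive least squares form introduced in Section 2.6 produces the same sequence of estimates at far lower cost.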


Theorem 2.1 is an extremely strong global result, both in the positive case and in the negative case. The positive result was proved in Bray and Savin (1986) using direct arguments based on martingale convergence theorems. The negative result, which should be interpreted as stating that when α > 1, (at , bt ) converges with probability 0 to any point (a, b ), can be shown using the techniques for stochastic recursive algorithms, in particular results from Ljung (1977), as was demonstrated by Marcet and Sargent (1989c). Because in the coming chapters we will develop general techniques suitable for application to a wide range of economic applications, we will not present the proof of Bray and Savin, but instead provide a heuristic development of the techniques we will use throughout the book.

2.4 Expectational Stability

The condition α < 1 can be interpreted in terms of a general stability principle, known as “expectational stability” or “E-stability.” Since, as we will see, this principle works quite generally to provide the condition for the stability of an REE under adaptive learning, we introduce the concept now.

The basic required concept is the map from the perceived law of motion (PLM) to the actual law of motion (ALM). The E-stability principle stated in its most comprehensive form is that the mapping from the PLM to the ALM governs the stability of equilibria under learning. More specifically, E-stability conditions obtained from this mapping provide the conditions for asymptotic stability of an REE under least squares learning. We focus here on obtaining this condition for the cobweb model.

We begin with the assumption that agents have a PLM which they use to make forecasts of the variables of interest. Usually we take the form of the PLM to correspond to the REE of interest. Thus in the current case we take the PLM to be of the form (2.2), p_t = a + b′w_{t−1} + η_t . For a = ā and b = b̄, the PLM would be the REE, but we allow for the possibility that agents have “nonrational” expectations. For any given values of a and b, the appropriate time-(t − 1) forecast of p_t is given by

p_t^e = a + b′w_{t−1}.

(2.5)

Inserting equation (2.5) into equation (2.1), one can solve for the actual law of motion, or ALM, implied by the PLM:

p_t = (µ + αa) + (δ + αb)′w_{t−1} + η_t .

(2.6)


This implicitly defines the mapping from the PLM to the ALM,

T(a, b) = (µ + αa, δ + αb).    (2.7)

The interpretation of the ALM is that it describes the stochastic process followed by the economy if forecasts are made under the fixed rule given by the PLM.

We can now define E-stability in the form appropriate for determining the stability of the REE under least squares learning. Note first that the unique REE for our model is the unique fixed point of the T -map (2.7). Consider the differential equation

d(a, b)/dτ = T(a, b) − (a, b),    (2.8)

where τ denotes “notional” or “artificial” time. We say that the REE is expectationally stable, or E-stable, if the REE is locally asymptotically stable under equation (2.8). Intuitively, E-stability determines the stability of the REE under a stylized learning rule in which the PLM parameters a and b are adjusted slowly in the direction of the implied ALM parameters. The REE (ā, b̄) is E-stable if small displacements from (ā, b̄) are returned to (ā, b̄) under this rule.

Expectational stability in this form was introduced in Evans (1989) and Evans and Honkapohja (1992). The closely related notion of iterative expectational stability, which appeared earlier in the literature, will be discussed below.

To determine E-stability in our example, combine equations (2.7) and (2.8) and write the differential equation component by component to obtain

da/dτ = µ + (α − 1)a,
db_i/dτ = δ_i + (α − 1)b_i , for i = 1, . . . , n,

where n is the dimension of w. It follows that the REE is E-stable if and only if α < 1. Note that this is precisely the condition obtained by Bray and Savin for convergence of least squares learning.
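The differential equation (2.8) can be checked numerically by Euler integration of the T-map dynamics. The sketch below uses illustrative parameter values and contrasts a stable case (α < 1) with an unstable one (α > 1).

```python
# Heuristic check of the E-stability condition: Euler-integrate the ODE (2.8),
# d(a,b)/dtau = T(a,b) - (a,b), with the T-map of (2.7).  The parameter
# values (mu, delta, starting point) are illustrative choices.
import numpy as np

def T(phi, mu, alpha, delta):
    # PLM -> ALM map of equation (2.7): (a, b) -> (mu + alpha*a, delta + alpha*b)
    a, b = phi[0], phi[1:]
    return np.concatenate(([mu + alpha * a], delta + alpha * b))

def integrate(alpha, mu=1.0, delta=np.array([0.5]), h=0.01, steps=5000):
    phi = np.array([5.0, -5.0])          # start away from the fixed point
    for _ in range(steps):
        phi = phi + h * (T(phi, mu, alpha, delta) - phi)   # Euler step of (2.8)
    return phi

# alpha < 1: converges to the fixed point (mu/(1-alpha), delta/(1-alpha))
print(integrate(alpha=-0.5))   # -> approx [0.6667, 0.3333]
# alpha > 1: the same ODE drives (a, b) away from the fixed point
print(integrate(alpha=1.5))
```

Since each component of (2.8) is linear with coefficient (α − 1), the trajectories decay toward the fixed point exactly when α < 1, matching the Bray–Savin condition.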

The connection between E-stability and the convergence of least squares learning turns out to be quite general, applying in a very wide range of models. This is a great advantage since E-stability conditions are often easy to work out, while the technical analysis of the convergence of econometric learning is substantially more involved.


2.5 Rational vs. Reasonable Learning

Before discussing the analysis of econometric learning, i.e., the justification of Bray and Savin’s result, we briefly note the sense in which we are assuming bounded rationality. Recall that agents assume that data is being generated by the process p_t = a + b′w_{t−1} + η_t , but that they do not know the parameters a and b. At time t they have estimates (a_t , b_t ) which they use to make their forecasts, so that p_t^e is given by equation (2.3). It follows that under least squares learning, the true process followed by p_t is given by

p_t = µ + α(a_{t−1} + b′_{t−1}w_{t−1}) + δ′w_{t−1} + η_t ,

or

p_t = (µ + αa_{t−1}) + (δ + αb_{t−1})′w_{t−1} + η_t ,

so that the “intercept” and the coefficient on w_{t−1} are not constant but are evolving over time. Agents are thus estimating an econometrically misspecified model, and this is the sense in which they are not fully rational.

However, note that least squares learning may be (in Bray’s words) “reasonable” even if it is not fully rational. The first and most important point is

that if α < 1, then (a_t , b_t ) → (ā, b̄) as t → ∞. Thus, asymptotically the misspecification is vanishingly small as the coefficients of the process cease to vary over time. Second, the misspecification may not even be statistically detectable during the transition. This will depend on the details: the initial deviation from (ā, b̄), the value of Var(η_t ), and the size of α. Bray and Savin (1986) investigate this issue and show that in many cases the temporary misspecification during the transition to the REE would not be detectable by standard good econometric practice.

2.6 Recursive Least Squares

We now return to the problem of showing convergence under least squares learning. In the remainder of this chapter we will outline the techniques which we will be using throughout this book to establish whether convergence to an REE takes place. The crucial first step is to reformulate the dynamic system as a stochastic recursive algorithm.

We begin by noting that the standard least squares regression formula has a recursive formulation. In fitting the equation y_i = c′x_i + e_i using data i = 1, . . . , T on the k × 1 independent vector x_i and the dependent variable y_i , the


value of the k × 1 coefficient vector c which minimizes Σ_{i=1}^{T} e_i² is given by the least squares formula³

c = ( Σ_{i=1}^{T} x_i x_i′ )^{−1} Σ_{i=1}^{T} x_i y_i .

c can instead be computed using the recursive least squares (RLS) formulas

c_t = c_{t−1} + t^{−1}R_t^{−1}x_t (y_t − x_t′c_{t−1}),    (2.9)
R_t = R_{t−1} + t^{−1}(x_t x_t′ − R_{t−1}).

Here c_t and R_t denote the coefficient vector and the moment matrix for x_t using data i = 1, . . . , t. To generate the least squares values, the initial value for the recursion must be set appropriately.⁴ With these initial values, equation (2.9) generates the usual least squares formula for c_t , the least squares coefficient vector using data i = 1, . . . , t, and c above is given by c = c_T . This can be verified by induction.⁵ Note that (y_t − x_t′c_{t−1}) is the most recent forecast error at t.
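The induction argument can be confirmed numerically: initializing from the first k observations and running the recursion (2.9) reproduces the batch OLS estimate exactly, up to rounding. The data-generating values below are illustrative.

```python
# Sketch verifying that the RLS recursion (2.9) reproduces batch OLS,
# c_T = (X'X)^{-1} X'y.  The regression coefficients and noise scale used to
# generate the data are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
T_obs, k = 200, 3
X = rng.normal(size=(T_obs, k))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=T_obs)

# initialize from the first k observations (as in footnote 4)
Xk, yk = X[:k], y[:k]
c = np.linalg.solve(Xk.T @ Xk, Xk.T @ yk)   # c_k, OLS on the first k points
R = (Xk.T @ Xk) / k                         # R_k = k^{-1} X_k' X_k

# RLS updates (2.9) for t = k+1, ..., T: R_t first, then c_t using R_t
for t in range(k + 1, T_obs + 1):
    x_t, y_t = X[t - 1], y[t - 1]
    R = R + (np.outer(x_t, x_t) - R) / t
    c = c + np.linalg.solve(R, x_t) * (y_t - x_t @ c) / t

c_batch = np.linalg.solve(X.T @ X, X.T @ y)
print(np.max(np.abs(c - c_batch)))          # agreement up to rounding error
```

Note the ordering inside the loop: (2.9) uses R_t , not R_{t−1}, in the coefficient update, so the moment matrix must be updated before the coefficients.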

We now apply the RLS formulas to our learning problem. Our agents are running a least squares regression of p_i on z_{i−1}, where z_i′ = (1, w_i′). For convenience, write

φ_t = (a_t , b_t′)′

for the vector of coefficients including the intercept. Applying the RLS formulas, we obtain

φ_t = φ_{t−1} + t^{−1}R_t^{−1}z_{t−1}( p_t − φ′_{t−1}z_{t−1} ),
R_t = R_{t−1} + t^{−1}( z_{t−1}z′_{t−1} − R_{t−1} ).

Since p_t is given by equations (2.1) and (2.3), we have

p_t = (µ + αa_{t−1}) + (δ + αb_{t−1})′w_{t−1} + η_t

 

3Letting y denote the T × 1 column vector with ith component yi and X denote the T × k

matrix given by X = (x1, . . . , xT ) , the formula can be equivalently written in the better known

form c = (X X)1X y.

 

 

 

 

 

⁴Assuming X_k = (x_1, . . . , x_k )′ is of full rank and letting y^k denote y^k = (y_1, . . . , y_k )′, the initial value c_k is given by c_k = (X_k′X_k )^{−1}X_k′y^k = X_k^{−1}y^k and the initial value R_k is given by R_k = k^{−1}X_k′X_k = k^{−1} Σ_{i=1}^{k} x_i x_i′.

⁵Using the formulas R_t = t^{−1} Σ_{i=1}^{t} x_i x_i′ and c_t = R_t^{−1}( t^{−1} Σ_{i=1}^{t} x_i y_i ), the recursions (2.9) can be seen to lead to the least squares formula.


or

 

p_t = T(φ_{t−1})′z_{t−1} + η_t ,

(2.10)

 

where T(φ) ≡ T(a, b) is given by equation (2.7). Note that p_t is determined by the ALM generated by the perceptions φ_{t−1} = (a_{t−1}, b′_{t−1})′. Combining equations,

we arrive at the stochastic recursive system

φ_t = φ_{t−1} + t^{−1}R_t^{−1}z_{t−1}( z′_{t−1}(T(φ_{t−1}) − φ_{t−1}) + η_t ),    (2.11)
R_t = R_{t−1} + t^{−1}( z_{t−1}z′_{t−1} − R_{t−1} ).    (2.12)
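The system (2.11)–(2.12) can be simulated directly in the (φ, R) form. The sketch below uses illustrative parameter values with 0 < α < 1, as in the Lucas example, so φ_t should settle near the fixed point of the T-map; the identity initialization of R is a convenience, not part of the text's setup.

```python
# Simulation sketch of the stochastic recursive system (2.11)-(2.12).
# Parameter values (mu, alpha, delta, noise scale) are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
mu, alpha, delta = 1.0, 0.3, np.array([0.5])    # 0 < alpha < 1: stable case

def T_map(phi):
    # PLM -> ALM mapping of equation (2.7)
    return np.concatenate(([mu + alpha * phi[0]], delta + alpha * phi[1:]))

phi = np.zeros(2)          # estimates phi_t = (a_t, b_t')'
R = np.eye(2)              # moment-matrix estimate (identity start, for simplicity)
w_prev = rng.normal()
for t in range(2, 30001):  # gain 1/t; starting at t = 2 keeps R nonsingular
    z = np.array([1.0, w_prev])                  # z_{t-1} = (1, w_{t-1}')'
    eta = 0.1 * rng.normal()
    R = R + (np.outer(z, z) - R) / t                                        # (2.12)
    phi = phi + np.linalg.solve(R, z) * (z @ (T_map(phi) - phi) + eta) / t  # (2.11)
    w_prev = rng.normal()

phi_bar = np.array([mu / (1 - alpha), delta[0] / (1 - alpha)])
print(phi, phi_bar)        # phi_t should be close to the REE fixed point
```

The driving term in (2.11) is exactly the forecast error p_t − φ′_{t−1}z_{t−1}, rewritten using (2.10) as z′_{t−1}(T(φ_{t−1}) − φ_{t−1}) + η_t , which is the form the stochastic approximation analysis exploits.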

We want to know whether equations (2.11)–(2.12) converge as t → ∞. Let φ̄′ = (ā, b̄′). Our claim, following Bray and Savin, is that if α < 1, then φ_t → φ̄ with probability 1. Since T(φ̄) = φ̄, it also follows from equation (2.10) that the price process converges to the REE.

To show convergence formally requires results from the stochastic approximation literature.

2.7 Convergence of Stochastic Recursive Algorithms

There is a substantial literature in statistics and engineering which concerns itself precisely with the convergence of stochastic recursive algorithms such as equations (2.11)–(2.12). (This method is also called stochastic approximation.) Marcet and Sargent (1989c) showed how this technique, in particular the results of Ljung (1977), could be applied in economics to the analysis of adaptive learning. In Chapter 6 we will provide the technical details for this tool, and in this section we provide the central technique.

We consider a stochastic recursive algorithm (SRA) of the form

θ_t = θ_{t−1} + γ_t Q(t, θ_{t−1}, X_t ),

(2.13)

where θ_t is a vector of parameter estimates, X_t is the state vector, and γ_t is a deterministic sequence of “gains.” The function Q expresses the way in which the estimate θ_{t−1} is revised in line with the last period’s observations. In our example, θ_{t−1} will include all components of φ_{t−1} and R_t , X_t will include the effects of z_{t−1} and η_t , and γ_t = t^{−1}. In the following section we give the details of how equations (2.11)–(2.12) can be put into the form (2.13). Although,
