## 1 Introduction

The model considered in this paper is a GARCH(1,1) process:

$$
y_t = \sigma_t \varepsilon_t \qquad \text{(Return Process)}
$$

$$
\sigma_t^2 = \omega + \alpha y_{t-1}^2 + \beta \sigma_{t-1}^2 \qquad \text{(Volatility Process)}
$$

where $\{\varepsilon_t\}$ is a sequence of independent, identically distributed (i.i.d.) random variables such that $E\varepsilon_t = 0$ and $E\varepsilon_t^2 = 1$.
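As a concrete reference point, the two recursions can be simulated directly. The sketch below assumes standard normal innovations and illustrative fixed parameter values $\omega = 0.1$, $\alpha = 0.1$, $\beta = 0.85$ (not taken from the paper, and without the sample-size-dependent coefficients introduced next):

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.85, seed=0):
    """Simulate a GARCH(1,1) path: y_t = sigma_t * eps_t,
    sigma_t^2 = omega + alpha * y_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)               # i.i.d. innovations, mean 0, variance 1
    sigma2 = np.empty(n)
    y = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # start at the stationary variance
    y[0] = np.sqrt(sigma2[0]) * eps[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
        y[t] = np.sqrt(sigma2[t]) * eps[t]
    return y, sigma2

y, sigma2 = simulate_garch11(10_000)
# With these values the sample variance of the returns should be close to
# the stationary variance omega / (1 - alpha - beta) = 2.
```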

Unlike the conventional GARCH(1,1) process, the process considered in this paper is a mildly integrated GARCH process whose key parameters, $\alpha$ and $\beta$, change with the sample size $n$, viz. $\alpha = \alpha_n$ and $\beta = \beta_n$.

The limiting process of this GARCH process was first derived in Berkes et al. (2005) under a restrictive rate assumption on the parameters. Extending their results, we obtain a limiting process that applies to the whole range of parameter values. This is a non-trivial extension because, when the process deviates further from the integrated GARCH process, the approximation errors in Berkes et al. (2005) diverge, and thus a different normalization is needed.

## 2 Main Results

The main results are summarized in one proposition and three theorems. The proposition modifies the additive representation of the volatility process in Berkes et al. (2005) to accommodate the wider parameter range. Based on this proposition, we establish three theorems describing the asymptotic behaviour of the returns and volatilities in the near-stationary, integrated, and near-explosive cases, respectively.

To establish the additive representation of the volatility process, we make the following assumptions on the distribution of the innovations and on the convergence rates of the GARCH coefficients $\alpha_n$ and $\beta_n$.

###### Assumption 1.

is an i.i.d sequence with and , for some .

###### Assumption 2.

, and .

Assumption 1 imposes a non-degeneracy condition on the distribution of the innovations and thus ensures the applicability of the central limit theorem. Assumption 2 bounds the convergence rates of the GARCH coefficients so that the normalized sequence converges to a proper limit. Based on these assumptions, we obtain a modified additive representation in Proposition 1, building on Berkes et al. (2005).

###### Proposition 1 (Additive Representation).

###### Remark 1.

The key difference between our results and those of Berkes et al. (2005) is the convergence rate of the approximation errors. In Berkes et al. (2005), the approximation errors are negligible only under their restrictive rate assumption. We relax this assumption by changing the normalization of the original terms; under the new normalization, all the approximation errors remain negligible over the whole parameter range.

To formulate the theorems below, we first introduce some notation. Further, we need an assumption on the relative convergence rates of the localized parameters to regulate the asymptotic behaviour of the returns and volatilities in the near-stationary case.

###### Assumption 3.

, while , as .

Assumption 3 imposes a rate condition on the localized parameters. This condition is less restrictive than that in Berkes et al. (2005) in the sense that, instead of requiring the relevant quantity to converge to 0, we allow it to diverge slowly. This relaxation is likewise attributable to the change of normalization.

###### Theorem 1 (Near-stationary Case).

In addition, the random variables

are asymptotically independent, each with asymptotic distribution equal to that of .

###### Theorem 2 (Integrated Case).

In addition, the random variables

are asymptotically independent, each with asymptotic distribution equal to that of .

As in the near-stationary case, we have to impose an additional assumption on the relative speeds at which the localized parameters converge to zero.

###### Assumption 4.

, as .

###### Theorem 3 (Near-explosive Case).

In addition, the random variables

are asymptotically independent, each with asymptotic distribution equal to that of .

###### Remark 2.

As one may notice, the rates of convergence for both the volatility process and the return process decrease to 0 asymptotically in all three cases. These seemingly counterintuitive results are reasonable in the sense that the convergence rate is part of the normalization, which reflects the order of the process. In other words, when we compute a normalized partial sum, the normalization plays the same role as the factor $1/\sqrt{n}$ in the classical central limit theorem, which is likewise required to decrease to 0.
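The point can be illustrated in the simplest i.i.d. setting (a sketch unrelated to the paper's specific processes): the factor $n^{-1/2}$ applied to a partial sum vanishes, yet it is exactly what stabilizes the distribution of the sum.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_partial_sum(n, reps=2000):
    """Samples of n^{-1/2} * sum_{k=1}^n X_k for i.i.d. X_k ~ N(0, 1)."""
    x = rng.standard_normal((reps, n))
    return x.sum(axis=1) / np.sqrt(n)

# The normalizing factor 1/sqrt(n) -> 0, but the variance of the
# normalized sum stays near 1 for every n, so a nondegenerate limit exists.
for n in (10, 100, 10_000):
    s = normalized_partial_sum(n)
    print(n, round(s.var(), 2))
```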

## 3 Proofs

In this section, we present detailed proofs of the proposition and the theorems stated in the previous section. For the reader’s convenience, we first provide a roadmap for the proofs of the theorems. In general, the proofs proceed in three steps:

Step 1: We decompose the volatility process into four components by expanding the multiplicative form provided in Proposition 1.

Step 2: We show that the first three volatility components are negligible after normalization, and that the last component converges to a proper limit, using the Cramér-Wold device together with the Liapounov central limit theorem or Donsker’s theorem.

Step 3: We determine a normalization under which the normalized volatility converges to 1. Applying this normalization to the return process completes the proof.

###### Proof of Proposition 1.

First, note that the GARCH(1,1) model can be written in the following multiplicative form:
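In generic GARCH(1,1) notation (which may differ from the paper's own symbols), substituting $y_{t-1} = \sigma_{t-1}\varepsilon_{t-1}$ into the volatility recursion gives $\sigma_t^2 = \omega + (\beta + \alpha\varepsilon_{t-1}^2)\,\sigma_{t-1}^2$, and iterating yields the standard multiplicative form:

$$
\sigma_n^2 \;=\; \sigma_0^2 \prod_{k=1}^{n}\bigl(\beta + \alpha\varepsilon_{k-1}^2\bigr) \;+\; \omega \sum_{j=1}^{n} \prod_{k=j+1}^{n}\bigl(\beta + \alpha\varepsilon_{k-1}^2\bigr).
$$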

Note that

Then by Assumption 1 and Chow & Teicher (2012), we have the almost sure convergence of

Therefore, the term above is
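The almost sure convergence invoked above is a strong-law statement for the i.i.d. summands $\log(\beta + \alpha\varepsilon_k^2)$; it can be checked numerically. A sketch assuming standard normal innovations and illustrative values of $\alpha$ and $\beta$ (not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 0.1, 0.85       # illustrative values only

def running_mean_log(n):
    """Running mean of log(beta + alpha * eps_k^2) for i.i.d. N(0,1) eps_k."""
    eps = rng.standard_normal(n)
    terms = np.log(beta + alpha * eps ** 2)
    return np.cumsum(terms) / np.arange(1, n + 1)

m = running_mean_log(200_000)
# By the strong law, the running mean settles near E log(beta + alpha*eps^2),
# which is negative here (Jensen: E log < log E = log(alpha + beta) < 0).
print(round(m[-1], 3))
```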

Now consider the sequence of events

From the previous result we know . Then by Taylor expansion, , on the event , which implies

Now by direct plugging into the key multiplicative term we care about, we have

Further, note that the summands form an i.i.d. sequence; then we know

which implies

Similarly, we define the sequence of events

which is known to have the property . Then by Taylor expansion, when , on the event

and by the law of the iterated logarithm, we know
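The law of the iterated logarithm controls the fluctuations of centered partial sums: for i.i.d. summands with unit variance, $\limsup_n |S_n|/\sqrt{2n\log\log n} = 1$ almost surely. A numerical sketch with standard normal summands (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 1_000_000
s = np.cumsum(rng.standard_normal(n))      # partial sums S_1, ..., S_n
k = np.arange(3, n + 1)                    # need log(log k) > 0, so k >= 3
ratio = np.abs(s[2:]) / np.sqrt(2 * k * np.log(np.log(k)))
# Along the whole path the ratio stays bounded, consistent with the LIL.
print(round(ratio.max(), 2))
```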

Combining the results above, we have thus shown that

Lastly, by the equation above, we know

and this establishes . ∎

###### Proof of Theorem 1.

First, we focus on the volatilities. Denote , ,

For , note by Lemma 4.1 in Berkes et al. (2005), we have

(1)

and note that

(2)

Then by equation (1), (2) and Proposition 1 we have

Lastly, for the remaining component, by Lemma 4.1 in Berkes et al. (2005) we have

Therefore, we only have to consider the last term in the above equation. Define

and

Then by the Cramér-Wold device (Theorem 29.4 of Billingsley (1995)), we have

Observe that

we then have

Observe also that, for some , , we have

and by Jensen’s inequality, we know for some ,

This implies that

Now we can easily check Liapounov’s condition:

Then by the Liapounov central limit theorem (Theorem 27.3, p. 362 of Billingsley (1995)), we have
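Liapounov's condition for a triangular array $\{X_{n,k}\}$ requires $\sum_k E|X_{n,k}|^{2+\delta} / s_n^{2+\delta} \to 0$ for some $\delta > 0$, where $s_n^2 = \sum_k \operatorname{Var}(X_{n,k})$; when it holds, the normalized row sums are asymptotically standard normal. A simulated sketch for a simple illustrative array (not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)

def row_sum(n, reps=4000):
    """Normalized row sums of the array X_{n,k} = eps_k * (k/n),
    eps_k i.i.d. N(0,1), so s_n^2 = sum_k (k/n)^2."""
    k = np.arange(1, n + 1)
    sd = k / n                              # std of X_{n,k}
    s_n = np.sqrt(np.sum(sd ** 2))
    x = rng.standard_normal((reps, n)) * sd
    return x.sum(axis=1) / s_n

# Liapounov's condition with delta = 1:
# sum_k E|X_{n,k}|^3 / s_n^3 = O(n^{-1/2}) -> 0, so the CLT applies.
z = row_sum(2000)
print(round(z.mean(), 2), round(z.var(), 2))
```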
