Estimation of Market Risk Measures in Mexican Financial Time Series

Alberto Saavedra Espinosa1*

Correspondence: *. Circuito Exterior s/n, Coyoacán, Cd. Universitaria, C. P. 04510 Ciudad de México, CDMX. Tel. 01 55 5622 4992. E-mail:


Abstract:

The objectives of this work are to investigate whether: i) a GARCH model with Generalized Pareto Distribution (GPD) innovations, complemented with an EWMA volatility forecast to handle practical problems that might arise in GARCH applications spanning long periods of time, appropriately estimates risk measures (VaR and Expected Shortfall) for Mexican financial series at high confidence levels; ii) the estimates yielded by such a model are better than those given by a GARCH with Gaussian or Student-t innovations. Our quality assessment and comparison between models consist of backtests of the risk measure estimates yielded by each method used in this paper. Our results show that: i) the methodology used in this paper appropriately estimates our two risk measures; ii) the GARCH-GPD model yields better results than the GARCH-Gaussian and GARCH-Student-t models. Our results are limited to one-day risk measure estimates. As far as we know, our results on the Expected Shortfall are the first of their kind for Mexican series. We conclude that the study achieved its objectives and that there are important areas of opportunity for further studies.

Received: 2017 February 7; Accepted: 2017 August 1

REMEF. 2017; 12(4)
doi: 10.21919/remef.v12i4.234

JEL Classification: G11, G17, C22.
Keywords: Risk Analysis, Value at Risk, Volatility Forecasting, GARCH, Extreme Value Theory, Market Risk, Expected Shortfall.

1. Introduction

Modern society relies on the proper functioning of the financial system and has a collective interest in its stability. Financial crises, such as the one of 2008, have pointed out the importance of ensuring that financial institutions properly measure and balance the risks they take and hold enough capital to withstand any foreseeable problems. Consequently, society views risk management positively and confers regulators the task of forging a framework that safeguards its interests.

Therefore, it is important to develop and test accurate methodologies to measure the risks to which a financial institution is exposed. In this regard, the main financial risk categories are market risk, credit risk, operational risk, and liquidity risk. This paper studies and assesses a technique to measure the best-known risk in the banking industry: market risk.

The studied technique uses daily information on the changes in market risk factors and has two important components. The first one is a stochastic volatility model to describe and forecast the dynamics of the risk factors. The second one is the use of Extreme Value Theory (EVT) to complement such a dynamic model.

Specifically, it uses a GARCH model with Generalized Pareto innovations to estimate two important risk measures: Value-at-Risk (VaR) and Expected Shortfall.1 Alexander McNeil and Rüdiger Frey first suggested this kind of model in their extraordinary contribution Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: An Extreme Value Approach (McNeil, A. J. and Frey, R., 2000). Such a technique takes advantage of the power of the Peaks Over Threshold (POT) method for describing the tails of univariate distributions to estimate risk measures for financial time series at high confidence levels.

However, in contrast with McNeil, A. J. and Frey, R. (2000), the technique used in this paper investigates how suitable it is to complement the GARCH volatility-modelling component of the mentioned methodology with an Exponentially Weighted Moving Average (EWMA) approach on the days when the GARCH specification presents a practical problem, such as a non-significant parameter.2 This feature might be an advantage in applications that cover periods of considerable length.

Thus, the objectives of the presented study are to investigate:

  1. Whether the risk measure estimates that the proposed technique yields for Mexican data, during both crisis and relatively calm seasons in financial markets, and especially at high confidence levels, are well behaved.
  2. How the risk measure estimates yielded by the proposed technique compare with those obtained under another GARCH modelling approach: modelling the GARCH innovations with a Gaussian or Student-t distribution.

For both points, our quality assessment and comparison between methods will be made using backtests of the risk measure estimates yielded by each method used in this paper.

The Mexican financial series used for our study are the USD/MXN exchange rate (FIX) and the main Mexican stock index, the Prices and Quotes Index (IPC for its acronym in Spanish).

When exploring relevant literature on this type of application, we find that Kourouma, L. et al. (2011) carried out an exercise similar to ours; however, that work focuses on US stock indices and asset prices such as oil and corn during a financial crisis. As far as we know, there has not been an application like ours to Mexican time series.

On the other hand, in Fernández, V. P. (2003) and Aguirre, A. I. et al. (2013) a GARCH model is used in conjunction with the POT Method to analyze, among other assets, a Mexican financial series: the IPC. However, such applications focus solely on the estimation and evaluation of VaR. That is, the Expected Shortfall, an equally important risk measure, is left out of the analysis.

In addition, in López, E. (2013) a GARCH model and another EVT technique, the Block Maxima Method, are used to analyze another Mexican financial series: the FIX exchange rate. However, this work also focuses solely on VaR; other Mexican financial series and the Expected Shortfall are left out of the analysis.

Consequently, our work differs from previous studies in several respects. First, while some attention has been devoted to the study of the IPC using GARCH-GPD methods, those exercises focused solely on the estimation and evaluation of VaR. That is, they ignored the Expected Shortfall, an equally important risk measure that has the advantage of better theoretical properties than VaR; specifically, of being a coherent risk measure.

Thus, as far as we know, this is the first analysis of this nature that estimates and evaluates Expected Shortfall at high confidence levels for Mexican series.

On the other hand, our application also contrasts with previous work in that it analyzes the performance of the GARCH-GPD method on Mexican data in periods both of crisis (unlike Fernández, V. P. (2003)) and of relative calm (in contrast to Kourouma, L. et al. (2011), López, E. (2013) and Aguirre, A. I. et al. (2013)).

Indeed, because the quality of the conditional VaR and Expected Shortfall estimates was evaluated at high confidence levels, the backtests used were long enough to evaluate the methodology both in periods of crisis (as in 2008) and of relative calm in the markets.

Another peculiarity of our application is related to a classic problem in applications with extremes: the choice of the threshold from which the tail of a distribution is considered to start. Specifically, our work explored how adequate the methodology suggested by McNeil, A. J. and Frey, R. (2000) for choosing an optimal threshold for the right tail of a GARCH innovation distribution is for Mexican data. Investigating the adequacy of this method in Mexican series is of great importance: it shows the presence of an important empirical fact in national data, which may be relevant in subsequent analyses.

After this introduction, our paper is structured as follows: section two describes the methodology used in our study, including the general development of our backtesting procedures; section three presents the main results of our application; section four presents our conclusions and a few ideas for future work on this topic.

2. Methodology

Let $\{X_t\}_{t=1}^{\infty}$ be a stationary time series that represents daily observations of log-returns of the price of a financial asset.3 We will assume that the dynamics of $X_t$ is given by a GARCH(1,1)4 model:

$$X_t = \sigma_t Z_t, \qquad \sigma_t^2 = \alpha_0 + \alpha_1 X_{t-1}^2 + \beta_1 \sigma_{t-1}^2 \tag{1}$$

Where $\alpha_0 > 0$, $\alpha_1 \geq 0$, $\beta_1 \geq 0$, and the innovation process $Z_t$ is a Strict White Noise (i.e. it is formed by independent and identically distributed random variables) with distribution function $F_Z(z)$, zero mean, and unit variance. We will suppose that $\sigma_t$ is measurable with respect to $\mathcal{F}_{t-1}$, the available information of the log-returns process up to time $t-1$.
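To make the assumed dynamics concrete, here is a minimal simulation sketch in Python (the parameter values are purely illustrative, not estimates from our series):

```python
import numpy as np

def simulate_garch11(n, alpha0=0.05, alpha1=0.10, beta1=0.85, seed=0):
    """Simulate X_t = sigma_t * Z_t with
    sigma_t^2 = alpha0 + alpha1 * X_{t-1}^2 + beta1 * sigma_{t-1}^2.
    Gaussian SWN innovations are used here only for illustration."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)                 # i.i.d., zero mean, unit variance
    x = np.empty(n)
    sigma2 = alpha0 / (1.0 - alpha1 - beta1)   # start at the stationary variance
    for t in range(n):
        x[t] = np.sqrt(sigma2) * z[t]
        sigma2 = alpha0 + alpha1 * x[t] ** 2 + beta1 * sigma2
    return x
```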

Given that our backtests will have a considerable length,5 we will complement our volatility modelling with an EWMA approach. This is related to two typical scenarios that could affect our GARCH volatility forecasting:

  1. The parameter $\hat{\alpha}_0$ of the GARCH model fitted to the series is not statistically significant (which in our application is checked with the traditional t-test of statistical significance).
  2. The GARCH process fitted to the series is not stationary (or, equivalently, $\hat{\alpha}_1 + \hat{\beta}_1 > 1$).

While such cases are rare (especially the second one), they could appear on some days in an application that covers an extended period of time (approximately 20 years) like ours. Thus, while the volatility of our series will always be estimated through a GARCH(1,1) model, its forecasts will instead be given by:

$$\hat{\sigma}_{t+1}^2 =
\begin{cases}
\hat{\alpha}_1 X_t^2 + \left(1-\hat{\alpha}_1\right)\hat{\sigma}_t^2 & \text{if } \hat{\alpha}_1 + \hat{\beta}_1 > 1 \text{ or } B(\hat{\alpha}_0) > 0.05 \\
\hat{\alpha}_0 + \hat{\alpha}_1 X_t^2 + \hat{\beta}_1 \hat{\sigma}_t^2 & \text{in any other case}
\end{cases} \tag{2}$$

Where $\hat{\alpha}_0$, $\hat{\alpha}_1$ and $\hat{\beta}_1$ are the estimated parameters of the GARCH model fitted to our series on day $t$, and $B(\hat{\alpha}_0)$ is the p-value of the t-test applied to $\hat{\alpha}_0$.

That is, on the days when any of the cases previously mentioned occurs, we use an EWMA approach to forecast the volatility of our series. On the rest of the days, we use the forecast given by our GARCH specification.
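A minimal sketch of this forecast-selection rule, assuming the fitted parameters and the p-value of $\hat{\alpha}_0$ are already available (all names are illustrative, not a specific library's API):

```python
def forecast_variance(x_t, sigma2_t, a0_hat, a1_hat, b1_hat, pvalue_a0):
    """One-day-ahead variance forecast following (2): fall back to an
    EWMA-style recursion when the fitted GARCH(1,1) is non-stationary
    or alpha0 is not significant at the 5% level."""
    if a1_hat + b1_hat > 1 or pvalue_a0 > 0.05:
        # EWMA fallback: drop alpha0 and force the two weights to sum to one
        return a1_hat * x_t ** 2 + (1 - a1_hat) * sigma2_t
    # Standard GARCH(1,1) one-step forecast
    return a0_hat + a1_hat * x_t ** 2 + b1_hat * sigma2_t
```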

Note that, given the definition of (2), our EWMA specification almost replicates the volatility forecast suggested by the GARCH model.6 This is a desirable feature in such a quantity, provided the GARCH model appropriately estimates the volatility of Mexican financial time series.

Since our application will investigate the adequacy of different models for the innovation distribution, we estimate the parameters of our GARCH models using a Quasi-Maximum Likelihood approach. This is in order to avoid making any prior assumption on the innovation distribution.7

Given that

$$F_{X_{t+1} \mid \mathcal{F}_t}(x) = P\left(\sigma_{t+1} Z_{t+1} \leq x \mid \mathcal{F}_t\right) = F_Z\!\left(\frac{x}{\sigma_{t+1}}\right),$$

we have that the VaR and Expected Shortfall formulas turn out to be:

$$VaR_\alpha^t = \sigma_{t+1}\, q_\alpha(Z) \tag{3}$$

$$ES_\alpha^t = \sigma_{t+1}\, ES_\alpha(Z) \tag{4}$$

Where $\sigma_{t+1}$ follows the specification given in (2), and $q_\alpha(Z)$ and $ES_\alpha(Z)$ denote the risk measures associated with the distribution of $Z_t$, which, by hypothesis, do not depend on $t$.

When the innovation distribution is modeled by a Gaussian or Student-t distribution, such risk measures can be computed easily. In fact, they are given by:

Table 1.

Risk Measure Formulas for the Gaussian and Student-t Models.


Note: Formulas for the quantile (second column) and Expected Shortfall (third column) of the Gaussian and Student-t models.
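The table body did not survive extraction, but the closed forms involved are standard (see McNeil, A. J. and Frey, R., 2000). As a hedged sketch, assuming the Student-t innovations are scaled to unit variance:

```python
import numpy as np
from scipy import stats

def gaussian_var_es(alpha):
    """Quantile and Expected Shortfall of a standard normal innovation."""
    q = stats.norm.ppf(alpha)
    return q, stats.norm.pdf(q) / (1.0 - alpha)

def student_t_var_es(alpha, nu):
    """Quantile and ES of a Student-t innovation rescaled to unit variance
    (requires nu > 2); standard closed forms for the t distribution."""
    scale = np.sqrt((nu - 2.0) / nu)       # unit-variance rescaling
    q_t = stats.t.ppf(alpha, df=nu)
    es_t = stats.t.pdf(q_t, df=nu) / (1.0 - alpha) * (nu + q_t ** 2) / (nu - 1.0)
    return scale * q_t, scale * es_t
```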


On the other hand, we can use the POT method to model the innovation distribution. The POT method is based on the Gnedenko-Balkema-Pickands-de Haan theorem, a limit result from Extreme Value Theory that, roughly speaking, says that the excess distribution, $F_u(x) = P(X - u \leq x \mid X > u)$, of an i.i.d. sequence of random variables can be approximated by a Generalized Pareto Distribution for a sufficiently large threshold $u$.

Specifically, as Aguirre, A. I. et al. (2013) summarize, the Gnedenko-Balkema-Pickands-de Haan theorem tells us that, for a large class of univariate distributions, the excess distribution $F_u$, for large values of $u$, is approximately equal to

$$F_u(x) \approx G(x) =
\begin{cases}
1 - \left(1 + \dfrac{\xi x}{\beta}\right)^{-1/\xi} & \text{if } \xi \neq 0 \\
1 - e^{-x/\beta} & \text{if } \xi = 0
\end{cases}$$
Where $\beta = \sigma + \xi(u - \mu)$; $\xi$ and $\beta$ are known, respectively, as the shape and scale parameters of the GPD.

In the former expression, three possible distributions can be obtained, depending on the value of the parameter $\xi$ of the GPD. If $\xi > 0$, the GPD is a Pareto distribution with parameters $\alpha = 1/\xi$ and $\kappa = \beta/\xi$, for $x > 0$. For $\xi = 0$, the GPD corresponds to an exponential distribution with rate $1/\beta$ and $x > 0$. Finally, if $\xi < 0$, the GPD takes the form of a type II Pareto distribution, which is defined in the range $0 < x < -\beta/\xi$.

There are at least two ways to work with thresholds. One is through the value of the threshold itself, i.e. a relatively large value in terms of the magnitude of our observations. The second is to express the threshold in terms of the number of data points that lie above it. An example of the second approach, for a sample of 1,000 observations, would be to fix the value of $u$ equal to the $(k+1)$-th order statistic; thus, if $k = 100$ we have 100 observations above $u$ and we consider that the tail of the distribution of our sample consists of the 10% largest observations in it.

For the first approach, one can use the graph of the Sample Mean Excess Function to find appropriate threshold values to use in the POT Method. The Sample Mean Excess Function is an empirical estimator of the function $e(u) = E(Y - u \mid Y > u)$. For an i.i.d. sequence of $n$ random variables $(Y_i)_{i=1}^{n}$, it is given by:

$$e_n(u) = \frac{\sum_{i=1}^{n} \max\left(Y_i - u,\, 0\right)}{\sum_{i=1}^{n} I_{\{Y_i > u\}}}$$
That is, $e_n(u)$ represents the sum of the excesses above the threshold $u$, divided by the number of observations that exceed $u$. The graph of the Sample Mean Excess Function is defined as:

$$\left\{ \left(u,\, e_n(u)\right) : Y_{n,n} < u < Y_{1,n} \right\}$$

Where $Y_{1,n} > \cdots > Y_{n,n}$ represent the order statistics of the sample $(Y_i)_{i=1}^{n}$. The interpretation of the graph of the Sample Mean Excess Function is the following: if, from a certain threshold onwards, the points show a linear upward trend, then there are signs of heavy-tailed behavior. In the same way, data with an exponential distribution would give an approximately horizontal line. Lastly, data from a short-tailed distribution would show a linear downward trend. In all cases, the threshold from which any of these behaviors is noted might be a good candidate for applying the POT Method.
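A minimal sketch of the Sample Mean Excess Function, evaluated at thresholds placed at the sample points themselves so the graph can be drawn directly:

```python
import numpy as np

def mean_excess_points(y):
    """Return (u, e_n(u)) pairs with thresholds at the sorted sample values,
    excluding the maximum (where no excesses remain). Assumes, for
    simplicity, that the sample values are distinct."""
    y = np.asarray(y, dtype=float)
    thresholds = np.sort(y)[:-1]
    e = np.array([(y[y > u] - u).mean() for u in thresholds])
    return thresholds, e   # plot e against thresholds and look for linearity
```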

Regarding the second approach, McNeil, A. J. and Frey, R. (2000) suggest carrying out a simulation study. Specifically, because a pair of desirable characteristics in an estimator is that it be unbiased and have the least Mean Squared Error (MSE), they suggest the following: identify a distribution that represents at least a good approximation to the true distribution of the innovations (e.g. a Student-t distribution, because series of financial returns are usually leptokurtic). If such a distribution can be seen as a particular case of the GPD (e.g. a Student-t distribution with $\nu$ degrees of freedom can be considered as a GPD with shape parameter $\xi = 1/\nu$), one can use this fact to estimate the bias and MSE of $\hat{\xi}$, and of some other quantity of interest (e.g. a high quantile of the distribution), for the fitted GPD. Then, they suggest plotting the bias and MSE of these estimates as functions of $k$ (the number of data points that conform the tail of the distribution). Finally, one should explore the graphs generated by this method and find the value of $k$ that minimizes both the bias and the MSE of the estimates of $\xi$ and of any other quantity of interest.
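A compact sketch of such a simulation study, using Student-t innovations as the reference distribution (the grid of $k$ values, sample size, and repetition count are illustrative):

```python
import numpy as np
from scipy import stats

def bias_mse_of_xi(nu=4, n=1000, k_grid=(50, 100, 200, 300), reps=200, seed=0):
    """Monte Carlo bias and MSE of the GPD shape estimate as a function of k,
    when the true innovations are Student-t (true shape xi = 1/nu)."""
    rng = np.random.default_rng(seed)
    true_xi = 1.0 / nu
    results = {}
    for k in k_grid:
        estimates = np.empty(reps)
        for r in range(reps):
            z = stats.t.rvs(df=nu, size=n, random_state=rng)
            u = np.sort(z)[-(k + 1)]          # threshold = (k+1)-th largest point
            xi_hat, _, _ = stats.genpareto.fit(z[z > u] - u, floc=0)
            estimates[r] = xi_hat
        bias = estimates.mean() - true_xi
        mse = ((estimates - true_xi) ** 2).mean()
        results[k] = (bias, mse)              # plot both against k
    return results
```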

Once the threshold for applying the POT Method has been selected, the parameters of the GPD can be estimated via maximum likelihood, based on the sample versions of the GARCH innovations; that is, the standardized residuals of the model. In our application, we estimate the GPD parameters by maximum likelihood.8
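For instance, a sketch of this ML step with scipy's generalized Pareto implementation (the location is pinned at zero because the data passed in are excesses over the threshold):

```python
import numpy as np
from scipy import stats

def fit_gpd_to_residuals(z, u):
    """Fit a GPD to the excesses of the standardized residuals z over the
    chosen threshold u; returns the shape and scale estimates."""
    z = np.asarray(z, dtype=float)
    xi_hat, _, beta_hat = stats.genpareto.fit(z[z > u] - u, floc=0)
    return xi_hat, beta_hat
```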

In this way, it is possible to use the approximation given by the Gnedenko-Balkema-Pickands-de Haan theorem and develop the following VaR and Expected Shortfall formulas for a distribution whose tails are modeled by a GPD:

$$q_\alpha = u + \frac{\hat{\beta}}{\hat{\xi}} \left( \left( \frac{n}{N_u} \left(1-\alpha\right) \right)^{-\hat{\xi}} - 1 \right); \qquad ES_\alpha = \frac{q_\alpha}{1 - \hat{\xi}} + \frac{\hat{\beta} - \hat{\xi} u}{1 - \hat{\xi}} \tag{5}$$

Where $N_u$ represents the number of data points above the threshold and $n$ the total size of the sample used. Thus, by substituting these estimates into the equations given in (3) and (4), it is possible to construct the estimators of $VaR_\alpha^t$ and $ES_\alpha^t$ based on a GARCH-GPD model.
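A direct transcription of (5) as a sketch (valid for $\hat{\xi} \neq 0$ and $\hat{\xi} < 1$; the argument names are illustrative):

```python
def gpd_var_es(alpha, u, xi_hat, beta_hat, n, n_u):
    """VaR (quantile) and ES of the innovation distribution whose tail
    beyond u is modeled by a fitted GPD, following (5)."""
    q = u + (beta_hat / xi_hat) * (((n / n_u) * (1.0 - alpha)) ** (-xi_hat) - 1.0)
    es = q / (1.0 - xi_hat) + (beta_hat - xi_hat * u) / (1.0 - xi_hat)
    return q, es
```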

2.1 Backtesting

To evaluate the estimates of $VaR_\alpha^t$ and $ES_\alpha^t$ obtained through the GARCH-Normal, GARCH-Student-t and GARCH-GPD models, it is possible to develop backtests.

We evaluate the performance of $VaR_\alpha^t$ and $ES_\alpha^t$ on a historical series $x_1, \ldots, x_m$, where $m \gg n$, based on a memory of $n$ days, on the days $t \in T = \{n, \ldots, m-1\}$. This means that if we worked, for example, with a time window of $n = 1{,}000$ banking days, for each prediction of $VaR_\alpha^t$ and $ES_\alpha^t$ we would use, approximately, 4 years of daily information.

To perform our $VaR_\alpha^t$ backtest, on each day $t \in T$ we fit a new GARCH model to the corresponding loss observations, calculate the standardized residuals of the model, and determine a new estimator of the innovation distribution via the POT Method (or any other model for the innovations). Then, we use the formula given in (3) and, based on the model fitted to the log-returns, we estimate $VaR_\alpha^t$. Next, we compare $\widehat{VaR}_\alpha^t$ with $x_{t+1}$ for the values of $\alpha$ considered appropriate; in our case, we were interested in $\alpha \in \{0.95, 0.99, 0.995, 0.999\}$. We say that a violation occurred whenever $x_{t+1} > \widehat{VaR}_\alpha^t$.
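A self-contained sketch of this rolling procedure is given below. For brevity it replaces the full GARCH-plus-GPD machinery with an EWMA variance filter and the empirical quantile of the standardized residuals; the loop structure is the point, not the ingredients:

```python
import numpy as np

def var_backtest(x, n=1000, alpha=0.99, lam=0.94):
    """Count VaR violations over a rolling window of n observations."""
    x = np.asarray(x, dtype=float)
    violations = 0
    for t in range(n, len(x) - 1):
        window = x[t - n:t]
        sigma2 = np.empty(n)                   # EWMA filter (GARCH stand-in)
        sigma2[0] = window.var()
        for i in range(1, n):
            sigma2[i] = lam * sigma2[i - 1] + (1 - lam) * window[i - 1] ** 2
        z = window / np.sqrt(sigma2)           # standardized residuals
        q_alpha = np.quantile(z, alpha)        # empirical innovation quantile
        sigma2_next = lam * sigma2[-1] + (1 - lam) * window[-1] ** 2
        if x[t + 1] > np.sqrt(sigma2_next) * q_alpha:   # formula (3)
            violations += 1
    return violations
```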

It is possible to develop a binomial test based on the number of violations to evaluate the performance of our estimates of $VaR_\alpha^t$. Assuming the dynamics of equations (2) and (3), the violation indicator at time $t \in T$ is a Bernoulli random variable:

$$I_t = I_{\left\{X_{t+1} > VaR_\alpha^t\right\}} = I_{\left\{Z_{t+1} > q_\alpha\right\}} \sim \mathrm{Bernoulli}\left(1 - \alpha\right)$$

Moreover, $I_t$ and $I_s$ are independent for $s, t \in T$ with $s \neq t$, since $Z_t$ and $Z_s$ are independent. Thus,

$$\sum_{t \in T} I_t \sim \mathrm{Binomial}\left(\mathrm{card}(T),\ 1 - \alpha\right)$$

that is, the total number of violations has a binomial distribution under the proposed model.

In this way, under the null hypothesis that the model correctly estimates the conditional quantiles of the studied series, the empirical version of the statistic $\sum_{t \in T} I_{\{X_{t+1} > \widehat{VaR}_\alpha^t\}}$ comes from the $\mathrm{Binomial}\left(\mathrm{card}(T),\ 1-\alpha\right)$ distribution.

Therefore, we can perform a two-tailed binomial test of this null hypothesis against the alternative that the method entails a systematic error in the estimation of $VaR_\alpha^t$ that leads to too few or too many violations. A p-value smaller than our significance level (typically 0.05) in this binomial test will be interpreted as evidence against the null hypothesis.
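For example, a sketch of this test using scipy's exact binomial test (available as stats.binomtest in recent scipy versions):

```python
from scipy import stats

def var_binomial_test(violations, days, alpha=0.99):
    """Two-tailed exact binomial test: under H0 the number of violations
    is Binomial(days, 1 - alpha)."""
    return stats.binomtest(violations, n=days, p=1.0 - alpha,
                           alternative='two-sided').pvalue
```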

Now, it is also possible to develop a backtest to evaluate our estimates of $ES_\alpha^t$. This backtest is similar to the one developed for $VaR_\alpha^t$ and will allow us to investigate whether the model proposed in the methodology discussed in this section provides reasonable estimates of $ES_\alpha^t$. This time we are interested in the size of the difference between $X_{t+1}$ and $ES_\alpha^t$ given the event of a violation of the $VaR_\alpha^t$ quantile. We define the residuals

$$R_{t+1} = \frac{X_{t+1} - ES_\alpha^t}{\sigma_{t+1}} = Z_{t+1} - E\left[Z \mid Z > q_\alpha(Z)\right].$$

It is clear that under our model (2) these residuals are i.i.d. and that, conditioned on the event $\left\{X_{t+1} > VaR_\alpha^t\right\}$ or, equivalently, $\left\{Z_{t+1} > q_\alpha(Z)\right\}$, they have an expected value equal to zero.

Thus, suppose again that we perform our backtest over the days in the set $T$. We can form empirical versions of these residuals on the days when a violation of the corresponding quantile occurs, that is, on the days when $x_{t+1} > \widehat{VaR}_\alpha^t$. We will refer to these residuals as excess residuals and will denote them by $\left\{ r_{t+1} : t \in T,\ x_{t+1} > \widehat{VaR}_\alpha^t \right\}$, where

$$r_{t+1} = \frac{x_{t+1} - \widehat{ES}_\alpha^t}{\sigma_{t+1}}$$

where $\widehat{ES}_\alpha^t$ is a conditional estimate of the Expected Shortfall.

In this way, under the null hypothesis that we correctly estimate the dynamics of the loss process ($\sigma_{t+1}$) and the first moment of the truncated innovation distribution ($E[Z \mid Z > q_\alpha(Z)]$), the excess residuals should behave as a sample of i.i.d. observations with mean zero.

To test the zero-mean hypothesis we use a bootstrap test that makes no assumptions about the distribution underlying the excess residuals. Specifically, it is a two-tailed test that contrasts our null hypothesis that the excess residuals have a mean statistically equal to zero against the alternative that they have a mean different from zero, which would mean that the conditional Expected Shortfall is systematically underestimated or overestimated.

In essence, the bootstrap test we use consists of the following steps (a minimal sketch in code follows the list):

  1. take the excess residuals as data;
  2. calculate the average of these data (this will be the initial value of the test statistic: the simple average);
  3. resample various series satisfying the null hypothesis (i.e. series whose mean equals zero) from the original data set (i.e. the excess residuals);
  4. calculate and store the means of the resampled series;
  5. compare the values of the averages of the resampled series against the mean of the original series (the initial value of our test statistic);
  6. estimate the p-value of the test by calculating the proportion of times that the resampled averages were further from zero (either below or above) than the average of our original series: if those events represent less than 5% of the resamples, the hypothesis that the original data have mean zero is rejected.
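A minimal self-contained sketch of this test (centering the residuals before resampling is what enforces the null hypothesis on the resampled series):

```python
import numpy as np

def bootstrap_zero_mean_test(residuals, n_boot=10_000, seed=0):
    """Two-tailed bootstrap test of H0: the excess residuals have mean zero.
    Returns an estimated p-value."""
    rng = np.random.default_rng(seed)
    r = np.asarray(residuals, dtype=float)
    observed = r.mean()
    centered = r - observed                    # impose the null hypothesis
    boot_means = np.array([
        rng.choice(centered, size=r.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    # share of resampled means at least as far from zero as the observed mean
    return np.mean(np.abs(boot_means) >= abs(observed))
```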

3. Results

In this section, we present the results of our application. First, we show how our methodology works on a particular day, for both the FIX and IPC series. Then, we present the results of the backtests used to evaluate the performance of such methodology in our data.

In total, our series consist of daily observations of the FIX and the IPC between January 2, 1996 and February 12, 2016. However, in the first part of this section we will use the following subsets of information:

  • FIX: from February 17, 2012 to February 12, 2016.
  • IPC: from September 7, 2011 to August 31, 2015.

Specifically, we will use 1,000 daily observations in the fitting of each GARCH model. This means that we will use, approximately, 4 years of daily information to estimate our risk measures.

The parameters of the GARCH models fitted to the data used in this section, as well as the p-values of the corresponding statistical significance tests, are presented in Table 2:

Table 2.

Point Estimates and P-values of the t-Tests of Statistical Significance of the GARCH(1,1) Parameters for the FIX and IPC Series.



Table 2 shows that all the parameters of the GARCH model fitted to the IPC series are significant. In the FIX case, only the $\alpha_0$ parameter is non-significant. According to (2), this means that on this particular day we use an EWMA to forecast the volatility of the FIX series, while that of the IPC series is forecast using the formula given by our GARCH specification:

$$\hat{\sigma}_{t+1,\mathrm{FIX}}^2 = \hat{\alpha}_1 X_t^2 + \left(1-\hat{\alpha}_1\right)\hat{\sigma}_t^2, \qquad \hat{\sigma}_{t+1,\mathrm{IPC}}^2 = \hat{\alpha}_0 + \hat{\alpha}_1 X_t^2 + \hat{\beta}_1 \hat{\sigma}_t^2$$

In order to evaluate the goodness of fit of these models, we estimate their volatility and compute the sample versions of the innovations of our series. Specifically, if a GARCH model correctly estimates the volatility of a time series, we expect its innovations to behave as a Strict White Noise.

The volatility estimated by the fitted GARCH models is presented in Figure 1. Once this quantity has been estimated, we obtain the sample observations of the innovations of each model by dividing each observation by the corresponding value of the estimated volatility.


[Figure ID: f1] Figure 1.

GARCH Estimation of FIX and IPC Volatility.


For both series, we test the SWN hypothesis through graphical and numerical tests applied to their GARCH innovations, their squares, and their absolute values. For the former, we look at the graphs of the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF). For the latter, we apply a Ljung-Box test.

Figure 2 presents the graphs of the ACF and PACF for both the FIX and IPC innovations. Examining Figure 2 shows no evidence that the FIX or IPC innovations present serial dependence on their lagged values. Thus, this Figure suggests that the GARCH model correctly estimated the volatility of both series.


[Figure ID: f2] Figure 2.

Graphs of the ACF and PACF for both FIX and IPC innovations.


Regarding the Ljung-Box test, Table 3 summarizes the results of its application to the GARCH innovations of the FIX and IPC series.

Table 3.

P-values of the Ljung-Box Tests Applied to the FIX and IPC Innovations.


Note: P-values of the Ljung-Box tests applied to the innovations of the GARCH(1,1) models fitted to the FIX (upper half of the table) and IPC (bottom half of the table) series. The Ljung-Box tests were based on three different lags (h), shown by column.


Table 3 shows that the hypothesis of non-correlation was not rejected for either of our series in any of the nine cases considered. Therefore, Table 3 provides further evidence that the GARCH model adequately estimated the volatility of our series.

Given that the results contained in Figure 2 and Table 3 give us good confidence that the GARCH model correctly estimates the volatility dynamics of our series, we now continue with the analysis of their innovations.

Our first objective is to find an appropriate initial threshold to use in the POT Method. To do so, we use the graph of the Sample Mean Excess Function. Figure 3 shows this graph for both the IPC and FIX innovations.


[Figure ID: f3] Figure 3.

Sample Mean Excess Function of the FIX and IPC Innovations.


Figure 3 shows a couple of potential thresholds for a first GPD fit to the GARCH innovations of our series. In particular, it suggests using a threshold around 2 for the FIX innovations and one around 1 for the IPC innovations. Therefore, we fit a GPD model to each innovation series using the suggested thresholds.9 A visual assessment of the quality of such fits is shown in Figure 4.


[Figure ID: f4] Figure 4.

GPD Fit to the FIX and IPC Innovations.


Explicitly, Figure 4 contains the curves of the excess distribution (on the left) and of the tail (on the right) of the GPD models for the FIX and IPC innovations, with the empirical versions of such curves superimposed. Hence, this Figure lets us see that the thresholds suggested by Figure 3 allow the GPD model to achieve an excellent goodness of fit for both innovation series.

Having seen that our first GPD models fit the innovation series well, we compare such fits with those we would obtain with Gaussian and Student-t models. The estimated parameters of the three models are summarized in Table 4.

Table 4.

Models Fitted to the FIX and IPC Innovations.


Note: Point estimates of the parameters of the Gaussian (first row), GPD (middle row), and Student-t (third row) models fitted to the FIX and IPC innovations.


Figure 5 contains a graph comparing the goodness of fit of the three models at the (right) tail of the distributions of both innovation series. The Figure lets us see that, in both cases, the GPD model has the best fit; moreover, it shows that the Student-t distribution seems to overestimate the heaviness of the right tail of our two innovation series, whereas the Gaussian clearly underestimates it.


[Figure ID: f5] Figure 5.

Model Comparison for the FIX and IPC Innovations.


Using the formulas given in Table 1 and equation (5), the parameters in Table 4, and the selected thresholds, we can estimate VaR and Expected Shortfall for the innovations of our series. Table 5 contains such estimations.

Table 5.

Risk Measures for the Innovation Series.


Note: Point estimates of $VaR_\alpha$ and $ES_\alpha$ obtained through the Gaussian, Student-t, and GPD models fitted to the FIX and IPC innovations, for several confidence levels ($\alpha$).


The quantities in Table 5 show that the ordering of the tails estimated by our three models for the innovations is also present in our risk measure estimations: the Student-t model yields the largest estimates, the Gaussian model the smallest, and those of the GPD model lie between the two.

Now, once we have estimated our risk measures for the innovation distribution and forecasted the volatility of our series, we can estimate the corresponding risk measures for our Mexican financial series. To do so, we use the formulas given in (3) and (4).

Table 6 contains the estimations of our conditional risk measures. Note that, because such quantities are just a scaled version of the estimates contained in Table 5, they present the same ordering.

Table 6.

Conditional Risk Measures for Mexican Financial Series.


Note: Point estimates of $VaR_\alpha^t$ and $ES_\alpha^t$ obtained through the GARCH(1,1) models fitted to the FIX and IPC series, varying the model (Gaussian, Student-t, or GPD) fitted to the innovations, for several confidence levels ($\alpha$).


The former results conclude the first part of our application. This first approximation suggests that, among our three alternatives, the GARCH-GPD model has the best fit to the right tail of our series. Moreover, it signals that the GARCH-Student-t model might overestimate such tails, whereas the GARCH-Gaussian model may underestimate them. Nevertheless, in order to validate these results, we must perform a formal evaluation, which is the purpose of the second part of this section.

Notice that, in order to implement a backtest of our conditional risk measure estimates, we need to automate all the steps followed in the previous section. This can be done easily for all such steps except one: the selection of the threshold used in the POT method.

Indeed, in an automated routine, it would be impossible to analyze at each step the information given by the graph of the Sample Mean Excess Function or any other graphical tool. Thus, for our backtest purposes, we need an automatable algorithm to select an appropriate threshold to fit a GPD model to the GARCH innovations of our series.

To do so, we use the second way to work with thresholds: expressing them as a function of the number of observations above them. Now, it is important to notice that Figure 5 shows that the Student-t distribution represents, at least, an approximation to the observed distribution of the innovations of our two series. Thus, we can apply the results of the simulation study of McNeil, A. J. and Frey, R. (2000) to our data.

Indeed, McNeil and Frey's simulation study involved innovation series whose distributions were well approximated by a Student-t distribution. Therefore, given that our data satisfy such a condition, we apply their result and assume that the right tail of both of our innovation distributions is formed by the largest 10% of the innovations.
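In code, this rule reduces to taking the 90% empirical quantile of the innovations as the threshold; a minimal sketch (with `innovations` standing in for the actual GARCH residuals) is:

```python
import numpy as np

def automatic_threshold(innovations, tail_fraction=0.10):
    # Treat the largest 10% of the innovations as the tail, per McNeil and
    # Frey's simulation result, and set the threshold at that quantile.
    return float(np.quantile(innovations, 1.0 - tail_fraction))
```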

To check whether this approach to fitting a GPD model yields good results with our data, we make a graphical assessment. Figure 6 compares the fit, to the right tail of both the FIX and IPC innovation distributions, of a Gaussian model, a Student-t model, and a GPD model fitted following the results of McNeil and Frey's simulation study.


[Figure ID: f6] Figure 6.

Model Comparison for FIX and IPC Innovations Distributions.


Examining Figure 6, we notice that the goodness of fit achieved by applying the result of McNeil and Frey is comparable to the one achieved by analyzing the graph of the Sample Mean Excess Function. In other words, the methodology yields excellent results for our Mexican financial data.

Notice that the threshold selection methodology used to fit the GPD models shown in Figure 6 can be easily automated. Consequently, we can now automate all the steps needed to apply the GARCH-GPD methodology and implement the backtest procedures described in Section 3.

To be consistent with the analysis performed in the first section, the $VaR_\alpha^t$ and $ES_\alpha^t$ backtests of our application will be based on a moving time window of n = 1,000 banking days. This means that the set of days on which we perform our backtests consists of the daily observations of FIX and IPC log-returns from December 29, 1999 to February 12, 2016; the information of the days between January 2, 1996 and December 28, 1999 is used to fit the first GARCH(1,1) model to each series.
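Schematically, the moving-window backtest can be organized as in the following sketch; `estimate_risk_measures` is a hypothetical callable standing in for the full refit pipeline (GARCH fit, GPD fit to the innovations, and the conditional scaling step).

```python
import numpy as np

def rolling_backtest(losses, estimate_risk_measures, window=1000):
    """At each day t, refit the model on the previous `window` observations
    and record the one-day-ahead VaR/ES forecasts next to the realized loss."""
    records = []
    for t in range(window, len(losses)):
        var_t, es_t = estimate_risk_measures(losses[t - window:t])
        records.append((var_t, es_t, losses[t]))
    return np.array(records)  # columns: VaR forecast, ES forecast, realized loss
```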

In this way, the backtests of our series are carried out on a set of m = 4,060 observations, and each estimation of $VaR_\alpha^t$ and $ES_\alpha^t$ is based on approximately 4 years of daily information. Notice that the length of this period was made as long as possible because we evaluate estimates of high quantiles and expected shortfalls at considerably high confidence levels.10

For the FIX and IPC series, Table 7 shows the results of the $VaR_\alpha^t$ backtests. Specifically, Table 7 shows the p-values of the binomial tests applied to the samples of VaR violation indicators, together with the number of historical and (in parentheses) expected violations of each $VaR_\alpha^t$ estimate. The information is separated by confidence level and by model fitted to our Mexican data.
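For reference, each p-value in Table 7 comes from a test of the hypothesis that the violation indicators are Bernoulli with probability 1 - α; a minimal sketch with scipy (the violation count shown is hypothetical) follows.

```python
from scipy.stats import binomtest

def var_backtest_pvalue(n_violations, m, alpha):
    # Under a correct VaR_alpha model, the number of violations in m days
    # follows a Binomial(m, 1 - alpha) distribution.
    return binomtest(k=n_violations, n=m, p=1.0 - alpha).pvalue

# e.g., 27 observed violations of VaR_0.995 over m = 4,060 days (~20 expected)
print(f"p-value: {var_backtest_pvalue(27, 4060, 0.995):.3f}")
```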

Table 7.

$VaR_\alpha^t$ Estimates Backtest Results.


TFN6The table shows, by model fitted to the GARCH innovations, the p-values of the binomial tests applied to the samples of $VaR_\alpha^t$ violation indicators, the historical violations of the $VaR_\alpha^t$ estimates and, in parentheses, the expected $VaR_\alpha^t$ violations by confidence level.


On the other hand, Table 8 contains the results of the $ES_\alpha^t$ backtests. Such table shows the p-values of the zero-mean bootstrap tests applied to the samples of excess residuals. The information is classified by model fitted to the series and by tested confidence level.
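The test statistic can be bootstrapped as in the following sketch, which resamples the mean-centered excess residuals; this is a plausible implementation in the spirit of the zero-mean test described in Section 3, not necessarily the exact resampling scheme used in the paper.

```python
import numpy as np

def zero_mean_bootstrap_pvalue(excess_residuals, n_boot=10_000, seed=0):
    """One-sided bootstrap test of H0: the excess residuals have mean zero,
    against the alternative that their mean is greater than zero."""
    rng = np.random.default_rng(seed)
    r = np.asarray(excess_residuals, dtype=float)
    t_obs = r.mean() / (r.std(ddof=1) / np.sqrt(r.size))
    centered = r - r.mean()  # impose the null hypothesis
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        s = rng.choice(centered, size=r.size, replace=True)
        t_boot[b] = s.mean() / (s.std(ddof=1) / np.sqrt(s.size))
    return float(np.mean(t_boot >= t_obs))
```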

Table 8.

Results of the $ES_\alpha^t$ Estimates Backtest.


TFN7The table shows the p-values of the bootstrap tests applied to the samples of residuals of the difference between $X_{(t+1)}$ and $ES_\alpha^t$ on the days when the VaR was exceeded.


The results contained in Tables 7 and 8 show that the estimates of $VaR_\alpha^t$ and $ES_\alpha^t$ obtained through the GARCH-Gaussian and GARCH-Student-t models behave similarly for the FIX and IPC series.

The GARCH-Gaussian model adequately estimates $VaR_\alpha^t$ only at relatively low confidence levels (α = 0.95). This is reflected in two aspects. The first is a p-value greater than our significance level in the corresponding binomial test. The second is that the observed violations of these quantiles are approximately equal to the expected ones.

For larger confidence levels, the $VaR_\alpha^t$ estimates obtained through the GARCH-Gaussian model are very poor: they strongly underestimate the true value of the quantiles of both time series. This can be seen in the p-values of the binomial tests and in the high number of violations of the corresponding quantiles. In contrast, the GARCH-Student-t model showed a clear tendency to overestimate $VaR_\alpha^t$ at relatively low confidence levels (α ≤ 0.99). This can be seen in the low number of violations of these quantiles. However, at higher confidence levels, this model provided quantile estimates consistent with the history of both time series.

In the case of $ES_\alpha^t$, the GARCH-Gaussian model provides very poor estimates at all the confidence levels considered. This is shown by the results of our zero-mean bootstrap tests: at every confidence level considered, the hypothesis that the excess residual samples have a mean statistically equal to zero is strongly rejected.

These results, in conjunction with those of Table 7, reflect that the tail of the conditional distribution of our Mexican financial time series is considerably heavier than that of GARCH-type models with Gaussian innovations. Consequently, quantifying the market risk of this type of series using processes with normal innovations (conditional normality) is a poor alternative: the probability they assign to extreme events turns out to be far below what is observed in reality.

On the other hand, the GARCH-Student-t models provided good $ES_\alpha^t$ estimates at high confidence levels (α > 0.99). This is reflected in the results of the corresponding bootstrap tests: in these cases, the hypothesis that the excess residual samples have a mean statistically equal to zero is not rejected.

However, because these models significantly overestimated the true value of $VaR_\alpha^t$ at low confidence levels, the $ES_\alpha^t$ estimates at such confidence levels are poor, regardless of what is observed in the $ES_\alpha^t$ backtests. To see this, recall that $ES_\alpha^t$ is defined as "the expected loss when $VaR_\alpha^t$ is exceeded"; therefore, an estimation error in $VaR_\alpha^t$ immediately affects any estimate of $ES_\alpha^t$.

Finally, our backtests reveal that the GARCH-GPD model was the only one capable of delivering $VaR_\alpha^t$ and $ES_\alpha^t$ estimates consistent with what was historically observed in both time series, at all the confidence levels considered. This can be seen in the results of the binomial and bootstrap tests: in all cases, the binomial-behavior and zero-mean hypotheses were not rejected. In consequence, this model was the only one able to properly estimate the heaviness of the right tail of the conditional loss distribution of both Mexican time series.

4. Conclusions

Our work shows that the GARCH-GPD model was the only one capable of delivering $VaR_\alpha^t$ and $ES_\alpha^t$ estimates consistent with the history of both time series, at all the confidence levels considered. This means that this model was the only one that properly estimated the heaviness of the right tail of the conditional loss distribution of both Mexican time series.

This is particularly relevant in the case of Expected Shortfall: through the applied zero-mean bootstrap tests, our paper presents the first evidence of this kind indicating that the GARCH-GPD model satisfactorily estimates such risk measure, at high confidence levels, for Mexican financial time series. Moreover, our analysis also revealed that the GARCH-Normal (at any confidence level) and GARCH-Student-t (at confidence levels below 99%) models perform poorly when estimating such risk measure in Mexican series.

In addition, our two case studies showed that the GARCH-GPD hybrid model is able to provide reasonable estimates of $VaR_\alpha^t$ and $ES_\alpha^t$ both in times of crisis, when large-scale losses occur in clusters over a considerable period of time, and in times of relative calm in the markets.

Indeed, our backtests were long enough to test the GARCH-GPD model in both scenarios. Specifically, they contained in full the 2008 financial crisis and the high-volatility period in the FIX exchange rate that occurred in late 2015 and early 2016 (which was strongly related to the worldwide drop in oil prices in that period and to the uncertainty about the monetary policy decisions of the central bank of the United States).

Regarding similar studies on the subject, our results on the $VaR_\alpha^t$ estimates obtained from a GARCH-GPD model are in accordance with what Kourouma, L. et al. (2011) (for α ∈ {0.95, 0.99, 0.995}), Fernández, V. P. (2003) (for α ∈ {0.99, 0.995}), and Aguirre, A. I. et al. (2013) (for α = 0.95) found in their respective analyses. However, focusing on relatively small confidence levels (α = 0.95), they contrast with the particular finding of Fernández, V. P. (2003), who found that the GARCH-GPD model might yield poor $VaR_\alpha^t$ estimates.

For Expected Shortfall, our findings contrast with those of Kourouma, L. et al. (2011) in the international landscape, whose tests suggest that $ES_\alpha^t$ estimates based on a GARCH-GPD model might overestimate the value of such risk measure.

In general, the success of the GARCH-GPD model both in times of high volatility and in those of relative calm may be attributed to the flexibility of the GPD in modeling the tail of a distribution: through the sign and magnitude of its shape parameter, it can appropriately model heavy, light, and even short (bounded) tails.

In this way, the above characteristic allows the GARCH-GPD model to adjust its estimate of the heaviness of the tail of the innovation distribution properly and in a timely manner. Moreover, this feature seems to end up having the same effect on the tail of the conditional loss distribution. In other words, the GPD gives the model sufficient flexibility to adapt to periods where both large losses (associated with higher-magnitude innovations) and moderate losses (coupled with rather modest innovations) occur.

On the other hand, our work also confirmed that the threshold selection method given by the simulation study of McNeil, A. J. and Frey, R. (2000) is applicable to Mexican financial time series. This reveals an interesting empirical fact about Mexican financial series: when filtering their volatility through a GARCH model, it is possible to consider that the tail of the innovation distribution associated with such a model is constituted by approximately the largest 10% of the observations in the sample of innovations.

The facts that: i) the GARCH-GPD model makes up a good tool to quantify market risk in Mexican financial series; and ii) such a model has shown better performance than the GARCH-Normal and GARCH-Student-t models, have important implications for financial institutions with exposure to Mexican series: they yield a methodology that could make their market risk estimates more accurate. That is, a methodology that could bring their regulatory capital closer to their economic capital.

However, the results presented in this paper are only a first step toward verifying that such an objective is achievable through this methodology. Indeed, further analysis of the effectiveness of this model in describing Mexican series' returns should be made in the future. In particular, one could investigate whether the results found in our application hold for Mexican financial returns of lower frequencies, such as 5 or 10 days. This mainly stems from the fact that the Basel Committee requires banks to calculate market risk capital requirements based on their 10-day VaR, at 99% confidence.

Thus, an analysis such as the one proposed would make it possible to know whether Mexican banks have enough incentives to follow best practices in market risk management: one could directly calculate the capital requirements that institutions must satisfy and investigate their behavior under a risk-sensitive methodology, such as the one used in this paper.

Nevertheless, when performing an exercise such as the one described above, one should not overlook the Expected Shortfall. In the future, banking institutions' capital requirements may become more dependent on such risk measure, as set out in the document Minimum Capital Requirements for Market Risk, published by the Basel Committee in January 2016.11

Additionally, future studies should explore the behavior of market risk measures for Mexican series in a multivariate context. That is, they should take into account the dependence between the returns of several Mexican assets, in both the short and long terms.

In this regard, it is our belief that such an analysis should also be based on EVT: traditional dependence measures, such as the Pearson correlation coefficient, are based on deviations from the mean and give the same weight to extreme observations as to the rest of them. Moreover, most such dependence measures only consider linear dependence between random variables.

Instead, the dependence analysis of this class of data could be based on the use of copulas. In particular, one could investigate which copula allows a better estimation of conditional risk measures for a portfolio composed of the series used in this paper (or other Mexican series), when its marginal distributions are modeled, in each period, by a GARCH-GPD model (or another, more general model that adequately describes such distributions).

An analysis like the previous one should also be carried out on Mexican return series of different frequencies, such as 1, 5, or 10 banking days. In the latter case, one could investigate the behavior of the market risk capital requirements of a financial institution in a more realistic context.


*.

fn12The results presented in this paper are part of the author's bachelor's thesis, which was supervised by Dr. María Asunción Begoña Fernández Fernández. The author greatly thanks Dr. Begoña for her invaluable comments and support throughout the elaboration of this work. The author also thanks two anonymous referees for helpful comments and observations on a previous version of this paper.


1.

fn1From a mathematical point of view, VaR is a quantile of the loss distribution, X, of an asset, whereas Expected Shortfall is given by E[X | X > VaR]. Regarding the interpretation of these risk measures, VaR can be thought of as the loss that is not exceeded with a high probability (the so-called confidence level), while Expected Shortfall represents the expected loss given that the loss X exceeds VaR.


2.

fn2EWMA is a more informal volatility forecasting technique popularized in J.P. Morgan's RiskMetrics. Specifically, EWMA is an exponential smoothing of the volatility (it weights the data from the most recent to the most distant with a sequence of exponentially decreasing weights that sum to almost one) of the form $P_{(t+1)}Y_{(t+1)}^{2}=\lambda Y_{t}^{2}+(1-\lambda)P_{(t-1)}Y_{t}^{2}$. Thus, EWMA is essentially a recursive scheme where the prediction at time t is obtained from the prediction at time t - 1. The choice of the parameter λ is subjective; the smaller its value, the less weight is put on the most recent information. In Section 2, we discuss an approach that suggests how to select this value based on a GARCH specification. In this regard, more information about this technique and its relationship with GARCH models is available in Sections 4.2.4 and 4.4.1 of McNeil, A. J. et al. (2005).
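As an illustration, a minimal sketch of this recursion is given below, assuming the footnote's convention in which λ weights the most recent squared return (so the familiar RiskMetrics daily value of 0.94 on the previous prediction corresponds to λ ≈ 0.06 here); the sample data are hypothetical.

```python
import numpy as np

def ewma_variance_forecast(returns, lam=0.06):
    # Recursion from the footnote: the new prediction mixes the latest
    # squared return (weight lam) with the previous prediction (weight 1 - lam).
    pred = returns[0] ** 2  # a simple, admittedly arbitrary initialization
    for y in returns[1:]:
        pred = lam * y ** 2 + (1.0 - lam) * pred
    return pred

rng = np.random.default_rng(7)
sample = rng.normal(scale=0.01, size=500)  # stand-in for daily log-returns
print(f"one-day-ahead variance forecast: {ewma_variance_forecast(sample):.2e}")
```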


3.

fn3Given that risk management typically focuses on analyzing financial losses, it is typical in risk measurement studies to associate a positive sign with losses and a negative one with gains. This is done for both practical (e.g., working with positive random variables) and theoretical (e.g., facilitating the application of certain statistical techniques to loss data) reasons. Our paper follows this practice and associates a positive sign with Mexican peso depreciations and IPC losses.


4.

fn4There are many possible choices for the series dynamics (e.g., ARCH, HARCH processes); however, our choice of the GARCH(1,1) is motivated by the capacity of such a model to parsimoniously reproduce some of the main stylized facts (see Section 4.1.1 of McNeil, A. J. et al. (2005)) of financial return series. Furthermore, studies like Hansen, P. R. and Lunde, A. (2005) have shown that such models are able to deliver accurate short-term volatility forecasts for series of financial returns. Evidence of the goodness of fit that this model delivers for our series is shown in Section 3.


5.

fn5In fact, as our goal is to evaluate risk measure estimates at high confidence levels (e.g., α = 0.999), our backtests have to use several years of daily information in order to observe losses that occur with the corresponding low probabilities (e.g., 0.001).


6.

fn6Indeed, works like Martínez, J. (2014) and ours show that, for certain Mexican financial series, the value of the sum $\hat{\alpha}_1 + \hat{\beta}_1$ is typically close to 1.


7.

fn7For interested readers who are not familiar with Quasi-Maximum Likelihood estimation procedures, we recommend reading Chapter 4 of McNeil, A. J. et al. (2005).


8.

fn8For the reader interested in the details of the construction of the maximum likelihood estimators of the parameters of the GPD, we recommend reading Section 6.5.1 of Embrechts, P. et al. (1997).


9.

fn9Threshold selection is a problem that typically involves the analysis of several numerical, graphical, and model adequacy tests (e.g., investigating the stability of the shape parameter of the GPD along a certain range of thresholds) that are not discussed here for brevity. We just show the results for the thresholds that best satisfied our criteria. Notwithstanding, the interested reader can find more information about threshold selection analysis for Mexican financial time series in Campa, M. A. (2001).


10.

fn10For instance, for the $VaR_{0.995}^t$ we expect to see, on average, 0.005 × 4,060 ≈ 20 violations of such quantile. This means that we would have approximately just 20 observations to perform the corresponding binomial and bootstrap tests to evaluate the quality of the estimates of our risk measures.


11.

fn11Such document is available at http://www.bis.org/bcbs/publ/d352.pdf.

