Auriscon



Demo Under Construction



Disclaimer: Data, charts and commentary displayed are for information purposes only. Auriscon Ltd assumes no liability.

Regularization methods

Applying regularization techniques

 


UNDER CONSTRUCTION …


Demo VAR Method


Vector Autoregression (VAR) - An Outline

Estimation and Analysis of VAR Models


 

Disclaimer: Data, charts and commentary displayed are for information purposes only; no obligation or guarantee is assumed.

 


Overview

  • The VAR methodology

  • Simulation of VAR processes

  • Estimation of VAR Models

  • Diagnostics of VAR Models


The VAR methodology

Vector Autoregressive (VAR) models are an important tool for analysing macroeconomic and financial data, and are particularly useful for analysing the dependence and dynamics of variables. VAR models explain a group of endogenous (to-be-explained) variables solely from their common history of data. Using a VAR permits building a time series model of a group of variables without having to specify a theoretical economic model to explain the relationships among the variables.

The simplest VAR model, with one lagged value of each endogenous variable, is the VAR(1) model

\[y_{1,t} = a_{11} \cdot y_{1,t-1} + a_{12} \cdot y_{2,t-1} + \epsilon_{1,t}\] \[y_{2,t} = a_{21} \cdot y_{1,t-1} + a_{22} \cdot y_{2,t-1} + \epsilon_{2,t}\]

VAR(1) model in matrix notation

\[\begin{pmatrix} y_{1,t} \\ y_{2,t} \end{pmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \end{pmatrix} + \begin{pmatrix} \epsilon_{1,t} \\\epsilon_{2,t}\end{pmatrix}\]

A VAR model of order p, VAR(p), contains for each variable in the vector on the left-hand side of the equation system p lagged variables on the right-hand side. Any coefficient magnitude close to 1 is noteworthy, as it indicates persistence of shocks, or even non-stationarity. The VAR(p) model represented in compact matrix notation is

\[Y_{t} = \boldsymbol{A}_{1} \cdot Y_{t-1} + \boldsymbol{A}_{2} \cdot Y_{t-2} + ... + \boldsymbol{A}_{p} \cdot Y_{t-p} + \epsilon_{t}\]

It is important to note that the VAR methodology does not explain causal dependencies between economic variables, but rather provides insights into their dynamic relationships. The latter is conceptualised by means of Granger causality: \(y_{2}\) is not Granger causal for \(y_{1}\) if the lagged value \(y_{2,t-1}\) has no impact on the current value \(y_{1,t}\). Consequently, if the hypothesis \(a_{12} = 0\) is rejected, \(y_{2}\) is Granger causal for \(y_{1}\).
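For illustration, such a Granger causality test can be run in R with the vars package (the package by Pfaff referenced below); the data set and fitted model here are hypothetical placeholders, a minimal sketch rather than the calculation behind this article:

    library(vars)

    # Hypothetical bivariate data set; in practice y1 and y2 hold the observed series
    set.seed(1)
    y <- data.frame(y1 = rnorm(200), y2 = rnorm(200))

    fit <- VAR(y, p = 1, type = "const")   # reduced-form VAR(1) with a constant

    # H0: y2 does not Granger-cause y1, i.e. a12 = 0 in the notation above
    causality(fit, cause = "y2")$Granger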


Stationarity
  • The estimation of VAR model parameters requires stationarity of the variables, e.g. for a VAR model with endogenous variables only (no exogenous variables included) the left-hand side variable vector should be covariance-stationary in all variables.
  • Precisely, a stationary VAR model generates stationary time series for each of the endogenous variables, with time-invariant mean, variance and covariance. A simple approach to verify stationarity is to visually inspect the fluctuations of each endogenous variable around its mean.
  • The stationarity condition of a VAR(p) model with K endogenous variables can be checked by evaluating the characteristic polynomial. Specifically, the VAR(p) is stable if the roots of the characteristic polynomial lie outside the unit circle. \[det (\boldsymbol{I}_{K} - \boldsymbol{A}_{1} \cdot z - ... - \boldsymbol{A}_{p} \cdot z^{p} ) = 0\]
  • According to Pfaff, the stationarity of a VAR(p) model is verified in practice by calculating the eigenvalues of the coefficient matrix A associated with the stacked VAR(1) representation of the VAR(p) model (see the sketch after this list). If the moduli of the eigenvalues are less than one, the VAR(p) process is stable. By contrast, eigenvalues with modulus greater than or equal to one indicate non-stationarity of the variables.

\[V_{t} = \boldsymbol{A} \cdot V_{t-1} \] \[\boldsymbol{A} = \begin{bmatrix} A_{1} & A_{2} & ... & A_{p-1} & A_{p} \\ I & 0 & ... & 0 & 0 \\ 0 & I & ... & 0 & 0 \\ ... & ... & ... & ... & ... \\ 0 & 0 & ... & I & 0 \end{bmatrix} \]

  • Another test used to determine stationarity is the Augmented Dickey-Fuller (ADF) test.
  • Should a time series be non-stationary, as evidenced by unit roots, differencing the data permits transforming a non-stationary into a stationary time series. However, in the case of unit roots and cointegration, an Error Correction Model needs to be applied.
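A minimal sketch of the eigenvalue check referenced in the list above, building the companion matrix of a bivariate VAR(2) in base R; the coefficient matrices are illustrative assumptions:

    # Illustrative coefficient matrices of a bivariate VAR(2)
    A1 <- matrix(c(0.5, -0.2, 0.2, -0.5), 2, 2)   # columns hold (a11, a21) and (a12, a22)
    A2 <- matrix(c(-0.3, -0.1, -0.7, 0.3), 2, 2)
    K  <- 2

    # Companion matrix of the stacked VAR(1) representation
    A <- rbind(cbind(A1, A2),
               cbind(diag(K), matrix(0, K, K)))

    Mod(eigen(A)$values)   # the VAR(2) is stable if all moduli are < 1

An ADF unit-root test on an individual series is available, for instance, via urca::ur.df().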

Estimation of VAR Models

The estimation of VAR models involves several aspects that require consideration.

  • The lag length p of the VAR(p) model needs to be determined. An information criterion such as that of Akaike (AIC) or Schwarz (BIC) should be applied to select an appropriate lag length.
  • To balance the number of endogenous variables included in the model against the history of data available (note that the number of coefficients to be estimated grows with the square of the number of variables).
  • To transform raw variables, e.g. using logs to ensure constancy of variances and using rates or differences to ensure stationarity.
  • To include deterministic terms, e.g. a constant, unless the time series have been de-meaned.

Error terms and diagnostic testing
  • Due to the least squares estimation method, requirements on the error terms exist. Specifically, the expectation of each error term must satisfy \(E[\epsilon_{i}] = 0\) (this requirement is met if the time series have mean zero; the equations must be expanded by a constant should the empirical mean be non-zero).
  • Furthermore, autocorrelation (= serial correlation) of the error terms is not permitted, \(E[\epsilon_{i,t1} \cdot \epsilon_{i,t2}] = 0\) (autocorrelation of error terms typically leads to biased estimators).
  • The error terms \(\epsilon_{i}\) may be contemporaneously correlated, however, with the dependency structure represented by a covariance matrix \(\Omega\). Typically, error terms are correlated where exogenous shocks impact several variables simultaneously. This occurs in situations where variables are not fully explained by their own past.

Impulse-Response Function

To evaluate the response of the VAR system to shocks in any of the variables, the Impulse-Response Function (IRF) approach is used. The IRF approach is helpful since the coefficients of the VAR model by themselves tell little about the dynamics. Specifically, a variable's residual is given a unit shock at an initial time, and the shocked VAR dynamics are compared to the VAR dynamics without any shock. This is best illustrated with a VAR(1) model
\[y_{t} = \boldsymbol{A} \cdot y_{t-1} + \epsilon_{t}\] Using this setting, the impact of shocking one variable can be observed: at time \(t = 1\) variable \(y_{1}\) is shocked by \(\epsilon_{1} = 1\) while variable \(y_{2}\) is not shocked, as specified by \(\epsilon_{2} = 0\).
At time \(t = 2\) the impact on variable \(y_{1}\) is \(y_{1,2} = a_{11} \cdot 1\) and the impact on variable \(y_{2}\) is \(y_{2,2} = a_{21} \cdot 1\). The approach is repeated for subsequent time periods.
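A sketch of the impulse-response computation with vars::irf; the data and fit are hypothetical placeholders, and ortho = FALSE requests the plain unit-shock responses described above:

    library(vars)

    set.seed(1)
    y   <- data.frame(y1 = rnorm(200), y2 = rnorm(200))   # placeholder data
    fit <- VAR(y, p = 1, type = "const")

    # Responses of y1 and y2 to a unit shock in y1, traced 10 periods ahead
    ir <- irf(fit, impulse = "y1", response = c("y1", "y2"),
              n.ahead = 10, ortho = FALSE, boot = TRUE, runs = 100)
    plot(ir)   # impulse responses with bootstrap confidence bands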


Structural VAR (SVAR)

Contemporaneous variables that appear on the right-hand side (RHS) of the set of equations indicate contemporaneous feedback terms. This is illustrated below with two endogenous variables, where each variable impacts the other contemporaneously.

\[y_{1,t} = a_{11} \cdot y_{1,t-1} + a_{12} \cdot y_{2,t-1} + b_{12} \cdot y_{2,t} + \epsilon_{1,t}\] \[y_{2,t} = a_{21} \cdot y_{1,t-1} + a_{22} \cdot y_{2,t-1} + b_{21} \cdot y_{1,t} + \epsilon_{2,t}\]

The contemporaneous terms can be taken over to the left-hand side, and the standard VAR form is obtained after multiplying by the inverse coefficient matrix.

\[\begin{pmatrix} y_{1,t} \\ y_{2,t} \end{pmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \end{pmatrix} + \begin{bmatrix} 0 & b_{12} \\ b_{21} & 0 \end{bmatrix} \begin{pmatrix} y_{1,t} \\ y_{2,t} \end{pmatrix} + \begin{pmatrix} \epsilon_{1,t} \\\epsilon_{2,t}\end{pmatrix}\]

To ensure the model can be estimated, restrictions need to be placed on the coefficients of the contemporaneous terms of a SVAR(1) model, e.g. one coefficient has to be set to 0 to ensure a valid definition of the LHS.
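In the vars package, a structural VAR can be estimated from a fitted reduced-form model by imposing exactly such a zero restriction on the contemporaneous coefficient matrix; entries set to NA are estimated. A sketch, reusing the hypothetical fitted object fit from the sketches above:

    library(vars)

    # A-model: the matrix picks up the contemporaneous terms; NA entries are estimated.
    # The zero in the upper-right corner is the identifying restriction (b12 = 0).
    amat <- matrix(c(1,  0,
                     NA, 1), nrow = 2, byrow = TRUE)

    svar_fit <- SVAR(fit, estmethod = "direct", Amat = amat)
    svar_fit$A   # estimated contemporaneous coefficient matrix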


VAR with exogenous variables (VARX)

Extensions of the VAR are implemented by including deterministic terms such as constants and trends, and by including exogenous variables; the latter extension is denoted VARX. A system of VAR equations containing a vector of exogenous (= not explained) variables X is

\[y_{t} = \textbf{A} \cdot y_{t-1} + \boldsymbol{B} \cdot X_{t} + \epsilon_{t}\]
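In the vars package, exogenous regressors enter the estimation through the exogen argument; a minimal sketch with hypothetical data:

    library(vars)

    set.seed(1)
    y <- data.frame(y1 = rnorm(200), y2 = rnorm(200))   # placeholder endogenous data
    x <- matrix(rnorm(200), ncol = 1,
                dimnames = list(NULL, "x"))             # hypothetical exogenous variable

    fitx <- VAR(y, p = 1, type = "const", exogen = x)   # a VARX(1) with constant
    Bcoef(fitx)   # coefficient matrix, including the loadings on x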

Vector Error Correction Models (VECM)

If the variables making up the left-hand side vector y (and any exogenous variables) are non-stationary, the VAR estimation approach is no longer valid. Estimation is nonetheless possible using the Vector Error Correction Model (VECM), provided the endogenous variables in y are difference-stationary. Simply using the first differences of I(1) integrated variables in a VAR model may seem a plausible approach, but it would disregard the long-run relationship of the variables, i.e. the long-run responses of the variables to shocks in each variable. Therefore, a VECM has to be used, adding a lagged error-correction term to the VAR model to capture long-run tendencies. An example of a long-run relationship may be found in the dividend/price ratio of equity shares, where the ratio is low and the price is high during bubbles, with reversion towards a long-run relationship. Another example often cited in the literature is Purchasing Power Parity (PPP), which states a convergence towards the law of one price when comparing the prices of goods and services expressed in foreign and domestic currency.
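A sketch of the corresponding cointegration workflow with the urca package (also by Pfaff): the Johansen procedure tests for the cointegration rank, and the estimated VECM can be converted back to its level-VAR form. The random-walk data and the rank r = 1 are purely illustrative:

    library(urca)
    library(vars)

    set.seed(1)
    y <- data.frame(y1 = cumsum(rnorm(200)),   # illustrative I(1) series
                    y2 = cumsum(rnorm(200)))

    vecm <- ca.jo(y, type = "trace", ecdet = "const", K = 2)   # Johansen trace test
    summary(vecm)                    # compare test statistics against critical values

    var_level <- vec2var(vecm, r = 1)   # convert to level-VAR, assuming rank r = 1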


 

Simulation of VAR processes

 

Given the specification of a VAR model, data can be simulated by sampling from the distribution of the error terms.

 

Simulating a VAR(1) Model

  • The simulation of the bivariate VAR(1) process with correlated normal error terms has been executed using 200 data points. The correlation of the error terms is specified by the following covariance matrix.

\[\boldsymbol{\Sigma} = \begin{pmatrix} 1.0 & 0.5 \\ 0.5 & 1.0 \end{pmatrix}\]

  • The VAR(1) model variables are stationary, given that the eigenvalues 0.9772002 and 0.1227998 obtained from the coefficient matrix are less than one.

\[\begin{pmatrix} y_{1t} \\ y_{2t} \end{pmatrix} = \begin{pmatrix} -0.7 \\1.3\end{pmatrix} + \begin{bmatrix} 0.7 & 0.2 \\ 0.2 & 0.7 \end{bmatrix} \begin{pmatrix} y_{1t-1} \\ y_{2t-1} \end{pmatrix} + \begin{pmatrix} \epsilon_{1t} \\\epsilon_{2t}\end{pmatrix}\]
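A sketch reproducing this simulation in R, drawing the correlated errors with MASS::mvrnorm; the seed and the zero starting values are our own choices:

    library(MASS)   # for mvrnorm

    set.seed(42)
    n     <- 200
    c0    <- c(-0.7, 1.3)                           # constant vector
    A1    <- matrix(c(0.7, 0.2, 0.2, 0.7), 2, 2)    # coefficient matrix
    Sigma <- matrix(c(1.0, 0.5, 0.5, 1.0), 2, 2)    # error covariance

    eps <- mvrnorm(n, mu = c(0, 0), Sigma = Sigma)  # correlated normal errors
    y   <- matrix(0, n, 2, dimnames = list(NULL, c("y1", "y2")))
    for (t in 2:n) {
      y[t, ] <- c0 + A1 %*% y[t - 1, ] + eps[t, ]
    }
    matplot(y, type = "l", lty = 1, ylab = "y1, y2")

The VAR(2) simulation below follows the same recursion, with a second coefficient matrix applied to y[t - 2, ].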

 

Simulating a VAR(2) Model

  • The simulation of the bivariate VAR(2) process with uncorrelated standard normal error terms has been executed using 200 data points.

\[\begin{pmatrix} y_{1,t} \\ y_{2,t} \end{pmatrix} = \begin{pmatrix} 5.0 \\ 10.0 \end{pmatrix} + \begin{bmatrix} 0.5 & 0.2 \\ -0.2 & -0.5 \end{bmatrix} \begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \end{pmatrix} + \begin{bmatrix} -0.3 & -0.7 \\ -0.1 & 0.3 \end{bmatrix} \begin{pmatrix} y_{1,t-2} \\ y_{2,t-2} \end{pmatrix} + \begin{pmatrix} \epsilon_{1,t} \\\epsilon_{2,t}\end{pmatrix}\]

 


 

Estimation of VAR Models

Estimating a VAR(1) Model

  • We estimate a VAR(1) model using the 200 simulated data points of the bivariate VAR(1) specified above.

  • For the selection of the lag order, i.e. the optimal lag length, the Akaike (AIC) and the Schwarz (SC) information criteria are compared for varying lag orders (cf. Table 1). Comparing the AIC and SC values, we observe the minimum information value at a lag length of 1. Note that the minimum of an information criterion represents an optimal balance between minimising the residual variance and limiting the number of parameters.

  • The coefficients of the estimated VAR(1) model are shown together with the associated statistics in Tables 2 and 3 (a sketch of the corresponding computation follows after the tables).

TABLE 1: AIC and SC information criteria for various lag orders

Lag        1       2       3       4       5       6       7       8
AIC(n)  -2.082  -2.048  -2.040  -2.016  -1.981  -1.955  -1.951  -1.973
HQ(n)   -2.041  -1.979  -1.950  -1.893  -1.830  -1.777  -1.745  -1.740
SC(n)   -1.980  -1.878  -1.800  -1.711  -1.608  -1.514  -1.442  -1.396
FPE(n)   0.125   0.129   0.130   0.133   0.138   0.142   0.142   0.139

TABLE 2: Coefficient statistics of variable y1

        Estimate  Std. Error  t value  Pr(>|t|)
y1.l1      0.762       0.074    10.33     0.000
y2.l1     -0.338       0.103    -3.27     0.001

TABLE 3: Coefficient statistics of variable y2

        Estimate  Std. Error  t value  Pr(>|t|)
y1.l1     -0.207       0.041    -5.08     0.000
y2.l1      0.714       0.057    12.49     0.000
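Tables 1 to 3 match the type of output produced by VARselect() and summary(VAR(...)) in the vars package; a sketch, assuming the simulated VAR(1) data from above are held in the matrix y:

    library(vars)

    # Lag-order selection: AIC, HQ, SC and FPE for lag orders 1..8 (cf. Table 1)
    VARselect(y, lag.max = 8, type = "const")$criteria

    # Estimate the VAR(1) and inspect the coefficient statistics (cf. Tables 2 and 3)
    fit <- VAR(y, p = 1, type = "const")
    summary(fit)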

 

Diagnostics of VAR Models

 

Diagnostical Testing of VAR(2)

  • We first fit the following VAR(2) model to the 200 data points obtained from simulating the bivariate VAR(2) model specified above.

 

\[Y_{t} = \boldsymbol{C} + \boldsymbol{A}_{1} \cdot Y_{t-1} + \boldsymbol{A}_{2} \cdot Y_{t-2}+ \boldsymbol{\epsilon}_{t}\]

\[\boldsymbol{C} = \begin{pmatrix} 4.449 \\ 10.275 \end{pmatrix}, \boldsymbol{A}_{1} = \begin{bmatrix} 0.496 & 0.206 \\ -0.241 & -0.512 \end{bmatrix}, \boldsymbol{A}_{2} = \begin{bmatrix} -0.272 & -0.642 \\ -0.122 & 0.292 \end{bmatrix}\]

 

  • The residuals of variable y1 are plotted together with their empirical distribution and autocorrelation functions (only y1 is displayed). A basic assumption of the VAR model is that the residuals exhibit no serial autocorrelation. Therefore, the lag order of the estimated model should be sufficient to ensure the absence of autocorrelation.

  • Autocorrelation in the residuals is tested using the asymptotic Portmanteau test, with a test statistic (p-value) of 20.264 (0.682) obtained.

  • Heteroskedasticity in the residuals is tested using the ARCH test, with a test statistic (p-value) of 48.377 (0.338) obtained.

  • Normality of the residuals is tested by applying the Jarque-Bera test to the residuals of each variable. For variable y1 the test statistic (p-value) obtained is 2.497 (0.287); for variable y2 it is 0.47 (0.79). Non-normality of residuals may be caused by outliers, which could indicate a misspecification of the VAR model.

  • Skewness and kurtosis of the residuals are tested on a multivariate basis, with test statistic values (p-values) of 2.662 (0.264) and 0.581 (0.748) obtained.

  • The stationarity condition is tested based on the eigenvalues of the stacked coefficient matrices (cf. Table 5; a sketch of the corresponding tests follows after the table).

TABLE 5: Moduli of the companion matrix eigenvalues (roots)

       1st root  2nd root  3rd root  4th root
value      0.83      0.60     0.521     0.521
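The diagnostics reported above correspond to the tests provided by the vars package; a sketch, assuming the simulated VAR(2) data are held in a matrix y2sim with columns y1 and y2:

    library(vars)

    fit2 <- VAR(y2sim, p = 2, type = "const")

    serial.test(fit2, lags.pt = 16, type = "PT.asymptotic")  # Portmanteau test for residual autocorrelation
    arch.test(fit2, lags.multi = 5)    # multivariate ARCH test for heteroskedasticity
    normality.test(fit2)               # Jarque-Bera, skewness and kurtosis tests
    roots(fit2)                        # eigenvalue moduli of the companion matrix (cf. Table 5)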

Demo IMA Approach


Market Risk Internal Model Approach (IMA)

A selected collection of requirements and approaches

Disclaimer: Data, charts and commentary displayed are for information purposes only and do not constitute advice.


Overview

  • The Regulatory Context associated with Market Risk

  • Basel IV - The Changes resulting from the FRTB


 

The Regulatory Context associated with Market Risk

 

Basel 2.5 to 3 - The Outline of Regulatory Requirements

The Internal Model Approach (IMA) allows banking institutions to use their own risk models for capital calculations, conditional on the approval of the supervisory authority. The specifics of the Value-at-Risk methodology used until Basel 3 vary among the models devised by the banks. However, certain criteria must be met for the Value-at-Risk, which is defined implicitly by

 

\[P(L > VaR_{\alpha}(L)) = 1-\alpha\]

  • computed daily at a 99% confidence level for a 10-day holding period, with data sampled over at least one year;

  • all relevant market risk factors must be captured in the risk model, and for options the non-linear effects of risk factors must be included;

  • the capital multiplier is determined in the range of 3-4 based on the model's backtesting performance;
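As a toy numerical illustration of the quantile definition above: a 99% 1-day VaR read off a vector of simulated daily P&L, scaled to the 10-day horizon with the square-root-of-time rule; the P&L figures are synthetic:

    set.seed(7)
    pnl <- rnorm(250, mean = 0, sd = 1e6)         # synthetic daily P&L over one trading year

    alpha   <- 0.99
    var_1d  <- -quantile(pnl, probs = 1 - alpha)  # loss exceeded on 1% of days
    var_10d <- sqrt(10) * var_1d                  # square-root-of-time scaling to 10 days
    c(var_1d, var_10d)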

 

Basel requirements on VaR and SVaR methodologies, operation and reporting are implemented and set out under EU law in the Capital Requirements Regulation (CRR). Besides specifying methodological aspects and backtesting, the requirements also place specific demands on data inputs and data quality, as well as on capital charges pertaining to specific risk categories. Selected details of the requirements are listed under the following headings

  • The Requirements of VaR and SVaR Methodologies

  • The Backtesting Requirements

  • The Requirements of the IRC Methodology

  • The Requirements on the Risks Not in the Model

 

The key changes introduced by the Basel III initiative to add to Basel 2.5 include

  1. The re-definition of Tier 1 Capital to emphasise the importance of common shares and retained earnings as capital components;

  2. The introduction of a Leverage Ratio (LR) to prevent the build-up of excessive leverage and, in turn, any rapid deleveraging and downward trend in asset prices as a result thereof; \[LR = \frac{\text{Tier 1 Capital}}{\text{Total Assets}}\]

  3. The introduction of a Liquidity Coverage Ratio (LCR) to ensure sufficient liquid assets are maintained over a stressed regulatory scenario of 30 days such that the LCR remains greater than 100%; \[LCR = \frac{\text{Liquid Assets}}{\text{Net Cash Outflow}}\]

  4. The introduction of a Net Stable Funding Ratio (NSFR) to maintain stable funding sources within a one-year period such that the NSFR remains greater than 100%; \[NSFR = \frac{\text{Funding available}}{\text{Funding required}}\]

 

Basel 4 - The Outline of Regulatory Requirements

The Basel Committee on Banking Supervision (BCBS) published the minimum capital requirements for market risk in January 2019 (rev. February 2019). This final-rule regulatory text - commonly referred to as the Fundamental Review of the Trading Book (FRTB) - replaces the previous minimum capital requirements for market risk. The FRTB amendments to the regulation are implemented in the EU in the revised Capital Requirements Regulation (CRR 2).

The revision pertains at the top level to the Standardized Approach (SA) and the Internal Model Approach (IMA). A key component of the SA is a sensitivity-based capital calculation.

Key components of the IMA comprise the use of a liquidity-adjusted Expected Shortfall (ES) calculation, the classification of risk factors as modellable or non-modellable (NMRF) and the replacement of the Incremental Risk Charge by the Default Risk Charge (DRC).

Another enhancement incorporated under IMA and SA concerns the varying market liquidity associated with different risk factors (asset classes). Based on the assignment of liquidity horizons to risk factors, a scaling of capital requirements in line with the risk of illiquidity is achieved, and the capital charge is reported for varying liquidity horizons.

Illustrated in the graph below is a comparison of changes introduced by the FRTB under the Internal Model Approach (IMA).

 

Basel IV - The Changes resulting from the FRTB

The Basel Committee on Banking Supervision (BCBS) published the Fundamental Review of the Trading Book (FRTB) final rule to be incorporated under Basel IV. Implementations of Basel IV are performed by regulatory authorities such as the FED, ECB/EBA and BOE/PRA, with a deadline of January 2022.


Shown in the graph below are the components of the FRTB, split by Standardized Approach (SA) and Internal Model Approach (IMA).

 

The key changes introduced by the Fundamental Review of the Trading Book (FRTB) include

  1. The separate computation of the Standardized Approach (SA) capital charge for each trading desk. The separate computation under SA must be performed regardless of any computation already performed under IMA; this parallel computation is mandatory to ensure a fall-back is available. The SA charge is computed along the key components involved: the sensitivity-based charge, the Residual Risk Add-On (RRAO) and the Default Risk Charge (DRC).
  • The Sensitivity-Based Approach (SBA) accounts for linear and non-linear risk sensitivities and uses prescribed risk weights and correlation parameters.

  • The Residual Risk Add-On (RRAO) as a component within the standardized approach accounts for residual risks not captured by the sensitivity-based charge.

  • The Default Risk Charge (DRC) as a component within the standardized approach accounts for the credit risk in the trading book and uses prescribed risk weights and Basel parameters.

  2. The introduction of the Expected Shortfall (ES) methodology as a replacement for VaR and SVaR;
  • The Expected Shortfall metric, sometimes also referred to as conditional VaR, is defined as the expected loss given that the loss exceeds the VaR at a 97.5% confidence level. According to Dowd (2005) - Measuring Market Risk, John Wiley & Sons - the ES metric can be interpreted as the average VaR over all confidence levels greater than or equal to the chosen one (a numerical sketch follows after this list).

\[ES_{\alpha}(L)=\frac{1}{1-\alpha} \cdot \int_{\alpha}^{1} VaR_{u}(L) \, du\]

  • The replacement of VaR with Expected Shortfall is motivated by the preferable properties of the latter. In contrast to the VaR metric, the ES metric captures tail risk by accounting for the magnitude of losses beyond the confidence level.

  • Similar to the VaR approach, a scaling-up of the ES metric from the 1-day to the 10-day horizon can be performed, and the ES metric must be calibrated to a period of market stress, i.e. the most stressed 12-month period for the current portfolio. However, the identification of a suitable period of significant market stress may be based on a reduced set of risk factors, thereby reducing the complexity.

  • Moreover, the demise of the VaR/SVaR model concept also impacts the measurement of the Risks Not in the Model (RNIM). Non-Modellable Risk Factors (NMRF) should be excluded from the Expected Shortfall (ES) model and should instead be subject to stress-scenario-implied capital requirements, i.e. a loss calibrated to the 97.5% confidence level over a period of extreme stress. The assessment of the modellability of risk factors is a separate requirement of the FRTB and set out in the CRR.

  • Note that for a normally distributed random variable the following approximate relationship between the ES and VaR metrics holds \[ES_{0.975} \approx VaR_{0.99}\]

  3. The classification of risk factors as modellable vs. non-modellable requires an assessment of modellability. Risk factors identified as non-modellable (NMRF) are capitalized under a Stress Scenario Risk Measure (SSRM). This ensures that only risk factors that are sufficiently liquid and observable are used for the Expected Shortfall (ES) calculation.
  • The assessment of the modellability of risk factors is performed quarterly against criteria on the number of data points available for a given risk factor. Precisely, 24 data points must be observable during the preceding 12-month period, without any 90-day period containing fewer than 4 data points.

  • The risk factors in scope of the modellability assessment include interest rate curves, systematic and idiosyncratic factors pertaining to equity multi-factor models, parameters pertaining to volatility surfaces, etc.

  • The identification of the stress scenario should be performed for each non-modellable risk factor.

  • Risk factors classified as NMRF are subjected to a Stress Scenario Risk Measure (SSRM). Precisely, the SSRM is the loss resulting from applying the stress scenario to that risk factor. As a consequence, a separate capital charge is applied to all non-modellable risk factors, typically based on a Stressed Expected Shortfall (SES).

  4. The introduction of additional liquidity horizons to replace the uniform 10-day holding period horizon used under Basel 3
  • The new liquidity horizons replace the old 10-day horizon across all risk factors. As a result, regulatory capital becomes more expensive for risk factors associated with longer horizons (> 10 days).

  • Liquidity horizons range from 10 to 120 days depending on the complexity of the asset type. Shown below for illustration are the horizons pertaining to various asset classes.

Asset class                      Liquidity Horizon (days)
Equity Volatility                20
Interest Rate Volatility         40
Credit Spread Volatility (HY)    60

 

  5. The replacement of the Incremental Risk Charge (IRC) model by the Default Risk Charge (DRC) model. The DRC model captures the default of individual issuers as well as the simultaneous default of multiple issuers, and computes the market value impact of those defaults on the debt and equity positions in the trading book.

  6. The application of a more rigorous model approval process at the trading desk level.

  7. The strengthening of data integrity is highlighted by the introduction of the concept of modellable vs. non-modellable risk factors. A risk factor qualifies as modellable if the data used for calibration have more than 24 data points per year (with a maximum period of one month between consecutive observations). Capitalization of non-modellable risk factors is performed under a stress scenario, and diversification (correlation) effects are not permitted.
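Returning to the Expected Shortfall of item 2, a toy sketch on the synthetic P&L vector from the VaR example above, including the normal-distribution check of ES at 97.5% against VaR at 99%:

    # Empirical ES at 97.5%: average loss beyond the 97.5% VaR
    alpha <- 0.975
    var_q <- quantile(pnl, probs = 1 - alpha)
    es    <- -mean(pnl[pnl <= var_q])
    c(es, -quantile(pnl, probs = 0.01))   # compare ES(97.5%) with VaR(99%)

    # Exact values for a standard normal loss distribution:
    dnorm(qnorm(0.975)) / (1 - 0.975)   # ES at 97.5%  ~ 2.338
    qnorm(0.99)                         # VaR at 99%   ~ 2.326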

 


The Requirements of VaR and SVaR Methodologies

Requirements on VaR and SVaR methodologies and model operation are set out in the Capital Requirements Regulation (CRR), Articles 365 and 367. Among these requirements are specific demands on data inputs and data quality, as well as on model assumptions and proxies:

  • Assumptions related to risk factors and their distributions are expected to be demonstrated as appropriate based on market data;

  • The appropriateness of the risk factors used for the VaR/SVaR calculation should be satisfactory in terms of the number of factors, which should reconcile with pricing and economic P&L by means of an inventorised mapping;

  • The appropriateness of the pricing methods used for the VaR/SVaR calculation should be satisfactory in terms of a reconciliation with the pricing methods used for economic P&L;

  • For Monte Carlo simulations it is important to ensure a sufficient number of simulation runs to allow the production of precise and stable VaR/SVaR figures;

  • The risk factor time series used as inputs to the VaR/SVaR models are expected to be of sufficient data quality, covering the extent of missing and replaced data and establishing documentation on the methodology used for filling missing data. Outliers in the data should not be replaced unless there is a convincing case of erroneous data;

  • For proxies used in the VaR/SVaR calculation, the recognition of valid approaches rests on the use of standard methods, e.g. the use of beta for equity returns, and on the consistency of the market data used for pricing (economic P&L) vis-a-vis the actual and hypothetical P&L. In this regard, transparency should be provided on the number of proxied time series;

 

The Backtesting Requirements

Note that a regulatory multiplication factor is used to establish a direct link between the number of outliers and the capital requirements. This way, the regulatory capital requirements are dependent on the model’s adequacy.

In a nutshell, backtesting compares the daily 99%-VaR against each of the hypothetical P&L and the actual P&L measured over the historical 1-year horizon. Note that the hypothetical P&L is calculated based on unchanged positions and obtained by revaluing the positions at the end of the previous day using the market data at the end of the current day. Hence, P&L changes due to new positions and intraday trading are not accounted for, and the emphasis is put on the change in risk factors.

By contrast, the actual P&L is obtained from actual market data of the current day, including profits and losses from intraday trading but excluding commissions, valuation adjustments and Net Interest Income.

Exceptions are measured as the days on which either the hypothetical or the actual loss exceeds the daily VaR. The counting of exceptions applied in backtesting uses the greater of the number of hypothetical and actual exceptions.

An overall objective of backtesting is to provide confidence that the VaR model adequately accounts for instrument-level volatility and correlations.
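A sketch of this exception counting over 250 business days; the VaR and P&L series are synthetic placeholders:

    set.seed(11)
    days     <- 250
    var_99   <- rep(2.33e6, days)        # daily 99% VaR (synthetic)
    hypo_pnl <- rnorm(days, 0, 1e6)      # hypothetical P&L (unchanged positions)
    act_pnl  <- rnorm(days, 0, 1e6)      # actual P&L (incl. intraday trading)

    exc_hypo <- sum(-hypo_pnl > var_99)  # hypothetical losses exceeding VaR
    exc_act  <- sum(-act_pnl  > var_99)  # actual losses exceeding VaR
    max(exc_hypo, exc_act)               # the higher count feeds the capital multiplier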

CRR regulations as set out in CRR Articles 106(3), 352(2), 366 and 386 include a definition of the historical data period to be used for backtesting. Backtesting of VaR limits is a mandatory exercise for both hypothetical and actual P&L (cf. CRR Article 366(3)).

The CRR requirements point out that

  • Backtesting should be based on the number of VaR overshootings over the most recent 250 business days. For this, the recognition of overshootings should be based on a comparison of overshootings on hypothetical and actual changes, and the higher number should be used;

  • The recognition of VaR overshootings (exceptions) should be complemented by a rigorous analysis of the reasons involved, e.g. portfolio analysis to identify the positions and market moves responsible, and model analysis to identify model shortcomings;

  • Documentation on policies and procedures should exist to explain the calculation details of actual and hypothetical P&L and how these differ;

  • The actual P&L used must be before fees and Net Interest Income and should use the same pricing and model configurations as used for the calculation of economic P&L. Furthermore, Credit and Debt Valuation Adjustments and the Additional Valuation Adjustment (AVA) should be disregarded for the calculation of actual P&L, but all other valuation adjustments should be included. Finally, value changes due to counterparty default should be out of scope for the calculation of actual P&L;

  • The hypothetical P&L, on the other hand, has to emphasise the change in risk factors only, and is calculated based on unchanged trading positions (disregarding any intraday change in positions or any new positions over the past day), with any valuation adjustments being out of scope;


 

The Requirements of the IRC Methodology

The Incremental Risk Charge (IRC) is the risk charge associated with the default and migration risk of trading book positions excluding securitisations.

Note that with the implementation of the FRTB requirements, the IRC is required to be overridden by the Default Risk Charge (DRC).

 

  • The Incremental Risk Charge (IRC) is a one-year VaR at the 99.9% confidence level devised to measure the Credit Risk in the Trading Book. Its purpose is to capture losses due to credit migrations and defaults in the Trading Book;

  • The IRC approach is based on the simulation of asset returns over a 1-year horizon using a multi-period model. A multi-period model is required to account for intra-year liquidity horizons and for a rebalancing of the portfolio, i.e. replacing deteriorated credit-risky instruments at the end of their liquidity horizons to roll back to a constant level of risk (a simplified simulation sketch follows after this list);

  • Required parameter inputs to the model include a correlation matrix of asset returns based on historical data, and asset return thresholds obtained from a suitable credit migration matrix;

  • The minimum liquidity horizon permitted is 3 months. Typically, longer liquidity horizons are assigned to securities of comparatively worse credit quality.

  • A scenario generator is used to simulate the value change of each position at its respective liquidity horizon. Given that a rebalancing occurs for those positions defaulted or migrated in the scenario, a total value change needs to be calculated across positions over the one-year horizon. The total value change simulated for each position in a scenario is calculated by compounding the respective value changes at the liquidity horizons up to the one-year horizon. The distribution of portfolio value changes is thus available from the generation of a sufficient number of scenarios;

  • Regulatory expectations on the operation and maintenance of the IRC model include that the number of simulations used is adequate to provide an accurate estimate of the charge. A confidence interval surrounding the IRC estimate should be provided to disclose the statistical error. Bespoke transition matrices for sovereigns and other types of issuers should be used. The factor model should consist of two kinds of systematic factors, and the correlation matrix should be quantified based on market data. Finally, the estimates used for PDs and recoveries should be primarily based on the bank's own IRB measures, and a hierarchy of sources should be established and consistently applied to avoid cherry-picking.
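A deliberately simplified one-period sketch of the threshold mechanics referenced in the list above: correlated standard normal asset returns of two issuers are mapped to default and downgrade states via thresholds implied by illustrative migration probabilities; the multi-period rebalancing and the revaluation of positions are omitted:

    library(MASS)

    set.seed(99)
    n_sim <- 10000
    rho   <- 0.3                            # illustrative asset-return correlation
    ret   <- mvrnorm(n_sim, mu = c(0, 0),
                     Sigma = matrix(c(1, rho, rho, 1), 2, 2))

    # Illustrative one-year probabilities: default 1%, downgrade a further 9%
    z_def  <- qnorm(0.01)                   # default threshold
    z_down <- qnorm(0.10)                   # downgrade threshold (cumulative 10%)

    state <- ifelse(ret < z_def, "default",
                    ifelse(ret < z_down, "downgrade", "stay"))
    prop.table(table(state))                # simulated migration frequencies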

 


 

The Requirements on the Risks Not in the Model (RNIM)

The Risks Not in the Model (RNIM) is the risk charge associated with risks not considered in the internal model. Precisely, this covers any risks not modelled in the internal model and any risks where data issues prevent modelling.

Reasons for risks not being captured by the VaR, SVaR and IRC model categories can include data issues (e.g. non-availability) or that certain risks cannot be modelled. The quantification of RNIM should be commensurate with the loss in each category (10-day 99% VaR, 10-day 99% SVaR, 1-year 99.9% IRC). CRR regulations on Risks Not in the Model are set out in Articles 367 and 369.

 

  • The identification of RNIM is required to be an integrated element of the new product approval process and the backtesting / validation process;

  • The measurement of RNIM is required to quantify the impact of RNIM on VaR/SVaR or IRC on a standalone and on a total basis, without considering diversification effects;

  • Monitoring of RNIM should be established and performed regularly;

 


