Validation methodology from HKMA CA-G-4


1. Validation of Rating discriminatory power:

- Cumulative Accuracy Profile (“CAP”) and its summary index, the Accuracy Ratio (“AR”);

- Receiver Operating Characteristic (“ROC”) and its summary indices, the ROC measure and the Pietra Index;

- Bayesian error rate (“BER”);

- Conditional entropy, Kullback-Leibler distance, and Conditional Information Entropy Ratio (“CIER”);

- Information value (“IV”);

- Kendall’s τ and Somers’ D (for shadow ratings);

- Brier score (“BS”) (similar to the RMSE measure we will compute in our validation);

- Divergence.

Not applicable, since we will follow the Basel ranking and our validation does not cover discriminatory power.
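For reference, the AR can be computed from rating scores and default flags in a few lines. The sketch below is illustrative only: the scores and default flags are hypothetical, and a higher score is assumed to mean higher default risk.

```python
# Illustrative sketch: Accuracy Ratio (AR) from scores and default flags.
# Assumption: higher score = higher default risk; ties get a half win.

def auc(scores, defaults):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    pos = [s for s, d in zip(scores, defaults) if d == 1]   # defaulters
    neg = [s for s, d in zip(scores, defaults) if d == 0]   # non-defaulters
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy_ratio(scores, defaults):
    """AR (Gini coefficient) relates to AUC as AR = 2*AUC - 1."""
    return 2.0 * auc(scores, defaults) - 1.0

scores   = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]  # hypothetical data
defaults = [1,   1,   0,   1,   0,   0,   0,   0]
print(round(accuracy_ratio(scores, defaults), 3))  # → 0.867
```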

2. Validation of PD calibration

- Binomial test with assumption of independent default events;

- Binomial test with assumption of non-zero default correlation;

- Chi-square test.

We will perform this validation on the calibration, testing for accuracy rather than conservatism.
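The binomial test under the independence assumption can be sketched as follows; the grade size, PD estimate, and observed default count below are hypothetical.

```python
from math import comb

def binomial_test_pvalue(n, d, pd_est):
    """One-sided p-value: probability of observing d or more defaults
    among n obligors if the true PD equals the estimate, assuming
    independent default events."""
    return sum(comb(n, k) * pd_est**k * (1 - pd_est)**(n - k)
               for k in range(d, n + 1))

# Hypothetical grade: 500 obligors, estimated PD of 2%, 16 observed defaults.
p = binomial_test_pvalue(500, 16, 0.02)
print(f"p-value = {p:.4f}")  # reject the calibration at the 5% level if p < 0.05
```

Under non-zero default correlation the tolerance for excess defaults widens, so this independent-events version is the more conservative of the two binomial tests.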

3. Validation of LGD estimates

- Comparisons between internal LGD estimates and relevant external data sources.

This may not be applicable, since no bank in the HK and PRC regions, including HSBC, has yet fully completed its IFRS9 modelling work. For bonds, external LGDs are already used in the IFRS9 model.

- Comparisons between realised LGD of newly defaulted facilities and their LGD estimates.

We will do this.

4. Validation of EAD estimates

- Back-test internal EAD estimates against the realised EAD of newly defaulted facilities.

We will do this for the segments whose Basel EAD will be modified under IFRS9.

- Where available, AIs should compare their internal estimates with external benchmarks.

This may not be applicable, since no bank in the HK and PRC regions, including HSBC, has yet fully completed its IFRS9 modelling work.

- Compare the estimated aggregate EAD amount for the subject facility type with the realised aggregate EAD amount for that facility type.

We will do this for the segments whose Basel EAD will be modified under IFRS9.
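A minimal sketch of the aggregate EAD back-test, using made-up facility amounts for one segment:

```python
def ead_backtest(estimated_ead, realised_ead):
    """Aggregate back-test of EAD for a facility type: ratio of realised
    aggregate EAD to estimated aggregate EAD (all values hypothetical)."""
    return sum(realised_ead) / sum(estimated_ead)

# Newly defaulted facilities in one segment (amounts in HKD millions, made up).
est  = [120.0, 80.0, 200.0, 50.0]
real = [115.0, 90.0, 210.0, 45.0]
ratio = ead_backtest(est, real)
# ratio > 1 means the estimates understate the realised exposure at default
```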

5. Benchmarking

- The HKMA will expect AIs to obtain their benchmarks from third parties, provided that relevant external benchmarks for a specific portfolio are available. When external benchmarks are not used, despite being available, the HKMA will expect AIs to provide valid justifications and demonstrate that they have other compensating measures (comprehensive back-testing at a frequency higher than required, such as quarterly, with sufficient default observations to ensure the reliability of the back-testing results) to ensure the accuracy of their rating systems. The HKMA will not accept cost implications as the sole justification for not using external benchmarks.

External benchmarks may not be applicable, since no bank in the HK and PRC regions, including HSBC, has yet fully completed its IFRS9 modelling work. That is why we suggest using higher-frequency data for the back-testing.

- Where a relevant external benchmark is not available (e.g. PD of SME and retail exposures, LGD and EAD), an AI should develop an internal benchmark. For example, to benchmark against a model-based rating system, an AI might employ internal rating reviewers to re-rate a sample of credits on an expert-judgement basis.

An internal benchmark would be useful for IRB, since we could ask rating reviewers to rate a sample of credits judgmentally. However, because our IFRS9 methodology is based on the Basel rating, this is not applicable to IFRS9 validation. We also do not normally build new models under a different methodology, since model performance is not comparable across methodologies.

The benchmarking methods that the HKMA will normally expect AIs to use in validating their rating systems and internal estimates include:

- comparison of internal estimates with benchmarks with respect to a common or similar set of borrowers/facilities;

- comparison of internal ratings and migration matrices with the ratings and migration matrices of third parties such as rating agencies or data pools;

- comparison of internal ratings with external expert judgements, for example, where a portfolio has not experienced recent losses but historical experience suggests that the risk of loss is greater than zero;

- comparison of internal ratings or estimates with market-based proxies for credit quality, such as equity prices, bond spreads, or premiums for credit derivatives;

- analysis of the rating characteristics of similarly rated exposures; and

- comparison of the average rating output for the portfolio as a whole with actual experience for the portfolio, rather than focusing on estimates for individual borrowers/facilities.

As the HKMA requires, if we choose an external benchmark, we need to assess its quality in adequately representing the risk characteristics of the portfolio under consideration, including the definition of default, rating criteria, data quality, frequency of rating updates, and assessment horizon. This will be challenging for us, since it is not easy to obtain all of this information for peers. In addition, no bank in the HK and PRC regions, including HSBC, has yet fully completed its IFRS9 modelling work.
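As one illustration of the migration-matrix comparison listed above, a simple average absolute cell difference could be computed between an internal matrix and a third-party one. Both matrices below are hypothetical.

```python
def matrix_distance(internal, external):
    """Average absolute cell difference between two migration matrices
    (rows = from-grade, columns = to-grade; both row-stochastic)."""
    n = len(internal)
    total = sum(abs(internal[i][j] - external[i][j])
                for i in range(n) for j in range(n))
    return total / (n * n)

# Hypothetical 3-grade annual migration matrices (internal vs. rating agency).
internal = [[0.90, 0.08, 0.02],
            [0.05, 0.85, 0.10],
            [0.02, 0.08, 0.90]]
external = [[0.88, 0.10, 0.02],
            [0.06, 0.84, 0.10],
            [0.03, 0.07, 0.90]]
d = matrix_distance(internal, external)
# a small d indicates the internal rating dynamics track the benchmark
```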

6. Other information that the HKMA mentions in CA-G-4:

Validation methodologies mentioned in the Terminology section:

- “k-fold cross validation” means a kind of test employing resampling techniques. The data set is divided into k subsets. Each time, one of the k subsets is used as the validation data set and the other k-1 subsets are put together to form the development data set. By repeating the procedure k times, the targeted test statistic across all k trials is then computed;

- “bootstrapping” means a resampling technique with replacement of the data sampled, aiming to generate information on the distribution of the underlying data set;

- “in-sample validation” means validation of a rating system employing observations that have been used for developing the rating system;

- “out-of-sample validation” means validation of a rating system employing observations that have not been used for developing the rating system;

- “out-of-time validation” means validation of a rating system employing observations that are not contemporary with the data used for developing the rating system.

In addition, HKMA CA-G-4 describes the validation methodology that AIs should consider in view of data limitations:

If out-of-sample and out-of-time validations cannot be conducted due to data constraints, AIs will be expected to employ statistical techniques such as k-fold cross validation or bootstrapping for this purpose.
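The two resampling techniques can be sketched in a few lines; the fold count, sample size, and data below are arbitrary.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split n observation indices into k folds for cross validation;
    each fold serves once as validation data, the rest as development data."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def bootstrap_means(data, n_resamples=1000, seed=0):
    """Bootstrap: resample with replacement to approximate the distribution
    of a statistic (here, the sample mean of a default indicator)."""
    rng = random.Random(seed)
    n = len(data)
    return [sum(rng.choice(data) for _ in range(n)) / n
            for _ in range(n_resamples)]

folds = k_fold_indices(100, 5)
# 5 folds of 20 observations each; train on 4 folds, validate on the fifth
```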
