Portfolio API

creditriskengine.portfolio.stress_testing

Macro stress testing framework.

Supports EBA, BoE ACS, US CCAR/DFAST, and RBI methodologies.

References
  • EBA Methodological Note (EU-wide stress testing)
  • Bank of England: Annual Cyclical Scenario (ACS) framework
  • Federal Reserve: SR 15-18, SR 15-19 (CCAR/DFAST)
  • RBI Master Circular on Stress Testing

MacroScenario

A macroeconomic stress scenario.

Source code in creditriskengine/portfolio/stress_testing.py
class MacroScenario:
    """A macroeconomic stress scenario."""

    def __init__(
        self,
        name: str,
        horizon_years: int = 3,
        variables: dict[str, np.ndarray] | None = None,
        severity: str = "adverse",
    ) -> None:
        self.name = name
        self.horizon_years = horizon_years
        self.variables = variables or {}
        self.severity = severity
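As a quick illustration, a scenario can be built by supplying per-year macro paths; the dictionary keys ("gdp_growth", "house_price_index") are the names the stress tests below look up. The class stand-in and the figures here are purely hypothetical:

```python
import numpy as np

# Minimal stand-in mirroring the MacroScenario class above.
class MacroScenario:
    def __init__(self, name, horizon_years=3, variables=None, severity="adverse"):
        self.name = name
        self.horizon_years = horizon_years
        self.variables = variables or {}
        self.severity = severity

# Hypothetical 3-year adverse path: GDP contracts, house prices fall.
adverse = MacroScenario(
    "illustrative_adverse",
    variables={
        "gdp_growth": np.array([-0.01, -0.03, 0.01]),
        "house_price_index": np.array([-0.05, -0.10, -0.02]),
    },
)
print(adverse.severity)  # "adverse" by default
```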

EBAStressTest

EBA stress test framework -- constrained bottom-up approach.

Implements the methodology used in the EU-wide stress testing exercise coordinated by the European Banking Authority.

Reference
  • EBA Methodological Note (latest: 2023/2025 exercise)
  • EBA GL/2018/04 on institutions' stress testing
Key features
  • Static balance sheet assumption (portfolio composition frozen).
  • 3-year projection horizon (baseline and adverse).
  • Constrained bottom-up: banks use own models, EBA provides macro scenario and prescriptive constraints on key parameters.
  • PD/LGD shifts derived from macro scenario translation.
  • Regulatory PD floor applied (CRR Art. 160).

Parameters:

  scenario (MacroScenario, required): Macro scenario with at least 3 years of projections.
  horizon_years (int, default 3): Projection horizon (EBA standard).
  static_balance_sheet (bool, default True): Whether to enforce the static balance sheet assumption.
  pd_floor (float, default 0.0003): Regulatory PD floor (CRR Art. 160: 0.03% for corporate).
Source code in creditriskengine/portfolio/stress_testing.py
class EBAStressTest:
    """EBA stress test framework -- constrained bottom-up approach.

    Implements the methodology used in the EU-wide stress testing exercise
    coordinated by the European Banking Authority.

    Reference:
        - EBA Methodological Note (latest: 2023/2025 exercise)
        - EBA GL/2018/04 on institutions' stress testing

    Key features:
        - Static balance sheet assumption (portfolio composition frozen).
        - 3-year projection horizon (baseline and adverse).
        - Constrained bottom-up: banks use own models, EBA provides macro
          scenario and prescriptive constraints on key parameters.
        - PD/LGD shifts derived from macro scenario translation.
        - Regulatory PD floor applied (CRR Art. 160).

    Args:
        scenario: Macro scenario with at least 3 years of projections.
        horizon_years: Projection horizon (default 3, EBA standard).
        static_balance_sheet: Whether to enforce static balance sheet.
        pd_floor: Regulatory PD floor (CRR Art. 160: 0.03% for corporate).
    """

    def __init__(
        self,
        scenario: MacroScenario,
        horizon_years: int = 3,
        static_balance_sheet: bool = True,
        pd_floor: float = 0.0003,
    ) -> None:
        if horizon_years < 3:
            raise ValueError("EBA stress test requires a minimum 3-year horizon.")
        self.scenario = scenario
        self.horizon_years = horizon_years
        self.static_balance_sheet = static_balance_sheet
        self.pd_floor = pd_floor
        logger.info(
            "EBAStressTest initialised: scenario='%s', horizon=%d years, "
            "static_bs=%s, pd_floor=%.4f",
            scenario.name,
            horizon_years,
            static_balance_sheet,
            pd_floor,
        )

    def translate_macro_to_pd_stress(
        self,
        base_pds: np.ndarray,
        gdp_sensitivity: float = 2.0,
    ) -> np.ndarray:
        """Translate macro scenario to PD stress multipliers.

        Simple linear translation: multiplier = 1 - sensitivity * gdp_growth_deviation,
        floored at 1.0 so growth never reduces PDs below baseline.

        Args:
            base_pds: Baseline PDs (unused, for interface consistency).
            gdp_sensitivity: Sensitivity of PD to GDP growth deviation.

        Returns:
            PD multipliers per period (shape: horizon_years,).
        """
        gdp = self.scenario.variables.get("gdp_growth", np.zeros(self.horizon_years))
        baseline_gdp = 0.02  # Assumed baseline GDP growth
        multipliers = 1.0 - gdp_sensitivity * (gdp - baseline_gdp)
        return np.maximum(multipliers, 1.0)  # Floor at 1.0 (no benefit from growth)

    def translate_macro_to_lgd_stress(self) -> np.ndarray:
        """Translate macro scenario to LGD add-ons.

        Driven by house price index changes for secured lending.
        Negative HPI changes increase LGD.

        Returns:
            LGD add-ons per period (shape: horizon_years,).
        """
        hpi = self.scenario.variables.get(
            "house_price_index", np.zeros(self.horizon_years),
        )
        # Negative HPI changes increase LGD
        return np.maximum(-hpi * 0.5, 0.0)

    def run(
        self,
        base_pds: np.ndarray,
        base_lgds: np.ndarray,
        base_eads: np.ndarray,
    ) -> dict[str, Any]:
        """Run full EBA stress test projection.

        Translates the macro scenario into PD multipliers and LGD add-ons,
        then runs a multi-period projection under the static balance sheet
        assumption. PDs are floored at the regulatory minimum.

        Args:
            base_pds: Baseline PDs (n_exposures,).
            base_lgds: Baseline LGDs (n_exposures,).
            base_eads: Baseline EADs (n_exposures,).

        Returns:
            Dict with stressed PDs, LGDs, expected losses per period,
            cumulative EL, and scenario metadata.
        """
        base_pds = np.asarray(base_pds, dtype=np.float64)
        # Apply PD floor
        base_pds = np.maximum(base_pds, self.pd_floor)

        pd_mult = self.translate_macro_to_pd_stress(base_pds)
        lgd_add = self.translate_macro_to_lgd_stress()

        result = multi_period_projection(
            base_pds, base_lgds, base_eads, pd_mult, lgd_add
        )

        # Compute baseline EL for comparison
        baseline_el = float(
            np.sum(base_pds * np.asarray(base_lgds) * np.asarray(base_eads))
        )
        result["scenario"] = self.scenario.name
        result["severity"] = self.scenario.severity
        result["horizon_years"] = self.horizon_years
        result["static_balance_sheet"] = self.static_balance_sheet
        result["baseline_el"] = baseline_el
        result["delta_el"] = result["cumulative_el"] - baseline_el * self.horizon_years

        logger.info(
            "EBA stress test complete: scenario='%s', baseline_EL=%.2f, "
            "cumulative_stressed_EL=%.2f, delta=%.2f",
            self.scenario.name,
            baseline_el,
            result["cumulative_el"],
            result["delta_el"],
        )

        return result

translate_macro_to_pd_stress(base_pds, gdp_sensitivity=2.0)

Translate macro scenario to PD stress multipliers.

Simple linear translation: multiplier = 1 - sensitivity * gdp_growth_deviation, floored at 1.0 so growth never reduces PDs below baseline.

Parameters:

  base_pds (ndarray, required): Baseline PDs (unused, kept for interface consistency).
  gdp_sensitivity (float, default 2.0): Sensitivity of PD to GDP growth deviation.

Returns:

  ndarray: PD multipliers per period (shape: horizon_years,).

Source code in creditriskengine/portfolio/stress_testing.py
def translate_macro_to_pd_stress(
    self,
    base_pds: np.ndarray,
    gdp_sensitivity: float = 2.0,
) -> np.ndarray:
    """Translate macro scenario to PD stress multipliers.

    Simple linear translation: multiplier = 1 - sensitivity * gdp_growth_deviation,
    floored at 1.0 so growth never reduces PDs below baseline.

    Args:
        base_pds: Baseline PDs (unused, for interface consistency).
        gdp_sensitivity: Sensitivity of PD to GDP growth deviation.

    Returns:
        PD multipliers per period (shape: horizon_years,).
    """
    gdp = self.scenario.variables.get("gdp_growth", np.zeros(self.horizon_years))
    baseline_gdp = 0.02  # Assumed baseline GDP growth
    multipliers = 1.0 - gdp_sensitivity * (gdp - baseline_gdp)
    return np.maximum(multipliers, 1.0)  # Floor at 1.0 (no benefit from growth)
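Worked numerically (illustrative figures, not drawn from any actual EBA scenario), the translation above gives:

```python
import numpy as np

gdp = np.array([-0.01, -0.03, 0.01])  # hypothetical adverse GDP growth path
baseline_gdp = 0.02
gdp_sensitivity = 2.0

# multiplier = 1 - sensitivity * (gdp - baseline), floored at 1.0
multipliers = np.maximum(1.0 - gdp_sensitivity * (gdp - baseline_gdp), 1.0)
# ≈ [1.06, 1.10, 1.02]: the deeper the contraction, the larger the PD uplift
```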

translate_macro_to_lgd_stress()

Translate macro scenario to LGD add-ons.

Driven by house price index changes for secured lending. Negative HPI changes increase LGD.

Returns:

  ndarray: LGD add-ons per period (shape: horizon_years,).

Source code in creditriskengine/portfolio/stress_testing.py
def translate_macro_to_lgd_stress(self) -> np.ndarray:
    """Translate macro scenario to LGD add-ons.

    Driven by house price index changes for secured lending.
    Negative HPI changes increase LGD.

    Returns:
        LGD add-ons per period (shape: horizon_years,).
    """
    hpi = self.scenario.variables.get(
        "house_price_index", np.zeros(self.horizon_years),
    )
    # Negative HPI changes increase LGD
    return np.maximum(-hpi * 0.5, 0.0)
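The HPI-to-LGD translation is equally direct; with hypothetical annual HPI changes, only declines generate an add-on:

```python
import numpy as np

hpi = np.array([-0.05, -0.10, 0.02])       # hypothetical annual HPI changes
lgd_add_ons = np.maximum(-hpi * 0.5, 0.0)  # declines add to LGD, rises do not
# ≈ [0.025, 0.05, 0.0]
```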

run(base_pds, base_lgds, base_eads)

Run full EBA stress test projection.

Translates the macro scenario into PD multipliers and LGD add-ons, then runs a multi-period projection under the static balance sheet assumption. PDs are floored at the regulatory minimum.

Parameters:

  base_pds (ndarray, required): Baseline PDs (n_exposures,).
  base_lgds (ndarray, required): Baseline LGDs (n_exposures,).
  base_eads (ndarray, required): Baseline EADs (n_exposures,).

Returns:

  dict[str, Any]: Dict with stressed PDs, LGDs, expected losses per period, cumulative EL, and scenario metadata.

Source code in creditriskengine/portfolio/stress_testing.py
def run(
    self,
    base_pds: np.ndarray,
    base_lgds: np.ndarray,
    base_eads: np.ndarray,
) -> dict[str, Any]:
    """Run full EBA stress test projection.

    Translates the macro scenario into PD multipliers and LGD add-ons,
    then runs a multi-period projection under the static balance sheet
    assumption. PDs are floored at the regulatory minimum.

    Args:
        base_pds: Baseline PDs (n_exposures,).
        base_lgds: Baseline LGDs (n_exposures,).
        base_eads: Baseline EADs (n_exposures,).

    Returns:
        Dict with stressed PDs, LGDs, expected losses per period,
        cumulative EL, and scenario metadata.
    """
    base_pds = np.asarray(base_pds, dtype=np.float64)
    # Apply PD floor
    base_pds = np.maximum(base_pds, self.pd_floor)

    pd_mult = self.translate_macro_to_pd_stress(base_pds)
    lgd_add = self.translate_macro_to_lgd_stress()

    result = multi_period_projection(
        base_pds, base_lgds, base_eads, pd_mult, lgd_add
    )

    # Compute baseline EL for comparison
    baseline_el = float(
        np.sum(base_pds * np.asarray(base_lgds) * np.asarray(base_eads))
    )
    result["scenario"] = self.scenario.name
    result["severity"] = self.scenario.severity
    result["horizon_years"] = self.horizon_years
    result["static_balance_sheet"] = self.static_balance_sheet
    result["baseline_el"] = baseline_el
    result["delta_el"] = result["cumulative_el"] - baseline_el * self.horizon_years

    logger.info(
        "EBA stress test complete: scenario='%s', baseline_EL=%.2f, "
        "cumulative_stressed_EL=%.2f, delta=%.2f",
        self.scenario.name,
        baseline_el,
        result["cumulative_el"],
        result["delta_el"],
    )

    return result
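The run pipeline can be sketched end to end on a toy portfolio. Note the caveat: multi_period_projection's internals are not shown on this page, so the per-period EL formula below (stressed PD × stressed LGD × EAD, with the usual caps) is an assumption about what a multi-period projection plausibly computes, not the library's actual implementation:

```python
import numpy as np

# Illustrative portfolio (hypothetical figures).
pds = np.maximum(np.array([0.01, 0.02, 0.0001]), 0.0003)  # PD floor applied
lgds = np.array([0.40, 0.45, 0.30])
eads = np.array([1e6, 5e5, 2e6])

pd_mult = np.array([1.06, 1.10, 1.02])  # from the PD translation
lgd_add = np.array([0.025, 0.05, 0.0])  # from the LGD translation

# Assumed projection logic (multi_period_projection stand-in):
# EL_t = sum over exposures of min(pd * mult_t, 1) * clip(lgd + add_t, 0, 1) * ead
period_el = [
    float(np.sum(np.minimum(pds * m, 1.0) * np.clip(lgds + a, 0.0, 1.0) * eads))
    for m, a in zip(pd_mult, lgd_add)
]
cumulative_el = float(np.sum(period_el))
baseline_el = float(np.sum(pds * lgds * eads))
delta_el = cumulative_el - baseline_el * len(pd_mult)  # stress uplift over baseline
```

Because the multipliers are floored at 1.0 and the add-ons are non-negative, delta_el is always non-negative under this sketch.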

BoEACSStressTest

Bank of England Annual Cyclical Scenario (ACS) stress test.

The BoE ACS is a concurrent stress test applied to major UK banks and building societies. It uses a scenario calibrated to the current risk environment rather than a fixed severity, making it cyclical — the scenario becomes more severe as systemic risks build up.

Reference
  • Bank of England: Stress testing the UK banking system (annual)
  • PRA SS3/19: Model risk management for stress testing
Key features
  • 5-year projection horizon (longer than EBA's 3-year).
  • Scenario severity varies with the financial cycle.
  • Hurdle rates: CET1, Tier 1 leverage, and systemic reference point.
  • IFRS 9 transitional and fully loaded capital trajectories.
  • Feedback effects from bank reactions (strategic management actions).

Parameters:

  scenario (MacroScenario, required): Macro scenario with at least 5 years of projections.
  horizon_years (int, default 5): Projection horizon (BoE standard).
  cet1_hurdle_pct (float, default 0.045): CET1 hurdle rate as a fraction (4.5%).
  leverage_hurdle_pct (float, default 0.0325): Leverage ratio hurdle (3.25%).
  pd_floor (float, default 0.0003): Regulatory PD floor (0.03%).
Source code in creditriskengine/portfolio/stress_testing.py
class BoEACSStressTest:
    """Bank of England Annual Cyclical Scenario (ACS) stress test.

    The BoE ACS is a concurrent stress test applied to major UK banks and
    building societies. It uses a scenario calibrated to the current risk
    environment rather than a fixed severity, making it *cyclical* — the
    scenario becomes more severe as systemic risks build up.

    Reference:
        - Bank of England: Stress testing the UK banking system (annual)
        - PRA SS3/19: Model risk management for stress testing

    Key features:
        - 5-year projection horizon (longer than EBA's 3-year).
        - Scenario severity varies with the financial cycle.
        - Hurdle rates: CET1, Tier 1 leverage, and systemic reference point.
        - IFRS 9 transitional and fully loaded capital trajectories.
        - Feedback effects from bank reactions (strategic management actions).

    Args:
        scenario: Macro scenario with at least 5 years of projections.
        horizon_years: Projection horizon (default 5, BoE standard).
        cet1_hurdle_pct: CET1 hurdle rate as fraction (default 4.5%).
        leverage_hurdle_pct: Leverage ratio hurdle (default 3.25%).
        pd_floor: Regulatory PD floor (default 0.03%).
    """

    def __init__(
        self,
        scenario: MacroScenario,
        horizon_years: int = 5,
        cet1_hurdle_pct: float = 0.045,
        leverage_hurdle_pct: float = 0.0325,
        pd_floor: float = 0.0003,
    ) -> None:
        if horizon_years < 5:
            raise ValueError("BoE ACS stress test requires a minimum 5-year horizon.")
        self.scenario = scenario
        self.horizon_years = horizon_years
        self.cet1_hurdle_pct = cet1_hurdle_pct
        self.leverage_hurdle_pct = leverage_hurdle_pct
        self.pd_floor = pd_floor
        logger.info(
            "BoEACSStressTest initialised: scenario='%s', horizon=%d years, "
            "CET1_hurdle=%.2f%%, leverage_hurdle=%.2f%%",
            scenario.name,
            horizon_years,
            cet1_hurdle_pct * 100,
            leverage_hurdle_pct * 100,
        )

    def translate_macro_to_pd_stress(
        self,
        gdp_sensitivity: float = 2.5,
        unemployment_sensitivity: float = 1.5,
    ) -> np.ndarray:
        """Translate BoE ACS macro scenario to PD stress multipliers.

        Uses both GDP growth and unemployment rate as drivers (dual-factor),
        reflecting the BoE's more comprehensive macro-credit linkage.

        PD multiplier = 1 - gdp_sens × (GDP - baseline_GDP)
                        + unemp_sens × max(unemployment - baseline_unemp, 0)

        Args:
            gdp_sensitivity: Sensitivity of PD to GDP growth deviation.
            unemployment_sensitivity: Sensitivity of PD to unemployment deviation.

        Returns:
            PD multipliers per period (shape: horizon_years,).
        """
        gdp = self.scenario.variables.get("gdp_growth", np.zeros(self.horizon_years))
        unemp = self.scenario.variables.get("unemployment", np.zeros(self.horizon_years))
        baseline_gdp = 0.015  # UK baseline GDP growth assumption
        baseline_unemp = 0.04  # UK baseline unemployment assumption

        multipliers = (
            1.0
            - gdp_sensitivity * (gdp[:self.horizon_years] - baseline_gdp)
            + unemployment_sensitivity * np.maximum(
                unemp[:self.horizon_years] - baseline_unemp, 0.0
            )
        )
        return np.maximum(multipliers, 1.0)

    def translate_macro_to_lgd_stress(
        self,
        hpi_lgd_sensitivity: float = 0.6,
    ) -> np.ndarray:
        """Translate BoE ACS macro scenario to LGD add-ons.

        House price index (HPI) declines drive LGD increases for secured
        lending.

        Args:
            hpi_lgd_sensitivity: Multiplier converting HPI declines to LGD
                add-ons (default 0.6 per BoE ACS methodology; the EBA
                translation above uses 0.5).

        Returns:
            LGD add-ons per period (shape: horizon_years,).
        """
        hpi = self.scenario.variables.get(
            "house_price_index", np.zeros(self.horizon_years),
        )
        return np.maximum(-hpi[:self.horizon_years] * hpi_lgd_sensitivity, 0.0)

    def run(
        self,
        base_pds: np.ndarray,
        base_lgds: np.ndarray,
        base_eads: np.ndarray,
        initial_cet1_ratio: float = 0.12,
        total_rwa: float | None = None,
    ) -> dict[str, Any]:
        """Run the full BoE ACS stress test projection.

        Translates the macro scenario into PD and LGD stress, projects
        losses over the 5-year horizon, and evaluates against BoE hurdle
        rates for CET1 and leverage.

        Args:
            base_pds: Baseline PDs (n_exposures,).
            base_lgds: Baseline LGDs (n_exposures,).
            base_eads: Baseline EADs (n_exposures,).
            initial_cet1_ratio: Starting CET1 ratio (default 12%).
            total_rwa: Total risk-weighted assets; defaults to sum(base_eads).

        Returns:
            Dict with stressed PDs, LGDs, expected losses, cumulative EL,
            CET1 trajectory, hurdle breach information, and scenario metadata.
        """
        base_pds = np.asarray(base_pds, dtype=np.float64)
        base_pds = np.maximum(base_pds, self.pd_floor)
        base_lgds = np.asarray(base_lgds, dtype=np.float64)
        base_eads = np.asarray(base_eads, dtype=np.float64)

        if total_rwa is None:
            total_rwa = float(np.sum(base_eads))

        pd_mult = self.translate_macro_to_pd_stress()
        lgd_add = self.translate_macro_to_lgd_stress()

        result = multi_period_projection(
            base_pds, base_lgds, base_eads, pd_mult, lgd_add
        )

        # CET1 trajectory: CET1 ratio declines with cumulative losses
        baseline_el = float(
            np.sum(base_pds * base_lgds * base_eads)
        )
        cet1_trajectory = np.empty(self.horizon_years, dtype=np.float64)
        cet1 = initial_cet1_ratio
        for t in range(self.horizon_years):
            loss_impact = result["period_el"][t] / total_rwa if total_rwa > 0 else 0.0
            cet1 -= loss_impact
            cet1_trajectory[t] = cet1

        min_cet1 = float(np.min(cet1_trajectory))
        min_cet1_year = int(np.argmin(cet1_trajectory)) + 1
        cet1_hurdle_breach = min_cet1 < self.cet1_hurdle_pct

        result["scenario"] = self.scenario.name
        result["severity"] = self.scenario.severity
        result["horizon_years"] = self.horizon_years
        result["baseline_el"] = baseline_el
        result["delta_el"] = result["cumulative_el"] - baseline_el * self.horizon_years
        result["cet1_trajectory"] = cet1_trajectory.tolist()
        result["min_cet1_ratio"] = min_cet1
        result["min_cet1_year"] = min_cet1_year
        result["cet1_hurdle_pct"] = self.cet1_hurdle_pct
        result["cet1_hurdle_breach"] = cet1_hurdle_breach
        result["leverage_hurdle_pct"] = self.leverage_hurdle_pct
        result["initial_cet1_ratio"] = initial_cet1_ratio

        logger.info(
            "BoE ACS stress test complete: scenario='%s', baseline_EL=%.2f, "
            "cumulative_stressed_EL=%.2f, min_CET1=%.4f (hurdle=%.4f, breach=%s)",
            self.scenario.name,
            baseline_el,
            result["cumulative_el"],
            min_cet1,
            self.cet1_hurdle_pct,
            cet1_hurdle_breach,
        )

        return result

translate_macro_to_pd_stress(gdp_sensitivity=2.5, unemployment_sensitivity=1.5)

Translate BoE ACS macro scenario to PD stress multipliers.

Uses both GDP growth and unemployment rate as drivers (dual-factor), reflecting the BoE's more comprehensive macro-credit linkage.

PD multiplier = 1 - gdp_sens × (GDP - baseline_GDP) + unemp_sens × max(unemployment - baseline_unemp, 0)

Parameters:

  gdp_sensitivity (float, default 2.5): Sensitivity of PD to GDP growth deviation.
  unemployment_sensitivity (float, default 1.5): Sensitivity of PD to unemployment deviation.

Returns:

  ndarray: PD multipliers per period (shape: horizon_years,).

Source code in creditriskengine/portfolio/stress_testing.py
def translate_macro_to_pd_stress(
    self,
    gdp_sensitivity: float = 2.5,
    unemployment_sensitivity: float = 1.5,
) -> np.ndarray:
    """Translate BoE ACS macro scenario to PD stress multipliers.

    Uses both GDP growth and unemployment rate as drivers (dual-factor),
    reflecting the BoE's more comprehensive macro-credit linkage.

    PD multiplier = 1 - gdp_sens × (GDP - baseline_GDP)
                    + unemp_sens × max(unemployment - baseline_unemp, 0)

    Args:
        gdp_sensitivity: Sensitivity of PD to GDP growth deviation.
        unemployment_sensitivity: Sensitivity of PD to unemployment deviation.

    Returns:
        PD multipliers per period (shape: horizon_years,).
    """
    gdp = self.scenario.variables.get("gdp_growth", np.zeros(self.horizon_years))
    unemp = self.scenario.variables.get("unemployment", np.zeros(self.horizon_years))
    baseline_gdp = 0.015  # UK baseline GDP growth assumption
    baseline_unemp = 0.04  # UK baseline unemployment assumption

    multipliers = (
        1.0
        - gdp_sensitivity * (gdp[:self.horizon_years] - baseline_gdp)
        + unemployment_sensitivity * np.maximum(
            unemp[:self.horizon_years] - baseline_unemp, 0.0
        )
    )
    return np.maximum(multipliers, 1.0)
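A numeric sketch of the dual-factor translation, with hypothetical UK paths over the 5-year horizon (note that only unemployment in excess of the 4% baseline contributes):

```python
import numpy as np

gdp = np.array([-0.02, -0.04, 0.00, 0.01, 0.015])  # hypothetical GDP growth path
unemp = np.array([0.05, 0.07, 0.08, 0.06, 0.05])   # hypothetical unemployment path

mult = np.maximum(
    1.0
    - 2.5 * (gdp - 0.015)                    # GDP shortfall raises PDs
    + 1.5 * np.maximum(unemp - 0.04, 0.0),   # only excess unemployment adds stress
    1.0,
)
# ≈ [1.1025, 1.1825, 1.0975, 1.0425, 1.015]
```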

translate_macro_to_lgd_stress(hpi_lgd_sensitivity=0.6)

Translate BoE ACS macro scenario to LGD add-ons.

House price index (HPI) declines drive LGD increases for secured lending.

Parameters:

  hpi_lgd_sensitivity (float, default 0.6): Multiplier converting HPI declines to LGD add-ons (0.6 per BoE ACS methodology; the EBA translation above uses 0.5).

Returns:

  ndarray: LGD add-ons per period (shape: horizon_years,).

Source code in creditriskengine/portfolio/stress_testing.py
def translate_macro_to_lgd_stress(
    self,
    hpi_lgd_sensitivity: float = 0.6,
) -> np.ndarray:
    """Translate BoE ACS macro scenario to LGD add-ons.

    House price index (HPI) declines drive LGD increases for secured
    lending.

    Args:
        hpi_lgd_sensitivity: Multiplier converting HPI declines to LGD
            add-ons (default 0.6 per BoE ACS methodology; the EBA
            translation above uses 0.5).

    Returns:
        LGD add-ons per period (shape: horizon_years,).
    """
    hpi = self.scenario.variables.get(
        "house_price_index", np.zeros(self.horizon_years),
    )
    return np.maximum(-hpi[:self.horizon_years] * hpi_lgd_sensitivity, 0.0)

run(base_pds, base_lgds, base_eads, initial_cet1_ratio=0.12, total_rwa=None)

Run the full BoE ACS stress test projection.

Translates the macro scenario into PD and LGD stress, projects losses over the 5-year horizon, and evaluates against BoE hurdle rates for CET1 and leverage.

Parameters:

  base_pds (ndarray, required): Baseline PDs (n_exposures,).
  base_lgds (ndarray, required): Baseline LGDs (n_exposures,).
  base_eads (ndarray, required): Baseline EADs (n_exposures,).
  initial_cet1_ratio (float, default 0.12): Starting CET1 ratio (12%).
  total_rwa (float | None, default None): Total risk-weighted assets; defaults to sum(base_eads).

Returns:

  dict[str, Any]: Dict with stressed PDs, LGDs, expected losses, cumulative EL, CET1 trajectory, hurdle breach information, and scenario metadata.

Source code in creditriskengine/portfolio/stress_testing.py
def run(
    self,
    base_pds: np.ndarray,
    base_lgds: np.ndarray,
    base_eads: np.ndarray,
    initial_cet1_ratio: float = 0.12,
    total_rwa: float | None = None,
) -> dict[str, Any]:
    """Run the full BoE ACS stress test projection.

    Translates the macro scenario into PD and LGD stress, projects
    losses over the 5-year horizon, and evaluates against BoE hurdle
    rates for CET1 and leverage.

    Args:
        base_pds: Baseline PDs (n_exposures,).
        base_lgds: Baseline LGDs (n_exposures,).
        base_eads: Baseline EADs (n_exposures,).
        initial_cet1_ratio: Starting CET1 ratio (default 12%).
        total_rwa: Total risk-weighted assets; defaults to sum(base_eads).

    Returns:
        Dict with stressed PDs, LGDs, expected losses, cumulative EL,
        CET1 trajectory, hurdle breach information, and scenario metadata.
    """
    base_pds = np.asarray(base_pds, dtype=np.float64)
    base_pds = np.maximum(base_pds, self.pd_floor)
    base_lgds = np.asarray(base_lgds, dtype=np.float64)
    base_eads = np.asarray(base_eads, dtype=np.float64)

    if total_rwa is None:
        total_rwa = float(np.sum(base_eads))

    pd_mult = self.translate_macro_to_pd_stress()
    lgd_add = self.translate_macro_to_lgd_stress()

    result = multi_period_projection(
        base_pds, base_lgds, base_eads, pd_mult, lgd_add
    )

    # CET1 trajectory: CET1 ratio declines with cumulative losses
    baseline_el = float(
        np.sum(base_pds * base_lgds * base_eads)
    )
    cet1_trajectory = np.empty(self.horizon_years, dtype=np.float64)
    cet1 = initial_cet1_ratio
    for t in range(self.horizon_years):
        loss_impact = result["period_el"][t] / total_rwa if total_rwa > 0 else 0.0
        cet1 -= loss_impact
        cet1_trajectory[t] = cet1

    min_cet1 = float(np.min(cet1_trajectory))
    min_cet1_year = int(np.argmin(cet1_trajectory)) + 1
    cet1_hurdle_breach = min_cet1 < self.cet1_hurdle_pct

    result["scenario"] = self.scenario.name
    result["severity"] = self.scenario.severity
    result["horizon_years"] = self.horizon_years
    result["baseline_el"] = baseline_el
    result["delta_el"] = result["cumulative_el"] - baseline_el * self.horizon_years
    result["cet1_trajectory"] = cet1_trajectory.tolist()
    result["min_cet1_ratio"] = min_cet1
    result["min_cet1_year"] = min_cet1_year
    result["cet1_hurdle_pct"] = self.cet1_hurdle_pct
    result["cet1_hurdle_breach"] = cet1_hurdle_breach
    result["leverage_hurdle_pct"] = self.leverage_hurdle_pct
    result["initial_cet1_ratio"] = initial_cet1_ratio

    logger.info(
        "BoE ACS stress test complete: scenario='%s', baseline_EL=%.2f, "
        "cumulative_stressed_EL=%.2f, min_CET1=%.4f (hurdle=%.4f, breach=%s)",
        self.scenario.name,
        baseline_el,
        result["cumulative_el"],
        min_cet1,
        self.cet1_hurdle_pct,
        cet1_hurdle_breach,
    )

    return result
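The CET1 drawdown logic in run reduces to a few lines: each period's stressed EL is divided by total RWA and subtracted from the ratio, with no offsetting income. All figures below are hypothetical:

```python
# Hypothetical stressed EL per year and a 10,000-unit RWA base.
period_el = [120.0, 180.0, 150.0, 90.0, 60.0]
total_rwa = 10_000.0
cet1 = 0.12          # initial_cet1_ratio
cet1_hurdle = 0.045  # cet1_hurdle_pct

trajectory = []
for el in period_el:
    cet1 -= el / total_rwa  # losses erode the ratio
    trajectory.append(cet1)

min_cet1 = min(trajectory)          # 0.06 here: trough at the final year
breach = min_cet1 < cet1_hurdle     # False: 6.0% stays above the 4.5% hurdle
```

Because losses only ever subtract, the trajectory is monotonically decreasing and the trough is always the final year; a projection with recovering ELs and PPNR income would not share that property.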

CCARScenario

US CCAR/DFAST stress testing with 9-quarter projection horizon.

Implements the Fed's Comprehensive Capital Analysis and Review framework.

Reference
  • Federal Reserve: SR 15-18, SR 15-19 (CCAR/DFAST instructions)
  • 12 CFR 252 Subpart E (stress testing requirements)
Key features
  • 9-quarter projection horizon (Q1 through Q9).
  • Baseline, adverse, and severely adverse scenarios.
  • Pre-Provision Net Revenue (PPNR) hook for income projection.
  • Capital adequacy assessment at each quarter.
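The 9-quarter horizon requires converting annual PDs to quarterly ones; project_quarterly_losses uses the standard survival-based conversion, sketched here with hypothetical PDs:

```python
import numpy as np

annual_pd = np.array([0.01, 0.04, 0.20])
# PD_q = 1 - (1 - PD_annual)^(1/4): the quarterly default probability whose
# compounding over four quarters reproduces the annual PD.
quarterly_pd = 1.0 - np.power(1.0 - annual_pd, 0.25)

# Sanity check: four quarters of survival recover the annual figure.
recovered = 1.0 - (1.0 - quarterly_pd) ** 4
```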

Parameters:

  scenario (MacroScenario, required): MacroScenario used for the stress test.
  horizon_quarters (int, default 9): Number of projection quarters.
  ppnr_quarterly (ndarray | None, default None): Optional pre-provision net revenue per quarter (horizon_quarters,). If not provided, PPNR is assumed to be zero each quarter.
Source code in creditriskengine/portfolio/stress_testing.py
class CCARScenario:
    """US CCAR/DFAST stress testing with 9-quarter projection horizon.

    Implements the Fed's Comprehensive Capital Analysis and Review framework.

    Reference:
        - Federal Reserve: SR 15-18, SR 15-19 (CCAR/DFAST instructions)
        - 12 CFR 252 Subpart E (stress testing requirements)

    Key features:
        - 9-quarter projection horizon (Q1 through Q9).
        - Baseline, adverse, and severely adverse scenarios.
        - Pre-Provision Net Revenue (PPNR) hook for income projection.
        - Capital adequacy assessment at each quarter.

    Args:
        scenario: MacroScenario used for the stress test.
        horizon_quarters: Number of projection quarters (default 9).
        ppnr_quarterly: Optional pre-provision net revenue per quarter (9,).
            If not provided, PPNR is assumed to be zero each quarter.
    """

    def __init__(
        self,
        scenario: MacroScenario,
        horizon_quarters: int = 9,
        ppnr_quarterly: np.ndarray | None = None,
    ) -> None:
        self.scenario = scenario
        self.horizon_quarters = horizon_quarters
        self.ppnr_quarterly = (
            np.asarray(ppnr_quarterly, dtype=np.float64)
            if ppnr_quarterly is not None
            else np.zeros(self.horizon_quarters)
        )
        if len(self.ppnr_quarterly) != self.horizon_quarters:
            raise ValueError(
                f"ppnr_quarterly must have exactly {self.horizon_quarters} elements."
            )
        logger.info(
            "CCARScenario initialised: scenario='%s', quarters=%d, "
            "cumulative_ppnr=%.2f",
            scenario.name,
            self.horizon_quarters,
            float(np.sum(self.ppnr_quarterly)),
        )

    def project_quarterly_losses(
        self,
        base_pds: np.ndarray,
        base_lgds: np.ndarray,
        base_eads: np.ndarray,
        pd_quarterly_multipliers: np.ndarray | None = None,
        lgd_add_ons_quarterly: np.ndarray | None = None,
    ) -> dict[str, Any]:
        """Project quarterly credit losses over the CCAR horizon.

        Quarterly PD is derived from annual PD:
            PD_q = 1 - (1 - PD_annual)^(1/4)
        The stress multiplier is then applied to the quarterly PD.

        Args:
            base_pds: Annual PDs (n_exposures,).
            base_lgds: Baseline LGDs (n_exposures,).
            base_eads: Baseline EADs (n_exposures,).
            pd_quarterly_multipliers: Optional quarterly PD stress factors
                (horizon_quarters,). Defaults to 1.0 each quarter.
            lgd_add_ons_quarterly: Optional LGD add-ons per quarter
                (horizon_quarters,). Defaults to 0.0.

        Returns:
            Dict with quarterly_losses matrix, per-quarter totals,
            cumulative loss trajectory, and total loss.
        """
        base_pds = np.asarray(base_pds, dtype=np.float64)
        base_lgds = np.asarray(base_lgds, dtype=np.float64)
        base_eads = np.asarray(base_eads, dtype=np.float64)

        if pd_quarterly_multipliers is None:
            pd_quarterly_multipliers = np.ones(self.horizon_quarters)
        else:
            pd_quarterly_multipliers = np.asarray(pd_quarterly_multipliers, dtype=np.float64)

        if lgd_add_ons_quarterly is None:
            lgd_add_ons_quarterly = np.zeros(self.horizon_quarters)
        else:
            lgd_add_ons_quarterly = np.asarray(lgd_add_ons_quarterly, dtype=np.float64)

        # Convert annual PD to quarterly: PD_q = 1 - (1 - PD_annual)^(1/4)
        quarterly_pds = 1.0 - np.power(np.maximum(1.0 - base_pds, 0.0), 0.25)

        n_q = self.horizon_quarters
        losses = np.zeros((n_q, len(base_pds)))

        for q in range(n_q):
            mult = (
                pd_quarterly_multipliers[q]
                if q < len(pd_quarterly_multipliers)
                else 1.0
            )
            stressed_q_pd = np.minimum(quarterly_pds * mult, 1.0)
            # Mirror the multiplier handling so shorter add-on arrays are safe.
            add_on = (
                lgd_add_ons_quarterly[q]
                if q < len(lgd_add_ons_quarterly)
                else 0.0
            )
            stressed_lgd = np.clip(base_lgds + add_on, 0.0, 1.0)
            losses[q] = stressed_q_pd * stressed_lgd * base_eads

        quarterly_totals = losses.sum(axis=1)

        return {
            "quarterly_losses": losses,
            "quarterly_totals": quarterly_totals.tolist(),
            "cumulative_loss": np.cumsum(quarterly_totals).tolist(),
            "total_loss": float(losses.sum()),
        }

    def run(
        self,
        base_pds: np.ndarray,
        base_lgds: np.ndarray,
        base_eads: np.ndarray,
        pd_quarterly_multipliers: np.ndarray | None = None,
        lgd_add_ons_quarterly: np.ndarray | None = None,
        initial_capital: float = 0.0,
    ) -> dict[str, Any]:
        """Execute the full CCAR stress scenario with capital trajectory.

        Combines credit loss projection with PPNR to compute net income
        and a quarter-by-quarter capital adequacy trajectory.

        Args:
            base_pds: Annual baseline PDs (n_exposures,).
            base_lgds: Baseline LGDs (n_exposures,).
            base_eads: EAD array (n_exposures,).
            pd_quarterly_multipliers: PD stress multipliers per quarter.
            lgd_add_ons_quarterly: Optional LGD add-ons per quarter.
            initial_capital: Starting capital buffer for capital trajectory.

        Returns:
            Dict with quarterly losses, PPNR, net income, capital trajectory,
            minimum capital point, and summary statistics.
        """
        loss_result = self.project_quarterly_losses(
            base_pds,
            base_lgds,
            base_eads,
            pd_quarterly_multipliers,
            lgd_add_ons_quarterly,
        )

        quarterly_totals = np.array(loss_result["quarterly_totals"])
        net_income = self.ppnr_quarterly - quarterly_totals

        capital_trajectory = np.empty(self.horizon_quarters, dtype=np.float64)
        capital = initial_capital
        for q in range(self.horizon_quarters):
            capital += net_income[q]
            capital_trajectory[q] = capital

        min_capital = float(np.min(capital_trajectory))
        min_capital_quarter = int(np.argmin(capital_trajectory)) + 1

        logger.info(
            "CCAR run complete: cumulative_loss=%.2f, min_capital=%.2f at Q%d",
            loss_result["total_loss"],
            min_capital,
            min_capital_quarter,
        )

        return {
            "scenario": self.scenario.name,
            "horizon_quarters": self.horizon_quarters,
            "quarterly_losses": loss_result["quarterly_totals"],
            "ppnr_quarterly": self.ppnr_quarterly.tolist(),
            "net_income_quarterly": net_income.tolist(),
            "capital_trajectory": capital_trajectory.tolist(),
            "cumulative_loss": loss_result["cumulative_loss"],
            "total_loss": loss_result["total_loss"],
            "cumulative_ppnr": float(np.sum(self.ppnr_quarterly)),
            "min_capital": min_capital,
            "min_capital_quarter": min_capital_quarter,
            "initial_capital": initial_capital,
            "final_capital": float(capital_trajectory[-1]),
        }

project_quarterly_losses(base_pds, base_lgds, base_eads, pd_quarterly_multipliers=None, lgd_add_ons_quarterly=None)

Project quarterly credit losses over the CCAR horizon.

Quarterly PD is derived from the annual PD:

PD_q = 1 - (1 - PD_annual)^(1/4)

The stress multiplier is then applied to the quarterly PD.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| base_pds | ndarray | Annual PDs (n_exposures,). | required |
| base_lgds | ndarray | Baseline LGDs (n_exposures,). | required |
| base_eads | ndarray | Baseline EADs (n_exposures,). | required |
| pd_quarterly_multipliers | ndarray \| None | Optional quarterly PD stress factors (horizon_quarters,). Defaults to 1.0 each quarter. | None |
| lgd_add_ons_quarterly | ndarray \| None | Optional LGD add-ons per quarter (horizon_quarters,). Defaults to 0.0. | None |

Returns:

| Type | Description |
|------|-------------|
| dict[str, Any] | Dict with quarterly_losses matrix, per-quarter totals, cumulative loss trajectory, and total loss. |

Source code in creditriskengine\portfolio\stress_testing.py
def project_quarterly_losses(
    self,
    base_pds: np.ndarray,
    base_lgds: np.ndarray,
    base_eads: np.ndarray,
    pd_quarterly_multipliers: np.ndarray | None = None,
    lgd_add_ons_quarterly: np.ndarray | None = None,
) -> dict[str, Any]:
    """Project quarterly credit losses over the CCAR horizon.

    Quarterly PD is derived from annual PD:
        PD_q = 1 - (1 - PD_annual)^(1/4)
    The stress multiplier is then applied to the quarterly PD.

    Args:
        base_pds: Annual PDs (n_exposures,).
        base_lgds: Baseline LGDs (n_exposures,).
        base_eads: Baseline EADs (n_exposures,).
        pd_quarterly_multipliers: Optional quarterly PD stress factors
            (horizon_quarters,). Defaults to 1.0 each quarter.
        lgd_add_ons_quarterly: Optional LGD add-ons per quarter
            (horizon_quarters,). Defaults to 0.0.

    Returns:
        Dict with quarterly_losses matrix, per-quarter totals,
        cumulative loss trajectory, and total loss.
    """
    base_pds = np.asarray(base_pds, dtype=np.float64)
    base_lgds = np.asarray(base_lgds, dtype=np.float64)
    base_eads = np.asarray(base_eads, dtype=np.float64)

    if pd_quarterly_multipliers is None:
        pd_quarterly_multipliers = np.ones(self.horizon_quarters)
    else:
        pd_quarterly_multipliers = np.asarray(pd_quarterly_multipliers, dtype=np.float64)

    if lgd_add_ons_quarterly is None:
        lgd_add_ons_quarterly = np.zeros(self.horizon_quarters)
    else:
        lgd_add_ons_quarterly = np.asarray(lgd_add_ons_quarterly, dtype=np.float64)

    # Convert annual PD to quarterly: PD_q = 1 - (1 - PD_annual)^(1/4)
    quarterly_pds = 1.0 - np.power(np.maximum(1.0 - base_pds, 0.0), 0.25)

    n_q = self.horizon_quarters
    losses = np.zeros((n_q, len(base_pds)))

    for q in range(n_q):
        mult = (
            pd_quarterly_multipliers[q]
            if q < len(pd_quarterly_multipliers)
            else 1.0
        )
        stressed_q_pd = np.minimum(quarterly_pds * mult, 1.0)
        # Mirror the multiplier handling so shorter add-on arrays are safe.
        add_on = (
            lgd_add_ons_quarterly[q]
            if q < len(lgd_add_ons_quarterly)
            else 0.0
        )
        stressed_lgd = np.clip(base_lgds + add_on, 0.0, 1.0)
        losses[q] = stressed_q_pd * stressed_lgd * base_eads

    quarterly_totals = losses.sum(axis=1)

    return {
        "quarterly_losses": losses,
        "quarterly_totals": quarterly_totals.tolist(),
        "cumulative_loss": np.cumsum(quarterly_totals).tolist(),
        "total_loss": float(losses.sum()),
    }
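The annual-to-quarterly conversion and per-quarter stressing can be sketched standalone; all array values below are illustrative, not library defaults:

```python
import numpy as np

# Illustrative inputs: annual PDs, LGDs, EADs for three exposures.
base_pds = np.array([0.02, 0.05, 0.10])
base_lgds = np.array([0.40, 0.45, 0.50])
base_eads = np.array([1_000.0, 500.0, 250.0])

# Annual -> quarterly PD: PD_q = 1 - (1 - PD_annual)^(1/4),
# so four unstressed quarters compound back to the annual PD.
quarterly_pds = 1.0 - np.power(1.0 - base_pds, 0.25)
assert np.allclose(1.0 - (1.0 - quarterly_pds) ** 4, base_pds)

# Apply a per-quarter stress multiplier (9 quarters, ramping stress).
multipliers = np.linspace(1.0, 2.5, 9)
losses = np.array(
    [np.minimum(quarterly_pds * m, 1.0) * base_lgds * base_eads for m in multipliers]
)
print(losses.shape, float(losses.sum()))  # (9, 3) and total stressed loss
```

The cap `np.minimum(..., 1.0)` never binds here because the quarterly PDs are small, but it matters for high-PD exposures under severe multipliers.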

run(base_pds, base_lgds, base_eads, pd_quarterly_multipliers=None, lgd_add_ons_quarterly=None, initial_capital=0.0)

Execute the full CCAR stress scenario with capital trajectory.

Combines credit loss projection with PPNR to compute net income and a quarter-by-quarter capital adequacy trajectory.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| base_pds | ndarray | Annual baseline PDs (n_exposures,). | required |
| base_lgds | ndarray | Baseline LGDs (n_exposures,). | required |
| base_eads | ndarray | EAD array (n_exposures,). | required |
| pd_quarterly_multipliers | ndarray \| None | PD stress multipliers per quarter. | None |
| lgd_add_ons_quarterly | ndarray \| None | Optional LGD add-ons per quarter. | None |
| initial_capital | float | Starting capital buffer for capital trajectory. | 0.0 |

Returns:

| Type | Description |
|------|-------------|
| dict[str, Any] | Dict with quarterly losses, PPNR, net income, capital trajectory, minimum capital point, and summary statistics. |

Source code in creditriskengine\portfolio\stress_testing.py
def run(
    self,
    base_pds: np.ndarray,
    base_lgds: np.ndarray,
    base_eads: np.ndarray,
    pd_quarterly_multipliers: np.ndarray | None = None,
    lgd_add_ons_quarterly: np.ndarray | None = None,
    initial_capital: float = 0.0,
) -> dict[str, Any]:
    """Execute the full CCAR stress scenario with capital trajectory.

    Combines credit loss projection with PPNR to compute net income
    and a quarter-by-quarter capital adequacy trajectory.

    Args:
        base_pds: Annual baseline PDs (n_exposures,).
        base_lgds: Baseline LGDs (n_exposures,).
        base_eads: EAD array (n_exposures,).
        pd_quarterly_multipliers: PD stress multipliers per quarter.
        lgd_add_ons_quarterly: Optional LGD add-ons per quarter.
        initial_capital: Starting capital buffer for capital trajectory.

    Returns:
        Dict with quarterly losses, PPNR, net income, capital trajectory,
        minimum capital point, and summary statistics.
    """
    loss_result = self.project_quarterly_losses(
        base_pds,
        base_lgds,
        base_eads,
        pd_quarterly_multipliers,
        lgd_add_ons_quarterly,
    )

    quarterly_totals = np.array(loss_result["quarterly_totals"])
    net_income = self.ppnr_quarterly - quarterly_totals

    capital_trajectory = np.empty(self.horizon_quarters, dtype=np.float64)
    capital = initial_capital
    for q in range(self.horizon_quarters):
        capital += net_income[q]
        capital_trajectory[q] = capital

    min_capital = float(np.min(capital_trajectory))
    min_capital_quarter = int(np.argmin(capital_trajectory)) + 1

    logger.info(
        "CCAR run complete: cumulative_loss=%.2f, min_capital=%.2f at Q%d",
        loss_result["total_loss"],
        min_capital,
        min_capital_quarter,
    )

    return {
        "scenario": self.scenario.name,
        "horizon_quarters": self.horizon_quarters,
        "quarterly_losses": loss_result["quarterly_totals"],
        "ppnr_quarterly": self.ppnr_quarterly.tolist(),
        "net_income_quarterly": net_income.tolist(),
        "capital_trajectory": capital_trajectory.tolist(),
        "cumulative_loss": loss_result["cumulative_loss"],
        "total_loss": loss_result["total_loss"],
        "cumulative_ppnr": float(np.sum(self.ppnr_quarterly)),
        "min_capital": min_capital,
        "min_capital_quarter": min_capital_quarter,
        "initial_capital": initial_capital,
        "final_capital": float(capital_trajectory[-1]),
    }
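The capital trajectory in `run` reduces to a cumulative sum of net income (PPNR minus credit losses); a minimal standalone sketch with made-up quarterly figures:

```python
import numpy as np

# Illustrative 9-quarter CCAR-style trajectory (all figures are made up).
quarterly_losses = np.array([10.0, 14, 22, 30, 28, 20, 14, 10, 8])
ppnr_quarterly = np.full(9, 15.0)   # flat pre-provision net revenue per quarter
initial_capital = 100.0

net_income = ppnr_quarterly - quarterly_losses
capital_trajectory = initial_capital + np.cumsum(net_income)

min_q = int(np.argmin(capital_trajectory)) + 1   # 1-indexed quarter
print(min_q, float(capital_trajectory.min()), float(capital_trajectory[-1]))
# -> 6 66.0 79.0: capital troughs in Q6 and partially recovers by Q9
```

The trough, not the end point, is what binds in a CCAR-style assessment, which is why the method reports `min_capital` and `min_capital_quarter` separately from `final_capital`.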

RBIStressTest

RBI (Reserve Bank of India) stress testing with sensitivity analysis.

Implements the RBI's stress testing framework as outlined in the RBI Master Circular on Stress Testing (DBOD.No.BP.BC.94/21.06.001) and the Financial Stability Report methodology.

Key features
  • Severity-calibrated credit quality deterioration (NPA migration).
  • Interest rate sensitivity analysis (EVE and NII impact).
  • Liquidity sensitivity analysis (LCR impact from deposit outflows).
  • Single-factor shock isolation for each risk driver.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| severity | str | Stress severity level ('mild', 'moderate', or 'severe'). | 'moderate' |
| baseline_metrics | dict[str, float] \| None | Optional dict with baseline values for sensitivity analysis. Keys: 'npa_ratio', 'car', 'net_interest_income', 'total_advances'. If not provided, sensitivity methods that require these will raise ValueError. | None |
Source code in creditriskengine\portfolio\stress_testing.py
class RBIStressTest:
    """RBI (Reserve Bank of India) stress testing with sensitivity analysis.

    Implements the RBI's stress testing framework as outlined in the
    RBI Master Circular on Stress Testing (DBOD.No.BP.BC.94/21.06.001)
    and the Financial Stability Report methodology.

    Key features:
        - Severity-calibrated credit quality deterioration (NPA migration).
        - Interest rate sensitivity analysis (EVE and NII impact).
        - Liquidity sensitivity analysis (LCR impact from deposit outflows).
        - Single-factor shock isolation for each risk driver.

    Args:
        severity: Stress severity level ('mild', 'moderate', or 'severe').
        baseline_metrics: Optional dict with baseline values for sensitivity
            analysis. Keys: 'npa_ratio', 'car', 'net_interest_income',
            'total_advances'. If not provided, sensitivity methods that
            require these will raise ValueError.
    """

    def __init__(
        self,
        severity: str = "moderate",
        baseline_metrics: dict[str, float] | None = None,
    ) -> None:
        self.severity = severity
        self.baseline_metrics = baseline_metrics or {}
        self._severity_map: dict[str, dict[str, float]] = {
            "mild": {"pd_mult": 1.5, "lgd_add": 0.05, "npa_shift_pct": 0.02},
            "moderate": {"pd_mult": 2.0, "lgd_add": 0.10, "npa_shift_pct": 0.05},
            "severe": {"pd_mult": 3.0, "lgd_add": 0.15, "npa_shift_pct": 0.10},
        }
        logger.info(
            "RBIStressTest initialised: severity='%s', baseline_metrics=%s",
            severity,
            list(self.baseline_metrics.keys()) if self.baseline_metrics else "none",
        )

    def _require_baseline(self, *keys: str) -> None:
        """Validate that required baseline metrics are present."""
        missing = set(keys) - set(self.baseline_metrics.keys())
        if missing:
            raise ValueError(
                f"Missing required baseline_metrics for this analysis: {missing}. "
                "Pass them in the RBIStressTest constructor."
            )

    def credit_quality_stress(
        self,
        base_pds: np.ndarray,
        base_lgds: np.ndarray,
        base_eads: np.ndarray,
    ) -> dict[str, float | str]:
        """Apply credit quality deterioration stress.

        Simulates NPA migration per RBI's macro stress testing framework.
        PDs are multiplied by a severity-dependent factor and LGDs receive
        an additive stress.

        Args:
            base_pds: Baseline PDs (n_exposures,).
            base_lgds: Baseline LGDs (n_exposures,).
            base_eads: Baseline EADs (n_exposures,).

        Returns:
            Dict with base EL, stressed EL, incremental provisions, and
            severity label.
        """
        base_pds = np.asarray(base_pds, dtype=np.float64)
        base_lgds = np.asarray(base_lgds, dtype=np.float64)
        base_eads = np.asarray(base_eads, dtype=np.float64)

        params = self._severity_map.get(self.severity, self._severity_map["moderate"])
        stressed_pds = np.minimum(base_pds * params["pd_mult"], 1.0)
        stressed_lgds = np.clip(base_lgds + params["lgd_add"], 0.0, 1.0)

        base_el = float((base_pds * base_lgds * base_eads).sum())
        stressed_el = float((stressed_pds * stressed_lgds * base_eads).sum())

        logger.debug(
            "Credit quality stress (%s): base_EL=%.2f, stressed_EL=%.2f",
            self.severity,
            base_el,
            stressed_el,
        )

        return {
            "base_el": base_el,
            "stressed_el": stressed_el,
            "incremental_provisions": stressed_el - base_el,
            "severity": self.severity,
            "pd_multiplier": params["pd_mult"],
            "lgd_add_on": params["lgd_add"],
        }

    def interest_rate_sensitivity(
        self,
        rate_shock_bps: float,
        duration_gap: float,
        total_assets: float,
        rate_sensitive_fraction: float = 0.6,
        avg_risk_weight: float = 0.75,
    ) -> dict[str, float]:
        """Interest rate sensitivity analysis.

        Estimates the impact of a parallel shift in interest rates on
        net interest income (NII) and economic value of equity (EVE).

        Impact on EVE: -Duration_gap x Delta_Rate x Total_Assets

        Requires baseline_metrics with 'net_interest_income' and
        'total_advances'.

        Args:
            rate_shock_bps: Interest rate shock in basis points (e.g. +200).
            duration_gap: Duration gap (years) between assets and liabilities.
            total_assets: Total asset value.
            rate_sensitive_fraction: Fraction of advances that are
                rate-sensitive (default 0.6 per RBI guidelines).
            avg_risk_weight: Average portfolio risk weight used to
                approximate RWA from total assets (default 0.75).

        Returns:
            Dict with EVE impact, NII impact, and stressed CAR estimate.
        """
        self._require_baseline("net_interest_income", "total_advances", "car")

        rate_shock = rate_shock_bps / 10_000.0

        # EVE impact
        eve_impact = -duration_gap * rate_shock * total_assets

        # NII impact: rate_shock × rate-sensitive portion of advances
        rate_sensitive_advances = (
            self.baseline_metrics["total_advances"] * rate_sensitive_fraction
        )
        nii_impact = rate_shock * rate_sensitive_advances
        stressed_nii = self.baseline_metrics["net_interest_income"] + nii_impact

        # CAR impact: EVE change relative to RWA
        rwa_proxy = total_assets * avg_risk_weight
        car_impact_pp = (eve_impact / rwa_proxy) * 100 if rwa_proxy > 0 else 0.0
        stressed_car = self.baseline_metrics["car"] + car_impact_pp

        logger.debug(
            "IR sensitivity: shock=%+dbps, EVE_impact=%.2f, NII_impact=%.2f, "
            "CAR %.2f -> %.2f",
            rate_shock_bps,
            eve_impact,
            nii_impact,
            self.baseline_metrics["car"],
            stressed_car,
        )

        return {
            "rate_shock_bps": rate_shock_bps,
            "eve_impact": eve_impact,
            "nii_impact": nii_impact,
            "baseline_nii": self.baseline_metrics["net_interest_income"],
            "stressed_nii": stressed_nii,
            "baseline_car": self.baseline_metrics["car"],
            "stressed_car": stressed_car,
            "car_change_pp": car_impact_pp,
        }

    def credit_quality_sensitivity(
        self,
        npa_increase_pct: float,
        provision_coverage_ratio: float = 0.70,
        avg_risk_weight: float = 0.75,
    ) -> dict[str, float]:
        """Credit quality sensitivity analysis via NPA ratio shift.

        Models the impact of an increase in non-performing assets on
        provisioning requirements and capital adequacy.

        Requires baseline_metrics with 'npa_ratio', 'car', 'total_advances'.

        Args:
            npa_increase_pct: Percentage point increase in NPA ratio
                (e.g. 2.0 means NPA ratio rises by 2 pp).
            provision_coverage_ratio: Provisioning coverage ratio for
                incremental NPAs (default 0.70).
            avg_risk_weight: Average portfolio risk weight used to
                approximate RWA from total advances (default 0.75).

        Returns:
            Dict with stressed NPA ratio, incremental provisions, and CAR impact.
        """
        self._require_baseline("npa_ratio", "car", "total_advances")

        baseline_npa = self.baseline_metrics["npa_ratio"]
        stressed_npa = baseline_npa + npa_increase_pct
        total_advances = self.baseline_metrics["total_advances"]

        incremental_npa_amount = (npa_increase_pct / 100.0) * total_advances
        incremental_provisions = incremental_npa_amount * provision_coverage_ratio

        # CAR impact: provisions reduce capital, RWA unchanged
        rwa_proxy = total_advances * avg_risk_weight
        car_reduction = (
            (incremental_provisions / rwa_proxy) * 100 if rwa_proxy > 0 else 0.0
        )
        stressed_car = self.baseline_metrics["car"] - car_reduction

        logger.debug(
            "Credit quality sensitivity: NPA +%.1fpp, provisions=%.2f, "
            "CAR %.2f -> %.2f",
            npa_increase_pct,
            incremental_provisions,
            self.baseline_metrics["car"],
            stressed_car,
        )

        return {
            "baseline_npa_ratio": baseline_npa,
            "stressed_npa_ratio": stressed_npa,
            "npa_increase_pct": npa_increase_pct,
            "incremental_npa_amount": incremental_npa_amount,
            "incremental_provisions": incremental_provisions,
            "provision_coverage_ratio": provision_coverage_ratio,
            "baseline_car": self.baseline_metrics["car"],
            "stressed_car": stressed_car,
            "car_reduction_pp": car_reduction,
        }

    def liquidity_sensitivity(
        self,
        deposit_outflow_pct: float,
        hqla: float,
        total_deposits: float,
        net_cash_outflows_30d: float,
    ) -> dict[str, float]:
        """Liquidity sensitivity analysis.

        Estimates the impact of deposit outflows on the Liquidity Coverage
        Ratio (LCR) as per Basel III / RBI guidelines.

        LCR = HQLA / Net cash outflows over 30 days

        Args:
            deposit_outflow_pct: Assumed deposit run-off percentage.
            hqla: High-quality liquid assets.
            total_deposits: Total deposit base.
            net_cash_outflows_30d: Baseline 30-day net cash outflows.

        Returns:
            Dict with baseline and stressed LCR, deposit outflow amount,
            and whether the RBI minimum LCR (100%) is breached.
        """
        if net_cash_outflows_30d <= 0:
            raise ValueError("net_cash_outflows_30d must be positive.")

        deposit_outflow = (deposit_outflow_pct / 100.0) * total_deposits
        stressed_outflows = net_cash_outflows_30d + deposit_outflow

        baseline_lcr = (hqla / net_cash_outflows_30d) * 100.0
        stressed_lcr = (
            (hqla / stressed_outflows) * 100.0 if stressed_outflows > 0 else 0.0
        )

        # RBI minimum LCR requirement: 100%
        lcr_breach = stressed_lcr < 100.0

        logger.debug(
            "Liquidity sensitivity: deposit_outflow=%.1f%%, LCR %.1f%% -> %.1f%%",
            deposit_outflow_pct,
            baseline_lcr,
            stressed_lcr,
        )

        return {
            "deposit_outflow_pct": deposit_outflow_pct,
            "deposit_outflow_amount": deposit_outflow,
            "baseline_lcr_pct": baseline_lcr,
            "stressed_lcr_pct": stressed_lcr,
            "lcr_breach": lcr_breach,
            "rbi_min_lcr_pct": 100.0,
        }

credit_quality_stress(base_pds, base_lgds, base_eads)

Apply credit quality deterioration stress.

Simulates NPA migration per RBI's macro stress testing framework. PDs are multiplied by a severity-dependent factor and LGDs receive an additive stress.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| base_pds | ndarray | Baseline PDs (n_exposures,). | required |
| base_lgds | ndarray | Baseline LGDs (n_exposures,). | required |
| base_eads | ndarray | Baseline EADs (n_exposures,). | required |

Returns:

| Type | Description |
|------|-------------|
| dict[str, float \| str] | Dict with base EL, stressed EL, incremental provisions, and severity label. |

Source code in creditriskengine\portfolio\stress_testing.py
def credit_quality_stress(
    self,
    base_pds: np.ndarray,
    base_lgds: np.ndarray,
    base_eads: np.ndarray,
) -> dict[str, float | str]:
    """Apply credit quality deterioration stress.

    Simulates NPA migration per RBI's macro stress testing framework.
    PDs are multiplied by a severity-dependent factor and LGDs receive
    an additive stress.

    Args:
        base_pds: Baseline PDs (n_exposures,).
        base_lgds: Baseline LGDs (n_exposures,).
        base_eads: Baseline EADs (n_exposures,).

    Returns:
        Dict with base EL, stressed EL, incremental provisions, and
        severity label.
    """
    base_pds = np.asarray(base_pds, dtype=np.float64)
    base_lgds = np.asarray(base_lgds, dtype=np.float64)
    base_eads = np.asarray(base_eads, dtype=np.float64)

    params = self._severity_map.get(self.severity, self._severity_map["moderate"])
    stressed_pds = np.minimum(base_pds * params["pd_mult"], 1.0)
    stressed_lgds = np.clip(base_lgds + params["lgd_add"], 0.0, 1.0)

    base_el = float((base_pds * base_lgds * base_eads).sum())
    stressed_el = float((stressed_pds * stressed_lgds * base_eads).sum())

    logger.debug(
        "Credit quality stress (%s): base_EL=%.2f, stressed_EL=%.2f",
        self.severity,
        base_el,
        stressed_el,
    )

    return {
        "base_el": base_el,
        "stressed_el": stressed_el,
        "incremental_provisions": stressed_el - base_el,
        "severity": self.severity,
        "pd_multiplier": params["pd_mult"],
        "lgd_add_on": params["lgd_add"],
    }
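As a quick check of the 'moderate' calibration (PD × 2.0, LGD + 0.10), the expected-loss arithmetic can be replicated directly; the two exposures below are illustrative:

```python
import numpy as np

# 'moderate' severity parameters from the severity map above.
pd_mult, lgd_add = 2.0, 0.10

# Illustrative exposures (not library data).
base_pds = np.array([0.02, 0.06])
base_lgds = np.array([0.40, 0.55])
base_eads = np.array([800.0, 400.0])

base_el = float((base_pds * base_lgds * base_eads).sum())
stressed_el = float(
    (np.minimum(base_pds * pd_mult, 1.0)
     * np.clip(base_lgds + lgd_add, 0.0, 1.0)
     * base_eads).sum()
)
# EL = PD x LGD x EAD, summed across exposures; the difference is the
# incremental provisioning requirement under stress.
print(round(base_el, 1), round(stressed_el, 1), round(stressed_el - base_el, 1))
# -> 19.6 47.2 27.6
```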

interest_rate_sensitivity(rate_shock_bps, duration_gap, total_assets, rate_sensitive_fraction=0.6, avg_risk_weight=0.75)

Interest rate sensitivity analysis.

Estimates the impact of a parallel shift in interest rates on net interest income (NII) and economic value of equity (EVE).

Impact on EVE: -Duration_gap x Delta_Rate x Total_Assets

Requires baseline_metrics with 'net_interest_income' and 'total_advances'.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| rate_shock_bps | float | Interest rate shock in basis points (e.g. +200). | required |
| duration_gap | float | Duration gap (years) between assets and liabilities. | required |
| total_assets | float | Total asset value. | required |
| rate_sensitive_fraction | float | Fraction of advances that are rate-sensitive (default 0.6 per RBI guidelines). | 0.6 |
| avg_risk_weight | float | Average portfolio risk weight used to approximate RWA from total assets (default 0.75). | 0.75 |

Returns:

| Type | Description |
|------|-------------|
| dict[str, float] | Dict with EVE impact, NII impact, and stressed CAR estimate. |

Source code in creditriskengine\portfolio\stress_testing.py
def interest_rate_sensitivity(
    self,
    rate_shock_bps: float,
    duration_gap: float,
    total_assets: float,
    rate_sensitive_fraction: float = 0.6,
    avg_risk_weight: float = 0.75,
) -> dict[str, float]:
    """Interest rate sensitivity analysis.

    Estimates the impact of a parallel shift in interest rates on
    net interest income (NII) and economic value of equity (EVE).

    Impact on EVE: -Duration_gap x Delta_Rate x Total_Assets

    Requires baseline_metrics with 'net_interest_income' and
    'total_advances'.

    Args:
        rate_shock_bps: Interest rate shock in basis points (e.g. +200).
        duration_gap: Duration gap (years) between assets and liabilities.
        total_assets: Total asset value.
        rate_sensitive_fraction: Fraction of advances that are
            rate-sensitive (default 0.6 per RBI guidelines).
        avg_risk_weight: Average portfolio risk weight used to
            approximate RWA from total assets (default 0.75).

    Returns:
        Dict with EVE impact, NII impact, and stressed CAR estimate.
    """
    self._require_baseline("net_interest_income", "total_advances", "car")

    rate_shock = rate_shock_bps / 10_000.0

    # EVE impact
    eve_impact = -duration_gap * rate_shock * total_assets

    # NII impact: rate_shock × rate-sensitive portion of advances
    rate_sensitive_advances = (
        self.baseline_metrics["total_advances"] * rate_sensitive_fraction
    )
    nii_impact = rate_shock * rate_sensitive_advances
    stressed_nii = self.baseline_metrics["net_interest_income"] + nii_impact

    # CAR impact: EVE change relative to RWA
    rwa_proxy = total_assets * avg_risk_weight
    car_impact_pp = (eve_impact / rwa_proxy) * 100 if rwa_proxy > 0 else 0.0
    stressed_car = self.baseline_metrics["car"] + car_impact_pp

    logger.debug(
        "IR sensitivity: shock=%+dbps, EVE_impact=%.2f, NII_impact=%.2f, "
        "CAR %.2f -> %.2f",
        rate_shock_bps,
        eve_impact,
        nii_impact,
        self.baseline_metrics["car"],
        stressed_car,
    )

    return {
        "rate_shock_bps": rate_shock_bps,
        "eve_impact": eve_impact,
        "nii_impact": nii_impact,
        "baseline_nii": self.baseline_metrics["net_interest_income"],
        "stressed_nii": stressed_nii,
        "baseline_car": self.baseline_metrics["car"],
        "stressed_car": stressed_car,
        "car_change_pp": car_impact_pp,
    }
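The EVE/NII arithmetic can be verified by hand; the balance-sheet figures below are illustrative:

```python
# Worked example of the interest rate sensitivity above (illustrative figures).
rate_shock = 200 / 10_000.0          # +200 bps as a decimal
duration_gap = 1.5                   # years
total_assets = 10_000.0
total_advances = 6_000.0
car = 14.0                           # baseline CAR, percent

# EVE impact: -duration_gap x rate_shock x total_assets
eve_impact = -duration_gap * rate_shock * total_assets      # -300.0

# NII impact on the rate-sensitive 60% of advances.
nii_impact = rate_shock * total_advances * 0.6              # 72.0

# CAR impact: EVE change relative to an RWA proxy (0.75 avg risk weight).
rwa_proxy = total_assets * 0.75
stressed_car = car + (eve_impact / rwa_proxy) * 100

print(eve_impact, nii_impact, round(stressed_car, 2))
# -> -300.0 72.0 10.0 (a 4.0 pp CAR erosion)
```

Note the signs: a positive rate shock with a positive duration gap erodes economic value while lifting NII, which is exactly the asymmetry the method is designed to surface.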

credit_quality_sensitivity(npa_increase_pct, provision_coverage_ratio=0.7, avg_risk_weight=0.75)

Credit quality sensitivity analysis via NPA ratio shift.

Models the impact of an increase in non-performing assets on provisioning requirements and capital adequacy.

Requires baseline_metrics with 'npa_ratio', 'car', 'total_advances'.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| npa_increase_pct | float | Percentage point increase in NPA ratio (e.g. 2.0 means NPA ratio rises by 2 pp). | required |
| provision_coverage_ratio | float | Provisioning coverage ratio for incremental NPAs (default 0.70). | 0.7 |
| avg_risk_weight | float | Average portfolio risk weight used to approximate RWA from total advances (default 0.75). | 0.75 |

Returns:

| Type | Description |
|------|-------------|
| dict[str, float] | Dict with stressed NPA ratio, incremental provisions, and CAR impact. |

Source code in creditriskengine\portfolio\stress_testing.py
def credit_quality_sensitivity(
    self,
    npa_increase_pct: float,
    provision_coverage_ratio: float = 0.70,
    avg_risk_weight: float = 0.75,
) -> dict[str, float]:
    """Credit quality sensitivity analysis via NPA ratio shift.

    Models the impact of an increase in non-performing assets on
    provisioning requirements and capital adequacy.

    Requires baseline_metrics with 'npa_ratio', 'car', 'total_advances'.

    Args:
        npa_increase_pct: Percentage point increase in NPA ratio
            (e.g. 2.0 means NPA ratio rises by 2 pp).
        provision_coverage_ratio: Provisioning coverage ratio for
            incremental NPAs (default 0.70).
        avg_risk_weight: Average portfolio risk weight used to
            approximate RWA from total advances (default 0.75).

    Returns:
        Dict with stressed NPA ratio, incremental provisions, and CAR impact.
    """
    self._require_baseline("npa_ratio", "car", "total_advances")

    baseline_npa = self.baseline_metrics["npa_ratio"]
    stressed_npa = baseline_npa + npa_increase_pct
    total_advances = self.baseline_metrics["total_advances"]

    incremental_npa_amount = (npa_increase_pct / 100.0) * total_advances
    incremental_provisions = incremental_npa_amount * provision_coverage_ratio

    # CAR impact: provisions reduce capital, RWA unchanged
    rwa_proxy = total_advances * avg_risk_weight
    car_reduction = (
        (incremental_provisions / rwa_proxy) * 100 if rwa_proxy > 0 else 0.0
    )
    stressed_car = self.baseline_metrics["car"] - car_reduction

    logger.debug(
        "Credit quality sensitivity: NPA +%.1fpp, provisions=%.2f, "
        "CAR %.2f -> %.2f",
        npa_increase_pct,
        incremental_provisions,
        self.baseline_metrics["car"],
        stressed_car,
    )

    return {
        "baseline_npa_ratio": baseline_npa,
        "stressed_npa_ratio": stressed_npa,
        "npa_increase_pct": npa_increase_pct,
        "incremental_npa_amount": incremental_npa_amount,
        "incremental_provisions": incremental_provisions,
        "provision_coverage_ratio": provision_coverage_ratio,
        "baseline_car": self.baseline_metrics["car"],
        "stressed_car": stressed_car,
        "car_reduction_pp": car_reduction,
    }
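A worked example of the NPA shift arithmetic, using illustrative figures:

```python
# NPA sensitivity: a 2 pp NPA rise on an illustrative balance sheet.
npa_increase_pct = 2.0
total_advances = 5_000.0
car = 13.0                        # baseline CAR, percent
pcr = 0.70                        # provision coverage ratio

incremental_npa = (npa_increase_pct / 100.0) * total_advances   # 100.0
incremental_provisions = incremental_npa * pcr                  # 70.0

# Provisions come out of capital; RWA proxy uses the 0.75 avg risk weight.
rwa_proxy = total_advances * 0.75                               # 3750.0
car_reduction = (incremental_provisions / rwa_proxy) * 100      # ~1.87 pp
print(round(car - car_reduction, 2))
# -> 11.13
```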

liquidity_sensitivity(deposit_outflow_pct, hqla, total_deposits, net_cash_outflows_30d)

Liquidity sensitivity analysis.

Estimates the impact of deposit outflows on the Liquidity Coverage Ratio (LCR) as per Basel III / RBI guidelines.

LCR = HQLA / Net cash outflows over 30 days

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| deposit_outflow_pct | float | Assumed deposit run-off percentage. | required |
| hqla | float | High-quality liquid assets. | required |
| total_deposits | float | Total deposit base. | required |
| net_cash_outflows_30d | float | Baseline 30-day net cash outflows. | required |

Returns:

| Type | Description |
|------|-------------|
| dict[str, float] | Dict with baseline and stressed LCR, deposit outflow amount, and whether the RBI minimum LCR (100%) is breached. |

Source code in creditriskengine\portfolio\stress_testing.py
def liquidity_sensitivity(
    self,
    deposit_outflow_pct: float,
    hqla: float,
    total_deposits: float,
    net_cash_outflows_30d: float,
) -> dict[str, float]:
    """Liquidity sensitivity analysis.

    Estimates the impact of deposit outflows on the Liquidity Coverage
    Ratio (LCR) as per Basel III / RBI guidelines.

    LCR = HQLA / Net cash outflows over 30 days

    Args:
        deposit_outflow_pct: Assumed deposit run-off percentage.
        hqla: High-quality liquid assets.
        total_deposits: Total deposit base.
        net_cash_outflows_30d: Baseline 30-day net cash outflows.

    Returns:
        Dict with baseline and stressed LCR, deposit outflow amount,
        and whether the RBI minimum LCR (100%) is breached.
    """
    if net_cash_outflows_30d <= 0:
        raise ValueError("net_cash_outflows_30d must be positive.")

    deposit_outflow = (deposit_outflow_pct / 100.0) * total_deposits
    stressed_outflows = net_cash_outflows_30d + deposit_outflow

    baseline_lcr = (hqla / net_cash_outflows_30d) * 100.0
    stressed_lcr = (
        (hqla / stressed_outflows) * 100.0 if stressed_outflows > 0 else 0.0
    )

    # RBI minimum LCR requirement: 100%
    lcr_breach = stressed_lcr < 100.0

    logger.debug(
        "Liquidity sensitivity: deposit_outflow=%.1f%%, LCR %.1f%% -> %.1f%%",
        deposit_outflow_pct,
        baseline_lcr,
        stressed_lcr,
    )

    return {
        "deposit_outflow_pct": deposit_outflow_pct,
        "deposit_outflow_amount": deposit_outflow,
        "baseline_lcr_pct": baseline_lcr,
        "stressed_lcr_pct": stressed_lcr,
        "lcr_breach": lcr_breach,
        "rbi_min_lcr_pct": 100.0,
    }
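The LCR arithmetic can be sketched standalone; the figures below are illustrative, not real balance-sheet data.

```python
# Minimal sketch of the LCR arithmetic, with illustrative figures.
hqla = 250.0                    # high-quality liquid assets
total_deposits = 1_000.0        # total deposit base
net_cash_outflows_30d = 200.0   # baseline 30-day net cash outflows
deposit_outflow_pct = 10.0      # assumed 10% deposit run-off

deposit_outflow = (deposit_outflow_pct / 100.0) * total_deposits  # 100.0
stressed_outflows = net_cash_outflows_30d + deposit_outflow       # 300.0

baseline_lcr = hqla / net_cash_outflows_30d * 100.0  # 125.0%
stressed_lcr = hqla / stressed_outflows * 100.0      # ~83.3%
lcr_breach = stressed_lcr < 100.0                    # True: RBI minimum breached
```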

apply_pd_stress(base_pds, stress_multiplier, pd_cap=1.0)

Apply stress multiplier to PDs.

Parameters:

Name Type Description Default
base_pds ndarray

Baseline PD array.

required
stress_multiplier float

Multiplicative stress factor.

required
pd_cap float

Maximum PD cap.

1.0

Returns:

Type Description
ndarray

Stressed PD array.

Source code in creditriskengine\portfolio\stress_testing.py
def apply_pd_stress(
    base_pds: np.ndarray,
    stress_multiplier: float,
    pd_cap: float = 1.0,
) -> np.ndarray:
    """Apply stress multiplier to PDs.

    Args:
        base_pds: Baseline PD array.
        stress_multiplier: Multiplicative stress factor.
        pd_cap: Maximum PD cap.

    Returns:
        Stressed PD array.
    """
    stressed = base_pds * stress_multiplier
    return np.minimum(stressed, pd_cap)
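The PD shock is a capped multiplicative transform; a quick sketch with illustrative inputs (the transform reproduced inline):

```python
import numpy as np

# Capped multiplicative PD shock, as in apply_pd_stress.
base_pds = np.array([0.01, 0.05, 0.40])
stressed = np.minimum(base_pds * 3.0, 1.0)
# -> approximately [0.03, 0.15, 1.0]; the last entry hits the pd_cap
```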

apply_lgd_stress(base_lgds, stress_add_on, lgd_cap=1.0)

Apply additive stress to LGDs.

Parameters:

Name Type Description Default
base_lgds ndarray

Baseline LGD array.

required
stress_add_on float

Additive stress increase.

required
lgd_cap float

Maximum LGD cap.

1.0

Returns:

Type Description
ndarray

Stressed LGD array.

Source code in creditriskengine\portfolio\stress_testing.py
def apply_lgd_stress(
    base_lgds: np.ndarray,
    stress_add_on: float,
    lgd_cap: float = 1.0,
) -> np.ndarray:
    """Apply additive stress to LGDs.

    Args:
        base_lgds: Baseline LGD array.
        stress_add_on: Additive stress increase.
        lgd_cap: Maximum LGD cap.

    Returns:
        Stressed LGD array.
    """
    stressed = base_lgds + stress_add_on
    return np.clip(stressed, 0.0, lgd_cap)
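The LGD shock is additive with a floor at 0 and a cap at 1; a matching sketch with illustrative inputs:

```python
import numpy as np

# Additive LGD shock, as in apply_lgd_stress.
base_lgds = np.array([0.35, 0.45, 0.90])
stressed = np.clip(base_lgds + 0.15, 0.0, 1.0)
# -> approximately [0.50, 0.60, 1.0]; the last entry is capped
```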

stress_test_rwa_impact(base_rwa, stressed_rwa)

Calculate RWA impact from stress test.

Parameters:

Name Type Description Default
base_rwa float

Baseline RWA.

required
stressed_rwa float

Stressed RWA.

required

Returns:

Type Description
dict[str, float]

Dict with absolute and relative impact.

Source code in creditriskengine\portfolio\stress_testing.py
def stress_test_rwa_impact(
    base_rwa: float,
    stressed_rwa: float,
) -> dict[str, float]:
    """Calculate RWA impact from stress test.

    Args:
        base_rwa: Baseline RWA.
        stressed_rwa: Stressed RWA.

    Returns:
        Dict with absolute and relative impact.
    """
    delta = stressed_rwa - base_rwa
    pct = delta / base_rwa if base_rwa > 0 else 0.0

    return {
        "base_rwa": base_rwa,
        "stressed_rwa": stressed_rwa,
        "delta_rwa": delta,
        "pct_change": pct,
    }
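A quick worked instance of the impact arithmetic (illustrative figures):

```python
base_rwa, stressed_rwa = 1_000.0, 1_150.0

delta = stressed_rwa - base_rwa   # 150.0
pct = delta / base_rwa            # 0.15, i.e. a 15% RWA uplift
```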

scenario_library()

Predefined macroeconomic stress scenarios.

Provides a set of standard scenarios ranging from baseline to severely adverse, consistent with severity gradations used in EBA, CCAR, and RBI stress testing frameworks.

Returns:

Type Description
dict[str, MacroScenario]

Dict mapping scenario name to MacroScenario object. Scenarios provided:

  • 'baseline': Steady-state growth (GDP +2%, unemployment 4-4.5%)
  • 'mild_downturn': GDP -0.5% to +1.5%, unemployment 6-6.5%
  • 'moderate_recession': GDP -3% to +1%, unemployment rising 3pp
  • 'severe_recession': GDP -4% to +1%, unemployment 9-11%
  • 'stagflation': GDP -2%, inflation +4pp, rates +300bps
  • 'sovereign_crisis': GDP -5%, spreads +500bps, FX -20%

Source code in creditriskengine\portfolio\stress_testing.py
def scenario_library() -> dict[str, MacroScenario]:
    """Predefined macroeconomic stress scenarios.

    Provides a set of standard scenarios ranging from baseline to severely
    adverse, consistent with severity gradations used in EBA, CCAR, and
    RBI stress testing frameworks.

    Returns:
        Dict mapping scenario name to MacroScenario object.
        Scenarios provided:
            - 'baseline': Steady-state growth (GDP +2%, unemployment 4-4.5%)
            - 'mild_downturn': GDP -0.5% to +1.5%, unemployment 6-6.5%
            - 'moderate_recession': GDP -3% to +1%, unemployment rising 3pp
            - 'severe_recession': GDP -4% to +1%, unemployment 9-11%
            - 'stagflation': GDP -2%, inflation +4pp, rates +300bps
            - 'sovereign_crisis': GDP -5%, spreads +500bps, FX -20%
    """
    scenarios: dict[str, MacroScenario] = {
        "baseline": MacroScenario(
            name="Baseline",
            horizon_years=3,
            variables={
                "gdp_growth": np.array([0.02, 0.02, 0.02]),
                "unemployment": np.array([0.045, 0.04, 0.04]),
                "house_price_index": np.array([0.03, 0.03, 0.03]),
            },
            severity="baseline",
        ),
        "mild_downturn": MacroScenario(
            name="Mild Downturn",
            horizon_years=3,
            variables={
                "gdp_growth": np.array([-0.005, 0.005, 0.015]),
                "unemployment": np.array([0.06, 0.065, 0.06]),
                "house_price_index": np.array([-0.03, -0.01, 0.02]),
                "interest_rate_change_bps": np.array([0, -25, -25]),
            },
            severity="mild",
        ),
        "moderate_recession": MacroScenario(
            name="Moderate Recession",
            horizon_years=3,
            variables={
                "gdp_growth": np.array([-0.03, -0.01, 0.01]),
                "unemployment": np.array([0.07, 0.085, 0.08]),
                "house_price_index": np.array([-0.10, -0.05, 0.0]),
                "interest_rate_change_bps": np.array([0, -50, -50]),
            },
            severity="adverse",
        ),
        "severe_recession": MacroScenario(
            name="Severe Recession",
            horizon_years=3,
            variables={
                "gdp_growth": np.array([-0.04, -0.02, 0.01]),
                "unemployment": np.array([0.09, 0.11, 0.10]),
                "house_price_index": np.array([-0.15, -0.10, -0.03]),
                "equity_market_change": np.array([-0.35, -0.10, 0.05]),
            },
            severity="severely_adverse",
        ),
        "stagflation": MacroScenario(
            name="Stagflation",
            horizon_years=3,
            variables={
                "gdp_growth": np.array([-0.02, -0.01, 0.005]),
                "unemployment": np.array([0.07, 0.08, 0.075]),
                "inflation_change_pp": np.array([4.0, 3.0, 1.0]),
                "interest_rate_change_bps": np.array([300, 100, -50]),
            },
            severity="adverse",
        ),
        "sovereign_crisis": MacroScenario(
            name="Sovereign Crisis",
            horizon_years=3,
            variables={
                "gdp_growth": np.array([-0.05, -0.025, 0.0]),
                "unemployment": np.array([0.10, 0.115, 0.10]),
                "sovereign_spread_change_bps": np.array([500, 300, 100]),
                "fx_depreciation": np.array([-0.20, -0.10, -0.03]),
                "house_price_index": np.array([-0.15, -0.10, -0.03]),
            },
            severity="severely_adverse",
        ),
    }

    logger.debug("Scenario library loaded: %d scenarios.", len(scenarios))
    return scenarios
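Typical usage is `scenarios = scenario_library()` followed by a lookup. The sketch below reproduces the 'baseline' entry inline as a plain dict so it runs standalone (the library returns `MacroScenario` objects, not dicts):

```python
import numpy as np

scenarios = {
    "baseline": {
        "horizon_years": 3,
        "variables": {"gdp_growth": np.array([0.02, 0.02, 0.02])},
        "severity": "baseline",
    },
}

gdp_path = scenarios["baseline"]["variables"]["gdp_growth"]
# Compounded growth over the 3-year horizon: (1.02)^3 - 1 ~ 6.12%
cumulative_growth = float(np.prod(1.0 + gdp_path) - 1.0)
```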

multi_period_projection(base_pds, base_lgds, base_eads, pd_multipliers, lgd_add_ons, amortisation_rates=None)

Project credit risk parameters over multiple periods with time-step simulation.

Applies period-specific stress factors to compute stressed PD, LGD, EAD, and expected loss projections. Optionally accounts for portfolio amortisation (run-off) across periods.

Useful for through-the-cycle projections in IFRS 9 ECL, CCAR, and EBA contexts. When amortisation_rates is None, a static balance sheet is assumed (EBA convention).

Reference
  • EBA Methodological Note (static balance sheet)
  • IFRS 9 B5.5.13 (lifetime ECL projection)

Parameters:

Name Type Description Default
base_pds ndarray

Baseline PDs (n_exposures,).

required
base_lgds ndarray

Baseline LGDs (n_exposures,).

required
base_eads ndarray

Baseline EADs (n_exposures,).

required
pd_multipliers ndarray

PD stress multipliers per period (n_periods,).

required
lgd_add_ons ndarray

LGD additive stress per period (n_periods,).

required
amortisation_rates ndarray | None

Optional per-period amortisation rate (n_periods,). Each entry is the fraction of EAD that amortises away per period. Defaults to zero (static balance sheet).

None

Returns:

Type Description
dict[str, Any]

Dict with:

  • 'stressed_pds': (n_periods, n_exposures) stressed PD matrix
  • 'stressed_lgds': (n_periods, n_exposures) stressed LGD matrix
  • 'expected_losses': (n_periods, n_exposures) per-exposure EL
  • 'period_el': (n_periods,) total EL per period
  • 'period_eads': (n_periods,) total outstanding EAD per period
  • 'cumulative_el': float cumulative expected loss across all periods

Raises:

Type Description
ValueError

If array lengths are inconsistent.

Source code in creditriskengine\portfolio\stress_testing.py
def multi_period_projection(
    base_pds: np.ndarray,
    base_lgds: np.ndarray,
    base_eads: np.ndarray,
    pd_multipliers: np.ndarray,
    lgd_add_ons: np.ndarray,
    amortisation_rates: np.ndarray | None = None,
) -> dict[str, Any]:
    """Project credit risk parameters over multiple periods with time-step simulation.

    Applies period-specific stress factors to compute stressed PD, LGD, EAD,
    and expected loss projections. Optionally accounts for portfolio
    amortisation (run-off) across periods.

    Useful for through-the-cycle projections in IFRS 9 ECL, CCAR, and EBA
    contexts. When amortisation_rates is None, a static balance sheet is
    assumed (EBA convention).

    Reference:
        - EBA Methodological Note (static balance sheet)
        - IFRS 9 B5.5.13 (lifetime ECL projection)

    Args:
        base_pds: Baseline PDs (n_exposures,).
        base_lgds: Baseline LGDs (n_exposures,).
        base_eads: Baseline EADs (n_exposures,).
        pd_multipliers: PD stress multipliers per period (n_periods,).
        lgd_add_ons: LGD additive stress per period (n_periods,).
        amortisation_rates: Optional per-period amortisation rate (n_periods,).
            Each entry is the fraction of EAD that amortises away per period.
            Defaults to zero (static balance sheet).

    Returns:
        Dict with:
            - 'stressed_pds': (n_periods, n_exposures) stressed PD matrix
            - 'stressed_lgds': (n_periods, n_exposures) stressed LGD matrix
            - 'expected_losses': (n_periods, n_exposures) per-exposure EL
            - 'period_el': (n_periods,) total EL per period
            - 'period_eads': (n_periods,) total outstanding EAD per period
            - 'cumulative_el': float cumulative expected loss across all periods

    Raises:
        ValueError: If array lengths are inconsistent.
    """
    base_pds = np.asarray(base_pds, dtype=np.float64)
    base_lgds = np.asarray(base_lgds, dtype=np.float64)
    base_eads = np.asarray(base_eads, dtype=np.float64)
    pd_multipliers = np.asarray(pd_multipliers, dtype=np.float64)
    lgd_add_ons = np.asarray(lgd_add_ons, dtype=np.float64)

    n_periods = len(pd_multipliers)
    n_exposures = len(base_pds)

    if len(lgd_add_ons) != n_periods:
        raise ValueError("lgd_add_ons length must match pd_multipliers length.")

    if amortisation_rates is not None:
        amortisation_rates = np.asarray(amortisation_rates, dtype=np.float64)
        if len(amortisation_rates) != n_periods:
            raise ValueError("amortisation_rates length must match n_periods.")
    else:
        amortisation_rates = np.zeros(n_periods)

    stressed_pds = np.zeros((n_periods, n_exposures))
    stressed_lgds = np.zeros((n_periods, n_exposures))
    expected_losses = np.zeros((n_periods, n_exposures))
    period_el = np.zeros(n_periods)
    period_eads = np.zeros(n_periods)

    current_eads = base_eads.copy()

    for t in range(n_periods):
        stressed_pds[t] = np.minimum(base_pds * pd_multipliers[t], 1.0)
        stressed_lgds[t] = np.clip(base_lgds + lgd_add_ons[t], 0.0, 1.0)
        expected_losses[t] = stressed_pds[t] * stressed_lgds[t] * current_eads
        period_el[t] = float(expected_losses[t].sum())
        period_eads[t] = float(current_eads.sum())

        # Apply amortisation and default write-off for next period
        default_writeoff = stressed_pds[t] * current_eads
        amort = amortisation_rates[t] * current_eads
        current_eads = np.maximum(current_eads - default_writeoff - amort, 0.0)

    cumulative_el = float(period_el.sum())

    logger.debug(
        "Multi-period projection: %d periods, %d exposures, total EL=%.2f",
        n_periods, n_exposures, cumulative_el,
    )

    return {
        "stressed_pds": stressed_pds,
        "stressed_lgds": stressed_lgds,
        "expected_losses": expected_losses,
        "period_el": period_el,
        "period_eads": period_eads,
        "cumulative_el": cumulative_el,
    }
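The projection loop can be sketched standalone on a toy portfolio (two periods, two exposures; illustrative figures). Period-1 EL works out to 14.0 by hand: 0.04·0.50·100 + 0.10·0.60·200.

```python
import numpy as np

base_pds = np.array([0.02, 0.05])
base_lgds = np.array([0.40, 0.50])
base_eads = np.array([100.0, 200.0])
pd_multipliers = np.array([2.0, 1.5])
lgd_add_ons = np.array([0.10, 0.05])

current_eads = base_eads.copy()
period_el = []
for t in range(len(pd_multipliers)):
    spd = np.minimum(base_pds * pd_multipliers[t], 1.0)
    slgd = np.clip(base_lgds + lgd_add_ons[t], 0.0, 1.0)
    period_el.append(float(np.sum(spd * slgd * current_eads)))
    # Defaulted EAD written off before the next period; no amortisation here
    current_eads = np.maximum(current_eads - spd * current_eads, 0.0)

cumulative_el = sum(period_el)  # 14.0 + 8.721 = 22.721
```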

reverse_stress_test(base_pds, base_lgds, base_eads, target_el, pd_multiplier_range=(1.0, 10.0), tolerance=0.001)

Find the PD stress multiplier that causes expected loss to hit a target.

Uses bisection method to search for the PD multiplier within the specified range that produces a portfolio expected loss equal to the target (within tolerance).

This is a key reverse stress testing technique: instead of asking "what is the loss under scenario X?", it asks "what scenario produces loss X?".

Reference
  • BCBS 239: Principles for effective risk data aggregation
  • EBA GL/2018/04: Guidelines on stress testing

Parameters:

Name Type Description Default
base_pds ndarray

Baseline PDs (n_exposures,).

required
base_lgds ndarray

Baseline LGDs (n_exposures,).

required
base_eads ndarray

Baseline EADs (n_exposures,).

required
target_el float

Target expected loss amount to solve for.

required
pd_multiplier_range tuple[float, float]

(low, high) search range for the PD multiplier.

(1.0, 10.0)
tolerance float

Convergence tolerance for absolute EL difference.

0.001

Returns:

Type Description
dict[str, Any]

Dict with:

  • 'multiplier': PD stress multiplier that achieves target EL
  • 'stressed_pds': Stressed PD array at the found multiplier
  • 'stressed_el': Actual EL at the found multiplier
  • 'iterations': Number of bisection iterations used

Raises:

Type Description
ValueError

If the target EL is not achievable within the given range.

Source code in creditriskengine\portfolio\stress_testing.py
def reverse_stress_test(
    base_pds: np.ndarray,
    base_lgds: np.ndarray,
    base_eads: np.ndarray,
    target_el: float,
    pd_multiplier_range: tuple[float, float] = (1.0, 10.0),
    tolerance: float = 0.001,
) -> dict[str, Any]:
    """Find the PD stress multiplier that causes expected loss to hit a target.

    Uses bisection method to search for the PD multiplier within the
    specified range that produces a portfolio expected loss equal to
    the target (within tolerance).

    This is a key reverse stress testing technique: instead of asking
    "what is the loss under scenario X?", it asks "what scenario
    produces loss X?".

    Reference:
        - BCBS 239: Principles for effective risk data aggregation
        - EBA GL/2018/04: Guidelines on stress testing

    Args:
        base_pds: Baseline PDs (n_exposures,).
        base_lgds: Baseline LGDs (n_exposures,).
        base_eads: Baseline EADs (n_exposures,).
        target_el: Target expected loss amount to solve for.
        pd_multiplier_range: (low, high) search range for the PD multiplier.
        tolerance: Convergence tolerance for absolute EL difference.

    Returns:
        Dict with:
            - 'multiplier': PD stress multiplier that achieves target EL
            - 'stressed_pds': Stressed PD array at the found multiplier
            - 'stressed_el': Actual EL at the found multiplier
            - 'iterations': Number of bisection iterations used

    Raises:
        ValueError: If the target EL is not achievable within the given range.
    """
    base_pds = np.asarray(base_pds, dtype=np.float64)
    base_lgds = np.asarray(base_lgds, dtype=np.float64)
    base_eads = np.asarray(base_eads, dtype=np.float64)

    low, high = pd_multiplier_range

    def _compute_el(mult: float) -> float:
        stressed = np.minimum(base_pds * mult, 1.0)
        return float(np.sum(stressed * base_lgds * base_eads))

    el_low = _compute_el(low)
    el_high = _compute_el(high)

    if target_el < el_low or target_el > el_high:
        raise ValueError(
            f"Target EL {target_el:.4f} is outside achievable range "
            f"[{el_low:.4f}, {el_high:.4f}] for multiplier range "
            f"[{low:.2f}, {high:.2f}]."
        )

    max_iterations = 1000
    n_iter = 0

    for n_iter in range(1, max_iterations + 1):  # noqa: B007
        mid = (low + high) / 2.0
        el_mid = _compute_el(mid)

        if abs(el_mid - target_el) < tolerance:
            break

        if el_mid < target_el:
            low = mid
        else:
            high = mid

    stressed_pds = np.minimum(base_pds * mid, 1.0)
    stressed_el = _compute_el(mid)

    logger.info(
        "Reverse stress test: target_EL=%.2f, found multiplier=%.4f, "
        "actual_EL=%.2f, iterations=%d",
        target_el,
        mid,
        stressed_el,
        n_iter,
    )

    return {
        "multiplier": mid,
        "stressed_pds": stressed_pds,
        "stressed_el": stressed_el,
        "iterations": n_iter,
    }
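The bisection search can be sketched standalone. On this toy portfolio no PD hits the cap, so EL is linear in the multiplier and the answer can be checked by hand: EL(1.0) = 18, so a target of 45 implies a multiplier of 2.5.

```python
import numpy as np

base_pds = np.array([0.02, 0.04])
base_lgds = np.array([0.50, 0.40])
base_eads = np.array([1_000.0, 500.0])
target_el = 45.0

def portfolio_el(mult: float) -> float:
    stressed = np.minimum(base_pds * mult, 1.0)
    return float(np.sum(stressed * base_lgds * base_eads))

low, high = 1.0, 10.0
mid = low
for _ in range(200):
    mid = (low + high) / 2.0
    if abs(portfolio_el(mid) - target_el) < 1e-3:
        break
    if portfolio_el(mid) < target_el:
        low = mid      # need more stress
    else:
        high = mid     # overshot the target
# mid -> ~2.5
```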

reverse_stress_capital_breach(base_pds, base_lgds, base_eads, cet1_capital, cet1_floor_pct=0.045, rwa_func=None)

Find the PD multiplier that would breach the CET1 minimum.

Determines the stress severity (expressed as a PD multiplier) at which portfolio expected losses would erode CET1 capital below the regulatory minimum ratio.

The CET1 ratio is computed as

CET1_ratio = (CET1_capital - EL) / RWA

A breach occurs when CET1_ratio < cet1_floor_pct.

Reference
  • CRR Art. 92: Own funds requirements
  • BCBS d424: Minimum capital requirements (Basel III final)

Parameters:

Name Type Description Default
base_pds ndarray

Baseline PDs (n_exposures,).

required
base_lgds ndarray

Baseline LGDs (n_exposures,).

required
base_eads ndarray

Baseline EADs (n_exposures,).

required
cet1_capital float

Current CET1 capital amount.

required
cet1_floor_pct float

Minimum CET1 ratio as a fraction (default 4.5%).

0.045
rwa_func Callable[..., float] | None

Optional callable(stressed_pds, base_lgds, base_eads) -> float to compute stressed RWA. If None, RWA = sum(base_eads).

None

Returns:

Type Description
dict[str, Any]

Dict with:

  • 'breach_multiplier': PD multiplier at which CET1 is breached
  • 'stressed_el': Expected loss at breach point
  • 'cet1_at_breach': CET1 ratio at breach point
  • 'iterations': Number of bisection iterations

Raises:

Type Description
ValueError

If CET1 is already breached at multiplier=1.0 or if no breach occurs even at multiplier=10.0.

Source code in creditriskengine\portfolio\stress_testing.py
def reverse_stress_capital_breach(
    base_pds: np.ndarray,
    base_lgds: np.ndarray,
    base_eads: np.ndarray,
    cet1_capital: float,
    cet1_floor_pct: float = 0.045,
    rwa_func: Callable[..., float] | None = None,
) -> dict[str, Any]:
    """Find the PD multiplier that would breach the CET1 minimum.

    Determines the stress severity (expressed as a PD multiplier) at
    which portfolio expected losses would erode CET1 capital below
    the regulatory minimum ratio.

    The CET1 ratio is computed as:
        CET1_ratio = (CET1_capital - EL) / RWA

    A breach occurs when CET1_ratio < cet1_floor_pct.

    Reference:
        - CRR Art. 92: Own funds requirements
        - BCBS d424: Minimum capital requirements (Basel III final)

    Args:
        base_pds: Baseline PDs (n_exposures,).
        base_lgds: Baseline LGDs (n_exposures,).
        base_eads: Baseline EADs (n_exposures,).
        cet1_capital: Current CET1 capital amount.
        cet1_floor_pct: Minimum CET1 ratio as a fraction (default 4.5%).
        rwa_func: Optional callable(stressed_pds, base_lgds, base_eads) -> float
            to compute stressed RWA. If None, RWA = sum(base_eads).

    Returns:
        Dict with:
            - 'breach_multiplier': PD multiplier at which CET1 is breached
            - 'stressed_el': Expected loss at breach point
            - 'cet1_at_breach': CET1 ratio at breach point
            - 'iterations': Number of bisection iterations

    Raises:
        ValueError: If CET1 is already breached at multiplier=1.0
            or if no breach occurs even at multiplier=10.0.
    """
    base_pds = np.asarray(base_pds, dtype=np.float64)
    base_lgds = np.asarray(base_lgds, dtype=np.float64)
    base_eads = np.asarray(base_eads, dtype=np.float64)

    def _rwa(stressed_pds: np.ndarray) -> float:
        if rwa_func is not None:
            return float(rwa_func(stressed_pds, base_lgds, base_eads))
        return float(np.sum(base_eads))

    def _cet1_ratio(mult: float) -> float:
        stressed = np.minimum(base_pds * mult, 1.0)
        el = float(np.sum(stressed * base_lgds * base_eads))
        rwa = _rwa(stressed)
        if rwa <= 0:
            return 0.0
        return (cet1_capital - el) / rwa

    # Check boundary conditions
    ratio_at_1 = _cet1_ratio(1.0)
    if ratio_at_1 < cet1_floor_pct:
        raise ValueError(
            f"CET1 ratio ({ratio_at_1:.4f}) is already below floor "
            f"({cet1_floor_pct:.4f}) at multiplier=1.0. "
            "No stress needed to breach."
        )

    ratio_at_10 = _cet1_ratio(10.0)
    if ratio_at_10 >= cet1_floor_pct:
        raise ValueError(
            f"CET1 ratio ({ratio_at_10:.4f}) does not breach floor "
            f"({cet1_floor_pct:.4f}) even at multiplier=10.0. "
            "Portfolio losses are insufficient to cause a breach."
        )

    low, high = 1.0, 10.0
    max_iterations = 1000
    n_iter = 0
    tolerance = 0.0001

    for n_iter in range(1, max_iterations + 1):  # noqa: B007
        mid = (low + high) / 2.0
        ratio = _cet1_ratio(mid)

        if abs(ratio - cet1_floor_pct) < tolerance:
            break

        if ratio > cet1_floor_pct:
            low = mid
        else:
            high = mid

    stressed_pds = np.minimum(base_pds * mid, 1.0)
    stressed_el = float(np.sum(stressed_pds * base_lgds * base_eads))
    cet1_at_breach = _cet1_ratio(mid)

    logger.info(
        "Reverse stress capital breach: multiplier=%.4f, "
        "stressed_EL=%.2f, CET1_at_breach=%.4f",
        mid,
        stressed_el,
        cet1_at_breach,
    )

    return {
        "breach_multiplier": mid,
        "stressed_el": stressed_el,
        "cet1_at_breach": cet1_at_breach,
        "iterations": n_iter,
    }
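A standalone sketch of the capital-breach search with illustrative figures. Here EL(1.0) = 100 and RWA = 10,000, so the CET1 ratio is 0.06 - 0.01*m and the 4.5% floor is hit at m = 1.5:

```python
import numpy as np

base_pds = np.array([0.02])
base_lgds = np.array([0.50])
base_eads = np.array([10_000.0])
cet1_capital, cet1_floor = 600.0, 0.045

def cet1_ratio(mult: float) -> float:
    stressed = np.minimum(base_pds * mult, 1.0)
    el = float(np.sum(stressed * base_lgds * base_eads))
    return (cet1_capital - el) / float(np.sum(base_eads))  # default RWA proxy = sum(EAD)

low, high = 1.0, 10.0
mid = low
for _ in range(200):
    mid = (low + high) / 2.0
    if abs(cet1_ratio(mid) - cet1_floor) < 1e-6:
        break
    if cet1_ratio(mid) > cet1_floor:
        low = mid      # still above the floor: stress harder
    else:
        high = mid
# mid -> ~1.5 (breach multiplier)
```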

creditriskengine.portfolio.vasicek

Vasicek ASRF model — theoretical foundation of Basel III IRB formulas.

Reference: Vasicek (2002), "The Distribution of Loan Portfolio Value". This is the same model underlying BCBS IRB formulas (CRE31).

The key insight: in a perfectly granular portfolio where all exposures share a single systematic factor, the conditional default rate at the 99.9th percentile is:

P(D|Z) = Phi( (Phi^-1(PD) + sqrt(rho) * Phi^-1(0.999)) / sqrt(1-rho) )

This is exactly the formula used in BCBS CRE31.4.

vasicek_conditional_default_rate(pd, rho, z)

Conditional default rate given systematic factor realization.

Formula

P(D|Z=z) = Phi( (Phi^-1(PD) + sqrt(rho) * z) / sqrt(1-rho) )

Parameters:

Name Type Description Default
pd float

Unconditional probability of default.

required
rho float

Asset correlation (systematic factor loading).

required
z float

Systematic factor realization (standard normal).

required

Returns:

Type Description
float

Conditional default rate.

Source code in creditriskengine\portfolio\vasicek.py
def vasicek_conditional_default_rate(
    pd: float,
    rho: float,
    z: float,
) -> float:
    """Conditional default rate given systematic factor realization.

    Formula:
        P(D|Z=z) = Phi( (Phi^-1(PD) + sqrt(rho) * z) / sqrt(1-rho) )

    Args:
        pd: Unconditional probability of default.
        rho: Asset correlation (systematic factor loading).
        z: Systematic factor realization (standard normal).

    Returns:
        Conditional default rate.
    """
    if pd <= 0.0:
        return 0.0
    if pd >= 1.0:
        return 1.0
    if not 0.0 < rho < 1.0:
        raise ValueError(f"rho must be in (0, 1), got {rho}")

    g_pd = norm.ppf(pd)
    conditional = norm.cdf(
        (g_pd + math.sqrt(rho) * z) / math.sqrt(1.0 - rho)
    )
    return float(conditional)
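The formula needs only the standard library (`statistics.NormalDist` supplies Phi and Phi^-1); a standalone sketch with illustrative parameters. Note the sign convention in this module: larger z is the more adverse systematic state.

```python
import math
from statistics import NormalDist

nd = NormalDist()  # standard normal: nd.cdf is Phi, nd.inv_cdf is Phi^-1

def conditional_default_rate(pd: float, rho: float, z: float) -> float:
    # P(D|Z=z) = Phi( (Phi^-1(PD) + sqrt(rho)*z) / sqrt(1-rho) )
    return nd.cdf((nd.inv_cdf(pd) + math.sqrt(rho) * z) / math.sqrt(1.0 - rho))

benign = conditional_default_rate(0.01, 0.12, 0.0)    # ~0.66%
stressed = conditional_default_rate(0.01, 0.12, 3.0)  # ~8.5%
```

A 1% unconditional PD obligor thus defaults roughly eight times more often conditional on a three-sigma adverse factor draw.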

vasicek_loss_quantile(pd, rho, lgd, confidence=0.999)

Loss quantile for an infinitely granular portfolio (ASRF).

This is the Basel III IRB formula for capital requirement

VaR = LGD * Phi( (Phi^-1(PD) + sqrt(rho) * Phi^-1(q)) / sqrt(1-rho) )

Parameters:

Name Type Description Default
pd float

Probability of default.

required
rho float

Asset correlation.

required
lgd float

Loss given default.

required
confidence float

Confidence level (default 99.9%).

0.999

Returns:

Type Description
float

Loss quantile (fraction of portfolio).

Source code in creditriskengine\portfolio\vasicek.py
def vasicek_loss_quantile(
    pd: float,
    rho: float,
    lgd: float,
    confidence: float = 0.999,
) -> float:
    """Loss quantile for an infinitely granular portfolio (ASRF).

    This is the Basel III IRB formula for capital requirement:
        VaR = LGD * Phi( (Phi^-1(PD) + sqrt(rho) * Phi^-1(q)) / sqrt(1-rho) )

    Args:
        pd: Probability of default.
        rho: Asset correlation.
        lgd: Loss given default.
        confidence: Confidence level (default 99.9%).

    Returns:
        Loss quantile (fraction of portfolio).
    """
    if not 0.0 < rho < 1.0:
        raise ValueError(f"rho must be in (0, 1), got {rho}")

    g_pd = norm.ppf(max(min(pd, 0.9999), 0.0001))
    g_q = norm.ppf(confidence)

    conditional_pd = norm.cdf(
        (g_pd + math.sqrt(rho) * g_q) / math.sqrt(1.0 - rho)
    )

    return lgd * float(conditional_pd)
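The 99.9% quantile can likewise be evaluated with the standard library; a sketch with illustrative parameters (PD = 1%, rho = 0.12, LGD = 45%):

```python
import math
from statistics import NormalDist

nd = NormalDist()
pd_, rho, lgd, q = 0.01, 0.12, 0.45, 0.999

# VaR = LGD * Phi( (Phi^-1(PD) + sqrt(rho)*Phi^-1(q)) / sqrt(1-rho) )
var_999 = lgd * nd.cdf(
    (nd.inv_cdf(pd_) + math.sqrt(rho) * nd.inv_cdf(q)) / math.sqrt(1.0 - rho)
)
# ~4.1% of exposure at the 99.9th percentile
```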

expected_loss(pd, lgd)

Expected loss for a single exposure.

EL = PD * LGD

Parameters:

Name Type Description Default
pd float

Probability of default.

required
lgd float

Loss given default.

required

Returns:

Type Description
float

Expected loss rate.

Source code in creditriskengine\portfolio\vasicek.py
def expected_loss(pd: float, lgd: float) -> float:
    """Expected loss for a single exposure.

    EL = PD * LGD

    Args:
        pd: Probability of default.
        lgd: Loss given default.

    Returns:
        Expected loss rate.
    """
    return pd * lgd

unexpected_loss_asrf(pd, rho, lgd, confidence=0.999)

Unexpected loss under ASRF model.

UL = VaR(q) - EL

Parameters:

Name Type Description Default
pd float

Probability of default.

required
rho float

Asset correlation.

required
lgd float

Loss given default.

required
confidence float

Confidence level.

0.999

Returns:

Type Description
float

Unexpected loss rate.

Source code in creditriskengine\portfolio\vasicek.py
def unexpected_loss_asrf(
    pd: float,
    rho: float,
    lgd: float,
    confidence: float = 0.999,
) -> float:
    """Unexpected loss under ASRF model.

    UL = VaR(q) - EL

    Args:
        pd: Probability of default.
        rho: Asset correlation.
        lgd: Loss given default.
        confidence: Confidence level.

    Returns:
        Unexpected loss rate.
    """
    var = vasicek_loss_quantile(pd, rho, lgd, confidence)
    el = expected_loss(pd, lgd)
    return max(var - el, 0.0)

economic_capital_asrf(pd, rho, lgd, ead, confidence=0.999)

Economic capital calculation under ASRF model.

Parameters:

Name Type Description Default
pd float

Probability of default.

required
rho float

Asset correlation.

required
lgd float

Loss given default.

required
ead float

Exposure at default.

required
confidence float

Confidence level.

0.999

Returns:

Type Description
dict[str, float]

Dict with el, ul, var, ec (all in currency units).

Source code in creditriskengine\portfolio\vasicek.py
def economic_capital_asrf(
    pd: float,
    rho: float,
    lgd: float,
    ead: float,
    confidence: float = 0.999,
) -> dict[str, float]:
    """Economic capital calculation under ASRF model.

    Args:
        pd: Probability of default.
        rho: Asset correlation.
        lgd: Loss given default.
        ead: Exposure at default.
        confidence: Confidence level.

    Returns:
        Dict with el, ul, var, ec (all in currency units).
    """
    el_rate = expected_loss(pd, lgd)
    var_rate = vasicek_loss_quantile(pd, rho, lgd, confidence)
    ul_rate = max(var_rate - el_rate, 0.0)

    return {
        "expected_loss": el_rate * ead,
        "var": var_rate * ead,
        "unexpected_loss": ul_rate * ead,
        "economic_capital": ul_rate * ead,
        "el_rate": el_rate,
        "var_rate": var_rate,
        "ul_rate": ul_rate,
    }
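The full EL/UL/EC decomposition can be sketched standalone with the same standard-library normal; parameters and the 1m exposure are illustrative:

```python
import math
from statistics import NormalDist

nd = NormalDist()
pd_, rho, lgd, ead, q = 0.01, 0.12, 0.45, 1_000_000.0, 0.999

var_rate = lgd * nd.cdf(
    (nd.inv_cdf(pd_) + math.sqrt(rho) * nd.inv_cdf(q)) / math.sqrt(1.0 - rho)
)
el_rate = pd_ * lgd                  # EL = PD * LGD = 0.45%
ul_rate = max(var_rate - el_rate, 0.0)

economic_capital = ul_rate * ead     # ~36,000 in currency units
```

Economic capital equals unexpected loss here because, as in the IRB framework, expected loss is assumed to be covered by provisions rather than capital.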

vasicek_portfolio_loss_distribution(pd, rho, lgd, n_points=1000)

Generate the Vasicek portfolio loss distribution.

Computes the PDF of portfolio losses for an infinitely granular portfolio under the single-factor model.

Parameters:

Name Type Description Default
pd float

Probability of default.

required
rho float

Asset correlation.

required
lgd float

Loss given default.

required
n_points int

Number of points for the distribution.

1000

Returns:

Type Description
tuple[ndarray, ndarray]

Tuple of (loss_values, probability_density).

Source code in creditriskengine\portfolio\vasicek.py
def vasicek_portfolio_loss_distribution(
    pd: float,
    rho: float,
    lgd: float,
    n_points: int = 1000,
) -> tuple[np.ndarray, np.ndarray]:
    """Generate the Vasicek portfolio loss distribution.

    Computes the PDF of portfolio losses for an infinitely granular
    portfolio under the single-factor model.

    Args:
        pd: Probability of default.
        rho: Asset correlation.
        lgd: Loss given default.
        n_points: Number of points for the distribution.

    Returns:
        Tuple of (loss_values, probability_density).
    """
    # Generate systematic factor realizations
    z_values = np.linspace(-4.0, 4.0, n_points)

    # Conditional default rates
    g_pd = norm.ppf(max(min(pd, 0.9999), 0.0001))
    cond_pds = norm.cdf(
        (g_pd + np.sqrt(rho) * z_values) / np.sqrt(1.0 - rho)
    )

    # Loss values
    losses = lgd * cond_pds

    # PDF: standard normal density for Z
    density = norm.pdf(z_values)

    return losses, density
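As a sanity check (a hypothetical snippet, not part of the library), integrating the conditional loss curve against the density of Z recovers the unconditional expected loss LGD * PD by the law of total expectation:

```python
import numpy as np
from scipy.stats import norm

pd_, rho, lgd = 0.02, 0.15, 0.45

# Grid over the systematic factor Z, wide enough to cover the tails
z = np.linspace(-8.0, 8.0, 4001)
g_pd = norm.ppf(pd_)
cond_pd = norm.cdf((g_pd + np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
losses = lgd * cond_pd          # conditional loss rate L(z)
density = norm.pdf(z)           # density of Z

# Trapezoidal integration of L(z) * phi(z) over the grid
f = losses * density
el = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))
print(round(el, 6), round(lgd * pd_, 6))  # → 0.009 0.009
```

The agreement holds for any rho, since correlation redistributes losses across states of Z without changing the mean.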

creditriskengine.portfolio.copula

Gaussian copula Monte Carlo simulation for portfolio credit risk.

Implements single-factor and multi-factor models for credit portfolio loss simulation.

simulate_single_factor(pds, lgds, eads, rho, n_simulations=10000, seed=None, antithetic=True)

Single-factor Gaussian copula Monte Carlo simulation.

Maps to the Basel ASRF model but with finite portfolio granularity.

For each simulation:

1. Draw systematic factor Z ~ N(0,1)
2. For each obligor i, draw idiosyncratic factor eps_i ~ N(0,1)
3. Asset return: A_i = sqrt(rho)*Z + sqrt(1-rho)*eps_i
4. Default if: A_i < Phi^-1(PD_i)
5. Loss = sum(default_i * LGD_i * EAD_i)

Parameters:

Name Type Description Default
pds ndarray

Array of PDs per obligor (N,).

required
lgds ndarray

Array of LGDs per obligor (N,).

required
eads ndarray

Array of EADs per obligor (N,).

required
rho float

Common asset correlation.

required
n_simulations int

Number of Monte Carlo simulations.

10000
seed int | None

Random seed for reproducibility.

None
antithetic bool

Use antithetic variates for variance reduction.

True

Returns:

Type Description
ndarray

Array of portfolio losses (n_simulations,).

Source code in creditriskengine\portfolio\copula.py
def simulate_single_factor(
    pds: np.ndarray,
    lgds: np.ndarray,
    eads: np.ndarray,
    rho: float,
    n_simulations: int = 10_000,
    seed: int | None = None,
    antithetic: bool = True,
) -> np.ndarray:
    """Single-factor Gaussian copula Monte Carlo simulation.

    Maps to the Basel ASRF model but with finite portfolio granularity.

    For each simulation:
    1. Draw systematic factor Z ~ N(0,1)
    2. For each obligor i, draw idiosyncratic factor eps_i ~ N(0,1)
    3. Asset return: A_i = sqrt(rho)*Z + sqrt(1-rho)*eps_i
    4. Default if: A_i < Phi^-1(PD_i)
    5. Loss = sum(default_i * LGD_i * EAD_i)

    Args:
        pds: Array of PDs per obligor (N,).
        lgds: Array of LGDs per obligor (N,).
        eads: Array of EADs per obligor (N,).
        rho: Common asset correlation.
        n_simulations: Number of Monte Carlo simulations.
        seed: Random seed for reproducibility.
        antithetic: Use antithetic variates for variance reduction.

    Returns:
        Array of portfolio losses (n_simulations,).
    """
    if not 0.0 < rho < 1.0:
        raise ValueError(f"rho must be in (0, 1), got {rho}")

    rng = np.random.default_rng(seed)
    n_obligors = len(pds)

    # Default thresholds
    thresholds = norm.ppf(np.maximum(pds, 1e-10))

    if antithetic:
        # Pair each draw with its negation; note that an odd
        # n_simulations is rounded down to an even count.
        half_sims = n_simulations // 2
        z_half = rng.standard_normal(half_sims)
        z = np.concatenate([z_half, -z_half])
        eps_half = rng.standard_normal((half_sims, n_obligors))
        eps = np.concatenate([eps_half, -eps_half], axis=0)
        n_simulations = len(z)
    else:
        z = rng.standard_normal(n_simulations)
        eps = rng.standard_normal((n_simulations, n_obligors))

    # Asset returns: (n_sims, n_obligors)
    sqrt_rho = np.sqrt(rho)
    sqrt_1_minus_rho = np.sqrt(1.0 - rho)
    asset_returns = sqrt_rho * z[:, np.newaxis] + sqrt_1_minus_rho * eps

    # Default indicator
    defaults = asset_returns < thresholds[np.newaxis, :]

    # Portfolio losses
    losses = defaults * lgds[np.newaxis, :] * eads[np.newaxis, :]
    portfolio_losses = np.sum(losses, axis=1)

    return np.asarray(portfolio_losses)
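A minimal, self-contained sketch of the same loop (illustrative portfolio, not the library API): the simulated mean loss should converge to the analytic expected loss sum(PD_i * LGD_i * EAD_i), which the correlation does not affect:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
pds = np.array([0.01, 0.02, 0.05])
lgds = np.array([0.40, 0.45, 0.60])
eads = np.array([100.0, 200.0, 50.0])
rho, n_sims = 0.12, 200_000

thresholds = norm.ppf(pds)                     # default thresholds Phi^-1(PD)
z = rng.standard_normal(n_sims)[:, None]       # systematic factor, (n_sims, 1)
eps = rng.standard_normal((n_sims, len(pds)))  # idiosyncratic shocks
assets = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
defaults = assets < thresholds                 # broadcasts to (n_sims, 3)
sim_losses = (defaults * lgds * eads).sum(axis=1)

analytic_el = float(np.sum(pds * lgds * eads))  # 0.4 + 1.8 + 1.5 = 3.7
print(round(float(sim_losses.mean()), 2), round(analytic_el, 2))
```

The tails, unlike the mean, do depend on rho, which is why the full distribution is simulated rather than summarized.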

simulate_multi_factor(pds, lgds, eads, factor_loadings, n_simulations=10000, seed=None)

Multi-factor Gaussian copula simulation with sector correlations.

Each obligor has loadings on K independent systematic factors:

A_i = sum_k(w_ik * Z_k) + sqrt(1 - sum(w_ik^2)) * eps_i

Parameters:

Name Type Description Default
pds ndarray

Array of PDs (N,).

required
lgds ndarray

Array of LGDs (N,).

required
eads ndarray

Array of EADs (N,).

required
factor_loadings ndarray

Factor loading matrix (N, K).

required
n_simulations int

Number of simulations.

10000
seed int | None

Random seed.

None

Returns:

Type Description
ndarray

Array of portfolio losses (n_simulations,).

Source code in creditriskengine\portfolio\copula.py
def simulate_multi_factor(
    pds: np.ndarray,
    lgds: np.ndarray,
    eads: np.ndarray,
    factor_loadings: np.ndarray,
    n_simulations: int = 10_000,
    seed: int | None = None,
) -> np.ndarray:
    """Multi-factor Gaussian copula simulation with sector correlations.

    Each obligor has loadings on K independent systematic factors:
        A_i = sum_k(w_ik * Z_k) + sqrt(1 - sum(w_ik^2)) * eps_i

    Args:
        pds: Array of PDs (N,).
        lgds: Array of LGDs (N,).
        eads: Array of EADs (N,).
        factor_loadings: Factor loading matrix (N, K).
        n_simulations: Number of simulations.
        seed: Random seed.

    Returns:
        Array of portfolio losses (n_simulations,).
    """
    rng = np.random.default_rng(seed)
    n_obligors, n_factors = factor_loadings.shape

    thresholds = norm.ppf(np.maximum(pds, 1e-10))

    # Systematic factors (n_sims, K)
    z = rng.standard_normal((n_simulations, n_factors))

    # Idiosyncratic factors (n_sims, N)
    eps = rng.standard_normal((n_simulations, n_obligors))

    # Systematic component: (n_sims, N)
    systematic = z @ factor_loadings.T

    # Idiosyncratic scaling
    r_squared = np.sum(factor_loadings ** 2, axis=1)
    idio_scale = np.sqrt(np.maximum(1.0 - r_squared, 0.0))

    asset_returns = systematic + idio_scale[np.newaxis, :] * eps

    defaults = asset_returns < thresholds[np.newaxis, :]
    losses = defaults * lgds[np.newaxis, :] * eads[np.newaxis, :]

    return np.asarray(np.sum(losses, axis=1))
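One common way to build the loading matrix (the sector values here are illustrative assumptions) is a block structure in which each obligor loads on a single sector factor with weight sqrt(rho_sector); the dot product of two loading rows then gives the implied asset correlation between those obligors:

```python
import numpy as np

rho_corp, rho_retail = 0.20, 0.10
factor_loadings = np.array([
    [np.sqrt(rho_corp), 0.0],    # corporate obligor 1
    [np.sqrt(rho_corp), 0.0],    # corporate obligor 2
    [0.0, np.sqrt(rho_retail)],  # retail obligor
])

# Systematic share of asset variance must not exceed 1 for a valid idio scale
r_squared = (factor_loadings ** 2).sum(axis=1)
assert np.all(r_squared <= 1.0)

# Implied asset correlation between the two corporates: w_1 . w_2 = rho_corp
corr = float(factor_loadings[0] @ factor_loadings[1])
print(round(corr, 6))  # → 0.2
```

Cross-sector pairs here have zero implied correlation; adding a common "global" factor column would introduce it.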

credit_var(losses, confidence=0.999)

Credit Value at Risk from simulated loss distribution.

Parameters:

Name Type Description Default
losses ndarray

Simulated portfolio losses.

required
confidence float

Confidence level.

0.999

Returns:

Type Description
float

Credit VaR at the specified confidence level.

Source code in creditriskengine\portfolio\copula.py
def credit_var(
    losses: np.ndarray,
    confidence: float = 0.999,
) -> float:
    """Credit Value at Risk from simulated loss distribution.

    Args:
        losses: Simulated portfolio losses.
        confidence: Confidence level.

    Returns:
        Credit VaR at the specified confidence level.
    """
    return float(np.percentile(losses, confidence * 100))

expected_shortfall(losses, confidence=0.999)

Expected Shortfall (CVaR) from simulated losses.

ES = E[Loss | Loss >= VaR(q)]

Parameters:

Name Type Description Default
losses ndarray

Simulated portfolio losses.

required
confidence float

Confidence level.

0.999

Returns:

Type Description
float

Expected shortfall.

Source code in creditriskengine\portfolio\copula.py
def expected_shortfall(
    losses: np.ndarray,
    confidence: float = 0.999,
) -> float:
    """Expected Shortfall (CVaR) from simulated losses.

    ES = E[Loss | Loss >= VaR(q)]

    Args:
        losses: Simulated portfolio losses.
        confidence: Confidence level.

    Returns:
        Expected shortfall.
    """
    var = credit_var(losses, confidence)
    tail = losses[losses >= var]
    return float(np.mean(tail)) if len(tail) > 0 else var
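On a toy sample (hypothetical numbers), the relationship between the two measures is easy to see: ES averages the tail at or beyond the VaR quantile, so ES >= VaR always holds:

```python
import numpy as np

losses = np.arange(1.0, 1001.0)              # losses 1, 2, ..., 1000

var_99 = float(np.percentile(losses, 99.0))  # linear interpolation → 990.01
tail = losses[losses >= var_99]              # 991, ..., 1000
es_99 = float(tail.mean())                   # (991 + 1000) / 2 = 995.5
print(var_99, es_99)  # → 990.01 995.5
```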

loss_distribution_stats(losses, total_ead)

Summary statistics of the loss distribution.

Parameters:

Name Type Description Default
losses ndarray

Simulated portfolio losses.

required
total_ead float

Total portfolio EAD for percentage calculations.

required

Returns:

Type Description
dict[str, float]

Dict with mean_loss, std_loss, var_999, var_995, es_999, mean_loss_pct, var_999_pct, skewness, kurtosis.

Source code in creditriskengine\portfolio\copula.py
def loss_distribution_stats(
    losses: np.ndarray,
    total_ead: float,
) -> dict[str, float]:
    """Summary statistics of the loss distribution.

    Args:
        losses: Simulated portfolio losses.
        total_ead: Total portfolio EAD for percentage calculations.

    Returns:
        Dict with mean_loss, std_loss, var_999, var_995, es_999,
        mean_loss_pct, var_999_pct, skewness, kurtosis.
    """
    from scipy.stats import kurtosis, skew

    return {
        "mean_loss": float(np.mean(losses)),
        "std_loss": float(np.std(losses)),
        "var_999": credit_var(losses, 0.999),
        "var_995": credit_var(losses, 0.995),
        "es_999": expected_shortfall(losses, 0.999),
        "mean_loss_pct": float(np.mean(losses)) / total_ead * 100 if total_ead > 0 else 0,
        "var_999_pct": credit_var(losses, 0.999) / total_ead * 100 if total_ead > 0 else 0,
        "skewness": float(skew(losses)),
        "kurtosis": float(kurtosis(losses)),
    }

creditriskengine.portfolio.var

Credit VaR utilities.

Value-at-Risk calculations for credit portfolios, including parametric, historical simulation, and Cornish-Fisher approaches, as well as risk decomposition tools (marginal, incremental, component VaR) and Expected Shortfall.

Regulatory context
  • Basel II/III Pillar 1: 99.9% confidence for credit risk capital
  • Basel III FRTB: Expected Shortfall replaces VaR for market risk
  • BCBS 128 (June 2006): International Convergence of Capital Measurement
  • SR 11-7 (Fed): Model Risk Management — VaR model validation

parametric_credit_var(el, ul_std, confidence=0.999)

Parametric Credit VaR assuming normal loss distribution.

VaR = EL + z_alpha * sigma

Reference: BCBS 128 (IRB formula foundation), Pillar 1 at 99.9%.

Parameters:

Name Type Description Default
el float

Expected loss.

required
ul_std float

Standard deviation of unexpected loss.

required
confidence float

Confidence level (e.g. 0.999 for Basel II IRB).

0.999

Returns:

Type Description
float

Credit VaR at the specified confidence level.

Raises:

Type Description
ValueError

If confidence is not in (0, 1) or ul_std is negative.

Source code in creditriskengine\portfolio\var.py
def parametric_credit_var(
    el: float,
    ul_std: float,
    confidence: float = 0.999,
) -> float:
    """Parametric Credit VaR assuming normal loss distribution.

    VaR = EL + z_alpha * sigma

    Reference: BCBS 128 (IRB formula foundation), Pillar 1 at 99.9%.

    Args:
        el: Expected loss.
        ul_std: Standard deviation of unexpected loss.
        confidence: Confidence level (e.g. 0.999 for Basel II IRB).

    Returns:
        Credit VaR at the specified confidence level.

    Raises:
        ValueError: If confidence is not in (0, 1) or ul_std is negative.
    """
    if not 0.0 < confidence < 1.0:
        raise ValueError(f"confidence must be in (0, 1), got {confidence}")
    if ul_std < 0.0:
        raise ValueError(f"ul_std must be non-negative, got {ul_std}")

    z = float(norm.ppf(confidence))
    result = el + z * ul_std

    logger.debug(
        "Parametric VaR: EL=%.4f, UL_std=%.4f, z=%.4f, VaR=%.4f",
        el, ul_std, z, result,
    )
    return result
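A quick numeric check (illustrative inputs): at the Basel 99.9% level z = Phi^-1(0.999) ≈ 3.0902, so EL = 10 and an unexpected-loss sigma of 4 give a VaR of about 22.36:

```python
from scipy.stats import norm

el, ul_std = 10.0, 4.0
z = float(norm.ppf(0.999))  # normal quantile at 99.9%
var = el + z * ul_std
print(round(z, 4), round(var, 2))  # → 3.0902 22.36
```

The normality assumption understates tail risk for credit portfolios; the Cornish-Fisher variant below corrects for skewness and kurtosis.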

marginal_var(portfolio_var, portfolio_std, exposure_contribution_to_std)

Marginal VaR contribution of a single exposure.

Marginal VaR measures the rate of change in portfolio VaR with respect to a small increase in the exposure size.

MVaR_i = VaR_p * (sigma_i_contribution / sigma_p)

Reference: Jorion (2007), "Value at Risk", Ch. 7.

Parameters:

Name Type Description Default
portfolio_var float

Total portfolio VaR.

required
portfolio_std float

Portfolio loss standard deviation.

required
exposure_contribution_to_std float

Exposure's contribution to portfolio std (i.e. d(sigma_p)/d(w_i) or cov(L_i, L_p)/sigma_p).

required

Returns:

Type Description
float

Marginal VaR contribution.

Source code in creditriskengine\portfolio\var.py
def marginal_var(
    portfolio_var: float,
    portfolio_std: float,
    exposure_contribution_to_std: float,
) -> float:
    """Marginal VaR contribution of a single exposure.

    Marginal VaR measures the rate of change in portfolio VaR with
    respect to a small increase in the exposure size.

    MVaR_i = VaR_p * (sigma_i_contribution / sigma_p)

    Reference: Jorion (2007), "Value at Risk", Ch. 7.

    Args:
        portfolio_var: Total portfolio VaR.
        portfolio_std: Portfolio loss standard deviation.
        exposure_contribution_to_std: Exposure's contribution to portfolio std
            (i.e. d(sigma_p)/d(w_i) or cov(L_i, L_p)/sigma_p).

    Returns:
        Marginal VaR contribution.
    """
    if portfolio_std < 1e-15:
        logger.warning("Portfolio std near zero (%.2e); marginal VaR is zero.", portfolio_std)
        return 0.0
    return portfolio_var * (exposure_contribution_to_std / portfolio_std)

historical_simulation_var(loss_distribution, confidence=0.999)

Historical simulation VaR from an empirical loss distribution.

Computes VaR as the quantile of observed or simulated losses. No distributional assumption is required — the result is purely data-driven.

Reference
  • BCBS 128: Non-parametric approaches to credit VaR
  • SR 11-7: Outcomes analysis for VaR back-testing

Parameters:

Name Type Description Default
loss_distribution ndarray

Array of historical or simulated portfolio losses. Positive values represent losses.

required
confidence float

Confidence level (e.g. 0.999 for 99.9%).

0.999

Returns:

Type Description
float

VaR at the given confidence level.

Raises:

Type Description
ValueError

If loss_distribution is empty or confidence is out of range.

Source code in creditriskengine\portfolio\var.py
def historical_simulation_var(
    loss_distribution: np.ndarray,
    confidence: float = 0.999,
) -> float:
    """Historical simulation VaR from an empirical loss distribution.

    Computes VaR as the quantile of observed or simulated losses.
    No distributional assumption is required — the result is purely
    data-driven.

    Reference:
        - BCBS 128: Non-parametric approaches to credit VaR
        - SR 11-7: Outcomes analysis for VaR back-testing

    Args:
        loss_distribution: Array of historical or simulated portfolio losses.
            Positive values represent losses.
        confidence: Confidence level (e.g. 0.999 for 99.9%).

    Returns:
        VaR at the given confidence level.

    Raises:
        ValueError: If loss_distribution is empty or confidence is out of range.
    """
    loss_distribution = np.asarray(loss_distribution, dtype=np.float64)

    if loss_distribution.size == 0:
        raise ValueError("loss_distribution must be non-empty.")
    if not 0.0 < confidence < 1.0:
        raise ValueError(f"confidence must be in (0, 1), got {confidence}")

    result = float(np.percentile(loss_distribution, confidence * 100))

    logger.debug(
        "Historical VaR (%.2f%%): n_scenarios=%d, VaR=%.4f",
        confidence * 100,
        loss_distribution.size,
        result,
    )
    return result
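Because the estimator is just an empirical quantile, its accuracy at 99.9% depends heavily on sample size. With Exp(1) losses (a hypothetical choice for illustration) the true quantile is -ln(0.001) ≈ 6.9078, and 100,000 scenarios get close while a few hundred would not:

```python
import numpy as np

rng = np.random.default_rng(7)
losses = rng.exponential(scale=1.0, size=100_000)

var_999 = float(np.percentile(losses, 99.9))  # empirical 99.9% quantile
true_q = float(-np.log(0.001))                # analytic quantile ≈ 6.9078
print(round(var_999, 2), round(true_q, 4))
```

Roughly 0.1% of the sample, here about 100 points, sits beyond the estimate, which is why back-testing guidance (SR 11-7) stresses sample adequacy at extreme confidence levels.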

cornish_fisher_var(el, ul_std, skewness, kurtosis, confidence=0.999)

Cornish-Fisher VaR adjusting for skewness and excess kurtosis.

Adjusts the normal quantile for non-normality in the loss distribution, which is critical for credit portfolios that are typically right-skewed with fat tails.

Formula

z_cf = z + (z^2 - 1)*S/6 + (z^3 - 3z)*K/24 - (2z^3 - 5z)*S^2/36

Where S = skewness, K = excess kurtosis, z = normal quantile.

Reference
  • Cornish & Fisher (1937)
  • Mina & Xiao (2001), "Return to RiskMetrics" — application to non-normal portfolio loss distributions

Parameters:

Name Type Description Default
el float

Expected loss.

required
ul_std float

Standard deviation of unexpected loss.

required
skewness float

Skewness of the loss distribution.

required
kurtosis float

Excess kurtosis of the loss distribution (raw kurtosis minus 3; zero for the normal distribution).

required
confidence float

Confidence level.

0.999

Returns:

Type Description
float

Cornish-Fisher adjusted VaR.

Raises:

Type Description
ValueError

If ul_std is negative or confidence out of range.

Source code in creditriskengine\portfolio\var.py
def cornish_fisher_var(
    el: float,
    ul_std: float,
    skewness: float,
    kurtosis: float,
    confidence: float = 0.999,
) -> float:
    """Cornish-Fisher VaR adjusting for skewness and excess kurtosis.

    Adjusts the normal quantile for non-normality in the loss distribution,
    which is critical for credit portfolios that are typically right-skewed
    with fat tails.

    Formula:
        z_cf = z + (z^2 - 1)*S/6 + (z^3 - 3z)*K/24 - (2z^3 - 5z)*S^2/36

    Where S = skewness, K = excess kurtosis, z = normal quantile.

    Reference:
        - Cornish & Fisher (1937)
        - Mina & Xiao (2001), "Return to RiskMetrics" — application to
          non-normal portfolio loss distributions

    Args:
        el: Expected loss.
        ul_std: Standard deviation of unexpected loss.
        skewness: Skewness of the loss distribution.
        kurtosis: Excess kurtosis of the loss distribution (raw kurtosis
            minus 3; zero for the normal distribution).
        confidence: Confidence level.

    Returns:
        Cornish-Fisher adjusted VaR.

    Raises:
        ValueError: If ul_std is negative or confidence out of range.
    """
    if not 0.0 < confidence < 1.0:
        raise ValueError(f"confidence must be in (0, 1), got {confidence}")
    if ul_std < 0.0:
        raise ValueError(f"ul_std must be non-negative, got {ul_std}")

    z = float(norm.ppf(confidence))

    z_cf = (
        z
        + (z**2 - 1) * skewness / 6.0
        + (z**3 - 3 * z) * kurtosis / 24.0
        - (2 * z**3 - 5 * z) * skewness**2 / 36.0
    )

    result = el + z_cf * ul_std

    logger.debug(
        "Cornish-Fisher VaR: z_normal=%.4f, z_cf=%.4f, skew=%.4f, "
        "kurt=%.4f, VaR=%.4f",
        z, z_cf, skewness, kurtosis, result,
    )
    return result
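With positive skew and fat tails (illustrative moments below), the adjusted quantile exceeds the normal one, pushing VaR up in line with the right-skewed shape of credit loss distributions:

```python
from scipy.stats import norm

S, K = 1.5, 4.0             # skewness and excess kurtosis (assumed values)
z = float(norm.ppf(0.999))  # ≈ 3.0902

# Cornish-Fisher expansion of the quantile
z_cf = (z + (z**2 - 1) * S / 6.0
          + (z**3 - 3 * z) * K / 24.0
          - (2 * z**3 - 5 * z) * S**2 / 36.0)
print(round(z, 4), round(z_cf, 4))  # → 3.0902 5.8778
```

The expansion is an approximation and can misbehave for extreme moment values; it is most reliable for moderate skewness and kurtosis.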

incremental_var(portfolio_losses, portfolio_with_exposure_losses, confidence=0.999)

Incremental VaR -- change in portfolio VaR from adding an exposure.

Measures the VaR impact of adding a new exposure to the portfolio, useful for portfolio construction and limit-setting decisions.

IVaR = VaR(portfolio + exposure) - VaR(portfolio)

A positive value indicates the new exposure increases portfolio risk.

Reference
  • Jorion (2007), "Value at Risk", Ch. 7
  • SR 11-7: Concentration risk measurement

Parameters:

Name Type Description Default
portfolio_losses ndarray

Simulated loss distribution without the new exposure.

required
portfolio_with_exposure_losses ndarray

Simulated loss distribution including the new exposure.

required
confidence float

Confidence level.

0.999

Returns:

Type Description
float

Incremental VaR (positive means VaR increases).

Raises:

Type Description
ValueError

If either loss distribution is empty.

Source code in creditriskengine\portfolio\var.py
def incremental_var(
    portfolio_losses: np.ndarray,
    portfolio_with_exposure_losses: np.ndarray,
    confidence: float = 0.999,
) -> float:
    """Incremental VaR -- change in portfolio VaR from adding an exposure.

    Measures the VaR impact of adding a new exposure to the portfolio,
    useful for portfolio construction and limit-setting decisions.

    IVaR = VaR(portfolio + exposure) - VaR(portfolio)

    A positive value indicates the new exposure increases portfolio risk.

    Reference:
        - Jorion (2007), "Value at Risk", Ch. 7
        - SR 11-7: Concentration risk measurement

    Args:
        portfolio_losses: Simulated loss distribution without the new exposure.
        portfolio_with_exposure_losses: Simulated loss distribution including
            the new exposure.
        confidence: Confidence level.

    Returns:
        Incremental VaR (positive means VaR increases).

    Raises:
        ValueError: If either loss distribution is empty.
    """
    portfolio_losses = np.asarray(portfolio_losses, dtype=np.float64)
    portfolio_with_exposure_losses = np.asarray(
        portfolio_with_exposure_losses, dtype=np.float64
    )

    if portfolio_losses.size == 0:
        raise ValueError("portfolio_losses must be non-empty.")
    if portfolio_with_exposure_losses.size == 0:
        raise ValueError("portfolio_with_exposure_losses must be non-empty.")

    var_before = historical_simulation_var(portfolio_losses, confidence)
    var_after = historical_simulation_var(portfolio_with_exposure_losses, confidence)
    result = var_after - var_before

    logger.debug(
        "Incremental VaR (%.2f%%): VaR_before=%.4f, VaR_after=%.4f, "
        "IVaR=%.4f",
        confidence * 100,
        var_before,
        var_after,
        result,
    )
    return result

component_var(portfolio_var, portfolio_std, exposure_stds, correlations_with_portfolio)

Component VaR -- decompose portfolio VaR into per-exposure contributions.

Uses the Euler decomposition to attribute portfolio VaR to individual exposures. The key property is that component VaRs sum to total portfolio VaR:

Component_VaR_i = (VaR_p / sigma_p) * sigma_i * rho(i, portfolio)
Sum(Component_VaR_i) = VaR_p

This is essential for risk-based capital allocation and concentration risk monitoring.

Reference
  • Tasche (2000), "Risk contributions and performance measurement"
  • BCBS 128: Pillar 2 concentration risk
  • Jorion (2007), Ch. 7: Euler allocation of VaR

Parameters:

Name Type Description Default
portfolio_var float

Total portfolio VaR.

required
portfolio_std float

Portfolio loss standard deviation.

required
exposure_stds ndarray

Per-exposure loss standard deviations (n_exposures,).

required
correlations_with_portfolio ndarray

Correlation of each exposure's loss with the total portfolio loss (n_exposures,).

required

Returns:

Type Description
ndarray

Array of component VaR contributions (n_exposures,).

Raises:

Type Description
ValueError

If exposure_stds and correlations_with_portfolio have different lengths.

Source code in creditriskengine\portfolio\var.py
def component_var(
    portfolio_var: float,
    portfolio_std: float,
    exposure_stds: np.ndarray,
    correlations_with_portfolio: np.ndarray,
) -> np.ndarray:
    """Component VaR -- decompose portfolio VaR into per-exposure contributions.

    Uses the Euler decomposition to attribute portfolio VaR to individual
    exposures. The key property is that component VaRs sum to total
    portfolio VaR:

        Component_VaR_i = (VaR_p / sigma_p) * sigma_i * rho(i, portfolio)
        Sum(Component_VaR_i) = VaR_p

    This is essential for risk-based capital allocation and concentration
    risk monitoring.

    Reference:
        - Tasche (2000), "Risk contributions and performance measurement"
        - BCBS 128: Pillar 2 concentration risk
        - Jorion (2007), Ch. 7: Euler allocation of VaR

    Args:
        portfolio_var: Total portfolio VaR.
        portfolio_std: Portfolio loss standard deviation.
        exposure_stds: Per-exposure loss standard deviations (n_exposures,).
        correlations_with_portfolio: Correlation of each exposure's loss
            with the total portfolio loss (n_exposures,).

    Returns:
        Array of component VaR contributions (n_exposures,).

    Raises:
        ValueError: If exposure_stds and correlations_with_portfolio have
            different lengths.
    """
    exposure_stds = np.asarray(exposure_stds, dtype=np.float64)
    correlations_with_portfolio = np.asarray(correlations_with_portfolio, dtype=np.float64)

    if exposure_stds.shape != correlations_with_portfolio.shape:
        raise ValueError(
            f"exposure_stds and correlations_with_portfolio must have the same shape, "
            f"got {exposure_stds.shape} and {correlations_with_portfolio.shape}."
        )

    if portfolio_std < 1e-15:
        logger.warning(
            "Portfolio std near zero (%.2e); all component VaRs are zero.",
            portfolio_std,
        )
        return np.zeros_like(exposure_stds)

    result = (portfolio_var / portfolio_std) * exposure_stds * correlations_with_portfolio

    logger.debug(
        "Component VaR: n_exposures=%d, portfolio_VaR=%.4f, "
        "sum(component_VaR)=%.4f",
        len(exposure_stds),
        portfolio_var,
        float(np.sum(result)),
    )
    return result  # type: ignore[no-any-return]
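A quick verification of the additivity property on simulated data (a hypothetical setup with independent normal exposure losses): the component VaRs sum to the portfolio VaR by construction, because sum_i sigma_i * rho(i, portfolio) = sigma_p:

```python
import numpy as np

rng = np.random.default_rng(0)
# Four exposures with loss volatilities 1..4 (illustrative)
exposure_losses = rng.standard_normal((50_000, 4)) * np.array([1.0, 2.0, 3.0, 4.0])
portfolio = exposure_losses.sum(axis=1)

p_std = float(portfolio.std())
p_var = float(np.percentile(portfolio, 99.9))
sigmas = exposure_losses.std(axis=0)
rhos = np.array([
    np.corrcoef(exposure_losses[:, i], portfolio)[0, 1] for i in range(4)
])

# Euler allocation: components sum back to the portfolio VaR
components = (p_var / p_std) * sigmas * rhos
print(round(float(components.sum()), 6), round(p_var, 6))  # the two match
```

The identity holds for any dependence structure, not just the independent case simulated here.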

expected_shortfall(loss_distribution, confidence=0.999)

Expected Shortfall (CVaR) -- average loss beyond VaR.

ES = E[Loss | Loss >= VaR]

Expected Shortfall is a coherent risk measure (satisfies sub-additivity), unlike VaR. It captures tail risk more comprehensively and is mandated by Basel III FRTB for market risk capital.

Reference
  • Basel III FRTB (BCBS d352/d457): ES replaces VaR for market risk
  • Acerbi & Tasche (2002): "On the coherence of Expected Shortfall"
  • BCBS 128: Pillar 2 supplementary risk measures

Parameters:

Name Type Description Default
loss_distribution ndarray

Array of portfolio losses (positive = loss).

required
confidence float

Confidence level (e.g. 0.975 for FRTB, 0.999 for credit).

0.999

Returns:

Type Description
float

Expected Shortfall at the given confidence level.

Raises:

Type Description
ValueError

If loss_distribution is empty or confidence out of range.

Source code in creditriskengine\portfolio\var.py
def expected_shortfall(
    loss_distribution: np.ndarray,
    confidence: float = 0.999,
) -> float:
    """Expected Shortfall (CVaR) -- average loss beyond VaR.

    ES = E[Loss | Loss >= VaR]

    Expected Shortfall is a coherent risk measure (satisfies sub-additivity),
    unlike VaR. It captures tail risk more comprehensively and is mandated
    by Basel III FRTB for market risk capital.

    Reference:
        - Basel III FRTB (BCBS d352/d457): ES replaces VaR for market risk
        - Acerbi & Tasche (2002): "On the coherence of Expected Shortfall"
        - BCBS 128: Pillar 2 supplementary risk measures

    Args:
        loss_distribution: Array of portfolio losses (positive = loss).
        confidence: Confidence level (e.g. 0.975 for FRTB, 0.999 for credit).

    Returns:
        Expected Shortfall at the given confidence level.

    Raises:
        ValueError: If loss_distribution is empty or confidence out of range.
    """
    loss_distribution = np.asarray(loss_distribution, dtype=np.float64)

    if loss_distribution.size == 0:
        raise ValueError("loss_distribution must be non-empty.")
    if not 0.0 < confidence < 1.0:
        raise ValueError(f"confidence must be in (0, 1), got {confidence}")

    var = historical_simulation_var(loss_distribution, confidence)
    tail = loss_distribution[loss_distribution >= var]

    if tail.size == 0:
        # Edge case: no observations at or above VaR quantile
        # This can happen with very small samples; return VaR as lower bound
        logger.warning(
            "No observations at or above VaR (%.4f); returning VaR as ES.",
            var,
        )
        return var

    result = float(np.mean(tail))

    logger.debug(
        "Expected Shortfall (%.2f%%): VaR=%.4f, n_tail=%d, ES=%.4f",
        confidence * 100,
        var,
        tail.size,
        result,
    )
    return result