Methodology

How We Grade AI Risk

The SIRA framework, the Medha Grade, and the VaR model behind the AI Risk Pulse. What we measure, why it matters, and where the data comes from.

The Framework

Seven Layers of AI Risk

SIRA (Synthetic Intelligence Risk Assessment) maps risk vertically from energy infrastructure to human cognition. Most frameworks stop at the application layer. We don't.

L7 Human (most critical, least examined): Cognitive atrophy, deskilling, safety culture erosion
L6 Workforce: Layoffs, role displacement, institutional knowledge loss
L5 Market: Stock impact, competitive distortion, sector contagion
L4 Vendor: Concentration risk, dependency lock-in, supply chain fragility
L3 Application: Hallucination, data leakage, integration failure
L2 Infrastructure: Outages, compute dependency, latency spikes
L1 Energy: Power consumption, sustainability, resource strain

Each instrument is assessed across all seven layers with proprietary sensitivity weights. High-exposure layers drive the grade; low-exposure layers are monitored but weighted accordingly.
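
For illustration, a minimal sketch of how layer-weighted aggregation could work. The weights, scores, and normalisation below are hypothetical placeholders, not our calibrated values:

```python
# Illustrative layer-weighted scoring. All weights and scores below are
# hypothetical placeholders; the calibrated values are proprietary.

SIRA_LAYERS = ["L1 Energy", "L2 Infrastructure", "L3 Application",
               "L4 Vendor", "L5 Market", "L6 Workforce", "L7 Human"]

def composite_risk(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weight each layer's risk score (1-5) by the instrument's
    sensitivity weight (0-1.0), normalised by total weight."""
    total = sum(scores[layer] * weights[layer] for layer in SIRA_LAYERS)
    return total / sum(weights.values())

# A workforce-heavy instrument: high exposure at L6/L7, low at L1.
weights = {"L1 Energy": 0.2, "L2 Infrastructure": 0.5, "L3 Application": 0.8,
           "L4 Vendor": 0.6, "L5 Market": 0.5, "L6 Workforce": 1.0,
           "L7 Human": 1.0}
scores = {"L1 Energy": 1.0, "L2 Infrastructure": 2.0, "L3 Application": 3.0,
          "L4 Vendor": 2.5, "L5 Market": 2.0, "L6 Workforce": 4.0,
          "L7 Human": 3.5}

print(f"Composite risk: {composite_risk(scores, weights):.2f}")  # ~2.96
```

High-exposure layers dominate the composite exactly as described above: the L6/L7 scores pull the result up because their weights are maximal.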

The Grade

A Credit Rating for AI Health

Seven grades from equilibrium to failure. Each represents a composite assessment of AI adoption risk across all SIRA layers, incorporating dependency, vendor concentration, cognitive reserve, and the value ledger.

AAA (Equilibrium): AI augments without dependency. Minimal value destruction.
AA (Healthy): Minor dependency risks. Value creation far exceeds destruction.
A (Adequate): Monitor trends. Emerging risks in 1–2 layers.
BBB (Warning): Restructure needed. Value destruction measurable.
BB (Stressed): Intervention required. Multiple layers under stress.
B (Critical): Dissolve non-essential AI. Destruction outpacing creation.
CCC (Failure): AI destroying more value than it creates.

Grades are derived from proprietary scoring of event severity, layer exposure weights, and structural risk indicators. The specific thresholds and calibration model are internal to Purna Medha.

Value at Risk

Two-Component VaR Model

VaR isn't a simple currency conversion. It integrates two distinct sources of monetary risk, each priced differently by region.

AI Spend Risk

The portion of an organisation's annual AI investment that is at risk of being wasted, misallocated, or actively value-destroying. Derived from the Medha Grade's risk range applied to the instrument's benchmark AI spend per employee.

Converted to local currency at indicative exchange rates.

Workforce Disruption Cost

The salary cost of employees at risk of displacement, deskilling, or role degradation. This component is priced at regional salary benchmarks — not converted from USD — because the economic cost of losing a worker in Mumbai is fundamentally different from losing one in Zurich.

Displacement factors are proprietary, derived from client data and reversal research.

Why this matters

A ₼BB grade in India produces a different total VaR than a ₼BB grade in Switzerland — even for the same instrument and org size. The AI spend component is similar (global tool pricing), but the workforce component diverges sharply because local salaries differ by 5–10x. This reflects reality: the economic cost of AI-driven displacement is a function of local labour markets, not just exchange rates.
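
A minimal sketch of the two-component arithmetic, assuming hypothetical risk fractions, spend benchmarks, displacement factors, and exchange rates (the real inputs are proprietary and grade-dependent):

```python
# Illustrative two-component VaR. The risk fraction, spend benchmark, and
# displacement factor are hypothetical; the real values are proprietary.

def total_var(headcount: int,
              spend_per_employee_usd: float,  # global AI spend benchmark
              spend_risk_fraction: float,     # from the grade's risk range
              median_salary_local: float,     # regional benchmark, local currency
              displacement_factor: float,     # share of salary cost at risk
              fx_usd_to_local: float) -> float:
    """Spend risk converts from USD; workforce risk is priced locally."""
    spend_risk = headcount * spend_per_employee_usd * spend_risk_fraction * fx_usd_to_local
    workforce_risk = headcount * median_salary_local * displacement_factor
    return spend_risk + workforce_risk

# Same hypothetical grade and org size in two regions: the spend component
# is similar after conversion; the workforce component tracks local salaries.
print(f"India:       INR {total_var(1000, 600, 0.30, 1_200_000, 0.05, 83.0):,.0f}")
print(f"Switzerland: CHF {total_var(1000, 600, 0.30, 110_000, 0.05, 0.88):,.0f}")
```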

Instruments

What Gets Graded

Each instrument is a defined sector of AI adoption with its own risk profile, SIRA layer exposure, and benchmark economics. Like a financial instrument, each has a ticker, a grade, and a VaR.

GLBL.AI (Global AI Ecosystem): Aggregate risk across all sectors
SAAS.AI (Enterprise SaaS AI): Copilot, Gemini Workspace, Claude for Work adoption
FIN.AI (Financial Services AI): Trading, risk modelling, fraud detection, client servicing
MED.AI (Medical & Healthcare AI): Clinical decision support, diagnostic imaging, drug discovery, surgical robotics
CXOP.AI (Customer Operations AI): Support/success teams replaced or augmented by AI agents
AUTO.AI (Autonomous Systems): Self-driving, robotics, drone delivery
DEVX.AI (Developer Productivity AI): Coding copilots, AI-assisted CI/CD, testing
LAW.AI (Legal & Compliance AI): Contract review, legal research, due diligence, compliance monitoring, e-discovery
IND.AI (Heavy Industry & Manufacturing AI): Predictive maintenance, quality inspection, supply chain optimisation, digital twins, industrial robotics
GAME.AI (Entertainment & Gaming AI): AI-generated content, procedural worlds, NPC intelligence, real-time rendering, training simulation

Each instrument has proprietary layer sensitivity weights and an AI spend benchmark calibrated from enterprise data. New instruments are added as AI adoption expands into new sectors.

Layer Sensitivity

How We Weight Each Layer per Instrument

Each instrument's SIRA layer sensitivity controls the heatmap and determines which events affect which instruments most. Weights range from 0 (no exposure) to 1.0 (maximum exposure).

Domain Relevance

Which SIRA layers are structurally important to this sector? For MED.AI, L3 (Application) is 1.0 because a diagnostic hallucination IS a misdiagnosis. For IND.AI, L2 (Infrastructure) is 0.9 because factory uptime depends on cloud and compute reliability.

Failure Consequence

What happens when this layer fails for this instrument? L2 outages in healthcare (MED.AI) are life-critical (weight 0.8), while L2 outages for developer tools (DEVX.AI) are inconvenient but not dangerous (weight 0.4).

Historical Event Correlation

Which layers have historically produced the most impactful events for this sector? CXOP.AI weights L7 (Human) and L6 (Workforce) at 1.0 because the Klarna pattern — mass replacement followed by reversal — plays out primarily in those layers.

Regulatory & Liability Exposure

Which layers carry regulatory risk? FIN.AI weights L3 (Application) at 0.9 because hallucination in client-facing financial tools triggers regulatory liability. LAW.AI weights L3 at 1.0 because fabricated citations in court filings have already caused sanctions.

Dependency Depth

How deeply does this sector depend on AI at each layer? AUTO.AI weights L1 (Energy) at 0.5 — higher than any other instrument — because autonomous systems require sustained compute power with zero interruption.

Calibration Process

Layer weights are calibrated through a three-step process:

  1. Initial assignment based on sector domain analysis and structural dependency mapping.
  2. Back-testing against historical events — do high-weight layers correspond to the layers where real incidents occurred? (A sketch of this check follows the list.)
  3. Client validation — weights are refined as enterprise assessments reveal actual layer exposure in the field.
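
For illustration, a minimal sketch of the step-2 back-test, with invented weights and event data:

```python
# Hypothetical back-test for step 2: do high-weight layers line up with the
# layers where historical events actually occurred? Event data is invented.

def backtest_alignment(weights: dict[str, float],
                       event_layers: list[str],
                       high_weight: float = 0.7) -> float:
    """Fraction of recorded events that fell on high-weight layers."""
    high = {layer for layer, w in weights.items() if w >= high_weight}
    return sum(layer in high for layer in event_layers) / len(event_layers)

weights = {"L2": 0.4, "L3": 0.6, "L6": 1.0, "L7": 1.0}
events = ["L7", "L6", "L6", "L3", "L7"]  # layers of past incidents
print(f"{backtest_alignment(weights, events):.0%} of events on high-weight layers")
# 80%; persistent misalignment would trigger a weight recalibration
```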

The specific weight values are proprietary. The methodology is transparent; the calibration numbers are internal to Purna Medha.

Central Benchmark

Stability-Plasticity Estimate (SPE)

A second dimension beyond the Medha Grade. SPE measures how much an instrument can absorb AI change without systemic breakage — the balance between stability and adaptability.

Stability

Can this sector maintain operations if AI fails or is removed? High stability means human processes still exist as fallbacks. Low stability means AI removal would cause operational collapse.

Plasticity

Can this sector adopt new AI capabilities without creating fragility? High plasticity means the sector can integrate AI safely. Low plasticity means each new AI tool adds compounding risk.

70–100% (Resilient): Can function with or without AI. Augmentation model, not replacement.
40–69% (Moderate): Partial AI dependency. Some workflows need AI, others have fallbacks.
0–39% (Brittle): High AI dependency, low rollback capacity. Removal causes breakage.
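
As a small illustration, the published bands map directly to a classifier:

```python
# Helper mapping an SPE score to its published band.

def spe_band(spe_pct: float) -> str:
    if spe_pct >= 70:
        return "Resilient"  # functions with or without AI
    if spe_pct >= 40:
        return "Moderate"   # partial dependency, some fallbacks
    return "Brittle"        # removal causes breakage

print(spe_band(25), spe_band(60))  # Brittle Moderate
```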

Why SPE matters

The Medha Grade tells you how much risk exists now. The SPE tells you how fragile the system is if conditions change. A ₼BBB instrument with 25% SPE is far more dangerous than a ₼BBB instrument with 60% SPE — because the first one cannot absorb shocks, while the second can fall back to human processes. SPE is the difference between a controlled correction and a systemic failure.

Infrastructure Risk

Systemic Exposure Index (SEI)

Infrastructure risk — GPU monopoly, energy grid, compute concentration — is real but sits at the industry level, not the firm level. Companies don’t directly bear GPU costs, so adding infrastructure dollars to firm VaR is economically wrong. The SEI captures this as a separate dimension.

The Abstraction Chain

AI infrastructure cost flows through layers of abstraction: GPU chips (Nvidia/TSMC) → cloud providers (AWS/Azure/GCP) → API services (OpenAI/Anthropic) → SaaS subscriptions. Each layer absorbs some risk but passes concentration through.

A company paying $50/seat/month for Copilot doesn’t see the GPU cost — but if TSMC fab output drops 20%, the effects cascade up the chain to every AI-dependent business.

Three Inputs to SEI

L2 (Infrastructure), weighted 40%: the instrument’s weight on the infrastructure SIRA layer. Higher means more compute-dependent.
L1 (Energy), weighted 25%: the instrument’s weight on the energy SIRA layer. Data centers competing for grid capacity.
Supply Chain Concentration, weighted 35%: how locked-in the sector is to monopoly suppliers (Nvidia Drive for auto, Siemens for industrial, cloud APIs for SaaS).

The resulting score maps to four bands:

70–100% (Critical): Deep dependency on monopoly infrastructure. Supply shock cascades immediately.
50–69% (Elevated): Significant exposure. Cloud abstracts but doesn’t eliminate concentration.
30–49% (Moderate): Some infrastructure dependency. Lighter compute needs reduce exposure.
0–29% (Low): Minimal infrastructure dependency. Operations function without GPU-intensive compute.
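
A sketch of the composition, using the published 40/25/35 blend. The example inputs are hypothetical, chosen to land near IND.AI's published 92%; per-instrument inputs are internal:

```python
# SEI composition using the published 40/25/35 input weights. The example
# inputs are hypothetical, chosen to land near IND.AI's published 92%.

def sei(l2_weight: float, l1_weight: float, concentration: float) -> float:
    """All inputs on a 0-1 scale; returns SEI as a 0-1 fraction."""
    return 0.40 * l2_weight + 0.25 * l1_weight + 0.35 * concentration

# Heavy-industry-like profile: compute-dependent, energy-hungry, locked in.
print(f"SEI = {sei(0.9, 0.9, 0.95):.0%}")  # SEI = 92%
```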

VaR Stress Multiplier

The SEI drives a convex stress multiplier: multiplier = 1 + 3 × SEI^1.5, with SEI expressed as a 0–1 fraction. This curve means low-SEI sectors barely budge under stress, while high-SEI sectors see their VaR multiply dramatically.

IND.AI: SEI 92% (Critical), multiplier 3.65x
AUTO.AI: SEI 75% (Critical), multiplier 2.95x
GLBL.AI: SEI 66% (Elevated), multiplier 2.61x
MED.AI: SEI 50% (Elevated), multiplier 2.06x
DEVX.AI: SEI 42% (Moderate), multiplier 1.82x
LAW.AI: SEI 32% (Moderate), multiplier 1.54x
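
Because the formula is published, the table can be reproduced directly:

```python
# The published stress curve: multiplier = 1 + 3 * SEI^1.5, with SEI as a
# 0-1 fraction. This reproduces the table above.

def stress_multiplier(sei: float) -> float:
    return 1 + 3 * sei ** 1.5

for ticker, sei in [("IND.AI", 0.92), ("AUTO.AI", 0.75), ("GLBL.AI", 0.66),
                    ("MED.AI", 0.50), ("DEVX.AI", 0.42), ("LAW.AI", 0.32)]:
    print(f"{ticker}: {stress_multiplier(sei):.2f}x")  # 3.65x ... 1.54x
```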

Why SEI matters

Two companies can have the same Medha Grade and VaR — but if one is in heavy industry (SEI 92%) and the other in legal (SEI 32%), their real exposure is wildly different. A GPU supply crisis multiplies the manufacturer’s risk by 3.65x while barely touching the law firm. Which industry you belong to changes your VaR more than your company size. The SEI makes this visible.

The Core Claim

Structural Indicators Precede Events

The value of risk assessment equals the time delta between warning and event. If the framework warns before the event materialises, it has demonstrable predictive value.

The value of risk assessment = Δt between warning and event.

If Δt > 0, the assessment has value.

Safety departures predict governance failures. Vendor concentration predicts outage impact. Workforce displacement patterns predict reversal rates. The SIRA framework tracks these structural indicators and measures the time between signal and event across all seven layers.
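
A minimal sketch of the underlying measurement, with invented dates:

```python
# Minimal sketch of the timing-proof metric: lead time (Δt) between a
# structural warning and the event it preceded. Dates are invented.

from datetime import date

def lead_time_days(warning: date, event: date) -> int:
    """Positive Δt means the warning preceded the event."""
    return (event - warning).days

dt = lead_time_days(date(2024, 5, 1), date(2024, 11, 12))
print(f"Δt = {dt} days -> {'predictive' if dt > 0 else 'lagging'}")
```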

Specific timing proof data — which predictions were made and when they materialised — is tracked internally and shared with audit clients. The Pulse page shows events as they occur; the timing proof validates the framework over time.

Severity

How Events Are Scored

Each event on the Pulse is classified by type, assigned to a SIRA layer, and scored for severity on a 1–5 scale.

Safety Exit: Resignation or termination of AI safety personnel
Outage: Service disruptions at major AI vendors
Market: Stock/sector impact from AI-related events
Layoff: AI-driven workforce reductions
Incident: Physical harm, data breach, or financial loss from AI
Reversal: Companies reversing AI displacement decisions

1–2 (Stable): Normal conditions, isolated minor events
2–3 (Elevated): Notable events in 1–2 SIRA layers
3–4 (Stressed): Multiple high-severity events, action needed
4–5 (Critical): Systemic signals, immediate assessment recommended

The severity scoring rubric — how each event type maps to a specific score — is proprietary. Events are curated and scored by the Purna Medha team.
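
For illustration, a hypothetical event record using the published bands; the type-to-score rubric itself stays proprietary:

```python
# Hypothetical event record using the published severity bands. The
# type-to-score rubric is proprietary; 3.4 is a placeholder score.

from dataclasses import dataclass

BANDS = [(4.0, "Critical"), (3.0, "Stressed"), (2.0, "Elevated"), (0.0, "Stable")]

@dataclass
class PulseEvent:
    event_type: str  # Safety Exit, Outage, Market, Layoff, Incident, Reversal
    sira_layer: str  # L1-L7
    severity: float  # 1-5, assigned by the curation team

    def band(self) -> str:
        return next(label for floor, label in BANDS if self.severity >= floor)

print(PulseEvent("Reversal", "L6", 3.4).band())  # Stressed
```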

Data Sources

Where the Signals Come From

Seven categories of data feed the Pulse, each with its own update cadence and mapped to specific SIRA layers.

AI Incident Database (days–weeks): L4–L7
Vendor Status Pages (real-time): L2–L3
Layoff & Reversal Trackers (days–weeks): L6–L7
Safety Departures (days): L7
Regulatory & Policy Feed (weeks–months): L5–L7
S&P 500 AI Risk Disclosures (quarterly): all layers
Regional Salary Benchmarks (annual): L6–L7

Regional Salary Benchmark Sources

Workforce VaR uses median enterprise employee cost (salary + employer benefits) from official labour statistics and established compensation surveys in each region.

United States: Bureau of Labor Statistics (BLS) Occupational Employment Statistics
EU-27: Eurostat Structure of Earnings Survey
United Kingdom: ONS Annual Survey of Hours and Earnings (ASHE)
India (Tier 1): Mercer Total Remuneration Survey + Aon India Salary Increase Survey
Japan: MHLW Basic Survey on Wage Structure
Switzerland: FSO Swiss Earnings Structure Survey (LSE)
Singapore: MOM Labour Force Survey
UAE: MOHRE / GulfTalent Salary Survey

What This Is Not

Not financial advice. VaR figures are indicative benchmarks for risk awareness, not forecasts or investment recommendations.

Not a real-time feed. Events are curated and updated weekly. Automated feeds are on the roadmap.

Not company-specific. Instruments are sector-level benchmarks. For your organisation's Medha Grade, book an assessment.

Not static. Grades, severity, and VaR update as new events are recorded. The framework improves with every assessment.

See the methodology in action

The AI Risk Pulse applies this framework live, every week. Or get your own organisation graded.