How We Grade AI Risk
The SIRA framework, the Medha Grade, and the VaR model behind the AI Risk Pulse. What we measure, why it matters, and where the data comes from.
Seven Layers of AI Risk
SIRA (Synthetic Intelligence Risk Assessment) maps risk vertically from energy infrastructure to human cognition. Most frameworks stop at the application layer. We don't.
- L7 (Human): Cognitive atrophy, deskilling, safety culture erosion
- L6 (Workforce): Layoffs, role displacement, institutional knowledge loss
- L5: Stock impact, competitive distortion, sector contagion
- L4: Concentration risk, dependency lock-in, supply chain fragility
- L3 (Application): Hallucination, data leakage, integration failure
- L2 (Infrastructure): Outages, compute dependency, latency spikes
- L1 (Energy): Power consumption, sustainability, resource strain
Each instrument is assessed across all seven layers with proprietary sensitivity weights. High-exposure layers drive the grade; low-exposure layers are monitored but weighted accordingly.
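As a rough illustration of how layer weighting shapes a composite, here is a minimal sketch. The scores, weights, and the weighted-average rule are all invented for the example; the real sensitivity weights and scoring model are proprietary.

```python
# Minimal sketch of layer-weighted scoring. All numbers below are
# illustrative, not Purna Medha's proprietary calibration.

LAYERS = ["L1", "L2", "L3", "L4", "L5", "L6", "L7"]

def composite_exposure(layer_scores: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Weighted average of per-layer risk scores (0-1 scale).
    High-weight layers drive the result; low-weight layers still
    contribute, just proportionally less."""
    total = sum(weights[l] * layer_scores[l] for l in LAYERS)
    return total / sum(weights[l] for l in LAYERS)

# Hypothetical instrument with strong application-layer exposure:
weights = {"L1": 0.2, "L2": 0.6, "L3": 1.0, "L4": 0.7,
           "L5": 0.5, "L6": 0.8, "L7": 0.9}
scores = {"L1": 0.1, "L2": 0.3, "L3": 0.8, "L4": 0.4,
          "L5": 0.2, "L6": 0.6, "L7": 0.5}
print(f"Composite exposure: {composite_exposure(scores, weights):.2f}")
```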
A Credit Rating for AI Health
Seven grades from equilibrium to failure. Each represents a composite assessment of AI adoption risk across all SIRA layers, incorporating dependency, vendor concentration, cognitive reserve, and the value ledger.
Grades are derived from proprietary scoring of event severity, layer exposure weights, and structural risk indicators. The specific thresholds and calibration model are internal to Purna Medha.
Two-Component VaR Model
VaR isn't a simple currency conversion. It integrates two distinct sources of monetary risk, each priced differently by region.
AI Spend Risk
The portion of an organisation's annual AI investment that is at risk of being wasted, misallocated, or actively value-destroying. Derived from the Medha Grade's risk range applied to the instrument's benchmark AI spend per employee.
Converted to local currency at indicative exchange rates.
Workforce Disruption Cost
The salary cost of employees at risk of displacement, deskilling, or role degradation. This component is priced at regional salary benchmarks — not converted from USD — because the economic cost of losing a worker in Mumbai is fundamentally different from losing one in Zurich.
Displacement factors are proprietary, derived from client data and reversal research.
Why this matters
A ₼BB grade in India produces a different total VaR than a ₼BB grade in Switzerland — even for the same instrument and org size. The AI spend component is similar (global tool pricing), but the workforce component diverges sharply because local salaries differ by 5–10x. This reflects reality: the economic cost of AI-driven displacement is a function of local labor markets, not just exchange rates.
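A minimal sketch of the two-component arithmetic, assuming a simple linear model. Every figure here (grade risk fraction, spend benchmark, salaries, displacement factor, FX rates) is hypothetical; the real calibration is internal to Purna Medha.

```python
# Two-component VaR sketch: at-risk AI spend (converted from USD)
# plus workforce disruption (priced in local salaries). All inputs
# are hypothetical placeholders.

def total_var(headcount: int,
              grade_risk: float,          # risk fraction implied by the Medha Grade
              ai_spend_per_emp_usd: float,
              fx_usd_to_local: float,     # indicative exchange rate
              median_salary_local: float,
              displacement_factor: float) -> float:
    # Component 1: at-risk share of AI spend, converted to local currency.
    spend_risk = headcount * ai_spend_per_emp_usd * grade_risk * fx_usd_to_local
    # Component 2: workforce disruption, priced at regional salary
    # benchmarks (never converted from USD).
    workforce_cost = headcount * median_salary_local * displacement_factor
    return spend_risk + workforce_cost

# Same instrument and org size, two regions:
india = total_var(1000, 0.15, 600, 83.0, 1_200_000, 0.03)  # INR
swiss = total_var(1000, 0.15, 600, 0.88, 110_000, 0.03)    # CHF
print(f"India VaR:       ₹{india:,.0f}")
print(f"Switzerland VaR: CHF {swiss:,.0f}")
```

The spend component stays in the same ballpark across regions; the workforce component dominates the total and diverges with local salaries, which is the point of pricing it locally.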
What Gets Graded
Each instrument is a defined sector of AI adoption with its own risk profile, SIRA layer exposure, and benchmark economics. Like a financial instrument, each has a ticker, a grade, and a VaR.
- Global AI Ecosystem: Aggregate risk across all sectors
- Enterprise SaaS AI: Copilot, Gemini Workspace, Claude for Work adoption
- Financial Services AI (FIN.AI): Trading, risk modelling, fraud detection, client servicing
- Medical & Healthcare AI (MED.AI): Clinical decision support, diagnostic imaging, drug discovery, surgical robotics
- Customer Operations AI (CXOP.AI): Support/success teams replaced or augmented by AI agents
- Autonomous Systems (AUTO.AI): Self-driving, robotics, drone delivery
- Developer Productivity AI (DEVX.AI): Coding copilots, AI-assisted CI/CD, testing
- Legal & Compliance AI (LAW.AI): Contract review, legal research, due diligence, compliance monitoring, e-discovery
- Heavy Industry & Manufacturing AI (IND.AI): Predictive maintenance, quality inspection, supply chain optimisation, digital twins, industrial robotics
- Entertainment & Gaming AI: AI-generated content, procedural worlds, NPC intelligence, real-time rendering, training simulation
Each instrument has proprietary layer sensitivity weights and an AI spend benchmark calibrated from enterprise data. New instruments are added as AI adoption expands into new sectors.
How We Weight Each Layer per Instrument
Each instrument's SIRA layer sensitivity controls the heatmap and determines which events affect which instruments most. Weights range from 0 (no exposure) to 1.0 (maximum exposure); a sketch of how a weight turns an event into instrument impact follows the five factors below.
Domain Relevance
Which SIRA layers are structurally important to this sector? For MED.AI, L3 (Application) is 1.0 because a diagnostic hallucination IS a misdiagnosis. For IND.AI, L2 (Infrastructure) is 0.9 because factory uptime depends on cloud and compute reliability.
Failure Consequence
What happens when this layer fails for this instrument? L2 outages in healthcare (MED.AI) are life-critical (weight 0.8), while L2 outages for developer tools (DEVX.AI) are inconvenient but not dangerous (weight 0.4).
Historical Event Correlation
Which layers have historically produced the most impactful events for this sector? CXOP.AI weights L7 (Human) and L6 (Workforce) at 1.0 because the Klarna pattern — mass replacement followed by reversal — plays out primarily in those layers.
Regulatory & Liability Exposure
Which layers carry regulatory risk? FIN.AI weights L3 (Application) at 0.9 because hallucination in client-facing financial tools triggers regulatory liability. LAW.AI weights L3 at 1.0 because fabricated citations in court filings have already caused sanctions.
Dependency Depth
How deeply does this sector depend on AI at each layer? AUTO.AI weights L1 (Energy) at 0.5 — higher than any other instrument — because autonomous systems require sustained compute power with zero interruption.
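To make the mechanics concrete, here is a sketch built only from the example weights quoted above. Every other cell of the real sensitivity matrix is proprietary and deliberately omitted; the scaling rule itself is a plausible reading, not the published model.

```python
# Partial sensitivity matrix using only the publicly quoted weights.

SENSITIVITY = {
    "MED.AI":  {"L3": 1.0, "L2": 0.8},
    "IND.AI":  {"L2": 0.9},
    "DEVX.AI": {"L2": 0.4},
    "CXOP.AI": {"L7": 1.0, "L6": 1.0},
    "FIN.AI":  {"L3": 0.9},
    "LAW.AI":  {"L3": 1.0},
    "AUTO.AI": {"L1": 0.5},
}

def event_impact(instrument: str, layer: str, severity: int) -> float | None:
    """Event severity (1-5) scaled by the instrument's sensitivity
    to the event's SIRA layer; None where the weight isn't public."""
    weight = SENSITIVITY.get(instrument, {}).get(layer)
    return None if weight is None else severity * weight

# A severity-4 application-layer (L3) event lands harder on MED.AI
# than on FIN.AI because of the higher L3 weight:
print(event_impact("MED.AI", "L3", 4))  # 4.0
print(event_impact("FIN.AI", "L3", 4))  # 3.6
```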
Calibration Process
Layer weights are calibrated through a three-step process:
- Initial assignment based on sector domain analysis and structural dependency mapping.
- Back-testing against historical events — do high-weight layers correspond to the layers where real incidents occurred?
- Client validation — weights are refined as enterprise assessments reveal actual layer exposure in the field.
The specific weight values are proprietary. The methodology is transparent; the calibration numbers are internal to Purna Medha.
Stability-Plasticity Estimate (SPE)
A second dimension beyond the Medha Grade. SPE measures how much an instrument can absorb AI change without systemic breakage — the balance between stability and adaptability.
Stability
Can this sector maintain operations if AI fails or is removed? High stability means human processes still exist as fallbacks. Low stability means AI removal would cause operational collapse.
Plasticity
Can this sector adopt new AI capabilities without creating fragility? High plasticity means the sector can integrate AI safely. Low plasticity means each new AI tool adds compounding risk.
Why SPE matters
The Medha Grade tells you how much risk exists now. The SPE tells you how fragile the system is if conditions change. A ₼BBB instrument with 25% SPE is far more dangerous than a ₼BBB instrument with 60% SPE — because the first one cannot absorb shocks, while the second can fall back to human processes. SPE is the difference between a controlled correction and a systemic failure.
Systemic Exposure Index (SEI)
Infrastructure risk — GPU monopoly, energy grid, compute concentration — is real but sits at the industry level, not the firm level. Companies don’t directly bear GPU costs, so adding infrastructure dollars to firm VaR is economically wrong. The SEI captures this as a separate dimension.
The Abstraction Chain
AI infrastructure cost flows through layers of abstraction: GPU chips (Nvidia/TSMC) → cloud providers (AWS/Azure/GCP) → API services (OpenAI/Anthropic) → SaaS subscriptions. Each layer absorbs some risk but passes concentration through.
A company paying $50/seat/month for Copilot doesn’t see the GPU cost — but if TSMC fab output drops 20%, the effects cascade up the chain to every AI-dependent business.
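A toy sketch of that pass-through, assuming each layer absorbs a fixed share of the shock. The absorption factors are invented purely for illustration; the point is only that no layer absorbs the shock completely.

```python
# Hypothetical shock cascading up the abstraction chain. The
# absorption shares are made-up illustrative values.

CHAIN = [
    ("GPU fabs (e.g. TSMC)", 0.00),   # shock originates here
    ("Cloud providers",      0.30),   # absorbs some via spare capacity
    ("API services",         0.20),   # absorbs some via model routing
    ("SaaS subscriptions",   0.10),   # absorbs some via margins
]

shock = 0.20  # hypothetical 20% drop in fab output
for layer, absorbed in CHAIN:
    shock *= (1 - absorbed)
    print(f"{layer:24s} residual shock: {shock:.1%}")
# Every AI-dependent business downstream still feels the residue.
```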
Three Inputs to SEI
The index aggregates the three industry-level risks named above into a single percentage score per instrument: GPU monopoly, energy grid strain, and compute concentration.
VaR Stress Multiplier
The SEI drives a convex stress multiplier: multiplier = 1 + 3 × SEI^1.5. This curve means low-SEI sectors barely budge under stress, while high-SEI sectors see their VaR multiply dramatically.
Why SEI matters
Two companies can have the same Medha Grade and VaR — but if one is in heavy industry (SEI 92%) and the other in legal (SEI 32%), their real exposure is wildly different. Under the curve above, a GPU supply crisis multiplies the manufacturer's VaR by 3.65x but the law firm's by only 1.54x. Which industry you belong to changes your VaR more than your company size. The SEI makes this visible.
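The published curve is enough to reproduce those numbers; a minimal sketch:

```python
# Reproducing the stress multipliers from the published curve
# multiplier = 1 + 3 * SEI**1.5, with SEI as a 0-1 fraction.

def stress_multiplier(sei: float) -> float:
    """Convex in SEI: low-SEI sectors barely move under stress,
    high-SEI sectors see their VaR multiply dramatically."""
    return 1 + 3 * sei ** 1.5

for name, sei in [("Heavy industry (IND.AI)", 0.92),
                  ("Legal (LAW.AI)", 0.32)]:
    print(f"{name:24s} SEI={sei:.0%}  multiplier={stress_multiplier(sei):.2f}x")
# Heavy industry (IND.AI)  SEI=92%  multiplier=3.65x
# Legal (LAW.AI)           SEI=32%  multiplier=1.54x
```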
Structural Indicators Precede Events
The value of a risk assessment equals the time delta (Δt) between warning and event. If the framework warns before the event materialises (Δt > 0), it has demonstrable predictive value.
Safety departures predict governance failures. Vendor concentration predicts outage impact. Workforce displacement patterns predict reversal rates. The SIRA framework tracks these structural indicators and measures the time between signal and event across all seven layers.
Specific timing proof data — which predictions were made and when they materialised — is tracked internally and shared with audit clients. The Pulse page shows events as they occur; the timing proof validates the framework over time.
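A minimal sketch of the Δt bookkeeping, with hypothetical dates; actual warning/event pairs are internal to Purna Medha.

```python
# Δt between a structural warning and the event it preceded.

from datetime import date

def delta_t(warning: date, event: date) -> int:
    """Days between warning and event. Positive Δt is predictive
    value; Δt <= 0 means the framework warned too late."""
    return (event - warning).days

# Hypothetical pair: safety-departure signal, later governance failure.
warning = date(2024, 5, 1)
event = date(2024, 11, 14)
print(f"Δt = {delta_t(warning, event)} days")  # Δt = 197 days
```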
How Events Are Scored
Each event on the Pulse is classified by type, assigned to a SIRA layer, and scored for severity on a 1–5 scale.
- Resignation or termination of AI safety personnel
- Service disruptions at major AI vendors
- Stock/sector impact from AI-related events
- AI-driven workforce reductions
- Physical harm, data breach, or financial loss from AI
- Companies reversing AI displacement decisions
The severity scoring rubric — how each event type maps to a specific score — is proprietary. Events are curated and scored by the Purna Medha team.
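As an illustration of the event schema described above (a type, a SIRA layer, a 1–5 severity), here is a sketch; the field names and the example event are hypothetical, and the rubric mapping types to scores stays proprietary.

```python
# Illustrative event record matching the schema described above.

from dataclasses import dataclass

@dataclass(frozen=True)
class PulseEvent:
    event_type: str   # e.g. "safety_departure", "outage", "reversal"
    layer: str        # SIRA layer, "L1" through "L7"
    severity: int     # 1 (minor) to 5 (critical)

    def __post_init__(self):
        if self.layer not in {f"L{i}" for i in range(1, 8)}:
            raise ValueError(f"unknown SIRA layer: {self.layer}")
        if not 1 <= self.severity <= 5:
            raise ValueError("severity must be 1-5")

# Hypothetical example: a human-layer safety departure.
event = PulseEvent("safety_departure", "L7", 4)
```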
Where the Signals Come From
Eight categories of data feed the Pulse, each mapped to specific SIRA layers.
Regional Salary Benchmark Sources
Workforce VaR uses median enterprise employee cost (salary + employer benefits) from official labour statistics and established compensation surveys in each region.
What This Is Not
- Not financial advice. VaR figures are indicative benchmarks for risk awareness, not forecasts or investment recommendations.
- Not a real-time feed. Events are curated and updated weekly. Automated feeds are on the roadmap.
- Not company-specific. Instruments are sector-level benchmarks. For your organisation's Medha Grade, book an assessment.
- Not static. Grades, severity, and VaR update as new events are recorded. The framework improves with every assessment.
See the methodology in action
The AI Risk Pulse applies this framework to new events every week. Or get your own organisation graded.