## Gap analysis

### Why existing standards are not enough
Current governance frameworks do two things well: they evaluate systems before deployment, and they analyze failures after they occur. The moment in between, where execution actually happens, is left unaddressed.
| Standard / Framework | What it addresses | What it misses |
|---|---|---|
| EU AI Act (Art. 9, 17) | Risk management, quality management, technical documentation | Whether the authorized mandate still matches operational state after deployment |
| ISO/IEC 42001 | AI management system requirements | A mechanism for detecting mandate drift in live systems |
| Model risk management (EBA, ECB) | Model validation, performance monitoring | Whether the decision identity has changed without the model changing |
| SOC 2 / ISAE 3402 | Controls over service organization operations | Decision-level continuity and re-legitimation requirements |
| Monitoring dashboards | Threshold violations, latency, accuracy metrics | Whether the decision being executed is still the decision that was authorized |
This is not a criticism of existing frameworks; they address real and important concerns. The gap they leave is structural: the layer between initial approval and ongoing legitimacy has not yet been formalized. That is the layer Decision Integrity defines.
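The gap described above can be made concrete with a minimal sketch. The following is a hypothetical illustration, not part of any cited standard: all names, fields, and the fingerprinting scheme are assumptions. The idea is that a decision's authorized mandate is fingerprinted at approval time, the live operational state is periodically re-fingerprinted, and a mismatch signals that the decision being executed may no longer be the decision that was authorized, even though the model itself has not changed.

```python
import hashlib
import json

def mandate_fingerprint(mandate: dict) -> str:
    """Stable hash over the fields that define a decision's identity."""
    canonical = json.dumps(mandate, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical mandate recorded at approval time.
authorized = {
    "decision": "consumer_credit_approval",
    "model_version": "2.1",
    "population": "existing_customers",
    "max_exposure_eur": 50_000,
}

# Operational state observed later: the model is unchanged, but the
# decision's scope has drifted to a population it was never approved for.
observed = dict(authorized, population="new_applicants")

if mandate_fingerprint(observed) != mandate_fingerprint(authorized):
    print("mandate drift detected: re-legitimation required")
```

Note that this check is orthogonal to conventional monitoring: accuracy and latency dashboards would show nothing unusual here, because the model's behavior is unchanged; only the comparison against the authorized mandate reveals the drift.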