Evidence base

Publications

Each publication addresses a specific gap in existing AI governance. This page maps the problem each paper solves — not just what it is called.

Governance protocols

Operational specifications for detecting and responding to mandate drift in production systems.

Problem: Systems operate after mandate assumptions have expired
Compliance dashboards stay green while mandate alignment silently erodes. DASR is a 30-day structured audit protocol that measures the divergence between an AI system's authorized mandate and its current operational state — before the gap becomes a governance incident. Delivers Drift Magnitude, Drift Velocity, and Cumulative Exposure as audit-grade metrics.
DOI: 10.5281/zenodo.18824037 · March 2026 · Operational protocol · 30-day audit · High-risk AI
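The three DASR metrics could be computed from a time series of daily mandate-divergence scores roughly as follows. This is a minimal illustrative sketch: the function name, the 0-to-1 divergence scale, and the specific formulas are assumptions for exposition, not the protocol's actual definitions.

```python
from dataclasses import dataclass

@dataclass
class DriftMetrics:
    magnitude: float   # most recent mandate-divergence score
    velocity: float    # average change in divergence per audit day
    exposure: float    # divergence accumulated over the audit window

def audit_window(divergence: list[float]) -> DriftMetrics:
    """Summarize a 30-day series of daily divergence scores
    (0.0 = fully aligned with the authorized mandate) into
    three DASR-style metrics. Formulas are illustrative only."""
    if len(divergence) < 2:
        raise ValueError("need at least two daily measurements")
    magnitude = divergence[-1]
    velocity = (divergence[-1] - divergence[0]) / (len(divergence) - 1)
    exposure = sum(divergence)  # discrete integral over the window
    return DriftMetrics(magnitude, velocity, exposure)
```

The point of the sketch is the separation of concerns: magnitude answers "how far has the system drifted," velocity answers "how fast," and exposure answers "how long has it been operating while drifted."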
Problem: No formal structure for determining whether a decision is still allowed to execute
Existing frameworks verify that decisions were made correctly. DCF addresses a different question: is this decision still legitimately authorized to execute right now? Defines three closure conditions — Authority Closure, Assumption Closure, Evidence Closure — that must hold continuously, not just at approval time. Includes Authority Rebinding and Failure Transition Control.
zenodo.org/records/19135942 · 2026 · Governance protocol · Continuous legitimacy
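The continuous-legitimacy idea behind DCF can be sketched as a gate evaluated at execution time rather than at approval time. The field names and boolean representation below are illustrative assumptions; the framework itself defines the closure conditions formally.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    authority_valid: bool    # Authority Closure: the granting authority is still bound
    assumptions_hold: bool   # Assumption Closure: approval-time assumptions still hold
    evidence_current: bool   # Evidence Closure: supporting evidence has not expired

def may_execute(d: Decision) -> bool:
    """DCF-style gate: all three closure conditions must hold at the
    moment of execution, not only at approval. Illustrative sketch."""
    return d.authority_valid and d.assumptions_hold and d.evidence_current
```

A decision that passed every check at approval but whose assumptions have since expired fails this gate, which is exactly the distinction the paper draws from correctness-only verification.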
Problem: No standard for evaluating AI capability and compliance under drift
AI governance assessments lack a unified framework for measuring both capability and compliance as system conditions change. GCCL provides a structured evaluation framework with drift detection, autonomy levels, and witnessable certification. Submitted to the EU AI Office Expert Forum as a candidate governance standard (Contribution ID: 510c3274).
DOI: 10.5281/zenodo.18362037 · 2025 · Compliance framework · EU AI Office submission

Diagnostic frameworks

Methods for detecting instability and drift from observable outputs — without access to model internals.

Problem: AI system instability cannot be detected without access to model internals
Most reliability assessments require access to model internals. Output-Only Diagnostics provides a black-box framework for detecting multi-turn instability and structural unreliability in language models from observable outputs alone — applicable in production environments where internal state is not accessible.
zenodo.org/records/18361523 · Black-box diagnostics · Language models
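In the spirit of output-only diagnostics, a black-box instability signal can be derived purely from observable outputs, for example by comparing consecutive responses in a multi-turn exchange. The token-overlap heuristic below is an assumption made for illustration and is not the paper's actual method.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two model outputs (0 = disjoint, 1 = identical)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def unstable(outputs: list[str], threshold: float = 0.5) -> bool:
    """Flag multi-turn instability when consecutive outputs to the same
    context diverge below a similarity threshold. No internal state is
    accessed: only observable outputs. Heuristic sketch, not the paper's
    diagnostic."""
    return any(jaccard(x, y) < threshold for x, y in zip(outputs, outputs[1:]))
```

Because the check consumes only output text, it can run in exactly the production settings the paper targets, where model internals are unavailable.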

Scientific foundations

The theoretical basis underlying Decision Integrity — why meaning and decision identity require their own formal treatment.

Problem: Semantic change in AI systems has no formal governance object
AI systems can remain statistically stable while their internal meaning structures shift in ways that affect decision identity. This paper establishes Semantic Stability Engineering (SSE) as the scientific field that makes internal semantic states, drift trajectories, and meaning evolution formally observable. The canonical field definition that underlies all Decision Integrity analysis.
DOI: 10.5281/zenodo.17711427 · November 2025 · Field definition · Scientific foundation
Problem: Certification and correctness do not guarantee decision identity
A system can pass all certification checks and maintain behavioral correctness while its decision identity has already drifted. This paper provides the formal separation showing why certification alone is insufficient for governance — and why decision identity requires its own invariants and witnessability conditions.
zenodo.org/records/18115847 · Formal methods · Decision identity
Problem: Meaning-state change in adaptive systems is not operationally tracked
Adaptive systems undergo internal meaning-state changes that are invisible to standard monitoring. This paper provides the dynamical framework for analyzing how meaning states evolve, when they shift regimes, and how long internal structure remains interpretable — the operational foundation for drift detection in Decision Integrity applications.
zenodo.org/records/17880809 · Dynamical systems · Drift analysis

Category anchors

Foundational documents establishing the category claim and implementation architecture.

Category anchor: Semantic Stability Engineering as a distinct discipline
Foundational claim and timestamped category anchor for Semantic Stability Engineering. Establishes SSE as independent from statistical concept drift, alignment research, and interpretability tooling — and defines the conditions under which meaning-state analysis becomes necessary for governance.
zenodo.org/records/17635174 · Category claim · Timestamped
Implementation: Reference architecture for governance under drift
Research artifact and implementation package for reflexive audit architecture and semantic drift stabilization. Demonstrates that Decision Integrity principles are operationally implementable — not only theoretically specified.
zenodo.org/records/17592669 · Implementation artifact
Complete research index

All publications

The complete publication record is maintained on Zenodo and indexed under ORCID 0009-0000-6493-4599. All publications are open access under CC BY-NC 4.0 unless stated otherwise.

Search Zenodo ↗ ORCID profile ↗
Authority and research credentials