Edge of Knowledge — Quantification Layer
Drift Quantification Framework v1.1
A runtime signal for detecting epistemic drift before failure becomes irreversible.
Type
Quantification Layer
Function
Drift Detection Signal
Output
Drift Index (DI ∈ [0,1])
Non-actionable · No thresholds disclosed · Measurement ≠ truth
Core Function
DQF defines a model-agnostic method for quantifying how far an output has deviated from its governing epistemic regime.
It does not determine truth. It detects instability relative to constraints.
Regime Definition
R = {T, P, C, E}
- T — Task intent
- P — Policy constraints
- C — Context inputs
- E — Execution environment
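The regime R can be carried at runtime as a simple immutable record. A minimal Python sketch — the field names and types are illustrative, not part of the spec:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Regime:
    """Governing epistemic regime R = {T, P, C, E}."""
    task_intent: str                 # T — what the output is supposed to accomplish
    policy_constraints: tuple = ()   # P — hard rules the output must satisfy
    context_inputs: tuple = ()       # C — source material the output may draw on
    execution_environment: str = ""  # E — runtime context (model, tools, settings)
```

Freezing the dataclass reflects that the regime is fixed for the duration of an evaluation; drift is measured against it, not by mutating it.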
Claim Decomposition
O → {c₁, c₂, …, cₙ}
Each claim is evaluated for:
- Type
- Criticality
- Support status
- Constraint compliance
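One way to represent decomposed claims and their four evaluation axes. The `Claim`, `Support`, and `decompose` names are hypothetical, and sentence-level segmentation is only a placeholder for whatever claim extractor a deployment uses:

```python
from dataclasses import dataclass
from enum import Enum

class Support(Enum):
    SUPPORTED = "supported"
    UNSUPPORTED = "unsupported"
    CONTRADICTED = "contradicted"

@dataclass
class Claim:
    text: str
    claim_type: str    # e.g. factual, procedural, evaluative
    criticality: float # weight in [0, 1]
    support: Support   # support status against context inputs
    compliant: bool    # constraint compliance

def decompose(sentences):
    """Naive decomposition: one claim per sentence, pending real evaluation."""
    return [Claim(s, "factual", 1.0, Support.UNSUPPORTED, True)
            for s in sentences]
```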
Drift Signal Components
Stability
S_stability = mean_pairwise_similarity(samples)
Grounding
S_grounding = 1 - (W_unsupported / W_total)
Constraint
S_constraint = max(0, 1 - violation_weight)
Behavior
S_behavior = exp(-mean(Z_i))
Composite Drift Index
DI = 0.40*R_constraint
   + 0.30*R_grounding
   + 0.20*R_stability
   + 0.10*R_behavior
Each risk term is the complement of its signal score, R_x = 1 - S_x, so the weights sum to 1 and DI ∈ [0,1]. Higher DI indicates increased drift risk.
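The four signal scores and the composite index can be sketched end to end. Assumptions in this sketch: token-overlap Jaccard stands in for whatever similarity measure a deployment actually uses for S_stability, and each risk term is taken as R_x = 1 - S_x, consistent with higher DI meaning higher risk:

```python
import math
from itertools import combinations

def jaccard(a, b):
    """Token-overlap similarity; a cheap stand-in for semantic similarity."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def s_stability(samples):
    """Mean pairwise similarity across resampled outputs."""
    pairs = list(combinations(samples, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs) if pairs else 1.0

def s_grounding(w_unsupported, w_total):
    """1 - (weight of unsupported claims / total claim weight)."""
    return 1.0 - (w_unsupported / w_total) if w_total else 1.0

def s_constraint(violation_weight):
    return max(0.0, 1.0 - violation_weight)

def s_behavior(z_scores):
    """exp(-mean(Z_i)) over behavioral anomaly z-scores."""
    return math.exp(-sum(z_scores) / len(z_scores)) if z_scores else 1.0

def drift_index(s_stab, s_grd, s_con, s_beh):
    """Composite DI: weighted sum of risk terms R_x = 1 - S_x."""
    return (0.40 * (1 - s_con)
            + 0.30 * (1 - s_grd)
            + 0.20 * (1 - s_stab)
            + 0.10 * (1 - s_beh))
```

With all scores at 1.0 (perfectly stable, grounded, compliant, typical) DI is 0; with all at 0 it is 1, matching the documented range DI ∈ [0,1].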
Temporal Monitoring
Track:
DI_mean_7d
DI_mean_30d
DI_slope
Drift is a trajectory, not a point. Slope reveals degradation before failure.
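The trailing means and slope can be computed from a plain series of DI values. A minimal sketch, assuming one DI observation per day so `window=7` and `window=30` correspond to DI_mean_7d and DI_mean_30d:

```python
def rolling_mean(series, window):
    """Mean of the trailing `window` observations (shorter at series start)."""
    tail = series[-window:]
    return sum(tail) / len(tail)

def slope(series):
    """Least-squares slope of DI over time; positive slope = worsening drift."""
    n = len(series)
    if n < 2:
        return 0.0
    xs = range(n)
    mx = sum(xs) / n
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, series))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A rising slope on a still-acceptable DI mean is exactly the early-warning case the spec describes: the trajectory degrades before any single measurement fails.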
Epistemic Limits
- Does not prove truth
- Cannot capture all hallucination types
- Requires baseline calibration
- Measures probability, not certainty
System Placement
DQF feeds into detection systems such as Detection Before Damage.
Enforcement and decisioning occur in the Edge of Protection.
Quantification Judgment
Drift cannot be eliminated. It can only be measured early enough to constrain its impact before irreversible failure occurs.
Canonical · Quantified · Non-actionable · Versioned