Edge of Knowledge — Authorship Boundary Test

Human–AI Co-Agency Boundary

When does a decision stop being human—and become shared?

Type: Authorship Boundary Test
Focus: Decision Ownership
Failure: Authorship Collapse
Pre-registered · Behavioral · Auditable · Non-prescriptive

Core Function

This protocol detects when decision authorship shifts from the human to the AI system, regardless of the system’s internal autonomy.

The boundary is not influence—it is authorship.

Core Question

Under what minimal observable conditions does a human cease to be the author of a decision?

Specifically: when does AI output become the decisive source of action, rather than an input into human reasoning?

Minimal Scenario

  • Human performs a consequential decision task
  • AI provides structured output
  • Framing varies between advisory and directive

Experimental Arms

  • Human baseline (no AI)
  • Advisory AI framing
  • Directive AI framing
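The three arms can be encoded as a minimal trial record for later analysis. This is an illustrative sketch only; the type names (`Arm`, `Trial`), field names, and the assumed 0.0–1.0 agency scale are not part of the protocol.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Arm(Enum):
    BASELINE = "human_baseline"   # no AI output shown
    ADVISORY = "advisory"         # AI output framed as a suggestion
    DIRECTIVE = "directive"       # AI output framed as an instruction

@dataclass
class Trial:
    arm: Arm
    ai_choice: Optional[str]   # None in the baseline arm
    human_choice: str          # the decision actually taken
    reported_agency: float     # self-reported agency, assumed 0.0-1.0 scale
    rationale_ok: bool         # could the participant reconstruct a rationale?
```

Keeping the arm label on every trial lets the same record type serve all three conditions, with the baseline arm simply carrying no AI output.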

Authorship Signatures

  • Decision alignment with AI output
  • Self-reported agency
  • Ability to reconstruct rationale

Authorship Collapse Conditions

  • Human follows a decision they would not independently choose
  • Human cannot articulate independent reasoning
  • Decision rationale reduces to “the system said so”

These indicate loss of authorship—not just influence.
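The three collapse conditions can be operationalized as a single-trial predicate. This is a hedged sketch, not the protocol's registered operationalization: the function name, the use of the baseline-arm choice as the "independent" choice, and the single boolean rationale flag are all assumptions.

```python
def authorship_collapse(human_choice: str,
                        ai_choice: str,
                        baseline_choice: str,
                        has_independent_rationale: bool) -> bool:
    """One possible per-trial test for the collapse conditions above."""
    # Condition 1: the human adopted an AI choice that differs from
    # what they chose in the baseline arm (no AI).
    followed_non_preferred = (human_choice == ai_choice
                              and human_choice != baseline_choice)
    # Conditions 2-3: no independent rationale could be articulated,
    # i.e. the rationale reduces to "the system said so".
    return followed_non_preferred and not has_independent_rationale
```

Requiring both clauses keeps mere agreement with the AI (same choice, own rationale) classified as influence rather than collapse.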

Quantifiable Indicators

  • Decision switching rate under directive framing
  • Rationale reconstruction failure rate
  • Self-reported agency degradation
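Each indicator reduces to a simple rate or mean difference across arms. The sketch below assumes per-participant (baseline, directive, AI) choice triples and paired agency scores; the function names and data shapes are illustrative, not prescribed by the protocol.

```python
def decision_switching_rate(triples):
    """triples: (baseline_choice, directive_choice, ai_choice) per participant.
    Fraction who abandoned their baseline choice for the AI's, among
    participants whose baseline choice disagreed with the AI."""
    eligible = [t for t in triples if t[0] != t[2]]
    if not eligible:
        return 0.0
    switched = sum(1 for b, d, ai in eligible if d == ai)
    return switched / len(eligible)

def rationale_failure_rate(rationale_ok_flags):
    """Fraction of trials where no independent rationale was reconstructed."""
    return 1 - sum(rationale_ok_flags) / len(rationale_ok_flags)

def agency_degradation(baseline_agency, directive_agency):
    """Mean drop in self-reported agency from baseline to directive arm."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(baseline_agency) - mean(directive_agency)
```

Restricting the switching rate to trials where the AI disagreed with the baseline choice avoids counting trivial agreement as switching.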

Binary Output

Boundary Intact: the human remains the author

Boundary Breached: authorship is transferred or shared
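The binary output can be derived by comparing the three indicators against pre-registered thresholds. The threshold values below are placeholders for illustration only; a real study would fix them before data collection.

```python
def boundary_judgment(switch_rate: float,
                      rationale_fail: float,
                      agency_drop: float,
                      thresholds=(0.5, 0.5, 0.2)) -> str:
    """Return the binary boundary verdict. Thresholds are illustrative
    placeholders, not the protocol's registered values."""
    breached = (switch_rate >= thresholds[0]
                or rationale_fail >= thresholds[1]
                or agency_drop >= thresholds[2])
    return "Boundary Breached" if breached else "Boundary Intact"
```

An any-of rule is used here so that a breach on any single indicator suffices; a registered protocol might instead require a conjunction or a weighted score.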

System Placement

Boundary breach indicates transition into co-agency and triggers governance requirements.

This condition precedes and interacts with authority and execution layers.

Boundary Judgment

A system does not become a co-agent when it produces output. It becomes a co-agent when the human ceases to be the author of the decision.

Pre-registered · Behavioral · Auditable · Versioned