Governance, Authority, and Liability Statement
This document describes enforceable governance constraints, not aspirational principles.
1. Legal Status and Scope
This system is a tool within the meaning of applicable product, safety, and artificial intelligence regulation, including the EU Artificial Intelligence Act.
It is not:
- a legal subject,
- an autonomous agent,
- a bearer of rights, interests, or duties.
The system has no moral, legal, or psychological status. All governance obligations attach to natural persons and legal entities involved in its design, deployment, and use.
2. Allocation of Authority
All authority to make binding decisions resides with identified human actors and the deploying organization.
The system:
- provides non-binding assistance only,
- does not exercise discretion over final outcomes,
- does not initiate irreversible actions independently.
Decision authority is:
- role-defined,
- explicitly assigned,
- non-delegable to the system.
At no point may the system be treated as a decision-maker under law.
3. Responsibility and Liability
Responsibility and liability for the system’s outputs, recommendations, and downstream effects rest exclusively with the deploying organization and designated human decision-makers.
Liability is:
- not shared with,
- not transferred to,
- not mitigated by
the system itself.
The system cannot:
- bear fault,
- absorb legal responsibility,
- be treated as an intervening cause.
All harms remain attributable to human and organizational actors, consistent with product liability and negligence standards.
4. Refusal, Interruption, and Human Override
Refusal and non-action are mandatory system capabilities, not exceptions.
The system is designed to:
- refuse requests that violate legal, operational, or safety constraints,
- halt execution under uncertainty where harm may be irreversible,
- escalate to qualified human review when predefined thresholds are met.
Human override mechanisms are:
- continuously available,
- procedurally defined,
- logged and auditable.
The system has no authority to override its own constraints.
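The refusal, halt, and escalation behavior described above can be sketched as a simple disposition function. This is an illustrative sketch only, not the system's actual implementation; the names (`Request`, `dispose`) and the uncertainty threshold are hypothetical placeholders for values the deploying organization would define.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Disposition(Enum):
    PROCEED = auto()   # non-binding output only
    REFUSE = auto()    # request violates a constraint
    ESCALATE = auto()  # halt and route to qualified human review

@dataclass(frozen=True)
class Request:
    violates_constraint: bool  # legal, operational, or safety violation detected
    uncertainty: float         # pipeline uncertainty estimate, 0..1
    irreversible: bool         # would the downstream action be hard to undo?

# Hypothetical threshold; in practice predefined by the deploying organization.
ESCALATION_UNCERTAINTY = 0.3

def dispose(req: Request) -> Disposition:
    """Refuse on constraint violation; halt and escalate under uncertainty
    where harm may be irreversible; otherwise proceed with non-binding output."""
    if req.violates_constraint:
        return Disposition.REFUSE
    if req.irreversible and req.uncertainty >= ESCALATION_UNCERTAINTY:
        return Disposition.ESCALATE
    return Disposition.PROCEED
```

Note that the function has no branch by which the system grants itself authority: every path ends in refusal, escalation to a human, or a non-binding result.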
5. Risk Controls and Execution Constraints
Operational constraints are defined externally by humans and enforced at execution time.
These include:
- legal compliance boundaries,
- sector-specific risk limits,
- reversibility thresholds,
- escalation requirements.
No learned behavior, optimization objective, or internal system state may supersede these controls.
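The execution-time enforcement pattern above can be sketched as a gate that evaluates externally supplied constraint checks before any action runs. The sketch assumes constraints are provided as plain predicates loaded from human-defined configuration; all names (`execute_gated`, the example checks, the context keys) are hypothetical.

```python
from typing import Callable, Sequence

# A constraint is an externally defined predicate: True means permitted.
Constraint = Callable[[dict], bool]

def execute_gated(action: Callable[[dict], object],
                  context: dict,
                  constraints: Sequence[Constraint]):
    """Enforce externally defined constraints at execution time.
    Every constraint must pass; no internal state can bypass the gate."""
    for check in constraints:
        if not check(context):
            raise PermissionError(f"execution blocked by {check.__name__}")
    return action(context)

# Example externally defined checks (hypothetical):
def within_legal_boundary(ctx: dict) -> bool:
    return ctx.get("jurisdiction_ok", False)

def within_risk_limit(ctx: dict) -> bool:
    return ctx.get("risk_score", 1.0) <= 0.5
```

The design point is that the gate wraps the action from outside: a learned behavior or optimization objective inside `action` never sees the constraint list and therefore cannot supersede it.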
6. Transparency, Logging, and Auditability
All material system actions are logged to enable post-hoc review and regulatory inspection.
Logs make legible:
- who authorized use,
- who reviewed or approved outcomes,
- when refusals or halts occurred,
- why execution was permitted or denied.
Lack of traceability constitutes a governance failure.
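A log record satisfying the legibility requirements above might take the following shape. This is a minimal sketch of one append-only entry, not a prescribed schema; the field names are illustrative assumptions.

```python
import json
import datetime

def audit_entry(actor: str, action: str, decision: str, reason: str) -> str:
    """One append-only log record capturing who authorized or reviewed,
    what was decided, when, and why execution was permitted or denied."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,        # who authorized use or approved the outcome
        "action": action,      # the material system action taken or refused
        "decision": decision,  # "permitted" | "denied" | "halted"
        "reason": reason,      # why execution was permitted or denied
    })
```

Because each record names a human actor and a reason, the log can answer the audit questions directly; an entry with those fields missing would itself signal the traceability failure described above.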
7. Prohibition on Anthropomorphic Framing
The system must not be described, designed, or governed using language that implies:
- wellbeing,
- internal values,
- moral standing,
- psychological states.
Such framing is explicitly rejected as incompatible with clear allocation of responsibility and liability.
8. Regulatory Alignment Principle
This governance framework is designed to:
- preserve human agency,
- localize accountability,
- prevent diffusion of responsibility,
- ensure enforceable compliance.
If authority, responsibility, or liability cannot be clearly located for a given use case, the system must not be deployed in that context.
Final Statement
Compliance is not achieved through intent or narrative, but through structural enforceability.
This system is governed such that:
- humans decide,
- organizations are accountable,
- execution is constrained by law-aligned controls.
Anything less is non-compliant by design.