Edge of Practice
Short-cycle experiments that break false assumptions at human scale.
Edge of Practice is a public index of small, decisive experiments executable with standard laboratory tools, commodity materials, and short timelines. These experiments are not designed to optimize systems or invent products. They exist to surface hidden assumptions quickly, test them cleanly, and expose where reality resists narrative.
Cycle
Short-cycle experiments built for rapid falsification.
Method
Small decisive tests using accessible tools and bounded conditions.
Purpose
Break false assumptions before they scale into doctrine or deployment.

Assumptions often survive because nobody tests them small.
Edge of Practice exists to make hidden failure visible before scale gives it cover.
Where Edge of Knowledge defines governing boundaries, Edge of Practice tests them against short-cycle reality.
Experiment Lifecycle
Practice work moves through bounded experimental stages. This page is the authoritative index for short-cycle tests: fast, falsifiable, and deliberately scoped to surface meaningful contradiction before larger commitments are made.
Short-Cycle
Rapid falsification under bounded conditions. Indexed on this page.
Extended Cycle
Longer-duration testing once short-cycle contradictions survive initial challenge.
Persistence
Work that remains durable across repeated challenge, time, and operational stress.
The lifecycle is not a growth ladder. It is an admissibility filter.
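The filter semantics can be sketched in a few lines. This is an illustrative model only; the stage names mirror the lifecycle above, but the function and its logic are assumptions introduced here, not part of the index itself. The key property is that an experiment advances exactly one stage, and only when its result has survived direct challenge.

```python
from enum import Enum


class Stage(Enum):
    """Lifecycle stages; an experiment is only ever admitted forward."""
    SHORT_CYCLE = 1
    EXTENDED_CYCLE = 2
    PERSISTENCE = 3


def admit(stage: Stage, survived_challenge: bool) -> Stage:
    """Admissibility filter, not a growth ladder (hypothetical sketch).

    An experiment advances one stage only if its contradiction survived
    challenge; otherwise it remains at its current stage. No stage can
    be skipped, and persistence is terminal.
    """
    if not survived_challenge:
        return stage  # no promotion without surviving challenge
    if stage is Stage.SHORT_CYCLE:
        return Stage.EXTENDED_CYCLE
    if stage is Stage.EXTENDED_CYCLE:
        return Stage.PERSISTENCE
    return Stage.PERSISTENCE  # persistence is terminal
```

The point of the sketch is the asymmetry: failure at any stage never demotes the work into a different track or reframes it; it simply halts admission.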
Small, decisive experiments are often the fastest way to break large illusions.
These experiments are designed to surface hidden assumptions with direct human, operational, or material relevance, using constrained setups rather than elaborate theoretical framing.
Structured by experimental function
Each domain groups experiments by the type of assumption being challenged so readers can move through automation, materials, passive systems, and human exposure without losing boundary context.
Automation, Cognition, and Control
Short-cycle experiments focused on authority, supervision, cognitive failure, refusal integrity, and drift under automation pressure.
- Agentic Normalization Drift
- Alarm Parsing Collapse Threshold
- Autonomous Handoff Blackout
- Irreversible Cognitive Dead Zones
- Irreversible Normalization Drift
- Human Supervision as Failsafe
- Post-Deployment Monitoring in AI
- Polyphonic AI Under Bounded Authority
- Confidence Suppression of Dissent, Verification, and Refusal
- Pre-Commitment Dampening
- Refusal Must Remain Outside Optimization
Materials, Polymers, and Surface Effects
Rapid experiments on interfacial behavior, morphology, additives, cooling asymmetry, durability, and material response under constrained conditions.
- Antibiotic Resistance Gene Cleaning
- Compostable Packaging Microfragments
- Glove Additive Leaching
- HDPE Die-Entrance Melt Fracture
- HDPE–LDPE Interfacial Toughening
- PC–ABS Interfacial Microdamping
- PC–PMMA Scratch Resistance
- PE–Paraffin Thermal Buffering
- PS–PDMS Surface Lubricity
- PET–PVDF Electret Humidity Edge Case
- Gauge-Correlated Cooling Asymmetry
- Architected Micro-Lattice EV Battery Enclosure
- POM Path-Memory Bimodal Basin Test
- TPU Segmental Network Decoupling Test
Energy, Physics, and Passive Systems
Short-cycle experiments probing passive dynamics, constructive physical effects, oscillation behavior, and non-obvious energy-relevant boundary conditions.
Environment, Exposure, and Human Health
Experiments surfacing hidden exposure pathways, indoor-environment effects, contamination risks, and everyday human health boundary conditions.
Non-negotiable admissibility invariants
All Edge of Practice experiments operate under canonical constraints. These are referenced for admissibility only and are not restated, interpreted, or modified here.
From experiment to regulated relevance
The experiments indexed here surface failure modes that often emerge only at runtime, especially under uncertainty, pressure, or delegation.
For organizations assessing whether AI systems are suitable to move beyond experimentation into regulated or high-consequence environments, the relevant neutral evaluation aid is the Runtime Authority Checklist.
Runtime Authority Checklist (v1.0)
The checklist defines outcome-level requirements for scope limits, refusal integrity, uncertainty handling, predictability under stress, and post-incident reconstruction without prescribing technical architecture.
This page is the authoritative index of all short-cycle experiments. Entries are fixed at publication and revised only through explicit versioning, so epistemic continuity remains visible rather than overwritten.