Workload Profiles
Different workloads need different baselines. A 100ms response is alarming for a health check but exceptionally fast for a video-encoding step.
| Profile | Baseline | Use Case |
|---|---|---|
| LIGHT | 10ms | Health checks, cache reads |
| STANDARD | 100ms | REST APIs, database queries |
| HEAVY | 5s | Video transcode, ML inference |
| EXTREME | 60s | Genome sequencing, AI swarms |
```typescript
// Default (STANDARD)
atrion.route('api/users', telemetry)

// Heavy computation
atrion.route('ml/inference', telemetry, { profile: 'HEAVY' })

// Long-running with lease
const lease = await atrion.startTask('genom/sequence', {
  profile: 'EXTREME',
  abortController: controller,
})

lease.heartbeat({ progress: 0.5 })
await lease.release()
```

Auto-Tuning
The engine uses Z-Score based statistics to dynamically adjust thresholds:
dynamicBreak = μ(R) + 3σ(R)
The break point is the mean resistance plus three standard deviations.
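The formula above can be sketched directly; the function name and sample array below are illustrative, not part of the atrion API:

```typescript
// Break point from recent resistance samples:
// dynamicBreak = mean(R) + 3 * stddev(R)
function dynamicBreak(samples: number[]): number {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance = samples.reduce((acc, x) => acc + (x - mean) ** 2, 0) / n;
  return mean + 3 * Math.sqrt(variance);
}
```

Any sample whose resistance exceeds `dynamicBreak(recentSamples)` would be treated as an outlier and trip the breaker.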
- **Bootstrap:** initial samples are collected; safe defaults apply.
- **Operational:** thresholds track your traffic using an EMA (exponential moving average).
- **Adaptive:** seasonal patterns and traffic shifts are handled.
**Zero Configuration:** auto-tuning is enabled by default; no manual thresholds are needed.
State Providers
Pluggable state backends for different deployment scenarios:
InMemory (Default)
Single node:

```typescript
const atrion = new Atrion()
// Uses InMemoryProvider by default
```

Redis

Distributed:

```typescript
import { Atrion, RedisStateProvider } from 'atrion'

const atrion = new Atrion({
  stateProvider: new RedisStateProvider({
    url: 'redis://localhost:6379',
    keyPrefix: 'atrion:',
  }),
})
```

All Options
| Option | Default | Description |
|---|---|---|
| engine | 'auto' | 'wasm' \| 'js' \| 'auto' |
| stateProvider | InMemory | StateProvider instance |
| bootstrapSamples | 100 | Samples before operational mode |
| decayRate | 0.1 | Scar tissue forgiveness rate (λ) |
| scarFactor | 5 | Trauma weight (σ) |
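As a combined example, a constructor call passing every option from the table; the values are placeholders, and only the option names come from the table above:

```typescript
import { Atrion, RedisStateProvider } from 'atrion'

const atrion = new Atrion({
  engine: 'auto',        // 'wasm' | 'js' | 'auto'
  stateProvider: new RedisStateProvider({ url: 'redis://localhost:6379' }),
  bootstrapSamples: 100, // samples before operational mode
  decayRate: 0.1,        // scar tissue forgiveness rate (λ)
  scarFactor: 5,         // trauma weight (σ)
})
```

All options are optional; omitting any of them falls back to the defaults listed in the table.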