# Understanding speed
Reader-first tour — why Manic feels fast, what to read next, and where tradeoffs live.
Manic optimizes for fast dev startup, short production builds, and small runtime graphs. This page is the on-ramp; the deeper mechanics live in the internals pages linked below.
## The short mental model
| Question | Answer in one line |
|---|---|
| Why does `manic dev` start quickly? | One watched Bun process + native `Bun.serve` — no separate Node-powered bundler bootstrap (Dev internals; sketch below). |
| Why are builds fast? | `Bun.build` + OXC transform/minify share one toolchain; work is batched, not sprayed across many tools (Performance model; sketch below). |
| Why does the SPA feel snappy? | Routes load on demand; `Link` / `preloadRoute` warm chunks; matching is regex + scores, not giant switches (Lazy chunks). |
| Where are the numbers? | Framework benchmarks — fixtures and hardware are documented there. |
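To ground the first row, here is a minimal sketch of the single-process idea, assuming a plain `Bun.serve` handler; the port and paths are illustrative, not Manic's actual dev entrypoint.

```ts
// dev-server.ts — sketch of the single-process model: one watched Bun process
// owns HTTP in dev. Manic's real server layers transforms and HMR on top;
// this only shows why startup is cheap (no second bundler to bootstrap).

const server = Bun.serve({
  port: 3000, // illustrative; not necessarily Manic's default
  fetch(req) {
    const { pathname } = new URL(req.url);

    // Static assets straight off disk...
    if (pathname.startsWith("/assets/")) {
      return new Response(Bun.file(`.${pathname}`));
    }

    // ...and the SPA shell for everything else.
    return new Response(Bun.file("./index.html"));
  },
});

console.log(`dev server on http://localhost:${server.port}`);
```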
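For the build row, a hedged sketch of a batched production build through `Bun.build`: one call emits hashed chunks with `NODE_ENV` defined at build time. The option values are assumptions; where OXC plugs in is covered by the linked internals, not shown here.

```ts
// build.ts — sketch of a one-shot, batched production build via Bun.build.
// Entrypoint and outdir are illustrative; Manic's real pipeline (including
// its OXC transform/minify integration) configures more than this.

const result = await Bun.build({
  entrypoints: ["./src/main.tsx"],
  outdir: "./dist",
  splitting: true, // one lazy chunk per dynamic import
  minify: true,
  naming: "[name]-[hash].[ext]", // content-hashed filenames for long-term caching
  define: { "process.env.NODE_ENV": '"production"' },
});

if (!result.success) {
  for (const log of result.logs) console.error(log);
  process.exit(1);
}
```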
## Reading order
- Performance model — architectural advantages vs typical stacks + honesty about limits
- OXC toolchain — what replaces Babel/Terser/ESLint in practice
- Production client bundle — hashing, HTML rewrite, `NODE_ENV` define
- HMR & Fast Refresh — how dev differs from prod transforms
- Lazy chunks & cache — router `componentCache` + prefetch (see the sketches after this list)
- Fullstack API runtime — `/api`, OpenAPI, catalog — why deploy graphs stay small
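The "regex + scores, not giant switches" claim from the table can be shown in miniature. This matcher illustrates the pattern under an assumed scoring heuristic (static segments outrank dynamic ones); it is not Manic's router.

```ts
// matcher.ts — illustrative regex + score matching. The scoring heuristic is
// an assumption: static segments outrank dynamic ones, so /users/new wins
// over /users/:id without any hand-ordered switch.

type CompiledRoute = { pattern: RegExp; score: number; path: string };

function compile(path: string): CompiledRoute {
  let score = 0;
  const source = path
    .split("/")
    .filter(Boolean)
    .map((seg) => {
      if (seg.startsWith(":")) {
        score += 1; // dynamic segment: low score
        return "([^/]+)";
      }
      score += 10; // static segment: high score
      return seg;
    })
    .join("/");
  return { pattern: new RegExp(`^/${source}/?$`), score, path };
}

const compiled = ["/users/new", "/users/:id"].map(compile);

export function match(url: string): CompiledRoute | undefined {
  return compiled
    .filter((r) => r.pattern.test(url))
    .sort((a, b) => b.score - a.score)[0]; // highest score wins
}

// match("/users/new")?.path === "/users/new", not "/users/:id"
```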
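And the chunk side of the same story: dynamic `import()` gives the bundler a split point per route, a small cache deduplicates loads, and preloading is just an early call into that cache. The names `componentCache` and `preloadRoute` mirror the docs, but the bodies here are illustrative.

```ts
// lazy-routes.ts — sketch of lazy chunks plus a component cache. Page paths
// are placeholders; the bodies are assumptions, not Manic source.

type Loader = () => Promise<{ default: unknown }>;

// Each route maps to a dynamic import, so the bundler emits one chunk per route.
const routes: Record<string, Loader> = {
  "/": () => import("./pages/home"),
  "/about": () => import("./pages/about"),
};

// Cache in-flight and resolved loads so each chunk is fetched at most once.
const componentCache = new Map<string, Promise<{ default: unknown }>>();

export function loadRoute(path: string) {
  let cached = componentCache.get(path);
  if (!cached) {
    cached = routes[path]();
    componentCache.set(path, cached);
  }
  return cached;
}

// Warming a chunk ahead of navigation (e.g. on Link hover) reuses the cache,
// so the later real navigation resolves instantly.
export function preloadRoute(path: string) {
  if (path in routes) void loadRoute(path);
}
```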
## Tradeoffs (still “fast”, not magic)
- No `tsc` in the transform path — run typechecking as a separate CI step (Performance model); a minimal sketch follows this list.
- Huge apps — you still pay one lazy chunk per route, and browser concurrency caps parallelism (Lazy chunks).
- Heavy plugins — plugin work can dominate wall-clock time regardless of bundler (Caveats).
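Since typechecking is out of the transform path, it has to run on its own. A minimal sketch of wiring that into CI with a Bun script; the file name and command are illustrative:

```ts
// ci-check.ts — illustrative: run tsc as a separate CI step, since the
// build itself never typechecks.
const typecheck = Bun.spawnSync(["bunx", "tsc", "--noEmit"], {
  stdout: "inherit",
  stderr: "inherit",
});
process.exit(typecheck.exitCode ?? 1);
```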