Understanding speed

Reader-first tour — why Manic feels fast, what to read next, and where tradeoffs live.

Manic optimizes for fast dev startup, short production builds, and small runtime graphs. This page is the on-ramp; deeper mechanics live in linked internals below.


The short mental model

Each question below has a one-line answer:

  • Why does manic dev start quickly? One watched Bun process + native Bun.serve — no separate Node-powered bundler bootstrap (Dev internals).
  • Why are builds fast? Bun.build + OXC transform/minify share one toolchain; work is batched, not sprayed across many tools (Performance model).
  • Why does the SPA feel snappy? Routes load on demand; Link / preloadRoute warm chunks; matching is regex + scores, not giant switches (Lazy chunks).
  • Where are the numbers? Framework benchmarks — fixtures + hardware documented there.
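The "regex + scores" matching mentioned above can be sketched in a few lines. This is an illustration, not Manic's actual router: the scoring weights and function names here are assumptions; the point is that each pattern compiles once to a regex, and a simple score ranks static segments above dynamic ones.

```typescript
// Sketch of score-based route matching (illustrative, not Manic's code).
// Each pattern compiles to a regex; static segments score higher than
// dynamic ones, so /users/new beats /users/:id for the same URL.
type Route = { pattern: string; regex: RegExp; score: number };

function compile(pattern: string): Route {
  const segments = pattern.split("/").filter(Boolean);
  let score = 0;
  const parts = segments.map((seg) => {
    if (seg.startsWith(":")) {
      score += 1; // dynamic segment: low score
      return "([^/]+)";
    }
    score += 10; // static segment: high score
    return seg.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  });
  return { pattern, regex: new RegExp(`^/${parts.join("/")}/?$`), score };
}

function match(routes: Route[], path: string): Route | undefined {
  return routes
    .filter((r) => r.regex.test(path))
    .sort((a, b) => b.score - a.score)[0];
}

const routes = ["/users/new", "/users/:id"].map(compile);
console.log(match(routes, "/users/new")?.pattern); // → "/users/new"
console.log(match(routes, "/users/42")?.pattern);  // → "/users/:id"
```

Because matching is a filter-and-sort over precompiled regexes, adding routes never grows a hand-written switch.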

Reading order

  1. Performance model — architectural advantages vs typical stacks + honesty about limits
  2. OXC toolchain — what replaces Babel/Terser/ESLint in practice
  3. Production client bundle — hashing, HTML rewrite, NODE_ENV define
  4. HMR & Fast Refresh — how dev differs from prod transforms
  5. Lazy chunks & cache — router componentCache + prefetch
  6. Fullstack API — runtime/api, OpenAPI, catalog, and why deploy graphs stay small

Tradeoffs (still “fast”, not magic)

  • No tsc in the transform path — typecheck in CI separately (Performance model).
  • Huge apps — still pay one lazy chunk per route; browser concurrency caps parallelism (Lazy chunks).
  • Heavy plugins — can dominate wall-clock regardless of bundler (Caveats).
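Because tsc is out of the transform path, typechecking runs as its own step. A minimal CI-style fragment, under assumptions: the `bun run build` script name is hypothetical, and only `tsc --noEmit` is doing the typecheck.

```shell
# Typecheck and build in parallel: the bundler never waits on tsc,
# but the job still fails on type errors.
bunx tsc --noEmit &        # typecheck only, emit nothing
TSC_PID=$!
bun run build              # fast bundling step (hypothetical script name)
wait "$TSC_PID"            # propagate tsc's exit code to the CI job
```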
