Getting started
This guide takes you from a clean machine to a running microVM workload — both shapes (command-style services and function-style entrypoints), then how to read the result and logs.
Prerequisites
- Rust 1.85+ (for `edition = "2024"`).
- Python 3.10+ and `uv`, or
- Node.js 22.6+ (for `--experimental-strip-types`) and `pnpm`.
- `mvm` installed and reachable as `mvmctl` on `PATH` for actual VM boots. See https://gomicrovm.com/ for backend setup.
You only need the SDK for the language(s) you’ll write workloads in.
Install
The host CLI (mvmforge) is a Rust binary; install it once. The SDKs ship as
ordinary pip / npm packages — install only the language(s) you’ll author
workloads in.
Host CLI
```sh
git clone https://github.com/tinylabscom/mvmforge && cd mvmforge
cargo install --path crates/mvmforge
```

(A pre-built binary distribution is on the roadmap.)
Python workloads
```sh
pip install mvm
```

TypeScript workloads

```sh
npm install mvm-sdk
# or: pnpm add mvm-sdk | yarn add mvm-sdk
```

Verify the install
From any directory:
```sh
mvmforge --help
```

For repo contributors only — clone, then run `just ci` from the root; you should see `docs-check passed`, `adr-check passed`, `schema-check passed`, the SDK drift checks, the test suite, and `corpus-check passed`.
Two workload shapes
| Shape | When to use | What runs in the VM |
|---|---|---|
| Command entrypoint | Long-running services, daemons | Your command exec’d at boot |
| Function entrypoint | Discrete request/response calls (add(2, 3) from the host) | A wrapper that reads stdin, dispatches module:function, writes stdout |
Both flow through mvmforge emit → mvmforge compile → mvmctl up. See Generated artifact for what each one produces.
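For intuition, the read/dispatch/write loop that the function-entrypoint wrapper performs can be sketched in plain Python. This is a hypothetical illustration only (`dispatch` and `serve_once` are made-up names; the real generated wrapper and its wire protocol are not shown here):

```python
import importlib
import json
import sys

def dispatch(module: str, function: str, args: list):
    """Resolve module:function and call it with the decoded args."""
    fn = getattr(importlib.import_module(module), function)
    return fn(*args)

def serve_once(stdin=sys.stdin, stdout=sys.stdout) -> None:
    """Read one JSON-encoded call from stdin, write the encoded result to stdout."""
    call = json.loads(stdin.readline())
    result = dispatch(call["module"], call["function"], call["args"])
    stdout.write(json.dumps({"result": result}) + "\n")
```

The point is the shape, not the details: the host serializes a `module:function` call onto stdin, the wrapper imports and dispatches, and the return value travels back on stdout.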
Quickstart 1 — command-entrypoint service
```sh
mkdir hello-world && cd hello-world
mkdir -p src/hello
cat > src/hello/__init__.py <<'EOF'
def main():
    print("hello from mvmforge")
EOF
cat > src/hello/__main__.py <<'EOF'
from . import main

main()
EOF
```

app.py:
```python
import mvm as mv

mv.workload(id="hello")

@mv.app(
    name="hello",
    source=mv.local_path("src"),
    image=mv.nix_packages(["python312"]),
    entrypoint=mv.entrypoint(command=["python", "-m", "hello"]),
    resources=mv.resources(cpu_cores=1, memory_mb=256, rootfs_size_mb=512),
)
def hello() -> None:
    pass
```

Boot:
```sh
mvmforge up app.py
```

This composes emit → compile → `mvmctl up --flake <artifact>`. `mvmctl`'s stdout/stderr/exit status pass through unchanged.
Quickstart 2 — function-entrypoint call
Same plumbing, but the VM stands ready to accept a function call from the host.
Python
```sh
mkdir adder && cd adder
cat > adder.py <<'EOF'
def add(a: int, b: int) -> int:
    return a + b
EOF
```

app.py:
```python
import mvm as mv

mv.workload(id="adder")

@mv.app(
    name="adder",
    source=mv.local_path("."),
    image=mv.nix_packages(["python312"]),
    entrypoint=mv.entrypoint_function(module="adder", function="add"),
    resources=mv.resources(cpu_cores=1, memory_mb=256, rootfs_size_mb=512),
    dependencies=mv.no_deps(),
)
def add(a: int, b: int) -> int:
    return a + b
```

Two notes:
- `dependencies=mv.no_deps()` is required for function-entrypoint workloads (use `mv.python_deps(lockfile=...)` for third-party deps). Failing to declare raises `E_DEPS_REQUIRED_FOR_FUNCTION_WORKLOAD`.
- The decorated `add` is now a `RemoteFunction`. Calling `add(2, 3)` runs the local function (in-process testing); `add.remote(2, 3)` dispatches in the VM.
TypeScript (short form)
```ts
import * as mv from "mvm-sdk";

export const add = mv.func(
  {
    name: "adder",
    image: mv.nixPackages(["nodejs_22"]),
    resources: mv.resources({ cpuCores: 1, memoryMb: 256, rootfsSizeMb: 512 }),
    module: "adder",
  },
  function add(a: number, b: number) {
    return a + b;
  },
);
```

`mv.func({...}, fn)` registers the workload + app + entrypoint in one call. Use the long form (`workload({...})` + `app({...})`) for multi-app workloads.
Emit, compile, deploy
```sh
mvmforge emit app.py                           # canonical IR on stdout
mvmforge compile manifest.json --out artifact/
mvmforge up app.py                             # end-to-end pipeline
```

For function-entrypoint workloads, `mvmforge compile` bundles `nix/factories/mkPythonFunctionService.nix` (or `mkNodeFunctionService.nix`) into the artifact. mvm's `mkGuest` consumes the factory's output to bake the wrapper into the rootfs.
Getting output
Function-entrypoint output: the result
The result of a function call is whatever the user function returned. The host SDK encodes args over stdin, mvmctl invoke relays them, the wrapper dispatches, and the encoded return comes back.
Python (sync):
```python
result = add.remote(2, 3)  # → 5
```

TypeScript (async):

```ts
const result = await add.remote(2, 3); // → 5
```

For multiple calls, open a session:
```python
with mvmforge.session("adder"):
    add.remote(2, 3)  # boots the VM
    add.remote(4, 5)  # reuses it
```

```ts
await mv.session("adder", async () => {
  await add.remote(2, 3);
  await add.remote(4, 5);
});
```

The session id propagates automatically (Python `contextvars`, TS `AsyncLocalStorage`).
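The Python half of that propagation can be illustrated with nothing but `contextvars`. A minimal sketch of the pattern, not the SDK's actual implementation (`session` and `current_session` here are stand-ins):

```python
import contextvars
from contextlib import contextmanager
from typing import Iterator, Optional

# One process-wide ContextVar; each with-block sets and restores it.
_session_id: contextvars.ContextVar[Optional[str]] = contextvars.ContextVar(
    "session_id", default=None
)

@contextmanager
def session(workload_id: str) -> Iterator[None]:
    """Make workload_id the current session for the duration of the block."""
    token = _session_id.set(workload_id)
    try:
        yield
    finally:
        _session_id.reset(token)

def current_session() -> Optional[str]:
    """What a hypothetical .remote() would read to decide whether to reuse a VM."""
    return _session_id.get()
```

Because `ContextVar` values are copied into spawned tasks, calls made anywhere inside the block (including in async tasks) observe the same session id without it being passed explicitly.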
Function-entrypoint output: the error envelope
When user code in the VM raises, the wrapper writes a single-line JSON envelope to stderr and exits non-zero:
```json
{"kind":"ValueError","error_id":"abc-123","message":"negative input"}
```

The host SDK parses this and raises a structured exception:
```python
try:
    add.remote(-1, 2)
except mvmforge.RemoteError as e:
    e.kind      # "ValueError"
    e.error_id  # "abc-123" — stable; match in tests
    e.message   # human-readable
```

```ts
try {
  await add.remote(-1, 2);
} catch (e) {
  if (e instanceof mv.RemoteError) {
    e.kind;          // "Error"
    e.errorId;       // "abc-123"
    e.remoteMessage;
  }
}
```

In `mode = "prod"` (the default), the wrapper sanitizes `message` (no traceback, no file paths, no payload bytes). In `mode = "dev"`, the full traceback is echoed alongside the envelope. Never ship a `mode = "dev"` artifact to production. See Wrapper Security & Threat Model.
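Host-side, turning that envelope line into a structured exception is a few lines of JSON parsing. A sketch assuming only the three envelope fields shown above (`RemoteError` and `raise_from_stderr` here are stand-ins, not the SDK's real classes):

```python
import json

class RemoteError(Exception):
    """Stand-in for the SDK's RemoteError: carries the envelope fields."""
    def __init__(self, kind: str, error_id: str, message: str):
        super().__init__(f"{kind}: {message}")
        self.kind = kind
        self.error_id = error_id
        self.message = message

def raise_from_stderr(stderr_line: str) -> None:
    """Parse the single-line JSON envelope and raise a structured exception."""
    env = json.loads(stderr_line)
    raise RemoteError(env["kind"], env["error_id"], env["message"])
```

The stable `error_id` is the field to match in tests; `kind` and `message` are informational.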
Command-entrypoint output: stdout, stderr
Your service writes to stdout/stderr like any other process. mvm captures these into the VM’s log stream. Reach them via:
```sh
mvmctl logs <vm-name>           # tail the VM's log
mvmctl logs <vm-name> --follow  # stream live
```

`mvmforge up` forwards `mvmctl up`'s stdout/stderr unchanged. To run detached:

```sh
mvmforge up app.py -- --detach --name myvm
mvmctl logs myvm
mvmctl stop myvm
```

Dev / production posture
Per ADR-0010 §2:
| Phase | Dev | Production |
|---|---|---|
| Build | SDK declares → IR → flake artifact | Same |
| Trigger | SDK .remote(...), Session(...) from dev box against locally-booted VM | External (orchestrator/scheduler/queue) → mvm-native dispatch. No SDK on this path. |
| Observation | SDK gets typed return / RemoteError | Output → log stream / volume; logs via mvmctl logs. Read externally. |
Two safety gates enforce this:
- `MVMFORGE_EMITTING=1` (set by the host emit subprocess): `RemoteFunction.remote()` and `mvmforge.session(...)` raise `EmittingContextError`. Prevents build-time recursion.
- Wrapper `mode = "prod" | "dev"` (in-VM): gates dev-only execution surfaces. Defaults to `"prod"`; never ship dev to production.
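The first gate reduces to an environment-variable check at the top of every dev-only entry point. A sketch of the pattern (the placement of the check and the error text are illustrative, not the SDK's code):

```python
import os

class EmittingContextError(RuntimeError):
    """Raised when a dev-only call runs inside 'mvmforge emit'."""

def guard_not_emitting() -> None:
    """Refuse to dispatch while the host emit subprocess imports the entry module."""
    if os.environ.get("MVMFORGE_EMITTING") == "1":
        raise EmittingContextError(
            "RemoteFunction.remote(...) is unreachable during 'mvmforge emit'"
        )
```

A `.remote()` or `session(...)` implementation would call this guard first, so build-time imports register IR without ever booting a VM.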
Troubleshooting
mvmctl not found via MVMFORGE_MVM_BIN or PATH
Set MVMFORGE_MVM_BIN to the absolute path:
```sh
export MVMFORGE_MVM_BIN=/path/to/mvmctl
```

mvmctl invoke ... timed out after 60s
Default per-invoke timeout is 60s. Tune via MVMFORGE_INVOKE_TIMEOUT_SEC (see Environment variables). On expiry, the SDK SIGKILLs the mvmctl process group and raises MvmTransportError.
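The kill-the-whole-process-group behavior can be approximated with the stdlib. A POSIX-only sketch of the pattern, not the SDK's implementation (`MvmTransportError` and `run_with_deadline` are stand-ins):

```python
import os
import signal
import subprocess

class MvmTransportError(RuntimeError):
    """Stand-in for the SDK's transport error."""

def run_with_deadline(argv: list, timeout_sec: float) -> bytes:
    """Run a subprocess in its own process group; SIGKILL the group on expiry."""
    proc = subprocess.Popen(
        argv,
        stdout=subprocess.PIPE,
        start_new_session=True,  # new process group, so children die with it
    )
    try:
        out, _ = proc.communicate(timeout=timeout_sec)
        return out
    except subprocess.TimeoutExpired:
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)  # kill proc and children
        proc.wait()
        raise MvmTransportError(f"invoke timed out after {timeout_sec}s")
```

Killing the process group rather than the single PID matters because `mvmctl` may itself spawn children that would otherwise be orphaned.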
EmittingContextError: RemoteFunction.remote(...) is unreachable during 'mvmforge emit'
Your entry module is calling .remote() at import time. Hide the call behind a guard or move it out of the entry module — the build pipeline runs the entry module to register IR; Layer-3 calls are dev-only by design (per ADR-0010 §2).
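In practice that means the standard main-guard pattern: declarations run at import time, dev-only calls stay behind `if __name__ == "__main__"`. A schematic `app.py` with stand-in functions (no SDK imports; `register` and `dev_smoke_test` are placeholders for your decorated declarations and your `.remote()` call):

```python
def register() -> str:
    """Stand-in for the @mv.app-decorated declarations (import-time safe)."""
    return "registered"

def dev_smoke_test() -> int:
    """Stand-in for add.remote(2, 3); must NOT run at import time."""
    return 2 + 3

WORKLOAD = register()  # fine: pure declaration, runs during emit too

if __name__ == "__main__":
    # Only reached when you run `python app.py` yourself, never during emit.
    print(dev_smoke_test())
```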
MsgpackUnavailable: workload declared format='msgpack'
```sh
pip install msgpack        # Python
pnpm add @msgpack/msgpack  # TypeScript
```

Or switch to `format="json"` (the default).
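The format selection amounts to an optional-import fallback. A sketch of the pattern (function name and error text are illustrative, not the SDK's code):

```python
import json

def pick_codec(fmt: str):
    """Return (encode, decode) callables for the declared wire format."""
    if fmt == "msgpack":
        try:
            import msgpack  # optional dependency, only needed for this format
        except ImportError as exc:
            raise RuntimeError(
                "MsgpackUnavailable: workload declared format='msgpack'"
            ) from exc
        return msgpack.packb, msgpack.unpackb
    # format="json" is the default and needs nothing beyond the stdlib
    return (lambda obj: json.dumps(obj).encode()), (lambda raw: json.loads(raw))
```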
E_UNSUPPORTED_LANGUAGE at validate
You declared entrypoint.language = "<X>" where <X> isn’t in SUPPORTED_LANGUAGES. Today the allowlist is python, node. Adding a language is a one-PR change in mvmforge.
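The check itself is plain allowlist membership. A sketch (the allowlist values come from the text above; the error construction is illustrative):

```python
SUPPORTED_LANGUAGES = {"python", "node"}

def validate_language(language: str) -> None:
    """Reject entrypoint.language values outside the allowlist."""
    if language not in SUPPORTED_LANGUAGES:
        raise ValueError(
            f"E_UNSUPPORTED_LANGUAGE: {language!r} not in {sorted(SUPPORTED_LANGUAGES)}"
        )
```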
E_DEPS_REQUIRED_FOR_FUNCTION_WORKLOAD
Function entrypoints must declare dependencies=. Use mv.no_deps() for stdlib-only workloads, mv.python_deps(lockfile="uv.lock") / mv.node_deps({lockfile: "pnpm-lock.yaml"}) for third-party packages.
E_NETWORK_WILDCARD
Your mv.network(...) egress allowlist contains a wildcard host. Enumerate concrete host:port pairs:
```python
mv.egress([
    mv.host_port("api.example.com", 443),
    mv.host_port("registry.npmjs.org", 443),
])
```

Where to next
- CLI reference — full subcommand surface and exit codes.
- Generated artifact — annotated walkthrough of `flake.nix` + `launch.json` for both shapes.
- Python SDK / TypeScript SDK — full API reference.
- Workload IR — the v0 field set.
- Source bundling — what gets baked into the rootfs.
- Wrapper Security & Threat Model — trust boundaries.
- Environment variables — every variable the SDKs and host CLI read.