# Jobs & Commands
Jobs are the unit of work in Putnami. Commands expose jobs on the CLI, while the job system plans, orchestrates, caches, and executes them across projects.
## Job definitions
Jobs are declared in `putnami.extension.json` and executed as subprocesses using the JSONL event protocol:

```json
{
  "jobs": {
    "build": {
      "kind": "command",
      "command": "bun",
      "args": ["@my/extension:build"],
      "cwd": "{projectRoot}",
      "filePatterns": ["src/**/*", "package.json"],
      "cache": true
    }
  },
  "commands": {
    "build": "./build"
  }
}
```

Job properties:
| Property | Type | Description |
|---|---|---|
| `kind` | `"command"` | Discriminator for command-based jobs |
| `command` | `string` | Command to execute (e.g., `"bun"`, `"node"`) |
| `args` | `string[]` | Arguments to pass to the command |
| `cwd` | `string?` | Working directory (supports template variables) |
| `env` | `Record<string, string>?` | Environment variables |
| `output` | `"text" \| "jsonl" \| "both"?` | Output mode (default: `"jsonl"`) |
| `timeoutMs` | `number?` | Timeout in milliseconds (default: 300000) |
| `dependsOn` | `string[]?` | Job dependencies, for project-level overrides (see below) |
| `filePatterns` | `string[]?` | Glob patterns for cache invalidation |
| `cache` | `boolean?` | Whether to cache results |
| `priority` | `number?` | Higher wins when multiple extensions provide the same job (default: 0) |
| `quiet` | `boolean?` | Suppress output on success. Quiet jobs are visible while running and on failure, but hidden from the recap once they complete successfully. |
| `activationFiles` | `string[]?` | Glob patterns that must match at least one file in the project for this job to be active. |
Template variables for `cwd`, `args`, and `env`:

- `{workspaceRoot}` — Absolute path to workspace root
- `{projectRoot}` — Absolute path to project root
- `{extensionRoot}` — Absolute path to extension package
- `{outputRoot}` — Output directory for the job
- `{cacheRoot}` — Cache directory for the extension
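Conceptually, template expansion is a simple token substitution over the configured strings. A minimal sketch of the idea — `expandTemplates` and its variable map are illustrative, not Putnami's actual API:

```typescript
// Hypothetical variable map; real values are supplied by the planner.
type TemplateVars = Record<string, string>;

// Replace each "{name}" token with its value, leaving unknown tokens intact.
function expandTemplates(input: string, vars: TemplateVars): string {
  return input.replace(/\{(\w+)\}/g, (match, name: string) =>
    name in vars ? vars[name] : match,
  );
}

const vars: TemplateVars = {
  workspaceRoot: "/repo",
  projectRoot: "/repo/apps/web",
};

expandTemplates("{projectRoot}/scripts/build.ts", vars);
// → "/repo/apps/web/scripts/build.ts"
```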
## Job dependency syntax (`dependsOn`)
Dependencies are declared on pipeline steps in putnami.extension.json and resolved into a dependency tree at runtime. Steps can reference both intra-pipeline steps (by step id) and external dependencies using prefix syntax.
| Pattern | Meaning |
|---|---|
| `"stepId"` | Intra-pipeline step dependency |
| `"^job"` | Job on upstream dependencies |
| `"*job"` | Job on downstream dependents |
| `"/job"` | Workspace-level job |
Example:
```json
{
  "commands": {
    "build": {
      "run": [
        { "id": "generate", "task": "build-generate", "dependsOn": ["^build"] },
        { "id": "transpile", "task": "build-transpile", "dependsOn": ["generate"] }
      ]
    }
  }
}
```

The same syntax is also supported on `dependsOn` in project-level job overrides.
### Embedded flags
Append CLI flags after an external ref to pass arguments to the dependency:
"dependsOn": ["^build --generate"]The planner splits each string on whitespace. The first token is the job reference (with optional prefix), and the remaining tokens are forwarded as CLI arguments to the dependent job. This lets you request a specific subset of work — for example, running only the code generation phase of a build rather than the full pipeline.
Flags work with all prefixes (^, *, /) and with extension-scoped references. When multiple steps schedule the same dependency with different flags, the planner upgrades to an unrestricted execution so the dependency satisfies all dependents.
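The whitespace-splitting rule can be sketched as follows (the `parseDepRef` helper and its result shape are illustrative, not the planner's real types):

```typescript
type Prefix = "^" | "*" | "/" | "";

interface DepRef {
  prefix: Prefix;
  job: string;
  args: string[]; // embedded CLI flags forwarded to the dependency
}

// Split a dependsOn string: the first token is the (optionally prefixed)
// job reference; the remaining tokens are forwarded as CLI arguments.
function parseDepRef(ref: string): DepRef {
  const [head, ...args] = ref.trim().split(/\s+/);
  const first = head[0];
  const prefix: Prefix =
    first === "^" || first === "*" || first === "/" ? first : "";
  return { prefix, job: prefix ? head.slice(1) : head, args };
}

parseDepRef("^build --generate");
// → { prefix: "^", job: "build", args: ["--generate"] }
```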
## Orchestration and parallelism
Job orchestration happens in two phases: planning and execution.
### Planning (execution tree build)
- Resolve target projects (explicit project name(s), `--all`, `--impacted`, etc.).
- Load extension job definitions.
- Expand `dependsOn` into concrete job nodes (project + job pairs).
- Build a dependency tree (DAG) and attach a `JobContext` to each node.
### Execution
- Create a job execution session.
- Start all jobs concurrently.
- Jobs wait on their dependencies; once deps complete, they run.
- Results/events are collected and printed as a single session.
Parallelism is automatic: jobs that don't depend on each other run at the same time.
Use `--max-parallel <n>` to cap concurrency (`0` = unlimited). The default is the number of CPU cores.
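"Start all jobs concurrently, each waiting on its dependencies" maps naturally onto promises. A simplified sketch under assumed node shapes — `JobNode` and `runJobs` are illustrative, not Putnami's actual runner:

```typescript
interface JobNode {
  id: string;
  dependsOn: string[];
  run: () => Promise<void>;
}

// Start every job at once; each one first awaits its dependencies'
// promises, so independent jobs run in parallel automatically.
async function runJobs(nodes: JobNode[]): Promise<void> {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const started = new Map<string, Promise<void>>();

  const start = (id: string): Promise<void> => {
    if (!started.has(id)) {
      const node = byId.get(id)!;
      started.set(
        id,
        (async () => {
          await Promise.all(node.dependsOn.map(start));
          await node.run();
        })(),
      );
    }
    return started.get(id)!;
  };

  await Promise.all(nodes.map((n) => start(n.id)));
}
```

Capping concurrency (as `--max-parallel` does) would add a semaphore around `node.run()`; the dependency-waiting structure stays the same.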
### Execution tree cycle handling
If a dependency is already in the current scheduling stack, that edge is skipped to avoid infinite recursion. Planning continues, but the cycle indicates a configuration problem that should be fixed.
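The scheduling-stack check amounts to a depth-first walk that drops back-edges. A minimal sketch (graph shape and function names are illustrative):

```typescript
type Graph = Map<string, string[]>; // node → its dependencies

// Depth-first expansion that skips any edge pointing back into the
// current scheduling stack, so planning terminates even with cycles.
function expand(graph: Graph, root: string): string[] {
  const ordered: string[] = [];
  const visited = new Set<string>();
  const stack = new Set<string>(); // nodes currently being scheduled

  const visit = (node: string): void => {
    if (stack.has(node)) return; // cyclic edge: skip it
    if (visited.has(node)) return;
    visited.add(node);
    stack.add(node);
    for (const dep of graph.get(node) ?? []) visit(dep);
    stack.delete(node);
    ordered.push(node); // dependencies first
  };

  visit(root);
  return ordered;
}
```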
```text
Requested jobs ──► planJobSession()
                     ├─ resolve projects
                     ├─ expand dependsOn (^, *, /)
                     ├─ build job nodes
                     └─ skip cyclic edges (scheduling stack)
                     │
                     ▼
                   runJobsInSession()
                     ├─ create execution session
                     ├─ start all jobs
                     ├─ wait for deps
                     └─ collect results
```

## Disabling jobs
Jobs can be disabled at the workspace or project level. Disabled jobs are excluded during planning.
Workspace level — add `disable.jobs` to the `putnami` field in the root `package.json`:
```json
{
  "name": "my-workspace",
  "putnami": {
    "disable": {
      "jobs": ["lint"]
    }
  }
}
```

Project level — add `disable.jobs` under the `putnami` key in `package.json`:
```json
{
  "name": "@myorg/my-app",
  "putnami": {
    "disable": {
      "jobs": ["@putnami/typescript:lint"]
    }
  }
}
```

Matching rules:

- `"lint"` — disables all jobs named `lint`, regardless of extension.
- `"@putnami/typescript:lint"` — disables only that extension's `lint` job. Other extensions providing `lint` are unaffected.
Project-level and workspace-level settings are additive.
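The two matching rules can be sketched as a small predicate — `JobId` and `isDisabled` are illustrative names, not Putnami's internals:

```typescript
// A job is identified by its providing extension and its name,
// e.g. extension "@putnami/typescript", name "lint".
interface JobId {
  extension: string;
  name: string;
}

// An entry like "lint" disables every job with that name; an entry like
// "@putnami/typescript:lint" disables only that extension's job.
function isDisabled(job: JobId, disabled: string[]): boolean {
  return disabled.some((entry) => {
    const idx = entry.lastIndexOf(":");
    if (idx === -1) return entry === job.name; // bare name: match any extension
    return (
      entry.slice(0, idx) === job.extension && entry.slice(idx + 1) === job.name
    );
  });
}
```

Because the lists are additive, a planner would check the union of the workspace-level and project-level entries.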
## Overriding extension jobs
A project can override extension-provided jobs by defining them in its `.putnamirc.json` under the `jobs` key. Override jobs use the same schema as extension job definitions and take precedence over extension-provided jobs for that project only.
```json
{
  "name": "my-custom-app",
  "extensions": ["@putnami/typescript"],
  "jobs": {
    "build": {
      "kind": "command",
      "command": "bun",
      "args": ["run", "{projectRoot}/scripts/build.ts"],
      "cwd": "{projectRoot}",
      "dependsOn": ["^build"],
      "filePatterns": ["src/**/*.ts", "scripts/**/*.ts"]
    }
  }
}
```

When a project overrides a job:
- The project acts as its own job provider — `{extensionRoot}` resolves to `{projectRoot}`.
- The override only affects that specific project, not other projects.
- Job dependencies (`dependsOn`) are resolved normally.
- You can also define entirely new job names that don't exist in any extension.
- Disabled jobs (`disable.jobs`) are still respected for override jobs.
## Job status and skipping
Jobs return one of: `OK`, `FAILED`, or `SKIP` (and may be shown as cached when served from cache).
Common skip cases:
- A dependency failed → the dependent job returns `SKIP`.
- `--skip-ci` is set and `CI=true` → the job is not scheduled.
- The job decides to no-op in its own implementation.
- The job is listed in `disable.jobs` at the workspace or project level.
## Project scope resolution
Project selection is handled by `ProjectJobOptions` and resolves to a set of projects before planning:
| Input | Resolution |
|---|---|
| `.` | Current project (if inside a project directory) |
| `project-name` | Single project |
| `proj1,proj2` | Multiple projects |
| `*` or `--all` | All projects |
| `[impacted]` or `--impacted` | Git-impacted projects |
| `^project-name` | Project and its dependents |
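The table above is essentially a small parser over the target string. A sketch under assumed names (`Scope` and `parseTarget` are illustrative):

```typescript
type Scope =
  | { kind: "current" }
  | { kind: "all" }
  | { kind: "impacted" }
  | { kind: "projects"; names: string[]; withDependents: boolean };

// Map the CLI target syntax onto a scope value.
function parseTarget(input: string): Scope {
  if (input === ".") return { kind: "current" };
  if (input === "*") return { kind: "all" };
  if (input === "[impacted]") return { kind: "impacted" };
  if (input.startsWith("^"))
    return { kind: "projects", names: [input.slice(1)], withDependents: true };
  return { kind: "projects", names: input.split(","), withDependents: false };
}
```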
### Default behavior
- If run from a project directory: default target is `.`
- If run from the workspace root: default target is `[impacted]`
### Impacted projects (touched files)
`--impacted` resolves projects based on git changes compared to the main branch. A project is considered impacted if:
- It has changed files, or
- It depends on a project with changed files
This resolution is used both by CLI commands (`--impacted`) and by APIs like `ProjectsService.listImpactedProjects()`.
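The "has changed files, or depends on a changed project" rule is a transitive reachability check over the dependency graph. A sketch under assumed shapes (`DepGraph` and `listImpacted` are illustrative, not the `ProjectsService` API):

```typescript
// project name → names of projects it depends on
type DepGraph = Map<string, string[]>;

// A project is impacted if it has changed files, or if any project it
// depends on (transitively) has changed files.
function listImpacted(graph: DepGraph, changed: Set<string>): Set<string> {
  const impacted = new Set<string>();

  const isImpacted = (project: string, seen: Set<string>): boolean => {
    if (impacted.has(project) || changed.has(project)) return true;
    if (seen.has(project)) return false; // guard against cycles
    seen.add(project);
    return (graph.get(project) ?? []).some((dep) => isImpacted(dep, seen));
  };

  for (const project of graph.keys()) {
    if (isImpacted(project, new Set())) impacted.add(project);
  }
  return impacted;
}
```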
## Caching and file pattern resolution
Caching is driven by job inputs and file patterns:
- The planner attaches `filePatterns` from `putnami.extension.json` to the job context.
- During execution, the runner computes a project state hash by scanning the project directory for those patterns.
- Cache keys include:
  - Job identity (plugin, job, project)
  - Job parameters
  - Dependent job hashes
  - Project state hash (if `filePatterns` are provided)
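The cache-key inputs listed above can be pictured as one combined digest. A minimal sketch (the `CacheKeyInput` shape and `cacheKey` helper are illustrative, not the real cache implementation):

```typescript
import { createHash } from "node:crypto";

const sha256 = (parts: string[]): string =>
  createHash("sha256").update(parts.join("\0")).digest("hex");

interface CacheKeyInput {
  plugin: string;
  job: string;
  project: string;
  params: string[];
  depHashes: string[]; // hashes of dependent jobs' results
  projectStateHash?: string; // hash over files matching filePatterns
}

// Combine job identity, parameters, dependency hashes, and the project
// state hash into a single key; changing any input changes the key.
function cacheKey(input: CacheKeyInput): string {
  return sha256([
    input.plugin,
    input.job,
    input.project,
    ...input.params,
    ...input.depHashes,
    input.projectStateHash ?? "",
  ]);
}
```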
### Flags
- `--no-cache` skips reading from cache, but still writes results.
- Use `putnami cache clean` to delete cached data.
If any file matched by `filePatterns` changes, the project state hash changes and the cache is invalidated.
## Watch mode
The `--watch` (`-w`) flag enables pipeline-aware watch mode. It runs the full job pipeline, then watches all source files across the dependency graph for changes. When files change, running jobs are aborted and the pipeline is re-executed. The cache system automatically skips unchanged jobs, so only affected work is repeated.
```sh
# Watch and re-run build on changes
putnami build --all --watch

# Watch a serve pipeline (generate → build → serve)
putnami serve my-app --watch

# Watch tests for impacted projects
putnami test --impacted --watch
```

Watch mode:
- Monitors all projects involved in the execution plan
- Classifies changes across the dependency graph to determine which jobs are affected
- Aborts long-running processes (e.g., `serve`) and waits for port release before restarting
- Re-plans the pipeline on each change (picks up new dependencies if `package.json` was edited)
### Multi-service watch sessions
When running `putnami serve`, watch mode automatically discovers and starts all runtime service dependencies alongside the primary target. Services are discovered via the `putnami.runsWith` config in `package.json`, or by walking the dependency graph for projects with a `./serve` export.
Multi-service sessions use a two-phase architecture:
- Build phase — runs all build/generate jobs through the execution runner (blocking)
- Serve phase — starts each service as an independent process with its own port
On file change, only affected services are selectively restarted. Unaffected services keep running.
```sh
# Automatically discovers and starts all services for web-app
putnami serve web-app

# Disable multi-service discovery
putnami serve web-app --no-services
```

See TypeScript — serve for full details on multi-service mode, port allocation, and conflict resolution.
## Command naming scheme
Putnami uses a consistent naming scheme for commands across all extensions:
| Command | Scope | Target | Description |
|---|---|---|---|
| `build` | project | artifacts | Compile and bundle |
| `test` | project | - | Run tests |
| `lint` | project | - | Format and lint code |
| `publish` | project | registry/assets | Publish to npm, docker registries, and binary asset storage |
| `deploy` | project | environment | Deploy to server, cloud, or k8s |
| `release` | workspace | git | Changelog, git tag, version bump |
Notes:
- `publish` without flags → pre-release (e.g., `1.0.0-abc1234` with `canary` tag on main, `dev` on branches)
- `publish --stable` → stable release (e.g., `1.0.0` with `latest` tag), plus stable asset tag manifests
- `release --promote` → changelog, git tag, version bump (after publishing)
## Quick examples
```sh
# Build all projects
putnami build --all

# Build only impacted projects
putnami build --impacted

# Run multiple jobs in a single session
putnami lint,test,build --impacted

# Run a job for two specific projects
putnami test ui,web

# Run a job for a project and its dependents
putnami build ^web

# Publish pre-release versions
putnami publish --all

# Publish stable versions
putnami publish --all --stable

# Finalize a release (changelog, tag, bump)
putnami release --promote
```