
next-steps

Post-command guidance for agents and humans. Emits structured next-step hints to stderr: NDJSON for agents, a formatted block for humans. Chains with json-mode.

Install

next-steps demo

NPM dependencies

  • picocolors

Block dependencies

  • json-mode

Size

95 LOC

The problem

Agents operating CLIs have two jobs: read the output, then decide the next command. Job one is easy: parse stdout. Job two is hard: how does the agent know which command to try next?

The usual answer is "stuff a manual into the system prompt." That approach has three failure modes:

  • Stale: the prompt drifts out of sync with the CLI's actual commands
  • Expensive: every agent invocation pays for the whole manual
  • Lossy: the agent has to re-derive "what makes sense next" from first principles, every time

The next-steps block replaces that with structured post-command hints written to stderr as NDJSON. The CLI tells the agent what to try next. The agent reads it. No prompt engineering.

Quickstart

import { emit } from "@/cli/agent/json-mode";
import { emitNextSteps } from "@/cli/agent/next-steps";
 
program.command("list").action(async (opts) => {
  const items = await fetchItems();
  emit(items, opts);
 
  emitNextSteps(
    [
      { command: "myapp show <id>", description: "see details for one item" },
      { command: "myapp quote <id>", description: "live price" },
      {
        command: "myapp export --format csv",
        description: "export all items",
        optional: true,
      },
    ],
    opts,
  );
});
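With `--json`, each hint lands on stderr as one JSON object per line, tagged with `type: "next-step"` so agents can tell it apart from other stderr content. A sketch of the line shape, mirroring the block's emission logic without importing it:

```typescript
type NextStep = { command: string; description: string; optional?: boolean };

// Mirror of the block's json-mode emission: tag each step, one line each.
const toLine = (step: NextStep): string =>
  JSON.stringify({ type: "next-step", ...step });

const line = toLine({
  command: "myapp show <id>",
  description: "see details for one item",
});
// → {"type":"next-step","command":"myapp show <id>","description":"see details for one item"}

// A consumer round-trips it with plain JSON.parse:
const hint = JSON.parse(line);
```

The `type` tag is the contract: anything else on stderr (logs, warnings) simply lacks it and gets filtered out.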

The agent loop

import { promisify } from "node:util";
import { execFile } from "node:child_process";
 
const exec = promisify(execFile);
 
const { stdout, stderr } = await exec("myapp", ["list", "--json"]);
 
const data = JSON.parse(stdout);
 
const hints = stderr
  .split("\n")
  .filter(Boolean)
  .map((line) => JSON.parse(line))
  .filter((obj) => obj.type === "next-step");
 
nextCommand(data, hints);
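The hint parsing above assumes every stderr line is JSON. Since stderr also carries "everything else" in this design (warnings, progress output), a tolerant variant that skips non-JSON lines may be safer in practice. A sketch, not part of the block:

```typescript
type Hint = {
  type: string;
  command: string;
  description: string;
  optional?: boolean;
};

// Collect next-step hints from stderr, ignoring anything that isn't
// a parseable JSON object tagged with type "next-step".
function parseHints(stderr: string): Hint[] {
  const hints: Hint[] = [];
  for (const line of stderr.split("\n")) {
    if (!line.trim()) continue;
    try {
      const obj = JSON.parse(line);
      if (obj && obj.type === "next-step") hints.push(obj);
    } catch {
      // Non-JSON stderr line (log, warning): skip it.
    }
  }
  return hints;
}
```

This way a stray `warning: slow network` on stderr degrades gracefully instead of crashing the agent loop with a SyntaxError.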

Signature

export type NextStep = {
  command: string;
  description: string;
  optional?: boolean;
};
 
export function emitNextSteps(
  steps: NextStep[],
  opts?: EmitOptions,
): void;

The discipline rule

next-steps writes to stderr only. Never stdout. This keeps stdout mono-typed as "the answer to this command" and stderr as "everything else."
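The cost of breaking the rule is concrete: if hints leaked into stdout, the agent's `JSON.parse(stdout)` would see two concatenated documents and throw. A small sketch of that failure mode, using a hypothetical mixed-output string:

```typescript
// Hypothetical stdout where a hint was (wrongly) written after the data.
const mixed =
  '[{"id":1}]\n{"type":"next-step","command":"myapp show 1","description":"details"}';

let failed = false;
try {
  JSON.parse(mixed); // throws: trailing content after the JSON document
} catch {
  failed = true; // keeping stdout mono-typed avoids exactly this
}
```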

Chained with json-mode

Installing next-steps automatically pulls json-mode via registry dependencies. The two share detectMode() so mode detection stays consistent.
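The json-mode block isn't reproduced on this page. For orientation only, here is a minimal sketch of the two shared helpers as next-steps uses them, assuming a `--json` flag plus TTY detection; the real json-mode block may differ:

```typescript
export type EmitOptions = { json?: boolean };

// Assumed shape: an explicit --json flag wins; otherwise fall back
// to TTY detection (non-TTY usually means a pipe or an agent).
export function detectMode(opts: EmitOptions = {}): "json" | "human" {
  if (opts.json) return "json";
  return process.stdout.isTTY ? "human" : "json";
}

// Assumed shape: color only when stderr is a terminal and NO_COLOR is unset.
export function shouldColor(): boolean {
  return Boolean(process.stderr.isTTY) && !process.env.NO_COLOR;
}
```

Whatever the real implementation, the point of sharing `detectMode()` is that `emit()` and `emitNextSteps()` can never disagree about which mode a single command invocation is in.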

Source

The full file, 95 lines.

This is the exact TypeScript that lands in your project when you run the install command. Read it, copy it, edit it, own it. cligentic never touches it again.

src/cli/agent/next-steps.ts
// cligentic block: next-steps
//
// Post-command guidance for agents and humans. Emit structured "what to do
// next" hints after a command completes so agents can chain operations
// without re-planning from scratch, and humans get visible next actions.
//
// Design rules:
//   1. stderr only. Never stdout — stdout is data territory (see json-mode).
//   2. NDJSON in json mode (one object per line, stream-parseable).
//   3. Formatted block with arrow bullets in human mode.
//   4. Never throws, never exits. Pure emission.
//   5. Each step is { command, description, optional? } — agents key off
//      `command`, humans read `description`.
//
// Usage:
//   import { emit } from "./json-mode";
//   import { emitNextSteps } from "./next-steps";
//
//   program.command("list").action(async (opts) => {
//     const items = await fetchItems();
//     emit(items, opts);
//     emitNextSteps([
//       { command: "myapp show <id>", description: "see details for one item" },
//       { command: "myapp export --format csv", description: "export all items", optional: true },
//     ], opts);
//   });
//
// Agent consumption (Node/Bun pseudocode):
//   const proc = spawn("myapp", ["list", "--json"]);
//   const data = JSON.parse(await readStdout(proc));       // data from emit()
//   const steps = readStderr(proc)
//     .split("\n")
//     .filter(Boolean)
//     .map(line => JSON.parse(line))
//     .filter(obj => obj.type === "next-step");             // hints from emitNextSteps()
//
//   // Agent now has both data (decisions input) and hints (what to do next).
//
// Depends on:
//   - picocolors (for human-mode coloring)
//   - json-mode block (chained via registryDependencies — provides detectMode)

import pc from "picocolors";
import { type EmitOptions, detectMode, shouldColor } from "./json-mode";

export type NextStep = {
  /** The literal command the user/agent should run next. */
  command: string;
  /** A short (2-10 word) hint for why this step matters. */
  description: string;
  /** If true, the step is suggested but not required. */
  optional?: boolean;
};

/**
 * Emits a list of next-step hints to stderr.
 *
 * In json mode: one NDJSON object per line, each tagged with `type: "next-step"`
 * so agents can distinguish them from other stderr content.
 *
 * In human mode: a formatted block with arrow bullets (→ for required,
 * ○ for optional) and dim descriptions.
 *
 * Does nothing if the steps array is empty.
 */
export function emitNextSteps(steps: NextStep[], opts: EmitOptions = {}): void {
  if (steps.length === 0) return;

  const mode = detectMode(opts);

  if (mode === "json") {
    for (const step of steps) {
      process.stderr.write(
        `${JSON.stringify({ type: "next-step", ...step })}\n`,
      );
    }
    return;
  }

  // Human mode — formatted block.
  const color = shouldColor();
  const header = color ? pc.bold("Next steps:") : "Next steps:";
  process.stderr.write(`\n${header}\n`);

  for (const step of steps) {
    const marker = step.optional ? "○" : "→";
    const cmd = color ? pc.cyan(step.command) : step.command;
    const desc = color ? pc.dim(step.description) : step.description;
    process.stderr.write(`  ${marker} ${cmd}  ${desc}\n`);
  }

  process.stderr.write("\n");
}