Reference · Terminal

lingcode CLI

Interactive REPL and one-shot ask in your terminal. Full agentic mode with Claude (tools, MCP, hooks); text-only chat against OpenAI, Groq, Together, OpenRouter, Mistral, xAI, Fireworks, Ollama, and any OpenAI-compatible endpoint.

Install

Two paths. Pick whichever you already have.

Standalone — no app required

curl -fsSL https://lingcode.dev/install-cli.sh | sh

Downloads a signed, notarized tarball to ~/.lingcode/cli/ and symlinks it to ~/.local/bin/lingcode. The bundled Node bridge ships inside, so the Claude provider works without LingCode.app installed (you still need Node.js and an API key).

From an installed LingCode.app

/Applications/LingCode.app/Contents/Resources/bin/lingcode install

The install subcommand tries /usr/local/bin first, falls back to ~/.local/bin, and prints a PATH hint if needed.
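The fallback logic can be sketched in a few lines of portable shell. This is an illustration of the documented behavior, not the actual installer; `pick_bindir` is a hypothetical helper:

```shell
# pick_bindir echoes the first usable (existing, writable) directory from its arguments.
pick_bindir() {
  for dir in "$@"; do
    if [ -d "$dir" ] && [ -w "$dir" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
  done
  return 1
}

# Demo: the first candidate does not exist, so the fallback wins.
fallback=$(mktemp -d)
target=$(pick_bindir /usr/local/bin-does-not-exist "$fallback")
echo "would symlink lingcode into $target"

# Print a PATH hint when the chosen directory is not already on PATH.
case ":$PATH:" in
  *":$target:"*) ;;
  *) echo "hint: add $target to your PATH" ;;
esac
```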

Quickstart

# One command to store a key in the Keychain (opens browser)
lingcode auth login

# Interactive session (Claude by default)
lingcode

# Interactive session with another provider
lingcode --provider ollama --model llama3.2
lingcode --provider groq --model llama-3.1-70b-versatile

# One-shot
lingcode ask "explain this error" < build.log
echo "what's 2+2?" | lingcode ask -

# Check your setup
lingcode auth status
lingcode --help

Interactive REPL

lingcode with no arguments drops you into an interactive multi-turn session. It's the same as lingcode repl, just shorter to type.

The REPL has two flavors depending on the provider: full agentic mode with Claude (tools, MCP, hooks, session resumption), and plain multi-turn chat with every other provider.

Input editor

When stdin is a TTY, you get a proper line editor:

↑ / ↓                Recall previous / next input
← / → / Home / End   Cursor movement
Tab                  Complete slash commands (builtin + custom)
Trailing \           Continue the input on the next line
Ctrl-C               Cancel running query (Claude) / abort line
Ctrl-D               Exit on empty line, otherwise delete character
Bracket-paste        Paste multi-line clipboard content as one input

Prompt

The REPL prompt shows turn count and cumulative cost once they're non-zero:

lingcode> explain this repo
…
lingcode [1t $0.003]> now suggest a refactor
…
lingcode [2t $0.011]>

Providers

Pick with --provider; override model, base URL, or API key env var as needed.

Provider           Env var              Default model                  Tools?
claude (default)   ANTHROPIC_API_KEY    SDK default                    yes
deepseek           DEEPSEEK_API_KEY     deepseek-chat                  no
openai             OPENAI_API_KEY       gpt-4o-mini                    no
groq               GROQ_API_KEY         llama-3.1-70b-versatile        no
together           TOGETHER_API_KEY     Llama-3.3-70B-Instruct-Turbo   no
openrouter         OPENROUTER_API_KEY   anthropic/claude-sonnet-4      no
mistral            MISTRAL_API_KEY      mistral-large-latest           no
xai                XAI_API_KEY          grok-2-latest                  no
fireworks          FIREWORKS_API_KEY    llama-v3p1-70b-instruct        no
ollama             — (none)             llama3.2                       no
deepseek-compat    DEEPSEEK_API_KEY     deepseek-chat                  no

Any OpenAI-compatible endpoint

lingcode --provider openai \
  --base-url https://my-proxy.example.com/v1 \
  --api-key-env MY_KEY_VAR \
  --model gpt-4o

Tool use, MCP, hooks, and session resumption are Claude-only — those features depend on the Agent SDK. Everything else in the REPL (slash commands, markdown rendering, token display, history, tab completion) works across all providers.

One-shot ask

# Simple
lingcode ask "summarise this repo's architecture"

# Pipe content in
cat error.log | lingcode ask "explain this"

# Read the entire prompt from stdin
echo "what is 2+2?" | lingcode ask -

# Switch provider
lingcode ask --provider groq "quick: what does this regex do? /\d{3}-\d{4}/"

# Claude with full tool use, auto-approved
lingcode ask --provider claude --yolo "add a .gitignore entry for .DS_Store"

# Attach a file
lingcode ask --provider claude --file screenshot.png "what's wrong in this UI?"

# Structured output (good for scripts)
lingcode ask --output-format stream-json "fix the failing test in main.swift"
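For scripting, stream-json output is line-oriented, which makes it easy to filter in a pipeline. A minimal sketch, assuming one JSON object per line tagged with a "type" field (the exact event schema is an assumption; a real script would parse the JSON with jq rather than grep):

```shell
# keep_results passes through only the lines whose "type" is "result".
keep_results() { grep '"type":"result"'; }

# Fed with a fake transcript for illustration:
printf '%s\n' \
  '{"type":"assistant","text":"working on it"}' \
  '{"type":"result","text":"done: test fixed"}' \
  | keep_results
```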

Useful flags

--continue                              Resume the most recent Claude session in this directory
--resume <id>                           Resume a specific session by ID
--file <path>                           Attach a file (image, PDF, text). Repeatable. Claude only.
--allowed-tools Read,Write              Allow only these tools (Claude only)
--disallowed-tools Bash                 Disallow these tools
--add-dir ~/other                       Extra directory the agent can touch. Repeatable.
--thinking                              Enable Claude's extended thinking
--system-prompt "..."                   Override the system prompt
--append-system-prompt "..."            Append to the system prompt
--output-format text|json|stream-json   Output shape
--output path/out.md                    Write the response to a file
--verbose                               Don't truncate tool input/output
--yolo                                  Auto-allow every tool call. Dangerous.

Slash commands

Type these at the REPL prompt.

All providers

/help            List commands
/model <name>    Switch model mid-session
/reset           Clear conversation (Claude: also starts a fresh session)
/cost            Cumulative token usage
/export [path]   Save transcript as markdown
/clear           Clear screen
/quit            Exit

Claude-only

/mode <mode>   Change permission mode (default / acceptEdits / plan / dontAsk / bypassPermissions)
/yolo          Bypass all permission prompts
/compact       Summarise and compress conversation history
/session       Show current session ID
/tools         List available built-in tools
/commands      List custom slash commands
/doctor        Diagnose environment (node, API key, bridge, MCP, hooks)
/init          Generate CLAUDE.md for the current project

OpenAI-compatible only

/system <text>   Replace the system prompt for this session

Custom

Put a Markdown file at .claude/commands/<name>.md and invoke it as /project:<name>. Files at ~/.claude/commands/ are invoked as /user:<name>. Bare /<name> checks project first, then user. Any text after the command name is appended to the body.

$ echo "Explain the changes staged in git in plain English." > .claude/commands/explain-staged.md
$ lingcode
lingcode> /project:explain-staged
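The documented lookup order for a bare /<name> can be sketched as follows; `resolve_command` is a hypothetical helper, not part of the CLI:

```shell
# resolve_command prints the file backing a bare /<name>: project copy first, then user copy.
resolve_command() {
  name=$1
  for f in ".claude/commands/$name.md" "$HOME/.claude/commands/$name.md"; do
    if [ -f "$f" ]; then
      printf '%s\n' "$f"
      return 0
    fi
  done
  return 1
}
```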

Auth & API keys

Three sources, checked in this order: environment variable, macOS Keychain, config file. Any one is enough.

# Browser-assisted: opens the provider console, prompts for the key, stores in Keychain
lingcode auth login
lingcode auth login --provider deepseek

# Direct
lingcode auth set anthropic sk-ant-...
lingcode auth set deepseek sk-...

# See what's configured (values masked)
lingcode auth status

# Remove a key
lingcode auth delete anthropic

About OAuth: Anthropic does not issue OAuth client IDs to third-party CLIs, so a true /login OAuth flow is not possible. lingcode auth login is the practical replacement — it opens the console in your browser, waits for you to paste the key, and stores it.

Other providers (openai, groq, together, openrouter, mistral, xai, fireworks, ollama) don't use the Keychain path — just set their environment variable in your shell:

export OPENAI_API_KEY=sk-...
export GROQ_API_KEY=gsk-...
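The lookup order (environment variable, then Keychain, then config file) can be sketched in shell. The Keychain service name and the config-file key name here are assumptions, and the grep-style JSON read is a stand-in for real parsing:

```shell
# resolve_api_key walks the documented order: env var, Keychain, config file.
resolve_api_key() {
  var=$1 service=$2 config_key=$3
  # 1. Environment variable wins outright.
  eval "val=\${$var:-}"
  if [ -n "$val" ]; then printf '%s\n' "$val"; return 0; fi
  # 2. macOS Keychain (skipped where `security` is unavailable).
  if command -v security >/dev/null 2>&1; then
    security find-generic-password -s "$service" -w 2>/dev/null && return 0
  fi
  # 3. Config file, read here with a crude sed; the real CLI parses JSON properly.
  sed -n "s/.*\"$config_key\" *: *\"\([^\"]*\)\".*/\1/p" "$HOME/.lingcode/config.json" 2>/dev/null
}
```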

MCP servers, hooks, custom commands

Compatible with Claude Code's project conventions.

MCP servers

Put a .mcp.json at the project root (or ~/.claude.json globally):

{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}

Override with --mcp-config <path>. Disable with --no-mcp. MCP is Claude-only.

Hooks

Put hooks in .claude/settings.json. They run as shell commands with CLAUDE_TOOL_NAME, CLAUDE_TOOL_INPUT, and (for PostToolUse) CLAUDE_TOOL_RESULT in the environment:

{
  "hooks": {
    "PreToolUse": [
      { "matcher": "Write|Edit", "hooks": [{ "type": "command", "command": "echo about to edit: $CLAUDE_TOOL_INPUT" }] }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "say done" }] }
    ]
  }
}
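Hook bodies are plain shell, so a guard is easy to write. A sketch of a PreToolUse handler, shown as a function for illustration; in practice the body lives in the "command" string or its own script, and treating a nonzero exit as a block is an assumption borrowed from Claude Code's hook convention:

```shell
# guard_hook refuses Write/Edit calls whose input mentions a .env file.
guard_hook() {
  case $CLAUDE_TOOL_NAME in
    Write|Edit)
      case $CLAUDE_TOOL_INPUT in
        *".env"*)
          echo "blocked: refusing to touch .env" >&2
          return 2 ;;
      esac ;;
  esac
  return 0
}
```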

Custom slash commands

Markdown files under .claude/commands/ (project) or ~/.claude/commands/ (user). Same format as Claude Code — the file body is the prompt.

Talking to the running app

When LingCode.app is open, these commands drive it over a Unix socket at ~/Library/Application Support/LingCode/ipc.sock:

Command                            What it does
lingcode ping                      Check whether the app is running and responsive.
lingcode open path/to/file.swift   Open a file or folder in the app (launches it if needed).
lingcode status                    Focused project and any active agent runs.
lingcode ask "..."                 Route the prompt through the app; stream its answer to stdout.
lingcode watch                     Tail agent activity in real time. Ctrl-C to stop.

Pass --headless to ask to force local execution even with the app open.

Subcommand reference

Subcommand                 Purpose
repl (default)             Interactive multi-turn session; lingcode with no args.
ask                        One-shot prompt; supports streaming, JSON output, attachments, resume.
auth                       login / set / get / delete / status against the macOS Keychain.
config                     Read or write ~/.lingcode/config.json — defaults, stored keys.
init                       Generate a CLAUDE.md for the current project by analysing the repo.
history                    List past sessions from ~/.lingcode/history.jsonl.
completion zsh|bash|fish   Print a shell completion script.
ping                       IPC probe for the running app.
open                       Open a file or folder in the running app.
status                     App focus / agent status.
watch                      Stream live agent activity.
install                    Symlink lingcode onto your PATH.

Troubleshooting

lingcode: ANTHROPIC_API_KEY is not set

Fix with lingcode auth login, or export in your shell, or run lingcode config set anthropic-api-key sk-ant-....

lingcode: <PROVIDER_API_KEY> is not set

OpenAI-compatible providers use plain env vars. For custom names: lingcode --provider openai --api-key-env MY_KEY.

lingcode: couldn't find Claude agent bridge

The Claude provider needs bridge.mjs, which ships inside LingCode.app or the standalone CLI tarball. Either install LingCode.app at /Applications, or point at a dev copy with export LINGCODE_AGENT_BRIDGE_DIR=/path/to/agent-bridge.

lingcode: cannot find `node` on PATH

Install Node.js (nodejs.org), reopen your shell, then retry. Or set NODE to an explicit path. Only needed for the Claude provider; OpenAI-compatible providers use pure Swift.

Rate-limited / 429 / overloaded

Both ask and repl retry automatically with exponential backoff (up to 5 attempts, up to 60s apart). You'll see rate limited (attempt N/5) — retrying in Xs… on stderr.
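The schedule can be sketched as a doubling delay capped at the documented 60-second maximum; the 1-second base is an assumption:

```shell
# backoff_delay computes the wait before a given attempt: 1, 2, 4, 8, 16 … capped at 60.
backoff_delay() {
  attempt=$1
  delay=$(( 1 << (attempt - 1) ))
  if [ "$delay" -gt 60 ]; then delay=60; fi
  printf '%s\n' "$delay"
}

for n in 1 2 3 4 5; do backoff_delay "$n"; done   # one delay per line: 1 2 4 8 16
```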

The app responds, but I wanted headless

lingcode ask routes through the app by default when it's open. Pass --headless to force local execution.

I pasted an API key into my terminal

Rotate it at the provider's dashboard. Anything in your shell history or tmux scrollback should be treated as leaked. Prefer lingcode auth login (browser paste) or lingcode auth set <provider> <key> — neither writes the key to stdout.