Get Help

Support

Report bugs, ask questions, or get in touch. We aim to respond within 48 hours.

Report Issues

Bug reports & feature requests

Open an issue on GitHub:

github.com/Xavierhuang/LingCode/issues

Please include: macOS version, LingCode version, steps to reproduce, and any error messages.

Documentation

README & guides

View README for installation and usage. Skills guide for slash commands.

Product overview and download are on the LingCode home page. The Features page covers the full Mac and iPad feature set, including multiple Claude Code sessions and iPad remote control.

Discussions

Questions & community

GitHub Discussions for questions, ideas, and sharing.

FAQ

Detailed answers to the questions people actually ask before and after trying LingCode. Each section is a short explanation of how the feature works and why it matters, not just a yes/no answer. Jump to Getting started · Privacy & data · Work protection · What the agent sees and does · Local models & offline · LingCode vs other tools · Troubleshooting.

Getting started

How do I set up my API key?

On first launch, LingCode prompts you to pick a provider and paste a key. You can also open Settings (⌘,) and go to AI & Providers at any time. Supported providers include Anthropic, OpenAI, DeepSeek, Gemini, Kimi (global or China), Qwen — or a local Ollama endpoint for zero-cost offline inference.

Keys are stored in the macOS Keychain, encrypted at rest by the OS. If you have iCloud Keychain enabled, keys will sync to your other Macs automatically over Apple's end-to-end encrypted channel. LingCode never writes keys to a plaintext file and never routes them through our servers — they go straight from Keychain to the provider's API over HTTPS.

If you have multiple providers configured, you can switch between them per chat session or per model, or set a default. You can also run different AI agents (Claude Code, Codex, Gemini CLI) in separate tabs, each pointing at its own provider.

Is LingCode free? Can I use it commercially?

Yes to both. The LingCode app is free to download and use for any purpose — personal, commercial, at work, for clients, in production. There's no Pro tier, no seat license, no "free for open source only" clause.

You pay the AI provider directly for inference (your Anthropic, OpenAI, or Gemini API bill), or $0 if you run Ollama locally. We don't take a cut, mark up API calls, or cache your prompts for resale. This is structurally different from Cursor's $20/month subscription model — LingCode is free because the business model doesn't depend on an AI proxy layer.

What are the system requirements?

OS: macOS 15 (Sequoia) or later. CPU: Apple Silicon (M1 and newer) or Intel. Disk: about 120 MB for the app itself, plus whatever your Ollama models take if you use local inference (Qwen 2.5 Coder 32B is about 20 GB quantized; smaller models are 4–10 GB). RAM: 8 GB minimum for cloud-only use, 16 GB comfortable, 32 GB+ if you plan to run 30B+ local models.

No Rosetta is needed on Apple Silicon — LingCode is fully native ARM64. The app is notarized and signed with Developer ID, so Gatekeeper lets it launch without warnings.

Where are keyboard shortcuts, and can I change them?

Open Settings (⌘,) and go to Keyboard shortcuts. You can customize shortcuts for the main menu and toolbar actions; saving applies immediately, no restart needed. If two commands share a shortcut (for example Quick Open and Print on ⌘P), change one to avoid conflicts.

Built-in defaults (before you customize):

  • File & navigation: Open folder ⌘⇧O, Open file ⌘O, Save ⌘S, Save as ⌘⇧S, New window ⌘⇧N, Quick open ⌘P, Global search ⌘⇧F
  • Editor: Find ⌘F, Find & replace ⌥⌘F, Go to line ⌃G, Go to definition ⌘D, Find references ⌘⇧R, Back / Forward after jumps ⌃- / ⌃⇧-
  • Panels & AI: Toggle bottom panel ⌘J, Toggle AI panel ⌘⇧L, Command palette ⌘⇧P, Agent mode ⌘I, AI inline edit ⌘K, Refactor ⌘⇧E
  • Integrations: Claude Code open ⌥⌘L, New Claude session ⇧⌥⌘L; Codex ⌥⌘X / ⇧⌥⌘X; New terminal ⇧⌃`

Privacy & data

Is my code sent to LingCode's servers?

No. LingCode is direct-to-provider: prompts and file snippets go from your Mac straight to Anthropic, OpenAI, Gemini, or your local Ollama endpoint over TLS. There is no LingCode proxy, no intermediate cache, no embedding server, and no telemetry.

You can verify this yourself: open Activity Monitor's Network tab while using the agent, or run nettop -J bytes_in,bytes_out -p LingCode in Terminal. Every outbound TLS connection goes to a named provider API (api.anthropic.com, api.openai.com, generativelanguage.googleapis.com, etc.), not to any LingCode-owned domain.

This is an architectural choice. Because we don't run a proxy, there is no LingCode infrastructure that could be hacked, subpoenaed, or made to accidentally log your prompts. Your code only ever transits networks you chose (your ISP → the provider).

Do you train models on my code or prompts?

No. LingCode has no training pipeline, no dataset collection, and no analytics of any kind. We don't see your prompts because they never touch our servers.

Whatever privacy terms apply to your prompts are the provider's terms. Anthropic does not train on API inputs by default. OpenAI does not train on API inputs by default (different from ChatGPT consumer). Gemini's API has its own policy. Most providers offer enterprise or zero-retention modes for stricter guarantees — configure those in your provider console and LingCode will use them automatically, since it calls the same API endpoints.

If you want zero data retention anywhere: use Ollama with a local model. Prompts stay on your Mac and do not cross any network.

Where are my API keys stored?

In the macOS Keychain, encrypted at rest by the OS and accessible only to LingCode. We never write keys to a plaintext file, never sync them through a cloud service of ours, and never log them.

If you enable iCloud Keychain, keys sync to your other Macs via Apple's end-to-end encrypted channel; LingCode is not involved in that sync. This is one of the specific things Electron-based editors like Cursor cannot do: the Keychain is exposed through Security.framework, a native macOS framework that Electron apps cannot link against directly.

To revoke: delete the key in Settings → AI & Providers (removes from Keychain) and rotate it in the provider console. Takes 30 seconds.

What telemetry does LingCode collect?

Zero. No analytics, no usage pings, no feature flags phoning home, no crash telemetry, no A/B experiments. The app does not make any outbound connection that isn't either (a) an AI provider API you configured or (b) a software update check to our notarized update feed.

If you opt in to Apple's MetricKit at the OS level, Apple may collect crash and performance reports and share them with developers — those go to Apple (not to us) and only fire on crashes. This is transparent and controllable in System Settings → Privacy & Security → Analytics & Improvements.

Is there a proxy layer between me and the AI provider?

No. Your prompt goes from LingCode's process memory, through macOS's TLS stack, directly to the provider's public API. No intermediate server, no rewrites, no prompt injection, no shared rate-limit pool, no cached embeddings.

This is structurally different from Cursor and similar products, which route all AI traffic through their own servers (their "Cursor API") that sits in front of Anthropic/OpenAI. Their proxy lets them do useful things (fast retrieval, Tab completion tuning, context packing) but also means your code transits a third party's infrastructure and your prompts are subject to their retention policy, not just the model provider's.

Work protection — what if the agent breaks something?

What happens if the agent corrupts my project.pbxproj or Info.plist?

Nothing is lost. Before the agent writes a byte to project-critical files (project.pbxproj, Info.plist, build.gradle, AndroidManifest.xml, *.entitlements, *.xcscheme), LingCode copies the original to ~/Library/Application Support/LingCode/WorkProtection/ with a timestamp.

Three ways to open the Recovery panel; pick whichever fits your flow:

  1. Keyboard shortcut: ⌃⌘R from anywhere in the app. Remappable in Settings → Keyboard shortcuts.
  2. Menu: View → Work Protection Recovery…
  3. Toolbar button: the shield icon labeled "Recover". It ships available-but-not-default so the toolbar stays uncluttered — right-click the toolbar → Customize Toolbar… and drag it in if you want it one click away.

Any of the three opens the panel as a sheet showing all snapshots with timestamps, previews of the saved file, and a Restore button on each. Because snapshots are independent of git, this works even if you had not committed before the agent edited.

Why we built this: Xcode project files and Android manifests are notorious for breaking silently under generated edits. The agent might introduce a trailing comma, a mis-nested dict, or a duplicate UUID that doesn't surface until your next build fails in CI. Pre-edit snapshots let you time-travel out of those problems without hunting through git history.
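The copy-aside step is simple enough to sketch. Here is a minimal Python version, assuming a timestamped snapshot directory; the suffix list, paths, and function names are illustrative, not LingCode's actual code:

```python
import shutil
import time
from pathlib import Path
from typing import Optional

# Suffixes treated as project-critical (a subset, for illustration)
PROTECTED_SUFFIXES = {".pbxproj", ".plist", ".gradle", ".entitlements", ".xcscheme"}

def snapshot_before_write(target: Path, snapshot_dir: Path) -> Optional[Path]:
    """Copy a project-critical file aside, timestamped, before any write."""
    if target.suffix not in PROTECTED_SUFFIXES:
        return None  # not project-critical: no snapshot needed
    snapshot_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = snapshot_dir / f"{target.name}.{stamp}"
    shutil.copy2(target, dest)  # copy2 preserves timestamps and permissions
    return dest
```

The key property is that the snapshot happens before the write, so even an edit that corrupts the file in a way no parser catches still leaves a clean copy to restore from.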

How do I undo a multi-file refactor the agent did?

Use semantic time-travel undo. LingCode maintains a separate undo stack for AI operations, independent of the editor's per-keystroke undo. ⌘Z in the chat panel rewinds one full agent step, which may span 40 files, in a single action.

The practical consequence: you can keep typing after the agent finishes, and still rewind just the agent's changes without losing your own edits since then. Regular editor undo would roll everything back in keystroke order and lose your work. Semantic undo is scoped to the agent operation, not your cursor position.

For even larger rewinds, the worktree isolation workflow (next question) is a better fit.
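Conceptually, a semantic undo stack records one entry per agent operation, where each entry holds the before-images of every file that operation touched. A toy sketch of the idea (not LingCode's implementation):

```python
class SemanticUndoStack:
    """One undo entry per agent operation, regardless of how many files it spans."""

    def __init__(self):
        self._stack = []  # list of {path: previous_contents} dicts

    def record_operation(self, before_images: dict[str, str]) -> None:
        """Called once per agent step with the pre-edit contents of every touched file."""
        self._stack.append(before_images)

    def undo(self, files: dict[str, str]) -> int:
        """Rewind the most recent agent step; returns how many files were restored."""
        if not self._stack:
            return 0
        before = self._stack.pop()
        for path, contents in before.items():
            files[path] = contents  # files the agent never touched keep your newer edits
        return len(before)
```

Because the entry is keyed by file, restoring it only rewrites the files the agent touched; your own edits to other files since then are left alone.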

Can the agent work on risky experiments without touching my main code?

Yes — worktree isolation. Ask the agent something like "create a worktree for this and try the Swift 6 concurrency migration there" and it will:

  1. Create an isolated git worktree off your current branch
  2. Work against that worktree — all edits happen in a separate directory on disk
  3. Run tests, builds, and verifications inside the worktree
  4. Report back with a summary when done

Your main checkout stays untouched. You can then: merge the branch into your current one, keep the branch for later review, or discard it entirely — all with one click.

When to use: framework migrations, dependency upgrades, experimental refactors, and anything where you want the agent to try something bold but aren't sure how it'll land.
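Under the hood, step 1 maps onto a plain git worktree command. A sketch of that step, shelling out to git (the branch and directory naming here are illustrative):

```python
import subprocess
from pathlib import Path

def create_isolated_worktree(repo: Path, branch: str) -> Path:
    """Create a new branch and check it out into a sibling directory,
    leaving the main checkout untouched."""
    worktree_dir = repo.parent / f"{repo.name}-{branch}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(worktree_dir)],
        check=True, capture_output=True,
    )
    return worktree_dir
```

Because a worktree is a full checkout backed by the same repository, merging or discarding the experiment afterwards is just a normal branch merge or `git worktree remove`.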

Do I have to accept every AI edit in bulk?

No. Every AI edit appears as a per-file diff card with syntax-highlighted hunks. Each hunk has its own Accept and Revert buttons. You can keep four changes in a file and reject the fifth without touching the others.

This is the opposite of one-big-apply workflows. The per-hunk view is what makes large AI refactors actually reviewable — it's easier to scan 40 small changes than to compare two versions of a 2000-line file.
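Splitting an edit into independently acceptable hunks can be sketched with Python's standard difflib; this illustrates the idea, not LingCode's actual diff engine:

```python
import difflib

def apply_hunks(before: str, after: str, accept: set[int]) -> str:
    """Rebuild a file, taking accepted hunks from `after` and keeping
    rejected hunks as they were in `before`."""
    a, b = before.splitlines(), after.splitlines()
    # get_grouped_opcodes yields one group of opcodes per hunk
    groups = list(difflib.SequenceMatcher(None, a, b).get_grouped_opcodes(n=0))
    out, cursor = [], 0
    for i, group in enumerate(groups):
        first, last = group[0], group[-1]
        out.extend(a[cursor:first[1]])       # unchanged lines before this hunk
        if i in accept:
            out.extend(b[first[3]:last[4]])  # accepted: take the AI's version
        else:
            out.extend(a[first[1]:last[2]])  # rejected: keep the original lines
        cursor = last[2]
    out.extend(a[cursor:])                   # unchanged tail of the file
    return "\n".join(out)
```

Accepting hunk 0 while rejecting hunk 1 produces a file that mixes the AI's first change with your original text for the second, which is exactly the per-hunk Accept/Revert behavior described above.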

There's also a read-before-write rule: the agent cannot edit a file it hasn't read first. This prevents "blind overwrite" bugs where an agent edits a file based on a filename alone.
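The rule itself can be sketched as a tiny guard object (illustrative only, not the actual implementation):

```python
class ReadBeforeWriteGuard:
    """Refuse writes to any file the agent has not read in this session."""

    def __init__(self):
        self._read: set[str] = set()

    def mark_read(self, path: str) -> None:
        """Record that the agent has read this file's current contents."""
        self._read.add(path)

    def check_write(self, path: str) -> None:
        """Raise if the agent tries to edit a file it never read."""
        if path not in self._read:
            raise PermissionError(f"agent must read {path} before editing it")
```

The effect is that every edit is grounded in the file's actual current contents, never in what the model guesses from the filename.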

What the agent can see and do

Can the AI agent really see my live build errors?

Yes — as structured data, not scraped stdout. LingCode's agent calls BuildLogService directly as an in-process Swift method. It reads the current build log's errors, warnings, and diagnostic objects with file paths and line numbers, straight from the build system.

Ask "why did the last build fail?" and the agent reads the actual error list and points at the offending code. No copy-pasting stdout into a chatbot, no "paste the error and I'll help." The agent already has it.

Why this matters: Electron-based agents (Cursor, Continue) can only spawn("xcodebuild …") and regex-parse stdout. That misses structured error codes, strips file paths, lags the live log, and breaks when Apple changes the output format. In-process access gives the agent the same view the IDE has.
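To see why stdout scraping is lossy, here is roughly what a regex-based parser can recover from one compiler output line. The pattern below is illustrative and matches only one of the many formats real build output uses; an in-process service hands the agent the same fields as typed objects instead of losing everything the regex doesn't anticipate:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Diagnostic:
    file: str
    line: int
    severity: str
    message: str

# One known line shape: "path:line:col: error: message"
STDOUT_PATTERN = re.compile(
    r"^(?P<file>.+?):(?P<line>\d+):\d+: (?P<severity>error|warning): (?P<message>.*)$"
)

def parse_stdout_line(text: str) -> Optional[Diagnostic]:
    m = STDOUT_PATTERN.match(text)
    if not m:
        return None  # notes, fix-its, and multi-line diagnostics are silently dropped
    return Diagnostic(m["file"], int(m["line"]), m["severity"], m["message"])
```

Every line that doesn't match the anticipated shape simply vanishes, which is the failure mode structured in-process access avoids.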

Does the agent actually drive the debugger?

Yes. LLDB is exposed to the agent as a first-class tool with breakpoints, stepping, variable inspection, and REPL evaluation. Same for Android via a JDWP-backed Kotlin debugger.

Useful prompt patterns:

  • "Set a breakpoint where this crashes and run to it."
  • "Step into the next call and tell me what self.user is."
  • "Print the view hierarchy when this renders wrong."
  • "Attach to the Simulator process and break on NSException."

The agent sets the breakpoint, runs the app, pauses at the frame, reads the variables, and reports back in one chat turn. This is not possible in cloud-agent IDEs because the debugger session lives inside a native UI they can't reach.

Does LingCode have the iOS Simulator built in?

Yes. Full simctl integration: boot simulators, install .app bundles, launch by bundle ID, and stream live logs into the Run Console. The run-destination picker is native SwiftUI — pick from running simulators, booted devices, or your physical iPhone connected by USB.

When you hit Run, LingCode does exactly what Xcode does: invoke xcrun simctl to install and launch, attach LLDB with process attach -n <exe> -w (so the debugger is ready before the app wakes up), and activate Simulator.app via NSWorkspace. It's the Xcode flow, not a Node script wrapping simctl.

You need Xcode installed for the SDK and Simulator runtimes, but you don't need to open Xcode.app. Many developers keep Xcode installed purely as a runtime dependency after switching.

Does LingCode have the Android Emulator built in?

Yes. Full integration with the Android SDK command-line tools: list AVDs via avdmanager, boot the emulator, install APKs via adb, stream logcat into the Run Console with tag filtering, and attach a JDWP-backed DAP session for Kotlin debugging with breakpoints, frame inspection, and variable watches.

You do not need Android Studio installed. LingCode works with a standalone Android SDK install. Gradle builds, APK/AAB output, signed release builds, and Google Play deploy all happen inside LingCode.

What actually happens when I say "ship this to TestFlight"?

Six real steps, all as native tool calls:

  1. Detect the active scheme, target, and bundle ID
  2. Archive via xcodebuild archive for Generic iOS Device
  3. Read your team ID and signing identity from the macOS Keychain
  4. Re-sign the archive for distribution if needed
  5. Upload to App Store Connect via xcrun altool
  6. Tail the processing status and report back when Apple accepts the build

The agent surfaces specific diagnostics if something fails: missing provisioning profile, expired signing identity, rejected binary, build errors. You can tell the agent to fix what it can ("regenerate the provisioning profile and retry") and it will.

This works because every step is an in-process native call — not shell scripts glued to stdout parsing. Magic Deploy uses the same architecture for Google Play, web hosts, and other targets.

Local models & offline

Can I run LingCode offline with a local model?

Yes. LingCode ships with one-click Ollama integration. Install Ollama from Settings (we can install it for you if it's not present), pick a local model (Qwen 2.5 Coder, Llama 3, Mistral, CodeLlama, DeepSeek), and every AI feature — chat, agent, edits, memory, skills, rules — runs against localhost:11434 with zero network traffic.

Use cases: flights, trains with no signal, SCIFs, client engagements where code cannot leave the room, air-gapped networks, and privacy-sensitive projects. The IDE doesn't lose any features when offline — editing, Git, LSP, build, debug, test all work because they don't depend on the network in the first place.

Cursor cannot do this, and it's not a roadmap question. Their Tab completion is a proprietary fine-tuned model on their servers. Their codebase index lives in their infrastructure. Their agent orchestration runs server-side. Pull the network cable and most of Cursor's value proposition stops working.

How do I set up Ollama in LingCode?

  1. Open Settings (⌘,) and go to AI & Providers → Local Models
  2. Click Install Ollama if it's not detected (LingCode can install it for you, or you can install manually from ollama.com)
  3. Click Download Model and pick one. qwen2.5-coder:32b is the current strongest open model for code; qwen2.5-coder:7b is faster and good enough for everyday use on smaller Macs
  4. In any AI panel, select the Ollama provider and the downloaded model

You can have Ollama and cloud providers configured simultaneously and switch per-session. Some users use cloud models (Claude Sonnet) for heavy agent work and a local model for quick edits or sensitive files.
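Under the hood, every one of those local requests is a plain HTTP POST to the Ollama server on your Mac. A sketch of the chat payload, using the /api/chat endpoint and fields from Ollama's public API (this builds the request without sending it):

```python
import json

def ollama_chat_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a non-streaming chat call
    to a local Ollama server."""
    url = "http://localhost:11434/api/chat"
    body = json.dumps({
        "model": model,  # e.g. "qwen2.5-coder:7b"
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one JSON response instead of a token stream
    }).encode()
    return url, body
```

Because the destination is localhost, you can confirm with any network monitor that such a request never leaves the machine.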

Which local models work best for coding?

Rough guidance as of 2026, for Swift / Kotlin / general coding:

  • Qwen 2.5 Coder 32B — currently the strongest open code model. Close to GPT-4-class on some tasks. Needs ~20 GB RAM quantized. Recommended if you have 32+ GB unified memory on an M-series Mac.
  • Llama 3.1 70B — strong generalist, needs 40+ GB RAM. Good for reasoning-heavy tasks.
  • CodeLlama 34B — older but solid code model, 20 GB.
  • DeepSeek Coder 33B — strong for code, 20 GB.
  • Qwen 2.5 Coder 7B — fast, fits comfortably on 16 GB Macs, quality is noticeably lower than 32B but acceptable for everyday edits.

For agent workflows (tool use, structured output, multi-step reasoning), larger models are much better — 32B+ recommended. Small models can chat but struggle with complex tool orchestration.

Do all features work with local models?

Chat, agent mode, file edits, structured memory, skills, slash commands, rules (.mdc, CLAUDE.md, .cursorrules), and tool calling all work with Ollama. Agent quality depends on the model's ability to follow tool-use schemas — newer Qwen Coder and Llama 3.1 models handle this well; older or smaller models may struggle.
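Tool calling works by sending the model a JSON schema for each tool and parsing the structured call it emits back. A sketch of what one tool definition looks like, in the OpenAI-style function-schema format that Ollama's tool-calling API also accepts (the tool name and fields here are illustrative):

```python
# Illustrative tool definition; a real agent would register many of these.
read_file_tool = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the workspace before editing it",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Workspace-relative file path",
                },
            },
            "required": ["path"],
        },
    },
}
```

A model's "agent quality" is largely its reliability at emitting calls that validate against schemas like this one, which is why larger, newer models do markedly better at multi-step tool work.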

Tab completion is a mixed story: it depends on the model's speed at low-latency completions. 7B-class models are usable; larger ones are too slow on most Macs for inline tab completion. For best Tab completion, a cloud provider is still faster.

LingCode vs other tools

How is LingCode different from Cursor?

Four structural differences, covered in detail on Why not Cursor?:

  1. Trust. LingCode snapshots project-critical files before AI edits, so nothing is one-way. Cursor has no pre-edit snapshot and no semantic multi-file undo.
  2. IDE-aware. LingCode's agent calls native services (BuildLogService, LLDB, SimulatorController) directly in-process. Cursor's agent shells out and scrapes stdout.
  3. No proxy. LingCode calls AI providers directly with your keys. Cursor routes all AI traffic through their servers for $20/month.
  4. Local models. LingCode runs fully offline via Ollama. Cursor's core features (Tab, agent, indexing) require their cloud.

Plus: LingCode is a native SwiftUI Mac app with iOS Simulator, Android Emulator, LLDB, and Keychain-backed signing. Cursor is an Electron VS Code fork without a real iOS or Android story.

Do I still need Xcode installed?

You need the Xcode toolchain (SDKs, Simulator runtimes, signing infrastructure, command-line tools) but you do not need to open Xcode.app for day-to-day work. LingCode wraps xcodebuild, simctl, xcrun, altool, and LLDB, and ships native UIs for the run-destination picker, signing panel, scheme editor, and debugger.

Many developers keep Xcode installed purely as a runtime dependency after switching — Xcode.app itself rarely gets opened.

Do I still need Android Studio installed?

No. LingCode talks to the Android SDK command-line tools (gradle, adb, emulator, avdmanager) directly — you only need the standalone Android SDK, not the Android Studio IDE. Gradle builds, AVD management, APK/AAB output, logcat streaming, and Kotlin debugging all happen inside LingCode via the same native destination picker used for iOS.

Install the SDK via brew install android-commandlinetools or from Google's downloads page, point LingCode at it in Settings, done.

How does LingCode compare to GitHub Copilot?

Different category. Copilot is an extension that lives inside VS Code, JetBrains, or Xcode. LingCode is the IDE itself.

Copilot's core value is tab completion and chat inside your existing editor. LingCode ships tab completion too — Copilot-style ghost text with FIM (Fill-in-Middle) support against your cloud or local model — so you're not giving that up by switching. The broader value is agent mode, IDE-aware tools, work protection, three AI providers (Claude Code, Codex, Gemini CLI) in tabs, and an all-in-one iOS + macOS + Android toolchain.
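FIM completion works by wrapping the text before and after your cursor in model-specific sentinel tokens, so the model generates the missing middle. A sketch using the sentinel names documented for Qwen 2.5 Coder (sentinels differ per model family, so check your model's documentation):

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-middle prompt: the model generates the text
    that belongs between prefix and suffix, i.e. at the cursor."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"
```

The low-latency loop of building this prompt, streaming a short completion, and rendering it as ghost text is what makes model speed the limiting factor for Tab completion quality.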

If you love Copilot inside VS Code, LingCode is a replacement for both Copilot and VS Code — plus Xcode, Android Studio, and Cursor as three more separate apps you can stop running.

Can I migrate my Cursor settings and rules?

Yes. LingCode reads .cursorrules, .cursor/rules/, and .cursor/skills/ directly with no migration step. Drop your Cursor project into LingCode and your existing rules and skills just work. WORKSPACE.md and CLAUDE.md files are also read as rules, so anything you've written for Claude Code projects transfers too.
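Rule discovery amounts to scanning a fixed set of well-known paths in the project root. A sketch using the file names listed above (the lookup order and helper name are illustrative):

```python
from pathlib import Path

RULE_FILES = [".cursorrules", "CLAUDE.md", "WORKSPACE.md"]
RULE_DIRS = [".cursor/rules", ".cursor/skills"]

def discover_rules(project: Path) -> list[Path]:
    """Collect every rules file a Cursor or Claude Code project might carry."""
    found = [project / name for name in RULE_FILES if (project / name).is_file()]
    for d in RULE_DIRS:
        if (project / d).is_dir():
            found.extend(sorted(p for p in (project / d).rglob("*") if p.is_file()))
    return found
```

Because the lookup is by convention rather than by a migration tool, the same project can keep working in Cursor and LingCode side by side.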

Troubleshooting

Why can't I run terminal commands?

LingCode's terminal uses a login shell by default (/bin/zsh -l), which inherits your profile's PATH. If a command works in Terminal.app but not in LingCode, it almost always means the command's directory is missing from PATH in the shell file LingCode actually sourced.

Fix:

  1. Run echo $PATH in LingCode's terminal and compare to Terminal.app
  2. If Homebrew paths (/opt/homebrew/bin on Apple Silicon, /usr/local/bin on Intel) are missing, add eval "$(/opt/homebrew/bin/brew shellenv)" to ~/.zprofile
  3. Restart the terminal tab

You can override the shell in Settings → Terminal (use /bin/bash -l or a custom path).

How do I control Claude Code from my phone?

See the Remote Control guide. The setup is SSH + Tailscale + tmux:

  1. Tailscale gives your Mac a stable private IP reachable from anywhere, over any network, with no port forwarding
  2. SSH from the LingCode iPad/iPhone app into your Mac's Tailscale IP
  3. tmux on the Mac runs a persistent claude session you can attach and detach

Result: full chat control from anywhere — bed, coffee shop, airport — with no cloud relay and no exposed ports.

LingCode won't launch, or crashes on open

First: check Console.app for crash reports (filter by "LingCode"). Common causes:

  • macOS too old. Minimum is macOS 15 (Sequoia). Earlier releases will refuse to launch.
  • Quarantine bit from download. If you see "cannot be opened because the developer cannot be verified" despite the app being notarized, run xattr -d com.apple.quarantine /Applications/LingCode.app
  • Corrupted preferences. Rarely, the preferences plist gets corrupted. Move ~/Library/Preferences/dev.lingcode.LingCode.plist aside and relaunch.
  • Keychain access denial. If you denied Keychain access on first launch, reset it in System Settings → Privacy & Security → Keychain.

If none of these apply, send the Console.app crash report to hhuangweijia@gmail.com.

The AI agent seems stuck or stopped responding

Check the bottom panel for the agent status row — it shows the last tool the agent called and how long it's been running. Most "stuck" cases are actually the agent waiting for a long build, a network-slow model response, or an unresponsive Ollama server.

If the agent is genuinely hung, click Cancel in the chat panel to stop the current step. The conversation history is preserved; you can reword the prompt and continue. Cancelling does not undo edits the agent has already accepted — use semantic undo if you want to rewind them too.

Contact

Direct contact