On this page
- Structuring prompts for Swift/iOS
- When to use AI vs. write it yourself
- Code review habits with AI-generated code
- Project organization for AI-assisted work
- Testing AI-generated code
- Git workflow with AI coding
- Managing context in prompts
- Security — what not to share with AI
- Leveraging work protection, not fighting it
- Asking IDE-aware questions
1. Structuring prompts for Swift/iOS
Claude performs significantly better when you front-load three things: the Swift version, the target framework, and the constraint that matters most. Vague prompts produce generic code that won't compile on the first try.
The baseline template
Start every non-trivial request with this shape:
Swift 5.10, iOS 17+, SwiftUI.
[State the goal in one sentence.]
[Constraints: no third-party dependencies / must use Swift concurrency / etc.]
[Existing code to work from — paste the relevant type or function.]
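A filled-in example of the template (the file and feature names are hypothetical):

```text
Swift 5.10, iOS 17+, SwiftUI.
Add pull-to-refresh to the feed list screen.
Constraints: no third-party dependencies; use the .refreshable modifier.
@FeedView.swift
```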
Framework context matters
Claude knows UIKit and SwiftUI well, but it defaults to UIKit patterns for older-sounding prompts. Say SwiftUI explicitly if that's what you're building. Same applies for Combine vs. async/await — Claude will pick one; tell it which.
Use @file references liberally
In LingCode's chat, type @YourModel.swift to attach the file to the prompt. Attaching the concrete types Claude will work with cuts hallucinated API calls by roughly half compared to describing the types in prose.
Tip: For SwiftUI layout bugs, paste the full View body and describe what you're seeing vs. what you expect. "This VStack isn't centering" gives Claude almost nothing. "This VStack has alignment: .center but the Text is left-aligned on iPad — here's the code" gets a fix on the first response.
2. When to use AI vs. write it yourself
AI coding assistance is not uniformly faster. There are task types where it reliably saves 60–80% of the work, and task types where it adds net overhead.
High leverage — let Claude lead
- Boilerplate: Codable conformances, Hashable, Equatable, Comparable
- Repetitive CRUD: Core Data stack setup, URL session wrappers, UserDefaults keys
- Tests: XCTestCase subclasses for a type you've already written
- Localization: generating Localizable.strings entries from existing UI text
- Documentation: /// doc comments on existing functions
- Migrations: converting Objective-C patterns to modern Swift
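A typical delegation from the boilerplate category — the `UserProfile` type here is a hypothetical example, but the shape is representative: mechanical conformances and coding keys that Claude generates reliably from a one-line prompt.

```swift
import Foundation

// Hypothetical model — the kind of mechanical conformance work worth delegating.
struct UserProfile: Codable, Hashable {
    let id: UUID
    let displayName: String
    let joinedAt: Date

    // Custom coding keys are the tedious part Claude gets right on the first try.
    enum CodingKeys: String, CodingKey {
        case id
        case displayName = "display_name"
        case joinedAt = "joined_at"
    }
}

// Round-trip check: encoding then decoding yields an equal value.
let profile = UserProfile(
    id: UUID(),
    displayName: "Ada",
    joinedAt: Date(timeIntervalSince1970: 0) // fixed date for an exact round trip
)
let data = try JSONEncoder().encode(profile)
let decoded = try JSONDecoder().decode(UserProfile.self, from: data)
assert(decoded == profile)
```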
Low leverage — write it yourself
- Custom gesture recognizers with non-standard state machines
- Performance-sensitive render loops (Metal, Core Animation)
- App architecture decisions — Claude doesn't know your product's growth trajectory
- Any code that touches your authentication session or keychain directly
- Anything where you cannot read and explain every line Claude produces
Rule of thumb: If the task takes 10 minutes to do manually and you've done it before, do it manually — the review overhead of AI output may exceed 10 minutes. If the task is 2+ hours of work you've never done in Swift before, AI should lead and you should review.
3. Code review habits with AI-generated code
AI-generated code must be reviewed at least as carefully as code from a junior developer — more carefully in domains you're unfamiliar with, because you're less likely to catch a plausible-sounding but wrong API call.
Review the imports first
Claude sometimes introduces frameworks you didn't ask for. Check every import statement. If you see an unfamiliar framework, look it up before trusting the usage.
Run the compiler, not just your eyes
AI code can look correct and fail to compile due to deprecated APIs or wrong generic constraints. Build before reviewing logic. Fix compilation errors first — they often reveal a misunderstanding in the generation that invalidates the rest of the code.
Spot-check against Apple docs
For any API call that feels slightly unfamiliar, open the Apple Developer documentation directly. Claude's training data has a cutoff; it may confidently use APIs that were deprecated or renamed in the last major SDK release.
Use /review in LingCode
Select the generated code, then type /review in chat. This asks Claude to review its own output with fresh context — it catches different errors than your first-pass read, because it's pattern-matching against best practices rather than following the logic flow.
Non-negotiable: Never ship AI-generated code that you cannot explain line-by-line to a teammate. If a section is opaque to you, ask Claude to explain it in comments before you merge.
4. Project organization for AI-assisted work
AI assistance improves significantly when your project is structured so that context is local — Claude can understand a file without needing to read 12 others to know what it does.
Prefer small, focused files
A UserProfileViewModel.swift that is 80 lines is more AI-friendly than a 600-line monolith. When you attach the file to a prompt, Claude gets full context. When you describe a 600-line file in prose, you lose precision.
Co-locate tests with their targets
Put UserProfileViewModelTests.swift immediately adjacent to the implementation in your Xcode group. When you ask Claude to write a test, attach both files — it will produce tests that accurately match the actual interface.
Keep a CLAUDE.md at the project root
This file is the single most effective way to improve every prompt session. Include:
- Swift and iOS version targets
- Third-party dependencies and why they're there
- Architecture pattern (MVVM, TCA, MV, etc.) and the one-line reasoning behind it
- File naming conventions
- Things Claude should never do (e.g., "never use UserDefaults for auth tokens")
Attach @CLAUDE.md to the start of any session that touches core architecture. It functions as a persistent system prompt without you having to re-explain the project each time.
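A minimal sketch of such a file — every value below is an invented example; fill in your project's real answers:

```markdown
# CLAUDE.md
- Targets: Swift 5.10, iOS 17+, SwiftUI only (no UIKit unless asked)
- Dependencies: none — pure Apple frameworks by policy
- Architecture: MVVM — one ViewModel per screen, views stay dumb
- Naming: FeatureNameView / FeatureNameViewModel / FeatureNameTests
- Never: store auth tokens in UserDefaults; add dependencies without asking
```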
5. Testing AI-generated code
AI-generated tests have a specific failure mode: they test the implementation rather than the contract. If Claude generates a test by reading your code, it will often write a test that would pass even if the behavior were wrong.
Write the test contract first
Before asking Claude to generate tests, write one or two func test_ method signatures with comments describing what invariant should hold — don't implement them. Then attach this stub and ask Claude to fill in the implementations. This forces Claude to match a behavior spec rather than mirror the code.
Ask for tests of the failure path
Claude defaults to happy-path tests. Explicitly request:
Write XCTestCase tests for UserProfileViewModel.fetch().
Include: one happy-path test, one test for network error,
one test for 401 (unauthorized), one test for malformed JSON.
Verify async test patterns
AI-generated async tests frequently use XCTestExpectation when they should use async/await, or vice versa. Check that the concurrency pattern matches what your codebase uses, and that await fulfillment(of:) timeouts are reasonable.
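A sketch of the two patterns side by side, with a stub `FeedLoader` (hypothetical) so the example is self-contained. If your API is async/await, the first style is what you want; the second belongs only with callback-based APIs:

```swift
import XCTest

// Hypothetical async API under test.
struct FeedLoader {
    func load() async throws -> [String] { ["item"] }
}

final class FeedLoaderAsyncTests: XCTestCase {
    // Modern pattern: mark the test async and await directly — no expectations.
    func test_load_returnsItems() async throws {
        let items = try await FeedLoader().load()
        XCTAssertEqual(items, ["item"])
    }

    // Legacy pattern, for callback-based APIs only. If AI output uses this
    // against an async/await API, ask for a rewrite in the style above.
    func test_load_legacyCallbackStyle() {
        let exp = expectation(description: "load completes")
        Task {
            _ = try await FeedLoader().load()
            exp.fulfill()
        }
        wait(for: [exp], timeout: 2) // keep timeouts short and realistic
    }
}
```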
Don't skip the test run. AI-generated test code is more likely to be syntactically valid but semantically vacuous — passing trivially because the assertion is always true. Run the suite and temporarily break the implementation to confirm the test actually catches the failure.
6. Git workflow with AI coding
AI-assisted sessions produce changes rapidly. Without deliberate Git discipline, you end up with a large diff that's impossible to review, bisect, or revert cleanly.
Commit before every AI session
Before opening LingCode's chat for a significant task, make sure your working tree is clean. If the session goes sideways, you can revert to a known-good state without reconstructing what you had manually.
Commit after every logical unit
AI can generate a lot of code fast. Resist the urge to batch it all into one commit. A commit that says "add Codable conformance to UserProfile" is reviewable. A commit that says "AI session: profile, networking, tests, and refactor" is not.
Use LingCode's AI commit messages as a starting point
LingCode can generate a commit message from the diff. Treat this as a first draft — read it, correct the verb, and add the "why" that the diff alone doesn't explain. A message that says "extract authentication logic into AuthService to isolate keychain access from ViewModels" is more useful than "refactor auth."
Tag sessions that produced large changes
If an AI session touched 10+ files, create a lightweight tag before merging: git tag pre-ai-refactor-auth. This gives you a stable restore point without cluttering your branch history.
7. Managing context in prompts
Claude's context window is large but not infinite, and more context is not always better. Irrelevant context crowds out relevant context, and the model's attention distributes across everything you send.
Attach the right files, not all the files
For a bug in FeedViewController, attach FeedViewController.swift, the FeedViewModel it uses, and the FeedItem model. Do not attach the entire project. File attachment via @filename in LingCode is precise; use it precisely.
State the constraint before the context
Lead with what you need, then provide the code. "Here is 200 lines of code, what's wrong?" forces Claude to scan everything. "There is a race condition in the async image loading — here is the code" focuses its attention on the right problem class.
Reset context when switching tasks
Open a new session (Cmd+N in LingCode) when you switch from one area of the codebase to another. Long sessions accumulate context that can quietly bias Claude's responses — it may avoid patterns it "remembers" you rejected earlier in the session, even when those patterns are appropriate for the new task.
Use Claude Sessions for parallel workstreams
LingCode supports multiple Claude sessions running simultaneously. Use this to keep feature work and refactoring work in separate contexts, so each session stays coherent. See the Claude Sessions guide for setup.
8. Security — what not to share with AI
The content of your prompts and attached files is transmitted to Anthropic's API. Treat it with the same care as any outbound network request containing business-sensitive data.
Never include in any prompt: API keys, private keys (.p8, PEM), service account JSON, database connection strings with credentials, JWT secrets, hardcoded passwords, or any value that would grant access to a system if extracted from a log.
Use placeholder values in code examples
If you're showing Claude code that references secrets, replace real values before attaching the file:
// Before attaching to a prompt:
let apiKey = "sk-ant-REDACTED" // replace real value
let dbURL = "postgres://REDACTED" // replace real value
Claude does not need the real values to reason about the code structure. It only needs the shape of how they're used.
Keep secret files in .lingcodeignore
Add a .lingcodeignore file (same syntax as .gitignore) to your project root and include:
.env
.env.local
*.p8
*service-account*.json
Secrets.swift
APIKeys.swift
This prevents LingCode from indexing these files, which also prevents them from appearing in autocomplete suggestions when you type @ in chat.
Treat AI-generated crypto and auth code with extra scrutiny
Claude can produce plausible-looking cryptography code that has subtle implementation errors — incorrect IV handling, wrong padding mode, or a timing side-channel. Never use AI-generated cryptographic implementations in production without independent expert review. Use Apple's CryptoKit APIs instead, which Claude understands well and which are harder to misuse than raw CommonCrypto.
The simple rule: If you wouldn't paste it into a public Slack channel, don't paste it into a prompt. Prompt contents leave your machine and are governed by Anthropic's data-usage policies — handle your company's secrets accordingly.
9. Leveraging work protection, not fighting it
LingCode snapshots project-critical files before the agent touches them, renders every edit as a reversible diff card, and keeps a separate multi-file undo stack for AI operations. These aren't emergency features — they're part of the workflow. The developers who move fastest with AI are the ones who trust the safety net and let the agent take bigger swings.
Check the recovery panel after any multi-file edit
Before you commit, open the recovery panel and scan the snapshots LingCode saved. If the agent touched project.pbxproj, Info.plist, build.gradle, or an entitlements file, review the pre-edit copy alongside the current state — those files corrupt silently and surface as "build works on my machine" bugs days later.
Fastest way to open it: ⌃⌘R. Also available as View → Work Protection Recovery… in the menu bar, or as a toolbar shield icon (drag it in via Customize Toolbar… if you want it one click away).
Use worktree isolation for risky experiments
When you're asking the agent to do something you're not sure about — a framework migration, a dependency upgrade, a dozen-file refactor — send it into a worktree first. From chat: "Create a worktree for this and try the migration there." LingCode spins up an isolated branch, the agent works against that tree, and your main checkout stays untouched. When it's done: keep the branch, merge it, or discard it with one click.
Review per-hunk, not per-file
The diff card on each file lets you accept or revert every hunk independently. Don't default to "Accept all" — the value of the per-hunk view is that the agent might make four good changes and one wrong one, and you can keep the four. Scanning per-hunk is faster than it sounds; most hunks are obvious.
Trust semantic undo over git
If you realize mid-session that the last cross-file refactor went wrong, don't reach for git reset — the semantic undo stack rewinds just the AI operation, leaving your own edits since then intact. Cmd-Z in the chat panel walks back one agent step at a time, not one editor keystroke.
Why it matters: The developers who produce the most with AI are the ones who let it try risky things. Work protection is what makes that safe — treat it as permission to be bold, not as paranoia infrastructure.
10. Asking IDE-aware questions
LingCode's agent reads your build log, simulator logs, debugger session, active file, git state, and run console as live, structured data — not as scraped stdout. The biggest workflow shift from Cursor or Copilot is learning to ask questions that use this. If your prompts could have been typed into a cloud chatbot with a copy-paste of your code, you're leaving most of the value on the table.
Ask about the current IDE state directly
Instead of pasting error text, just ask. The agent already sees it.
- "Why did the last build fail?" — reads the live BuildLogService output, not your paste.
- "The app froze in the simulator just now. What's in the console?" — pulls the last few minutes of RunConsoleService logs and correlates.
- "Which test just failed and why?" — reads the test result structure, not a diff of terminal output.
- "What's on my clipboard that I need to paste into Info.plist?" — sees the clipboard and the file.
Let the agent drive the debugger
LLDB is a tool the agent can call, not just a UI you use. Useful patterns:
- "Set a breakpoint where this crashes and run to it."
- "Step into the next call and tell me what self.user is."
- "Print the view hierarchy when this renders wrong."
The agent sets the breakpoint, runs the app, pauses at the frame, reads the variables, and reports back — in one message. This doesn't work on any IDE that treats the debugger as a side panel the AI can't reach.
Use build + ship as a single agent action
The canonical demo is real: "Ship this to TestFlight." The agent picks the scheme, runs the archive, signs with your team from the Keychain, uploads to App Store Connect, and tails the processing status — in one chat turn. Variants:
- "Archive for App Store and export the IPA to ~/Desktop."
- "Build a debug APK and install it on the connected Pixel."
- "Run this on the iPad Air simulator and open Safari to localhost when it's up."
When you're stuck, ask about the whole picture
Cloud-side agents can answer questions about code. IDE-aware agents can answer questions about the state of your development session. If you've been debugging for an hour and don't know what to try next, a useful prompt is literally: "Here's what I've tried — look at the build log, the last five commits, the current diff, and the last simulator log, and tell me what I'm missing." The agent reads all of that directly.
The mental shift: With Cursor or Copilot, your job is to give the agent enough context to be useful. With LingCode, your job is to ask — the agent already has the context; it needs the question.
More guides:
Getting Started · Claude Sessions · Skills & slash commands · Why not Cursor?
Back to Docs →