Prompts for features and tasks: Cursor, Claude Code and Copilot
TL;DR - Prompt with goal, scope and context, and pick the right mode (Ask/Edit/Agent/Plan). Local snippet: Copilot; multi-file feature: Agent. At the end, run tests and review the diff, every time.
Asking the AI “make a login endpoint” usually produces generic code or something that doesn’t match the project. When I give goal, scope and a bit of context, the answer fits the first time. In this post: what to put in the prompt, what context/tokens/models are, Cursor modes (Ask, Edit, Agent, Plan) and when to use each tool.
What to put in the prompt
Three things make a difference: goal (what should happen), scope (file, module or repo) and one or two constraints (“no new deps”, “keep tests”). The vaguer the request, the more the AI invents. Something like “implement X in Y, following the pattern of Z” cuts rework.
Example I avoid: “make a login endpoint”. Example I use: “add POST /auth/login in auth-controller, validate body with existing DTO, return 401 if credentials invalid, no new lib”.
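To make the contrast concrete, here is a framework-free sketch of what the precise prompt asks for. The DTO validator and credential check are hypothetical stand-ins for existing project code, not any real library:

```python
def validate_login_dto(body: dict) -> bool:
    """Stand-in for the project's existing DTO validation."""
    return isinstance(body.get("email"), str) and isinstance(body.get("password"), str)

def check_credentials(email: str, password: str) -> bool:
    """Stand-in for the real credential check (DB lookup + hash compare)."""
    return (email, password) == ("user@example.com", "secret")  # demo values only

def login(body: dict) -> tuple[int, dict]:
    """POST /auth/login: 400 on invalid body, 401 on bad credentials, 200 on success."""
    if not validate_login_dto(body):
        return 400, {"error": "invalid body"}
    if not check_credentials(body["email"], body["password"]):
        return 401, {"error": "invalid credentials"}
    return 200, {"token": "..."}  # token issuance elided
```

The precise prompt pins down exactly the behavior sketched here (existing DTO, 401 on bad credentials, no new lib); the vague one leaves all of it to the model's imagination.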
What context is
Context is everything the AI “sees” to answer: open files, snippets pasted in the chat, folders and files referenced with @ (e.g. @src/auth). Without enough context it guesses. With the right file in @ or in the conversation scope, the answer fits your code.
I usually paste the relevant snippet or reference the module with @ before asking for the change.
What a token is
A token is the unit the model uses to process text: words, subwords or symbols. The tool’s context limit (e.g. 200k tokens) is how much text plus response fit in one conversation. A huge prompt or many @ files consume tokens and can leave less room for the answer or increase cost. So it pays to be precise in the prompt and reference only what’s needed.
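As a rough rule of thumb, English text averages about four characters per token. A quick sketch for a ballpark check of whether a prompt leaves room for the answer (an approximation only; real BPE tokenizers vary, and the 4-chars-per-token ratio is an assumption):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.
    Real tokenizers (BPE) vary; use this only for a ballpark figure."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, context_limit: int, reserve_for_answer: int = 4000) -> bool:
    """Check whether a prompt likely leaves room for the model's answer."""
    return estimate_tokens(prompt) + reserve_for_answer <= context_limit

prompt = "add POST /auth/login in auth-controller, validate body with existing DTO"
print(estimate_tokens(prompt))                       # ballpark count, not exact
print(fits_context(prompt, context_limit=200_000))
```

The same arithmetic explains why dumping many `@` files into the conversation can crowd out the response: every referenced file is tokens spent before the model writes a single line.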
What models are
Models are the “versions” of the AI (Claude, GPT, etc.) trained to understand and generate code/text. Cursor, Claude Code and Copilot use different models under the hood; each has its own context size and response style. Choosing Ask/Edit/Agent/Plan (in Cursor) is choosing how the model acts (read-only, edit, explore, plan), not which model runs.
Ask, Edit, Agent and Plan (Cursor)
These are the interaction types in Cursor:
- Ask: read-only. The AI answers, explains, searches the code and doesn’t change anything. I use it when I want to understand before changing.
- Edit (Composer without Agent): the AI proposes changes (diffs) in one or more files; you apply or reject. It doesn’t run the terminal or explore the repo on its own.
- Agent: explores the codebase, edits multiple files and can run commands. I use it when the task spans many files or needs tests/scripts.
- Plan: the AI researches the project and builds an implementation plan for you to review (and adjust) before running it. Good for large or ambiguous tasks.
Practical summary: Ask for questions; Edit for bounded change; Agent for multi-file feature; Plan when scope is large or you want fine control before coding.
```mermaid
flowchart LR
  subgraph Cursor modes
    Ask[Ask: read-only]
    Edit[Edit: diffs]
    Agent[Agent: edit + terminal]
    Plan[Plan: plan first]
  end
```
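The practical summary above can be written as a tiny decision helper. The function and its inputs are illustrative, not any real Cursor API:

```python
def choose_mode(is_question: bool, files_touched: int, scope_is_clear: bool) -> str:
    """Map a task's shape to a Cursor mode, following the rules above."""
    if is_question:
        return "Ask"      # read-only: explain, search, change nothing
    if not scope_is_clear:
        return "Plan"     # large or ambiguous: plan first, review, then run
    if files_touched <= 1:
        return "Edit"     # bounded change: review the diffs, apply or reject
    return "Agent"        # multi-file feature: explore, edit, run commands
```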
Cursor vs Claude Code vs Copilot
Cursor: chat + Composer (Edit/Agent/Plan) and Agent with terminal access. Claude Code: an agent that navigates the project and runs commands; good for long reasoning. Copilot: inline completions (a line, a block or a snippet from a comment); not an agent.
For a change in one file or “complete this here”, Copilot or chat with Edit is enough. For “endpoint X in the backend + button in the front that calls it”, Cursor Agent or Claude Code.
Plan first, Agent after?
Only when the task is large or ambiguous. I ask for the plan (or the steps in chat), review it and then ask for implementation. For a small, clear task (“rename method and update 3 call sites”), Agent directly is faster.
```mermaid
flowchart TB
  Task[Task or feature] --> Scope{Clear, small scope?}
  Scope -- Yes --> Agent[Agent / Composer directly]
  Scope -- No --> Plan[Ask for plan first]
  Plan --> Review[Review and adjust]
  Review --> Agent
  Agent --> Test[Run tests / review diff]
```
After the code: run tests and review the diff every time. No exceptions.