Agentic Coding part 1: Cursor’s best practices • Enlite

Over the past few months, our colleague Niels has been working intensively with AI-supported development tools. What started as “just trying it out” has grown into a fundamentally different way of working. Not because AI magically writes code, but because it has changed his workflow.

This article is the starting point of a series on agentic coding, a term you hear more and more often but rarely with concrete guidance attached. In this series, Niels shares his experiences: what works, what doesn't, and how you can work more productively with AI without compromising on quality.

In this first article, Niels kicks off with a practical introduction to Cursor and the features that make the difference for him.

1) What is “agentic coding”?

In traditional AI-assisted development, you ask for a piece of code and accept (or reject) the result. With agentic coding, it goes further: the AI itself reads your codebase, asks questions, makes a plan, performs changes across multiple files, and helps you verify that it works.

The cycle looks like this: collect context → make plan → implement changes → verify → iterate.

Your role changes: you give direction, set preconditions, and monitor quality. The AI does most of the execution.

2) What is Cursor?

Cursor is an AI-native code editor built on VS Code. It combines everything you know from VS Code (extensions, keybindings, settings) with integrated AI features, such as:

  • Agent chat: ask the AI to write code, give explanations, or solve problems
  • Inline edits: select code and give an instruction; the AI applies it immediately
  • Tab completion: intelligent autocomplete that learns from your edits
  • Agent modes: Plan, Agent, Ask, Debug, different workflows for different tasks
  • Context: refer to files, folders, the entire codebase, or even a browser
  • Browser automation: agents autonomously control an in-editor browser for debugging and verification tasks

Why Cursor?

Cursor goes beyond autocomplete. It's built for agentic workflows: the AI can read code by itself, create plans, modify multiple files, execute commands, and resolve errors. You remain in control, but the heavy lifting is done for you.

Reference: cursor.com

3) Edit styles: how to interact with Cursor

Cursor offers several ways to interact with the AI. These are not “modes,” but interaction methods: how to select code, add context, and trigger edits.

Reference: cursor.com/docs/inline-edit/overview

3.1 Inline edit

How: select code, press Cmd+K (or Ctrl+K), and give a short instruction. Cursor adjusts the selection immediately.

Use for: rename, minor refactors, error fixes, better naming, add comments, minor UI tweaks.

Tip: keep instruction focused, one intent per edit.

Example:

(Screenshots: inline edit input and result)
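As a concrete illustration (the function and the instruction are invented for this article), an inline edit instruction like “rewrite this with a guard clause” might turn the first version into the second:

```typescript
// Hypothetical before/after for the inline edit instruction
// "rewrite this with a guard clause".

type User = { isMember: boolean };

// Before: nested if with a mutable local.
function memberDiscountBefore(user: User): number {
  let discount = 0;
  if (user.isMember) {
    discount = 10;
  }
  return discount;
}

// After: what Cursor might produce, an early return (guard clause).
function memberDiscountAfter(user: User): number {
  if (!user.isMember) return 0;
  return 10;
}
```

Both versions behave identically; the edit only changes the shape of the code, which is exactly the narrow scope inline edit is good at.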

3.2 Select code + Chat

How: select code in your editor and open Chat (Cmd+L). The selection is automatically included as context.

Use for: asking for explanations of specific code, getting refactor suggestions, or using the selection as the starting point for a larger task.

Tip: combine with @file or @folder to add additional context.

Example:

(Screenshots: adding a selection to chat and the result)

3.3 Chat (without selection)

How: Open Chat and describe what you want. Add context with @file or @folder.

Use for: new features, multi-file changes, codebase questions, scaffolding, trade-off discussions.

Example (modify existing code):

@folder src/lib/components
Add a loading skeleton to all card components. Make sure it is consistent with the existing styling.

Example (scaffolding new code):

Create a new API route for user preferences with GET and PUT endpoints, Zod validation and TypeScript types.

3.4 Tab completion

Tab is Cursor's specialized autocomplete model. It learns from your edits: the more you accept (Tab) or reject (Escape), the better the suggestions become.

Reference: cursor.com/docs/tab/overview

What Tab can do:

  • Multi-line edits: modify not only the current line, but several lines at once
  • Auto-imports: automatically add import statements (TypeScript, Python)
  • Jump in file: after an edit, Tab jumps to the next logical place to edit
  • Jump across files: suggestions for related edits in other files
  • Context-aware: based on recent changes, linter errors, and accepted edits

Keyboard shortcuts:

  • Tab: accept suggestion
  • Escape: reject suggestion
  • Cmd+→: accept word-for-word (partial accept)

Tips:

  • Let Tab “learn” by consistently accepting/rejecting
  • Use Tab in peek views to adjust signatures and fix call sites

4) Cursor agent modes: Ask / Plan / Agent / Debug

Besides how you edit (inline/chat/multi-file), there is also the question of which mode you run the Cursor agent in.

Reference: cursor.com/docs/agent/modes

4.1 Ask (understand & decide)

Use for: understanding codebase, explaining patterns, trade-offs, “which option is better?”.
Output: read-only exploration and explanation, no automatic code changes.

Sample prompt:

(Screenshot: an Ask mode prompt)

4.2 Plan (scope, approach & guardrails)

Use for: more complex changes where you want a reviewable plan first.
Output: (1) clarifying questions, (2) codebase research, (3) a detailed plan that you can review and modify before code is written.

How Plan mode works (practical):

  • Step 1, clarify: the agent asks pointed questions to bring the requirements into focus.
  • Step 2, research: the agent explores relevant code (read/search) to gather context.
  • Step 3, plan: you get a concrete implementation plan (TODOs + files to touch + risks + checks).
  • Step 4, review/edit: you adjust the plan (in a markdown plan file).
  • Step 5, build: only when you click Build does Cursor implement the plan.

Cursor displays the plan as a (temporary) plan file; if you want to keep one, save it in .cursor/plans/.
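For reference, a plan file is plain markdown. The exact structure Cursor generates can vary, but a plan saved in .cursor/plans/ might look roughly like this (the feature and file names are invented for illustration):

```markdown
# Plan: add dark mode toggle

## Scope
- Toggle on the settings page; preference persists in localStorage.

## Files to touch
- src/routes/settings/+page.svelte
- src/lib/stores/theme.ts (new)

## Risks
- Flash of the wrong theme on first paint.

## Checks
- [ ] Toggle persists across reloads
- [ ] TypeScript strict remains green
```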

Sample prompt:

(Screenshot: a Plan mode prompt)

Build phase (after your agreement):

  • You don't have to prompt anything extra: after your review, click “Build” and Cursor executes the plan (multi-file changes, running commands, fixing errors).
  • Stay in the loop with checkpoints: briefly verify (build/test) after major steps and only then move on.

4.3 Agent (execute & implement)

Use for: direct implementation of features, refactors, end-to-end slices, when you don't need a comprehensive plan in advance.
Output: concrete code changes (multi-file), run commands, fix errors.

As in Plan mode, you simply describe what you want, plus context and edge conditions; the agent then gets to work by itself.

Sample prompt:

@folder src/lib/components @file src/routes/settings/+page.svelte

Add a dark mode toggle to the settings page. Use the existing CSS variables, save the preference in localStorage.
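A hypothetical sketch of the core logic such a prompt might produce. The storage interface is injected so the same code runs in the browser (pass localStorage) and in tests (pass a stub); the Svelte wiring itself is omitted:

```typescript
// Hypothetical theme-toggle logic; names are illustrative.
// The store is injected so the code is testable outside the browser.

interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

type Theme = "light" | "dark";
const THEME_KEY = "theme-preference"; // assumed storage key

// Read the saved theme, defaulting to light.
function loadTheme(store: KeyValueStore): Theme {
  return store.getItem(THEME_KEY) === "dark" ? "dark" : "light";
}

// Flip the theme, persist it, and return the new value.
function toggleTheme(store: KeyValueStore): Theme {
  const next: Theme = loadTheme(store) === "dark" ? "light" : "dark";
  store.setItem(THEME_KEY, next);
  return next;
}
```

In the component you would call toggleTheme(localStorage) from the toggle's click handler and set something like document.documentElement.dataset.theme to the result, so the existing CSS variables can switch on it.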

4.4 Debug (root cause, smallest safe fix)

Use for: failing tests, runtime errors, rare edge cases, regressions.
Output: root cause analysis + minimum fix + regression test.

Sample prompt:

(Screenshot: a Debug mode prompt)

5) Give context: @file, @folder, @Codebase, @Browser

AI output usually improves with better context, sharper prompts, and clearer constraints.

Reference: cursor.com/docs/context/mentions

5.1 Context strategy

  • Keep context small: add only what is needed (relevant files/folders).
  • Make context explicit: name files and scope (“modify only X”).
  • Keep context alive: have the agent first summarize what it sees and what assumptions it makes.

Tip: start a new chat for each change; this keeps the context small and prevents contamination.

5.2 Prompt template: context + guardrails

@folder src/lib/components @file src/routes/dashboard/+page.svelte

Create a sales metrics card for the dashboard that shows total revenue. Must have loading and error states, no new dependencies.

5.3 Making “Done” concrete

Examples that work well:

  • “Empty state + error state + loading state are present”
  • “No new dependencies”
  • “TypeScript strict remains green”
  • “Unit tests cover edge cases”

5.4 @Browser: visual verification

Cursor has a built-in browser that you can pass as context via @Browser. This allows the agent to:

  • Take screenshots and visually check what's on the page
  • Read the DOM and find specific elements
  • Click, type, and navigate, just like a real user

How it works:

  1. Open the in-editor browser (via Command Palette or the browser icon in chat)
  2. Navigate to the page you want to test
  3. In your prompt: use @Browser to include the current page as context

Sample prompt:

@Browser Check that the dashboard metrics are working properly. Do you see the Total Revenue card and sales chart? Are there any errors in the console?

Tips:

  • Be specific about what you expect to see
  • Use it for verification, not as a substitute for real tests
  • Combine with Debug mode for runtime issues

6) Rules: persistent instructions for the agent

LLMs have no memory between sessions. Rules solve this: they give the agent consistent, reusable context that is automatically included, without having to repeat the same instructions every time.

Reference: cursor.com/docs/context/rules

6.1 The Four Types of Rules

  • Project Rules: live in .cursor/rules/, scoped per project (version controlled). Use for codebase-specific conventions, templates, and workflows.
  • User Rules: set in Cursor Settings -> Rules, global across all your projects. Use for personal preferences (communication style, formatting).
  • Team Rules: managed in the Cursor Dashboard, applying to your whole team/org. Use for organization-wide standards (security, compliance).
  • AGENTS.md: placed in the project root (or subdirectories), scoped per project/folder. Use for simple instructions with no metadata overhead.

6.2 Project Rules (what you work with the most)

Project Rules live in .cursor/rules/ and consist of folders with a RULE.md file. You can set how each rule is applied:

  • Always Apply: every chat session
  • Apply Intelligently: agent decides based on description
  • Apply to Specific Files: only for certain file patterns (globs)
  • Apply Manually: only when you @mention the rule (@my-rule)

Sample folder structure:

.cursor/rules/
  api-conventions/
    RULE.md # Main rule
    templates/ # Optional templates
  component-style/
    RULE.md

Example RULE.md:

---
description: "Conventions for Go API services"
alwaysApply: false
---

When creating API endpoints:
- Use the default error response struct
- Always validate input with proper error messages
- Log to structured logger, no fmt.Println
- Add OpenAPI comments for documentation

@templates/api-handler.go

6.3 AGENTS.md (the simple variant)

For projects where you don't need complex rule configuration, you can put an AGENTS.md file in your project root. This is just markdown, no metadata, no folders.

# Project Instructions

## Code Style
- TypeScript for all new files
- Functional React components (no classes)
- snake_case for database columns

## Architecture
- Repository pattern for data access
- Business logic in service layer, not in handlers

Nested AGENTS.md: you can also nest AGENTS.md files in subdirectories. Instructions are combined, with those from deeper directories taking priority.

6.4 Practical Tips for Rules

  • Keep rules short (< 500 lines); split large rules into multiple composable rules
  • Give concrete examples or refer to template files with @filename
  • Avoid vague guidance, write rules as if they were internal documentation
  • Reuse rules when you find yourself repeating the same prompts in chat
  • Version control your rules, they are part of your codebase

7) Model selection: which model when?

Cursor supports multiple AI models. The choice impacts quality, speed and cost.

Here's how I approach it:

Reference: docs.cursor.com/settings/models

7.1 Models I use

  • Claude Sonnet 4.5. Strengths: excellent at code, long context (200k), fast, good planning. Weakness: sometimes too elaborate in its explanations. Use as the default for daily development.
  • Claude Opus 4.5. Strengths: best reasoning, deepest analysis, complex architecture. Weaknesses: slower, more expensive. Use for architecture, security reviews, and troublesome bugs.
  • GPT-5.3 Codex. Strengths: optimized for code, very fast, good autocomplete. Weakness: less strong with long context. Use for inline edits, autocomplete, and quick fixes.
  • GPT-5.2. Strengths: strong reasoning, good at explanation, broadly trained. Weakness: less code-specialized than Codex. Use for documentation, explanations, and trade-off discussions.

7.2 Thinking vs non-thinking modes

Some models (such as Claude Opus) have a “thinking” mode where the model reasons first before answering. This costs more tokens (higher cost) but provides better output on complex tasks.

When to use thinking mode:

  • Complex refactors hitting multiple files
  • Architecture decisions
  • Security/performance reviews
  • Debugging tricky issues

When non-thinking (faster, cheaper):

  • Simple code generation
  • Writing documentation
  • Minor fixes and renames
  • Autocomplete-like tasks

7.3 Cost-conscious working

Tips to control costs without losing quality:

  • Use the right model for the task: not every problem requires Opus with thinking
  • Limit context: fewer tokens in, less cost. Add only relevant files
  • Use Plan mode for large tasks: one good plan + build is cheaper than 10 separate agent runs
  • Switch to faster models for iteration: use Sonnet 4.5 for fast feedback loops

Reference: cursor.com/docs/models#model-pricing

Closing

After months of working with Cursor daily, my conclusion is simple: agentic coding is not a trick, it is a different way of working. The gain is not in “AI writes my code,” but in the combination of fast iterations, smart context, and the fact that you remain in control.

The features I have described in this article (modes, context, rules, model selection) are the basics. But there's more. In the next blog post, I'll dive into MCP (Model Context Protocol): how to integrate Cursor with external tools, databases, and APIs to make the agent even more powerful.

Until then: try it out! Start small, with one feature or one refactor and keep experimenting!