
Cursor IDE — Best Practices

Hello!

In this article, I will talk about good practices for working with Cursor IDE.

When indexing a project, Cursor can generally understand what the project contains and how it is supposed to work.
However, it will work much better if it has a description of how things should work.
This is exactly what Rules are for. They can be added via

File -> Preferences -> Cursor Settings -> Rules & Commands -> Project Rules -> Add Rule

However, I recommend adding them in a different way. In the root of the project, create the following folder structure with a rules file inside:

.cursor -> rules -> spec_global.mdc

At the top of the file, you must specify:

---
alwaysApply: true
---

After that, spec_global should appear in Project Rules in the Cursor settings.

Rules are automatically included in new chats with the agent.

In Rules, you can describe:

  • the role of the agent
  • the project’s technology stack
  • the project structure and conventions, for example: all services, controllers, and so on must live in their respective folders; when a screen is created, its ViewModel is created in the same folder; each screen lives in its own folder; base classes are placed one level higher in the hierarchy; naming rules, and so on
  • any project-specific requirements
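
To make this concrete, here is a minimal sketch of what a spec_global.mdc might look like. The stack, structure, and naming conventions below are placeholders; describe your own project instead:

---
alwaysApply: true
---

You act as a Senior Engineer on this project.

Tech stack: Kotlin Multiplatform client, Ktor backend.

Project structure:
  • services, controllers, and repositories live in their respective folders
  • each screen lives in its own folder together with its ViewModel
  • base classes are placed one level higher in the hierarchy

Naming: classes in PascalCase, functions and variables in camelCase.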

You can add Rules in Cursor that will apply to any project. In them, you can describe general code style preferences, common practices, and so on. You can do this here:

File -> Preferences -> Cursor Settings -> Rules & Commands -> User Rules -> Add Rule

There are many examples of such Rules available on the internet.

If a project consists of different parts, such as backend, frontend, and so on, you do not have to keep everything in a single repository. You can add the required part to Cursor via

File -> Add Folder to Workspace

For each part of the project and for any folder in the project, you can create separate Rules. To do this, create a spec_frontend.mdc file in the corresponding folder. In it, you can write, for example:

You act as a Senior Kotlin Multiplatform Engineer with extensive experience in Kotlin Multiplatform (KMP/KMM), clean architecture, modularization,
shared business logic, MVI state management, repository patterns, and platform-specific implementations through expect/actual.

In the backend folder, you can write:

You act as a Senior Kotlin Backend Engineer with extensive experience in Kotlin, Ktor framework, REST API development, authentication and authorization,
session management, encryption, and microservices architecture.

And so on.

In Cursor, the chat context is limited, so for each new task and subtask it is better to open a new chat. This reduces the load on the LLM and allows more efficient work within a single task.

If the chat context is exhausted, you can use the following command in the chat input field:

/Summarize

This will generate a chat summary and free up the context.

In Cursor, you can manually select LLM models and also use your own API tokens for OpenAI API, Gemini API, and so on, including models running locally in Ollama or LM Studio, but I would not recommend doing this.

LLM providers have both simple and advanced models. For example, OpenAI has Pro, standard, Mini, and Nano models. Simpler models are 10–20 times cheaper and much faster, and they are better suited for simple tasks.

When an agent works in Cursor, the process consists of many different steps such as task evaluation, web requests, execution, result evaluation, and many other intermediate steps. Not all of these steps require smart and expensive models.

At the moment, Cursor states that using the Auto model is unlimited within Pro and higher subscription plans.

Also, in the Cursor dashboard you can choose the subscription usage policy. There is a very important option there:

On-Demand Usage is Off / On

If this option is enabled, Cursor may use more expensive models and then charge you for their usage.

In practice, the Auto model within the Pro subscription, with On-Demand Usage set to Off, is more than sufficient for work.

There is no real reason to add a custom model for the agent and enable On-Demand Usage.

MCP (Model Context Protocol) is an open standard for connecting large language models (LLMs) to external tools and data, allowing them to perform tasks that their training alone does not cover. It standardizes the exchange of information between the model (client) and a server (external systems such as CRM, email, or databases), which makes the model considerably more useful.

In Cursor, you can connect MCP servers via

File -> Preferences -> Cursor Settings -> Tools & MCP -> New MCP Server

However, I recommend a different approach. In the .cursor folder, create an mcp.json file. For any such server, the required contents of this JSON file can be found in its documentation.
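
As a rough illustration, a typical mcp.json has the following shape. The server name, package, and environment variable below are placeholders; take the real values from the documentation of the server you are connecting:

{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-package"],
      "env": {
        "EXAMPLE_API_KEY": "your-token-here"
      }
    }
  }
}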

For example, here are the instructions and JSON for an MCP server for n8n; the Rules for working with that server are also provided in the repository:

https://github.com/czlonkowski/n8n-mcp

Using this server, Cursor can connect to n8n workflows, view them, and edit them.

Here is one of the servers that allows you to connect your Confluence account to Cursor:

https://github.com/sooperset/mcp-atlassian

Thanks to this, Cursor can read descriptions for different parts of the project in Confluence and take all the necessary information for a task from there.

There can be many opinions on when it is better to accept changes made by Cursor, but my view is the following:

If the result is better than before, it is better to approve the changes.
Often, Cursor completes part of the task and does something incorrectly, but if you only approve changes when everything is fully complete, the following problems may occur:

  • When there are too many changes, they become difficult to review
  • Cursor breaks something that previously worked and cannot understand what exactly broke. Even if 90 percent of the code works, you risk exhausting the context before the problem is fixed

Therefore, I recommend approving changes if the result is better than before and nothing is broken.

As an IDE, rather than as an agent, Cursor is not very convenient in most cases. It lacks many features that are available in IntelliJ IDEA, PyCharm, and other specialized IDEs.

If a project consists of backend, frontend, and other parts, it can be difficult to run everything in Cursor, since different parts may require specific plugins and capabilities that Cursor does not have.

Therefore, you can keep Cursor open alongside, using it only as an agent for code generation, while doing all other work in the appropriate IDEs with the relevant parts of the project open.

If something goes wrong when starting the application, very detailed logs allow Cursor to understand exactly what went wrong.

While an overly verbose console is a significant drawback for a human, Cursor has no problem with a large amount of logs. By comparing the output with the code, it can better understand what exactly went wrong during execution.

Cursor can interact with the console itself by running builds and different parts of the code and observing the execution process through logs and errors.

You can ask Cursor to run the project build or a specific file, inspect the console output, and fix issues in a loop until it starts working.

This strategy can be much more effective than running the code yourself and copying the stack trace error into Cursor.

You can make individual parts of the project runnable on their own so that Cursor can execute the corresponding code and see the result. For example, in Python you can add the following block:

if __name__ == "__main__":
    ...  # (some method)

And then ask Cursor to run this code and inspect the stack trace.
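
As a slightly fuller sketch, you can combine such an entry point with verbose logging so that Cursor can correlate the console output with the code. The module and function names here are invented purely for illustration:

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)


def import_users(path: str) -> int:
    """Hypothetical routine; shown only to illustrate the pattern."""
    logger.debug("Importing users from %s", path)
    count = 0
    # ... the real work would happen here ...
    logger.debug("Imported %d users", count)
    return count


if __name__ == "__main__":
    # Cursor can run this file directly and read the log output
    import_users("users.csv")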

There is also an approach where tests are written first, and then the code that should pass them is implemented. With Cursor, this approach can be very effective.
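
For example, with pytest you (or Cursor) first write a failing test for a function that does not exist yet, and then ask the agent to implement it until the test passes. The module and function below are invented for illustration; the import fails until the implementation is written, which is exactly the point:

# tests/test_slug.py -- written before the implementation exists
from app.text_utils import slugify  # hypothetical module to be implemented


def test_slugify_lowercases_and_replaces_spaces():
    assert slugify("Hello World") == "hello-world"


def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"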

You can configure the policy for running code by Cursor via

File -> Preferences -> Cursor Settings -> Agents -> Auto-Run -> Auto-Run Mode

Here you can choose whether to run everything without approval or to wait for user approval. The latter is safer, since you can review the command that will be executed, but it increases the time required to work on the code.

Reviewing changes before committing was important long before Cursor existed, but with the advent of code-generation agents this rule has become even more critical.

All LLMs tend to make mistakes and tend to modify code that they were not explicitly instructed to change.

There is always a balance between agent autonomy and the risk of breaking something that already works.

Therefore, before committing, it is extremely important to review the complete list of all changes made by the agent, in order not to break existing functionality, introduce security issues, and so on.