Prompt engineering is the practice of writing clear, structured instructions to guide AI coding agents like GitHub Copilot Agent or Cursor. Just like you’d explain a task to a junior developer, your job is to describe what you want, where to do it, and how to do it — in a way the AI can understand and act on.
During our internal experiment, we learned that prompt quality was one of the most critical success factors. Good prompts led to clean, scalable code. Vague or overly broad prompts caused hallucinations, bugs, and wasted time.
Why It Matters
A prompt is the main way you feed task-level context to the AI. Since the model can’t guess what you’re thinking, it relies entirely on what you say — and how you say it.
Well-crafted prompts:
Improve accuracy and consistency
Reduce hallucinations
Make AI-generated code easier to validate and maintain
Save time during reviews and rework
Core Tips on Prompting
Start prompts with a clear action, expected output, and any important constraints.
When possible, specify the exact file, service, or component where the change should happen.
Break large features into small, independent prompts whenever possible.
Use “step-by-step” or “think like an expert” instructions to guide deeper reasoning.
Do:
Create a POST /users/login endpoint using NestJS. It should accept email and password, validate input, and return a JWT if credentials are correct. Use class-validator and the JWT module.
Don’t:
Add login functionality.
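To make the contrast concrete, here is a minimal, framework-free sketch of the shape the specific prompt above is asking for. The NestJS wiring, class-validator decorators, and real JWT signing are omitted; the names (LoginDto, validateLogin) and the in-memory user store are illustrative assumptions, not real project code.

```typescript
// Hypothetical sketch of what a well-scoped login prompt targets.
interface LoginDto {
  email: string;
  password: string;
}

interface LoginResult {
  ok: boolean;
  token?: string;
  error?: string;
}

// Stand-in user store; a real service would query a database
// and compare password hashes, never plaintext.
const users: Record<string, string> = { "ada@example.com": "s3cret" };

function isValidEmail(email: string): boolean {
  // Simple structural check; class-validator's @IsEmail would do this in NestJS.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

function validateLogin(dto: LoginDto): LoginResult {
  // Input validation first, credential check second.
  if (!isValidEmail(dto.email)) return { ok: false, error: "invalid email" };
  if (!dto.password) return { ok: false, error: "password required" };
  if (users[dto.email] !== dto.password) {
    return { ok: false, error: "bad credentials" };
  }
  // Placeholder token; real code would sign a JWT via @nestjs/jwt.
  return { ok: true, token: `jwt-for-${dto.email}` };
}
```

Notice how each clause of the prompt (accept email and password, validate input, return a JWT on success) maps to a distinct, reviewable piece of the result. "Add login functionality" gives the agent none of those anchors.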
Define scope: one task at a time
Do:
Add email format validation to the user registration form in RegisterForm.tsx.
Don’t (this is too broad, so it is likely to produce incomplete or scattered results):
Finish all validations for the signup flow.
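A prompt scoped to one field in one file tends to yield a small, reviewable change. As a sketch, assuming a plain helper extracted from RegisterForm.tsx (the function names are hypothetical):

```typescript
// Hypothetical helpers the scoped email-validation prompt might produce.
function isEmailFormatValid(value: string): boolean {
  // Structural check: non-empty local part, "@", domain containing a dot.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}

function emailFieldError(value: string): string | null {
  if (value.trim() === "") return "Email is required";
  if (!isEmailFormatValid(value)) return "Please enter a valid email address";
  return null; // null means the field passes validation
}
```

The broad "finish all validations" prompt would instead touch many fields at once, making the diff harder to review and any mistakes harder to isolate.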
Add context: Include file names, project structure, and relevant implementation details.
Do:
In auth.controller.ts, add a new endpoint that consumes authService.validateUser() and returns a JWT if valid.
Also, attach files or use the #codebase tag to help Copilot Agent or Cursor read project content.
Describe expected behavior and constraints
Do:
Add unit tests for parseMedicalReport() in report.utils.ts. Cover edge cases like empty file, invalid format, and corrupted content.
Don’t:
Write tests for report parser.
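To show why enumerating edge cases matters, here is a sketch in which parseMedicalReport is a stand-in implementation (the real report.utils.ts is not shown, and its format is an assumption), followed by checks for exactly the cases the prompt names:

```typescript
// Stand-in parser so the edge cases are concrete; the real
// report.utils.ts implementation and format are assumptions.
interface ParsedReport {
  patientId: string;
  findings: string[];
}

function parseMedicalReport(raw: string): ParsedReport {
  if (raw.trim() === "") throw new Error("empty file");
  const [header, ...rest] = raw.split("\n");
  if (!header.startsWith("PATIENT:")) throw new Error("invalid format");
  return {
    patientId: header.slice("PATIENT:".length).trim(),
    findings: rest.filter((line) => line.trim() !== ""),
  };
}

// Minimal assertion helper for the negative cases.
function expectThrows(fn: () => void, label: string): void {
  try {
    fn();
  } catch {
    return; // threw as expected
  }
  throw new Error(`expected ${label} to throw`);
}

// Edge cases mirroring what the prompt asks the agent to cover.
expectThrows(() => parseMedicalReport(""), "empty file");
expectThrows(() => parseMedicalReport("garbage content"), "invalid format");
```

Because the prompt lists the edge cases explicitly, you can verify the generated tests against that list instead of guessing whether "tests for report parser" covered anything beyond the happy path.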
Use a reasoning-first prompt format
Three Experts Method
Simulate three different experts answering the questions below. Each expert writes down one step of their thinking, then shares it with the group. Then all the experts move on to the next step, and so on. If any expert realizes they are wrong at any point, they leave. Stop once you have the final answer for each question.
Self-Refinement Loop
Try solving this <add context from IDE>. Then improve your answer over 3 iterations by critiquing and rewriting each version.
These methods improve architectural decisions and reduce low-quality responses.
Iterate and refine
It’s normal to go through multiple prompt rounds. Use a feedback loop: