AI in production: business rules in a file at the root
TL;DR - A root .md file with business rules becomes onboarding for AIs; spec and TDD come before asking for code. The file holds the context; the tests validate the output and stop the AI from inventing requirements.
When I use AI to generate code that goes to production, I need two things: business context and a contract for what must be implemented. Without that, the session drifts and the AI starts making decisions I didn’t ask for.
The file that anchors context
I use a file at the project root: REGRAS.md or CONTEXTO.md (or RULES.md / CONTEXT.md in English). The AI reads it before generating anything. Without it, the AI loses the thread in long sessions, drifts from the architecture and invents behavior. The file works as permanent onboarding: whoever (or whatever) enters the repo already knows the rules of the game.
How I structure the file
I organize it in three blocks:
- What must not happen. Constraints, limits, what the system must not do. Guardrails.
- How I want the AI to collaborate. Code patterns, examples of good and bad usage, known project pitfalls. Guidance.
- Shared context. Architecture decisions, glossary, why things are the way they are. Guidelines.
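As a sketch, a minimal REGRAS.md following those three blocks. Every project detail below (the repository rule, `pricing.py`, the outbox decision) is invented for illustration:

```markdown
# REGRAS.md

## What must not happen (guardrails)
- Never write to the orders table outside `OrderRepository`.
- No network calls in domain code.

## How to collaborate (guidance)
- Prefer small pure functions; `pricing.py` shows the house style.
- Bad: catching bare `Exception`. Good: catching the specific error and logging context.

## Shared context (guidelines)
- We publish events via an outbox table, not directly to the queue, so the DB write and the publish commit together.
- "Tier" means the customer's loyalty level, not the infrastructure tier.
```

The headings matter less than the split: what is forbidden, what is preferred, and why things are the way they are.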
Each implementation cycle that reveals something new becomes a line in that file. It grows with the project.
Spec and TDD as contract
For the contract I use spec-driven with TDD. I define the behavior (inputs, outputs, rules for that piece), write the tests and only then ask for the code.
Spec first avoids vague scope. Tests first ensure the implementation meets what was specified. The AI doesn’t decide what to do; it fills in how from what’s already written and tested.
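A minimal sketch of that contract in Python, assuming a hypothetical discount rule (`apply_discount`, the gold tier and the rates are all invented for illustration). The tests pin down the behavior first; the implementation is only what the tests force:

```python
# Spec as tests, written BEFORE any implementation is requested.
# They fix inputs, outputs and edge cases: the "what".

def test_gold_tier_gets_ten_percent_off():
    assert apply_discount(100.0, "gold") == 90.0

def test_unknown_tier_pays_full_price():
    assert apply_discount(100.0, "bronze") == 100.0

def test_zero_price_stays_zero():
    assert apply_discount(0.0, "gold") == 0.0

# What the AI is asked for afterwards: the "how".
# Unknown tiers fall back to a zero discount.
DISCOUNTS = {"gold": 0.10}

def apply_discount(price: float, tier: str) -> float:
    return price * (1 - DISCOUNTS.get(tier, 0.0))
```

In practice the tests would live in their own pytest file; the point is that the suite, not the prompt, decides whether the generated code stays.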
```mermaid
flowchart TB
    Root[REGRAS.md at root] --> IA[AI reads context]
    IA --> Spec[Spec + behavior]
    Spec --> Testes[Write tests]
    Testes --> Gera[AI generates code]
    Gera --> Valida[Tests validate]
    Valida -->|Fail| Gera
    Valida -->|Pass| Fica[Code stays]
    Fica --> Atualiza[Update REGRAS.md]
    Atualiza --> Root
```
The flow in practice
- The AI reads the rules file.
- I define the piece in spec (behavior, inputs, outputs) and write the tests.
- Only then do I ask for the code. The AI fills in the implementation; the tests decide if it stays.
- If it fails, I adjust the prompt or the code until it passes.
- After each cycle I update REGRAS.md with what I learned.
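That last step can even be made mechanical. A purely illustrative sketch (not something the workflow requires): a tiny helper that appends a dated rule line to the root file, so a lesson from a failed cycle becomes a guardrail for the next one. The helper name and the example rule are invented:

```python
from datetime import date
from pathlib import Path

def record_rule(rule: str, path: str = "REGRAS.md") -> None:
    """Append a learned rule to the root rules file, dated for traceability."""
    line = f"- ({date.today().isoformat()}) {rule}\n"
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(line)

# Example: a lesson from a cycle where float rounding broke a price test.
record_rule("Money values are Decimal end to end; never float in price math.")
```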
Spec-driven keeps scope clear. TDD becomes the guard that stops the AI from inventing requirements or drifting from the contract. The AI generates; the spec and tests arbitrate.
Summary
- One file at the root (REGRAS.md / CONTEXTO.md, or RULES.md / CONTEXT.md) with constraints, patterns and context. Onboarding for AIs.
- Spec before code. Behavior and contract written first; tests written next.
- Code last. The AI implements; the tests validate. If they pass, it stays; if not, adjust.
- Update the file after each cycle. What went wrong or right becomes a rule for the next time.
With that I can use AI in production without losing control of what’s being delivered.