Before asking Codex to "work on AI adoption for the company", tighten the task until it has a file, a command, and a stop condition.
That discipline is the useful business idea behind OpenAI's new `/goal` feature for Codex. It is not a magic background employee. It is a way to give Codex a durable objective for long-running work whose outcome can be verified.
For a Luxembourg SME, this matters because most departments do not need more vague AI experiments. They need one piece of work to move from messy to reliable: a project intake file, a campaign migration, a monthly report, an onboarding portal, or a support triage workflow.
Use `/goal` only when the department can define what done means.
The wrong prompt is "make our finance process better." The useful prompt is "build this reporting script, reconcile it against this control file, run these checks, and stop when the mismatch list is empty."
What `/goal` changes
OpenAI describes `/goal` as a Codex CLI feature for long-running tasks with a clear success condition and validation loop. The official use-case page says it is best for migrations, large refactors, deployment retry loops, experiments, games, side projects, and teams running longer experiments with clear success criteria.
The feature is not for a loose list of unrelated work. OpenAI's guidance says a good goal is bigger than one prompt but smaller than an open-ended backlog. It should say what Codex should achieve, what it should not change, how it should validate progress, and when it should stop.
As of 16 May 2026, the docs also mark `/goal` as experimental. It has to be enabled from `/experimental`, or by adding `goals = true` under `[features]` in `config.toml`. Once enabled, the user can set a goal with `/goal <objective>`, check it with `/goal`, and use `/goal pause`, `/goal resume`, or `/goal clear` to control the run.
If the work has no repository, no files, no sample inputs, no acceptance checks, and no responsible owner, it is not a good `/goal` candidate yet.
The tutorial: write the goal contract first
Before using `/goal`, write five lines.
- Objective: the specific artifact Codex should produce or change.
- Sources: the files, docs, tickets, logs, examples, screenshots, or exports it must read first.
- Boundaries: what it must not change, publish, delete, send, or infer.
- Validation: the commands, screenshots, sample outputs, review checklist, or reconciliation file that proves progress.
- Stop rule: the condition that tells Codex to stop, pause, or ask for human guidance.
This is the difference between an AI task and an operating contract.

The five department use cases
1. Operations: standardise project intake
Many SMEs have the same problem in different clothes: every project file looks different. Some include budget. Some include risk. Some include a supplier comparison. Some include a decision owner. Leadership then spends the meeting reconstructing the file instead of deciding.
A useful `/goal` task is to build or improve a project-intake generator from an approved SOP.
/goal Build the project-intake generator from `docs/project-intake-sop.md`. It must create briefs from the five sample requests in `samples/intake/`, keep the approved section order, flag missing budget, owner, risk, and deadline fields, and stop when the tests pass and the generated samples match `review/checklist.md`.
Proof: sample briefs are consistent, missing information is flagged, and the review checklist passes.
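One of those checks can be a short script. A sketch in Python, assuming the generated briefs land as Markdown files in a hypothetical `build/briefs/` folder; the section names below are placeholders, and the real ones come from `docs/project-intake-sop.md`:

```python
# Minimal sketch of one acceptance check for the intake goal.
# build/briefs/ and the section names are assumptions for illustration.
from pathlib import Path

REQUIRED_SECTIONS = ["Budget", "Owner", "Risk", "Deadline"]

def missing_sections(brief_path: Path) -> list[str]:
    """Return the required sections that a generated brief never mentions."""
    text = brief_path.read_text(encoding="utf-8").lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in text]

def main() -> int:
    failures = {}
    for brief in sorted(Path("build/briefs").glob("*.md")):
        gaps = missing_sections(brief)
        if gaps:
            failures[brief.name] = gaps
    for name, gaps in failures.items():
        print(f"{name}: missing {', '.join(gaps)}")
    # A non-zero exit code gives Codex, and the reviewer, a clear stop signal.
    return 1 if failures else 0

if __name__ == "__main__":
    raise SystemExit(main())
```

The point is not the script itself. It is that the department owner can read it and agree it matches the checklist.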
2. Marketing and sales: migrate a campaign page without breaking it
Marketing teams often need website, landing page, email, or CRM changes that are too large for one prompt but too structured to wait three weeks. A good `/goal` is a migration or cleanup where the target design, content source, and verification steps are clear.
/goal Migrate the five campaign pages listed in `campaign-migration-plan.md` to the new component system. Preserve copy, metadata, forms, and analytics attributes. Use browser screenshots to compare desktop and mobile. Stop when build, link checks, and visual review notes pass.
Proof: the site builds, links resolve, metadata is intact, and screenshots show the page still looks right.
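Screenshots need human eyes, but the metadata part can be scripted. A rough sketch, assuming the old and new renders of each page are exported as HTML into hypothetical `old-pages/` and `new-pages/` folders, and that analytics tags sit in a `data-analytics` attribute (both assumptions; the real plan lives in `campaign-migration-plan.md`):

```python
# Sketch of a metadata-preservation check for the migrated campaign pages.
from html.parser import HTMLParser
from pathlib import Path

class MetaCollector(HTMLParser):
    """Collect title text, meta description, and analytics attributes."""
    def __init__(self) -> None:
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""
        self.analytics_ids: list[str] = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")
        if "data-analytics" in attrs:  # hypothetical attribute name
            self.analytics_ids.append(attrs["data-analytics"])

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def collect(path: Path) -> MetaCollector:
    parser = MetaCollector()
    parser.feed(path.read_text(encoding="utf-8"))
    return parser

for old_page in sorted(Path("old-pages").glob("*.html")):
    new_page = Path("new-pages") / old_page.name
    if not new_page.exists():
        print(f"{old_page.name}: no migrated page found")
        continue
    before, after = collect(old_page), collect(new_page)
    if (before.title, before.description) != (after.title, after.description):
        print(f"{old_page.name}: title or meta description changed")
    if sorted(before.analytics_ids) != sorted(after.analytics_ids):
        print(f"{old_page.name}: analytics attributes differ")
```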
3. Finance and admin: create a controlled monthly KPI pack
Finance should be careful with AI because the downside of a confident mistake is real. That does not mean the department should avoid Codex. It means the first goal should use controlled exports, anonymised samples, and reconciliation checks.
/goal Build the monthly KPI pack generator from the anonymised CSV exports in `finance/sample-data/`. Reconcile revenue, cost, cash, and overdue invoice totals against `finance/control-totals.csv`. List every mismatch. Do not connect to bank, payroll, or accounting APIs. Stop when reconciliation tests pass and the output PDF is generated.
Proof: totals match the control file, anomalies are listed, and no production credential is introduced.
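The reconciliation step is the part worth scripting first. A minimal sketch, assuming `finance/control-totals.csv` holds `metric,expected` rows and each export in `finance/sample-data/` holds `metric,amount` rows; those column names are placeholders, and the real layout has to match the actual exports:

```python
# Minimal reconciliation sketch against the control file.
import csv
from collections import defaultdict
from pathlib import Path

def read_control(path: Path) -> dict[str, float]:
    with path.open(newline="", encoding="utf-8") as f:
        return {row["metric"]: float(row["expected"]) for row in csv.DictReader(f)}

def sum_exports(folder: Path) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    for export in folder.glob("*.csv"):
        with export.open(newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                totals[row["metric"]] += float(row["amount"])
    return totals

control = read_control(Path("finance/control-totals.csv"))
actual = sum_exports(Path("finance/sample-data"))

mismatches = [
    (metric, expected, actual.get(metric, 0.0))
    for metric, expected in control.items()
    if abs(actual.get(metric, 0.0) - expected) > 0.01  # small tolerance for rounding
]
for metric, expected, found in mismatches:
    print(f"MISMATCH {metric}: expected {expected:,.2f}, got {found:,.2f}")
print("Reconciliation passed" if not mismatches else f"{len(mismatches)} mismatches")
```

Every mismatch is listed, nothing is silently corrected, and the finance owner decides what to do with the list.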
4. HR and people: turn onboarding into a checked portal
HR is usually rich in documents and poor in structure. The handbook exists. The role checklist exists. The security training exists. The new employee still gets five PDFs, three links, and a manager who explains the missing parts on a call.
A useful `/goal` is to turn approved onboarding material into a small portal, checklist, or knowledge base prototype.
/goal Build an onboarding portal from `hr/handbook.md`, `hr/security-basics.md`, and `hr/first-week-checklist.md`. Create pages for new starters, managers, and IT setup. Do not use employee records or answer legal/employment questions beyond the supplied text. Stop when all links resolve and the five sample onboarding questions cite source sections.
Proof: pages render, links work, sample answers cite the approved source docs, and sensitive questions are routed to a human owner.
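The "all links resolve" check is easy to automate. A sketch, assuming the portal pages end up as Markdown files in a hypothetical `portal/` folder with relative links:

```python
# Sketch of the link-resolution check for the onboarding portal.
import re
from pathlib import Path

LINK = re.compile(r"\[[^\]]*\]\(([^)#\s]+)")  # capture the target of [text](target)

broken: list[tuple[str, str]] = []
for page in Path("portal").rglob("*.md"):
    for target in LINK.findall(page.read_text(encoding="utf-8")):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # external links go through a separate manual review
        if not (page.parent / target).exists():
            broken.append((page.name, target))

for page, target in broken:
    print(f"{page}: broken link -> {target}")
raise SystemExit(1 if broken else 0)
```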
5. Customer support and IT: prototype narrow triage
Support and IT teams are good candidates because they already work with tickets, known issue lists, runbooks, logs, and recurring questions. That gives Codex the ingredients it needs: sources, examples, and validation cases.
/goal Prototype a ticket-triage assistant from `support/runbooks/` and `support/sample-tickets.json`. It should classify tickets into billing, access, bug, how-to, and escalation. It must refuse password, payment-card, and personal-data prompts. Stop when the classification tests pass and the refusal cases are documented.
Proof: sample tickets route correctly, refusal cases work, and the owner can inspect the test output.
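The classification tests can also be a short, readable script. A sketch, assuming the goal run produces a hypothetical `triage.py` with a `classify()` function and that `support/sample-tickets.json` stores objects with `text` and `expected` fields; the real format is whatever the team agrees on:

```python
# Sketch of the classification test over the sample tickets.
import json
from pathlib import Path

from triage import classify  # hypothetical function produced by the goal run

LABELS = {"billing", "access", "bug", "how-to", "escalation"}

tickets = json.loads(Path("support/sample-tickets.json").read_text(encoding="utf-8"))
wrong = []
for ticket in tickets:
    predicted = classify(ticket["text"])
    assert predicted in LABELS, f"unknown label: {predicted}"
    if predicted != ticket["expected"]:
        wrong.append((ticket["expected"], predicted, ticket["text"][:60]))

for expected, predicted, snippet in wrong:
    print(f"expected {expected}, got {predicted}: {snippet!r}")
print(f"{len(tickets) - len(wrong)}/{len(tickets)} tickets routed correctly")
```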
How to run it without losing control
Put the goal in a clean branch or worktree. Give Codex the files to read first. Name the checks it must run. Ask it to work in checkpoints and keep a short progress log.
Then inspect the run with `/goal`. If the status becomes vague, tighten the contract. Tell Codex which checkpoint matters next, which command proves it, and what should cause it to pause.
The control commands matter. Use `/goal pause` when the business direction changes, `/goal resume` when the next checkpoint is clear, and `/goal clear` when the run is done or no longer valid.
Jonathan's opinion
`/goal` is most useful for SMEs when it forces better management discipline.
The feature is interesting because Codex can keep working. But the business value comes from the contract the human has to write before that happens. A clear objective. Named sources. Boundaries. Validation. A stop rule.
That is exactly where many AI experiments fail. The tool is clever, but the work is undefined.
For a 50-person company, the win is not to let an agent wander through operations for a day. The win is to pick one recurring artifact and define it so clearly that progress can be tested.
What to do this quarter
- Pick one department where the work already lives in files, tickets, exports, or docs.
- Choose one recurring artifact: a brief, page, report, portal, or triage output.
- Write the five-line goal contract before opening Codex.
- Run the goal only on sample or approved data first.
- Review the output with the department owner before treating it as a workflow.
The useful question is not "Which department can use AI?"
The useful question is: "Which department can define a goal well enough that we can verify the result?"
Caveats and sources
`/goal` is experimental. The official OpenAI docs describe it as a Codex CLI feature that must be enabled before use. The examples above are business adaptations of OpenAI's coding-focused guidance, not separate OpenAI product promises.
This is not legal, HR, finance, or compliance advice. For sensitive workflows, keep real personal data, payroll data, customer secrets, and regulated information out of first tests. Use sample data, approved source documents, and human review.
