I’ve been building PipelineSage, a Spring Boot + Gradle service that analyses CI pipeline failures and returns a structured summary, category, and suggested fix. It’s also my sandbox for “golden-path” GitLab CI/CD for AI-enabled apps – including a dedicated analyze_failure stage that calls back into the service with JSON logs.
Along the way, I ended up with a surprisingly useful pattern:
Let Codex read the plan, create the GitLab issues, and update the docs – using a GitLab API token it never actually sees.
Here’s how that loop works.
1. Plans live in docs, not in my head
The project roadmap sits in docs/project-notes.md as phases, checklists, and “definition of done” for each slice.
For example, Phase 2 (“Fake AI & failure flow”) spells out:
- implement rule-based analysis at `POST /api/analysis`,
- wire a GitLab `analyze_failure` job that calls it on test failure,
- document the pattern as a “golden path” CI template.
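The `analyze_failure` job's script side boils down to "collect the log, POST it as JSON". Here is a minimal sketch of that callback; the `/api/analysis` path comes from the plan above, but the JSON field names (`pipelineId`, `jobName`, `logText`) and the `$PIPELINESAGE_URL` variable are illustrative assumptions, not the real `LogAnalysisRequest` schema:

```shell
#!/usr/bin/env sh
# Build a JSON body for POST /api/analysis from a captured log file.
# Field names (pipelineId, jobName, logText) are illustrative guesses,
# not the real LogAnalysisRequest schema.
build_analysis_payload() {
  pipeline_id="$1"
  job_name="$2"
  log_file="$3"
  # Minimal JSON escaping: backslashes and quotes first, then newlines as \n.
  log_text=$(sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' "$log_file" | awk '{printf "%s\\n", $0}')
  printf '{"pipelineId":"%s","jobName":"%s","logText":"%s"}' \
    "$pipeline_id" "$job_name" "$log_text"
}

# In the CI job this would feed straight into the callback, e.g.:
#   build_analysis_payload "$CI_PIPELINE_ID" "$CI_JOB_NAME" test.log \
#     | curl -s -X POST -H 'Content-Type: application/json' \
#            --data @- "$PIPELINESAGE_URL/api/analysis"
```

`CI_PIPELINE_ID` and `CI_JOB_NAME` are GitLab's own predefined CI variables; everything else here is sketch-level.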
Each item is now linked to a real GitLab issue ([#10], [#11], etc.), but those links didn’t exist when I wrote the plan – Codex added them.
2. Secrets live in the shell, not in the prompt
To keep things safe, I created a small env loader script so my shell knows how to talk to GitLab, but Codex never sees the raw token:
- `GITLAB_API_TOKEN` – a personal access token for the GitLab project
- `PIPELINESAGE_GITLAB_URL` – the project URL
These are loaded from a local .env file via a helper script in dev-env, and exported into my shell as environment variables.
Codex only ever works with $GITLAB_API_TOKEN and $PIPELINESAGE_GITLAB_URL as names – not as literal values.
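The loader itself can stay tiny. A sketch of what a `dev-env` helper might look like — the `load_env` name and the exact `.env` format are my guesses at the pattern, not the actual script:

```shell
# dev-env/load-env.sh -- meant to be sourced:  . dev-env/load-env.sh
# Reads KEY=VALUE pairs from a git-ignored .env file and exports them,
# so the raw token only ever exists in the shell environment.
load_env() {
  env_file="${1:-.env}"
  [ -f "$env_file" ] || { echo "missing $env_file" >&2; return 1; }
  while IFS='=' read -r key value; do
    # Skip blank lines and comment lines.
    case "$key" in ''|\#*) continue ;; esac
    export "$key=$value"
  done < "$env_file"
}
```

Because the script is sourced rather than executed, the exports land in the interactive shell — which is exactly where Codex-generated commands will pick them up by name.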
3. Codex does the boring bits end-to-end
The workflow now looks like this:
- I start a Codex session “in project mode” using the shared system context doc, which describes the API (`LogAnalysisRequest`/`LogAnalysisResult`), the `FailureCategory` enum, and the CI shape.
- I ask Codex to read `docs/project-notes.md` and identify the tasks for, say, Phase 2.
- I then ask: “Using the env vars already available in this shell, generate and run the `curl` calls needed to create GitLab issues for each task, with sensible titles and descriptions.”
Because the token and project URL are already in the environment, Codex doesn’t need to see them; it just uses $GITLAB_API_TOKEN and $PIPELINESAGE_GITLAB_URL in the commands.
The neat part: Codex actually executes the calls, so by the time it’s finished, GitLab has one issue per task, correctly labelled and grouped by phase.
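Concretely, the generated commands look something like the sketch below. `POST /projects/:id/issues` with a `PRIVATE-TOKEN` header is GitLab's real REST API; the helper functions, the checklist-line format, and the assumption that `$PIPELINESAGE_GITLAB_URL` expands to the project's API base (`/api/v4/projects/<id>`) are mine:

```shell
# Turn a markdown checklist line into an issue title
# (assumes "- [ ] task text" style items -- an illustrative format).
issue_title_from_task() {
  printf '%s\n' "$1" | sed -e 's/^- \[.\] //' -e 's/^- //'
}

# Create one GitLab issue. GITLAB_API_TOKEN is read from the environment,
# never pasted into the prompt. Assumes PIPELINESAGE_GITLAB_URL points at
# the project's API base, e.g. https://gitlab.example.com/api/v4/projects/123
create_issue() {
  title="$1"; description="$2"; labels="$3"
  curl -s --request POST \
    --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN" \
    --data-urlencode "title=$title" \
    --data-urlencode "description=$description" \
    --data-urlencode "labels=$labels" \
    "$PIPELINESAGE_GITLAB_URL/issues"
}
```

Labelling each call with a phase label (e.g. `phase-2`) is what keeps the resulting backlog grouped the way the plan is.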
4. Codex then updates the docs to point back at GitLab
Once the issues exist, I ask Codex to: “Update `docs/project-notes.md` so each checklist item links to its new GitLab issue.”
Codex re-reads the doc, inserts markdown links (the [#10], [#11] references mentioned above) next to every item across phases, and writes the updated file back into the repo.
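If you wanted to script that last step yourself, it reduces to a small in-place edit per task. A sketch — the function name, checklist format, and naive `sed` matching are all illustrative, and it would mangle tasks containing regex metacharacters:

```shell
# Append an "([#N](url))" link to the checklist line that mentions a task.
# Naive: assumes the task text contains no sed/regex metacharacters.
link_task_to_issue() {
  doc="$1"; task="$2"; issue_ref="$3"; issue_url="$4"
  sed -i.bak "s|^\(- .*${task}.*\)\$|\1 ([${issue_ref}](${issue_url}))|" "$doc" \
    && rm -f "$doc.bak"
}
```

The point of the pattern, of course, is that Codex writes and runs this plumbing for you — but it is worth seeing how little magic is underneath.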
I didn’t cut and paste a single issue URL by hand.
The result is:
- docs that know about the backlog, and
- a backlog that clearly points back to the design and CI docs (architecture, paved road, system context).
Why this feels like “real” DevEx
What I like about this pattern is that it stays true to my original principles for PipelineSage:
- Golden paths over heroics – GitLab issues and pipelines follow a clear, documented pattern instead of ad-hoc commands.
- Quality via automation – even the planning artefacts (docs → issues) are automated.
- Security by design – API tokens never get pasted into prompts; they live in env and are described in the system context doc, not hard-coded.
Codex isn’t “being clever” here. It’s just doing the plumbing work that I’d otherwise do by hand:
- reading the plan,
- generating issue-creation calls,
- executing them,
- and wiring the links back into the docs.
That’s exactly the kind of work we used to script ourselves for CI/CD.
The difference now is that I can describe the outcome in words, let Codex discover the right GitLab API calls, and keep my attention on the design of the system – not on copying issue URLs around.
For a DevEx / platform engineer, that feels like the right division of labour between humans, tools, and the growing number of AI “team members” on the bench.