Tools: `mvn`, JaCoCo, JFR

```
while (goal_not_met) {
    measure()       // run the tool, capture metrics
    analyze()       // reason about what to improve
    apply_change()  // modify code with best optimization
    re_measure()    // verify improvement
}
```
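The loop above can be sketched in plain Java. This is a minimal, hypothetical harness, not part of any tool: the `measure` and `applyChange` steps are supplied by the caller, and the stop rule mirrors the one used later in the article (an improvement below a minimum gain counts as a failure; a run of consecutive failures ends the loop).

```java
import java.util.function.Supplier;

public class OptimizeLoop {
    // Repeats measure -> apply -> re-measure until improvement stalls.
    // measure() returns a cost (e.g. elapsed millis); lower is better.
    static double optimize(Supplier<Double> measure, Runnable applyChange,
                           double minGainPct, int maxFailures) {
        double best = measure.get();           // baseline measurement
        int failures = 0;
        while (failures < maxFailures) {
            applyChange.run();                 // the "analyze + apply" step
            double now = measure.get();        // re-measure
            double gainPct = (best - now) / best * 100.0;
            if (gainPct >= minGainPct) {
                best = now;                    // real improvement: keep going
                failures = 0;
            } else {
                failures++;                    // too small: count as failure
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy stand-in: a "runtime" that halves on the first two changes, then stalls.
        double[] runtime = {100.0};
        int[] step = {0};
        double best = optimize(() -> runtime[0],
                               () -> { if (step[0]++ < 2) runtime[0] /= 2; },
                               2.0, 3);
        System.out.println(best); // 25.0
    }
}
```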
The prompt I'll feed to Claude Code:

```
Run `mvn clean test jacoco:report`, then read target/site/jacoco/jacoco.csv
to determine line coverage %. If below 92%, analyze the report for uncovered
classes, write JUnit 5 tests targeting those gaps, and re-run. Keep looping
until line coverage >= 92%.
```
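The coverage check the prompt describes is simple enough to sketch by hand. This assumes the standard `jacoco.csv` column layout, where `LINE_MISSED` and `LINE_COVERED` are the 8th and 9th columns; if your JaCoCo version lays the file out differently, adjust the indices.

```java
import java.util.List;

public class CoverageCheck {
    // Sums LINE_MISSED / LINE_COVERED across all class rows (header skipped)
    // and returns aggregate line coverage as a percentage.
    static double lineCoverage(List<String> csvLines) {
        long missed = 0, covered = 0;
        for (String row : csvLines.subList(1, csvLines.size())) {
            String[] f = row.split(",");
            missed  += Long.parseLong(f[7]);  // LINE_MISSED
            covered += Long.parseLong(f[8]);  // LINE_COVERED
        }
        return 100.0 * covered / (missed + covered);
    }

    public static void main(String[] args) {
        // Two fabricated class rows for illustration.
        List<String> demo = List.of(
            "GROUP,PACKAGE,CLASS,INSTRUCTION_MISSED,INSTRUCTION_COVERED,"
                + "BRANCH_MISSED,BRANCH_COVERED,LINE_MISSED,LINE_COVERED,"
                + "COMPLEXITY_MISSED,COMPLEXITY_COVERED,METHOD_MISSED,METHOD_COVERED",
            "app,com.example,Foo,0,10,0,2,2,18,0,4,0,3",
            "app,com.example,Bar,5,5,1,1,3,7,1,2,1,1");
        System.out.printf("%.1f%%%n", lineCoverage(demo)); // 83.3%
    }
}
```

In the real loop, the agent would read `target/site/jacoco/jacoco.csv` with `Files.readAllLines` and compare the result against the 92% threshold.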
The prompt I'll feed to Claude Code:

```
Iteratively optimize this Java app for performance. Run the benchmark,
analyze the timing breakdown, pick the single most impactful optimization,
apply it. If >= 2% faster: commit + update CHANGELOG. If < 2%: count as
failure. 3 consecutive failures = stop.
```
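The source doesn't show the benchmark itself, so here is one hedged sketch of a minimal timing harness: warm up to let the JIT settle, then take the best of several runs to reduce noise. For serious measurements a real harness like JMH is the better choice, since hand-rolled timing loops are vulnerable to dead-code elimination and other JIT effects.

```java
public class Bench {
    // Times a workload: warm up, then return the fastest of `runs` passes
    // in nanoseconds. The workload here is a placeholder.
    static long bestRunNanos(Runnable workload, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) workload.run(); // JIT warm-up
        long best = Long.MAX_VALUE;
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            workload.run();
            best = Math.min(best, System.nanoTime() - t0);
        }
        return best;
    }

    public static void main(String[] args) {
        long ns = bestRunNanos(() -> {
            long sum = 0;                       // toy workload
            for (int i = 0; i < 1_000_000; i++) sum += i;
        }, 5, 10);
        System.out.println(ns + " ns");
    }
}
```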
- `git log` for auto-generated commit messages
- `CHANGELOG.md` for documented optimizations

| Optimization | Category | Impact |
|---|---|---|
| Bubble sort → Collections.sort() | Algorithm | Very Large |
| File re-reading → load once | I/O | Large |
| Pattern.compile in loop → static | Object creation | Moderate |
| String += → StringBuilder | String handling | Moderate |
| DateTimeFormatter per call → cached | Object creation | Moderate |
| ArrayList scan → HashMap | Data structure | Moderate |
| Redundant copies → single pass | Memory | Small |
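Two rows of the table, sketched as before/after Java. The class and method names are illustrative, but the transformations themselves are the standard ones: hoist `Pattern.compile` out of the hot path into a static field, and replace repeated `String +=` (which copies the whole string each time, O(n²) overall) with a `StringBuilder`.

```java
import java.util.List;
import java.util.regex.Pattern;

public class OptimizationExamples {
    // Before: re-compiles the regex on every call.
    static boolean matchesSlow(String s) {
        return Pattern.compile("\\d+").matcher(s).matches();
    }

    // After ("Pattern.compile in loop -> static"): compile once, reuse.
    private static final Pattern DIGITS = Pattern.compile("\\d+");
    static boolean matchesFast(String s) {
        return DIGITS.matcher(s).matches();
    }

    // Before: each += allocates and copies a new String.
    static String joinSlow(List<String> parts) {
        String out = "";
        for (String p : parts) out += p;
        return out;
    }

    // After ("String += -> StringBuilder"): amortized O(1) appends.
    static String joinFast(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(matchesFast("12345"));             // true
        System.out.println(joinFast(List.of("a", "b", "c"))); // abc
    }
}
```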
The universal pattern:

```
measurable_goal + tool_access + termination_condition = AI loop
```
```shell
npm install -g @anthropic-ai/claude-code
```