How to Adopt an AI-First Software Delivery Approach While Preserving Engineering Discipline

Introduction

Transitioning to an AI-first software delivery model doesn't mean throwing out proven engineering practices. Wes Reisz outlines how to strike a strategic balance by leveraging agentic workflows tailored to your context. This guide walks you through applying a two-by-two decision matrix and the RIPER-5 framework to ensure innovation without chaos. By following these steps, you'll learn when to deploy supervised versus unsupervised agents and how to maintain discipline through structured phases.


What You Need

  • Understanding of your codebase's lifecycle – knowledge of how long your code typically lives before being rewritten or decommissioned.
  • Automated verification tools – such as CI/CD pipelines, unit tests, integration tests, and monitoring systems.
  • A defined set of engineering practices – existing standards for code reviews, documentation, and deployment.
  • Team buy-in – commitment from developers, QA, and operations to experiment with AI-assisted workflows.
  • A small pilot project – ideally non-critical, to test the approach safely.
  • Access to AI tooling – agent frameworks (e.g., LangChain, AutoGPT) or integration with AI coding assistants.

Step-by-Step Process

Step 1: Classify Your Code Using the Two-by-Two Matrix

Begin by evaluating each software component on two axes: code longevity (how long the code will be actively maintained) and automated verification coverage (how thoroughly tests and checks cover the code). Plot each component into one of four quadrants:

  • High longevity, high verification – ideal for unsupervised agents that can autonomously refactor and optimize.
  • High longevity, low verification – requires supervised agents with human oversight to prevent regressions.
  • Low longevity, high verification – can tolerate unsupervised agents since changes are short-lived.
  • Low longevity, low verification – stick with supervised agents or manual processes to avoid introducing bugs in fragile code.

This classification guides the level of autonomy you grant AI agents. For instance, a core library used for years with extensive tests may benefit from full automation, while a one-off script with sparse tests demands careful human intervention.
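
To make the classification concrete, here is a minimal Python sketch that assigns a component to a quadrant from an estimated lifetime and a test-coverage figure. The 12-month and 70% thresholds, the `Component` fields, and the quadrant labels are illustrative assumptions, not fixed rules.

```python
# Minimal sketch of the two-by-two classification, assuming you can estimate
# each component's expected lifetime (months) and verification coverage (0.0-1.0).
# Thresholds below (12 months, 70% coverage) are illustrative, not prescriptive.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    expected_lifetime_months: int   # how long the code will be actively maintained
    verification_coverage: float    # fraction of behavior covered by automated checks

def classify(component: Component,
             longevity_threshold: int = 12,
             coverage_threshold: float = 0.7) -> str:
    """Return the quadrant that drives the agent-autonomy decision."""
    long_lived = component.expected_lifetime_months >= longevity_threshold
    well_verified = component.verification_coverage >= coverage_threshold

    if long_lived and well_verified:
        return "unsupervised-eligible"      # high longevity, high verification
    if long_lived and not well_verified:
        return "supervised-required"        # high longevity, low verification
    if not long_lived and well_verified:
        return "unsupervised-tolerable"     # low longevity, high verification
    return "supervised-or-manual"           # low longevity, low verification

# Example: a core library used for years vs. a one-off script with sparse tests
print(classify(Component("core-auth-lib", 48, 0.85)))   # unsupervised-eligible
print(classify(Component("migration-script", 1, 0.1)))  # supervised-or-manual
```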

Step 2: Choose the Right Agent Mode – Supervised or Unsupervised

Based on the quadrant from Step 1, decide the agentic workflow:

  • Supervised agents: The AI suggests changes but requires human approval before integration. Use when verification is weak or code is critical. Example: AI proposes a refactor; a developer reviews and merges.
  • Unsupervised agents: The AI directly commits changes, relying on automated tests to catch issues. Use only when verification is robust and code is short-lived or well-tested. Example: AI auto-corrects minor style issues in a prototype branch.

Document these decisions for each component to maintain governance.
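
One lightweight way to document those decisions is a small machine-readable record per component. The sketch below assumes a simple JSON file and illustrative field names; any format your team already uses for recording architecture decisions works just as well.

```python
# Hedged sketch of recording the agent-mode decision for each component so the
# governance trail from Step 2 is explicit and reviewable. Field names and the
# output file are illustrative choices, not a standard format.
import json
from datetime import date

AGENT_MODE_BY_QUADRANT = {
    "unsupervised-eligible": "unsupervised",
    "unsupervised-tolerable": "unsupervised",
    "supervised-required": "supervised",
    "supervised-or-manual": "supervised",
}

def record_decision(component_name: str, quadrant: str, rationale: str) -> dict:
    """Build one governance entry: which mode was chosen and why."""
    return {
        "component": component_name,
        "quadrant": quadrant,
        "agent_mode": AGENT_MODE_BY_QUADRANT[quadrant],
        "rationale": rationale,
        "decided_on": date.today().isoformat(),
    }

decisions = [
    record_decision("core-auth-lib", "unsupervised-eligible",
                    "Years of expected life, >85% coverage, stable API."),
    record_decision("migration-script", "supervised-or-manual",
                    "Throwaway script with almost no tests."),
]

with open("agent-mode-decisions.json", "w") as f:
    json.dump(decisions, f, indent=2)
```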

Step 3: Implement the RIPER-5 Framework

RIPER-5 (Research, Innovate, Plan, Execute, Review) brings discipline to AI-assisted delivery. Apply each phase:

  1. Research: Investigate the problem and gather data. Use AI to analyze logs, map dependencies, or surface patterns. Example: Let an agent scan bug reports to identify root causes.
  2. Innovate: Brainstorm solutions with AI. Generate multiple approaches and evaluate them. Keep human oversight to filter out impractical ideas.
  3. Plan: Define a clear, stepwise implementation. Use AI to estimate effort, create task lists, or detect risks. This plan must be reviewed by the team.
  4. Execute: Build and deploy using the chosen agent mode (from Step 2). Monitor automated checks continuously. Agents can assist with coding, but humans should validate critical paths.
  5. Review: After deployment, analyze outcomes. Did the AI meet performance goals? Were there regressions? Feed learnings back into the Research phase to refine future cycles.

Continue iterating over these phases for each feature or fix. The framework ensures that innovation is always balanced with rigorous validation.
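
The skeleton below illustrates one pass through the five phases as plain Python functions. The function bodies are placeholders: in a real setup the AI-assisted steps would call your agent tooling, and the human checkpoints would be review gates rather than code.

```python
# Illustrative skeleton of one RIPER-5 cycle; every body here is a stand-in.
def research(issue: str) -> dict:
    # e.g. have an agent scan logs and bug reports for likely root causes
    return {"issue": issue, "findings": ["duplicate session writes"]}

def innovate(findings: dict) -> list[str]:
    # generate candidate approaches; a human filters out impractical ones
    return ["add idempotency key", "serialize writes per session"]

def plan(option: str) -> list[str]:
    # stepwise tasks; the team reviews this plan before execution
    return [f"design: {option}", "write tests", "implement", "deploy behind flag"]

def execute(tasks: list[str], supervised: bool) -> bool:
    # supervised mode pauses for human approval before each change lands
    for task in tasks:
        print(("awaiting approval: " if supervised else "auto-applying: ") + task)
    return True

def review(succeeded: bool) -> dict:
    # outcomes feed the next Research phase
    return {"regressions": 0 if succeeded else 1, "follow_ups": []}

findings = research("intermittent login failures")
options = innovate(findings)
tasks = plan(options[0])
print(review(execute(tasks, supervised=True)))
```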


Step 4: Establish Guardrails and Feedback Loops

Even with the matrix and RIPER-5, you need safety nets:

  • Automated rollback triggers – if tests fail or error rates spike, immediately revert AI-generated changes.
  • Human-in-the-loop checkpoints – define criteria that require manual approval (e.g., touching security settings, database schemas).
  • Performance baselines – monitor speed, cost, and accuracy of AI agents to detect drift.
  • Communication channels – ensure teams can flag issues easily without disrupting the workflow.

For example, if an unsupervised agent is assigned to a high-longevity, high-verification component, set a policy that any change affecting APIs must involve a human reviewer.
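
As a rough illustration, such a policy might be expressed as a small decision function: roll back automatically on failing tests or an error-rate spike, and escalate API- or schema-touching changes to a human reviewer. The threshold and field names below are assumptions made for the example.

```python
# Sketch of the guardrails above: auto-rollback on failing checks or error spikes,
# human-in-the-loop escalation for API/schema changes. Values are illustrative.
def evaluate_change(change: dict, error_rate: float, tests_passed: bool,
                    error_rate_limit: float = 0.02) -> str:
    if not tests_passed or error_rate > error_rate_limit:
        return "rollback"                 # automated rollback trigger
    if change.get("touches_api") or change.get("touches_schema"):
        return "needs-human-review"       # human-in-the-loop checkpoint
    return "accept"

print(evaluate_change({"touches_api": True}, error_rate=0.001, tests_passed=True))
# -> needs-human-review
print(evaluate_change({"touches_api": False}, error_rate=0.05, tests_passed=True))
# -> rollback
```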

Step 5: Pilot on a Small Project and Iterate

Select a low-risk module (e.g., an internal tool with good test coverage) to run a trial. Follow Steps 1–4, then compare outcomes with historical data. Measure: deployment frequency, bug rates, developer satisfaction. Adjust the matrix thresholds and RIPER-5 cycle speed. Share learnings with the team and expand to more components gradually.
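
A simple way to frame that comparison is baseline versus pilot across the three measures named above. The numbers in this sketch are placeholders; the point is to track the direction of change per metric.

```python
# Placeholder comparison of pilot outcomes against a historical baseline.
baseline = {"deploys_per_week": 3, "bugs_per_release": 4.0, "dev_satisfaction": 3.2}
pilot    = {"deploys_per_week": 5, "bugs_per_release": 3.5, "dev_satisfaction": 3.9}

# For bugs, lower is better; for the other two, higher is better.
higher_is_better = {"deploys_per_week": True, "bugs_per_release": False, "dev_satisfaction": True}

for metric, better_if_higher in higher_is_better.items():
    delta = pilot[metric] - baseline[metric]
    improved = delta > 0 if better_if_higher else delta < 0
    print(f"{metric}: {baseline[metric]} -> {pilot[metric]} "
          f"({'improved' if improved else 'regressed'})")
```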

Tips for Success

  • Start conservative – favor supervised agents initially, even if tests are strong. Build trust with your team before granting full autonomy.
  • Resist one-size-fits-all – the two-by-two matrix shows that different code needs different controls. Don't force a single agent style across the board.
  • Document your decisions – for each component, record the quadrant assignment and agent mode. This transparency prevents confusion and eases onboarding.
  • Combine RIPER-5 with existing agile ceremonies – tie the Review phase to sprint retrospectives to reinforce learning.
  • Monitor AI costs – unsupervised agents may run up bills if they iterate excessively. Set budget limits and timeboxes.
  • Celebrate small wins – when an unsupervised agent saves time or catches a bug, share the story. Positive reinforcement drives adoption.
  • Iterate on the matrix – as your verification coverage improves, you can move components from supervised to unsupervised zones.