
Methodology

Aircury's OpenSpec Extended workflow — how the cycle works from idea to production code, and why we work this way.


Aircury uses an extended version of OpenSpec. Our extension adds architectural criteria, a more complete testing strategy, and the idea that specifications should become executable tests.

Why Spec-Driven Development

AI coding assistants are not deterministic by nature. Give the same vague prompt on two different days and you may get structurally different codebases. That’s a problem when you’re building something a team has to maintain, extend, and evolve.

Spec-Driven Development addresses this by shifting the contract upstream:

Agree on what to build before any code is written.

When the agent has a well-defined specification, its solution space narrows significantly. Instead of guessing intent, it fulfils a contract. The result is code that’s more predictable, more reviewable, and more correct.

OpenSpec in one line

OpenSpec is a lightweight layer of structured artifacts (proposal, specs, design, tasks) that sits between your idea and the agent’s implementation — establishing an agreement before a single line of code is written.

The OpenSpec Extended Lifecycle

Our workflow has five named phases, plus a validation gate before archiving. The first three are spec work, the last two are implementation work:

propose → spec → design → implement → validate → archive

Propose — opsx:propose "your idea"

Creates the change proposal. This is a short document (1-2 pages) answering:

  • Why is this change needed? What problem does it solve?
  • What will change? What new capabilities are being added?
  • Impact — what code, APIs, and systems are affected?

The proposal is the agreement between you and the agent about the purpose of the change. Everything downstream builds on this.
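As an illustration, a minimal proposal for the password-reset example used throughout this page might look like the sketch below. The headings follow the three questions above; the exact wording and section names are illustrative, not a prescribed template.

```markdown
## Why
Users currently have to contact support to regain access to locked
accounts, which adds hours of delay and avoidable support load.

## What Changes
Add a self-service password reset capability: a reset-request form,
a time-limited emailed reset link, and a page to set a new password.

## Impact
- Affected specs: password-reset (new capability)
- Affected code: auth service, email sender, user-facing reset pages
```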

Spec — Write specs/<capability>/spec.md

For each capability identified in the proposal, write a specification. Specs define what the system must do — not how. They use a structured format:

### Requirement: User can reset password
The system SHALL send a reset email within 60 seconds.

#### Scenario: Successful reset request
- **WHEN** user submits email on reset form
- **THEN** system sends password reset link to that email
- **AND** link expires after 24 hours

The SHALL/MUST language and the WHEN/THEN scenario format are intentional — see Specs as Executable Tests below.

Design — Write design.md

The design document explains how the system will be built. Key decisions, architectural patterns, trade-offs, file structure. This is where the architecture rules, layer boundaries, and SOLID constraints get defined for the specific change.
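A sketch of what such a design.md might contain for the password-reset example (illustrative structure and file names, not a prescribed template):

```markdown
## Decisions
- Hexagonal architecture: reset logic lives in the domain layer;
  email delivery sits behind an outbound port.

## Trade-offs
- Store reset tokens hashed (safer) vs. plain (simpler debugging):
  hashed, since tokens grant account access.

## File structure
- src/domain/password_reset/   # use case + reset-token rules
- src/adapters/email/          # SMTP adapter behind an EmailSender port
```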

Implement — opsx:apply

The agent reads all the artifacts (proposal → specs → design → tasks) and implements the change. With well-written artifacts, the implementation is focused, structured, and reviewable. The agent operates within the constraints you’ve defined.

Archive — opsx:archive

Once the implementation is validated (tests passing, code reviewed), archive the change. Specs get promoted to the core openspec/specs/ directory, becoming the living documentation of the system’s capabilities.
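Assuming a change named add-password-reset (the name is illustrative), the promotion looks roughly like this:

```
openspec/
  changes/
    add-password-reset/        # active change, removed on archive
      proposal.md
      design.md
      tasks.md
      specs/
        password-reset/spec.md # spec delta for this change
  specs/
    password-reset/spec.md     # promoted here when archived
```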

Specs as executable tests

One of the more interesting ideas in this framework extension: OpenSpec specifications and Gherkin/BDD tests describe the same thing — just in two different forms.

An OpenSpec scenario:

#### Scenario: Successful password reset
- **WHEN** user submits email on reset form
- **THEN** system sends reset link to that email
- **AND** link expires after 24 hours

The corresponding Gherkin feature file:

Feature: Password Reset

  Scenario: Successful password reset
    When the user submits their email on the reset form
    Then the system sends a password reset link to that email
    And the link expires after 24 hours

They describe the same behaviour. The spec is human-readable alignment. The Gherkin is machine-executable validation. Write the spec well and the test is mostly already there.
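The transformation is mechanical enough to sketch in a few lines of Python. The helper below is illustrative (it is not part of OpenSpec or our tooling), but it shows how directly a well-formed scenario maps onto Gherkin:

```python
import re

def spec_to_gherkin(spec: str) -> str:
    """Convert an OpenSpec scenario block into Gherkin scenario text.

    Hypothetical helper for illustration: maps the '#### Scenario:'
    heading to 'Scenario:' and each '- **WHEN/THEN/AND**' bullet to
    the corresponding Gherkin step keyword.
    """
    lines = []
    for raw in spec.strip().splitlines():
        raw = raw.strip()
        if raw.startswith("#### Scenario:"):
            lines.append("Scenario: " + raw.removeprefix("#### Scenario:").strip())
        else:
            m = re.match(r"-\s*\*\*(WHEN|THEN|AND)\*\*\s*(.*)", raw)
            if m:
                keyword, text = m.groups()
                lines.append(f"  {keyword.capitalize()} {text}")
    return "\n".join(lines)

spec = """\
#### Scenario: Successful password reset
- **WHEN** user submits email on reset form
- **THEN** system sends reset link to that email
- **AND** link expires after 24 hours
"""

print(spec_to_gherkin(spec))
```

Running this prints the Gherkin scenario body from the example above, which is the point: the spec already contains the test; only the surface syntax changes.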

The dual-purpose artifact

Every well-written OpenSpec scenario is a potential Gherkin scenario. When you invest time writing clear, testable specs, you’re not just documenting requirements — you’re defining the acceptance tests that will validate your implementation.

This connects directly to the testing strategy: Unit tests validate internal logic, Integration tests validate system boundaries, and BDD/Behavioural tests (from specs) validate system behaviour from the outside. Each layer plays a distinct role.
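As a toy illustration of the layering (all names are hypothetical), the same 24-hour expiry rule can surface at each level: a unit test exercises the rule in isolation, while the BDD scenario above validates it from the outside through the reset flow.

```python
from datetime import datetime, timedelta

# Hypothetical domain rule: a reset link is valid for 24 hours.
RESET_LINK_TTL = timedelta(hours=24)

def link_is_valid(created_at: datetime, now: datetime) -> bool:
    """Unit-testable internal logic: expiry check for a reset link."""
    return now - created_at < RESET_LINK_TTL

# Unit-level check: validates the rule directly, no system involved.
created = datetime(2024, 1, 1, 12, 0)
assert link_is_valid(created, created + timedelta(hours=23))
assert not link_is_valid(created, created + timedelta(hours=25))
```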

The rebuildable codebase

There’s an interesting consequence of working this way: a codebase governed by specs + tests + rules could be rebuilt from scratch with very high fidelity.

  • Specs encode WHAT the system must do
  • Tests validate that the implementation fulfils the specs
  • Architecture rules encode HOW the implementation must be structured

If you lose the implementation, you still have the contract. Give the agent the specs, tests, and rules, and it will rebuild something functionally equivalent.

Specs (what)  +  Tests (validation)  +  Rules (how)
        ↓
Codebase independent of its specific implementation
        ↓
Freedom to refactor, replace, or rebuild AI-generated code

The practical implication

You’re not locked into what the agent produced. You’re locked into the contract. The implementation is just the current state of that contract — and it can change.

Quality guardrails

OpenSpec alone gets you spec-compliant code. Our extension adds architecture-compliant code by injecting rules at each phase:

| Guardrail | Applied where | Effect |
| --- | --- | --- |
| Architecture rules (Hexagonal, DDD, SOLID) | Design phase | Agent generates code with correct structure and boundaries |
| Test-first thinking | Spec phase | Specs are written in a testable way |
| Conventional Commits | Implementation | Agent uses feat:, fix:, refactor: prefixes |

Each guardrail is documented in its own page. Together, they steer the agent from fast-but-unconstrained code generator to a contributor that follows team conventions.
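For the Conventional Commits guardrail, the history for the password-reset example might read like this (the scopes and messages are illustrative):

```
feat(auth): add password reset request endpoint
feat(email): send reset link with 24h expiry
refactor(auth): extract reset-token generation into domain service
```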

The framework in practice

A typical feature implementation at Aircury looks like this:

  1. Engineer writes the proposal (5-10 min) — a description of the business need and capability boundaries, with as many constraints as possible, aiming to preempt edge cases up front
  2. Agent generates specs from the proposal
  3. Agent generates design from specs
  4. Agent generates tasks from design
  5. Engineer reviews proposal, design, and tasks — making changes as needed, or reverting and starting again with an expanded proposal
  6. Agent implements the tasks — following the architecture rules defined in the design
  7. Engineer reviews the code — using the quality guardrails checklist
  8. Agent or engineer writes BDD tests — from the existing specs
  9. All tests pass
  10. Agent archives the change
  11. A commit is created with the incremental change

The engineer’s role shifts here: from writing code to defining contracts and reviewing whether the agent actually honours them.