Edsger Docs

Test Cases

Create and manage test cases for quality assurance.

Test cases define how to verify that your features work correctly. They provide a structured approach to quality assurance and are a key part of the AI-driven development lifecycle.

Creating Test Cases

Manually

  1. Navigate to a product
  2. Select the "Test Cases" tab
  3. Click "Add Test Case"
  4. Define the test case:
    • Title: Descriptive name
    • Description: What the test verifies
    • Steps: Detailed test procedure
    • Expected Result: What should happen
    • Feature: Link to a feature
    • Critical: Mark as critical for high-priority verification
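The fields above map naturally onto a small record type. As a minimal sketch (the field names mirror the form fields, but Edsger's actual schema may differ):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestCase:
    # Illustrative field names; Edsger's real schema may differ.
    title: str                        # descriptive name
    description: str                  # what the test verifies
    steps: list[str] = field(default_factory=list)  # detailed test procedure
    expected_result: str = ""         # what should happen
    feature_id: Optional[str] = None  # link to a feature
    is_critical: bool = False         # high-priority verification flag

tc = TestCase(
    title="Login with valid credentials",
    description="Verify that a registered user can sign in",
    steps=["Open the login page", "Enter valid credentials", "Click 'Sign in'"],
    expected_result="User lands on the dashboard",
    is_critical=True,
)
```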

Via AI

During the Test Cases Analysis phase, AI automatically generates test cases based on feature requirements and user stories. These are created during the test_cases_analysis status and reviewed at the test_cases_analysis_verification gate.

Via MCP

AI agents can create test cases programmatically:

  • test_cases/create — Create new test cases
  • test_cases/update — Update existing test cases
  • test_cases/update_status — Change test case status
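An agent invokes these as MCP tool calls. The tool names come from the list above; the argument fields and the `call_tool` stand-in are assumptions for illustration (a real agent would send the request over its MCP transport):

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stand-in for an MCP client's tool call; here we just echo the request."""
    return {"tool": name, "arguments": arguments}

# Hypothetical payloads; the exact argument names are assumptions.
created = call_tool("test_cases/create", {
    "title": "Export report as CSV",
    "description": "Verify CSV export matches on-screen data",
    "is_critical": False,
})
updated = call_tool("test_cases/update_status", {
    "test_case_id": "tc-123",
    "status": "approved",
})
```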

Test Case Status

Test cases track their own lifecycle:

  • draft — Initial state, being defined
  • pending_approval — Awaiting review
  • approved — Approved and ready for use
  • in_progress — Currently being executed
  • done — Completed
  • cancelled — No longer needed

Critical Test Cases

Mark test cases as critical (is_critical: true) to indicate they are high-priority and must pass before a feature can ship. Critical test cases are highlighted in the UI and prioritized during testing phases.
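The shipping gate this implies can be sketched as a simple check: critical cases block the release, non-critical failures do not. The record shape here is an assumption for illustration:

```python
results = [
    {"test_case": "Checkout happy path", "is_critical": True,  "result": "passed"},
    {"test_case": "Tooltip copy",        "is_critical": False, "result": "failed"},
    {"test_case": "Payment declined",    "is_critical": True,  "result": "passed"},
]

# Critical cases gate the release; non-critical failures are flagged but not blocking.
critical_failures = [r for r in results
                     if r["is_critical"] and r["result"] != "passed"]
can_ship = not critical_failures  # True here: both critical cases passed
```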

Test Case Structure

A well-structured test case includes:

Preconditions

What must be true before the test starts:

  • User is logged in
  • Test data is available
  • Feature flag is enabled

Steps

Numbered steps to execute:

  1. Navigate to the feature page
  2. Click the "Submit" button
  3. Enter test data
  4. Click "Confirm"

Expected Results

What should happen after each step:

  • Form validates input
  • Success message appears
  • Data is saved correctly
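A quick way to enforce this structure is a completeness check: a test case should define all three sections before review. This is a convention sketch, not an Edsger API:

```python
def is_well_structured(tc: dict) -> bool:
    """Return True if the test case defines preconditions, steps,
    and expected results (all non-empty)."""
    required = ("preconditions", "steps", "expected_results")
    return all(tc.get(section) for section in required)

tc = {
    "preconditions": ["User is logged in"],
    "steps": ["Navigate to the feature page", "Click 'Submit'"],
    "expected_results": ["Success message appears"],
}
```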

Test Reports

Track test execution with test reports. Creating a report automatically populates a result entry for every test case in the feature.

Report Status

  • draft — Report is being prepared
  • submitted — Report is ready for review
  • approved — Report has been approved
  • rejected — Report needs revision

Test Results

Each test case within a report can be marked as:

  • passed — Test passed successfully
  • failed — Test failed
  • skipped — Test was skipped
  • pending — Not yet executed
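Rolling these results up into a per-report summary is a simple tally. A minimal sketch (the helper is illustrative, not Edsger's API):

```python
from collections import Counter

def summarize(results: list[str]) -> dict:
    """Tally per-result counts for a report."""
    counts = Counter(results)
    return {r: counts.get(r, 0) for r in ("passed", "failed", "skipped", "pending")}

summary = summarize(["passed", "passed", "failed", "pending"])
# summary == {"passed": 2, "failed": 1, "skipped": 0, "pending": 1}
```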

Creating Test Reports

  1. Navigate to a feature
  2. Click "New Test Report"
  3. Results are auto-populated for all test cases
  4. Execute tests and update each result
  5. Add notes for failures
  6. Submit the report for review
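The auto-population in step 3 behaves roughly like this: a new draft report gets a pending result for each test case in the feature. This is a behavioral sketch, not Edsger's implementation:

```python
def new_report(feature_test_cases: list[str]) -> dict:
    """Mimic report creation: every test case in the feature starts
    as a 'pending' result in a 'draft' report."""
    return {
        "status": "draft",
        "results": {tc: "pending" for tc in feature_test_cases},
    }

report = new_report(["Login", "Logout", "Password reset"])
```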

Automation Integration

Test cases can be linked to automated tests:

  • AI runs functional tests during the functional_testing phase
  • Results are reported back via the MCP API (test_reports/create, test_report_results/create)
  • View automation status in the dashboard
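The automation loop amounts to: run each linked test, map its outcome to a result value, and send each payload to `test_report_results/create`. The tool names come from the list above; the payload fields and `run_automated` helper are assumptions for illustration:

```python
def run_automated(tests: dict) -> list[dict]:
    """Run linked automated tests and build result payloads that would be
    sent to test_report_results/create (payload fields are assumptions)."""
    payloads = []
    for case_id, test_fn in tests.items():
        try:
            test_fn()
            outcome = "passed"
        except AssertionError:
            outcome = "failed"
        payloads.append({"test_case_id": case_id, "result": outcome})
    return payloads

def passing_test():
    assert 2 + 2 == 4

def failing_test():
    assert "a" == "b"

payloads = run_automated({"tc-1": passing_test, "tc-2": failing_test})
```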

Best Practices

  • Write test cases before implementation (during the analysis phase)
  • Mark high-priority tests as critical
  • Keep steps clear and reproducible
  • Include both positive and negative test scenarios
  • Update test cases when features change
  • Review AI-generated test cases during verification gates