Test Cases
Create and manage test cases for quality assurance.
Test cases define how to verify that your features work correctly. They provide a structured approach to quality assurance and are a key part of the AI-driven development lifecycle.
Creating Test Cases
Manually
- Navigate to a product
- Select the "Test Cases" tab
- Click "Add Test Case"
- Define the test case:
  - Title: Descriptive name
  - Description: What the test verifies
  - Steps: Detailed test procedure
  - Expected Result: What should happen
  - Feature: Link to a feature
  - Critical: Mark as critical for high-priority verification
Via AI
During the Test Cases Analysis phase, AI automatically generates test cases based on feature requirements and user stories. Generated cases are created while the feature is in the `test_cases_analysis` status and reviewed at the `test_cases_analysis_verification` gate.
Via MCP
AI agents can create test cases programmatically:
- `test_cases/create` — Create new test cases
- `test_cases/update` — Update existing test cases
- `test_cases/update_status` — Change test case status
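Assuming the standard MCP `tools/call` JSON-RPC envelope, a create request might look like the sketch below. The argument names (`title`, `steps`, `is_critical`, and so on) are illustrative assumptions, not a confirmed schema:

```python
import json

def build_tool_call(tool_name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request envelope for an MCP tool call."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical arguments -- the real tool schema may differ.
request = build_tool_call("test_cases/create", {
    "title": "Login succeeds with valid credentials",
    "description": "Verifies the happy-path login flow",
    "steps": ["Open the login page", "Enter valid credentials", "Click 'Sign in'"],
    "expected_result": "User lands on the dashboard",
    "is_critical": True,
})
print(json.dumps(request, indent=2))
```

The same envelope shape would apply to `test_cases/update` and `test_cases/update_status`, with different argument payloads.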
Test Case Status
Test cases track their own lifecycle:
| Status | Description |
|---|---|
| draft | Initial state, being defined |
| pending_approval | Awaiting review |
| approved | Approved and ready for use |
| in_progress | Currently being executed |
| done | Completed |
| cancelled | No longer needed |
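The lifecycle above can be enforced with a simple transition check. The allowed transitions here are an assumption inferred from the status table; the product may permit others:

```python
# Hypothetical transition map inferred from the status table above.
TRANSITIONS = {
    "draft": {"pending_approval", "cancelled"},
    "pending_approval": {"approved", "draft", "cancelled"},
    "approved": {"in_progress", "cancelled"},
    "in_progress": {"done", "cancelled"},
    "done": set(),        # terminal
    "cancelled": set(),   # terminal
}

def can_transition(current: str, target: str) -> bool:
    """Return True if moving a test case from `current` to `target` is allowed."""
    return target in TRANSITIONS.get(current, set())

print(can_transition("draft", "pending_approval"))  # True
print(can_transition("done", "in_progress"))        # False
```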
Critical Test Cases
Mark test cases as critical (`is_critical: true`) to indicate they are high-priority and must pass before a feature can ship. Critical test cases are highlighted in the UI and prioritized during testing phases.
Test Case Structure
A well-structured test case includes:
Preconditions
What must be true before the test starts:
- User is logged in
- Test data is available
- Feature flag is enabled
Steps
Numbered steps to execute:
1. Navigate to the feature page
2. Enter test data
3. Click the "Submit" button
4. Click "Confirm"
Expected Results
What should happen after each step:
- Form validates input
- Success message appears
- Data is saved correctly
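The structure above can be modeled as a small record type. This is an illustrative sketch; the field names are assumptions, not the product's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Illustrative shape of a well-structured test case (field names assumed)."""
    title: str
    preconditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    expected_results: list[str] = field(default_factory=list)
    is_critical: bool = False

case = TestCase(
    title="Submit form with valid data",
    preconditions=["User is logged in", "Test data is available"],
    steps=["Navigate to the feature page", "Enter test data", "Click 'Confirm'"],
    expected_results=["Form validates input", "Success message appears"],
    is_critical=True,
)
```

Keeping steps and expected results as parallel lists makes it easy to pair each action with its verification during execution.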
Test Reports
Track test execution with test reports. When a report is created, it automatically populates results for every test case in the feature.
Report Status
| Status | Description |
|---|---|
| draft | Report is being prepared |
| submitted | Report is ready for review |
| approved | Report has been approved |
| rejected | Report needs revision |
Test Results
Each test case within a report can be marked as:
| Result | Description |
|---|---|
| passed | Test passed successfully |
| failed | Test failed |
| skipped | Test was skipped |
| pending | Not yet executed |
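Combining the result vocabulary with the critical flag, a report summary could tally outcomes and surface shipping-blocking failures. This is a sketch under assumed field names:

```python
from collections import Counter

def summarize(results: list[dict]) -> dict:
    """Tally result states and flag failed critical test cases as blocking."""
    counts = Counter(r["result"] for r in results)
    blocking = [r["title"] for r in results
                if r["result"] == "failed" and r.get("is_critical")]
    return {"counts": dict(counts), "blocking_failures": blocking}

results = [
    {"title": "Login", "result": "passed", "is_critical": True},
    {"title": "Password reset", "result": "failed", "is_critical": True},
    {"title": "Avatar upload", "result": "skipped", "is_critical": False},
]
summary = summarize(results)
print(summary["blocking_failures"])  # ['Password reset']
```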
Creating Test Reports
- Navigate to a feature
- Click "New Test Report"
- Results are auto-populated for all test cases
- Execute tests and update each result
- Add notes for failures
- Submit the report for review
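The auto-population in step 3 can be sketched as follows: every test case in the feature starts as `pending` in a `draft` report. The function and field names here are assumptions for illustration:

```python
def create_draft_report(feature_test_cases: list[str]) -> dict:
    """Sketch of auto-population: each test case starts as 'pending' in a draft report."""
    return {
        "status": "draft",
        "results": [
            {"test_case": title, "result": "pending", "notes": ""}
            for title in feature_test_cases
        ],
    }

report = create_draft_report(["Login flow", "Logout flow", "Session timeout"])
```

As tests are executed, each entry's `result` is updated and notes are added for failures before the report is submitted.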
Automation Integration
Test cases can be linked to automated tests:
- AI runs functional tests during the `functional_testing` phase
- Results are reported back via the MCP API (`test_reports/create`, `test_report_results/create`)
- View automation status in the dashboard
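A CI integration would map each automated outcome onto the report result vocabulary before calling `test_report_results/create`. The outcome labels and argument names below are hypothetical:

```python
def to_result_call(report_id: str, test_case_id: str, ci_outcome: str) -> dict:
    """Translate a CI runner outcome into a test_report_results/create payload (assumed schema)."""
    mapping = {"ok": "passed", "error": "failed", "not_run": "skipped"}
    return {
        "name": "test_report_results/create",
        "arguments": {
            "report_id": report_id,
            "test_case_id": test_case_id,
            # Fall back to 'pending' for outcomes we don't recognize.
            "result": mapping.get(ci_outcome, "pending"),
        },
    }

call = to_result_call("rep-1", "tc-1", "ok")
```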
Best Practices
- Write test cases before implementation (during the analysis phase)
- Mark high-priority tests as critical
- Keep steps clear and reproducible
- Include both positive and negative test scenarios
- Update test cases when features change
- Review AI-generated test cases during verification gates