Approvals & Feedback
Human verification gates with email-based approvals and structured AI-guidance feedback.
Approvals and Feedback are the two mechanisms that keep your team in control of the AI pipeline. Approvals are gatekeeping checkpoints that require human sign-off before work proceeds. Feedback is structured guidance that gets injected directly into AI prompts to steer re-runs.
Together, they form the core of Edsger's "AI builds, your team controls" philosophy.
Part 1: Approvals
Why Approvals?
Without approval gates, AI would run through every phase autonomously. That might be fine for some phases, but for critical decisions — architecture choices, requirement definitions, test coverage — your team needs to review and sign off.
Approvals let you:
- Gate specific phases: Choose exactly which phases require human review
- Assign reviewers: Designate who can approve each phase
- Get notified: Receive emails with one-click approve/reject links
- Maintain audit trail: Every approval decision is logged with reviewer, timestamp, and feedback
Supported Phases
You can configure approval gates for these phases:
| Phase | What's Being Reviewed |
|---|---|
| Feature Analysis | AI's understanding of requirements and scope |
| User Stories Analysis | Generated user stories and acceptance criteria |
| Test Cases Analysis | Generated test cases and coverage |
| Technical Design | Architecture, technology choices, implementation plan |
| Branch Planning | Git branch strategy for implementation |
| Functional Testing | Test execution results and coverage |
Configuring Approvals
- Navigate to your product → Settings or Approvals tab
- Click Add next to a phase
- Assign team members as approvers
Key rules:
- If a config exists for a phase, approval is required — no config means auto-advance
- If a config exists but the assignees list is empty, the feature is blocked — nobody can approve
- Assignees are snapshotted when an approval is created — later config changes don't affect pending approvals
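The key rules above can be sketched as a simple gate check. This is an illustrative sketch, not Edsger's actual code: `ApprovalConfig` and the outcome strings (`"auto_advance"`, `"blocked"`, `"pending_approval"`) are assumed names.

```python
from dataclasses import dataclass, field

@dataclass
class ApprovalConfig:
    phase: str
    assignees: list[str] = field(default_factory=list)

def check_gate(phase: str, configs: dict[str, ApprovalConfig]) -> str:
    config = configs.get(phase)
    if config is None:
        return "auto_advance"      # no config: feature advances automatically
    if not config.assignees:
        return "blocked"           # config with an empty assignee list blocks the feature
    return "pending_approval"      # snapshot assignees, email them, pause the pipeline
```

Note that the snapshot rule means `check_gate` only runs once per phase completion; editing the config afterwards has no effect on approvals already created.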
Approval Flow
AI completes phase
→ System checks: does this status require approval?
→ No: feature advances automatically
→ Yes: creates approval request
→ Email sent to all assignees
→ Feature pauses at verification gate
→ Reviewer opens email link or visits platform
→ Approve: feature moves to ready_for_ai, pipeline continues
→ Reject: feature stays at current status, reviewer provides feedback
Email Notifications
When an approval is created, all assignees receive an HTML email containing:
- Product and feature name
- Feature description (rendered from Markdown)
- Current status and requested next status
- A secure link to the review page
- Link expiry date (7 days from creation)
The email link goes to /approvals/<token> — a dedicated review page that works without navigating through the product UI.
Review Page
The approval review page shows context based on the phase being reviewed:
| Phase | Context Shown |
|---|---|
| Feature Analysis | User stories and test cases generated |
| Technical Design | Full technical design document |
| Branch Planning | Planned feature branches |
| Functional Testing | Last 5 test reports |
Reviewers can:
- Approve: Feature advances to `ready_for_ai` and the pipeline continues
- Reject: Opens a dialog for the reviewer to explain why, with the reason saved as feedback
Both actions are logged to the feature's audit trail with the reviewer's identity and any feedback provided.
Security
- Approval tokens are unique 32-byte hex strings with 7-day expiry
- Only users in the assignees list can approve (enforced by the `user_can_approve` RPC)
- The `review_approval` RPC was deliberately removed: approvals can only happen through the authenticated web UI, never through MCP or API, ensuring a real human is always in the loop
- Expired tokens show an error page; a new approval must be created
Part 2: Feedback
Why Feedback?
When you review AI work at a verification gate, simply rejecting it doesn't help the AI improve. The AI needs to know exactly what to change and why.
Edsger's feedback system provides structured, targeted guidance that is:
- Injected directly into AI prompts: When a phase re-runs, all active feedback for that phase is automatically included in the AI's context
- Typed and prioritized: Different feedback types (requirement, constraint, suggestion, etc.) are grouped and formatted for the AI
- Targeted precisely: Point to specific lines, ranges, or entire phases
- Persistent across re-runs: Feedback stays active until explicitly resolved, guiding multiple iterations
- Scoped flexibly: Apply to a single feature or all features in a product
Feedback Types
| Type | When to Use |
|---|---|
| Requirement | Additional requirements that must be met: "Must support OAuth 2.0 PKCE flow" |
| Constraint | Hard constraints to respect: "Cannot use any npm packages with GPL license" |
| Preference | Preferred approaches: "Prefer composition over inheritance" |
| Context | Background information: "This API will be consumed by mobile clients with poor connectivity" |
| Quality Criteria | Standards to meet: "All public APIs must have JSDoc comments" |
| Suggestion | Improvement ideas: "Consider using a connection pool for database queries" |
| Question | Clarification needed: "Should the retry logic use exponential backoff?" |
| Issue | Problems found: "The error handling doesn't account for network timeouts" |
Target Types
Phase-Level
General guidance for an entire phase. No specific document or line reference.
"The technical design should use PostgreSQL instead of MongoDB for the primary database."
Line-Level
Attached to a specific line in a document (e.g., technical design). Shows the exact line content as context.
Line 47: "Consider using a write-ahead log here instead of synchronous writes."
Range-Level
Attached to a span of lines. Useful for commenting on a section or block.
Lines 23-45: "This entire authentication section needs to account for the SSO integration."
Scope: Feature vs Product
Feature-level feedback applies only to the specific feature. Use this for feature-specific guidance.
Product-level feedback applies to all features in the product for the specified phase. Use this for team-wide standards.
When a phase runs, both feature-level and product-level feedbacks are merged and injected into the AI prompt together.
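The merge step above can be sketched as follows. The dict fields (`phase`, `active`, `resolved`, `priority`) are illustrative assumptions; only active, unresolved feedbacks for the phase survive the merge.

```python
def merge_feedbacks(feature_fbs: list[dict], product_fbs: list[dict], phase: str) -> list[dict]:
    """Combine feature-level and product-level feedbacks for one phase."""
    combined = [
        fb for fb in feature_fbs + product_fbs
        if fb["phase"] == phase and fb["active"] and not fb["resolved"]
    ]
    # higher-priority feedback first, so critical guidance leads the prompt
    return sorted(combined, key=lambda fb: -fb["priority"])
```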
Priority
Feedback has a priority from 1 to 10:
| Range | Label | Meaning |
|---|---|---|
| 9-10 | Critical | Must be addressed immediately |
| 7-8 | High | Important, address soon |
| 5-6 | Medium | Should be addressed (default) |
| 3-4 | Low | Nice to have |
| 1-2 | Minimal | Minor suggestion |
Higher priority feedback appears first in the AI context, ensuring the most important guidance is seen even if context is truncated.
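The priority-to-label mapping in the table reduces to a small lookup; a minimal sketch:

```python
def priority_label(priority: int) -> str:
    """Map a 1-10 feedback priority to its label, per the ranges above."""
    if not 1 <= priority <= 10:
        raise ValueError("priority must be between 1 and 10")
    return {
        10: "Critical", 9: "Critical",
        8: "High", 7: "High",
        6: "Medium", 5: "Medium",
        4: "Low", 3: "Low",
    }.get(priority, "Minimal")  # 1-2 fall through to Minimal
```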
Creating Feedback
From the Feedbacks Manager
- Navigate to a feature → Feedbacks tab (or product-level feedbacks)
- Select the target phase
- Click Add Feedback
- Fill in: scope (feature/product), type, title, content (Markdown), priority
- Save
From the Technical Design View
The technical design document supports inline feedback:
- Open a feature's Technical Design tab
- Hover over any line; a `+` button appears
- Click to add line-level feedback directly at that position
- The line's content is automatically captured as context
Lines with feedback are highlighted in blue, with the feedback thread displayed inline below.
From User Stories and Test Cases
- In the Feedbacks Manager, use the User Stories or Test Cases tab
- Expand any item to see its existing feedbacks
- Click Add Feedback to add item-specific guidance (auto-scoped to the relevant analysis phase)
How Feedback Reaches the AI
When a phase re-runs, the pipeline:
- Fetches all active, unresolved feedbacks for that phase (including product-level)
- Groups them by feedback type
- Formats them into structured Markdown sections:
  - `## Additional Requirements` (requirement type)
  - `## Constraints` (constraint type)
  - `## Issues to Address` (issue type)
  - etc.
- Each feedback shows: title, priority badge, content, and targeting details (line number, range, context snippet)
- Appends a compliance footer: "These feedbacks take precedence over default behavior"
- Injects the entire block into the AI prompt
For phases with existing work (e.g., a technical design that needs updates), the AI is instructed to only address the feedback points without rewriting unchanged sections.
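The formatting step can be sketched as below. The section titles follow the list above; the dict fields and function name are illustrative assumptions, and real feedbacks would also carry targeting details (line number, range, context snippet) omitted here for brevity.

```python
SECTION_TITLES = {
    "requirement": "Additional Requirements",
    "constraint": "Constraints",
    "issue": "Issues to Address",
    # ...one section per feedback type
}

def format_feedback_block(feedbacks: list[dict]) -> str:
    """Group feedbacks by type, order by priority, and append the compliance footer."""
    lines = []
    for ftype, title in SECTION_TITLES.items():
        group = sorted(
            (fb for fb in feedbacks if fb["type"] == ftype),
            key=lambda fb: -fb["priority"],  # highest priority first within each section
        )
        if not group:
            continue
        lines.append(f"## {title}")
        for fb in group:
            lines.append(f"- [P{fb['priority']}] {fb['title']}: {fb['content']}")
    lines.append("These feedbacks take precedence over default behavior.")
    return "\n".join(lines)
```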
Managing Feedback
- Toggle active: Temporarily disable a feedback without deleting it (inactive feedbacks are excluded from AI context)
- Resolve: Mark a feedback as resolved when the AI has addressed it (resolved feedbacks are excluded from future runs)
- Edit: Update the content, priority, or type at any time
- Delete: Permanently remove the feedback
After a successful verification, the pipeline can bulk-resolve all feedbacks for that phase.
Audit Trail
Every feedback creation, update, and deletion on feature-level feedbacks is automatically logged to the feature's audit trail via a database trigger.
Best Practices
Approvals
- Start selective: Configure approvals for the phases that matter most to your team (e.g., Technical Design, Feature Analysis)
- Keep assignee lists small: 1-3 reviewers per phase prevents bottlenecks
- Act on emails promptly: Tokens expire after 7 days — stale approvals block the pipeline
- Use rejection feedback: When rejecting, provide specific feedback so the AI knows what to fix on re-run
Feedback
- Be specific: "Use PostgreSQL" is better than "Consider a different database"
- Use the right type: Requirements and constraints are treated as mandatory by the AI; suggestions and preferences are treated as guidance
- Target precisely: Line-level feedback on the technical design is more actionable than phase-level "the design needs work"
- Use product-level for standards: "All APIs must return proper error codes" as product-level feedback applies to every feature automatically
- Resolve when addressed: Keep the feedback list clean so the AI doesn't re-address already-fixed issues