Edsger Docs

Approvals & Feedback

Human verification gates with email-based approvals and structured AI-guidance feedback.


Approvals and Feedback are the two mechanisms that keep your team in control of the AI pipeline. Approvals are gatekeeping checkpoints that require human sign-off before work proceeds. Feedback is structured guidance that gets injected directly into AI prompts to steer re-runs.

Together, they form the core of Edsger's "AI builds, your team controls" philosophy.

Part 1: Approvals

Why Approvals?

Without approval gates, AI would run through every phase autonomously. That might be fine for some phases, but for critical decisions — architecture choices, requirement definitions, test coverage — your team needs to review and sign off.

Approvals let you:

  • Gate specific phases: Choose exactly which phases require human review
  • Assign reviewers: Designate who can approve each phase
  • Get notified: Receive emails with one-click approve/reject links
  • Maintain audit trail: Every approval decision is logged with reviewer, timestamp, and feedback

Supported Phases

You can configure approval gates for these phases:

| Phase | What's Being Reviewed |
| --- | --- |
| Feature Analysis | AI's understanding of requirements and scope |
| User Stories Analysis | Generated user stories and acceptance criteria |
| Test Cases Analysis | Generated test cases and coverage |
| Technical Design | Architecture, technology choices, implementation plan |
| Branch Planning | Git branch strategy for implementation |
| Functional Testing | Test execution results and coverage |

Configuring Approvals

  1. Navigate to your product → Settings or Approvals tab
  2. Click Add next to a phase
  3. Assign team members as approvers

Key rules:

  • If a config exists for a phase, approval is required — no config means auto-advance
  • If a config exists but the assignees list is empty, the feature is blocked — nobody can approve
  • Assignees are snapshotted when an approval is created — later config changes don't affect pending approvals
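The three rules above can be sketched as a single decision function (a minimal illustration; `ApprovalConfig` and `gateDecision` are assumed names, not Edsger's actual API):

```typescript
// Illustrative sketch of the gating rules; names are assumptions,
// not Edsger's actual API.
interface ApprovalConfig {
  phase: string;
  assignees: string[]; // snapshotted onto each approval when it is created
}

type GateDecision = "auto_advance" | "requires_approval" | "blocked";

function gateDecision(config: ApprovalConfig | undefined): GateDecision {
  if (config === undefined) return "auto_advance"; // no config: auto-advance
  if (config.assignees.length === 0) return "blocked"; // config but nobody can approve
  return "requires_approval";
}
```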

Approval Flow

AI completes phase
  → System checks: does this status require approval?
    → No: feature advances automatically
    → Yes: creates approval request
      → Email sent to all assignees
      → Feature pauses at verification gate
      → Reviewer opens email link or visits platform
        → Approve: feature moves to ready_for_ai, pipeline continues
        → Reject: feature stays at current status, reviewer provides feedback

Email Notifications

When an approval is created, all assignees receive an HTML email containing:

  • Product and feature name
  • Feature description (rendered from Markdown)
  • Current status and requested next status
  • A secure link to the review page
  • Link expiry date (7 days from creation)

The email link goes to /approvals/<token> — a dedicated review page that works without navigating through the product UI.

Review Page

The approval review page shows context based on the phase being reviewed:

| Phase | Context Shown |
| --- | --- |
| Feature Analysis | User stories and test cases generated |
| Technical Design | Full technical design document |
| Branch Planning | Planned feature branches |
| Functional Testing | Last 5 test reports |

Reviewers can:

  • Approve: Feature advances to ready_for_ai and the pipeline continues
  • Reject: Opens a dialog for the reviewer to explain why, with the reason saved as feedback

Both actions are logged to the feature's audit trail with the reviewer's identity and any feedback provided.
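As a rough sketch of the two outcomes (the `Feature` and `AuditEntry` shapes here are illustrative assumptions, not Edsger's data model):

```typescript
// Illustrative sketch: approve advances the feature, reject leaves it
// in place; both produce an audit entry. Shapes are assumptions.
interface Feature { status: string; }
interface AuditEntry { reviewer: string; action: "approve" | "reject"; feedback?: string; }

function applyReview(
  feature: Feature,
  action: "approve" | "reject",
  reviewer: string,
  feedback?: string,
): { feature: Feature; audit: AuditEntry } {
  const next =
    action === "approve"
      ? { ...feature, status: "ready_for_ai" } // pipeline continues
      : feature; // rejected: feature stays at its current status
  return { feature: next, audit: { reviewer, action, feedback } };
}
```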

Security

  • Approval tokens are unique 32-byte hex strings with 7-day expiry
  • Only users in the assignees list can approve (enforced by user_can_approve RPC)
  • The review_approval RPC was deliberately removed — approvals can only happen through the authenticated web UI, never through MCP or API, ensuring a real human is always in the loop
  • Expired tokens show an error page — a new approval must be created
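The token scheme described above (32 random bytes rendered as hex, 7-day expiry) can be sketched with Node's built-in `crypto` module; the function and field names are assumptions for illustration:

```typescript
// Sketch of the approval-token scheme: 32 random bytes as hex with a
// 7-day expiry. Names are illustrative assumptions.
import { randomBytes } from "crypto";

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function createApprovalToken(now: Date = new Date()) {
  return {
    token: randomBytes(32).toString("hex"), // 64 hex characters
    expiresAt: new Date(now.getTime() + SEVEN_DAYS_MS),
  };
}

function isTokenValid(expiresAt: Date, now: Date = new Date()): boolean {
  return now.getTime() < expiresAt.getTime(); // expired tokens are rejected
}
```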

Part 2: Feedback

Why Feedback?

When you review AI work at a verification gate, simply rejecting it doesn't help the AI improve. The AI needs to know exactly what to change and why.

Edsger's feedback system provides structured, targeted guidance that is:

  • Injected directly into AI prompts: When a phase re-runs, all active feedback for that phase is automatically included in the AI's context
  • Typed and prioritized: Different feedback types (requirement, constraint, suggestion, etc.) are grouped and formatted for the AI
  • Targeted precisely: Point to specific lines, ranges, or entire phases
  • Persistent across re-runs: Feedback stays active until explicitly resolved, guiding multiple iterations
  • Scoped flexibly: Apply to a single feature or all features in a product

Feedback Types

| Type | When to Use |
| --- | --- |
| Requirement | Additional requirements that must be met: "Must support OAuth 2.0 PKCE flow" |
| Constraint | Hard constraints to respect: "Cannot use any npm packages with GPL license" |
| Preference | Preferred approaches: "Prefer composition over inheritance" |
| Context | Background information: "This API will be consumed by mobile clients with poor connectivity" |
| Quality Criteria | Standards to meet: "All public APIs must have JSDoc comments" |
| Suggestion | Improvement ideas: "Consider using a connection pool for database queries" |
| Question | Clarification needed: "Should the retry logic use exponential backoff?" |
| Issue | Problems found: "The error handling doesn't account for network timeouts" |

Target Types

Phase-Level

General guidance for an entire phase. No specific document or line reference.

"The technical design should use PostgreSQL instead of MongoDB for the primary database."

Line-Level

Attached to a specific line in a document (e.g., technical design). Shows the exact line content as context.

Line 47: "Consider using a write-ahead log here instead of synchronous writes."

Range-Level

Attached to a span of lines. Useful for commenting on a section or block.

Lines 23-45: "This entire authentication section needs to account for the SSO integration."

Scope: Feature vs Product

Feature-level feedback applies only to the specific feature. Use this for feature-specific guidance.

Product-level feedback applies to all features in the product for the specified phase. Use this for team-wide standards.

When a phase runs, both feature-level and product-level feedbacks are merged and injected into the AI prompt together.
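The merge described above can be sketched as a filter over both scopes (the `Feedback` shape is an illustrative assumption):

```typescript
// Sketch of selecting feedback for a phase run: feature-level and
// product-level entries are merged; only active, unresolved feedback
// for the phase is kept. The shape is an assumption.
interface Feedback {
  scope: "feature" | "product";
  phase: string;
  priority: number; // 1-10
  active: boolean;
  resolved: boolean;
  title: string;
}

function feedbackForRun(phase: string, all: Feedback[]): Feedback[] {
  return all.filter((f) => f.phase === phase && f.active && !f.resolved);
}
```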

Priority

Feedback has a priority from 1 to 10:

| Range | Label | Meaning |
| --- | --- | --- |
| 9-10 | Critical | Must be addressed immediately |
| 7-8 | High | Important, address soon |
| 5-6 | Medium | Should be addressed (default) |
| 3-4 | Low | Nice to have |
| 1-2 | Minimal | Minor suggestion |

Higher priority feedback appears first in the AI context, ensuring the most important guidance is seen even if context is truncated.
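The bands in the table map to labels as follows (a minimal sketch; the function name is an assumption):

```typescript
// Sketch of the priority bands from the table above.
function priorityLabel(priority: number): string {
  if (priority >= 9) return "Critical";
  if (priority >= 7) return "High";
  if (priority >= 5) return "Medium"; // default band
  if (priority >= 3) return "Low";
  return "Minimal";
}

// Higher-priority feedback is placed first in the AI context.
const byPriorityDesc = (a: { priority: number }, b: { priority: number }) =>
  b.priority - a.priority;
```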

Creating Feedback

From the Feedbacks Manager

  1. Navigate to a feature → Feedbacks tab (or product-level feedbacks)
  2. Select the target phase
  3. Click Add Feedback
  4. Fill in: scope (feature/product), type, title, content (Markdown), priority
  5. Save

From the Technical Design View

The technical design document supports inline feedback:

  1. Open a feature's Technical Design tab
  2. Hover over any line — a + button appears
  3. Click to add line-level feedback directly at that position
  4. The line's content is automatically captured as context

Lines with feedback are highlighted in blue, with the feedback thread displayed inline below.

From User Stories and Test Cases

  1. In the Feedbacks Manager, use the User Stories or Test Cases tab
  2. Expand any item to see its existing feedbacks
  3. Click Add Feedback to add item-specific guidance (auto-scoped to the relevant analysis phase)

How Feedback Reaches the AI

When a phase re-runs, the pipeline:

  1. Fetches all active, unresolved feedbacks for that phase (including product-level)
  2. Groups them by feedback type
  3. Formats them into structured Markdown sections:
    • ## Additional Requirements (requirement type)
    • ## Constraints (constraint type)
    • ## Issues to Address (issue type)
    • etc.
  4. Each feedback shows: title, priority badge, content, and targeting details (line number, range, context snippet)
  5. Appends a compliance footer: "These feedbacks take precedence over default behavior"
  6. Injects the entire block into the AI prompt

For phases with existing work (e.g., a technical design that needs updates), the AI is instructed to only address the feedback points without rewriting unchanged sections.
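Steps 1-6 above can be sketched as a small renderer. The section titles mirror the examples in the list; the `Feedback` shape and function name are illustrative assumptions, not Edsger's internals:

```typescript
// Sketch of the feedback-injection steps: sort by priority, group by
// type, render Markdown sections, append the compliance footer.
interface Feedback {
  type: "requirement" | "constraint" | "issue" | "suggestion";
  title: string;
  priority: number;
  content: string;
}

const SECTION_TITLES: Record<Feedback["type"], string> = {
  requirement: "Additional Requirements",
  constraint: "Constraints",
  issue: "Issues to Address",
  suggestion: "Suggestions",
};

function renderFeedbackBlock(feedbacks: Feedback[]): string {
  // Sort highest priority first, then group by type (insertion order).
  const byType = new Map<Feedback["type"], Feedback[]>();
  for (const f of [...feedbacks].sort((a, b) => b.priority - a.priority)) {
    const bucket = byType.get(f.type) ?? [];
    bucket.push(f);
    byType.set(f.type, bucket);
  }
  const sections: string[] = [];
  for (const [type, items] of byType) {
    const lines = items.map((f) => `- [P${f.priority}] ${f.title}: ${f.content}`);
    sections.push(`## ${SECTION_TITLES[type]}\n${lines.join("\n")}`);
  }
  sections.push("These feedbacks take precedence over default behavior.");
  return sections.join("\n\n");
}
```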

Managing Feedback

  • Toggle active: Temporarily disable a feedback without deleting it (inactive feedbacks are excluded from AI context)
  • Resolve: Mark a feedback as resolved when the AI has addressed it (resolved feedbacks are excluded from future runs)
  • Edit: Update the content, priority, or type at any time
  • Delete: Permanently remove the feedback

After a successful verification, the pipeline can bulk-resolve all feedbacks for that phase.

Audit Trail

Every feedback creation, update, and deletion on feature-level feedbacks is automatically logged to the feature's audit trail via a database trigger.

Best Practices

Approvals

  • Start selective: Configure approvals for the phases that matter most to your team (e.g., Technical Design, Feature Analysis)
  • Keep assignee lists small: 1-3 reviewers per phase prevents bottlenecks
  • Act on emails promptly: Tokens expire after 7 days — stale approvals block the pipeline
  • Use rejection feedback: When rejecting, provide specific feedback so the AI knows what to fix on re-run

Feedback

  • Be specific: "Use PostgreSQL" is better than "Consider a different database"
  • Use the right type: Requirements and constraints are treated as mandatory by the AI; suggestions and preferences are treated as guidance
  • Target precisely: Line-level feedback on the technical design is more actionable than phase-level "the design needs work"
  • Use product-level for standards: "All APIs must return proper error codes" as product-level feedback applies to every feature automatically
  • Resolve when addressed: Keep the feedback list clean so the AI doesn't re-address already-fixed issues