Leadership

The 23% Tax: How Poor Ticket Quality Silently Kills Velocity

Nearly a quarter of an active sprint's work items had inadequate or missing descriptions — six already in development or code review — while an estimated 60% of meeting time was spent clarifying requirements that should have been written down. This isn't a documentation problem. It's a delivery tax with a measurable cost.

4–6 min read

Key Takeaways

  • Measure the hidden cost before proposing the fix.
  • Pilot with a kill switch.
  • Address patterns, not just process.

Context

I came into a team mid-stream — an established product with an active sprint, existing workflows, and an engineering culture that had been operating without formal process gates for some time. The team was capable, the product was shipping, but there were signals that something underneath wasn't working efficiently. Delivery dates were unpredictable, rework was frequent, and daily standups regularly turned into 30-minute requirements clarification sessions.

Rather than making assumptions, I started with data. I ran a documentation quality audit across the active sprint backlog and separately quantified the team's meeting overhead since January.

How I measured it

The documentation audit was straightforward. I reviewed every work item in the active sprint and classified each into one of three categories: adequate description (clear problem statement, acceptance criteria or equivalent for the work type), inadequate description (partial information, placeholder text, or missing critical context), and no description (empty or title-only). I then cross-referenced each item's documentation status against its workflow position — whether it was in refinement, to-do, active development, or code review. Items in later workflow stages with poor documentation represented a process gate failure, not just a backlog hygiene issue. I also tracked which team members created the undocumented tickets to identify whether this was a team-wide pattern or concentrated among specific contributors.
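The audit logic above can be sketched as a short script. This is a minimal illustration, assuming each work item arrives as a dict from the tracker's export or API; the field names (`description`, `acceptance_criteria`, `stage`) and the 40-character heuristic are illustrative assumptions, not a real tracker schema.

```python
# Hypothetical audit sketch: classify each work item's description
# quality, then flag poorly documented items in late workflow stages
# as process gate failures. Field names are illustrative.

LATE_STAGES = {"active", "code review"}

def classify(item):
    """Bucket a work item: 'adequate', 'inadequate', or 'none'."""
    desc = (item.get("description") or "").strip()
    if not desc:
        return "none"
    # Placeholder text, very short text, or no acceptance criteria
    # counts as inadequate (thresholds are assumptions).
    if "TBD" in desc or len(desc) < 40 or not item.get("acceptance_criteria"):
        return "inadequate"
    return "adequate"

def audit(items):
    """Return per-category counts and late-stage gate failures."""
    counts = {"adequate": 0, "inadequate": 0, "none": 0}
    gate_failures = []
    for item in items:
        bucket = classify(item)
        counts[bucket] += 1
        if bucket != "adequate" and item["stage"] in LATE_STAGES:
            gate_failures.append(item["id"])
    return counts, gate_failures
```

The useful output isn't just the percentage — it's the cross-reference: any item in `gate_failures` slipped past refinement without documentation, which is the signal that distinguishes a gate failure from ordinary backlog hygiene.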

The meeting cost calculation was built from calendar data. I pulled every recurring daily meeting since January and counted total occurrences (119 meetings), total hours consumed (97 hours), and multiplied by average attendance (3 developers) to get aggregate developer hours (291). The 60% estimate for time spent on requirements clarification came from observing the meetings over several weeks and categorizing discussion topics — roughly six out of every ten minutes were spent on "what does this ticket mean," "what's the expected behavior," or "did we decide X or Y." That ratio produced the ~175 developer hour figure. I was transparent that this was an initial estimate to be validated during the pilot, not a precise measurement — but even at a conservative 40%, the number was significant enough to make the case.
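The arithmetic behind those figures is simple enough to reproduce directly — the numbers below come straight from the calendar data described above:

```python
# Meeting-cost calculation from the calendar data.
meetings = 119               # recurring daily meetings since January
total_hours = 97             # wall-clock hours consumed
attendees = 3                # average developers per meeting
clarification_share = 0.60   # observed share spent clarifying requirements

dev_hours = total_hours * attendees            # 291 developer hours
lost_hours = dev_hours * clarification_share   # ~175 developer hours
workdays = lost_hours / 8                      # ~22 eight-hour workdays
```

Swapping in the conservative 40% estimate (`clarification_share = 0.40`) still yields ~116 developer hours, which is why the case held even under the weaker assumption.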

The challenge

The audit surfaced two compounding problems.

  • 23% — work items with inadequate or missing descriptions
  • 175 — developer hours lost per year to requirements clarification
  • 6 — active items with zero description

23% of work items had inadequate or missing descriptions. Seven had no description at all. Two had placeholder text. Four were already in code review, meaning reviewers were expected to evaluate work against requirements that didn't exist. Two more were in active development with nothing written down. One engineer had created all four undocumented items in code review, indicating the gap wasn't just in refinement — it was in ticket creation itself.

The hidden cost

Since January, 119 daily meetings had consumed 97 hours. With three developers per meeting, that was 291 developer hours. An estimated 60% was spent clarifying requirements — ~175 developer hours, nearly 22 workdays, of coding capacity redirected to conversations that should have been unnecessary.

The two problems reinforced each other. Sparse tickets generated more meetings. More meetings normalized verbal requirements. Verbal requirements went undocumented. The cycle repeated.

The downstream effects were predictable: development velocity inflated by hidden rework, code reviews that checked syntax but couldn't verify correctness, QA guessing at expected behavior without acceptance criteria, and knowledge that lived in Slack threads instead of the ticket system.

What I did

Halted undocumented work in progress. The six items in active development or code review without descriptions were moved back to refinement. This impacted short-term velocity, but allowing undocumented work to ship would have validated the pattern and guaranteed recurrence. The signal mattered more than the sprint.

Quantified the cost in capacity terms. "We need better tickets" is a process argument that's easy to deprioritize. "We're losing 22 workdays of developer capacity per year to requirements clarification meetings" is a business argument. I built the data case before proposing any solution so the conversation started with impact, not opinion.

The pitch

This wasn't positioned as adding bureaucracy — it was positioned as removing waste. The cost was already being paid across 175 hours of clarification meetings, invisible rework cycles, and QA guessing at requirements. Clear tickets return developer hours to coding.

Proposed a time-boxed pilot with a kill switch. Rather than mandating a permanent process change, I pitched a 2–3 sprint experiment with a clear commitment: if it doesn't measurably improve predictability and capacity, we drop it. The pilot introduced four lightweight rules:

A Definition of Ready — tickets must have acceptance criteria and a QA review before entering a sprint. Stories need a problem statement and acceptance criteria. Bugs need reproduction steps, logs, user context, and expected vs. actual behavior. Spikes need research questions and a timebox.

Scope change logging — any mid-sprint change documented in the ticket so QA and developers stay aligned.

A Definition of Done — all acceptance criteria met, documentation and release notes updated before closing.

Bug ticket standards — mandatory fields for reproduction steps, diagnostic logs, visual evidence, and detection context.
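The Definition of Ready rules above lend themselves to a mechanical check. The sketch below is a minimal illustration, assuming tickets carry a `type` field and a set of populated field names; the required-field names mirror the pilot rules but are hypothetical, and the separate QA-review gate is not modeled here.

```python
# Hypothetical Definition of Ready check. Required fields per work
# type follow the pilot rules; field names are illustrative.

REQUIRED = {
    "story": {"problem_statement", "acceptance_criteria"},
    "bug": {"reproduction_steps", "logs", "user_context",
            "expected_vs_actual"},
    "spike": {"research_questions", "timebox"},
}

def is_ready(item_type, populated_fields):
    """Return (ready, missing_fields) for a ticket of the given type."""
    missing = REQUIRED.get(item_type, set()) - set(populated_fields)
    return not missing, sorted(missing)
```

A check like this can run as a workflow automation or a pre-sprint report, so the gate is enforced by the tool rather than by someone remembering to look.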

Coached the pattern, not just the process. The concentration of undocumented tickets from specific contributors required direct conversations. The reframe: the description isn't for you — it's for your reviewer, your QA, your manager, and the future engineer who inherits your work. Workflow enforcement catches the symptom; coaching addresses the cause.

Results

The pilot framing secured buy-in from both the team and product leadership.

Who benefits

Developers get fewer scope changes and more coding time. QA gets test plans ready before development starts. Product gets predictable delivery dates. Engineering leadership gets fewer production fires. The business gets reduced delivery risk with recovered capacity.

Metrics tracked during the pilot included QA blocked time, bugs caused by unclear requirements, meeting hours spent clarifying scope, and percentage of tickets meeting Definition of Ready — giving us concrete data to evaluate whether the investment was paying off.

Takeaways

1. Measure the hidden cost before proposing the fix. The 175 developer hours number made the case in a way "we need better tickets" never could. Quantifying the problem in capacity terms turned a process conversation into a business conversation.

2. Pilot with a kill switch. A time-boxed experiment with explicit success criteria and a commitment to drop it if it doesn't work is easier to approve, easier to execute, and builds more credibility than a top-down mandate.

3. Address patterns, not just process. Workflow enforcement prevents undocumented tickets from advancing. Coaching the contributors who created them prevents undocumented tickets from being created. One without the other either doesn't stick or doesn't scale.