The Developer Attention Economy
Where does developer time actually go? A framework for understanding why your team feels busy but PRs still stall.
Ask any engineering manager where their developers' time goes and you'll get a confident answer: "Writing code, reviewing PRs, and meetings." Ask them for the percentages and the confidence evaporates.
The truth is, most teams have no idea how developer attention is actually allocated. They track story points and velocity — measures of output — while the real constraint is input: how many hours of focused attention do developers have, and where does it go?
The attention budget
A developer has roughly 6 hours of productive time per day. Not 8 — meetings, Slack, lunch, and the general friction of existing in an organization eat the rest. Some studies put it lower. A survey by the Developer Productivity Alliance found that developers self-report about 4.5 hours of "deep work" per day, with the rest consumed by interruptions and overhead.
That 4.5-6 hour window is your team's most valuable resource. Every minute spent chasing a notification, context-switching between tasks, or reviewing a PR they're not qualified to evaluate is a minute not spent on the work that actually moves the product forward.
Let's break down where it goes.
Code review: the hidden time sink
Code review is rarely budgeted as a primary activity. It's treated as something developers do between "real" work — a quick look at a PR here and there throughout the day.
The reality is different. Data from LinearB and Sleuth suggest that senior developers on active teams spend 25-40% of their productive hours on review. For tech leads, it's often higher. That's 1.5 to 2.5 hours per day reading, understanding, and commenting on other people's code.
And that number is growing. As AI coding tools increase PR volume — some teams report a 2-3x increase after adopting Copilot or Cursor — the review burden scales proportionally. The code gets generated in minutes; the review takes just as long as it always did.
Here's the math that should worry engineering leaders:
- Before AI tools: 5 developers, each opening ~1 PR/day = 5 PRs to review daily
- After AI tools: 5 developers, each opening ~2-3 PRs/day = 10-15 PRs to review daily
- Review capacity: unchanged (still the same 5 developers with the same 6 hours)
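To make the gap concrete, here's the same arithmetic as a quick back-of-the-envelope script. The 20-minute average review time is an assumption for illustration, and in practice review effort concentrates on a few senior reviewers, which is how individual shares reach the 25-40% cited above.

```python
# Back-of-the-envelope: review load vs. team capacity.
# REVIEW_MINUTES_PER_PR is an assumed average, not a measured figure.
DEVS = 5
PRODUCTIVE_HOURS_PER_DEV = 6
REVIEW_MINUTES_PER_PR = 20

for prs_per_dev in (1, 2, 3):
    prs_per_day = DEVS * prs_per_dev
    review_hours = prs_per_day * REVIEW_MINUTES_PER_PR / 60
    share = review_hours / (DEVS * PRODUCTIVE_HOURS_PER_DEV)
    print(f"{prs_per_day:>2} PRs/day -> {review_hours:.1f}h of review "
          f"({share:.0%} of the team's productive hours)")
```

Even at the modest 20-minute assumption, tripling PR volume triples the review hours, and none of it shows up on anyone's calendar.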
The review queue grows, cycle time increases, PRs go stale, and developers start rubber-stamping to keep up. Quality degrades silently.
Context-switching: the invisible tax
Every time a developer switches between tasks — from writing code to reviewing a PR, from one review to another, from code to Slack and back — there's a cognitive cost. Research by Gloria Mark at UC Irvine found that it takes an average of 23 minutes to regain deep focus after an interruption.
Now count the interruptions in a typical developer's day:
- 3-5 PR review requests
- 10-20 Slack messages requiring response
- 1-3 meetings
- Multiple CI/CD notifications
- Email (GitHub, Jira, misc)
Each interruption doesn't just cost the time to handle it. It costs the time to get back to where you were. A developer who handles 5 PR review requests throughout the day doesn't lose 5 × 15 minutes of review time. They lose 5 × (15 minutes of review + 10-15 minutes of context recovery) = over 2 hours. And that recovery estimate is conservative next to the 23-minute figure above.
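A minimal version of that arithmetic, using the article's rough figures rather than measured data:

```python
# The invisible tax: each interruption costs handling time plus refocus time.
REQUESTS = 5
HANDLE_MIN = 15                # minutes to handle each review request
RECOVERY_RANGE = (10, 15)      # minutes to regain deep focus afterward

visible = REQUESTS * HANDLE_MIN
actual = [REQUESTS * (HANDLE_MIN + r) for r in RECOVERY_RANGE]
print(f"visible cost: {visible} min")
print(f"actual cost:  {actual[0]}-{actual[1]} min "
      f"({actual[0] - visible}-{actual[1] - visible} min of pure context recovery)")
```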
This is why the common advice to "review PRs as they come in" is actually counterproductive. Interrupt-driven review is fast for the PR and devastating for the reviewer's other work.
Triage: the work nobody sees
Before a developer can review a PR, they have to decide whether to review it right now. This triage step is invisible in most workflow discussions, but it consumes real attention:
- Is this PR urgent?
- Am I the right reviewer?
- How big is this change?
- Is it related to what I'm working on (so the context switch is small)?
- Has someone else already started reviewing?
Without good tooling, this triage happens in the worst possible way: the developer opens GitHub, scans their notification inbox, clicks into several PRs to assess them, and then decides which one to review first. That's 10-15 minutes of overhead before any actual review work begins.
Teams that batch reviews — dedicating specific blocks to review work — save the context-switching cost but introduce latency. Teams that review inline — as notifications arrive — save latency but pay the switching tax. Both approaches are compromises.
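One way to see the compromise is to put rough numbers on it. The sketch below reuses the figures above and assumes a batched session pays the refocus cost once rather than per interruption, which is a simplification, and it says nothing about how the wait feels to the PR author.

```python
# Inline vs. batched review for one day of requests, using rough estimates.
N_PRS = 5
HANDLE_MIN = 15
RECOVERY_MIN = 12     # midpoint of the 10-15 minute refocus estimate

inline_minutes = N_PRS * (HANDLE_MIN + RECOVERY_MIN)   # refocus after every PR
batched_minutes = N_PRS * HANDLE_MIN + RECOVERY_MIN    # refocus once per block
print(f"inline:  {inline_minutes} min of reviewer time, low PR latency")
print(f"batched: {batched_minutes} min of reviewer time, PRs wait for the next block")
```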
Decision-making: the thing AI can't do
The highest-leverage use of developer attention is making decisions. Not writing code, not even reviewing code — deciding what to build, how to architect it, when to take shortcuts, and when to invest in quality.
These decisions compound. A good architectural decision saves hundreds of hours over the life of a system. A bad one creates technical debt that drags on the team for years. And they require the kind of contextual judgment that no AI tool provides.
Yet decision-making gets squeezed. It's the first thing that suffers when review queues grow and Slack notifications pile up. Developers spend their best hours on reactive work (reviewing, responding, triaging) and make their most important decisions in whatever time is left — often late in the day when they're cognitively depleted.
Measuring attention allocation
Most engineering metrics measure output: PRs merged, story points completed, deployment frequency. These tell you what the team produced. They don't tell you whether the team's attention was well-allocated.
Better metrics for attention allocation:
Review time distribution. How much of each developer's day goes to review vs. creation? If your senior engineers spend 50%+ on review, you might need more reviewers — or better triage.
First-response time. How long between a PR being opened and the first reviewer engaging? This measures notification effectiveness, not reviewer speed.
Context switches per day. How many times does a developer shift between review requests, their own work, and other tasks? High numbers suggest poor batching or noisy notifications.
Stale PR rate. What percentage of open PRs have had no activity for 24+ hours? This is the clearest signal that attention isn't reaching where it needs to go.
Decision latency. How long do architectural questions, RFC comments, and design doc reviews sit before getting meaningful feedback? This is the most important metric and the hardest to measure.
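Two of these, first-response time and stale PR rate, can be approximated straight from the GitHub REST API. The sketch below is a minimal starting point and makes simplifying assumptions: open PRs only, the first submitted review counts as the first response, and it skips pagination and authentication, all of which a real measurement would need.

```python
# Rough stale-PR rate and first-response time via the GitHub REST API.
from datetime import datetime, timedelta, timezone
import requests

OWNER, REPO = "your-org", "your-repo"   # placeholders
API = f"https://api.github.com/repos/{OWNER}/{REPO}"

def ts(value: str) -> datetime:
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

prs = requests.get(f"{API}/pulls", params={"state": "open", "per_page": 50}).json()
now = datetime.now(timezone.utc)

stale = sum(1 for pr in prs if now - ts(pr["updated_at"]) > timedelta(hours=24))
print(f"stale PR rate: {stale}/{len(prs)} open PRs untouched for 24+ hours")

for pr in prs:
    reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews").json()
    submitted = [ts(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
    if submitted:
        wait = min(submitted) - ts(pr["created_at"])
        print(f"#{pr['number']}: first review after {wait}")
```

Decision latency is the hard one precisely because it lives in RFC threads, design docs, and meetings rather than in any single API.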
Redesigning for attention
If developer attention is the scarce resource, every process and tool should be evaluated through one lens: does this help developers allocate attention effectively, or does it waste it?
Notifications should be precise. One message to the right person beats a broadcast to a channel. Fewer, better notifications mean less triage overhead.
Status should be visible without clicking. A reviewer should know the state of a PR (needs review, changes requested, CI failing) from the notification itself — not after opening GitHub and reading a thread.
Updates should consolidate. When a PR goes through 5 state changes overnight, the reviewer should see one current-state summary, not 5 chronological events to parse (see the sketch after these principles).
Batching should be supported. If a developer prefers to review in a dedicated block, the tools should accommodate that — surfacing the most important PRs first, with context pre-loaded.
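To illustrate the consolidation point, here's a small sketch that folds a night's worth of PR events into one current-state summary. The event shapes are invented for illustration and aren't a real webhook schema.

```python
# Fold a stream of PR events into one current-state notification
# instead of N chronological ones. Event shapes are illustrative only.
from dataclasses import dataclass

@dataclass
class PRState:
    review: str = "awaiting review"
    ci: str = "pending"
    events_folded: int = 0

    def apply(self, event: dict) -> None:
        if event["type"] == "review":
            self.review = event["state"]    # latest review decision wins
        elif event["type"] == "ci":
            self.ci = event["status"]       # latest CI run wins
        self.events_folded += 1

overnight = [
    {"type": "ci", "status": "failing"},
    {"type": "review", "state": "changes requested"},
    {"type": "ci", "status": "failing"},
    {"type": "review", "state": "approved"},
    {"type": "ci", "status": "passing"},
]

state = PRState()
for event in overnight:
    state.apply(event)

# One notification carrying the current state, not five to scroll through:
print(f"PR: {state.review}, CI {state.ci} "
      f"({state.events_folded} overnight updates folded into one)")
```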
This is the workflow Tenpace is building toward. Today, we handle the notification layer — getting the right PR information to the right person at the right time, with updates that consolidate instead of stack. But the attention economy problem goes deeper, and so will we.
Where does your team's attention actually go? If you've measured it — or tried to — we'd genuinely love to compare notes: hello@tenpace.com