Why Learning Velocity Beats Headcount in Engineering
Your engineering team doesn't have a shipping problem. It has a learning problem.
I've watched this pattern repeat across every team I've built or inherited. The struggling ones weren't short on talent or tooling. They'd stopped learning. Not because anyone decided to stop, but because the organization's own success made learning structurally harder. More users meant more maintenance. More headcount meant more communication overhead. More quarterly pressure meant less room for curiosity. And then Conway's law did what it always does: it faithfully reproduced that dysfunction in the product architecture.
Here's what I think most engineering leaders get backwards. The fix isn't shipping faster. It's learning faster.
TL;DR
- Learning velocity — not headcount or shipping speed — is the strongest predictor of long-term engineering team success (DORA, 2023)
- Scale impairs learning through communication overhead, maintenance gravity, and performance pressure that displaces curiosity
- Conway's law amplifies both dysfunction and health — fix your team's learning patterns and the architecture follows
- AI coding tools reclaim time for domain expertise and customer understanding, not just faster feature delivery
Why Do Most Leaders Treat Headcount as a Proxy for Velocity?
Fred Brooks warned us in 1975: adding people to a late software project makes it later. The number of communication channels on a team grows as n(n-1)/2 — a team of 6 has 15 links, but a team of 50 has 1,225 (Brooks, The Mythical Man-Month, 1975). Every link is a place where context gets lost and learning gets filtered.
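To see how fast that compounds, here's Brooks's formula as a few lines of Python. This is plain arithmetic, nothing speculative:

```python
def communication_links(team_size: int) -> int:
    """Pairwise communication channels on a team of n people: n(n-1)/2."""
    return team_size * (team_size - 1) // 2

for n in (3, 6, 12, 25, 50):
    print(f"team of {n:>2}: {communication_links(n):>5} links")
# team of  3:     3 links
# team of  6:    15 links
# team of 12:    66 links
# team of 25:   300 links
# team of 50:  1225 links
```

Notice the shape: doubling the team roughly quadruples the links. That's the quadratic tax every new hire pays before writing a single line of code.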
And yet, fifty years later, most engineering leaders respond to slowing velocity the same way: hire more people. It's intuitive. More hands, more output. The logic feels airtight until you realize that what you actually needed wasn't more hands. You needed better feedback loops.
The industry treats headcount as a leading indicator of capacity. Sprint ceremonies, design reviews, and RFCs multiply to "align" the growing team. But alignment isn't learning. You can have a perfectly aligned team that has no idea whether the thing they're building actually solves the customer's problem. I've seen it happen at scale, and I've seen three people in a room with a whiteboard outlearn a forty-person org in a quarter.
How Does Scale Destroy Your Feedback Loops?
The Allen Curve showed that communication frequency drops exponentially with distance — at just 50 meters of separation, the probability of communicating weekly falls below 10% (Allen, MIT, 1977). Organizational distance has the same effect. As teams grow, the people writing code get further from the people experiencing outcomes.
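The Allen Curve is an empirical plot, not a closed-form law, but a toy decay model makes the falloff tangible. To be clear, everything in this sketch is an assumption: the exponential form, the baseline probability, and the decay constant, which is calibrated only to the 50-meter data point above.

```python
import math

# Toy model, not Allen's actual curve: assume P(d) = P0 * exp(-k * d),
# with an assumed baseline P0 and k fitted so that P(50 m) = 0.10.
P0 = 0.80                     # assumed weekly-contact probability for adjacent desks
k = math.log(P0 / 0.10) / 50  # decay constant calibrated to the 50 m data point

def weekly_contact_probability(distance_m: float) -> float:
    return P0 * math.exp(-k * distance_m)

for d in (2, 10, 25, 50, 100):
    print(f"{d:>3} m apart: {weekly_contact_probability(d):4.0%} weekly-contact probability")
```

Swap "meters" for time zones or reporting layers and the curve tells the same story: distance of any kind starves the feedback loop.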
There are three specific ways this breaks down.
Communication overhead grows quadratically. Brooks's formula isn't theoretical. I've watched it play out in real time. Every new hire adds meetings, handoff documents, and Slack threads. The team spends more time talking about the work than doing the work. Each of those communication paths is an opportunity for learning to get delayed, diluted, or dropped entirely.
Success creates maintenance gravity. At Wix, I saw what happens when a product serves hundreds of millions of users. A meaningful chunk of engineering time went to keeping existing systems alive — patching edge cases, handling scale incidents, maintaining backward compatibility. Stripe's Developer Coefficient study found that developers spend 42% of their work week on technical debt and maintenance (Stripe, 2018). That's nearly half your engineering capacity pointed backward instead of forward.
Pressure displaces curiosity. When quarterly targets tighten, exploration time is the first casualty. Amy Edmondson's research on psychological safety demonstrated that learning behavior mediates between psychological safety and team performance (Edmondson, 1999). Google's Project Aristotle confirmed it at scale: psychological safety was the number one factor in team effectiveness across 180+ teams (Google, 2015). When people don't feel safe to experiment and fail, they optimize for predictability — which is the opposite of learning.

Have you ever seen a team that ships on time every sprint but can't tell you what they've learned in the last six months? That's what happens when pressure replaces curiosity as the primary driver.
Do the Fastest Learners Actually Outperform the Biggest Teams?
The 2024 DORA State of DevOps Report found that elite performers deploy 182x more frequently, with 127x faster lead times and 2,293x faster recovery from failures than low performers (DORA, 2024). Those aren't shipping metrics. They're learning metrics. Each deployment is a hypothesis tested. Each recovery is a lesson absorbed. Elite teams don't ship more because they type faster. They ship more because their feedback loops are shorter.
The 2023 DORA report made it explicit: teams with a generative, learning-oriented culture achieve 30% higher organizational performance (DORA, 2023). Culture — not tooling, not process, not headcount — was the differentiator.
Wu, Wang, and Evans analyzed 65 million papers, patents, and software products and found that smaller teams consistently produce more disruptive innovations, while larger teams tend to develop existing ideas (Nature, 2019). Disruption requires learning something the market doesn't yet know. Development requires executing on what's already understood. Both matter, but if your entire org is optimized for development, you've stopped learning.
When Learning Stops, Does Conway's Law Reshape Your Product?
"Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations." Melvin Conway wrote that in 1967, and it's still the most underappreciated law in software engineering.
Here's why it matters in this context. When a team can't learn, its communication patterns become defensive. Silos form. Handoffs multiply. Meetings replace direct conversations. And Conway's law faithfully mirrors all of that dysfunction into the product architecture. The codebase starts looking like the org chart — fragmented, territorial, full of boundaries that exist for organizational reasons rather than technical ones.
Nagappan, Murphy, and Basili demonstrated this empirically: organizational structure metrics predicted software failure-proneness with 85% precision and recall, outperforming code-based metrics like churn, complexity, and coverage (Microsoft Research, 2008). Your org chart is literally a better predictor of bugs than your code metrics.
But here's what most Conway's law discussions miss: it's not a one-way street. If you deliberately fix your team's communication patterns — shorten feedback loops, break down silos, reconnect engineers with customers — Conway's law will propagate that health into your architecture. Skelton and Pais call this the Inverse Conway Maneuver: design your org structure to produce the architecture you want, not the other way around.
At Marqii, we lived this. The pivot from a private-labeled, resold product to an owned platform wasn't just a technical rewrite. It was an organizational restructuring. We changed how the team communicated first — broke down the walls between engineering, product, and design, got engineers talking directly to restaurant operators, shortened every feedback loop we could find. The architecture followed. Conway's law carried the health of those communication patterns straight into the codebase. That doesn't happen by accident. You have to design the org you want the product to reflect.
How Do AI Coding Tools Change the Equation?
Developers with GitHub Copilot access completed tasks 55.8% faster than the control group (Peng et al., 2023). That's real. AI adoption in development is accelerating — 62% of developers used AI tools in 2024, up from 44% the prior year (Stack Overflow, 2024).
But here's what I think most leaders are getting wrong about AI coding tools. The conversation is almost entirely about shipping speed. "We'll ship 2x the features." That's the wrong frame. If your team's core problem is a learning deficit, shipping twice as many features you don't understand is twice as much maintenance gravity pulling you backward.
The real opportunity is what happens to the time AI frees up. At Marqii, we shipped AI-powered features — automated review response, location intelligence, market discovery — back when it still raised eyebrows in board meetings. What I noticed wasn't just faster output. It was that engineers had more headspace for the high-value work that AI can't do: understanding the hospitality domain, talking to customers, thinking about architecture, running experiments. The boilerplate was handled. The thinking wasn't.
When engineers spend less time on boilerplate and more time understanding the problem domain, their conversations change. They talk about customers instead of configurations. They ask "why" instead of "how." And Conway's law takes notice. The communication patterns shift toward the product and the user — and the architecture follows.
What Should You Measure Instead of Shipping Speed?
Start tracking learning cycle time: how long it takes your team to go from hypothesis to validated insight. If it's longer than two weeks, your feedback loops are broken. The 2023 DORA report found that faster code reviews improve delivery performance by up to 50% (DORA, 2023). That's not because fast reviews ship more code. It's because fast reviews compress the learning loop between writing code and understanding its impact.
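Here's a minimal sketch of what tracking that could look like, assuming you log experiments somewhere. The `Experiment` record, its field names, and the two-week threshold are illustrative choices, not a standard schema:

```python
from dataclasses import dataclass
from datetime import date
from statistics import median

# Illustrative schema: one record per experiment the team runs.
@dataclass
class Experiment:
    hypothesis: str
    started: date            # when the hypothesis was formed
    concluded: date | None   # when the insight was validated (or invalidated)

def learning_cycle_days(log: list[Experiment]) -> float | None:
    """Median days from hypothesis to validated insight, finished experiments only."""
    cycles = [(e.concluded - e.started).days for e in log if e.concluded]
    return median(cycles) if cycles else None

log = [
    Experiment("inline onboarding cuts support tickets", date(2025, 3, 3), date(2025, 3, 12)),
    Experiment("batch API halves sync failures", date(2025, 3, 10), date(2025, 4, 18)),
    Experiment("self-serve import beats CSV upload", date(2025, 4, 1), None),  # still open
]

cycle = learning_cycle_days(log)
if cycle is not None and cycle > 14:
    print(f"median learning cycle: {cycle:.0f} days. Feedback loops need work.")
else:
    print(f"median learning cycle: {cycle} days")
```

The exact threshold matters less than the trend: if that number isn't shrinking quarter over quarter, no velocity chart will save you.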
Here's the approach that's worked for me:
- Treat learning time like production uptime. Protect it. If your team has zero hours for exploration, experimentation, or customer conversations, your org is running on borrowed understanding.
- Apply the Inverse Conway Maneuver. Look at your product architecture. If it mirrors your org chart's pathologies, restructure communication patterns to reflect the architecture you actually want.
- Hire for curiosity, not just competence. I've written before about this: a small team of curious people will outrun a large team of checkbox-fillers every time.
- Use AI tools to reclaim learning time, not just shipping time. The goal isn't 2x features. It's 2x the time for engineers to understand the domain, talk to customers, and think about architecture.

Three Changes You Can Make This Quarter
First, audit your team's learning loops this week. Map the time from "we tried something" to "we understood what happened." Draw it on a whiteboard. If the answer involves more than three handoffs or takes more than two sprint cycles, you've found the bottleneck. That map alone will tell you more about your team's health than any velocity chart.
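If you want the whiteboard exercise in code form, here's a sketch. The stages, owners, and durations are hypothetical placeholders for whatever your own map turns up:

```python
# Hypothetical learning loop: each entry is (who holds the ball, days held).
loop = [
    ("engineer ships the experiment", 2),
    ("data team pulls the metrics", 5),
    ("PM interprets the results", 4),
    ("findings wait for sprint review", 10),
    ("engineers absorb the lesson", 3),
]

handoffs = len(loop) - 1                  # transitions between owners
total_days = sum(days for _, days in loop)
sprint_days = 14                          # assumes two-week sprints

if handoffs > 3 or total_days > 2 * sprint_days:
    print(f"bottleneck: {handoffs} handoffs, {total_days} days end to end")
else:
    print(f"loop looks healthy: {handoffs} handoffs, {total_days} days")
```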
Second, apply the Inverse Conway Maneuver to one product area. Pick the part of your architecture that most frustrates you. Now look at who owns it and how they communicate. Restructure the team topology around the architecture you want. Skelton and Pais's Team Topologies gives you the vocabulary — stream-aligned teams, enabling teams, platform teams — but the insight is simple: change the communication, change the code.
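One cheap way to start that audit, sketched under assumptions: list your services, their owning teams, and their dependencies, then flag every dependency that crosses a team boundary. All names below are hypothetical.

```python
# Hypothetical Conway audit: every cross-team dependency is a standing
# communication cost, and a candidate for restructuring the teams
# or the architecture.
owners = {
    "checkout": "payments-team",
    "billing": "payments-team",
    "menus": "content-team",
    "reviews": "content-team",
}
dependencies = [("checkout", "billing"), ("checkout", "menus"), ("reviews", "menus")]

for service, dep in dependencies:
    if owners[service] != owners[dep]:
        print(f"{service} -> {dep}: crosses the {owners[service]} / {owners[dep]} boundary")
```

Every line that prints is Conway's law showing you where the org chart and the architecture disagree.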
Third, redirect AI productivity gains toward learning. When your team adopts AI coding tools, don't immediately raise the feature target. Instead, protect the freed time. Let engineers use it for domain research, customer shadowing, architectural spikes, and reading. That investment compounds in ways that shipping more features never will.
Where Does This Break Down?
This framework isn't universal. Regulated industries — healthcare, finance, aerospace — genuinely need more process, and that process itself has learning value when done well. Not all slow teams have a learning problem; some are genuinely resource-constrained. And AI coding tools are still maturing. The 2025 Stack Overflow survey showed that while 84% of developers now use or plan to use AI tools, trust in AI output actually declined — 46% say they don't trust it, up from 31% the prior year (Stack Overflow, 2025). Healthy skepticism has its place.
Conway's law is descriptive, not prescriptive. Intentional org design is hard, and it doesn't always produce the architecture you envisioned. But the direction holds: teams that learn faster communicate better, and better communication produces better systems.
Frequently Asked Questions
Doesn't this just apply to startups?
No. The learning velocity principle applies at any scale. Large organizations can create small, empowered teams with short feedback loops. The key is reducing communication distance between the people doing the work and the people experiencing the outcomes. Google's Project Aristotle studied 180+ teams across a massive org and found that psychological safety — the foundation of learning — was the top predictor of effectiveness regardless of team size (Google, 2015).
How do you actually measure learning velocity?
Start with lead time from hypothesis to validated insight. Track how often teams run experiments, how quickly they process results, and how frequently architectural decisions get revisited based on new information. DORA metrics are a useful proxy — elite performers learn from production faster, which shows up as 2,293x faster recovery and 127x shorter lead times (DORA, 2024).
Won't AI tools just create more technical debt?
They can, if you optimize purely for output speed. The argument here is different: use the time savings for learning and domain expertise, not just for shipping more code. More code without more understanding is exactly how you get the maintenance gravity problem described above. The 42% of time developers already spend on debt (Stripe, 2018) doesn't need to grow.
The Job Is Protecting the Team's Ability to Learn
The teams that win aren't the biggest or the fastest shippers. They're the fastest learners. Conway's law means that learning health propagates into architectural health. AI tools are giving us a window — maybe the first one in a decade — to reclaim the time that scale and success have been stealing from our teams.
The irony of my career isn't lost on me. I started as a support rep, where learning the customer was literally the entire job. Now I'm a VP of Engineering, and the job hasn't actually changed. It's still about learning — just now it's about protecting the team's ability to do it. Every organizational decision I make comes back to the same question: is this making it easier or harder for my team to learn?
If you're asking that question, you're already ahead.
Read more about my unconventional career path.