The “Last Mile” Problem of Professional Development

Joel Podolny
CEO, Honor Education | Former Vice President and Dean, Apple University
Introduction
We talk a lot about “last mile” problems in the world of physical infrastructure — laying fiber optic cables to individual homes, getting packages from a distribution center to a doorstep. The pattern is always the same: the first 90% of the system scales beautifully, and then that final stretch becomes disproportionately difficult and expensive.
Professional development has its own last mile problem. And AI is making it urgent.
Higher education institutions and corporate learning providers — LinkedIn Learning, Degreed, Coursera, and plenty of others — have gotten remarkably good at developing content around skills and knowledge that cut across corporate environments. Project management. Data analysis. Leadership communication. This is what economists call general human capital, and the infrastructure for building it has never been better.
But the last mile of professional development is something different entirely — and unlike the literal last mile in logistics, which is ultimately a matter of routing, optimization, and cost, this one is neither routine nor solvable through better algorithms. It’s the development of wisdom, judgment, discernment, and taste — the ability to apply knowledge and skills within a particular corporate context. It’s knowing not just what best practice looks like in the abstract, but how to exercise judgment in a way that reflects the values, priorities, and culture of the specific organization you’re in.
Beyond Processes and Procedures
Economists would call this specific human capital, and while I’m comfortable with that label, I think it undersells what’s at stake. There’s a tendency to equate “specific” human capital with detailed knowledge of internal processes and routines — where to find the right template, how the approval workflow runs. That matters, but it’s the easy part.
The hard part is the more ephemeral stuff: the cultural intuitions, the implicit standards of quality, the organizational sensibility that separates someone who is merely competent from someone whose work truly fits. I find it helpful to distinguish three layers.
The first layer is procedural fluency — the systems, workflows, tools, and organizational geography that any new employee needs to learn. This is the stuff that onboarding can handle, and most organizations have gotten reasonably good at it. A capable person can achieve procedural fluency within weeks.
The second layer is what I’d call interpretive knowledge — understanding the organization’s mental models, its characteristic ways of framing problems and evaluating solutions. At Apple, this means understanding that user experience isn’t a department but a design philosophy that permeates engineering decisions. At Amazon, it means understanding what “working backwards from the customer” requires when you’re making a specific tradeoff between speed and completeness. Every strong organizational culture has these interpretive frameworks, and learning them takes months, not weeks.
The third layer — and this is where the last mile really lives — is evaluative judgment: the ability to look at a piece of work and know whether it’s right, not by some universal standard but by the standard that this particular organization has cultivated over decades. But evaluative judgment doesn’t show up only in the assessment of answers. It shows up, perhaps even more powerfully, in the questions people know to ask. The person with deep organizational judgment walks into a meeting and reframes the problem — not because they have better data, but because their sense of what matters here leads them to see something others don’t. They ask the question that redirects the entire conversation. That capacity to formulate the right question, not just evaluate the right answer, is the essence of wisdom, taste, and discernment. And it is the layer that takes years — sometimes a decade or more — to develop, because the mechanisms we’ve relied on to transmit it are inherently slow: mentorship, apprenticeship, hallway conversations, osmosis.
And this third layer is rarely monolithic. Different parts of an organization may apply the same cultural principles differently depending on the demands they face — a regulatory team exercises boldness differently than a research team. Part of evaluative judgment is knowing when that variation reflects healthy adaptation and when it reflects drift.
And unlike the literal last mile, this one is never fully traversed. As the organization evolves, the frontier of evaluative judgment moves with it. The last mile is not a destination but a continuous condition.
Why AI Makes This Urgent
This is where AI changes the equation — and not in the way most people think.
Consider what happened at Klarna, the financial services company. In 2024, Klarna deployed an AI assistant that handled two-thirds of all customer service interactions, projected $40 million in savings, and initially matched human satisfaction scores. It was a compelling proof of concept for AI’s ability to handle the early miles — the routine, pattern-matchable problems where the right answer is deterministic and the path to it is well-defined. But as the AI took on more of the workload and Klarna reduced its human staff, customer satisfaction dropped sharply. The reason was precisely the last mile: when customers had problems that required judgment, contextual reading, and the ability to navigate ambiguity — the tier-two and tier-three issues that don’t map neatly onto a script — the AI couldn’t deliver. By 2025, Klarna’s CEO publicly acknowledged that the company had prioritized cost over quality and began rehiring human agents.
The Klarna story is not a story about AI failing. It’s a story about AI succeeding at the early miles and exposing the last mile in sharp relief. General competence is increasingly commoditized. Which means the relative value of the last mile goes up.
And the asymmetry runs deeper than the Klarna case alone might suggest. For decades, the ability to produce a well-structured strategy document or a clean financial model served as a signal of professional maturity. It told your manager that you’d internalized certain ways of thinking, that your judgment had been sharpened through the discipline of doing the work. When AI can produce that output for anyone, the signal breaks. Organizations find it harder to distinguish between someone who has genuinely internalized the thinking and someone who has a good prompt.
This shifts the entire locus of professional value toward the contextual judgment that AI cannot provide — which is exactly the last mile. AI is extraordinarily good at answering questions. It is far less good at knowing which questions to ask. The last mile operates at the level of problem formulation, not just problem solving, and that is precisely the level where AI’s current capabilities fall short.
And this is where organizations face a choice that is not just strategic but philosophical. One approach — increasingly common — is to build systems that extract organizational knowledge so that AI can execute on it. The implicit assumption is that the value lives in the knowledge itself, separable from the people who carry it. The last mile argument points in a fundamentally different direction. The goal is not to extract wisdom from people so machines can use it. It is to build infrastructure that helps more people develop wisdom themselves.
The Klarna story illustrates exactly this fork. Klarna extracted the process, automated it, and discovered that the value was in the judgment the process couldn’t capture. Process extraction and judgment development lead to fundamentally different futures. That choice is not a technical decision. It is a decision about what kind of organization you want to be.
The Cost of Not Solving It
The consequences of leaving the last mile to chance are not abstract. They show up in three patterns that most leaders will recognize.
The first is cultural drift: any disruption to the mentoring chain — rapid growth, remote work, unexpected turnover — causes the culture to dilute, producing work that is competent but generic. The second is the bottleneck problem: the small number of people who carry institutional wisdom become the only ones who can exercise judgment in consequential situations, constraining the organization’s ability to scale decision-making. The third is the succession cliff: when those people leave, the wisdom they accumulated over decades walks out the door with them, and organizations often don’t realize what they’ve lost until they see it in the work that follows.
From the employee’s perspective, the cost is equally real. High performers who lack access to the third layer — because mentorship is uneven, because the right experienced people are unavailable or overextended — hit a ceiling that has nothing to do with their capability and everything to do with the infrastructure around them. They plateau not because they can’t learn but because there is no system to teach them what matters most.
The Last Mile in Practice
These patterns show up across every industry, but they’re easiest to see in organizations with strong, distinctive cultures.
Apple is well known for caring deeply about the emotional dimensions of technology — not just whether a product is functional but whether it feels right to use. Amazon is well known for “working backwards from the customer.” In both cases, the principle is public and widely discussed. But knowing the principle and knowing what it demands when you’re making a specific tradeoff — weighing interface simplicity against feature richness, or speed of delivery against completeness of solution — are very different things. An engineer or product manager can arrive at either company with extraordinary general human capital and still spend years developing the judgment to apply these principles the way the culture requires. That gap between the principle and its application is the last mile.
Consider Moderna, whose culture is inseparable from its strategy. Moderna’s mission — delivering the greatest possible impact to people through mRNA medicines — entails risks beyond those a conventional biotech company faces. In addition to the usual biology risk, Moderna faces technology risk associated with manufacturing a novel molecule at scale, while building a platform across many therapeutic areas simultaneously (execution and financial risk). Given this multi-dimensional risk, Moderna’s culture — its emphasis on boldness, curiosity, collaboration, and relentlessness — is essentially a collective investment in how people need to show up to deliver on the mission. Noubar Afeyan, Moderna’s co-founder and chairman, has described the experience of someone joining Moderna as “parachuting into the jungle with your suit on” — a vivid image that captures how even accomplished people, armed with all the general human capital that made them successful elsewhere, can find themselves disoriented because their prior instincts were calibrated for a different environment. The pandemic — during which Moderna scaled to ship 800 million COVID vaccine doses in 2021 — served as a natural field experiment in whether this cultural investment mattered. It is difficult to imagine Moderna succeeding if it had behaved like a typical pharmaceutical company.
Or consider the challenge from a different angle. At Pinterest, leadership recognized that an important aspect of the company’s culture lives in how people work — how decisions get made, how disagreement gets surfaced, how stories get told about what matters, how quality gets defined — but that the organization had not traditionally focused on it, and hadn’t even developed a language for it. The knowledge existed in practice but not in any form that could be examined, shared, or deliberately cultivated. As Doniel Sutton, Pinterest’s CHRO, puts it: “Our culture is grounded in something specific, our mission of helping people find inspiration. That mission, in turn, shapes how we work as a team and make decisions, how we tell stories about what matters, how we think about growth, and what we’re willing to prioritize, including things like youth mental health that set us apart. But we hadn’t always made that cultural logic explicit. As we invest more in AI, we’re realizing that the human capabilities our leaders need most — decision-making clarity, enterprise thinking, the ability to hold cultural tensions implicit in the richness of a mission that asks a lot — are exactly the ones you can’t automate. You have to build them deliberately.”
Apollo Global Management illustrates the last mile from yet another angle. In alternative asset management, analytical rigor is table stakes. What distinguishes Apollo is a set of leadership principles — among them what the firm calls “No Walls” — that shape how the firm operates. The No Walls principle requires people to operate across the entire capital structure as a single integrated platform rather than a collection of siloed businesses. The principle is easy to state and extraordinarily hard to live. When Apollo acquired Credit Suisse’s securitized products group — a $24 billion equity commitment involving an $85 billion balance sheet business — every major competitor walked away because the operational complexity was overwhelming. Apollo mobilized 150 people across more than a dozen functions, signed in under six weeks, and closed in under three months. That execution was only possible because of a culture that expects what the firm calls “enterprise-first instincts”: treating relationships and opportunities as firm assets rather than personal property, recognizing when an opportunity is outside your lane but inside someone else’s, and having the humility to bring in the right people early.
As Matt Breitfelder, Apollo’s CHRO, puts it: “We believe that our ‘No Walls’ approach to teamwork is a major competitive advantage, especially in a world where high-growth companies often become susceptible to silo-centric thinking. No Walls is a mindset that we constantly reinforce so that it becomes truly second nature across our team.”
Consider Siemens Digital Industries Software, which approaches the last mile from the perspective of the engineering mindset and skills gap. In smart manufacturing and digital twin technology, there is a distinct chasm between a constrained engineering curriculum and the nuanced, high-stakes reality of industrial application. When Siemens launched its ABET-recognized “Expedite — Skills for Industry” microcredential, the initial hypothesis was that this gap was primarily an undergraduate problem. “Even with solid engineering degrees, new hires needed significant onboarding time,” said Janelle Simmonds, Global Enablement Lead for Future Workforce at Siemens. But they quickly discovered significant demand from early-career and working professionals as well. Even experienced engineers constantly face new last miles — gaps where they possess the general technical capability but lack the specific contextual judgment to apply new digital workflows to complex industrial challenges.
What these examples share — from technology to biotech to finance to advanced manufacturing — is that the most consequential organizational knowledge is the knowledge that is hardest to transmit. And that wisdom is one of the most valuable assets any organization has. It’s also one of the most fragile. It lives in the heads of experienced people. It gets transmitted unevenly, through mentorship and hallway conversations and osmosis. And when those people leave, much of it leaves with them.
Rethinking How Wisdom Gets Transmitted
Part of what makes the last mile feel intractable is our mental model of how tacit knowledge gets transmitted. We tend to think of mentorship and apprenticeship as grounded in deep, sustained one-to-one relationships — an experienced person and a newer one, working closely together over years. And if that’s the only model, then of course any attempt to scale it will feel like a loss. It also means that access to institutional wisdom becomes inequitable — dependent on who you happen to be assigned to, which office you sit in, which experienced person has the time and inclination to invest in you. People learn at different paces, but they also learn in different conditions, and the classical model does nothing to level that playing field.
But that isn’t the only model. Think about how mentorship works in an emergency room. Medical interns don’t learn clinical judgment from a single attending physician over a long apprenticeship. They learn it from dozens of doctors across hundreds of micro-interactions, each one contextualized around a specific patient, a specific decision, under specific conditions. The attending doesn’t simply demonstrate the right answer. They ask “what are you seeing?” and “what would change your mind?” — they model a way of interrogating the situation, teaching the intern not just what to think but how to formulate the questions that lead to good judgment. The learning is richer for being distributed — the intern is exposed to multiple styles of questioning, multiple ways of reading a situation, which produces a more textured and resilient sense of what good clinical judgment looks like than any single mentor could provide. The depth comes from the variety, not from the duration of any one relationship.
The ER is not just a convenient metaphor. James Thompson, one of the great organizational scholars of the twentieth century, drew a distinction between different forms of interdependence in organizations. The simplest form — pooled interdependence — is what you find in, say, a chain of retail stores: each unit contributes to the whole but operates largely independently. Sequential interdependence describes the assembly line, where one unit’s output becomes the next unit’s input. But the highest form — reciprocal interdependence — is what you find in the ER: people constantly adjusting to each other in real time, where the quality of any one person’s judgment depends on their ability to read and respond to what others are doing. And reciprocal interdependence is exactly the mode in which organizations devoted to innovation, change, and wrestling with complex problems operate.
This matters because the micro-apprenticeship model — learning judgment through many brief, high-intensity interactions with different people — isn’t just a convenient way to transmit knowledge. It is the native learning mode of reciprocally interdependent organizations. Apollo’s Credit Suisse acquisition is a case in point: 150 people across more than a dozen functions, adjusting to each other in real time under extreme time pressure, each person’s judgment depending on their ability to read and respond to what others were doing. That kind of execution doesn’t come from a training manual, though being explicit about behaviors helps. Recognizing this fact, the firm’s leadership recently shared an articulation of What Makes Apollo Apollo, as part of ongoing work to protect and evolve the culture that has defined it for the past 35 years. Of course, what matters even more than these punctuated moments of clear articulation is having a culture in which people have been exposed to enough high-intensity micro-interactions to have internalized the firm’s way of operating. And this is where AI re-enters the picture — not as a replacement for these interactions but as a way to create them at a scale and frequency the purely organic model never could. To the degree that we build technology to enhance those micro-apprenticing moments — to make them more accessible, more frequent, more broadly distributed — we increase the collective flexibility and adaptability of the organization itself. The learning infrastructure and the operating model reinforce each other.
This is a fundamentally different model of how wisdom gets transmitted, and it’s the one that scales. When you bring together a cohort of experienced people — each with their own history and perspective — and create the conditions for them to surface and share their judgment around the same problems, you’re replicating something much closer to the ER model than to the classical apprenticeship. The knowledge that emerges is not flattened into a single “right answer.” It’s enriched by the plurality of perspectives, and the learner develops judgment not by absorbing one person’s view but by navigating among many.
Building Infrastructure for the Last Mile
So the question becomes: how do you build infrastructure for the last mile?
The organizations described above — Moderna, Pinterest, Apollo, Siemens, and others — are beginning to answer that question. What their efforts have in common is a recognition that institutional wisdom cannot remain trapped in the heads of a relatively small number of experienced people if it is to survive at scale. It has to be captured, made shareable, and put to work.
In practice, this starts with making it easy for subject matter experts to create culturally resonant learning content, and extends to cohort-based learning experiences that surface the kind of nuanced, perspective-rich discussion you’d find in a great case classroom — but asynchronously and at scale.
And increasingly, it means using AI in two specific ways.
First, generative AI that creates simulations to test and develop an individual’s ability to apply their skills in culturally aligned ways. Imagine a mid-level product manager at a company that prides itself on design simplicity. She’s asked to make a recommendation on a feature that would add functionality but increase interface complexity. A generative simulation presents her with a realistic scenario — complete with internal stakeholders arguing different sides — and then evaluates not just her decision but her reasoning, testing whether she’s weighing the tradeoffs in a way that reflects how this particular company thinks about simplicity. That’s fundamentally different from a generic case study on product management. The question is not “do you know the right answer?” but “can you exercise judgment the way this organization needs you to?”
Second, agentic AI that draws on an organization’s accumulated wisdom to coach people in the flow of work itself. To return to the Apple example: an AI coach grounded in the company’s own institutional wisdom can help an engineer pressure-test whether a communication or presentation sufficiently reflects Apple’s well-known emphasis on the emotional dimensions of technology. Critically, this AI isn’t coaching from generic best practices. It’s coaching from the organization’s own institutional wisdom — its courses, its case discussions, the shared insights of its most experienced people. To be clear: what makes this a culture coach is precisely that it is an evaluative judgment coach. An organization’s culture, at its deepest level, is its shared standard for what good judgment looks like. The two concepts are not separate. That distinction — between generic coaching and coaching grounded in the specific evaluative standards of a particular organization — is the whole ballgame.
From Heads to Infrastructure
What is most promising about this direction is that the wisdom doesn’t just flow top-down through coaching. As organizations build out these learning artifacts — the courses, the case discussions, the simulations — they’re creating something that can propagate laterally, embedded in the tools and workflows that people use every day.
The traditional model of transmitting institutional wisdom is essentially artisanal — one person to another, slowly, through relationship. What these new approaches make possible is something closer to what happens when an oral tradition gets written down. The codification doesn’t replace the oral tradition; it makes the knowledge available in fundamentally new ways, to fundamentally more people, at fundamentally different speeds.
The deeper point is that when institutional wisdom lives in infrastructure rather than just in people, it becomes something the organization can iterate on collectively rather than something that exists only in the private experience of individuals. Institutional judgment stops living exclusively in the heads of senior leaders and starts living in the infrastructure itself, where it can compound over time rather than dissipate with turnover. This matters most acutely in moments of transformation — when a company is acquired, when strategy shifts, when leadership turns over — because those are precisely the moments when the gap between the culture an organization needs and the culture its people carry becomes most visible. Organizations that have built this infrastructure can navigate those transitions; organizations that haven’t are left trying to rebuild institutional judgment from scratch.
What Technology Cannot Replace
The strongest objection to this line of thinking is that tacit knowledge is tacit for a reason — that the slow, organic process of absorption is doing something that can’t be replicated through technology. There is truth in this. Moderna’s “jungle” metaphor captures something real: the disorientation of arriving in a new environment is itself part of the learning. Relationships matter. Time matters.
But the question is not whether technology can replace mentorship. It is whether the current model — where the entire last mile depends on informal, unscalable, fragile human transmission — is adequate given the pace at which AI is commoditizing everything else. And the ER model reminds us that the richest forms of professional learning have never depended on a single deep relationship. They have depended on the breadth and variety of judgment one is exposed to. Technology’s role is not to replace the human element but to dramatically expand the rotation — to give people access to the perspectives, the standards, the cultural sensibility of far more experienced practitioners than any organic process could provide.
Where Learning and Work Converge
The last mile of professional development has always been the hardest to scale and the slowest to traverse. It’s also where the most value lives — and in a world where AI is compressing the distance of every other mile, it’s where the future of learning and work converges.
The organizations that figure this out will have a profound advantage: not just better-trained employees, but a self-reinforcing system in which institutional wisdom gets captured, shared, enriched, and applied — continuously, at scale, and in the flow of work itself. And in doing so, they will have answered the deeper question: whether AI serves to replace human judgment or to develop it. The last mile will never be effortless. But it no longer has to be left to chance.
