Incentive.
Why people respond to the signals we create, not the outcomes we intend.
The British administration in colonial India had a cobra problem. Snakebite killed tens of thousands of people across the subcontinent each year. For a governing authority concerned above all with order, revenue and continuity, that was not simply a humanitarian tragedy but a disruption. It meant lost labour, slowed trade, destabilised households, and a colonial economy built on predictability that faltered whenever bodies meant to work, travel, and pay tax were suddenly absent.
Administrators were expected to act on risks that were visible and measurable, and so someone, somewhere within that machinery of governance, proposed a solution that must have seemed entirely consistent with the logic of that system. If the presence of cobras was the problem, then their removal should be encouraged. If their removal could be encouraged, then proof of removal could be rewarded. Attach payment to a dead cobra, and people will kill cobras. The dead snake became the evidence — a receipt, in effect — that the system could count, record and reward.
And it was sound logic, as far as it went. The problem was where it stopped. The moment a dead cobra became worth money, so did live cobras. It no longer mattered much how a cobra had died, or whether it had ever been wild. You could breed them. You could farm them at scale. You could bring in receipts without ever having touched the problem the receipts were supposed to represent.
And that’s precisely what happened.
But it’s what happened when the authorities eventually realised that people were gaming the system and cancelled the scheme that makes this story so good. The breeders, now holding worthless stock, released the snakes back into the streets.
The outcome? There were more cobras in Delhi after the programme than before it.
It is a near-perfect story — and for that reason, it deserves a moment’s scrutiny before it does any more work. Historians have found references to bounty schemes in colonial records, but the fuller version — the breeding, the cancellation, the release — survives more clearly in economic writing than in administrative archives. It has been attributed to the economist Horst Siebert, repeated in textbooks and lecture halls, and by now circulates with the confidence of an established fact, though the documentary trail is considerably thinner than the telling suggests.
But that uncertainty is not a reason to set the story aside. If anything, it is part of the point. We attach value to it because it is vivid and useful, and repeat it long before we ask how true it is.
What the story captures is something we recognise the moment we encounter it. The administrator wanted fewer cobras. The bounty produced more dead ones, which is a different thing entirely. The signal was clear, the response was rational, and the gap between them was the problem no one had thought to design for.
That gap has a long history, and this chapter is an attempt to trace it — not through the dramatic failures, though there are plenty of those, but through the quieter and more pervasive ways it shows up in ordinary organisational life. The cobra effect is not an exotic exception. It is closer to a default, and the reason it keeps reasserting itself across such different contexts and eras is not that people are irrational or dishonest. It is that our picture of what actually drives behaviour has always been more partial than we tend to assume — partial enough that even our best attempts to design better incentives keep producing outcomes we didn’t anticipate. We have come a long way in that understanding. But the understanding we have, it turns out, is only the beginning of what we need.
We could have begun almost anywhere. The cobra story persists because it is vivid, but it is far from unique. Historians of policy and economists of incentives collect examples like these the way naturalists once collected specimens — small, contained episodes in which behaviour drifts away from intention once value is attached to a narrow outcome.
In the Soviet Union, factories were rewarded for meeting production quotas measured by weight. Nails grew heavier. Not more useful, not better made — just heavier, because heavier was what the system valued. When the problem became obvious, the quota shifted to quantity instead — and the nails grew smaller and more numerous, flooding a system that needed neither.
In Hanoi, a bounty was placed on rat tails to reduce infestation. You can see where this is going. People began breeding rats, cutting off their tails, and releasing the tailless animals back into the city to keep breeding. The rat population, if anything, grew.
Each case is different in its details but identical in its shape. The incentive is clear, the response is entirely rational, and the outcome is a genuine surprise — not to us, reading it now, but to the system that designed it. Which is, of course, the point.
It is tempting to treat these as curiosities — historical footnotes from distant systems or different eras. But the logic they expose is not confined to colonial administrations or planned economies. It appears wherever success becomes defined in a way that can be measured, compared and rewarded.
One of the most consequential examples did not emerge from bureaucracy or empire but from reform. In the late twentieth century, governments across the United States and United Kingdom began placing increasing emphasis on standardised testing in schools. The goal was real and, at least initially, widely shared — to make sure that every child, regardless of background or geography, was actually learning, and that schools which were failing them could be identified and held to account. If outcomes were visible, the argument went, then the system could be steered toward them. Testing offered a way to make performance legible — to parents, to policymakers, and to the institutions themselves.
At first, the reform seemed not just sensible but overdue. And then, as test scores became consequential — tied to school funding, league tables, inspections and reputations — the behaviour of the system began to shift in ways nobody had quite designed for.
Teaching increasingly aligned with what was examined — teaching to the test, as it came to be known, a phrase that entered the language precisely because what it described was so recognisable. Subjects that didn’t appear on the tests lost ground quietly but steadily. Music, drama, art, sport — the parts of school that tend to matter most in memory, that develop the capacities no exam adequately captures — found themselves squeezed to the margins, then in some schools removed altogether. In England, by the mid-2010s, nearly half of schools had stopped entering pupils for GCSE Music. A similar proportion had dropped Drama. The curriculum hadn’t been formally narrowed; it had narrowed itself, following the logic of what the system had decided to count.
Students near grade boundaries received disproportionate focus because their results could most easily move the numbers — a practice so widespread it acquired its own name too: “borderline boosting.” As with many things we explore in this book, this isn’t about bad faith or manipulation; it followed directly from what the system had made consequential.
As a system, the reform worked perfectly. Schools responded exactly as any system does when consequences are attached to measurable outcomes — they optimised for the measure. The problem was the same one that haunted the cobra scheme, the Soviet nail quota, and the rat bounty in Hanoi. The signal was clear, the response was rational, and the gap between the outcome intended and the outcome produced was, in retrospect, entirely predictable. What nobody had designed for was the possibility that hitting the target and achieving the goal might turn out to be two different things — and that once the target became the thing with value, it would quietly become the goal itself.
The test didn’t replace education. It simply became the only way of determining the value of education. Its receipt. The dead cobra, the rat’s tail.
Corporate life had arrived at the same place earlier, and by a different route. By the time shareholder primacy had fully settled into the operating assumptions of American business — gradually, and then all at once, as we saw earlier — the mechanism through which it was felt day to day was already in place. Quarterly earnings reporting had been a standard expectation for publicly listed American companies for decades, and whatever its original purpose, it had long since become something more than a reporting interval. It had become the primary rhythm around which expectations formed, analysts built their models, boards evaluated their executives, and careers rose or fell.
In the early 2000s, a group of researchers at Duke and Stanford decided to ask what American CFOs were actually doing inside that system. They surveyed and interviewed more than four hundred senior financial executives, and the results were striking not for their complexity but for their candour. More than half said they would turn down a new investment — one they believed to be genuinely profitable over time — if accepting it meant missing the current quarter’s earnings consensus. Nearly four fifths said they would sacrifice real economic value in order to report smoother numbers. What was remarkable was the absence of defensiveness in how they described this. They were not rogue actors or ideologues. They were experienced professionals, explaining the logic of a system they had learned to inhabit — doing, in their own context, exactly what the cobra breeders had done in theirs.
Nobody had instructed them to think this way. The quarter had simply become the thing with value. Nobody was saying long-term thinking no longer mattered — but quarterly reporting had become the corporate world’s equivalent of teaching to the test. What could be measured in ninety days crowded out what could only be seen in years, not because anyone chose that trade-off, but because the system made one visible and left the other to fend for itself.
The conditions made certain choices more legible than others, certain risks more visible, certain outcomes more discussable, and people responded accordingly, reading the system they were inside and behaving rationally within it. Nobody had designed that outcome. Behaviour, as it always does, had simply followed the visibility.
What links these stories is not simply that incentives shape behaviour. That much has been understood for as long as people have been organised into groups. What has changed, and changed profoundly, is how we think about the mechanism.
For most of human history, the answer to the question of how you get people to do what you need them to do was not subtle. You told them. You prescribed the action, prohibited the alternative, and enforced the boundary. The military ran on orders. The factory floor ran on rules. If the task was repetitive, the output measurable, and the worker replaceable, instruction worked well enough.
The cracks appeared when the nature of work began to change. Through the middle decades of the twentieth century, as economies shifted away from physical production toward something harder to name — knowledge work, service work, work that depended on judgement, creativity, discretion — the old machinery began to show its limits. You could instruct a person to operate a lathe. You could not instruct them to have a good idea, to care about a customer, to solve a problem nobody had anticipated. Compliance could be commanded. Commitment could not.
That recognition opened a door. If you couldn’t prescribe the behaviour you needed, perhaps you could design the conditions under which people would choose it themselves. Instead of rules, incentives. Instead of commands, consequences. The argument was compelling and, in many ways, genuinely humane — it treated people as rational agents capable of making their own choices, rather than subjects to be directed. It promised alignment where earlier systems had relied on force, and it travelled fast. Through the latter half of the twentieth century, governments, regulators and organisations increasingly moved away from command and control toward systems intended to shape behaviour indirectly, trusting that if the right consequences were attached to the right outcomes, behaviour would follow.
And often it did. The difficulty lay not in the principle but in the assumption beneath it — the belief that the designer could see the full landscape into which those consequences would land. In practice, every incentive entered a world already structured by other pressures: professional norms, reputational risks, institutional routines, personal ambitions, moral commitments, and the simple human need to belong. The signal introduced by policy or management did not replace those forces. It joined them.
In organisations, we still tend to speak about incentives as though they were discrete levers — bonus schemes, performance targets, promotion criteria, recognition programmes — and when behaviour doesn’t unfold as expected, the assumption is usually that the levers must be wrongly set. But the lived experience of work suggests something more complicated, and more interesting. People do not respond only to what is formally rewarded. They respond to what is noticed, what is spoken about, what carries reputational risk, what signals belonging and what protects them from exclusion. The official incentive is only one signal in a wider field, and it is not always the most powerful one. Which raises an obvious question: if behaviour is shaped by signals that formal incentive design rarely sees, what does that fuller landscape actually look like?
In the 1990s, Joan Lancourt, Edwin Nevis and Helen Vassallo, working out of MIT and the Gestalt Institute, argued that organisations shape behaviour through multiple channels at once, most of which operate below the level of formal policy. Their work grew out of a simple observation: people rarely learn what matters in a system from formal statements alone. They infer it from the signals they encounter every day. They identified seven of these channels: persuasive communication, structural rearrangement, extrinsic rewards, coercion, participation, role modelling, and expectancy.
Persuasive communication is what leaders say — the language of strategy, values and purpose. The word persuasive matters: this is communication designed to move people toward a particular understanding, offered with a degree of free choice rather than compulsion. Which is precisely why it can fail so quietly. Nobody is forced to believe the town hall. They simply choose, over time, not to — and when that happens, the gap between what the organisation says and what people experience becomes the distance we explored in an earlier chapter.
Structural rearrangement is what leaders organise — reporting lines, processes, systems, written rules, procedures, the architecture of work itself. It is perhaps the most underestimated channel, because it operates without anyone having to say anything. The way a meeting is structured, who reports to whom, how decisions travel through a hierarchy — all of it communicates priorities and power more reliably than any declared value.
Extrinsic rewards are the traditional incentives — pay, bonuses, promotions, titles — and they rest on a particular assumption about human behaviour: that without external reinforcement, the desired behaviour won’t be maintained. That assumption was already being seriously questioned at roughly the time Lancourt and her colleagues were writing. Drawing on a large body of research in social psychology, Alfie Kohn argued in Punished by Rewards that the relationship between incentives and performance was almost precisely the reverse of what organisations believed — that the more you reward someone for doing something, the less interest they tend to have in doing it, and that for tasks requiring creativity, judgement or problem-solving, people offered a reward often produce lower quality work than those offered none at all. His argument was contested, and rewards do work for certain kinds of routine, low-engagement tasks. But for the knowledge work that had come to define the modern organisation, the finding was uncomfortable.
It became more uncomfortable still when Dan Ariely and his colleagues tested it directly. In a series of experiments conducted across different cultures and task types, they found that performance followed a pattern nobody had designed for: modest incentives helped, larger ones made no difference, and the largest ones — rewards significant enough to genuinely matter to the people receiving them — produced the worst performance of all. The mechanism was telling: large incentives caused people to think consciously about tasks better performed without conscious thought, disrupting exactly the kind of open, associative thinking that creative and complex work requires. In a separate experiment with Intel factory workers, cash bonuses — the most common form of extrinsic reward in organisations — produced the worst results of any reward type tested, and the day after they were paid, workers were measurably less productive than colleagues who had received nothing. The channel organisations reach for most instinctively, it turned out, was the one most likely to behave in ways they hadn’t anticipated — which, if the cobra effect taught us anything, should perhaps not have been a surprise.
Coercion sits at the other end of that spectrum. Where extrinsic rewards assume people need incentivising, coercion assumes they will comply because they feel unable to leave — unable to walk away from the relationship, the role, the income, the identity that the organisation provides. In most organisations, it operates not through explicit threat but through what people observe happening to others: who gets passed over, whose challenge goes unrewarded, who is quietly managed to the margins. People don’t need to experience it directly to learn its lesson. Watching is enough.
Participation shapes behaviour by signalling whose voice matters and who gets to shape the future. But it works in both directions, and its absence is as powerful as its presence. When people feel genuinely included — when their input visibly shapes outcomes — participation becomes a source of commitment and investment. When they don’t, it becomes one of the few levers they still control, and they use it accordingly, withdrawing effort, going quiet, doing what is asked and nothing more. Organisations that talk about engagement when they mean participation are often looking at the symptom rather than the cause. There is more to say about this, and we will return to it.
Role modelling communicates priorities through what influential people actually do rather than what they claim to value — and crucially, people absorb it without being aware they are doing so. It doesn’t require a formal audience or a conscious lesson. It operates through the thousand small observations of daily working life: how someone responds under pressure, what they let pass, what they push back on, whose judgement they defer to in a room. Influential here means both formally and informally — the senior leader whose behaviour sets the tone, but also the respected peer whose reaction to a difficult moment tells everyone present what the real rules are.
Expectancy is perhaps the most subtle channel, and in some ways the most consequential. It describes the assumptions people hold about one another, and unlike every other channel it cannot be broadcast — it operates only in close interpersonal relationships, which is what makes it both so powerful and so difficult to see. What I expect of you shapes how I interpret everything you do. If I think well of you, a mistake becomes an anomaly — something must be wrong, how can I help? If I don’t, the same mistake confirms what I already believed. And you know, without being told, roughly what I expect of you, because expectancy communicates itself through tone, attention, opportunity and the thousand small signals of daily working life. It shapes what people attempt, what they risk, what they hide, and what they quietly give up on — the inducement, as Lancourt puts it, of self-fulfilling prophecies, in which expected behaviour becomes reality. Over time, people tend to become what the room has already decided about them, which is either the most hopeful or the most sobering thing about organisations, depending on what the room has decided.
Taken together, these channels describe something richer than an incentive scheme. They describe an environment of meaning — the landscape inside which all the formal incentives land. People come to understand what counts not from any single signal, but from the pattern those signals form together. A bonus may encourage one behaviour, but if recognition, influence and belonging attach to something else, the lesson is quickly learned. The organisation may declare one set of priorities while quietly reinforcing another through the way attention, opportunity and trust are distributed. And because these channels operate simultaneously, the pattern they form is rarely the one anyone designed. It emerges from the interaction of all seven, shaped by history, habit, and the accumulated weight of a thousand small signals that nobody thought to examine.
What I have come to see and believe, through years of working inside organisations and watching how they actually function, is that incentives are far less like a lever than they are a landscape. We tend to imagine that behaviour follows the signal we designed — the bonus, the target, the value statement — but what it actually follows are the signals people experience, and those are rarely the same thing. The landscape people inhabit is shaped by all seven channels at once, by the organisation and by each other, and it looks quite different depending on where you stand within it.
But here is what the framework does not say loudly enough, and what experience keeps confirming. These channels do not only run downward. Leadership does not simply transmit signals that everyone else receives. Every person in the system is simultaneously sending and receiving across all seven channels — shaping the environment for others while being shaped by it themselves. A peer who absorbs a colleague’s frustration without comment is modelling what stoicism looks like here. A team that closes ranks around someone who spoke up and got burned is transmitting a signal about participation that no all-hands meeting will easily override. A middle manager who lets a difficult truth pass unremarked is quietly revising what expectancy means in that room. None of this is designed. All of it is consequential.
And yet it would be wrong to suggest that everyone’s signals carry equal weight. Power still determines whose version of the environment tends to prevail. A leader’s role modelling reaches further than a peer’s. Whose participation is sought matters more than who volunteers it. The expectations of those with authority shape the room more forcefully than those without it. The multi-directional point matters precisely because it complicates the simple top-down model — but it doesn’t dissolve hierarchy. It sits inside it, which is part of what makes the landscape so difficult to change from any single position, including the top.
This matters because it changes what alignment actually requires. If the seven channels only ran downward, changing behaviour would be a leadership problem — adjust what leaders say and do, and the environment shifts accordingly. But if every person in the system is an active participant in producing the environment, then the challenge is of a completely different order. You cannot change it from the top alone, because it is not being made at the top alone. It is being continuously remade, in every interaction, at every level, in ways that no single vantage point can fully see.
Which is where the real difficulty lies. Not in understanding that these channels exist, or even in mapping them with Lancourt’s precision. The difficulty is that any organisation trying to understand its own incentive landscape is trying to see something it is simultaneously inside and helping to create. The signals that shape behaviour are mostly invisible to the people producing them, and the information channels most organisations rely on — surveys, appraisals, town halls, management reporting — were never designed to carry that kind of signal. They capture what people are willing to say, in the formats the system has made safe. They rarely capture what is actually shaping behaviour day to day.
That is the information problem at the heart of this chapter. Not that organisations lack data, but that the data they have was never built to show them what they most need to see.
What the framework gives us is language for something most people have already felt — but the feeling itself is worth pausing on, because it points to something remarkable about human beings. The landscape communicates without words, and people read it fluently and instinctively, often long before they could articulate what they’ve understood. This is both the reason organisations are so hard to steer and the reason they function at all. We are exquisitely attuned to the social environment around us — to shifts in tone, to whose confidence fills a room, to the difference between a decision that’s genuinely open and one that’s already been made. No policy document produces that sensitivity. No training course installs it. It’s simply what people do, and it means that the incentive landscape is being read continuously, by everyone, with a precision that no formal system could replicate or fully anticipate.
Which makes it all the more striking that the systems built to shape that landscape — inside organisations, and far beyond them — remain so consistently blind to it.
The post-2008 regulatory response to banking provides perhaps the clearest illustration of why. Among the many interventions that followed the crisis, the European Union’s cap on bankers’ bonuses — introduced under the Capital Requirements Directive in 2014 — was one of the most politically visible and intuitively appealing. Excessive bonuses had been widely identified as a driver of the short-term risk-taking that preceded the collapse, and capping them at twice fixed salary seemed a direct and proportionate response. The problem, which the Bank of England’s own economists had anticipated and which the evidence subsequently confirmed, was that the cap was applied to one variable in a system where all the others remained free to move. Banks responded, entirely rationally, by raising fixed salaries to compensate — with the result that total pay remained broadly unchanged while its composition shifted in precisely the wrong direction. A bonus can be clawed back when things go wrong. A salary cannot. The regulation designed to reduce reckless risk-taking had, in a quietly significant way, made it harder to hold people financially accountable when recklessness occurred.
Andrew Haldane, then the Bank of England’s executive director for financial stability, had identified the underlying problem in a speech at Jackson Hole in 2012, before the bonus cap had fully demonstrated its effects. His argument was that post-crisis regulation had become so elaborate in its attempt to describe and constrain the terrain — running to tens of thousands of pages across Basel III and Dodd-Frank — that it was endeavouring, as he put it, to capture every raindrop rather than look out for thunderstorms. The complexity was itself a form of blindness, because a rule book that sprawling cannot be dynamically updated when the landscape shifts, and creates so many specific provisions that sophisticated actors can navigate between them rather than being constrained by them. The attempt to map every contingency had generated new gaps between the map and the territory — which is, in miniature, exactly what happened with the bonus cap. Regulators could see the number on the payslip. They could not see the competitive labour market, the retention pressures, the contractual flexibility, and the quiet institutional logic that would determine how banks responded once the rule was in place.
The mortgage market told a version of the same story, where tighter post-crisis lending criteria protected against the recklessness that caused the problem while trapping a generation of renters in housing costs higher than the mortgages they had been refused — paying, every month, the affordability they had been told they couldn’t demonstrate. NHS waiting time targets reduced the visible number while reshaping clinical behaviour around the threshold in ways the metric was never designed to detect. In each case, the intervention was aimed at the most legible feature of a complex system, and the system responded through the parts that remained invisible.
What unites these examples — and what connects them to everything the chapter has argued about organisations — is not that the designers were careless or naive. It is that they were trying to steer behaviour through the signals they could see, in a landscape that was always considerably larger than what they could see. That is not a failure of intention or effort. It is a structural condition: the gap between what any system of measurement can capture and the full terrain of human behaviour is where the unintended consequences live, and it does not close simply because the stakes are raised or the rule book is made thicker.
Which returns us, with some urgency, to the question of visibility itself — not how to design better incentives, but how to get better at seeing the landscape we are already in.
Upton Sinclair observed, with characteristic directness, that it is difficult to get a man to understand something when his salary depends upon his not understanding it. The line is usually read as an indictment of bad faith — of people who choose not to see what is inconvenient to see. But the argument of this chapter suggests something more uncomfortable than that. The incentive landscape doesn’t only shape what people do. Over time, it shapes what they can see. The signals that flow through all seven channels — through what is rewarded and what is modelled, through who belongs and what is expected — gradually constitute a way of reading the world that feels not like ideology but like plain observation. People aren’t usually hiding what they know. They have come, in good faith, to know different things depending on where they sit in the landscape and what their position within it has made salient.
This is what made the problem Harkn was trying to solve so genuinely difficult. I spent the better part of a decade building it, and we closed it last year, having found that the organisations most likely to benefit from it were often the least able to recognise why. That is not a criticism of those organisations. It is, I think, the most honest thing I can say about the problem itself — because if the chapter’s argument is right, that resistance was not incidental. It was structural. An organisation shaped by an incentive landscape it cannot fully see will tend, almost by definition, to find it difficult to perceive the value of something designed to make that landscape more visible. The tool failed in the market for something close to the reasons the book predicts it would.
What Harkn was attempting — and what I still believe is necessary — was not primarily a mechanism for surfacing dissent, though it did that too. It was an attempt to create a different kind of information environment: one that gave organisations a means of seeing the terrain as it was actually experienced, rather than as the formal channels reported it. The gap between those two things — between what people would say through the platform and what they would say in a meeting — was not a measure of how many dissenters an organisation contained. It was a measure of how much of the real landscape was simply invisible through the channels organisations normally relied on to understand themselves. Not hidden, but unspoken — because the conditions for saying it didn’t exist, and because the incentive landscape had quietly taught people what it was safe to express and what it was not.
By this point in the book you will know what the platform did and how it worked, so there is no need to retrace it here. What matters in this context is what the experience of building it confirmed: that the information problem is not solved by asking people what they think. It requires creating conditions in which what they actually think can surface at all — which is a harder thing to do, and a harder thing to sell, than almost anything else an organisation might be invited to consider. Sinclair’s salary works in both directions. The people being asked to pay for better visibility are often precisely the people whose position depends, in ways they may not consciously recognise, on things staying as they are.
The question is not whether organisations can design incentives — they already do, constantly, whether they intend to or not — but whether they can learn to see the ones they didn’t mean to create.
There is, in the end, one signal that tells you more about an organisation’s incentive landscape than almost any other. Not what people say, but what they don’t. Silence is not the absence of information — it is the most precise information available about what the landscape has made speakable and what it has not. An organisation in which people have stopped saying difficult things has not achieved consensus. It has achieved something considerably more troubling: an environment in which the incentives to stay quiet have quietly outweighed every other signal in the field. Learning to read that silence — to understand what it means and where it came from — is where the next part of this argument begins.